3/18/24: Similarities between Electronic Computers and the Human Brain: thank you Jensen Huang for the best week of learning since John von Neumann shared with The Economist his 1956 notes, The Computer and the Brain.
HAPPY 2024: in this 74th year since The Economist started mediating futures of brainworking machines, clued by the 3 maths greats NET (Neumann, Einstein, Turing), people seem to be chatting about 5 wholly different sorts of AI. 1 BAD: the worst tech system designers don't deserve inclusion in human intel at all; as Hoover's Condoleezza Rice reports, their work is the result of 10 compound techs of which AI is but one. Those worst-for-world system designers may use media to lie, multiply hate or hack, and to perpetuate tribal wars and increase trade in arms. Sadly, bad versions of TV media began in the USA in the early 1960s, when it turned out that what had been the nation's first major export crop, tobacco, was a killer. Please note that for a long time farmers did not know tobacco was bad: western HIStory is full of ignorances which lawyer-dominated societies then cover up once inconvenient system truths are seen. A second AI, the ecommerce type (now 25 years of exponential development strong), involves ever more powerful algorithms applied to a company's data platform that can be app'd to hollow out community, making relatively few people richer and richer, or the reverse. You can test a nation's use of this AI by seeing if efinance has invested in the poorest or historically most disconnected - see eg Bangladesh's bKash, one of the most populous digital cash systems. Digital money is far cheaper to distribute, let alone to manually account for, so this power of AI offers lots of lessons; but whether it is good or not depends in part on whether there are enough engineers in gov & public service to see ahead of what needs regulating. There are 2 very good AIs which have only scaled in recent years that certainly don't need regulating by non-engineers, and one curious AI which was presented to congress in 2018 but which was left to multiply into at least 100 variants today: the so-called chats or LLMs.
Let's look at the 2 very good AIs first, because frankly if your community is concerned about any extinction risks, these AIs may most likely save you. One I call science AI, and frankly in the west one team is so far ahead that we should count ourselves lucky that its originator Hassabis has mixed wealth and societal growth. His DeepMind merged with Google to make wealth but open-sourced the 200 million protein databank, equivalent to a billion hours of doctorate time - so now's the time for biotech to save humanity, if it ever does. Alongside this, the second very good AI gravitates around Fei-Fei Li, who developed the 20-million-image ImageNet database so that annual competitions could train computers to see 20000 of the most everyday sights we humans view around the world, including things and life-forms such as nature's plants and animals. Today, students no longer need to go back to 0.1 programming to ask a computer about any of these objects; nor do robots or autonomous vehicles - see Fei-Fei Li's book The Worlds I See, published in Melinda Gates' entrepreneurial revolution of girl empowerment.
EW::ED, VN Hypothesis: in 21st C brainworking worlds, how people's times & data are spent is foundational to a place's community health, energy and so natural capacity to grow/destroy wealth - thus species survival will depend on whether 1000 mother-tongue language models mediate intelligence/maths so all communities cooperatively celebrate lifetimes and diversity's deep data. Check out "Moore exponential patterns" at year 73 of celebrating Game: Architect Intelligence (Ai) - players welcome .. some jargon

Friday, December 31, 1999

FFL tours Stanford HAI - and more - in conversation with Reid Hoffman (LinkedIn founder)

full video of Fei-Fei Li & Reid Hoffman, 2022. Chapters:
0:00 Introduction
1:46 The Goal For Human Centered AI
6:00 The Role of HAI in the Industry
10:46 Importance of Being Human Centered
13:49 Early Career
17:51 Role of Industry in AI
23:01 Model Safety and Reliability
30:08 Ethics and Society View
38:23 AI in Healthcare
44:17 Robotics in the Business World
49:43 Making America More Competitive in AI
52:52 Diversity in AI
Related: AI Index Report 2023, Fei-Fei Li, Stanford HAI; directors' conversations with Etchemendy; HAI Launch Circles 2019; HAI Seed Funds April 2023; HAI newsletter archives; Health (PAC) & Adeli; SAIL behavior; ObjectFolder; MOMA; pair robots; AI4All team :::: history and branches, including early support from Melinda Gates
  0:09 thank you all for joining us for today's conversation. it's my pleasure to introduce my friend Dr Fei-Fei Li; she is the Sequoia professor of computer science at Stanford University and the Denning co-director of the Stanford Institute for Human-Centered AI, also known as HAI. before founding HAI in 2019 she served as the director of Stanford's AI lab; she was a VP at Google and chief scientist of AI and ML at Google Cloud during her Stanford sabbatical in 2017 through 2018. she is also a co-founder and chairperson of the board of the national non-profit called AI4All, focused on training diverse K-12 students of underprivileged communities to become tomorrow's AI leaders - obviously we all know that's super important. among her many distinctions, she is an elected member of the National Academy of Engineering, the National Academy of Medicine and the Academy of Arts and Sciences; Dr Li also serves on the 12-person national AI research resource task force commissioned by congress and the White House Office of Science and Technology Policy, which is super important for all of us
1:26 reid: let's get started - it's been more than two years since you started the Stanford Institute of Human-Centered AI, or HAI as we call it. what's the goal of the institute, and what have you accomplished so far?
FFL: first of all reid, thank you for the invitation; it's such a pleasure to just have a conversation with you. The Goal For Human Centered AI - as we began in 2019, it's now been two and a half years of the global pandemic, but we were also born out of a very important mission/vision: to advance AI research, education, outreach and practice, including policy, to better human conditions. because we believe this is such an important technology - one of those revolutionary horizontal technologies that will fundamentally change the way business is conducted and the way we people live our lives - we want to be focusing on benevolent usage and purpose of this technology
more specifically, let me just try to index the main focal points of our work 2:50 in research, education and policy; i'll briefly introduce each of the three areas. on the research side we have more than 250 faculty and hundreds of student researchers involved in all kinds of interdisciplinary cutting-edge AI-related research; thanks to our generous friends we have multiple programs encouraging a range
from moonshot program to seed level to budding ideas that includes
ai for drug discovery
ai for poverty assessment
ai for future of work

fundamental reinforcement learning algorithms connecting dozens and dozens of disciplines. on the education side, HAI focuses on educating our students, the community and the ecosystem within Stanford; we have encouraged and continue to support multiple courses 4:05 - some of the courses are really new; for example, technology and ethics has quickly become one of the most popular undergraduate and graduate level classes on campus. we have courses on AI for human well-being, AI for climate, AI for healthcare focusing on data and fairness, and all kinds of education program outreach. we recognize the responsibility of Stanford's AI expertise, and the lack of opportunity for getting objective information about AI, so we focused on working with:
policy makers - congressional staffers - to train our nation's policymakers
we also have courses for business executives
and we have courses for reporters and journalists, and we will continue to expand that external education program
last but not least, we believe this era of AI and technology is so important that we can provide a platform to work with all levels of government: national, international as well as state level. so, as you mentioned earlier, i'm personally honored to be on the task force chartered by congress for a national AI research resource, and we are working with multiple federal agencies and policy makers on various aspects. so that's a short summary of what Stanford HAI is keeping busy doing
reid - can you tell me more about The Role of HAI in the Industry? i'm familiar from chairing your advisory board, but can we talk more about the bridges that need building - how much HAI is saying we have to focus on what is good for humanity, and then how we build lots of important bridges: the policy world bridges <> the research and academic world, and any other important bridges. to be particularly useful to this audience: what's the role of the Stanford HAI institute with respect to industry, and what are the kinds of interactions that industrialists or technologists or the industry should look to HAI for?

FFL: reid, that's a great question. first of all, let's just recognize that in the AI age, industry is one of the most vibrant and fertile grounds for both innovative AI applications and cutting-edge AI research; it's such an important part of the ecosystem, and frankly i think it's such a unique strength of america for the past century. like the entire Stanford community, i fiercely and profoundly believe in our academic freedom and independence 7:35 - this value statement is on our very website: 'we believe in a lot of free exchanges and ideas and forums for discussions'. so from that point of view HAI is actively engaging with industry partners; for example, to begin
with, formally, we have industry partnerships such as corporate partners;
and affiliate programs where we can engage in research exchanges
and ideas protected under our academic freedom and independence as a policy
but more than that, we see ourselves at Stanford as a rare platform where industry partners, colleagues, civil society, policy makers and researchers of all disciplines can come to a neutral place to discuss, debate and explore ideas
Frankly, consider some of the toughest issues of AI. for example, reid, i know you know this: we geek out about generative adversarial networks - a mouthful of a name for a really exciting neural network technology that can generate images, speeches, texts. of course it can be used for creative purposes or for generating training data - these are all great uses - but sadly the same technology can be used for deep-fake disinformation. so how do we continue to exploit this technology for better use but put guard rails in place? these are tough questions: industry innovators and entrepreneurs are trying to use this, while policymakers and civil society stakeholders are thinking about the guard rails; Stanford provided a platform for them to get together and discuss this
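Since the transcript only names the technique, here is a deliberately tiny sketch of the adversarial idea Fei-Fei describes - not any system she refers to. A one-parameter "generator" learns to shift noise toward a target distribution while a logistic "discriminator" tries to tell real samples from fakes; all numbers and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1). The generator must learn the shift.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    # Numerically stable logistic function.
    out = np.empty_like(x, dtype=float)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    ex = np.exp(x[~pos])
    out[~pos] = ex / (1.0 + ex)
    return out

theta = 0.0        # generator: g(z) = z + theta (one learnable shift)
w, b = 0.1, 0.0    # discriminator: d(x) = sigmoid(w*x + b)
lr, n = 0.02, 64

for step in range(3000):
    z = rng.normal(0.0, 1.0, n)
    fake = z + theta
    real = real_batch(n)

    # Discriminator ascent on E[log d(real)] + E[log(1 - d(fake))].
    dr, df = sigmoid(w * real + b), sigmoid(w * fake + b)
    gw = np.mean((1 - dr) * real) - np.mean(df * fake)
    gb = np.mean(1 - dr) - np.mean(df)
    w += lr * gw
    b += lr * gb

    # Generator ascent on E[log d(fake)]: push fakes toward the "real" side.
    df = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - df) * w)

# theta should drift toward the true shift of 4 as the two players compete
print(theta)
```

The same adversarial dynamic, scaled up to deep networks over pixels, is what makes both the creative uses and the deep-fake misuse possible.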
another example is visual recognition technology. this is a technology that has, compared to many other AI technologies, reached a certain degree of maturity, yet it also can cause a lot of harms, from bias to state surveillance; so how do we really grapple with these challenging issues? what we do is continuously provide forums and platforms for industry leaders and partners as well as other stakeholders to come together and discuss this. so we absolutely see the value of our ecosystem; industry is a huge player and we love to continue engaging
Reid: i think the Importance of Being Human Centered, as you describe it, is super important for industry, because it gives an independent voice - motivated by truth, integrity and objectivity on the academic side - to build bridges but also to give good feedback and good ideas into industry. too often the technology industry especially thinks it can just do it alone; it's like - no, this is getting too important. and part of that 'too important' is that AI is obviously going to redefine many of the landscapes of industry and therefore have really serious impacts on society. i think it was your call to arms in a new york times article about putting humans at the center of AI - and therefore the name of the institute, Stanford HAI. obviously, tell us a little about how you define the term and why it was so important to be human-centered?

FFL: thank you reid. i always believed that since the dawn of human civilization there's something in our species' dna that means we will never stop innovating; we innovate all kinds of tools to better our lives, better our productivity, and also to interact with and change our environment. but these tools are fundamentally part of human creation and part of the human fabric. now we don't call them tools, we tend to call them machines, because they're much more sophisticated - so philosophically i do believe that there are no independent machine values: machine values reflect our human values. AI is exciting, and this technology is made by people and it's going to be used for people; so fundamentally, how we create this technology, how we use this technology, how we continue to innovate but also put the right guard rails in place, is up to us humans, doing it for humans. so at the heart of all this it's all human-centered, and that's how i see this in the fundamental way
and of course i hope it continues to enhance our humanity and capability, and impacts our human community, human lives and human society in that benevolent and positive way

reid: thinking about the human side, let's take it a step more personal: what was it in your early career that prompted you to focus on the human side of AI? that's unusual for someone who is as deep in computer science and engineering and technical excellence as you are - so how did you make that turn?

FFL: yeah well okay, so reid, here's a secret i don't think i ever said: i don't have a computer science degree. my journey into all this started from physics; i was deeply asking those fundamental questions of the beginning of the universe, and of the smallest particle structure of the atoms, and that love for fundamental questions led me to the writings of the 20th century physics giants like Einstein, Schrodinger and Roger Penrose (who just got the nobel prize last year). i noticed that these physicists in the second half of their lives start asking a different kind of fundamental question - the question about life - and that led me into a lifelong passion for trying to understand the fundamental questions of life. the question that really captured my imagination, even early on as an undergraduate, was intelligence: what makes intelligence, how it arose in animals, and especially high intelligence in humans. and so i started my entire journey in intelligence with human intelligence, human neuroscience, human cognitive science; but i guess, still thanks to my physics background, i quickly gravitated to the mathematical principles - what is the underlying mathematical expression of intelligence? and that got me into computer science. so it was a very long journey, but along the way i had an unusual training as well as exposure to human neuroscience and cognitive science. and one more dimension to the human side of this technology is also a personal journey.
i happen to come from a fairly humble background as an immigrant; as an entrepreneur i opened a dry cleaner shop and ran it for seven years. i have a parent whose health condition is fairly weak, so i had a lot of interaction, just as a person living a life, where i see how human lives can be impacted by incredible technologies. so there is a duality: the philosophically intriguing quest for intelligence, plus the grounding of human life i experience on a daily basis, pointing me to the belief that technology can be framed in a human-centered way. science and technology - we must seize every opportunity we can to make them humanly benevolent; and obviously the Role of Industry in AI ....

reid - the personal angle is interesting; obviously you've participated in industry in multiple ways, not just putting yourself through school and supporting your family through the dry cleaner - for example your major role at Google Cloud and the others that you participated in. so what are you personally excited about in the role of industry and AI, and the industries you know about?

FFL: i most benefit from applied AI, and from seeing how human-centered AI plays out. i'm tremendously excited about the democratization of this technology; the innovation, and eventually the human impact, of this technology is mostly delivered through industry - through startups, through companies, through their products and services - there's no doubt about it;
i was very thankful to have that sabbatical experience at google; at google cloud we served enterprises in various vertical industries - from healthcare to financial institutions, to energy and gas, to media, to retail, to new transportation: you name it. so i'm just very excited about entrepreneurial efforts, the startups where AI is very new, but the sky is really the limit in terms of how we imagine this technology can serve human well-being. and personally there is definitely one industry that i feel deeply, deeply connected to through my research and personal experiences: healthcare. ten years ago i was still directing Stanford's AI lab, and silicon valley was in the middle of the excitement about self-driving cars, because the convergence of technology - the sensors, the algorithms, the hardware, and of course maps - was leading to the realization that transportation and mobility can be reimagined. during that time it really dawned on me, perhaps during one of those hospital stays of my mom's, that a similar way of using technology can be applied in the healthcare industry, where one of the major pain points for our patients and clinics is the lack of context about what's happening to the human in the center of this - and that human is the vulnerable patient. my mom is a cardio patient; doctors constantly want to know how her behavior is, how her heart rate is changing because of her activities. and also in the hospitals, doctors and nurses worry about their patients, but the system adds to the work through lack of flow knowledge, lack of context about a patient's behavior. so i started this program at stanford with Dr Arnold Milstein on what we call illuminating the ambient intelligence of healthcare, and started researching how AI sensors, edge compute and deep learning algorithms of human behaviors can help doctors and nurses and patients to recover better, detect conditions earlier, and keep them safe;
and i continue to work on this at stanford, and i continue to feel very excited to start to see startups getting into the space, innovating rapidly. i really want to see one day that i don't have to worry about my mom whether i'm at work or not with her, because her health and well-being are being helped by AI technology. Model Safety and Reliability

reid: indeed, and actually this is a good place to ask one of the audience questions: while there is huge opportunity in AI for health and in how AI transforms that industry, what about areas like the criminal justice system or the financial system regarding racial or social equity?

FFL: great question -and reid as you know we care and talk a lot about this
so the word safety is actually loaded with different dimensions; let me try to unpack that a little bit. as the question mentioned fairness - and the flip side of that is bias - one big chunk of this is safety. one question on the technology side is how we quantifiably and reliably understand robustness; there is also trustworthiness, which has a lot to do with transparency and explainability of the technology; and then there is the whole practice of how ethics can be incorporated into design and development. so there are many parts to this, but let's just start with fairness and bias. AI as a technology is a system: the pipeline of the system starts with defining the problem, then curating the data, designing the algorithm, developing the product, and delivering the service - and along every point of this pipeline there is opportunity to introduce bias. at the end of the day a lot of bias may be rooted in human bias: our history, our human psychology is where the biases start. so at Stanford HAI you can see our researchers working on every point of this pipeline's bias. we've got researchers, myself included, working on upstream data bias - how we become vigilant about, and mitigate, the bias that's introduced into the data, and how we try to fix that. consider a classic example: we've got researchers showing that in america most medical AI research data come from three coastal states - massachusetts, new york and california. while it's a good thing we've got medical data to do research with, it's also a deeply, deeply biased way of using data; so we need to be vigilant and mitigate that. then we get into the algorithm; another case: historically - you, reid, would know this from linkedin - looking at job applicants,
there are just a lot fewer women in the computer science discipline historically; but if we throw our hands in the air and say, well, we'll just use whatever historical data to train whatever algorithm, it'll fundamentally be unfair to the women of today and the women of tomorrow. so we look at an algorithm's objective functions and work on technical methods to mitigate bias; and then it comes down to decision making and inference
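One common technical method of the kind alluded to here - my illustrative sketch, not HAI's or LinkedIn's actual approach - is reweighting the objective function: if historical data under-represents one group, weight each example inversely to its group's frequency so that training no longer rewards replicating the imbalance. The data below is a made-up toy.

```python
import numpy as np
from collections import Counter

# Toy historical applicant data: group "w" (women) is under-represented,
# mirroring the historical CS pipeline described in the conversation.
groups = np.array(["m"] * 80 + ["w"] * 20)

# Inverse-frequency weights: each group contributes equally in aggregate.
counts = Counter(groups)
weights = np.array([len(groups) / (len(counts) * counts[g]) for g in groups])

# Per-group total weight is now balanced even though raw counts are not.
total_m = weights[groups == "m"].sum()
total_w = weights[groups == "w"].sum()
print(total_m, total_w)  # prints: 50.0 50.0
```

These weights would then be passed as per-sample weights to whatever loss the model is trained with, so the majority group can no longer dominate the objective by sheer count.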
there is another whole bucket of technology that our researchers are exploring that i'm actually really excited about: we call out machine bias - and in fact machines can be the best at calling out human bias 27:59, because there's so much human bias in our data. my favorite example, from a few years ago: a face recognition algorithm called out hollywood's bias in using male actors - they have a lot more screen time and talking time than female actors. so these kinds of mass data analysis, with machines calling out bias, are really important, and we must continue to do that. then there is explainability and robustness research: we have researchers in the medical school, in the computer science department and in gender studies programs working closely together to look at these robustness and explainability technologies. and of course there is the whole design process; reid, i know you are one of the staunchest supporters of this - we at stanford hai have led an innovative research proposal review process called the ethics and society review, a step up from the classic human subject review in universities called the irb. in this process, which we call the ESR, every hai-funded research team needs to go through an ethics and society review before we provide funding. the philosophy behind this is to bake ethics into the design of the research program, not as an afterthought for mitigation
so that was a long answer to this very profound question of how hai's research and our own practice are addressing this issue of safety and trustworthiness 30:08
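As an aside on the "machines calling out bias" point above: the Hollywood screen-time audit Fei-Fei mentions boils down to running face detection over footage and then aggregating seconds on screen per detected group. The aggregation step can be sketched in a few lines - the detection output below is entirely hypothetical stand-in data, not results from the actual study.

```python
# Hypothetical per-scene detections: (detected group, seconds on screen),
# standing in for real face-recognition output over a film.
detections = [
    ("male", 12.0), ("female", 3.5), ("male", 20.0),
    ("female", 6.0), ("male", 15.5),
]

# Total seconds on screen per group.
totals = {}
for group, seconds in detections:
    totals[group] = totals.get(group, 0.0) + seconds

# Share of total screen time per group.
grand_total = sum(totals.values())
share = {g: t / grand_total for g, t in totals.items()}
print({g: round(s, 2) for g, s in share.items()})
```

Run at scale over thousands of hours of film, this kind of simple aggregate is what lets a machine surface a human bias that no individual viewer could quantify.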

Reid: it's a super important topic and i'm glad you were comprehensive, because it shows how much work hai is actually doing on this topic. i think it's worth double-clicking on the ethics and society review - it's actually one of the things that we'd hoped would spread throughout not just industry but also academia and all the rest, as a tool for being human-value-centered in AI. so what's been the learning so far? what kinds of things have come out of it?

FFL: great question. so reid, in fact, you probably are aware that even companies are now trying to practice ethics review for their products. i think what is in common is that everybody recognizes the importance; but what is special - and i take a lot of pride in this - is that at stanford we have true experts from sociology, ethics, political science, computer science, bioethics and law coming together to form a deeply knowledgeable panel. their job is to help our researchers - who may be deeply technical researchers without that training - to think about, when they design their project, what human, ethical and societal impacts might come out of this research, intended or unintended
i'll use a personal example, because it's close to my heart: i talked about our healthcare research in AI that uses smart sensors, for example to help monitor whether patients are at risk of falling in a fragile senior's home. that's a very painful problem, because more than 40 billion dollars of health care money are spent on mitigating potential falls for our seniors
we are excited as technologists to think about how computer vision and smart sensors and edge computing can help, but we were also confronted with the question of privacy, and with legal ramifications that we had never thought of. what if the sensor picked up care abuse cases - can they serve as legal witnesses? are there other adversarial events we have not thought deeply about at the beginning? how do we explain the interpretability of this technology - how can we help decisions made for elderly parents about whether this technology is good for them? as we write up our proposal, we go through this esr review process with the bioethicists and legal scholars
i think it's cool that the privacy concern pushed our technology further, and pushed us to think about all kinds of secure computing. some people fear that ethical guardrails slow down innovation; in many cases i disagree - i think this kind of human ethical concern pushes our technology further. in the elders case it provided so much value; when we did our survey, the engineers and scientists asked for more of the esr, to the point that the panel is like, we'll need more resources to beef up our team. so it's really heartening to see that there is now mutual recognition: there's no us versus them in this. we are all humans; we as technologists want the best for us as the community, and they are asking for more of this. so we were so encouraged to see this one year into our program, and we're absolutely doubling down to expand on this. we hope the whole of stanford adopts this program, and the world more generally.

Reid: i think it's important for people to realize that sometimes constraints help with innovation, and the whole goal in innovation is the right innovation for the right outcome
actually, folks, once they're engaged with ethics training, find it productive and useful and energizing, and so this is one of the things that industry people can learn. once we are accelerating towards the right outcome, the mission and the energy are in your blood, in your heart, about where you're going; and i think that what you guys have been doing with the esr is important, and everyone should know that 37:11

FFL: thank you reid. and also, frankly, i believe it is a business competitive advantage: when you make more trustworthy and safe products and services, you're better off in your market. so it really does not slow you down; far from being a competitive disadvantage, it's quite the opposite

Reid: if we're playing for greatness, we're playing for something that could make huge differences in society, and it's very important - you know the kind of classic english idiom, don't throw the baby out with the bathwater. so let's return to AI in healthcare; it's one of the areas that you've been personally intensely focused on, in addition to the overall picture - all of AI and industry and policy and all the rest. talk a little bit about the ways that you're seeing that AI can benefit health care?

FFL: what are some of the things we should be - what is the future we should be accelerating towards? yeah, we talked a lot about this. healthcare is, in my opinion, the most important industry that can take advantage of AI, and it is also so human-centered; it's not just human physical well-being, it's also human mental well-being and human dignity, and it frankly does excite me to work in an industry where the benevolence is so pronounced and is the goal of the industry. one thing about healthcare that's really paradoxical is that it's actually extremely data rich, so one would think that if it's data rich it's AI rich. but that's not true yet: healthcare is data rich and insight poor. poorly designed measurements on a patient can mean that your clinicians - doctors, nurses - are overwhelmed, overworked, over-charting, spending too much time charting, and yet they don't have the tools or the opportunities to glean important insights from what's going on in the patient. so i absolutely see a huge area of opportunity for entrepreneurs and startups and companies to focus, not on giving our doctors and nurses an even more overwhelming amount of data, but on how we deliver critical insights that are timely and precise and accurate to really help our patients; that's one huge area. another area is absolutely decision support. as you know, i have lived with my mom in hospital systems for over 30 years; i go in and i see the nurses and doctors overworked - in an average shift a nurse performs 200-plus tasks, walks four or five miles per day, and spends two hours charting - and the american nurses' burnout rate is outrageous.
you know, the heart of healthcare is humans caring for humans, yet our clinicians are not spending time with the patients; anything this technology can do to reduce that burden and support their work and their productivity is, at the end of it, supporting their humanity in helping our patients. so that's another area of opportunity in healthcare. and of course drug discovery - we're just at the beginning. AI in drug discovery is becoming a hot area of investment and startups: thanks to a lot of these molecular, cellular and genetic technologies, which are turning out volumes of data, machine learning can now help glean the data and help discover important drugs. and of course there is
also the global pandemic, which has taught us there is an extremely urgent data issue: we need to break the barriers of data, we need to modernize the way public health data is organized and the way information can be gleaned from it. i want to finish this question by emphasizing that AI, as well as the surrounding technology, is what i see can augment the humanity of the healthcare industry, not replace staff. we've heard people talking about doctors being replaced and nurses being replaced, but as someone who spent 30 years as patient family, i can tell you no one can replace them: human-to-human care, human intelligence and emotion are critically at the very heart of this industry, and anything AI can do to enhance that is what i see as exciting - the opportunities are boundless. let's generalise: it's not just about not replacing nurses and not replacing doctors - in fact, part of human-centered AI is to amplify the ability to work well and work meaningfully. one of the common misconceptions about AI is that it's going to replace jobs and people; while this may happen to some jobs, generally speaking a lot of what we find going on is that AI can collaborate with people and help productivity, and eric brynjolfsson in his lab at hai is putting a lot of energy into making this happen 45:06
RH: I also know that you've been shifting some of your research to robotics, because AI is obviously going to be central to robotics. What do you see happening with robotics in the business world?

FFL: Well, Reid, this is something we've talked about a lot. First of all, healthcare is my application area, but my foundational research now is more in robotics. Let me just say that I'm so excited intellectually by robotics, because it closes the loop of nature: a living, moving, interactive organism. The course of hundreds of millions of years of evolution leading to an organism like the human is nature showing us that intelligence and action come together in this incredible machinery, and robotics research is a vehicle toward that. You suddenly have a system that can perceive, can learn and can do, and that is the future of AI. Whatever revolution we've seen in the past 10 years, Reid, I think is a prelude of what's to come, and what's to come is more exciting. In that sense I've definitely shifted from passive visual intelligence to more active perceptual robotics research.

That also has a profound impact in industry: obviously manufacturing, but also fulfillment, agriculture; everywhere humans are conducting a lot of physical labor, robotics is potentially an assistive technology. To start with, I actually do believe there are certain types of work where humans need to be replaced by machines, especially work that puts humans in danger, whether it's deep-water exploration, rescue situations or other dangerous work. You and I have talked about this; our friends at McKinsey have told us repeatedly that it's the tasks that might be replaced and assisted, not the jobs. Almost every human job consists of multi-dimensional tasks, many different kinds of tasks. There are tasks that are difficult or dangerous for humans, and I can see robotics playing a huge role there; but there are tasks that are more reserved for human cognition and human emotion, and there I just don't see that happening, especially if as a society we make sure we address these issues. So the future of work in the age of AI is a profound question. It inevitably will impact workers, but through collective effort in how we train future workers, how we mitigate skill-set shifts, how we address the evolving job landscape, together with how we use technology in a smart and humane way, I'm hopeful that humanity, having gone through several rounds of industrialization and labor shifts, can address this together. But we have to be mindful of how we do that.

Making America More Competitive in AI

RH: I've noticed the time, so we're going to quickly hit on two last important questions, just because this wouldn't be complete without them. The first is that there are a lot of countries engaging in AI, and you were recently appointed by the White House to the National Artificial Intelligence Research Resource task force. This task force launched due to efforts HAI led to call for a national research cloud, which resulted in legislation passed in January to create the task force and make recommendations. So what role does HAI play in making America more competitive in AI, and how are you helping the government, as it interfaces with industry, understand the risks and rewards of AI's future?

FFL: Important question, Reid. First of all, as we discussed earlier, America has been very unique: we have had the world's healthiest, most vibrant innovation ecosystem for more than half a century, closer to a century. Our innovation goes from as far upstream as basic science and technology all the way to practice and the industrialization and commercialization of our technology, and that ecosystem brought us this very prosperous society. Of course it's an imperfect society; we have a lot of issues to address, from the way different groups of people are treated to many other imperfections. But it's a society rooted in the belief in democracy, in human rights and human values, in equality and justice, and I think the combination of such a healthy, vibrant, innovative science and technology ecosystem with the country's values is really important to all of us, and so it is important to HAI. To start with, we hope to be a player in and a contributor to that ecosystem. Academia is where some of the most innovative science and technology happens; deep learning first happened in academia, and we want to continue to contribute there. But we also want to continue to support policymakers in sustaining America's ecosystem. This is why we participated in the legislation, and why I'm personally honored to be part of that effort. So needless to say, we see ourselves as a player, we stand by to help our nation and to rise to the occasion whenever that's needed, and most importantly, we educate our nation's future, and we will continue to do that.

Diversity in AI

RH: Speaking for many of us in industry: thank you for your public service; it's really important to the nation. You just spoke so eloquently about America, its values and its aspirations, so I think it's fitting that this brings us to our last question, which is increasing diversity in AI, because part of the future we want to build is making sure it's inclusive for all of us. One of the things that you founded, and put a lot of personal energy and time into in addition to public service, is AI4ALL. Could you say a little bit about that, and also how people can help with it?

FFL: Thank you, Reid. Talking about America, one of the most beautiful things about America is that we're a nation of all people: people of all backgrounds, all races, all walks of life. But it's also a reality that in today's world, in our country, for example in the AI world, that's not well represented: we're lacking women, we're lacking people of color, we're lacking people from all walks of life. This became really front and center for me as the deep learning revolution was taking off around 2014, and my co-founder, former student Olga Russakovsky, and I recognized this really important question: if we know and believe AI will change the world and change our future, the key question is really who will change AI? Who will be at the steering wheel of designing, developing and deploying AI? Once we asked that question, we realized we knew the answer but not how to reach it; we have a long way to go. The answer is that we want the representation of the world, of America, to be at the steering wheel of AI. That means we want to invite many more students from underrepresented, underserved backgrounds, who were not traditionally part of this technology, to be trained as tomorrow's leaders, and that was the birth of the national non-profit AI4ALL, focusing on K-12 education in AI. We serve high school students who come to different chapters of AI4ALL across the nation, around 20 of them and still increasing, to learn about AI in our summer programs. We partner with local universities and colleges so that the education these students receive is tailored to community needs. We also have an online program to encourage both K-12 teachers and students to participate in understanding AI, and we have a couple of programs geared toward our alumni throughout their college years and early careers, to mentor them into the AI workforce and make sure they become tomorrow's AI leaders. So AI4ALL is a growing national nonprofit organization. We partner with companies, with mentors who believe in this mission, and of course with supporters who believe in our mission, and we would love to work with any of you out there who would like to help us.

RH: So, Fei-Fei, an honor and a pleasure. We got to only about half of the questions, and I think it's really important that the industry hears from you, but that's kind of classic. Thank you so much; as always, I learned, and I'm certain everyone here with us did as well. Thanks to everyone who joined us; please keep an eye out for the next Iconversations event. You can hear all these conversations on the Greylock podcast, Greymatter. Last but not least, if you'd like to share your thoughts on this event, you can fill out the survey we will send you tomorrow. Thanks again for joining us, and Fei-Fei, as always, an honor, a pleasure, a delight. Thank you.

FFL: Of course, the feeling is mutual. Thank you, Reid, always great to have a conversation with you.

RH: Have a great day, everybody.

Dr. Fei-Fei Li on Human-Centered AI (Greylock, Jul 13, 2021). If AI is to serve the collective needs of humanity, how should machine intelligence be built and designed so that it can understand human language, feelings, intentions and behaviors, and interact with nuance and in multiple dimensions? Stanford University computer science professor Dr. Fei-Fei Li and Greylock general partner Reid Hoffman discuss the ethical considerations researchers, technologists and policymakers should make when developing and deploying AI. This episode was recorded as part of Greylock's Iconversations virtual speaker series. You can also find the podcast and transcript of this discussion here: greylock.com/greymatter/fei-fei…human-centered-ai/

Advisory Council Members

Chaired by Reid Hoffman of Greylock Partners, the council also includes Jim Breyer, Breyer Capital; Jeff Dean, Google; Steve Denning, General Atlantic; John Hennessy, Stanford University; Eric Horvitz, Microsoft Research; Bob King, Peninsula Capital; James Manyika, McKinsey & Company; Marissa Mayer, Lumi Labs; Sam Palmisano, Center for Global Enterprise; Heidi Roizen, DFJ/Threshold Ventures; Eric Schmidt, Alphabet; Kevin Scott, Microsoft; Ram Shriram, Sherpalo Ventures; Vishal Sikka, Vian Systems; Neil Shen, Sequoia Capital; Jerry Yang, AME Cloud Ventures.

Associate Directors

Russ Altman, the Kenneth Fong Professor and professor of bioengineering, genetics, medicine and biomedical data science; Susan Athey, the Economics of Technology Professor at the Graduate School of Business; Surya Ganguli, assistant professor of applied physics; James Landay, the Anand Rajaraman and Venky Harinarayan Professor and professor of computer science; Christopher Manning, the Thomas M. Siebel Professor in Machine Learning and professor of linguistics and computer science; and Rob Reich, the Marc and Laura Andreessen Faculty Co-Director of the Center on Philanthropy and Civil Society and professor of political science.

Distinguished Fellows

The inaugural group of Distinguished Fellows will include: Yoshua Bengio, University of Montreal; Rodney Brooks, MIT; Erik Brynjolfsson, MIT; Jeff Dean, Google; Daniel Dennett, Tufts University; Susan Dumais, Microsoft Research; Edward Feigenbaum, Stanford University; Barbara Grosz, Harvard; Demis Hassabis, DeepMind; Geoff Hinton, University of Toronto; Eric Horvitz, Microsoft Research; James Manyika, McKinsey & Company; John Markoff, Center for Advanced Study in the Behavioral Sciences; Helen Nissenbaum, Cornell Tech; Judea Pearl, UCLA; Stuart Russell, UC Berkeley; Mustafa Suleyman, DeepMind; Terry Winograd, Stanford University; and Hal Varian, Google.

"HAI Denning" refers to the Denning Co-Directorship of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). HAI was co-founded in 2019 by computer science professor Fei-Fei Li and philosopher and former Stanford provost John Etchemendy, who serve as its Denning Co-Directors, a title honoring longtime Stanford benefactors Steve and Roberta Denning (Steve Denning of General Atlantic also appears above among the Advisory Council members).
