WHO PRESENTED AT THE STANFORD HAI LAUNCH, 2019? Stanford President Marc Tessier-Lavigne, plus the two directors appointed to lead Stanford HAI, Fei-Fei Li and John Etchemendy; Reid Hoffman as panel moderator of Demis Hassabis, Jeff Dean, Chris Manning, Alison Gopnik, Michael Frank, Percy Liang, Surya Ganguli; Bill Gates with Amy Jin and Stephanie Tena-Meza - full speakers - full planning committee. WHAT OTHER FIRST-50 SPONSORS TURNED UP? EXAMPLES 1, 2 OF PARTNER LABS AROUND THE WORLD IN 2023. | Von Neumann's lead peers had all gone by his death in 1957, but they had left behind at least 3 innovation streams in one: hardware; coding, or software; and how human brains and behaviors would change the more time was spent in the digital world. One reminder of slow-slow-quick-quick-slow is schools: I was in the last generation to use slide rules; when I got to high school, my pride and joy was a pocket calculator; and while I saw people punching cards to program mainframes at university in Europe, at least online terminals linking to minicomputers arrived circa 1971. For much longer than that, those who did AI were programming in heuristics from experts. This isn't what breakthrough AI has done since 2006, when e.g. Fei-Fei Li started training a computer's vision analogously to a baby's brain - it took one heck of a lot of computing power to do this. And although the mobile web2 era eclipsed investment in deep AI for 7 or more years, it's HAI at Stanford 2019 that I'd suggest as the dual benchmark: for magic leaps beyond human brainpower alone, and for the desire to maintain AI as a tool to augment what humans do best. UPDATE, NAIRR, OCT 2021: This month, researchers affiliated with the Stanford Institute for Human-Centered Artificial Intelligence released a blueprint for how to build a National AI Research Resource (NAIRR), a system that would allow the broader AI academic research community to access the expensive computing and data resources needed to conduct fundamental and non-commercial AI research.
The report, the culmination of a multidisciplinary two-quarter practicum offered at Stanford Law School and based on dozens of interviews with leading computer scientists, government officials, and policy experts, outlines the necessary steps to create this resource. |
Urgent cooperation calls from SDGs youth, latest May 2023: HAI-SDGS 1, 2, 3, 4; 5: 30 cooperations making women 3 times more productive than men; 6, 7, 8. Leaps 1 - Beyond the Moon ..
| 33 years ago we started practicing core brand transformation inspired by new systems modeling - e.g. of CK Prahalad & Gary Hamel. Typically, when an entity with as large a brand as the UN needed to transform, we'd propose it better start again and then do a reverse takeover; we realise that's not an option for UN2.0, so it's urgent to address the 9-piece combo of the UN2.0 Tech Envoy Team at Guterres HQ - how to contextually value roadmaps for anyone SDG-partnering with Guterres:
Global Connectivity since 1865 (ITU); AIforGood, reborn at ITU 2018, stems from Neumann's peers - 100 times more tech per decade since 1935 (see dad's bio of von Neumann); Digital Cooperation, launched by Guterres in 2018 but seen by those valuing the youth generation as an antidote to the millennium goals' failure to value education beyond primary school; Digital Capacity Building: sustainable gov tech requires UN2.0 to be the skills benchmark wherever government is designed to empower. This leaves 4 traditional connections of the UN to digitalise - inclusion, commons, human rights - so that trust/safety is the brand's reality. The 9th piece, CODES environmental sustainability, seems to have emerged as it became clear that COPs may lead on adaptation, but adaptation needs borderless community replication of deep solutions. 379 UN Antonio Guterres :: Family & Smithian Diary: why 1984's 2025 report was published asking Economist readers to co-search 3 billion new jobs, 2025=1985, following on part 1, teachforsdgs.com |
| 00 Fazle Abed: Which educational and economic partnerships most empower a billion women to end extreme poverty, and value their children's sustainability? Fortunately for SDGS.games 2020s, start deep village maps around partners/alumni of 50 years of servant leadership by Fazle Abed, 1970-2019. In 1970, life expectancy in tropical villages was up to 25 years below the world average; skills trainers' priority was last-mile health - the world's most trusted educators were needed, e.g. epidemiologists UNICEF's Grant, Brilliant, later Jim Kim - and, to end starvation, food's Borlaug. 3) last-mile health |
Automation and Robotics, Economy and Markets, Human Reasoning: Opening the Gate part 1 https://hai.stanford.edu/news/opening-gate
Stanford’s new Institute for Human-Centered Artificial Intelligence aims to fundamentally change the field of AI by integrating a wide range of disciplines and prioritizing true diversity of thought.
Mar 17, 2019 | Fei-Fei Li and John Etchemendy
https://twitter.com/StanfordHAI?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
https://www.facebook.com/StanfordHAI/
https://www.youtube.com/channel/UChugFTK0KyrES9terTid8vA
https://www.linkedin.com/company/stanfordhai
https://www.instagram.com/stanfordhai/?hl=en
It all started in Fei-Fei’s driveway.
It was the summer of 2016.
“John,” she said, “As Stanford’s provost, you’ve led an effort to draw an arrow from technology to the humanities, to help humanists innovate their methodology.”
“It’s time to build another arrow coming back the other direction. It should become a complete feedback loop. We need to bring the humanities and social thinking into tech.”
She went on to explain an epiphany she had recently had — a problem she could no longer ignore. The people building the future all seemed to come from similar backgrounds: math, computer science and engineering. There were not enough philosophers, historians or behavioral scientists influencing new technology. There were very few women or people from underrepresented groups. “The way we educate and promote technology is not inspiring to enough people. So much of the discussion about AI is focused narrowly around engineering and algorithms,” she said. “We need a broader discussion: something deeper, something linked to our collective future. And even more importantly, that broader discussion and mindset will bring us a much more human-centered technology to make life better for everyone.”
Standing in Fei-Fei’s driveway, John saw the vision clearly. As a mathematical logician, he had been actively following the progress of AI for decades; as a philosopher, he understood the importance of the humanities as a guide to what we create. It was obvious that not only would AI be foundational to the future — its development was suddenly, drastically accelerating.
If guided properly, AI could have a profound, positive impact on people’s lives: It could help mitigate the effects of climate change; aid in the prevention and early detection of disease; make it possible to deliver quality medical care to more people; help us find ways to provide better access to clean water and healthy food; contribute to the development of personalized education; help billions of people out of poverty and help solve many other challenges we face as a society.
We believe AI can and should be collaborative, augmentative, and enhancing to human productivity and the quality of our work and life.
But AI could also exacerbate existing problems, such as income inequality and systemic bias. In the past couple of years, the tech industry has struggled through a dark time. Multiple companies violated the trust and privacy of their customers, communities and employees. Others released products into the world that were not properly safety tested. Some applications of AI turned out to be biased against women and people of color. Still more led to other harmful unintended consequences. Some hoped the technology would replace human workers, not seeing the opportunity to augment them.
That day began a conversation that continued over many months. We discovered that we both had been on a similar quest throughout our careers: to discover how the mind works — Fei-Fei from the perspective of cognitive science and AI, and John from the perspective of philosophy.
Meanwhile, Fei-Fei took off for a sabbatical to Google, where she became Chief Scientist of AI at Google Cloud. During her time there, she saw the massive investments the technology industry was making in AI, and worked with many customers from every industry in great need of a digital and AI transformation. She became even more committed to the idea of creating a human-centered AI institute at Stanford.
Our Mission is to advance AI research, education, policy and practice to improve the human condition.
In 2017, Fei-Fei began discussing the future of AI with Marc Tessier-Lavigne, the university’s new president and a neuroscientist. She brought in Stanford Computer Science Professors James Landay, who specializes in human/computer interaction, and Chris Manning, who specializes in machine learning and linguistics, to further develop the idea. When John stepped down as Provost in 2017, Fei-Fei asked him to co-direct the undertaking. Together they brought in Russ Altman, a Stanford Professor of Bioengineering and Data Science; Susan Athey, an Economics of Technology Professor at Stanford Graduate School of Business; Surya Ganguli, a Stanford Professor of Applied Physics and Neurobiology; and Rob Reich, a Stanford Professor of Political Science and Philosophy. Encouraged by Stanford’s school deans, especially Jon Levin, Jennifer Widom and Debra Satz (Business, Engineering and Humanities and Sciences), the new team evangelized the idea with colleagues and friends. Soon dozens of accomplished faculty members were contributing their perspectives.
Nearly three years and many deep conversations later, we are humbled and proud to announce the official launch of The Stanford Institute for Human-Centered Artificial Intelligence (HAI).
At HAI our work is guided by three principles: that we must study and forecast AI’s Human impact, and guide its development in light of that impact; that AI applications should Augment human capabilities, not replace humans; and that we must develop Intelligence as subtle and nuanced as human intelligence.
Our aim is for Stanford HAI to become an interdisciplinary, global hub for AI learners, researchers, developers, builders and users from academia, government and industry, as well as leaders and policymakers who want to understand and influence AI’s impact and potential.
These principles extend the discipline of AI far beyond the confines of engineering. Understanding its impact requires expertise from the humanities and social sciences; mitigating that impact demands insights from economics and education; and guiding it requires scholars of law, policy and ethics. Just so, designing applications to augment human capacities calls for collaborations that reach from engineering to medicine to the arts and design. And creating intelligence with the flexibility, nuance and depth of our own will require inspiration from neuroscience, psychology and cognitive science.
Stanford HAI has sponsored multiple symposia bringing together experts on topics including the Future of Work, and AI, Humanities and the Arts. This summer we will be launching our first Executive Education program in partnership with the Graduate School of Business and our first Congressional Bootcamp in partnership with the Freeman-Spogli Institute for International Studies. We are also sponsoring a summer AI research internship program for “graduates” of the AI4All diversity education program, to enable these young people to maintain their interests and hone their skills.
We are now selecting finalists for three fellowship programs that we have launched in partnership with other Stanford units: HAI Ethics Fellows (with the McCoy Family Center for Ethics in Society), HAI Journalism Fellows (with the John S. Knight Journalism Fellows Program), and HAI International Security Fellows (with the Center for International Security and Cooperation). The winners of these fellowships will join the HAI community in the Fall. We will also soon be announcing the appointments of our first HAI Journalist-in-Residence and our first HAI Artist-in-Residence, both of whom will join us this Fall, as well as our first class of HAI Engineering Fellows.
***
It has been quite a journey since that first meeting in Fei-Fei’s driveway three years ago. That meeting happened because John was buying a house adjoining Fei-Fei’s back yard. We became neighbors, and soon after John moved in, he replaced the old fence between our houses. In one spot, connecting Fei-Fei’s back yard to John’s side yard, he built a gate. Little did we know how well-trodden the path between our houses would become thanks to HAI! We hope Stanford HAI will provide the same opportunity for partnership, trust and collaboration to many, many others. Please join us in our quest to improve the human condition through Human-centered AI.
According to Bard (May 2023), some of Stanford HAI's key partners include:
The University of Oxford
The University of Cambridge
The Massachusetts Institute of Technology
The University of California, Berkeley
The University of Toronto
The National University of Singapore
Tsinghua University
Peking University
The Indian Institute of Technology Bombay
The Indian Institute of Science
These partnerships are helping to create a global network of experts who are committed to using AI for good. By working together, we can ensure that AI is used to solve the world's most pressing problems, and that it does not become a threat to humanity.
In addition to these formal partnerships, Stanford HAI also works with a number of other organizations to promote the responsible development of AI. These organizations include:
The World Economic Forum
The Aspen Institute
The Brookings Institution
The Carnegie Endowment for International Peace
The Center for American Progress
The Center for Strategic and International Studies
The Council on Foreign Relations
The New America Foundation
The OpenAI Foundation
The Partnership on AI
These organizations are playing a critical role in shaping the future of AI. By working together, we can ensure that AI is used to create a better world for all.