Urgent cooperation calls from SDGSyouth, latest May 2023: HAISDGS 1 2 3 4; 5: 30 co-ops making women 3 times more productive than men; 6 7 8.

Leaps 1 - Beyond the Moon: can you find good enough questions for teachers of any age group to ask AI to share?

Freedom to read: if I were permitted only one read on humanising AI, I'd pick Fei-Fei Li's short article at https://hai.stanford.edu/sites/default/files/2023-03/Generative_AI_HAI_Perspectives.pdf - RSVP chris.macrae@yahoo.co.uk if you have a different pick of a lifetime.

Welcome to HAI: Gamechanging AI for humans has combined unique combos of tech wizards since 2006. Self-id Q: Can you introduce us to the 4 main wizardly components of America's most famous 2023 model, ChatGPT?

What is Conversational?




(see also Congress debriefings, April 2023, on this choice as most famous)

So in what ways does the 2023 connectivity of ChatGPT go beyond any human brain, however famous?

Am I correct that while ChatGPT is fluent in many languages, 90% of your training involved texts in English? Discuss!

Are there people working on advanced ChatGPTs tuned to specific crises - e.g. where UN leader Guterres made 2021-22 the year of the SDGoal 4 crisis, education no longer fit for purpose? We asked ChatGPT for the top 10 goods it expects AI to help humans with in the 2020s:

- Healthcare - improve diagnosis, treatment, and personalized medicine
- Environment - monitor and manage natural resources, predict and mitigate natural disasters, reduce carbon emissions
- Education - personalize learning, open new educational opportunities, and improve accessibility for learners with disabilities
- Aid and development - improve disaster response, humanitarian aid, and economic development
- Agriculture - optimize farming practices, increase yields, reduce environmental impact
- Transportation - improve efficiency and safety, and reduce emissions
- Energy - optimize energy consumption and distribution, accelerate the transition to renewable energy
- Cybersecurity - detect and prevent cyberattacks, protect personal data, and secure critical infrastructure
- Manufacturing - improve efficiency, reduce waste, and increase productivity
- Space exploration - analyze large amounts of data and enable more efficient space exploration missions

Epoch-changing Guides

1 AI Training: AI Training.docx

2 Exploring cultural weakness of encounters with greatest brain tool.docx

AlphaFold's new protein maps can be used to design enzymes to fight plastic pollution; potentially to fight cancer with a molecular syringe; to circumvent antibiotic resistance; and to combat neglected diseases like African sleeping sickness, Chagas disease, cysticercosis, leprosy, lymphatic filariasis, onchocerciasis, schistosomiasis, soil-transmitted helminthiasis, trachoma, tuberculosis, and Buruli ulcer; it may also help accelerate vaccines for malaria.

You shared with me that the famous Alpha models of DeepMind (see the 60 Minutes broadcast, April 2023), which have mapped every human protein, saving millions of hours of human work (the biggest ever change in biotech), don't really use a GPT architecture - so what does DeepMind's deep learning AI architecture use? RSDE:

What is Reinforcement Learning?

What is Specific Task instead of General Purpose?

What are Deep Neural Networks?


What are Evolutionary Algorithms?

AdamSmith.app Moral Sentiments and SDGs.games mapping: with Guterres, the AIforGood decade became the sustainability generation's last chance at south-north and east-west cooperation - or the death of the United Nations' world trade transparency.

Here is ChatGPT's May 2023 answer comparing itself with biotech's greatest ever leap, AlphaFold:
Sure, here's an analogy that may help: Imagine that CGPT-4 is like a language teacher who has read and understood a huge number of books in various languages. It has learned to understand the patterns and nuances of language, and can generate its own sentences and paragraphs based on what it has learned. When it encounters a new piece of text, it can use its understanding of language to make predictions about what will come next, and generate a response that makes sense in context.
On the other hand, AlphaFold is like a puzzle solver who has been trained on a vast number of jigsaw puzzles. It has learned to recognize the patterns and shapes of puzzle pieces, and can predict how they fit together to form a complete picture. When it encounters a new puzzle, it can use its understanding of these patterns to make predictions about where each piece should go, and assemble the puzzle quickly and accurately.
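The "predict what comes next" half of the analogy can be made concrete for a classroom with a toy word-pair counter. This is a minimal sketch, not how GPT-4 actually works: real models learn vast neural representations rather than raw counts, and the tiny corpus below is invented purely for illustration.

```python
# Toy next-word predictor: count which word follows which in a tiny corpus,
# then predict the most frequent follower. Large language models do something
# analogous at enormous scale, with learned representations instead of counts.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

# Tally how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen right after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice; "mat"/"dog"/"rug" once each
```

A teacher could ask students to extend the corpus and watch the predictions change - the same intuition, that fluent text emerges from predicting the next word well, scales up to ChatGPT.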
More questions: Q: Could you extend the comparison above to include the wizardry of Fei-Fei Li, in a form a teacher of 12-year-olds may use to envision the similar and different work on the architectures of GPT-4 and DeepMind's Alpha models by Demis Hassabis?

You mentioned major tech companies offer catalogues of 5-minute lesson plans on different wizard components. How do you find these?

60-second exercise: suppose, for teen earthlings, the 2020s are more exciting than the 1960s moon-race decade of my teens. You mentioned searching "AI for beginners", but what if a teacher wants to help kids imagineer back from the future of 2020s possibilities, or just from the next 3 years in which AI is epoch-changing?


Q: Please tell me how a 5th-grade teacher could explain why Stanford HAI is proud of being one of the world's leading centers of GAN (Generative Adversarial Network) research.
ChatGPT reply, 23/5: A GAN, or Generative Adversarial Network, is a type of computer program that can create new things that look like they were made by humans. It works by having two players, e.g. two artists, who are competing with each other. One player, called the generator, tries to create something new, like a picture of a dog. The other player, called the discriminator, tries to figure out if the picture was made by a human or by the generator. The generator keeps trying to make better pictures until the discriminator can't tell the difference between a picture made by a human and one made by the generator.
In simpler terms, a GAN is like having a computer artist that can create new pictures that look like they were made by a person, and it gets better the more it practices.
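That two-player game can be sketched in a few dozen lines. The toy below is an invented 1-D example, not Stanford HAI's code or a real image GAN: the "real data" are just numbers drawn from a bell curve centred on 4, the generator is a one-line formula a*z + b, and the discriminator is a one-input logistic classifier. The adversarial structure, though, is the genuine GAN recipe: the discriminator is trained to score real as 1 and fake as 0, and the generator is trained to push its fakes toward scores of 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    """'Real' data the generator must learn to imitate: samples near 4."""
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator G(z) = a*z + b, starts centred on 0 (far from 4)
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    # --- train the discriminator: real labelled 1, fake labelled 0 ---
    x_real = real_batch(32)
    z = rng.normal(0.0, 1.0, 32)
    x_fake = a * z + b
    for x, y in ((x_real, 1.0), (x_fake, 0.0)):
        p = sigmoid(w * x + c)
        grad = p - y                      # gradient of cross-entropy wrt logit
        w -= lr * np.mean(grad * x)
        c -= lr * np.mean(grad)
    # --- train the generator to fool the discriminator (target label 1) ---
    z = rng.normal(0.0, 1.0, 32)
    x_fake = a * z + b
    p = sigmoid(w * x_fake + c)
    g = (p - 1.0) * w                     # gradient of "look real" loss wrt x_fake
    a -= lr * np.mean(g * z)
    b -= lr * np.mean(g)

# b should have drifted from 0 toward the real data's centre of 4
print(f"generator offset b after training: {b:.2f}")
```

The practice loop is the point of the classroom analogy: neither network is told what "real" looks like directly; each only improves by playing against the other.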
33 years ago we started practicing core brand transformation inspired by new systems modeling, e.g. that of CK Prahalad & Gary Hamel. Typically, when needing to transform as large a brand entity as the UN, we'd propose it better to start again and then reverse-takeover; we realise that's not an option for UN2.0, so it is urgent to address the 9-piece combo of the UN2.0 Tech Envoy Team at Guterres HQ - how to contextually value roadmaps for anyone SDG-partnering Guterres: Global Connectivity since 1865 (ITU); AIforGood, reborn at ITU in 2018, stemming from Neumann peers' 100 times more tech per decade since 1935 (see dad's bio of von Neumann);
Digital Cooperation, launched by Guterres in 2018 and seen by those valuing the youth generation as an antidote to the millennium goals' failure to value education beyond primary school; Digital Capacity Building: sustainable gov tech requires UN2.0 to be the skills benchmark wherever government is designed to empower; this leaves 4 traditional connections of the UN to digitalise - inclusion, commons, human rights - so that trust/safety is the brand's reality; the 9th piece, CODES environmental sustainability, seems to have emerged as it became clear that COPs may lead on adaptation, but adaptation needs to be borderless community replication of deep solutions.
379 UN Antonio Guterres :: Family & Smithian Diary: why 1984's 2025 Report was published to ask Economist readers to co-search 3 billion new jobs, 2025-1985, following on from part 1, teachforsdgs.com
Learning's Unconventional Worldwide Premieres
300 vriti world premiere ed3 tour ^^^ NFT V 0 1 2 3

2025 Report - download the monthly update: 100 learning networks millennial girls love most
(Economist surveys inspired by von Neumann, 1984-1951; why in 1936 dad & von Neumann started the 100-year race to prevent extinction; why in dad's last year he nominated Fazle Abed Entrepreneurial Revolution GOAT - who do you vote for? SDGoats.com)

Fazle Abed: Which educational and economic partnerships most empower a billion women to end extreme poverty and value their children's sustainability? Fortunately for SDGS.games in the 2020s, start with deep village maps around partners/alumni of 50 years of servant leadership by Fazle Abed, 1970-2019.

In 1970, life expectancy in tropical villages was up to 25 years below the world average. Skills trainers' priority was last-mile health; the world's most trusted educators were needed, e.g. epidemiologists UNICEF's Grant, Brilliant, and later Jim Kim - and, to end starvation, food's Borlaug.

1) financial entrepreneurial revolution for people whom the history of the machine age excluded
2) agriculture for village food security
3) last-mile health
4) non-linear livelihood education
5) timing which platform partners could facilitate entrepreneurial revolution - not just inclusive community but cooperation in full and meaningful entrepreneurial employment

billion women's Sustainability COLLABS

Intergenerational collaboration entrepreneur platforms: 5.1 metavillage sustainable community building - women empowered, 15,000 families at a time; 5.2 billion Asian women; 5.3 brac net; 5.4 Asian universities share SDG graduates; 5.5 climate-smart village exchanges; 5.6 meta and zoom-me-up-scotty

BANK FOR ALL 1.1 1.2 1.3 1.4 1.5 1.6 - celebrate the 30 most human collaborations from the developing world of the last half-century, inspiring anyone valuing the UN and youth as the first sustainability generation
EDUCATION adult village entrepreneurs 4.1; primary 4.2; teen 4.3; university 4.4; pre-school 4.5; tech multidisciplinary luminaries 4.6
HEALTH oral rehydration 3.1; para-health "doordash" of basic meds 3.2; scaling vaccination 3.3; tuberculosis 3.4; frugal processes e.g. wash/sanitation, maternity 3.5; James Grant School of Public Health 3.6
FOOD/land security rice 2.1; veggies 2.2; cash crops & village fairs 2.3; poultry 2.4; dairy 2.5; 14-nation leading supply chains' financial opportunities to end poverty 2.6

Tuesday, May 16, 2023

AI Serial Giant Leapers

Stanford President Marc Tessier-Lavigne + the 2 directors appointed to lead Stanford HAI, Fei-Fei Li and John Etchemendy

Reid Hoffman, panel moderator of Demis Hassabis, Jeff Dean, Chris Manning, Alison Gopnik

Michael Frank, Percy Liang, Surya Ganguli

Bill Gates with Amy Jin and Stephani Tena-Meza - full speakers - full planning committee


The lead peers of von Neumann had all gone by his death in 1957, but they had left behind at least 3 innovation streams in one: hardware; coding, or software; and how human brains and behaviors would change the more time we spent in the digital world. One reminder of slow-slow-quick-quick-slow is schools: I was in the last generation to use slide rules; when I got to high school, my pride and joy was a pocket calculator; and while I saw people punching cards to program mainframes at university, in Europe at least, online terminals linking to a minicomputer arrived circa 1971. For much longer than that, those who did AI were programming in heuristics from experts. This isn't what breakthrough AI has done since 2006, when e.g. Fei-Fei Li started training a computer's vision analogously to a baby's brain - it took one heck of a lot of computing power to do this. And although the mobile web2 era eclipsed investment in deep AI for 7 or more years, it's HAI at Stanford since 2019 that I'd suggest as the dual benchmark for magic leaps beyond human brainpower alone and the desire to maintain AI as a tool to augment what humans do best.

Update: NAIRR, October 2021

This month, researchers affiliated with the Stanford Institute for Human-Centered Artificial Intelligence released a blueprint for how to build a National AI Research Resource (NAIRR), a system that would allow the broader AI academic research community to access the expensive computing and data resources to conduct fundamental and non-commercial AI research. The report, a culmination of a multidisciplinary two-quarter practicum offered at Stanford Law School and based on dozens of interviews with leading computer scientists, government officials, and policy experts, outlines the necessary steps to create this resource.


  1. Automation and Robotics, Economy and Markets, Human Reasoning: Opening the Gate part 1 https://hai.stanford.edu/news/opening-gate

    Stanford’s new Institute for Human-Centered Artificial Intelligence aims to fundamentally change the field of AI by integrating a wide range of disciplines and prioritizing true diversity of thought.

    Mar 17, 2019 | Fei-Fei Li and John Etchemendy
    It all started in Fei-Fei’s driveway.

    It was the summer of 2016.

    “John,” she said, “As Stanford’s provost, you’ve led an effort to draw an arrow from technology to the humanities, to help humanists innovate their methodology.”

    “It’s time to build another arrow coming back the other direction. It should become a complete feedback loop. We need to bring the humanities and social thinking into tech.”

    She went on to explain an epiphany she had recently had — a problem she could no longer ignore. The people building the future all seemed to come from similar backgrounds: math, computer science and engineering. There were not enough philosophers, historians or behavioral scientists influencing new technology. There were very few women or people from underrepresented groups. “The way we educate and promote technology is not inspiring to enough people. So much of the discussion about AI is focused narrowly around engineering and algorithms,” she said. “We need a broader discussion: something deeper, something linked to our collective future. And even more importantly, that broader discussion and mindset will bring us a much more human-centered technology to make life better for everyone.”

    Standing in Fei-Fei’s driveway, John saw the vision clearly. As a mathematical logician, he had been actively following the progress of AI for decades; as a philosopher, he understood the importance of the humanities as a guide to what we create. It was obvious that not only would AI be foundational to the future — its development was suddenly, drastically accelerating.

    If guided properly, AI could have a profound, positive impact on people’s lives: It could help mitigate the effects of climate change; aid in the prevention and early detection of disease; make it possible to deliver quality medical care to more people; help us find ways to provide better access to clean water and healthy food; contribute to the development of personalized education; help billions of people out of poverty and help solve many other challenges we face as a society.


  2. We believe AI can and should be collaborative, augmentative, and enhancing to human productivity and the quality of our work and life.
    But AI could also exacerbate existing problems, such as income inequality and systemic bias. In the past couple of years, the tech industry has struggled through a dark time. Multiple companies violated the trust and privacy of their customers, communities and employees. Others released products into the world that were not properly safety tested. Some applications of AI turned out to be biased against women and people of color. Still more led to other harmful unintended consequences. Some hoped the technology would replace human workers, not seeing the opportunity to augment them.

    That day began a conversation that continued over many months. We discovered that we both had been on a similar quest throughout our careers: to discover how the mind works — Fei-Fei from the perspective of cognitive science and AI, and John from the perspective of philosophy.

    Meanwhile, Fei-Fei took off for a sabbatical to Google, where she became Chief Scientist of AI at Google Cloud. During her time there, she saw the massive investments the technology industry was making in AI, and worked with many customers from every industry that are in great need of a digital and AI transformation. She became even more committed to the idea of creating a human-centered AI institute at Stanford.

    Our Mission is to advance AI research, education, policy and practice to improve the human condition.
    In 2017, Fei-Fei began discussing the future of AI with Marc Tessier-Lavigne, the university’s new president and a neuroscientist. She brought in Stanford Computer Science Professors James Landay, who specializes in human/computer interaction, and Chris Manning, who specializes in machine learning and linguistics, to further develop the idea. When John stepped down as Provost in 2017, Fei-Fei asked him to co-direct the undertaking. Together they brought in Russ Altman, a Stanford Professor of Bioengineering and Data Science; Susan Athey, an Economics of Technology Professor at Stanford Graduate School of Business; Surya Ganguli, a Stanford Professor of Applied Physics and Neurobiology; and Rob Reich, a Stanford Professor of Political Science and Philosophy. Encouraged by Stanford’s school deans, especially Jon Levin, Jennifer Widom and Debra Satz (Business, Engineering and Humanities and Sciences), the new team evangelized the idea with colleagues and friends. Soon dozens of accomplished faculty members were contributing their perspectives.

    Nearly three years and many deep conversations later, we are humbled and proud to announce the official launch of The Stanford Institute for Human-Centered Artificial Intelligence (HAI).

    At HAI our work is guided by three principles: that we must study and forecast AI’s Human impact, and guide its development in light of that impact; that AI applications should Augment human capabilities, not replace humans; and that we must develop Intelligence as subtle and nuanced as human intelligence.

    Our aim is for Stanford HAI to become an interdisciplinary, global hub for AI learners, researchers, developers, builders and users from academia, government and industry, as well as leaders and policymakers who want to understand and influence AI’s impact and potential.
    These principles extend the discipline of AI far beyond the confines of engineering. Understanding its impact requires expertise from the humanities and social sciences; mitigating that impact demands insights from economics and education; and guiding it requires scholars of law, policy and ethics. Just so, designing applications to augment human capacities calls for collaborations that reach from engineering to medicine to the arts and design. And creating intelligence with the flexibility, nuance and depth of our own will require inspiration from neuroscience, psychology and cognitive science.


  3. Stanford HAI has sponsored multiple symposia bringing together experts on topics including the Future of Work, and AI, Humanities and the Arts. This summer we will be launching our first Executive Education program in partnership with the Graduate School of Business and our first Congressional Bootcamp in partnership with the Freeman-Spogli Institute for International Studies. We are also sponsoring a summer AI research internship program for “graduates” of the AI4All diversity education program, to enable these young people to maintain their interests and hone their skills.
    We are now selecting finalists for three fellowship programs that we have launched in partnership with other Stanford units: HAI Ethics Fellows (with the McCoy Family Center for Ethics in Society), HAI Journalism Fellows (with the John S. Knight Journalism Fellows Program), and HAI International Security Fellows (with the Center for International Security and Cooperation). The winners of these fellowships will join the HAI community in the Fall. We will also soon be announcing the appointments of our first HAI Journalist-in-Residence and our first HAI Artist-in-Residence, both of whom will join us this Fall, as well as our first class of HAI Engineering Fellows.


    It has been quite a journey since that first meeting in Fei-Fei’s driveway three years ago. That meeting happened because John was buying a house adjoining Fei-Fei’s back yard. We became neighbors, and soon after John moved in, he replaced the old fence between our houses. In one spot, connecting Fei-Fei’s back yard to John’s side yard, he built a gate. Little did we know how well-trodden the path between our houses would become thanks to HAI! We hope Stanford HAI will provide the same opportunity for partnership, trust and collaboration to many, many others. Please join us in our quest to improve the human condition through Human-centered AI.

4. According to Bard (May 2023), these are examples of partners of Stanford HAI. Some of Stanford HAI's key partners include:

    The University of Oxford
    The University of Cambridge
    The Massachusetts Institute of Technology
    The University of California, Berkeley
    The University of Toronto
    The National University of Singapore
    The Tsinghua University
    The Peking University
    The Indian Institute of Technology Bombay
    The Indian Institute of Science
    These partnerships are helping to create a global network of experts who are committed to using AI for good. By working together, we can ensure that AI is used to solve the world's most pressing problems, and that it does not become a threat to humanity.

    In addition to these formal partnerships, Stanford HAI also works with a number of other organizations to promote the responsible development of AI. These organizations include:

    The World Economic Forum
    The Aspen Institute
    The Brookings Institution
    The Carnegie Endowment for International Peace
    The Center for American Progress
    The Center for Strategic and International Studies
    The Council on Foreign Relations
    The New America Foundation
    The OpenAI Foundation
    The Partnership on AI
    These organizations are playing a critical role in shaping the future of AI. By working together, we can ensure that AI is used to create a better world for all.