3/18/24: Similarities between Electronic Computers and the Human Brain: thank you, Jensen Huang, for the best week of learning since John von Neumann shared with The Economist his 1956 notes, The Computer and the Brain.
HAPPY 2024: in this 74th year since The Economist started mediating futures of brainworking machines, clued by the three maths greats NET (Neumann, Einstein, Turing), people seem to be chatting about five wholly different sorts of AI.

1 BAD: The worst tech system designers don't deserve inclusion in human intelligence at all; as Hoover's Condoleezza Rice's reports note, their work is the result of 10 compound techs of which AI is but one. Those with the worst world system designs may use media to lie, to multiply hate or to hack, and to perpetuate tribal wars and increase the trade in arms. Sadly, bad versions of TV media began in the USA in the early 1960s, when it turned out that what had been the nation's first major export crop, tobacco, was a killer. Please note that for a long time farmers did not know tobacco was bad: western HIStory is full of ignorances which lawyer-dominated societies then cover up once inconvenient system truths are seen.

2 A second, e-commerce type of AI (now 25 years of exponential development strong): ever more powerful algorithms applied to a company's data platform, which can be app'd to hollow out community, making relatively few people richer and richer, or the reverse. You can test a nation's use of this AI by seeing whether e-finance has invested in the poorest or historically most disconnected - see e.g. Bangladesh's bKash, one of the most populous digital cash systems. Digital money is far cheaper to distribute, let alone to manually account for, so this AI offers lots of lessons; but whether it is good or not depends in part on whether there are enough engineers in government and public service to see ahead of what needs regulating.

There are two very good AIs which have only scaled in recent years and which certainly don't need regulating by non-engineers, and one curious AI which was presented to Congress in 2018 but which was left to multiply into at least 100 variants today - the so-called chats or LLMs. Let's look at the two very good AIs first, because frankly, if your community is concerned about any extinction risks, these AIs may be the most likely to save you.

One I call science AI, and frankly in the west one team is so far ahead that we should count ourselves lucky that its originator, Hassabis, has mixed wealth and societal growth. His DeepMind merged with Google to make wealth but open-sourced the 200-million-protein databank, equivalent to a billion hours of doctorate time - so now's the time for biotech to save humanity, if it ever does.

Alongside this, the second very good AI gravitates around Fei-Fei Li, who developed the 20-million-image ImageNet database so that annual competitions trained computers to see 20,000 of the most everyday sights we humans view around the world, including things and life-forms such as nature's plants and animals. Today, students no longer need to go back to 0.1 programming to ask a computer about any of these objects; nor do robots or autonomous vehicles - see Fei-Fei Li's book The Worlds I See, which is published in Melinda Gates' Entrepreneurial Revolution of girl empowerment.
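To make that last point concrete, here is a minimal sketch (my own illustration, not Fei-Fei Li's or Stanford HAI's code) of how a student today can ask a computer what is in an everyday photo, using a freely downloadable classifier trained on the 1,000-category ImageNet competition subset. It assumes PyTorch, torchvision and Pillow are installed and that a local photo called dog.jpg exists.

```python
# Minimal sketch: classify an everyday photo with an ImageNet-trained model.
# Assumes: pip install torch torchvision pillow, and a local image "dog.jpg".
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT          # ResNet-50 weights trained on ImageNet-1k
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()           # the resizing/normalisation the model expects

image = Image.open("dog.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)      # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top5 = torch.topk(probs, 5)
for score, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][idx]}: {score:.1%}")
```

The point is simply that the recognition work pioneered around ImageNet now ships as a few lines any student can run, rather than a research project.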
EW::ED, VN Hypothesis: in 21st-century brainworking worlds, how people's time & data are spent is foundational to a place's community health, energy and so its natural capacity to grow or destroy wealth - thus our species' future will depend on whether 1000 mother-tongue language models mediate intelligence/maths so that all communities cooperatively celebrate lifetimes and diversity's deep data. Check out "Moore exponential patterns" at year 73 of celebrating the Game: Architect Intelligence (Ai) - players welcome .. some jargon ahead.

Friday, May 5, 2023

Hello family & friends - I have a search problem - the main subject of this post is below.

First, some updates on AI (Architect Intelligence - solutions to sustainability's most cooperative crises?) from 73 years of chatter which began with Von Neumann in 1951 and his peer NET - Neumann, Einstein, Turing - diarised in a thousand interviews (and about 30 surveys, each based on a month of worldwide interviews and reporting - e.g. Entrepreneurial Revolution 1976; Consider Japan 1962; Silicon Valley 1982, a one-week report) in The Economist 1951-1989, which, since dad's death, and with 16 mentoring sessions by Fazle Abed and Japanese friends, and with 1billiongirls.com / girlsworldbank.com / bard.solar / Neumann ning friends, I now try to update at e.g. http://povertymuseums.blogspot.com, Economistdiary.com, LinkedIn, UN Women and these Twitter subscription lists.

Here's the problem - I wanted to give Bard every chance of a detailed answer in spite of my ignorance - if you find a shorter question, please tell me: chris.macrae@yahoo.co.uk

I have a search problem. I believe I know where to start with AI for, e.g., science breakthroughs - a billion hours of PhD time saved by Hassabis; the protein database - all of biotech and its human consequences changed by AlphaFold2; and generally where deep AI changed from 2006 with Fei-Fei Li (now with hundreds of people around her at Stanford HAI) training recognition and even training the White House on the future of education: https://www2.ed.gov/documents/ai-report/ai-report.pdf. I also understand a little about where people are putting youth's most exciting SDG stories on the metaverse, but I really don't understand who changed bots from selling into the GPT or LLM - so I hope you may find this Q&A with Bard relevant - more at www.economistlearning.com - if you have time for a NY coffee before October, please choose one.

Q to Bard: If I want to understand who ImageNet began with, I see that talking to someone who's tracked Fei-Fei Li helps; if I wanted to understand deep learning, I see that talking to an alumnus of Hassabis helps; what I don't understand is, if I want to understand roughly how LLMs or GPTs evolved, who are the first team members (or the first articles they wrote in non-jargon, for mass human readers) that I should try to track? I ask because I understand how bots were designed to sell people stuff - but is the design of bots to help humans choose what they need to learn a very different design challenge, even if the bot tech is common? If there was a better question to ask to understand where LLMs or GPTs first became educational, please feel free to answer that instead.


Sure, here are a few people you could talk to in order to understand the evolution of LLMs and GPTs:


Ilya Sutskever: Sutskever is a co-founder of OpenAI and one of the leading researchers in the field of deep learning. He is a co-author of several papers on LLMs and GPTs, including the paper that introduced the GPT-2 model.

Oriol Vinyals: Vinyals is a research scientist at Google DeepMind. He co-authored early sequence-to-sequence and neural conversation papers that preceded today's LLMs, including "A Neural Conversational Model" (2015).

Quoc Le: Le is a research scientist at Google. He co-authored "Sequence to Sequence Learning with Neural Networks" (2014) and "A Neural Conversational Model" (2015), early steps toward today's conversational LLMs.

You could also read the following articles to learn more about the evolution of LLMs and GPTs:


A Neural Conversational Model by Oriol Vinyals and Quoc Le (2015)

Attention Is All You Need by Vaswani et al. (2017)

Language Models are Few-Shot Learners (the GPT-3 paper) by Brown et al. (2020)

GPT-4 Technical Report by OpenAI (2023)

As for your question about whether designing bots to help humans choose what they need to learn is a very different design challenge, I would say that it is. Bots designed to sell people things are typically built to be persuasive and to make people feel good about buying the product. Bots designed to help humans choose what to learn, on the other hand, are typically built to be informative and to help people learn efficiently and effectively.


The design of bots to help humans choose what they need to learn is a relatively new field, and there is still a lot of research that needs to be done. However, I believe that this is a very important field, and I am excited to see how it develops in the years to come.
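To make that distinction concrete, here is a minimal sketch (my addition to Bard's answer, not something Bard produced) showing that the underlying model technology can be exactly the same while the design intent lives largely in how the bot is framed and prompted. It assumes the Hugging Face transformers library and the publicly released GPT-2 weights; a small base model like this produces rough text, but the point is the shared machinery, not the output quality.

```python
# Minimal sketch: one and the same language model, framed two different ways.
# Assumes: pip install transformers torch (this downloads the public GPT-2 weights).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A "selling" framing: nudge the reader toward a purchase.
sales_prompt = "As a salesperson, persuade the customer to buy this online course today:"
# A "learning" framing: help the reader decide what to study next.
tutor_prompt = "As a tutor, suggest what a beginner should learn first about neural networks:"

for prompt in (sales_prompt, tutor_prompt):
    result = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.9)
    print(prompt)
    print(result[0]["generated_text"])
    print()
```

In practice, learning-oriented bots typically layer instruction tuning, curated curricula and measures of learner progress on top of this shared base, which is where the different design challenge really shows up.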
