3/18/24: Similarities between Electronic Computers and the Human Brain: thank you Jensen Huang for the best week of learning since John von Neumann shared his 1956 notes, The Computer and the Brain, with The Economist.
HAPPY 2024: in this 74th year since The Economist started mediating futures of brainworking machines, clued by the 3 maths greats NET (Neumann, Einstein, Turing), people seem to be chatting about 5 wholly different sorts of AI.

1 BAD: The worst tech-system designers don't deserve inclusion in human intel at all; as Hoover's Condoleezza Rice reports, their work is the result of 10 compound techs of which AI is but one. Those worst for world system designs may use media to lie, to multiply hate or hack, and to perpetuate tribal wars and increase trade in arms. Sadly, bad versions of TV media began in the USA in the early 1960s, when it turned out that what had been the nation's first major export crop, tobacco, was a killer. Please note that for a long time farmers did not know tobacco was bad: western HIStory is full of ignorances which lawyer-dominated societies then cover up once inconvenient system truths are seen.

2: A second AI type, ecommerce (now 25 years of exponential development strong), involves ever more powerful algorithms applied to a company's data platform; these can be app'd to hollow out community, making relatively few people richer and richer, or the reverse. You can test a nation's use of this AI by seeing whether e-finance has invested in the poorest or historically most disconnected - see e.g. Bangladesh's bKash, one of the most populous digital cash systems. Digital money is far cheaper to distribute, let alone to manually account for, so this kind of AI offers lots of lessons; but whether it is good or not depends in part on whether there are enough engineers in government and public service to see ahead of what needs regulating.

There are 2 very good AIs which have only scaled in recent years and which certainly don't need regulating by non-engineers, and one curious AI which was presented to Congress in 2018 but which was left to multiply into at least 100 variants today: the so-called chats, or LLMs.
Let's look at the 2 very good AIs first because, frankly, if your community is concerned about any extinction risks, these AIs may be the most likely to save you. One I call science AI, and frankly in the west one team is so far ahead that we should count ourselves lucky that its originator, Hassabis, has mixed wealth and societal growth. His DeepMind merged with Google to make wealth but open-sourced the 200-million-protein databank, equivalent to a billion hours of doctorate time - so now's the time for biotech to save humanity, if it ever does. Alongside this, the second very good AI gravitates around Fei-Fei Li, who developed the 20-million-image ImageNet database so that annual competitions could train computers to see 20000 of the most everyday sights we humans view around the world, including things and life-forms such as nature's plants and animals. Today, students no longer need to go back to 0.1 programming to ask a computer about any of these objects; nor do robots or autonomous vehicles - see Fei-Fei Li's book The Worlds I See, published under Melinda Gates's girl-empowerment imprint in the spirit of Entrepreneurial Revolution.
EW::ED, VN Hypothesis: in 21st-century brainworking worlds, how people's times and data are spent is foundational to a place's community health, energy, and so its natural capacity to grow or destroy wealth - thus our species will depend on whether 1000 mother-tongue language models mediate intelligence/maths so that all communities cooperatively celebrate lifetimes and diversity's deep data. Check out "Moore exponential patterns" at year 73 of celebrating Game: Architect Intelligence (Ai) - players welcome .. some jargon

Tuesday, May 2, 2023

latest updates 5/4 - how did ImageNet alumni (around Fei-Fei Li) change the world from 2006 onwards?

We welcome corrections - this is our current understanding.

In 2006, out of Stanford, Fei-Fei Li became the most connected & cooperative hub of ImageNet research.

From 2010 to 2017, the ImageNet Large Scale Visual Recognition Challenges were held.

Typically the organisers were:

Prof. Li Fei-Fei, PI, Stanford University
Prof. Jia Deng, Princeton University
Prof. Olga Russakovsky, Princeton University
Prof. Alex Berg, UNC Chapel Hill, Facebook, Shopagon
Prof. Kai Li, Princeton University

Typically 5 winners were announced annually. When pressed, ChatGPT says all winners were important, but 3 whose consequences may be most worth understanding first are:

  1. ResNet (2015): ResNet is significant because of its connection to AlphaFold, which has revolutionized the field of protein folding prediction. ResNet's deep neural network architecture with skip connections enabled the training of extremely deep models, which has been instrumental in many other deep learning applications beyond image recognition.

  2. AlexNet (2012): AlexNet was the first deep convolutional neural network to win the ImageNet competition, and its success helped establish deep learning as a dominant approach in AI. Its architecture consisted of multiple layers of convolutional and pooling layers, followed by fully connected layers. AlexNet's breakthrough performance in image classification demonstrated the potential of deep learning for solving complex tasks.

  3. VGG (2014): VGG is another deep convolutional neural network architecture that achieved top performance in the ImageNet competition. Its architecture is simpler than AlexNet's, with multiple 3x3 convolutional layers stacked on top of each other. VGG's success demonstrated that increasing the depth of neural networks can improve performance, paving the way for ResNet and other deep neural network architectures.

These three winners have all made significant contributions to the advancement of AI and have influenced the development of subsequent deep learning models.
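The skip connection that the ResNet entry above describes can be sketched in a few lines. This is a minimal, hypothetical illustration in NumPy (with small dense layers standing in for ResNet's pair of 3x3 convolutions), not the actual ResNet code:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """A toy residual block: y = relu(F(x) + x).

    F(x) is two linear maps with a ReLU between them, standing in
    for ResNet's pair of 3x3 convolutions.
    """
    out = relu(x @ W1)   # first transform
    out = out @ W2       # second transform (no activation yet)
    # The skip connection adds the input back before the final ReLU,
    # so gradients can flow through the addition unchanged - this is
    # what lets networks hundreds of layers deep train at all.
    return relu(out + x)
```

Note that if the learned transform F(x) contributes nothing (say, all-zero weights), the block simply passes relu(x) through unchanged, which is why adding residual layers does not make a deep network harder to train than a shallower one.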

2017 saw the last competition in the series, marked by this Fei-Fei Li presentation:
  • L. Fei-Fei and J. Deng. ImageNet: Where have we been? Where are we going? CVPR Beyond ImageNet Large Scale Visual Recognition Challenge workshop, 2017 (pdf)
