3/18/24: Similarities between Electronic Computers and the Human Brain: thank you Jensen Huang for the best week of learning since John von Neumann shared his 1956 notes for The Computer and the Brain with The Economist.
HAPPY 2024: in this 74th year since The Economist started mediating futures of brainworking machines, clued by the three maths greats NET (Neumann, Einstein, Turing), people seem to be chatting about five wholly different sorts of AI.

1 BAD: The worst tech system designers don't deserve inclusion in human intel at all; as 2 reports from Hoover's Condoleezza Rice explain, their work is the result of 10 compound techs of which AI is but one. Those with the worst world system designs may use media to lie, to multiply hate or hack, to perpetuate tribal wars and to increase trade in arms. Sadly, bad versions of TV media began in the USA in the early 1960s, when it turned out that what had been the nation's first major export crop, tobacco, was a killer. Please note that for a long time farmers did not know tobacco was bad: western HIStory is full of ignorances which lawyer-dominated societies then cover up once inconvenient system truths are seen.

2: A second, e-commerce type of AI (now 25 years of exponential development strong) involves ever more powerful algorithms applied to a company's data platform that can be app'd to hollow out community, making relatively few people richer and richer, or the reverse. You can test a nation's use of this AI by seeing whether e-finance has invested in the poorest or historically most disconnected - see e.g. Bangladesh's bKash, one of the most widely used digital cash systems. Digital money is far cheaper to distribute, let alone to manually account for, so this kind of AI offers lots of lessons, but whether it is good or not depends in part on whether there are enough engineers in government and public service to see ahead of what needs regulating.

There are 2 very good AIs which have only scaled in recent years and certainly don't need regulating by non-engineers, and one curious AI which was presented to Congress in 2018 but which was left to multiply into at least 100 variants today: the so-called chats or LLMs. Let's look at the 2 very good AIs first because, frankly, if your community is concerned about any extinction risks, these AIs are the ones most likely to save you.

One I call science AI, and frankly in the west one team is so far ahead that we should count ourselves lucky that its originator, Hassabis, has mixed wealth and societal growth. His DeepMind merged with Google to make wealth but open-sourced the 200-million-protein databank, equivalent to a billion hours of doctorate time - so now is the time for biotech to save humanity, if it ever does.

Alongside this, the second very good AI gravitates around Fei-Fei Li, who developed the 20-million-image ImageNet database so that annual competitions could train computers to see 20,000 of the most everyday sights we humans view around the world, including things and life-forms such as nature's plants and animals. Today, students no longer need to go back to 0.1 programming to ask a computer about any of these objects; nor do robots or autonomous vehicles - see Fei-Fei Li's book The Worlds I See, published within Melinda Gates' Entrepreneurial Revolution of girl empowerment.
EW::ED, VN Hypothesis: in 21st-century brainworking worlds, how people's time and data are spent is foundational to a place's community health, energy and thus natural capacity to grow or destroy wealth - so our species will depend on whether 1000 mother-tongue language models mediate intelligence/maths so that all communities cooperatively celebrate lifetimes and diversity's deep data. Check out "Moore exponential patterns" at year 73 of celebrating the Game: Architect Intelligence (AI) - players welcome .. some jargon follows.

Tuesday, January 31, 2023

Stanford HAI issued its latest report on climate AI in January 2023: https://hai.stanford.edu/sites/default/files/2023-01/HAI_IndustryBrief6_v8.pdf

These are the definitions used in the climate report:

Intelligence might be defined as the ability to learn and perform a range of techniques to solve problems and achieve goals—techniques that are appropriate to the context in an uncertain, ever-varying world. A fully pre-programmed factory robot is flexible, accurate, and consistent, but not intelligent. 

Artificial Intelligence (AI) (see language already popularised by Von Neumann and peers - e.g. The Computer and the Brain) is a term coined in 1955 by John McCarthy, Stanford's first faculty member in AI, who defined it as "the science and engineering of making intelligent machines." Much research has had humans program software agents with the knowledge to behave in a particular way, like playing chess, but today, we emphasize agents that can learn, just as human beings do when navigating our changing world.

Autonomous systems can independently plan and decide sequences of steps to achieve a specified goal without being micromanaged. A hospital delivery robot must autonomously navigate busy corridors to succeed in its task. In AI, autonomy doesn't have the sense of being self-governing common in politics or biology.

Machine Learning (ML) is the part of AI that studies how computer systems can improve their perception, knowledge, decisions, or actions based on experience or data. For this, ML draws from computer science, statistics, psychology, neuroscience, economics, and control theory. In supervised learning, a computer learns to predict human-given labels, such as particular dog breeds based on labeled dog pictures. Unsupervised learning does not require labels, but sometimes adopts self-supervised learning, constructing its own prediction tasks such as trying to predict each successive word in a sentence. Reinforcement learning enables autonomy by allowing an agent to learn action sequences that optimize its total rewards, such as winning games, without explicit examples of good techniques.

Deep Learning is the use of large multi-layer (artificial) neural networks that compute with continuous (real number) representations, similar to the hierarchically organized neurons in human brains. It is successfully employed for all types of ML, with better generalization from small data and better scaling to big data and compute budgets. A recent breakthrough is the transformer, a neural net architecture which flexibly incorporates context via an attention mechanism, allowing powerful and computationally efficient analysis and generation of sequences, such as words in a paragraph.

Foundation models are an emerging class of models, often transformers trained by self-supervision on large-scale broad data, that can be easily adapted to perform a wide range of downstream tasks. The best-known examples are large pretrained language models like GPT-3, but the term extends to models for all modalities of data and knowledge.

An algorithm is a precise list of steps to take, such as a computer program. AI systems contain algorithms, but typically just for a few parts like a learning or reward calculation method. Much of their behavior emerges via learning from data or experience, a fundamental shift in system design that Stanford alumnus Andrej Karpathy dubbed Software 2.0.

Narrow AI is intelligent systems for particular tasks, e.g., speech or facial recognition. Human-level AI, or artificial general intelligence (AGI), seeks broadly intelligent, context-aware machines. It is needed for effective, adaptable social chatbots or human-robot interaction.

Human-Centered Artificial Intelligence is AI that seeks to augment the abilities of, address the societal needs of, and draw inspiration from human beings. It researches and builds effective partners and tools for people, such as a robot helper and companion for the elderly.

Text by Professor Christopher Manning, v 1.2, April 2022
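To make two of the glossary's ideas concrete - self-supervised "predict the next word" training and the transformer's attention mechanism - here is a minimal Python sketch using only NumPy. The five-word sentence, the random 8-dimensional word vectors and the attention function are illustrative assumptions for this post, not anything taken from the Stanford report.

import numpy as np

# Self-supervised framing: the training labels come from the text itself.
sentence = ["the", "dog", "chased", "the", "cat"]
next_word_pairs = [(sentence[:i], sentence[i]) for i in range(1, len(sentence))]
# e.g. (["the"], "dog"), (["the", "dog"], "chased"), ... - no human labelling needed.

# Scaled dot-product attention, the mechanism at the heart of the transformer.
def attention(Q, K, V):
    # Each query mixes the value vectors, weighted by query-key similarity.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(len(sentence), 8))   # one made-up 8-dim vector per word
contextual = attention(X, X, X)           # self-attention: every word attends to every other
print(contextual.shape)                   # (5, 8): same sequence length, now context-mixed

In a real transformer, Q, K and V are learned linear projections of the word vectors and many such layers are stacked; the sketch only shows why an attention mechanism lets each word's representation flexibly incorporate context from the rest of the sequence, as the glossary describes.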


================= Bard on The Computer and the Brain

What can you tell me about The Computer and the Brain - lectures given at Yale soon after the death of John von Neumann?

Sure. The Computer and the Brain is based on a series of lectures that John von Neumann, a Hungarian-American mathematician and polymath, prepared for Yale University. Von Neumann died in 1957 before he could deliver them, and the manuscript was published posthumously in 1958.

In the lectures, von Neumann discusses the similarities and differences between computers and brains. He argues that computers and brains are both information-processing systems, but that they use different methods to process information. Computers use digital logic, while brains use analog logic.

Von Neumann also discusses the potential for computers to surpass human intelligence. He argues that computers have the potential to be more intelligent than humans because they can be programmed to learn and adapt.

The Computer and the Brain is a seminal work in the field of artificial intelligence. It has been influential in the development of computer science and neuroscience. The lectures are still relevant today, and they continue to be studied by researchers and scholars.

Here are some of the key points that von Neumann made in the lectures:

  • Computers and brains are both information-processing systems.
  • Computers use digital logic, while brains use analog logic.
  • Computers have the potential to surpass human intelligence.
  • Computers can be programmed to learn and adapt.

The Computer and the Brain is a valuable resource for anyone interested in artificial intelligence, computer science, or neuroscience. It is a thought-provoking and insightful work that continues to be relevant today.
