Downloads
lifebeyondzetta.docx
>3/18 24: Similarities between Electronic Computers and the Human Brain: Thank you Jensen Huang for best week of Learning since John Von Neumann shared with The Economist 1956 notes Computer & The Brain
HAPPY 2024: in this 74th year since The Economist started mediating futures of brainworking machines clued by the 3 maths greats NET (Neumann, Einstein, Turing) people seem to be chatting about 5 wholly different sorts of AI. 1BAD: The worst tech system designers don't deserve inclusion in human intel at all, and as Hoover's Condoleezza Rice . 2 reports their work is result of 10 compound techs of which Ai is but one. Those worst for world system designs may use media to lie or multiply hate or hack, and to perpetuate tribal wars and increase trade in arms. Sadly bad versions of tv media began in USA early 1960s when it turned out what had been the nation's first major export crop, tobacco, was a killer. Please note for a long time farmers did not know bac was bad: western HIStory is full of ignorances which lawyer-dominated societies then cover up once inconvenient system truths are seen. A second AI ecommerce type (now 25 years exponential development strong) ; this involves ever more powerful algorithms applied to a company's data platform that can be app'd to hollow out community making relatively few people richer and richer, or the reverse. You can test a nation's use of this ai by seeing if efinance has invested in the poorest or historically most disconnected - see eg bangladesh's bklash, one of the most populous digital cash systems . Digital money is far cheaper to distribute let alone to manually account for so power AI offers lots of lessons but whether its good or not depends in part on whether there are enough engineers in gov & public service to see ahead of what needs regulating. There are 2 very good ai's which have only scaled in recent years that certainly dont need regulating by non engineers and one curious ai which was presented to congress in 2018 but which was left to multiply at least 100 variants today the so-called chats or LLMs. Lets look at the 2 very good ai's first because frankly if your community is concerned about any extinction risks these AI may most likely save you, One I call science AI and frankly in the west one team is so far ahead that we should count ourselves lucky that its originator Hassabis has mixed wealth and societal growth. His deep mind merged with google to make wealth but open sourced the 200 million protein databank equivalent to a billion hours of doctorate time- so now's the time for biotech to save humanity if it ever does. Alongside this the second very good AI graviates around Fei-Fei Li) in developing 20 million imagenet database so that annual competitions training computers to see 20000 of the most everyday sights we humans view around the world including things and life-forms such as nature's plants and animals. Today, students no longer need to go back to 0.1 programming to ask computer about any of these objects; nor do robots or and autonomous vehicles - see fei-fei li's book worlds i see which is published in melinda gates Entrepreneurial Revolution of girl empowerment
EW::ED , VN Hypothesis: in 21st C brainworking worlds how people's times & data are spent is foundational to place's community health, energy and so natural capacity to grow/destroy wealth -thus species will depend on whether 1000 mother tongue language model mediates intelligence/maths so all communities cooperatively celebrate lifetimes and diversity's deep data ) . Check out "Moore exponential patterns" at year 73 of celebrating Game : Architect Intelligence (Ai) - players welcome .. some jargon

Wednesday, December 31, 1980

1980 uses mainly to reprise with chatgpt intel and digital dev before entrepreneurial revolution of fei-fei li et al from 2006

 chatdate 5/10/23

User
Before von neumann died in 1957 can you j=help me understand who he shared 3 sorts of digital resources with 1 hardware of computing 2 software/coding 3 likely conequences for human brain and ai connectivity? \
ChatGPT

Certainly, I can help you understand what John von Neumann shared before he passed away in 1957.

  1. Hardware of computing: Von Neumann was instrumental in the development of the first electronic computer, the Electronic Numerical Integrator and Computer (ENIAC). He later designed the architecture for what is now known as the von Neumann architecture, which is the basis for most modern computer systems. The von Neumann architecture is a stored-program computer architecture that uses a central processing unit (CPU), memory, and input/output devices to perform computations.

  2. Software/coding: Von Neumann also made significant contributions to the development of computer programming languages. He wrote the first book on game theory, which laid the foundation for the development of modern computer algorithms. He also helped develop the first programming language, called Short Code, which was used on the early computers.

  3. Likely consequences for human brain and AI connectivity: Von Neumann was interested in the relationship between computers and the human brain. He believed that computers could be used to model and simulate the workings of the brain, and that this could lead to a better understanding of how the brain works. He also speculated that computers and the human brain could one day be connected, allowing humans to enhance their cognitive abilities and communicate with each other in new ways.

Overall, John von Neumann was a pioneering figure in the development of digital resources, from the hardware of computing to the software/coding used to program it. He also had a visionary outlook on the potential consequences of this technology, particularly in the relationship between computers and the human brain.

User
My understanding is that much of von neumann life up to 1945 involved secret research because of the war but unlike nuclear research from 1945 he wanted to openly share as much of his tech intel on hradware sofware and brain science - am i correct that he gave all of his hardware resarch to IBM - anyone else too?
ChatGPT

It is true that much of John von Neumann's work during World War II involved classified research related to the development of nuclear weapons. However, after the war, von Neumann was one of the leading advocates for open sharing of scientific and technological knowledge.

Regarding his work on hardware and software, von Neumann did not specifically give all of his hardware research to IBM, but he did work closely with the company on several projects. For example, he consulted with IBM on the development of the Selective Sequence Electronic Calculator (SSEC), one of the first large-scale electronic computers.

Von Neumann's contributions to computer architecture and programming were also widely shared and adopted in the broader computing community. His work on the von Neumann architecture, for example, was influential in the development of many early computers, including the EDVAC, BINAC, and UNIVAC machines.

As for his ideas on the relationship between computers and the human brain, von Neumann did not give specific research to any particular company or institution. However, his ideas were widely discussed and debated in academic and scientific circles, and many researchers have built on his work in the decades since his death.

User
Von Neumann spent much of his last 2 years 1955-7 preparing a curriculum for yale called computer and the brain- in the event his wife presented this soon after his death but the whole is published. Who would have been the yale researchers of the brain in 1957 ; who might have attended von neumann wife's lectures from other east coast institutes interested in neuroscience or coding. How did von neumnn inspired work first get to MIT and did this continue as a collaboration up the east coast princeton yale mit dartmouth?
ChatGPT

John von Neumann's curriculum for a course titled "The Computer and the Brain" was published posthumously in 1958. The course was originally intended to be taught at Yale University, and it is likely that the students would have been a mix of computer science and neuroscience students. It is unclear who specifically would have been the Yale researchers of the brain in 1957, but some notable neuroscientists who were active in the field at the time include Wilder Penfield, Karl Lashley, and Donald Hebb.

It is possible that von Neumann's wife gave lectures on the material from the course after his death, but there is no record of these lectures. It is also unclear who might have attended these lectures from other East Coast institutions interested in neuroscience or coding.

As for von Neumann's work inspiring research at MIT and other East Coast institutions, von Neumann had a close relationship with several researchers at MIT, including Norbert Wiener, who was a pioneer in the field of cybernetics. Von Neumann and Wiener collaborated on several projects, and their work was influential in the development of early computers and cybernetic systems.

The Dartmouth Conference, which was held in 1956, is considered a significant event in the history of artificial intelligence (AI) research. Von Neumann was not directly involved in the conference, but his ideas about the relationship between computers and the brain were influential in the development of early AI research. Many of the attendees at the conference were from East Coast institutions, including MIT and Dartmouth College.

Overall, von Neumann's work on the computer and the brain was influential in the development of both computer science and neuroscience, and his ideas were discussed and debated among researchers at institutions up and down the East Coast, including Princeton, Yale, MIT, and Dartmouth.

User
When Macarthy first came to MIT was he working with Wiener or somebody else?
ChatGPT

John McCarthy joined the faculty at MIT in 1956, the same year that Norbert Wiener retired from the university. Therefore, McCarthy did not work directly with Wiener at MIT. However, Wiener was a significant influence on McCarthy's work, particularly in the area of cybernetics.

McCarthy's research at MIT focused on artificial intelligence (AI), and he is considered one of the pioneers of the field. He was part of a group of researchers at MIT who were exploring ways to use computers to simulate human intelligence. This group included Marvin Minsky, who later co-founded the Artificial Intelligence Laboratory at MIT, and Claude Shannon, who was a pioneer in the field of information theory.

While McCarthy did not work directly with Wiener at MIT, he was influenced by Wiener's work on cybernetics and feedback systems. Wiener's ideas about the relationship between machines and humans informed McCarthy's ideas about how computers could be programmed to learn and adapt, and his work on AI was influenced by the broader cybernetics movement that Wiener helped to establish.

User
Is there any correspondence or shared work between Minsky and the Princeton TEN namely Turing Einstein Neumann
ChatGPT

Marvin Minsky did correspond with some members of the Princeton TEN, but his work was more closely associated with the MIT AI community.

Minsky was a student of John McCarthy at Dartmouth College, where the Dartmouth Conference on Artificial Intelligence was held in 1956. This conference is considered a milestone in the development of AI research, and it brought together many of the leading researchers in the field at the time. While none of the Princeton TEN were directly involved in the Dartmouth Conference, their work on computing and mathematics helped to lay the groundwork for the development of AI.

Minsky's research at MIT was focused on developing intelligent machines that could perform tasks that were traditionally associated with human intelligence, such as reasoning, problem-solving, and learning. He co-founded the MIT Artificial Intelligence Laboratory with John McCarthy in the 1960s, and this lab became a leading center for AI research.

While Minsky did correspond with some members of the Princeton TEN, such as John von Neumann, his work was more closely associated with the AI research community at MIT. However, the work of the Princeton TEN and other pioneers of computing and mathematics helped to create the intellectual climate that made the development of AI possible.

User
Did Macarthy work at Dartmouth before MIT? Who would he have been his first coordnatr after he left princetion with his 1951 ohd?
ChatGPT

John McCarthy did not work at Dartmouth before MIT. He received his PhD in mathematics from Princeton University in 1951, and he spent several years teaching at a number of different institutions before joining the faculty at MIT in 1956.

After completing his PhD at Princeton, McCarthy held positions at Stanford University, Dartmouth College, and Princeton University before joining the faculty at MIT. At Dartmouth, McCarthy worked as a research fellow in the Mathematics Department from 1951 to 1953. He was not a faculty member at Dartmouth, but he did work on a project that would eventually lead to the Dartmouth Conference on Artificial Intelligence in 1956.

It is unclear who McCarthy's first coordinator was after he left Princeton, as he held several different positions at different institutions before joining the faculty at MIT. However, some of his early mentors and collaborators included mathematicians such as Alonzo Church, Stephen Kleene, and Emil Post. These mathematicians were influential in the development of the field of mathematical logic, which provided the theoretical foundation for the development of computer science and artificial intelligence.

User
Who was the Dartmouth convenor of the 1956 Dartmouth ai conference in 1956
ChatGPT

The 1956 Dartmouth Conference on Artificial Intelligence was organized by a group of researchers that included John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. While there was no single "convenor" of the conference, McCarthy is often credited with playing a central role in organizing and planning the event.

The Dartmouth Conference brought together researchers from a variety of disciplines, including mathematics, psychology, and engineering, to explore the possibility of creating machines that could exhibit intelligent behavior. The conference is considered a landmark event in the history of artificial intelligence, as it helped to establish the field as a distinct area of research and set the agenda for AI research in the years to come.

User
Would industry eg IBM hasve participated at 1956 dartmouth conference?
ChatGPT

Yes, industry representatives did participate in the 1956 Dartmouth Conference on Artificial Intelligence. IBM, which was one of the leading computer companies of the time, sent a number of representatives to the conference, including Nathaniel Rochester, who was an IBM researcher and one of the organizers of the conference.

In addition to IBM, representatives from other companies, such as General Electric and Bell Labs, also attended the conference. These companies were interested in the potential applications of artificial intelligence research to their business operations, and they provided financial support for some of the early AI research projects.

The presence of industry representatives at the Dartmouth Conference helped to signal the growing interest in AI research among businesses and government agencies, and it helped to establish AI as a field of research that had both theoretical and practical applications.

User
when was xerox parc established in silicon valley?
ChatGPT

Xerox PARC (Palo Alto Research Center) was established in Silicon Valley in 1970. Xerox PARC was a research and development center created by Xerox Corporation to explore new technologies and develop products that could help the company maintain its position as a leader in the copier industry.

At PARC, researchers and engineers worked on a wide range of projects, including computer hardware, software, and networking technologies. Some of the most notable achievements of Xerox PARC include the development of the first personal computer, the Alto, as well as the invention of Ethernet networking and the graphical user interface (GUI).

The work done at Xerox PARC had a significant impact on the development of the computer industry and the internet, and many of the technologies developed at PARC were later commercialized by other companies.

User
where was us research on satellites mainly done during the first 10 years of satellite development
ChatGPT

During the first 10 years of satellite development, much of the research on satellites in the United States was conducted by the military and various government agencies. The main organization responsible for satellite research during this time was the United States Air Force, which established the Air Force Cambridge Research Center in Massachusetts in 1950. The center was tasked with developing new technologies and conducting research related to rockets, missiles, and satellites.

In addition to the Air Force, the National Advisory Committee for Aeronautics (NACA) played a significant role in satellite research during this period. NACA, which was later replaced by the National Aeronautics and Space Administration (NASA), was a government agency established in 1915 to conduct research related to aeronautics and aviation. In the 1950s and early 1960s, NACA conducted extensive research on rocketry and satellite technology, and many of its researchers played key roles in the early development of the US space program.

Other government agencies, such as the Department of Defense, the Central Intelligence Agency (CIA), and the National Security Agency (NSA), were also involved in satellite research during this time, primarily for military and intelligence purposes. Overall, much of the early research on satellites in the United States was driven by national security concerns and the desire to maintain a technological edge in the Cold War.

User
Regarding the telecommunications and bradcasting compoents of satellites in the west presumably that was scaled by industry not just by nasa and gov research. Where would that v=have happened in usa and was the Swiss or Austrian UN offices involved eg ITU
ChatGPT

Yes, the telecommunications and broadcasting components of satellites were scaled by industry in addition to government and NASA research. In the United States, the development of commercial satellite communications was led by private companies, particularly in the telecommunications industry. Companies like AT&T, RCA, and Hughes Aircraft played a significant role in the early development of satellite communication technology.

The International Telecommunication Union (ITU), which is a specialized agency of the United Nations, was also involved in the development of satellite telecommunications. The ITU is responsible for coordinating the use of the radio frequency spectrum, which is used for both terrestrial and satellite-based communications. The ITU played a key role in developing the international regulatory framework for satellite communications, including the allocation of radio frequencies for satellite use.

The ITU's role in satellite communications continues to this day. The organization is responsible for regulating the use of the radio frequency spectrum and ensuring that satellite communications are used in a manner that is consistent with international agreements and regulations. The ITU also provides a forum for international cooperation on satellite communications issues and works to promote the development and deployment of new satellite technologies.

User
Where were th hq of AT&T, RCA Hughes at origib of satellite telecoms
ChatGPT

AT&T, RCA, and Hughes Aircraft, three companies that played a significant role in the early development of satellite communication technology in the United States, were headquartered in different locations.

AT&T, also known as the American Telephone and Telegraph Company, was headquartered in New York City during the early development of satellite telecommunications. AT&T was a major player in the telecommunications industry at the time, and its research labs in New Jersey played a key role in the early development of satellite communications technology.

RCA, which was a leading electronics company at the time, was headquartered in New York City as well. RCA played a key role in the early development of satellite technology and was responsible for building the first communication satellite, the SCORE (Signal Communication by Orbiting Relay Equipment), which was launched in 1958.

Hughes Aircraft, which was founded by Howard Hughes, was headquartered in Los Angeles, California. The company played a significant role in the development of satellite communication technology and built several communication satellites in the 1960s and 1970s, including the Syncom series of satellites, which were used for early satellite television broadcasts.

User
I belive the first live satellite briadcast of world scale was the tokyo olympics- do we know which company's satellite telecoms was used in that?
ChatGPT

Yes, the first live satellite broadcast of a global event was the 1964 Summer Olympics held in Tokyo, Japan. The broadcast was made possible by the Syncom 3 satellite, which was built by Hughes Aircraft and launched by NASA.

The satellite was used to relay the Olympic Games' opening ceremony from Tokyo to the United States, where it was broadcast on television. The broadcast was a landmark event, as it was the first time that a live television program had been transmitted from one side of the globe to the other.

Hughes Aircraft's Syncom series of satellites played a key role in the early development of satellite communication technology and paved the way for the modern satellite communications industry.

User
Would Hughes have used any university labs as part of satellite telcoms dev? If so which?
ChatGPT

Hughes Aircraft did collaborate with universities on satellite communication technology development during the early years of the space race. One notable collaboration was with the Jet Propulsion Laboratory (JPL) at the California Institute of Technology (Caltech) in Pasadena, California.

In the late 1950s and early 1960s, Hughes Aircraft worked with JPL on the development of the Syncom series of communication satellites. JPL provided expertise in rocket and satellite design, while Hughes developed the communication technology for the satellites.

The Syncom series of satellites were built to provide continuous communication coverage between the United States and other parts of the world. The first Syncom satellite was launched in 1963, and the series included three satellites in total. These satellites were used for various communication purposes, including live television broadcasts of major events, such as the 1964 Olympics in Tokyo.

User
I believe the actual coding for moon landing was done at MIT. Who would have been MIT's lead coding team in mid 1960s?
ChatGPT

Yes, the software for the guidance and control of the Apollo spacecraft and Lunar Module during the moon landing was developed by the MIT Instrumentation Laboratory (now known as the Charles Stark Draper Laboratory). The lead of the MIT team responsible for this software was Margaret Hamilton.

Margaret Hamilton was the Director of the Software Engineering Division at the MIT Instrumentation Laboratory, and she led the team that developed the software for the Apollo Guidance Computer (AGC), which was the first computer to use integrated circuits. Hamilton's team was responsible for developing the software that controlled the spacecraft during critical phases of the mission, including the descent to the lunar surface.

Hamilton's team was also responsible for developing the software for the Lunar Module's rendezvous and docking with the Command Module in lunar orbit. This software played a critical role in the success of the Apollo 11 mission, which resulted in the first manned moon landing in 1969.

User
Did the software/code of hamilton's team emerge from other people's software or was it one off machine code for the miltary?
ChatGPT

The software developed by Margaret Hamilton's team at the MIT Instrumentation Laboratory was specifically designed for the Apollo Guidance Computer (AGC), which was a specialized computer system developed for the Apollo missions.

The AGC was designed to handle the unique requirements of the Apollo spacecraft, including the need to navigate and land on the Moon. It was a very advanced system for its time, using integrated circuits and core memory to provide high-speed, reliable performance in the harsh conditions of space.

Hamilton's team had to develop the software for the AGC from scratch, as there were no existing software systems that could be used for this purpose. The team had to work closely with the engineers who designed the hardware of the AGC to ensure that the software was optimized for the computer's capabilities.

The software was written in machine code, which is a low-level programming language that is specific to a particular computer system. This allowed the software to take full advantage of the capabilities of the AGC hardware, but it also required a great deal of expertise and attention to detail to ensure that the code was correct and efficient.

In summary, the software developed by Margaret Hamilton's team was custom-designed for the Apollo Guidance Computer, and it was written in machine code from scratch, rather than being based on existing software systems.

User
Where had hamilto graduated from and been supervised by before elading her own team at MIT?
ChatGPT

Margaret Hamilton earned her Bachelor's degree in mathematics from Earlham College in 1958. She then went on to pursue a graduate degree in mathematics at the University of Michigan, where she was supervised by the prominent computer scientist and mathematician, Ralph Gomory.

After completing her graduate studies, Hamilton took a job at MIT's Charles Stark Draper Laboratory, where she worked on developing software for various guidance and navigation systems for the US Navy. It was through her work at Draper Lab that Hamilton became involved in the Apollo program and was eventually tasked with leading the software team at the MIT Instrumentation Laboratory.

User
What was the main operating system or cpde of dec mini computers? Was it designed in house by DEc or with an academic lab?
ChatGPT

DEC (Digital Equipment Corporation) mini computers used a variety of operating systems and software, depending on the specific model and time period. Some of the notable operating systems used by DEC included:

  • OS/8: A small, simple operating system developed by DEC for its PDP-8 series of minicomputers in the mid-1960s. OS/8 was written in assembly language and designed to be very efficient with limited hardware resources.

  • TOPS-10: A multi-user, multi-tasking operating system developed by DEC for its PDP-10 series of mainframe-class minicomputers in the late 1960s and early 1970s. TOPS-10 was written in assembly language and supported a wide range of programming languages and software tools.

  • VAX/VMS: A powerful and popular operating system developed by DEC for its VAX series of minicomputers in the late 1970s and 1980s. VMS (Virtual Memory System) was written in a mix of assembly language and high-level languages like Pascal and Bliss, and was known for its robustness, security features, and support for advanced networking and distributed computing.

DEC developed many of its operating systems in-house, often in collaboration with academic and research labs. For example, the early versions of TOPS-10 were developed in close partnership with MIT's Project MAC (now the MIT Computer Science and Artificial Intelligence Laboratory), which was one of the leading computer science research labs of the time. Similarly, VMS was developed in-house by DEC's engineering teams, but drew on ideas and techniques from a variety of academic and industry sources.

User
So bradly speaking I am correct in thinking that most general purpose software up to 1980 was decveloped by east coast comapnies or university labs??
ChatGPT

It is true that many significant developments in computing during the 1950s, 60s, and 70s were made by companies and university labs on the East Coast of the United States, such as IBM, Bell Labs, MIT, and Harvard. However, there were also important contributions from other regions and companies during this time. For example, the operating system UNIX was developed at Bell Labs on the East Coast, but it became popular on West Coast computers made by companies like Sun Microsystems and Silicon Graphics. Additionally, companies like Xerox PARC on the West Coast made significant contributions to the development of personal computing and graphical user interfaces during the 1970s.

User
yes why did xerox locate parc over on weast coast- how did programmable silicon chips change the variety of work to0 be done as coding?
ChatGPT

Xerox located its Palo Alto Research Center (PARC) on the West Coast for a few reasons. First, the company wanted to tap into the innovation and creativity that was already happening in the area, particularly around computing and electronics. Second, Xerox was looking to expand its business beyond traditional paper-based products, and believed that the West Coast's focus on technology and innovation would be a good fit for this goal. Finally, Xerox hoped that by creating a separate research center on the West Coast, it could encourage more experimentation and risk-taking than might be possible within the company's traditional East Coast operations.

The development of programmable silicon chips, such as microprocessors, had a significant impact on the variety of work that could be done in computing. Prior to the invention of microprocessors, computers were typically built using discrete components that were hard-wired together to perform specific functions. This made them expensive, inflexible, and difficult to modify or upgrade. With the advent of microprocessors, however, computers could be built using a single chip that contained all of the necessary components for processing, memory, and input/output operations. This made computers much cheaper, more versatile, and easier to upgrade and modify. It also allowed for the development of new types of software and applications that were not feasible with earlier computer architectures.

User
Did dec minicomputer use silicon chips?
ChatGPT

Yes, Digital Equipment Corporation (DEC) minicomputers used silicon chips. DEC was one of the pioneers in the use of integrated circuits (ICs) in computer systems, and they used silicon-based ICs in their minicomputers starting in the 1960s. DEC developed their own IC manufacturing capabilities in-house and produced a wide range of custom ICs specifically designed for their systems. The use of silicon chips allowed DEC and other computer manufacturers to increase the performance and reliability of their systems while reducing their size and cost.

User
What was the bigegst breakthrough of AI up to early 1980s - I mean befire arrival of personal computer networks?
ChatGPT

The biggest breakthrough in AI up to the early 1980s was the development of expert systems. Expert systems are computer programs that emulate the decision-making ability of a human expert in a particular domain by using a set of rules and a knowledge base to make decisions or solve problems. They were seen as a major breakthrough in AI because they demonstrated that it was possible to encode human expertise in a machine-readable format and use it to solve complex problems.

The development of expert systems led to the creation of successful commercial applications in areas such as medical diagnosis, financial planning, and industrial process control. They were also used in government and military applications, such as tactical decision aids for military operations. The success of expert systems in the 1980s led to a surge of interest and investment in AI research and development, and paved the way for further advances in AI in the decades that followed.

User
What 3 types of expert system scaled most up to early 1980s?
ChatGPT

The three types of expert systems that scaled the most up to the early 1980s were:

  1. Medical expert systems: These systems were used to help diagnose and treat diseases. They worked by analyzing patient data and symptoms, and then using a knowledge base of medical information to suggest possible diagnoses and treatments.

  2. Financial expert systems: These systems were used to help make financial decisions, such as investment recommendations. They worked by analyzing market data and trends, and then using a set of rules and algorithms to suggest investment strategies.

  3. Industrial process control expert systems: These systems were used to optimize industrial processes, such as chemical manufacturing. They worked by analyzing data from sensors and other sources, and then using a set of rules and algorithms to control the various parameters of the process for optimal performance.

User
With the emergence since 2006 of deep dtaat ai , are there hybrid ai systems with both deep data expert sustems or kave most expert system approaches become outdated?
ChatGPT

With the emergence of deep learning and other advanced machine learning techniques since 2006, there has been a shift away from rule-based expert systems towards data-driven approaches. However, there are still applications where expert systems can be useful, particularly in areas where there is a well-defined set of rules or where interpretability is important.

In some cases, hybrid systems that combine deep learning and expert systems have been developed. For example, an expert system may be used to interpret the output of a deep learning model, or a deep learning model may be used to generate training data for an expert system. Overall, the approach taken depends on the specific application and the available data and expertise.

User
Is there a best known exsmppe of a hybrid deep learning and expert system. Is that what ibm watson is?
ChatGPT

Yes, IBM Watson is an example of a hybrid system that combines deep learning with expert system approaches. Watson uses deep learning algorithms for natural language processing and machine learning to improve its performance over time. However, Watson also employs rule-based systems and knowledge representation techniques to allow it to reason over the results it obtains from its deep learning algorithms. This hybrid approach allows Watson to provide more accurate and contextually relevant responses to complex queries. Other examples of hybrid systems include Google DeepMind's AlphaGo and Microsoft's Cortana