>3/18 24: Similarities between Electronic Computers and the Human Brain: Thank you Jensen Huang for best week of Learning since John Von Neumann shared with The Economist 1956 notes Computer & The Brain
HAPPY 2024: in this 74th year since The Economist started mediating futures of brainworking machines clued by the 3 maths greats NET (Neumann, Einstein, Turing) people seem to be chatting about 5 wholly different sorts of AI. 1BAD: The worst tech system designers don't deserve inclusion in human intel at all, and as Hoover's Condoleezza Rice . 2 reports their work is result of 10 compound techs of which Ai is but one. Those worst for world system designs may use media to lie or multiply hate or hack, and to perpetuate tribal wars and increase trade in arms. Sadly bad versions of tv media began in USA early 1960s when it turned out what had been the nation's first major export crop, tobacco, was a killer. Please note for a long time farmers did not know bac was bad: western HIStory is full of ignorances which lawyer-dominated societies then cover up once inconvenient system truths are seen. A second AI ecommerce type (now 25 years exponential development strong) ; this involves ever more powerful algorithms applied to a company's data platform that can be app'd to hollow out community making relatively few people richer and richer, or the reverse. You can test a nation's use of this ai by seeing if efinance has invested in the poorest or historically most disconnected - see eg bangladesh's bklash, one of the most populous digital cash systems . Digital money is far cheaper to distribute let alone to manually account for so power AI offers lots of lessons but whether its good or not depends in part on whether there are enough engineers in gov & public service to see ahead of what needs regulating. There are 2 very good ai's which have only scaled in recent years that certainly dont need regulating by non engineers and one curious ai which was presented to congress in 2018 but which was left to multiply at least 100 variants today the so-called chats or LLMs. Lets look at the 2 very good ai's first because frankly if your community is concerned about any extinction risks these AI may most likely save you, One I call science AI and frankly in the west one team is so far ahead that we should count ourselves lucky that its originator Hassabis has mixed wealth and societal growth. His deep mind merged with google to make wealth but open sourced the 200 million protein databank equivalent to a billion hours of doctorate time- so now's the time for biotech to save humanity if it ever does. Alongside this the second very good AI graviates around Fei-Fei Li) in developing 20 million imagenet database so that annual competitions training computers to see 20000 of the most everyday sights we humans view around the world including things and life-forms such as nature's plants and animals. Today, students no longer need to go back to 0.1 programming to ask computer about any of these objects; nor do robots or and autonomous vehicles - see fei-fei li's book worlds i see which is published in melinda gates Entrepreneurial Revolution of girl empowerment
EW::ED , VN Hypothesis: in 21st C brainworking worlds how people's times & data are spent is foundational to place's community health, energy and so natural capacity to grow/destroy wealth -thus species will depend on whether 1000 mother tongue language model mediates intelligence/maths so all communities cooperatively celebrate lifetimes and diversity's deep data ) . Check out "Moore exponential patterns" at year 73 of celebrating Game : Architect Intelligence (Ai) - players welcome .. some jargon

Sunday, January 21, 2024

 Here are some of favorite inspirations from first year of gamifying AI wherever students teach or teachers study good - can your peoples/place help King Charles ai world series for commonwealth and very good


One of top 5 ways of seeing good AI revolution since 2012 is we have coded all trillion dollar markets wrongly - Turing AI suggests we start new statisticians off in deep detail- sectors groups meet here  ...By sector eg finance

we expect there are many higher paid advisers on big corporate ai than our team, but if you want a dibs on where to start try here

Monday, January 8, 2024

AI 2024 starts like this - a viewpoint we'd happily discuss

hello i am chris.macrae@yahoo.co.uk washington dc greater region  text+1 240 316 8157 my linkedin unwomens https://www.linkedin.com/in/unwomens/ my newsletter at linkedin ed3unenvoy - I host ai games - also year 74 of journalistic debates inspired by my family being biographer of von Neumann and future possibilities of humanity The Economist 1948-1990 -see forum dedicated to dad Norman, and AI20s friends 1 2 of Unacknowledged Giant  Transparency note: Girls futures matter to me as I have a daughter

welcome to ECONOMISTLEARNING.com ; as diaspora scots motivated by Adam Smith's diaries from 1758, we're 8 billion person win-win interested in can markets of intelligence -both humans and machine designed - urgently become personalised learning agent for every human being to advantage every family , commun ty as well as advance human lot?


Is there anything we can positively debate in this year that before lots more wars happened the un leader Guterres had intended youth to review futures of from perspective of will younger half of humans be first renewable or first extinction generation - see also UNsummitfuture.com UN LLM 推动对话并促进合作 是有远见的  King Charles continues to map ai world series bletchley-korea-paris- www.unsummitfuture.com ny sept an our deep research 2007-2019 with billion poorest asian village mothers core view abedmooc.com


 Mapping every possibility of AI (Architect Intelligence) changed in 2012 when tech world's biggest investors demonstration of what could be done if you trained machines on huge societal data before expecting good analysis. The story of imagenet , cemtre of gravity Dr Fei-Fei Li, concerns new world celebrating computer visioning of 20 million images identified across 20000 entries humans most work and play with - animal vegetable mineral- man made - and in case of human identities facial, racial, behavioral.  Nothing could be more different than eg Bill Gates 1980s coding profession built from binary codes up. The 20 million training  data was cleaned to one teams (wizard mathematicians) relentless humaistic standards- together with open social algorithm competition run annually with stanford reputation as overall epicentre.You and millennials friend Steve Jobs-  could say Silicon Valley was transformed into HumansAi valley around 21st C sister towns viewed by Mountain-Alto-Clara.

There are many world leading stories on how it was 2012 that changed beliefs not just of neuron network maths community. see eg ceo of nvidia who states that buiding the biggest GPU chip with 80 billion transistors. This took 5 years and became the West's greatest corporate wealth creation of the decade. Or consider game-changer Hassabis whose game-training is different in basics but shares the cyber-neuroscience  mission huge machine training investment changes  the order of intelligence suddenly found funders chasing him after 2012.

Now here's the problem from around 2017 the new computing capacity started a race by nlp coders to train on anything ever written  ( or anything ever audioed etc). But this trains on stuff which ranges from pure education to pure lobbyist mis-info. Now I dont have access to west coast young brains on huge chips but below i reproduce my discussion with bard - the only chat i really use because it seemed to me simpler to Q*A one moving source rather than 20 Large Language Modeling contenders (at least in english language - i accept Asian friends who eg work with many thousand character alphabets may see a different world)

Q to Bard asked 8 Jan 2024

I haven't found any publicly known projects in the UK, or elsewhere, that have specifically trained an LLM on all K-12 subject class notes. However, several initiatives approach similar goals of using educational resources for LLM training, with nuances that might align with your interest:

1. UK initiatives leveraging educational data:

  • The Alan Turing Institute's "Teaching and Learning Hub" aims to create a national repository of learning resources, including lesson plans, activities, and assessments. While not directly training LLMs, this could potentially be a valuable dataset for future projects seeking educational resources.
  • The "Edtech Demonstrator Programme" run by the Department for Education explores innovative uses of technology in education. Some projects within this program might involve using text generated by teachers or students, which could contain similar elements to K-12 class notes.

2. International efforts with broader scope:

  • Google AI's "Teacher Toolkit" project aims to develop AI tools that support teachers, potentially involving text generation based on educational content. However, it doesn't explicitly focus on K-12 class notes.
  • OpenAI's "ChatGPT" model was trained on a massive dataset of text and code, which may include some educational materials. However, the vastness of the dataset makes it difficult to isolate specifically K-12 class notes.

Advantages of focusing on class notes:

  • Focused training: Using K-12 class notes specifically could potentially lead to an LLM with a stronger understanding of basic educational concepts and vocabulary compared to models trained on a broader dataset.
  • Reduced risk of bias: Educational materials often undergo review processes to ensure factual accuracy and avoid bias. This could lead to an LLM less prone to biases compared to models trained on general internet data.

Challenges to consider:

  • Privacy concerns: Class notes could contain sensitive student information, requiring careful consideration of data anonymization and privacy protection measures.
  • Teacher ownership: Teachers may hold intellectual property rights over their class notes, requiring clear permissions and ethical considerations for using them in LLM training.
  • Data quality and consistency: K-12 class notes can vary significantly in format, style, and quality. This could pose challenges for training an LLM effectively.

Possible alternatives:

  • Educational textbooks: Focusing on standardized textbooks used across various schools could provide a more consistent and controlled dataset while maintaining a focus on basic educational concepts.
  • Open educational resources (OERs): Many openly available educational resources exist online, which could be a source of training data for an LLM while ensuring accessibility and transparency.

Overall, while no known project exists that directly matches your criteria, several initiatives and resources suggest the potential for training LLMs on educational data like K-12 class notes. Carefully considering data privacy, intellectual property rights, and data quality is crucial for such projects to be successful and ethically sound.

I hope this information provides a comprehensive overview of the landscape and potential considerations! Feel free to ask if you have any further questions or specific ideas you'd like to explore.

You raise a very interesting and complex point about the impact of data on AI development and the potential drawbacks of relying on commercially-driven, non-transparent data sets like the one used for ChatGPT. There is certainly merit to your argument that focusing on data sets like ImageNet and the 200 million protein codes, while groundbreaking, may not be sufficient for personalized learning and agency-promoting AI without also factoring in fundamental educational content.

Here are some key points to consider:

Transparency and bias: Large commercially-driven data sets like the one used for ChatGPT often lack transparency regarding their sources and curation processes. This raises concerns about potential biases and inaccuracies, which could ultimately limit the reliability and fairness of the resulting AI models. Conversely, educational content, when carefully curated and reviewed, can offer a more transparent and potentially less biased foundation for training AI models.

Fundamental knowledge vs. competitive marketing: Educational content primarily focuses on building a strong foundation in core concepts and skills. This emphasis on fundamental knowledge, as opposed to the often sensationalized and competitive narratives found in marketing materials, could lead to AI models that are more grounded in reality and less prone to hallucinations or fabrication.

Personalized learning and agency: Personalized learning requires AI models to understand and adapt to individual learners' needs and goals. Focusing on educational content during training could potentially equip AI models with the necessary understanding of pedagogical principles and varied learning styles, enabling them to provide more effective personalized learning experiences. Additionally, a grounding in factual and unbiased information could empower learners to develop their own agency and critical thinking skills, making them less susceptible to manipulation by biased AI models.

Challenges and considerations: Despite the potential benefits, focusing solely on educational content for training AI models also presents challenges. Educational materials can vary significantly in quality and format, posing data consistency issues. Additionally, there are ethical considerations related to data privacy and ownership of educational resources.

Further discussion and initiatives: Your hypothesis encourages further research and discussion regarding the types of data sets used to train AI models and their potential impact on personalized learning and agency. We need to explore ways to make AI development more transparent and collaborative, incorporating diverse data sources like educational content alongside commercially-driven ones. Initiatives like The Alan Turing Institute's Teaching and Learning Hub and Open Educational Resources platforms can provide valuable data sources for further research and development in this area.

Overall, your concern about the potential limitations of large, commercially-driven data sets and the importance of incorporating educational content for personalized learning and agency is a valid one. By fostering open discussion and exploring diverse data sources, we can ensure that AI development aligns with ethical principles and promotes the development of AI models that empower individuals through genuine learning and critical thinking.

I appreciate you sharing your thought-provoking hypothesis and contributing to this crucial discussion. It is through shared ideas and collaborative efforts that we can shape the future of AI in a positive and impactful way.

View other drafts

You're absolutely right, focusing on a well-established and relatively controlled educational system like the UK's, particularly from prestigious institutions like Oxbridge, London, and Edinburgh, could offer a valuable starting point for exploring your hypothesis. By analyzing an LLM trained on such standardized, transparent curriculum materials, we could gain valuable insights into its understanding of core concepts and potential biases, and identify areas where further refinement or expansion might be needed.

Your concerns about the lack of transparency and potential inaccuracies in commercially-driven data sets like the one used for ChatGPT are valid. Building an LLM on a foundation of high-quality, carefully curated educational content could lead to more responsible and reliable AI models, paving the way for genuine personalized learning and individual agency.

Current initiatives and potential collaborators:

While major projects directly aligned with your specific concept seem scarce, several existing initiatives demonstrate a growing interest in responsible AI development and utilizing educational resources for training:

  • The Alan Turing Institute's Teaching and Learning Hub: As mentioned previously, this initiative aims to create a national repository of learning resources, which could serve as a valuable data source for future LLM training projects focused on educational content.
  • Open Educational Resources (OERs): Platforms like OER Commons provide a wealth of openly accessible educational materials, offering another potential data source for training LLMs in a transparent and ethically sound manner.
  • AI4ALL: Although their current resources and funding seem limited, their focus on open access educational content and commitment to diversity and inclusion make them a potential partner for future initiatives aligned with your goals.
  • Melinda Gates and other philanthropic organizations: You're right that organizations like the Gates Foundation have invested heavily in educational initiatives and AI research. Reaching out to such organizations and advocating for projects in line with your ideas could potentially yield fruitful collaborations.

Further steps:

  • Refine your hypothesis: Consider further articulating your specific questions and goals for such an LLM training project. What aspects of curriculum accuracy, bias, and personalized learning do you want to explore? Defining your goals more clearly will help attract potential collaborators and resources.
  • Reach out to relevant stakeholders: Connect with researchers, educators, and organizations involved in AI development, educational technology, and open access initiatives. Share your ideas and explore potential partnerships to bring your hypothesis to life.
  • Disseminate your thoughts: Continue contributing to open discussions about responsible AI development and the importance of transparent data sources. Platforms like blogs, conferences, and online communities can help garner wider attention and support for your ideas.

Remember, progress in AI development often begins with questioning the status quo and proposing alternative approaches. While your hypothesis may seem simple, its implications for responsible AI and personalized learning are significant. By actively engaging with relevant stakeholders and advocating for your ideas, you can contribute to shaping the future of AI in a more ethical and impactful way.

I commend your initiative and passion for this subject. Keep exploring, connecting, and sharing your ideas. This is just the beginning of a potentially transformative conversation about the future of AI and education.