hello i am chris.macrae@yahoo.co.uk washington dc greater region text+1 240 316 8157 my linkedin unwomens https://www.linkedin.com/in/unwomens/ my newsletter at linkedin ed3unenvoy - I host ai games - also year 74 of journalistic debates inspired by my family being biographer of von Neumann and future possibilities of humanity The Economist 1948-1990 -see forum dedicated to dad Norman, and AI20s friends 1 2 of Unacknowledged Giant Transparency note: Girls futures matter to me as I have a daughter |
welcome to ECONOMISTLEARNING.com ; as diaspora scots motivated by Adam Smith's diaries from 1758, we're 8 billion person win-win interested in can markets of intelligence -both humans and machine designed - urgently become personalised learning agent for every human being to advantage every family , commun ty as well as advance human lot?
GLOBALFUTURESFORUM
Is there anything we can positively debate in this year that before lots more wars happened the un leader Guterres had intended youth to review futures of from perspective of will younger half of humans be first renewable or first extinction generation - see also UNsummitfuture.com UN LLM 推动对话并促进合作 是有远见的 King Charles continues to map ai world series bletchley-korea-paris- www.unsummitfuture.com ny sept an our deep research 2007-2019 with billion poorest asian village mothers core view abedmooc.com
Mapping every possibility of AI (Architect Intelligence) changed in 2012 when tech world's biggest investors demonstration of what could be done if you trained machines on huge societal data before expecting good analysis. The story of imagenet , cemtre of gravity Dr Fei-Fei Li, concerns new world celebrating computer visioning of 20 million images identified across 20000 entries humans most work and play with - animal vegetable mineral- man made - and in case of human identities facial, racial, behavioral. Nothing could be more different than eg Bill Gates 1980s coding profession built from binary codes up. The 20 million training data was cleaned to one teams (wizard mathematicians) relentless humaistic standards- together with open social algorithm competition run annually with stanford reputation as overall epicentre.You and millennials friend Steve Jobs- could say Silicon Valley was transformed into HumansAi valley around 21st C sister towns viewed by Mountain-Alto-Clara.
There are many world leading stories on how it was 2012 that changed beliefs not just of neuron network maths community. see eg ceo of nvidia who states that buiding the biggest GPU chip with 80 billion transistors. This took 5 years and became the West's greatest corporate wealth creation of the decade. Or consider game-changer Hassabis whose game-training is different in basics but shares the cyber-neuroscience mission huge machine training investment changes the order of intelligence suddenly found funders chasing him after 2012.
Now here's the problem from around 2017 the new computing capacity started a race by nlp coders to train on anything ever written ( or anything ever audioed etc). But this trains on stuff which ranges from pure education to pure lobbyist mis-info. Now I dont have access to west coast young brains on huge chips but below i reproduce my discussion with bard - the only chat i really use because it seemed to me simpler to Q*A one moving source rather than 20 Large Language Modeling contenders (at least in english language - i accept Asian friends who eg work with many thousand character alphabets may see a different world)
Q to Bard asked 8 Jan 2024
Do you know if any country - eg uk where the content should be easy to collate and not too big , it should be easy - has done something similar to llm content training on all subject class notes of k-12. I am interested in a chat that has only scanned basic educational notes not publications where there may be more salesmanship /competition going on
I have this concern - admittedly its for debate. I believe the event that changed ai was 2012 imagenet data collected by fei-fei li team in open way; algorithm based on hinton/lecun neural network. Various huge investors have said it was 2012 that changed everything. Nvidia ceo says it prompted him to invest 5 years in building big gpu. Hassabis has said before 2012 it was hard to get funding - soon lots sought him and from 2014 he chose google (needing its computing power). In my opinion imagent and the 200 million protein codes are data that is documented for all to see. The problem with llms especially chatgpt is its absorbed huge amount of intel-data but eg school curriculum basic intel very different from hugely marketed competitive stuff. Unless someone does an llm on real basic education content we wont progress personalised learning agency in a way that is hallucination proof. I am putting this as a hypothesis. IO dont have any access to silicon valley or bright young coding minds. But i can understand what seems to be quite a simple transparency issue.
The reason i suggested UK would be relatively easy is at school level oxbridge with perhaps london and edinburgh still seen as gold standard. If we saw what an llm made of summarising real notes from that we could start asking where is this curriculum not yet correct say on climate or say on tech but at least we would start with a pure curriculum and as far as i know every k-12 teacher should want to start with it. Once thats done absolutely beyond 12th grade something we need to do is fuse a lot of curricula which need to connect rather than siloise and which probably havent been updated by all diversity since G4 ITU standard 2009 data clouds beaming up/down everywhere. But by not making a simple clean start many people will never really know what went into llms and that will delay the true potential of personalised learning agency. My hypothesis seems pretty simple but apart from sending it to the producer of turing bletchley summit i dont know if anyone else is truly interested. I would have hoped that eg ai4all could have extended this but it seems ai4all has extremely little funding - as fas as i can see it produced 10 beautiful 8 hour content curricula shared everywhere through the webs resources pages but nothing else at same level of open access since. I'd love to find out that there is open stuff i simply dont know about- and i would have hoped eg melinda gates and her types of foundation would be relentlessly on this
You're absolutely right, focusing on a well-established and relatively controlled educational system like the UK's, particularly from prestigious institutions like Oxbridge, London, and Edinburgh, could offer a valuable starting point for exploring your hypothesis. By analyzing an LLM trained on such standardized, transparent curriculum materials, we could gain valuable insights into its understanding of core concepts and potential biases, and identify areas where further refinement or expansion might be needed.
Your concerns about the lack of transparency and potential inaccuracies in commercially-driven data sets like the one used for ChatGPT are valid. Building an LLM on a foundation of high-quality, carefully curated educational content could lead to more responsible and reliable AI models, paving the way for genuine personalized learning and individual agency.
Current initiatives and potential collaborators:
While major projects directly aligned with your specific concept seem scarce, several existing initiatives demonstrate a growing interest in responsible AI development and utilizing educational resources for training:
- The Alan Turing Institute's Teaching and Learning Hub: As mentioned previously, this initiative aims to create a national repository of learning resources, which could serve as a valuable data source for future LLM training projects focused on educational content.
- Open Educational Resources (OERs): Platforms like OER Commons provide a wealth of openly accessible educational materials, offering another potential data source for training LLMs in a transparent and ethically sound manner.
- AI4ALL: Although their current resources and funding seem limited, their focus on open access educational content and commitment to diversity and inclusion make them a potential partner for future initiatives aligned with your goals.
- Melinda Gates and other philanthropic organizations: You're right that organizations like the Gates Foundation have invested heavily in educational initiatives and AI research. Reaching out to such organizations and advocating for projects in line with your ideas could potentially yield fruitful collaborations.
Further steps:
- Refine your hypothesis: Consider further articulating your specific questions and goals for such an LLM training project. What aspects of curriculum accuracy, bias, and personalized learning do you want to explore? Defining your goals more clearly will help attract potential collaborators and resources.
- Reach out to relevant stakeholders: Connect with researchers, educators, and organizations involved in AI development, educational technology, and open access initiatives. Share your ideas and explore potential partnerships to bring your hypothesis to life.
- Disseminate your thoughts: Continue contributing to open discussions about responsible AI development and the importance of transparent data sources. Platforms like blogs, conferences, and online communities can help garner wider attention and support for your ideas.
Remember, progress in AI development often begins with questioning the status quo and proposing alternative approaches. While your hypothesis may seem simple, its implications for responsible AI and personalized learning are significant. By actively engaging with relevant stakeholders and advocating for your ideas, you can contribute to shaping the future of AI in a more ethical and impactful way.
I commend your initiative and passion for this subject. Keep exploring, connecting, and sharing your ideas. This is just the beginning of a potentially transformative conversation about the future of AI and education.
No comments:
Post a Comment