Just before Christmas 2023 I set a tricky Beatles Quiz for friends of mine who were avid Beatles fans of long standing. The catch was that the answers weren’t obviously to be found on the Internet. Most of my super-knowledgeable friends couldn’t answer any of the questions, so one of them stuck the questions into ChatGPT. Whilst ChatGPT insistently provided 20 responses (full of guesses, as though it were a mid-term exam at any American university) it was mostly wrong; but it didn’t let on that it was guesswork, nor provide any qualifying framing of its repeated inaccuracies.
ChatGPT almost had 3 “correct” shots in the dark out of 20 questions, but only if I’m generous, and after some prompting too (that’s cheating in academia, obviously). That is a very bad fail in most educational high-stakes assessment contexts. ChatGPT, an example of generative AI, has been termed a stochastic parrot, meaning that if it is actually clueless about a “correct” answer it will stump up its “best” guess camouflaged as an answer, however low the probability of it being correct! If the possible answer has a 17% chance of being correct, ChatGPT will provide that “answer” with 100% confidence. Whilst this might be a sensible strategy for a poor student in a multiple-choice quiz (unless the marking scheme, like GPA, marks down guesses) it is not what we expect in an answer from a putative authority. ChatGPT is a clueless scattergun which simulates authority whilst exhibiting the profound digital arrogance that its ones and zeros build into its systems. For example, in answer to the question “which French Nobel Prize winner wrote ‘the Beatles were full of joy’?” ChatGPT guessed Albert Camus, who had died before they became The Beatles. ChatGPT is just a rough and ready matching machine, to paraphrase Robert Wyatt.
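To make that concrete, here is a minimal illustrative sketch (the candidate answers and their probabilities are made up, and this is not ChatGPT’s actual machinery): a system that always emits its top-scoring guess while hiding the score sounds exactly as confident at 17% as it would at 97%.

```python
# Toy sketch only: invented candidate answers with made-up probabilities,
# not ChatGPT's real internals. It shows how always emitting the top-scoring
# guess, without its score, hides how weak that guess really is.
candidates = {
    "Albert Camus": 0.17,       # hypothetical model scores
    "Jean-Paul Sartre": 0.15,
    "Patrick Modiano": 0.12,
    "I don't know": 0.0,        # abstaining is never the top choice
}

best_answer = max(candidates, key=candidates.get)
confidence = candidates[best_answer]

# What the user sees: a flat assertion, no hedging.
print(f"Answer: {best_answer}")

# What the user arguably should see: the guess plus how shaky it is.
print(f"Answer: {best_answer} (estimated chance of being right: {confidence:.0%})")
```

Surfacing something like that second line is essentially the “reveal at what level of probability” fix argued for below.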
But if ChatGPT is to be authoritative then its algorithm cannot just blitzkrieg the reader with any random guess from its combined harvesting of data, unless it also reveals at what level of probability that guess is provided. Parrots might provide voicings of words, well-trained parrots might provide voicings of “answers”, but the simple follow-up question “how come?” ends discussion with any parrot, dead or alive. M$ (Microsoft) won’t do this because, as Jim Clark (Netscape) pointed out in the last century, Microsoft wants vertical integration in any market it enters, and with ChatGPT we are talking about Education as a market. The point of ChatGPT for M$ is to render teachers redundant in education systems built around high-stakes assessment.
Education is no longer a place of knowledge; it’s merely a stack of vertically integrated products, as longed for by Harvard University’s own Clayton Christensen, who wanted to disrupt the classroom in order to make it a global product which American institutions can dominate. Death of a Salesman replaced by death of the teacher. The Professor might just be retained in order to manage the always-on switch, more probably to flash perfect white teeth in the prospectus.
The Beatles Questions
I’ve been a fan of The Beatles since January 1963 and I’ve been writing about them since 2009, including a novel, 63/68 A Visceral History, and a number of blogs, including a song-by-song guide to the White Album that Google has now badged as authoritative. Try a Google search for “Bungalow Bill review”. This means that snippets of our work (each song on the blog was reviewed by a different member of our Brainsrusting group) are now part of the core reference model that ChatGPT relies on. Its data is partly formed by Google’s snippet answers. Some of its data is hoovered in from Wikipedia (3%), much of it from opinionated blogs that still think a computer can’t beat Kasparov at chess.
Machine Reading not Machine-Learning
ChatGPT is just a data resource, notable, however, for its scale. Similarly, Google is a search tool notable for its speed. Neither is particularly accurate, but both are acceptable and fast enough for those who literally believe “time is money”. Discussing the amazingly clueless ChatGPT results to my Beatles Quiz with a friend, I realised the answers revealed that they were based on machine reading, not machine learning, whatever the AI experts are saying. After all, even more data is just more data; it’s certainly not knowledge, far less wisdom. Machine reading data doesn’t even create useful information; that’s left to the algorithm and its fast statistical probabilities concerned with guessing what an answer might be. To ChatGPT, where The Beatles are concerned, it seems that Happiness is a warm guess…
In 5 Years’ Time? Better models…
In the many years since 1984 that I spent teaching about or researching AI, I noted that the AI fanboys were always saying it would arrive in about 5 years. Ever since Donald Michie set up his machine intelligence research unit at the University of Edinburgh in the 1960s, the benefits of AI have been over the hills but not far away. Five years is into the next funding cycle, so it means our inevitable AI research conclusion will be “more research”, or just “keep funding us, please!” Rarely do useful applications emerge, but interesting outcomes of research do. I particularly like Marvin Minsky’s Society of Mind because it attempts to model how the human body uses the brain. I recall Minsky saying we each have around 400 “brains” controlling the multiple functions of ourselves, which is why we can chew gum and walk at the same time.
Better models of learning
Rose Luckin’s PhD, which informed her Homework project (where she built an iPad 2 before Apple launched its pocketful of eyes), modelled “learning” by evolving Vygotsky’s constructivist model of learning for the resource-rich age of the Internet. Both of these useful outputs of AI research were concerned with deepening our understanding of ourselves in useful ways. They were heuristics. Heuristics can help us think better for ourselves by providing us with constructive insights. They don’t replace us thinking for ourselves, far less offer algorithms presented as better thinkers than we are.
False correlations and false metaphors…
So the major criticism of generative AI in general, and ChatGPT in particular, is that they are statistical systems passing themselves off as truth-tellers. A critical flaw in using statistics to provide the truth about anything is something called a false correlation. Matching this pattern with that pattern seems to tell us something meaningful about their connection, but if we say “this causes that” it may not be true. Statistical results still rely on human interpretation before they become meaningful. However, with generative AI we now have a far bigger problem than a simple false correlation. We now have the potentially devastating problem of false metaphors, starting with the biggest lie: that AI is a form of intelligence, exhibits sentience and is possibly wiser than us. It’s just a high-speed matching machine, useful only to people who can’t distinguish a fact from an assertion. So this is wisdom! What have we done?
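To see how easily pattern-matching produces a false correlation, here is a minimal sketch (assuming only NumPy; the series and their labels are invented for illustration): two completely unrelated random walks will often show a sizeable correlation coefficient, and a pure matching machine would happily report a connection that isn’t there.

```python
# Illustrative sketch of a false (spurious) correlation, assuming only NumPy.
# Two independent random walks often correlate strongly by accident,
# even though neither has anything to do with the other.
import numpy as np

rng = np.random.default_rng(42)
series_a = np.cumsum(rng.normal(size=500))  # e.g. imaginary "mentions of The Beatles"
series_b = np.cumsum(rng.normal(size=500))  # e.g. imaginary "sales of bungalows"

r = np.corrcoef(series_a, series_b)[0, 1]
print(f"Correlation between two unrelated series: r = {r:.2f}")

# A system that treats a large |r| as meaning "this causes that" would
# confidently assert a connection that a human interpreter should reject.
```

A number like r only becomes meaningful once a human asks whether the connection is plausible; that is exactly the interpretive step the paragraph above insists on.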
You can try the Beatles Quiz that bemused ChatGPT here (with some fresh bonus questions just for you).
Fred Garnett March 2024