Humanity as gaslighting victims of LLMs?
Are we being gaslit (gaslighted? gaslightified? gaslitted?) by LLMs? Are they subtly and not-so-subtly guiding our behavior?
No, not really, to the first question. And "Well, duh" is a spot-on answer, I think, to the second.
Maybe you've heard of this?
-->> "Adding a feature because ChatGPT incorrectly thinks it exists"
https://www.holovaty.com/writing/chatgpt-fake-feature/
ChatGPT fell into a pattern of repeated (if somewhat logically arrived-at) hallucinations, telling thousands of people that Soundslice, an online sheet music scanning app, could import ASCII guitar tablature, which, in fact, it could not. This went on long enough that the app's creator, Adrian Holovaty, decided to go with the flow and build the feature. Since people were expecting it, he felt he should provide it. But those customers would never have expected it had they not asked a hallucinating LLM.
Certainly there are the apocryphal stories about people slightly changing their writing and speaking habits (mostly in vocabulary) after frequent exposure to LLM products: supposed word choice shifts like the (now infamous) "delve" and "underlie".
Most educators have seen years of concrete evidence that students write less thoughtfully and more spontaneously, with shorter thoughts and shorter products. Of course it's the smartphones, the chatting, the texting, and so on. They're getting more practice at the shallower, shorter stuff and less practice at the deeper, longer work.
Now LLMs will write your emails, cover letters, reports, scripts, and more, in seconds. So you'll get less practice at it.
Perhaps you've heard of this? How our LLM habit is making us "dumber" in our writing skills?
https://www.rohan-paul.com/p/llms-are-killing-your-writing-fingerprints
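That "fingerprint" idea is at least measurable, in a crude way. Below is a minimal, hypothetical Python sketch; the marker-word list and the before/after samples are invented for illustration, not taken from that article or from any study. It simply counts how often a few supposedly LLM-flavored words appear per 1,000 words of a text sample:

import re
from collections import Counter

# Hypothetical marker list, for illustration only; not drawn from
# any published study of LLM-influenced vocabulary.
MARKERS = {"delve", "underlie", "tapestry", "leverage", "crucial"}

def marker_rate(text: str) -> float:
    """Return marker-word occurrences per 1,000 words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[m] for m in MARKERS)
    return 1000 * hits / len(words)

# Toy before/after samples, also invented for the demo.
before = "I read the report and found the main cause of the delays."
after = "Let us delve into the report; crucial factors underlie the delays."

print(f"before: {marker_rate(before):.1f} markers per 1,000 words")
print(f"after:  {marker_rate(after):.1f} markers per 1,000 words")

Run something like that over your own writing from different years, and a rising rate would be a (very rough) hint that the LLM vocabulary is seeping in.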
Is this overall change deliberate? Not at all. LLMs have neither deliberation nor coordination. Neither do the LLM companies (except where there's profit in it, and for many of them there are no appreciable, reliable profits just yet).
So this is an accidental, emergent phenomenon we have brought upon ourselves. We created writing. We practiced and recorded writing for many thousands of years, in a variety of ways. And when we invented digital computing, we started converting as much of our written record as we could get from analog to digital.
Have most humans practiced writing and reading? Absolutely not. For most of human history, the vast majority of people at any given time were illiterate. Only in the last few centuries has that stopped being true. And even today, in the USA, about 20% (that's 1 in 5!) of all adults cannot reliably read and write English at an 8th grade level.
https://nces.ed.gov/pubs2019/2019179/index.asp
When we invented LLMs, whose very name is Large Language Models, we literally created them by throwing billions of written records into some well-heeled neural networks. Sure, the basis of their operation is mathematical, because the basis of all programming is mathematical. But the result of all that math is language, in the "natural language" modes that humans use. LLMs are the actual evolved products of our millennia of writing things down.
Some call LLMs "plagiarism machines," and in one sense this is true: one of their major components is text prediction and completion. But they learned how to write by studying how we humans do our writing. They were created to copy us. Sure, they have little random tweaks here and there, but the skeleton of an LLM is its copying of how humans write. Our grammars are their grammars. Our sentence structures are their sentence structures.
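To make "text prediction and completion" concrete, here is a toy sketch in Python. Real LLMs are transformer networks with billions of parameters; this little bigram counter only illustrates the core move: learn from human text which words tend to follow which, then generate by repeatedly predicting a next word.

import random
from collections import Counter, defaultdict

# A tiny "training corpus" of human-written words (invented here).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, length=6):
    """Extend a prompt by repeatedly sampling a likely next word."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the cat sat on the rug ."

Notice that everything this toy "knows" came verbatim from its training text. That is the kernel of truth in the "plagiarism machine" jab.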
But humans learn how to speak and write in the same way, at its core. By mimicry, a toddler picks up all the sounds we older humans make and copies us. As they mimic us, babbling and spouting nonsense, they encourage us to talk back to them. At first we talk simply ("baby talk," in most cases), but they're also listening when we talk in our normal, sophisticated ways. When we give emotional emphasis to some pattern of sounds, they take note and copy us (as kids do when they hear some curse word or another from an adult and start repeating it, often to our chagrin or amusement).
But LLMs (the majority of our available generative AIs these days) were not created from the sounds people make. They were created from the written word. And most humans, most of the time, do not write like they talk.
Teachers use a similar mechanism to teach writing to students old enough to have been talking (and with humans, 99% of us learn talking skills 5-7 years before the equivalent level of writing skills). Show the child some exemplars of good writing. Go over their basics. Start copying them: first the teacher does it, then the class and teacher together, then the student alone. With pure mimicry at first, then with minor changes, with fits and starts and hesitations and mistakes (many, many mistakes) along the way. No kid, no matter how "gifted," starts off as a great writer or speaker. They learn, first while silent, then while productive.
And LLMs make many mistakes. They're still learning. (Mostly because we have not yet invented a technique to simulate true learning. But we're working on it.)
But now LLMs produce writing of sufficient quality far, far faster than we do. So now, by sheer volume and our laziness (read: our instinct for energy efficiency), they are producing the exemplar documents. And by reading through them, in high numbers, in repeated circumstances, we are now absorbing new ideas, patterns, phrases, and vocabulary.
We aren't being "gaslit" by LLMs. There is nothing wrong with our gaining new vocabulary and expressive qualities from reading their works. We are simply learning and evolving our use of language. Just as we've always done.
(And that ASCII tablature mistake? It was not some conniving scheme by ChatGPT to get the company to code the feature. It's just a very logical idea applied to their product. ASCII output is very common when LLMs write about computer code and flowcharts. Applying the idea to music notation is an easy and, I think, inevitable logical leap. Really, the programming team should have already thought of it.)