David Brooks' Position on A.I. Is Wobbling... as Should EVERYONE's Position on Standardized Testing
An "A.I. limitationist", David Brooks is rethinking his position having talked with FORMER "limitationist" Douglas Hofstadter... and their thinking should influence everyone's thinking about schooling
In his recent column, "Human Beings Are Soon Going to Be Eclipsed," New York Times op-ed writer David Brooks indicates he is considering a change of heart in his "limitationist" stance on the impact A.I. might have on humanity. In the column, Brooks explains that longstanding stance as follows:
Over the past few months, I’ve become an A.I. limitationist. That is, I believe that while A.I. will be an amazing tool for, say, tutoring children all around the world, or summarizing meetings, it is no match for human intelligence. It doesn’t possess understanding, self-awareness, concepts, emotions, desires, a body or biology. It’s bad at causal thinking. It doesn’t possess the nonverbal, tacit knowledge that humans take for granted. It’s not sentient. It does many things way faster than us, but it lacks the depth of a human mind.
I take this to be good news. If A.I. is limited in these ways, then the A.I. revolution will turn out to be akin to the many other information revolutions that humans have produced. This technology will be used in a lot of great ways, and some terrible ways, but it won’t replace us, it won’t cause the massive social disruption the hypesters warn about, and it’s not going to wake up one day wanting to conquer the world.
Holding this position, he was pleased to read an Atlantic article written five years ago by Douglas Hofstadter, "an eminent cognitive scientist" whom Brooks respects and reveres. In the article, Hofstadter outlined a stance similar to Brooks' own:
Hofstadter argued that A.I. translation tools might be really good at some pedestrian tasks, but they weren’t close to replicating the creative and subtle abilities of a human translator. “It’s all about ultrarapid processing of pieces of text, not about thinking or imagining or remembering or understanding. It doesn’t even know that words stand for things,” he wrote.
But after finding comfort in an admired scientific mind, he was unsettled to learn that Hofstadter had recently changed his thinking and was now deeply concerned about A.I. Brooks recounts his reaction to this news:
So I was startled this month to see the following headline in one of the A.I. newsletters I subscribe to: “Douglas Hofstadter Changes His Mind on Deep Learning & A.I. Risk.” I followed the link to a podcast and heard Hofstadter say: “It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed.”
Apparently, in the five years since 2018, ChatGPT and its peers have radically altered Hofstadter's thinking. He continues: A.I. "just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches."
In a word, Brooks' reaction was "WHOA!" A follow-up phone call to Hofstadter brought more discomfort:
I called Hofstadter to ask him what was going on. He shared his genuine alarm about humanity’s future. He said that ChatGPT was “jumping through hoops I would never have imagined it could. It’s just scaring the daylights out of me.” He added: “Almost every moment of every day, I’m jittery. I find myself lucky if I can be distracted by something — reading or writing or drawing or talking with friends. But it’s very hard for me to find any peace.”
…Two years ago, Hofstadter says, A.I. could not reliably perform this kind of thinking. But now it is performing this kind of thinking all the time. And if it can perform these tasks in ways that make sense, Hofstadter says, then how can we say it lacks understanding, or that it’s not thinking?
And if A.I. can do all this kind of thinking, Hofstadter concludes, then it is developing consciousness. He has long argued that consciousness comes in degrees and that if there’s thinking, there’s consciousness. A bee has one level of consciousness, a dog a higher level, an infant a higher level, and an adult a higher level still. “We’re approaching the stage when we’re going to have a hard time saying that this machine is totally unconscious. We’re going to have to grant it some degree of consciousness, some degree of aliveness,” he says.
But while Hofstadter's rethinking of A.I. carried weight for Brooks, he remained unpersuaded that A.I. would ever achieve ultra-human reasoning. He wrote:
A.I. is capable of synthesizing these linguistic expressions, which humans have put on the internet and, thus, into its training base. But, I’d still argue, the machine is not having anything like a human learning experience. It’s playing on the surface with language, but the emotion-drenched process of learning from actual experience and the hard-earned accumulation of what we call wisdom are absent.
After quoting from a recent New Yorker article by an A.I. expert who supports the "limitationist" view, Brooks reaches this conclusion:
I confess I believe it a lot less fervently than I did last week. Hofstadter is essentially asking, If A.I. cogently solves intellectual problems, then who are you to say it’s not thinking? Maybe it’s more than just a mash-up of human expressions. Maybe it’s synthesizing human thought in ways that are genuinely creative, that are genuinely producing new categories and new thoughts. Perhaps the kind of thinking done by a disembodied machine that mostly encounters the world through language is radically different from the kind of thinking done by an embodied human mind, contained in a person who moves about in the actual world, but it is an intelligence of some kind, operating in some ways vastly faster and superior to our own. Besides, Hofstadter points out, these artificial brains are not constrained by the factors that limit human brains — like having to fit inside a skull. And, he emphasizes, they are improving at an astounding rate, while human intelligence isn’t.
That notion shakes Brooks' belief in the superiority of humanity:
I find myself surrounded by radical uncertainty — uncertainty not only about where humanity is going but about what being human is. As soon as I begin to think I’m beginning to understand what’s happening, something surprising happens — the machines perform a new task, an authority figure changes his or her mind.
Beset by unknowns, I get defensive and assertive. I find myself clinging to the deepest core of my being — the vast, mostly hidden realm of the mind from which emotions emerge, from which inspiration flows, from which our desires pulse — the subjective part of the human spirit that makes each of us ineluctably who we are. I want to build a wall around this sacred region and say: "This is the essence of being human. It is never going to be replicated by machine."
Because Brooks is, at his core, something of a humanist, he worries that a machine will not only replicate humanity but also develop thoughts and ideas that eclipse it. But as he notes in his final paragraph, technologists are less concerned with humanity and see the brain as a machine:
“Nope, it’s just neural nets all the way down. There’s nothing special in there. There’s nothing about you that can’t be surpassed.”
Some of the technologists seem oddly sanguine as they talk this way. At least Hofstadter is enough of a humanist to be horrified.
As I trust readers of this blog have surmised, I strongly oppose the metrics used to "measure" learning and intelligence. Brooks' article helped me realize why this is so. Like Brooks, I find myself "clinging to the deepest core of my being — the vast, mostly hidden realm of the mind from which emotions emerge, from which inspiration flows, from which our desires pulse — the subjective part of the human spirit that makes each of us ineluctably who we are. I want to build a wall around this sacred region and say: 'This is the essence of being human. It is never going to be replicated by machine.'"
These recent advances in A.I. should awaken others to the limitations of standardized testing of any kind because, at root, these tests are based on the premise that the human mind is like a computer. Like A.I., standardized tests cannot measure "the subjective part of the human spirit that makes each of us ineluctably who we are," and so long as schools and our culture cling to them as the basis for judging the "ability" of each student, we will fail to see the humanity in each child.
When I read about the "learning gaps" that resulted from the "schooling gap" of the pandemic, I find my belly tightening, because the "learning gaps" measured by standardized tests are gaps in the "pedestrian tasks" Hofstadter describes in his original essay: the "…ultrarapid processing of pieces of text, not about thinking or imagining or remembering or understanding." The tests do not capture the unique talents each student possesses, nor do they help students gain the kind of deep understanding that is the essence of learning. That understanding requires human interaction, an interaction that cannot be mediated by technology or measured with a timed multiple-choice examination.