Reading for Wisdom in the Age of AI


The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.

Isaac Asimov, Isaac Asimov’s Book of Science and Nature Quotations

Over the past several years, there has emerged a serious new threat to the project of reading for wisdom through literature—namely, the ease with which students can short-circuit the hard work of coming to terms with a complex text through recourse to the tools of generative AI.  Many of the drivers of unwisdom that we must contend with today, such as social media’s tendency to keep us engaged by effectively doubling down on our prejudices and predilections, depend upon already deployed forms of artificial intelligence.  The increasingly widescale adoption of generative AI compounds this problem.  Because tools such as ChatGPT, Claude, Perplexity, and Google Gemini have been trained on text from the Internet, the underlying algorithms of which favor maximalist statements that speak first and foremost to our emotional brains, they risk reinforcing political and social bubbles, exacerbating bias and tribal tension, and undermining users’ capacity for empathy and “healthy democratic deliberation” (Kim and Mejia).  At the same time, these tools tend to homogenize ideas and presentational styles in a way that is deeply “unfavorable for diversity in opinions,” not to mention the pleasure of a well-framed, original argument (Chan 2608).  


Not surprisingly, most of the early press on the advent of generative AI in higher education has focused on the ways in which AI tools allow some students to cheat relative to their peers.  The disquiet of professors forced to rethink how they evaluate student submissions—be they in the form of essays, problem sets, coding assignments, or graphic designs—has been matched by that of rule-abiding students seeking to demonstrate excellence in today’s high-stakes academic environments.  Lost amid these concerns has been a focus on how students who rely heavily on generative AI to complete assignments can effectively cheat themselves by short-circuiting essential struggles—not just the struggles (and pleasures) inherent in the process of writing, problem solving, coding, or design, but also the struggles with cognitive and ethical complexities so essential to the development of wisdom.  Generative AI does remarkably well at providing essential context in domains where the user is not proficient, but it does so in a way that often fails to reward curiosity or foster deep and lasting understanding.  Moreover, the remarkable speed with which tools like ChatGPT and Gemini 3.0 Pro produce their results risks leaving students with the impression that they can effectively ‘know it all,’ thereby undermining their development of intellectual humility, the very cornerstone of true wisdom.  

A biochemist by training and a titan in the field of science fiction, Isaac Asimov was no one’s definition of a Luddite.  In 1988, four years before his death, Asimov published a compendium of quotations on science and nature that included this epigraph: “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom” (Asimov and Shulman 281).  The widespread belief that technological innovations like carbon capture and geoengineering will solve our ongoing climate crisis, for example, is but the latest manifestation of postwar American society’s irrational faith in technology, and its related confusion of intelligence and wisdom.  Indeed, there is plenty of reason to think that this confusion has only gotten deeper since Asimov’s time.  Inasmuch as the wise practice of science demands that scientists and engineers assess the ethical, social, and economic impact of their work on a global scale, increasing specialization in research around artificial intelligence has meant that the “transition from data to wisdom has become progressively more challenging” (Xu et al.).  Or as philosopher of science David Casacuberta Sevilla writes: 

When AI moved from the symbolic approach to neural networks, machine learning and other statistical methods, we lost our desire to explain things and just got interested in making things that work.  We stopped being scientists and philosophers and became only techno-centric engineers. (206)

Just a few months ago, a significant segment of America’s political class was in thrall to the idea that a demonstrably brilliant technological entrepreneur possessed wisdom sufficient to fashion a more efficient American government.  But, once again, intelligence is not wisdom.  Indeed, as the members of the Wisdom Task Force note:

There is growing consensus that the relationship between wisdom and intelligence is best understood from a threshold perspective.  That is, a certain amount of intelligence is necessary for wisdom, but beyond that, intelligence provides little added value. (Grossman et al., “Science” 116) 


Over the past decade, as companies such as OpenAI, Meta, Anthropic, and Google DeepMind have made significant progress toward the realization of an artificial general intelligence that would exceed most forms of human cognition, researchers have embarked on a still rudimentary (and possibly fruitless) quest for artificial wisdom.  I am by no means qualified to fully assess the feasibility of this quest.  It is worth noting, however, that many scholars more qualified than I remain skeptical.  Deborah Williams and Gerhard Shipley suggest that “AI may likely never develop a sense of self and therefore never exhibit the full range of human cognition, including imagination, aesthetics, values, humor, and wisdom” (46).  Sevilla likewise argues that “[e]mbodiment and self-relevance are probably the two key features that make the task of actually creating a wise artificial creature so difficult” (204). 

I suspect, however, that the greater difficulty lies in wisdom’s traditional association with the vision of a life (or lives) well lived.  Because the exercise of wisdom, unlike intelligence, hinges on an underlying set of values, many have argued that artificial wisdom can only be as good as the wisdom of the relevant programmers (Williams and Shipley).  Within certain limits, each of us is able to articulate reasonably well our individual goals and values.  But the true promise of artificial wisdom hinges on AI’s ability to articulate and encode the ultimate goals of a community, organization, nation, or globe, to build upon what Alasdair MacIntyre calls “a shared impersonal standard in virtue” (Tsai, MacIntyre ix).  At a moment in history marked by the demise of nearly all prior forms of consensus—be they religious, national, political, or philosophical—it is difficult to imagine how the articulation of artificial wisdom would not be vitiated by an unavoidable zone of disagreement on any question of final goals.  Even if we manage to avoid the worst possible eventuality, which is that superintelligent AI machines come to modify the values and goals we have programmed into them in the course of their learning, artificial wisdom would almost certainly be shaped by the narrow interests of the corporations, governments, and other organizations that develop and control it, in which case it is unlikely to be wisdom at all (Sinha and Lakhanpal).


As remarkable as they are, today’s generative AI tools are notoriously unwise in a very precise way.  In a discussion of the much-noted tendency of large language models like ChatGPT to “hallucinate”—i.e., to make up their answers “on the fly”—Ethan Mollick writes that these “LLMs are not generally optimized to say ‘I don’t know’ when they don’t have enough information.  Instead, they will give you an answer, expressing confidence” (96).  In their current form, in other words, intellectual humility is very much not their forte.  

All of this noted, generative AI is here to stay.  Mollick is surely right to suggest that, used responsibly, generative AI has the salutary effect of narrowing gaps of talent and training in a wide variety of areas, including expository writing, coding, the drafting of business plans, and the creation of marketing strategies.  There is no doubt, moreover, that those who truly understand what AI tools can and cannot (or should not) do, and thus come to use those tools symbiotically, will become more effective, efficient, and creative in their daily tasks, both professional and personal.  I recently asked Claude 3.5, for example, to create an active learning lesson plan for teaching Sans soleil to undergraduates and was astonished at both the number and quality of its suggestions.  The future of higher education belongs to those who understand both the power and the limitations of the emerging AI tools and use them responsibly to help their students become more professionally capable and personally wise.
