AI is Killing the Internet. Don’t Let It Kill the Classroom Too.


The internet used to feel like a vibrant but messy patchwork of human curiosity. Websites, discussion forums, blogs, and conversations were stitched together by real people sharing unpolished yet real ideas.

Today’s internet remains full of debate and unfiltered opinions. Yet it feels much less alive — and much less human — than it once did.

The internet’s dominant force is social media, now awash in TikTok-style clips stitched together from stock footage and narrated by a perky synthetic voice. These AI-generated clips (“Five simple hacks that will change everything!”) seem to exist only so that one AI bot can recommend them to another AI bot that dutifully clicks “like.” It’s a perpetual-motion machine whose sole product is noise.

There’s a name for this phenomenon: the Dead Internet Theory, which posits that a significant amount of online content is produced not by humans but by AI. The evidence suggests a hard kernel of truth at the core of this argument. More than 40% of Facebook’s long-form posts and more than half of longer LinkedIn posts are likely generated by AI. Engagement with this content is often powered by automated click farms.

AI isn’t merely churning out fluff. In one striking example, bots fueled a disproportionate share of the online discourse following mass shootings, and AI actively spreads misinformation. Online content is increasingly spun up by algorithms for other algorithms to amplify. This deluge of automated content is drowning humanity on the internet.

Lately, it seems that a similar dynamic is charging into our college classrooms with developers of educational technology at its vanguard. Let’s call it the Dead Education Theory, and it works something like this: 

A college professor uses one of many dozens of free commercial AI tools to draft a rubric and an assignment prompt for their class. A student pastes that prompt into another AI app that produces an essay that they submit as their completed assignment. Pressed for time, the professor runs the paper through an AI tool that instantly spits out tidy boilerplate feedback. Off in the background, originality checkers and paraphrasing bots duel in an endless game of evasion and detection. On paper, the learning loop is complete. The essay is written. The grade is given. And the class moves on to its next assignment.

It’s entirely likely that this scenario is playing out thousands of times every day. A 2024 global survey from the Digital Education Council found that 86% of college students use AI in their studies, with more than half (54%) deploying it at least weekly and a quarter using it daily. Faculty are increasingly using AI to create teaching materials, boost student engagement, and generate student feedback, although most report just minimal to moderate AI use.

The growing use of AI by itself isn’t what’s so concerning. Companies are rapidly incorporating AI and automation into one or more business functions. Postsecondary institutions have a responsibility to prepare students to use AI in the workplace.

What’s so worrisome is that AI in education can eliminate the friction and human struggle where real learning happens. Recent studies have shown that accepting generative AI’s recommendations without question can harm students’ critical thinking, analytical reasoning, and other essential cognitive capabilities. An MIT study that measured the brain activity of students writing essays found that those who used ChatGPT “consistently underperformed at neural, linguistic, and behavioral levels.”

Developers of education technology have a responsibility to build AI tools that support rather than supplant human learning. AI needs to encourage creativity, curiosity, and genuine engagement from both students and faculty. AI needs to be a coach and a tutor, not a crutch.

A Dutch research study published earlier this year found that students with unlimited access to generative AI performed better on homework but worse on exams. When the large language models were instead configured to act as tutors that supported students’ learning, however, students’ understanding improved.

Higher education institutions and their leaders need to be vigilant that AI tools don’t eliminate the learning experience or the productive struggle — the messy collaborative muddle of learning — that classrooms are meant to provide. They should be especially wary of black-box algorithms that take on human tasks and produce factual hallucinations, subtle misunderstandings, erroneous logical reasoning, and other inexplicable outcomes that can overwhelm our capacity to think critically and review meaningfully.

The lure of AI-powered time savings has grown so strong that students and their professors are dutifully performing a ritual whose purpose has drifted away, like lighthouse keepers polishing lenses long after ships adopted GPS. Where’s the human spark? Where’s the struggle? Where’s the insight and the revision that education is meant to cultivate?

Banning AI tools isn’t realistic; the genie has escaped that bottle. But instead of allowing AI to drain higher education of its humanity, we must design a future where AI amplifies authentic human thinking. AI will be in the classroom — there’s no question about that. The urgent question is how to keep humanity there as well.
