The Dead Internet Theory: Origins, Evolution, and Future Perspectives
A level-headed look at a theory that seems to be gaining some traction.
The Dead Internet Theory claims our modern internet is largely an illusion filled with robots and automated content rather than real people. Its persistence (and some eerie supporting trends) makes it worth examining with a level head. In this piece, I wanted to explore how this theory emerged and gained traction, what its status is today, whether it holds any water under critical scrutiny, and how the concerns it raises might shape the internet's near future.
Origins of the Dead Internet Theory and Its Main Claims
The exact genesis of the Dead Internet Theory is hard to pin down, but it clearly coalesced into a defined concept around the late 2010s and early 2020s on fringe online forums. The central claim is that the internet “died” sometime around 2016 or 2017, meaning that genuine human activity on the web dwindled and what we see now is mostly generated by bots or artificial intelligence. Proponents argue that most content, from social media posts to blog articles and even user comments, is fake, produced automatically by AI networks or scripts. In their view, the vibrant, user-driven internet of the past has become a wasteland of algorithmically generated text and recycled posts, with real human users quietly fading into the background.
This theory was first articulated in detail on the small forum Agora Road’s Macintosh Café in early 2021, where a user called “IlluminatiPirate” started a thread titled “Dead Internet Theory: Most of the Internet is Fake.” In that post (which built on even earlier musings from imageboards like Wizardchan and 4chan), IlluminatiPirate alleged that “most of what we assume to be human-created content is just AI networks working in cahoots with secretive media influencers” to manipulate the masses. In other words, not only are bots generating content, but there’s an intentional effort by powerful actors (governments or big corporations) to flood the web with fake posts and “curate” what we see. By doing so, they would control public opinion and consumer behavior while masking the fact that real user engagement has essentially been replaced by automated propaganda. This claim of a coordinated AI-driven gaslighting campaign – essentially a conspiracy that “the U.S. government is engaging in an AI-powered gaslighting of the entire world” – became a hallmark of the Dead Internet Theory’s more extreme version.
In the lore of this theory, the “death” of the internet around 2016–2017 wasn’t due to a single event but is often linked to a few trends. Some point to the proliferation of social bots that by the mid-2010s were swarming sites like Twitter and Facebook, amplifying certain narratives. Others cite the observation that around that time, online content started feeling repetitive and soulless. People reported seeing the “same threads, the same pics, and the same replies reposted over and over” across different sites. It felt as if originality had drained from the web. The theory latched onto such observations and retroactively declared them evidence that real people (with original thoughts) were no longer the primary drivers of internet content.
Despite its fringe beginnings, the Dead Internet Theory didn’t remain in obscurity. A turning point for its visibility was a September 2021 article in The Atlantic by journalist Kaitlyn Tiffany, ominously titled “Maybe You Missed It, but the Internet ‘Died’ Five Years Ago.” While Tiffany approached the topic with healthy skepticism, the very fact that a reputable outlet like The Atlantic was discussing it lent the theory a burst of mainstream attention. In that piece, Tiffany described one proponent’s suggestion that “the internet died in 2016 or early 2017, and now it is ‘empty and devoid of people,’ as well as ‘entirely sterile’”, capturing the essence of IlluminatiPirate’s message. The article ultimately labeled the Dead Internet Theory “wrong but feels true,” acknowledging that the internet did seem to have changed in recent years, even if not literally “dead.” Nonetheless, coverage from The Atlantic and others marked the theory’s emergence from obscure forums into wider discourse.
A number of YouTube channels and internet commentators then picked up the topic, treating it sometimes as a creepy "what if" scenario and other times as a metaphor for real trends. Discussions popped up on the Linus Tech Tips forums (a popular tech community) and even on the Joe Rogan subreddit. These venues, with their large audiences, introduced the theory to a more mainstream, tech-savvy crowd. Over time, the term "Dead Internet" gained a foothold in the vernacular of internet culture. By the time generative AI started booming, the theory had evolved from a relatively niche conspiracy theory into a reference point in conversations about online authenticity.
Now, with that context laid out, we can break the original claims of the Dead Internet Theory down into a few core points.
#1 - Most Online Content is Fake
According to the theory’s originators, the vast majority of posts, articles, and even users online are bots or AI. Proponents believe human-generated content was overtaken by AI sometime after 2016, making much of today’s web essentially a ghost town populated by digital apparitions.
#2 - Coordinated Conspiracy
They claim this shift didn't happen by accident. Rather, shadowy organizations (fingers usually point to government agencies or big tech companies) deliberately deployed armies of bots to overwhelm human voices. The motives suggested include manipulating public opinion, boosting consumerism, and generally controlling the narrative by "curating" search results and social feeds to show only certain content.
#3 - The Internet is a Walled Garden/Illusion
Proponents often argue that what we see on search engines and social media is a carefully filtered Potemkin village, giving the impression of a vast, active internet, but in reality it’s the same recycled or approved content. Google might claim millions of search results for a query, but users realistically can only access a few dozen meaningful results, often from the same sources. The rest, they suspect, is either unindexed or meaningless filler. This ties into real phenomena like link rot (old content disappearing) and aggressive content moderation, which they interpret as part of a plot to limit what’s accessible.
#4 - Loss of Human “Vibe” Online
Less quantifiable but often mentioned is the feeling that interactions on the internet have lost their human touch. In forums or comment sections, one might notice formulaic, repetitive replies, as if scripted. Social media feeds became dominated by “stereotypical relatable posts” and viral copy-pasta tweets rather than genuine personal updates. This subjective sense of “soullessness”, that many users are just going through the motions or that trends are artificially amplified, is held up as a sign that the human element has been quietly excised, leaving an internet that is “dead inside” even if not literally devoid of humans.
It’s important to note that early on, the Dead Internet Theory was very much a conspiracy theory in tone. It wasn’t merely lamenting that there are a lot of bots online. It was asserting a grand secret plot behind the scenes. This is why many observers initially dismissed it outright as a kind of digital-age creepypasta, a spooky story to entertain internet forums. But parts of this theory kept resonating because they tapped into real changes and frustrations with the modern internet.
The Dead Internet Theory Today: Discourse, Evolution, and Evidence
Talk of the “dead internet” has now shifted from fringe forums to mainstream conversations about the state of the web. In fact, the term is sometimes invoked even by people who aren’t aware of (or invested in) the full conspiracy theory, a sign that it has evolved beyond its original context. Several developments in recent years have breathed new life into the discussion.
Explosion of AI-Generated Content
The late-2022 release of ChatGPT was obviously a watershed moment. Suddenly, average internet users had access to powerful text-generating AI, and the web saw a surge of AI-written essays, answers, and even spam. Journalists noted that the Dead Internet Theory suddenly felt more plausible in a world where chatbot-written content was everywhere. The theory’s earlier focus on government bots gave way to a broader concern. What if regular people (or spammers) flood the internet with auto-generated material to the point that genuine human content gets drowned out? By 2023, this was no longer a paranoid hypothetical. It was starting to happen. Even Google began sounding alarm bells that its search results were getting cluttered by “low-quality AI-generated content that’s designed to attract clicks”. In early 2024, Google announced changes to purge what one Googler called “AI trash” from search rankings, content created just to game SEO algorithms with no human value. This was confirmation that AI-driven content farms were on the rise, lending some credence to the “dead internet” worries (at least about content authenticity).
Bot Traffic Hitting New Highs
Every year, cybersecurity firms measure how much internet traffic comes from bots versus real users. The numbers have been eye-opening. By 2023, roughly half of all global web traffic was reported to be non-human. A study by Imperva found that 49.6% of internet traffic in 2023 came from bots, the highest level since they started tracking, and an increase over the previous year. About one-third of all traffic was from “bad bots” (those engaging in spam, ad fraud, hacking, etc). While not all these bots are creating content (some are just scraping data or automating clicks), the sheer scale of automated activity online is undeniable. It’s easy to see why people might look at those statistics and wonder how much of the content on the web, not just traffic, is bot-generated too. In any case, the bot presence supports one pillar of the Dead Internet Theory. The internet is crawling with non-human actors.
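To make those measurements a bit more concrete, here is a minimal, hypothetical sketch of how a crude bot-share estimate can be pulled from an ordinary web server access log. Real studies like Imperva's rely on much richer signals (behavioral analysis, browser fingerprinting, challenge responses); this toy checks only the self-declared User-Agent header, which catches well-behaved crawlers but none of the "bad bots" that disguise themselves. The regex patterns and sample log lines are invented for illustration.

```python
import re

# Hypothetical patterns; real bot lists are far longer and constantly updated.
BOT_UA_PATTERN = re.compile(r"bot|crawler|spider|scraper|curl|python-requests", re.I)

def bot_share(log_lines):
    """Return the fraction of requests whose User-Agent looks automated."""
    total = bots = 0
    for line in log_lines:
        total += 1
        # Combined log format puts the User-Agent in the last quoted field.
        ua = line.rsplit('"', 2)[-2] if line.count('"') >= 2 else ""
        if BOT_UA_PATTERN.search(ua):
            bots += 1
    return bots / total if total else 0.0

sample_log = [
    '1.2.3.4 - - [01/Jan/2024] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Windows NT 10.0)"',
    '5.6.7.8 - - [01/Jan/2024] "GET / HTTP/1.1" 200 512 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"',
]
print(f"Estimated bot share: {bot_share(sample_log):.0%}")  # 50% in this tiny sample
```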
Recycled and Monotonous Content on Social Platforms
Many user anecdotes bolster the “it all feels the same now” sentiment. On X, certain generic, emotionally relatable one-liners often go viral and then get copied by countless bot accounts or content farms. One example was the tweet “I hate texting I just wanna kiss u” (and many similar saccharine posts), which appeared word-for-word on multiple anonymous accounts, garnering tens of thousands of likes each time. This blurs the line between human and bot content. It could be humans plagiarizing a formula that works, or bots auto-posting the same popular phrases. Either way, it gives the timeline an automated vibe. As one article wryly noted, Twitter’s aggressive algorithmic curation in the late 2010s meant “hundreds of thousands of users [were being served] the same ‘relatable’ posts,” and people ended up behaving “like bots,” reposting and remixing the same basic quips over and over. The result is an online experience that feels depersonalized, even when real people are involved.
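As a side note, this kind of word-for-word duplication is fairly easy to surface empirically. Below is a small, hypothetical sketch: normalize each post's text and group identical strings, flagging any phrasing shared across several accounts. The post data is invented, and a real tool would also need fuzzy matching to catch lightly reworded variants.

```python
from collections import defaultdict

def find_copypasta(posts, min_accounts=3):
    """posts: iterable of (account_id, text). Returns texts reused by many accounts."""
    accounts_by_text = defaultdict(set)
    for account, text in posts:
        normalized = " ".join(text.lower().split())  # collapse case and whitespace
        accounts_by_text[normalized].add(account)
    return {t: accs for t, accs in accounts_by_text.items() if len(accs) >= min_accounts}

posts = [
    ("a1", "I hate texting I just wanna kiss u"),
    ("b2", "i hate texting   I just wanna kiss u"),
    ("c3", "I hate texting I just wanna kiss u"),
    ("d4", "an original thought, for once"),
]
print(find_copypasta(posts))  # the viral one-liner surfaces under three accounts
```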
Viral “AI Slop” and Bot Engagement
In 2023–24, a bizarre trend on Facebook underscored how far AI-generated nonsense can go. AI-generated images, often absurd by design, started going viral, accompanied by thousands of comments that were essentially gibberish or generic praise. One notorious example was an image of “Jesus” merged with a crab or a heap of shrimp (dubbed “Shrimp Jesus” or “Crab Jesus” by the internet). The image itself was a goofy, obviously AI-created mashup, but it brought in hundreds of “Amen” comments and reactions from users, many of which appeared to be bot accounts or at least indiscriminately automated responses. This “AI slop” content farmed engagement through sheer novelty and algorithm manipulation. Similar posts (Jesus mixed with other random elements like flight attendants or kids with art) each gathered enormous interaction, much of it suspected to be non-human. Here were bots posting AI-made content and other bots responding to it, creating the appearance of viral popularity. It was as if parts of Facebook had turned into bots talking to bots – a mini realization of the dead internet scenario.
Mainstream Discourse and Personalities
Unlike some conspiracy theories that remain niche, the Dead Internet Theory has permeated mainstream tech discourse in a fairly significant way. Aside from The Atlantic’s coverage, other major outlets and personalities have discussed it. The Guardian’s tech newsletter TechScape ran a piece in 2024 acknowledging that, while born of paranoia, this theory “has a morsel of truth to it.” The Guardian highlighted how discussions about the theory were growing thanks to a mix of “true believers, sarcastic trolls, and idly curious lovers of chitchat”, a typical cocktail in most forums. Tech reporters at Forbes, Fast Company, Business Insider, and Live Science have all published explainers or analyses of the theory, especially as AI content became a hot topic the last couple of years. On social media, talk of a “dead internet” pops up in TikTok explainer videos and X threads. Even communities not known for tech conspiracy have shown interest, often in a tongue-in-cheek way, asking “Is the internet actually dead?” This widespread attention has, in a sense, evolved the theory. It’s no longer solely about a secret government AI plot. It has become a catch-all reference to very real issues like spam bots, content homogenization, and the decline of the early internet’s vibrancy.
Early on, the Dead Internet Theory was heavy on unproven conspiracy and relatively light on concrete evidence. Now, many people are pointing to tangible trends and data (like the Imperva bot traffic reports or the flood of AI text/images) as partial validation. The discussion has also matured in some spaces. Instead of insisting “the internet is 100% fake,” many acknowledge that the theory is exaggerated, but it symbolically captures a genuine concern, that the internet is becoming less human-friendly and less human-driven than it used to be. In fact, by 2024 the term “dead internet” was sometimes being used loosely to describe the rise of LLM-generated content in general, without invoking the full conspiracy. Someone might half-jokingly say “the internet is dead” simply upon stumbling across a webpage clearly written by AI for SEO purposes, or when noticing their feed is full of cookie-cutter posts.
While the Dead Internet Theory remains unproven as a literal conspiracy, the online world has indeed moved in a direction that makes parts of the theory ring true. But which parts are credible or supported by evidence, and which parts veer into fantasy?
Separating Fact from Fiction: A Critical Assessment of the Theory’s Merits
It’s easy to dismiss the Dead Internet Theory as just another wild conspiracy, as it claims pretty much the entire internet is an elaborate fake. That’s a huge proposition. But the reason this theory hasn’t died off (despite many debunkings) is that it interweaves some real phenomena and valid critiques of today’s internet.
Bots Everywhere – True (with a Catch)
There is no doubt that bots play an enormous role online. Automated accounts and scripts are used for everything from posting spam and inflating follower counts to scraping websites and interacting on forums. About half of web traffic in 2023 was automated. Social media companies have acknowledged millions of fake accounts. Major events like elections saw bot armies pushing political propaganda. So the quantity of non-human activity is high. The Dead Internet theorists have that part right. However, where the theory exaggerates is in implying that almost all content is bot-generated. While bots might be half of traffic, they are not writing half of all tweets or articles. A lot of bot traffic is behind-the-scenes (crawlers, data harvesters) rather than content creators. And humans still comprise billions of users who produce vast amounts of genuine content every day, from YouTube videos and blogs to endless social media posts. The difference is that now those human voices compete with a whole lot of noise and manipulation. In short, bots are pervasive and getting more advanced, but they haven’t completely edged out humans (yet). The internet isn’t literally a ghost town. It’s more like a bustling city where a significant chunk of the “population” is robots mingling with the humans. This does create confusion and can distort perception (e.g. you might think an opinion is widely held because thousands of accounts echo it, not realizing many are fake). It’s a real problem, but not proof that the entire web is a scripted illusion.
AI-Generated Content – A Growing Concern, Not the End of Humanity
The last couple of years have indeed seen AI content creation explode. Automation can now produce text, images, video, and voice that passably mimic human-made media. It's no longer just spambots posting random gibberish. We have AI that can write blog posts that read almost like a person wrote them, or deepfake influencers that look and sound real. This lends some credibility to the Dead Internet Theory's warning that "the internet now consists mainly of automatically generated content." Researchers and tech experts have speculated about extreme scenarios. Futurist Timothy Shoup mused that if an AI like GPT were let loose uncontrolled, "99% to 99.9% of content online might be AI-generated by 2025–2030," making the internet unrecognizable. This is a dramatic forecast (and not a certainty), but it shows that knowledgeable people take the possibility seriously. So yes, the balance of content is tilting. We may well be heading toward a web where more content is AI-made than human-made.
However, saying “most of the internet is fake right now” is premature. AI content can often be spotted by inconsistencies, lack of true insight, or subtle glitches. It tends to be derivative (because it’s trained on human data), so AI alone hasn’t yet created anything that dominates culture the way human creativity can. Also, when AI content does proliferate (like those SEO articles or meme images), humans often respond by flagging or satirizing it. In effect, people are still actively engaging and discerning. So, while the trend is worrisome and aligns with the theory’s general direction, we’re not at an “all-AI internet” just yet. Importantly, economic incentives drive this flood. It’s cheap and easy to have AI churn out 1,000 clickbait articles for ad revenue, so of course some companies and individuals will do it. That doesn’t require a grand conspiracy, just opportunism. The Dead Internet Theory is right that money and algorithms are fueling a lot of low-quality content, but that’s more a story of capitalism and tech, not a covert government replacement of humans.
Decline in Organic Human Content and the “Soul” of the Web
This aspect is more subjective, but many people feel it. The web of the early 2000s or 2010s felt different: more personal blogs, niche fan forums, lively message boards, and quirky independent sites. Now, a huge chunk of online activity happens on a few centralized platforms (Facebook, X, Reddit, YouTube, TikTok, etc.), which encourages homogenization. When everyone is on the same mega-platforms, the content that trends tends to be what appeals to the masses (or to the platform’s algorithms). That often means formulaic, easily digestible posts. The Dead Internet Theory pinpoints this when it notes seeing the same memes and responses everywhere. That’s a real phenomenon. But again, the cause is likely not that humans are gone. It’s that humans adapt their behavior to what the algorithms favor. We end up with what Cory Doctorow calls “enshittification”, the process by which platforms gradually optimize for profit and in doing so make the content more repetitive and commercialized. Over time, genuine grassroots conversation can get drowned out by viral bait, corporate media, and bot-driven engagement.
So, there is a kernel of truth. Organic content creation by everyday users has, in some spaces, been overshadowed by curated or incentivized content. Consider how many people now create content with an eye on monetization or virality, writing the kind of clicky headlines or doing the latest TikTok challenge, as opposed to spontaneously sharing thoughts. Even news sites optimize for SEO and social media trends, churning out shallow articles because they “perform” well. This makes the internet feel less like a community of people and more like a mall run by attention algorithms. It’s a qualitative shift, not a literal disappearance of humans. The Dead Internet Theory interprets this shift as “the internet is empty of real people,” but a more grounded interpretation is “the internet is full of people behaving in copycat ways because they’re guided by the same algorithms.” One might say the internet isn’t dead – it’s just basic. Real humans are still here, we’re just all posting the same five jokes and listicles because that’s what the system nudges us to do. It’s a critique of online culture that resonates, even if the theory’s phrasing is hyperbolic.
Economic and Political Incentives for Fakery
The Dead Internet Theory correctly points out that there are strong incentives to deploy bots and fake content. For spammers and marketers, more clicks = more money, so why not unleash a thousand bot accounts to plaster links everywhere or auto-generate engagement? For political actors, steering public discourse via fake personas can be cheaper and more effective than traditional propaganda. And it’s not just nation-states. Domestic interest groups, companies astroturfing support for their products, scammers pushing crypto schemes, all have reasons to use automation to amplify their message. So yes, parts of the internet are astroturfed and manipulated. But is it coordinated to the point of killing organic content globally? That’s where evidence is lacking. There’s no credible proof of a secret government project that replaced most of the internet with AI. What we see is a more decentralized mess of many actors, some government-run, many profit-driven, each adding to the cacophony of fake or low-value content. No one is in total control. If anything, it often feels like things are spiraling out of any one entity’s control. Paradoxically, the “truth more sinister” might be that nobody is steering this ship. Instead, billions of clicks and views, governed by opaque algorithms, created a fertile ground for an economy of bots that no single authority fully understands or contains. That is less of a classic conspiracy and more of an emergent problem with our digital ecosystem.
Centralization and Curation by Tech Giants
One more element worth examining is the idea that big tech companies (like Google) are filtering what we see, creating a kind of illusion of a big open web while actually herding users into a limited set of content. And indeed companies do optimize what content they show. Google search prioritizes certain sites and buries others, and as the Dead Internet theorists note, it's true that despite Google saying "10 million results found," you effectively only browse the first page or two. Link rot and the archiving problem also mean a lot of older or niche content isn't easily accessible, giving the impression that only current, SEO-optimized content exists. Social media feeds are even more curated. You don't see a chronological list of all posts from all friends, you see what the algorithm thinks will engage you (or what advertisers have paid to place). This walled garden effect can certainly make the internet feel smaller and more manipulated. It's not much of a stretch for someone to imagine that undesirable viewpoints or independent creators are quietly being hidden while bots push the "approved" content. Some aspects of this have happened. There have been cases of platforms throttling or banning content deemed misinformation, etc., and plenty of benign content gets lost in the shuffle due to algorithm changes.
But the leap from “Big platforms curate and inadvertently favor cheap content” to “The internet is entirely a Potemkin village” is a big one. Despite centralization, the internet still has pockets of authenticity, niche communities, smaller forums, personal newsletters, open-source networks, where real people interact in meaningful ways. They may not dominate the traffic charts, but they exist and thrive. The merit in this part of the theory is a warning about over-reliance on a few gatekeepers (Google, Facebook, etc.) for information. If we only get our content through these funnel points, then yes, our view of the internet can be orchestrated and limited. That’s not a spooky AI plot. It’s the reality of corporate control and monetization. It’s fixable (or at least avoidable) if users diversify how they find information and if transparency increases.
Overall, the Dead Internet Theory takes legitimate issues like bot proliferation, AI content, the staleness that comes from algorithms and economic incentives, and strings them together into an overarching narrative that is more extreme than reality. There is no solid evidence that the whole internet is fake or that human engagement has vanished. On most days, if you talk to friends on an app, read a heartfelt blog post, or watch a YouTuber's live stream, you are experiencing authentic human communication. The internet as a whole is not "dead." But there are days when you might browse a popular site and feel like it's dead or dying, because all you see are repetitive clickbait articles, botted accounts squabbling in comments, and content that seems generated rather than created. That feeling makes the Dead Internet Theory "ridiculous, but eerily resonant". The theory isn't factually valid "in all reality," but it went viral because it "feels true" in spirit. We tend to latch onto metaphors and myths to explain our anxieties, and the myth of the internet's "death" is a compelling way to articulate the genuine worry that the online world is losing its humanity.
Even those who debunk the theory concede the underlying concerns. New Models founder Caroline Busta called the full conspiracy theory a “paranoid fantasy,” but in the same breath agreed with the “overarching idea” that the integrity of online content is in question. And the Guardian’s tech column noted that unlike many conspiracies, this one “has a morsel of truth to it.” The signals are telling us something is wrong (or at least weird) about our current internet experience.
Looking Ahead: The Internet’s Future in Light of “Dead Internet” Fears
It’s clear that the themes raised by the Dead Internet Theory – authenticity, AI proliferation, platform control, and user behavior – will remain front and center. Even though the theory itself may be an exaggeration, it’s spurred a valuable conversation. How can we keep the internet a place for real human connection and reliable information, in an era of rampant automation and monetization?
AI Content Will Continue to Proliferate – But So Will Detection and Differentiation
The cat is out of the bag with generative AI. We will continue to see even more sophisticated AI models producing content. They’ll be writing articles, generating social media posts, creating photorealistic images and videos, and engaging in chats autonomously. This could make the internet feel even “deader” in the sense that machine-made material might flood our feeds. However, awareness of this issue is also growing. We can expect a parallel rise in tools and practices to identify AI-generated content. Researchers are working on techniques like watermarking AI outputs and on AI detectors that can flag content as likely machine-written (though this is an arms race, as AI will also get better at mimicking human quirks). Big tech companies might start labeling AI-generated media on their platforms. In fact, some regulatory moves are already pushing in that direction. The European Union’s upcoming AI Act includes requirements to disclose deepfakes or AI-produced content in certain contexts, and a new law in California will, by 2026, mandate watermarking of AI-generated works in some applications. Platforms like Facebook have announced plans to label “organic” AI-generated posts to inform users when they’re looking at something made by a machine. The effectiveness of these measures remains to be seen, but the goal is to maintain a kind of authenticity signal in the digital environment. In a few years, it might be commonplace to see a little icon or note indicating “This post was created by AI,” just as we now see labels for advertisements or sponsored content. In an ideal world, that transparency would help users navigate an AI-saturated web and give human-made content a chance to stand out.
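To give a flavor of how the statistical side of watermark detection works, here is a toy sketch of the "green list" idea from the research literature. The assumptions are loud: real schemes operate on model tokens and logits during generation (not on finished words) and use secret keys; everything below is simplified for illustration. A keyed hash pseudo-randomly marks about half of all word transitions as "green"; a watermarking generator would bias its sampling toward green continuations, so a detector just checks whether the observed green fraction sits suspiciously above the 50% expected from unwatermarked text.

```python
import hashlib
import math

def is_green(prev_word: str, word: str, key: str = "demo-key") -> bool:
    """Pseudo-randomly assign ~half of all (prev, next) word pairs to the green list."""
    digest = hashlib.sha256(f"{key}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    """z-score of the observed green fraction vs. the 0.5 null for unwatermarked text.

    Watermarked text (sampled with a bias toward green words) scores well
    above zero; ordinary human text hovers near zero.
    """
    words = text.split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(prev, curr) for prev, curr in pairs)
    n = len(pairs)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

print(f"z-score: {watermark_z_score('the quick brown fox jumps over the lazy dog'):+.2f}")
```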
A Push for Authenticity and Human-Centric Design
There may also be a cultural and business shift towards valuing authentic, human content. Some content creators already proudly advertise that their work is “handmade” or done without AI assistance, appealing to an audience that craves a human touch. In the future, we might see platforms or communities spring up that are dedicated to verified human-only content. Imagine a forum where every member is verified as a real individual (perhaps via some proof-of-identity or a Turing test of sorts) and bots are aggressively banned. Mainstream platforms might also implement more stringent bot detection, not just for malicious bots but even for subtle astroturfing. There is consumer demand for more genuine interaction, and that could influence how products are built. We could see social networks return to emphasizing friends and family content over algorithmic trends, essentially making the experience more personal, so that a human voice (your friend’s post, your cousin’s photo) is front and center rather than a random viral tweet by an unknown account. In design terms, the pendulum might swing back slightly from hyper-optimized feeds to more curated or community-oriented spaces that foster real conversation. This is partly wishful thinking, but not impossible, especially if users start leaving platforms that feel too spammy or bot-ridden, creating an incentive for companies to course-correct.
Regulatory and Policy Actions
Governments are increasingly aware of issues like election interference via bots, AI-generated disinformation, and the general chaos a “bot internet” could unleash. We can expect more regulation aimed at transparency and accountability. Laws may require that bots operating on large platforms identify themselves as bots (some jurisdictions already have basic versions of this, but enforcement is lax). There’s also likely to be action on the consumer protection front. If AI-generated reviews are deceiving customers, or AI deepfake ads are scamming people, regulators could step in. On a bigger scale, antitrust or competition actions against mega-platforms could indirectly help with centralization issues (e.g. if Google’s search monopoly is challenged, alternative search engines with different philosophies might gain ground, giving people more choice in how they find content). The political will to tackle these issues tends to lag the technology, but by the late 2020s, after a few more election cycles and high-profile AI incidents, lawmakers will have no choice but to engage. The goal of smart regulation would be to preserve the open, innovative nature of the internet while mitigating the worst abuses of automation and centralization. It’s a tightrope walk. Over-regulate and you risk stifling free expression. Under-regulate and you get the Wild West of bots we have now. We may see experiments like requiring social media firms to verify a certain percentage of active users as real or to provide tools for users to filter out unverified accounts. Already, ideas like blue checkmarks for identity verification (not just for status) were floated to ensure you know you’re interacting with a real person. These kinds of measures, if implemented carefully, could address some Dead Internet Theory anxieties by assuring users that “there are humans here.”
Decentralization and the Rise of Alternative Platforms
One promising trend is the growth of decentralized and community-driven platforms as an antidote to Big Tech's dominance. Projects in the "Fediverse" (like Mastodon, an open-source X alternative) gained popularity, especially whenever mainstream platforms made controversial changes. These decentralized networks allow users to form smaller, interconnected communities with their own rules and moderation. Without a single algorithm trying to maximize engagement across a billion users, you often get a more chronological, human-scale interaction. It feels more like a traditional forum or email list. As the major platforms become inundated with AI spam and enshittification, a certain segment of users will migrate to these quieter digital neighborhoods. They won't replace Facebook or YouTube entirely, but they don't have to. Even if a few million people move to more open platforms, that creates a diverse ecosystem rather than a monolithic experience. Plus, interest in technologies like blockchain for content authenticity and peer-to-peer networks for sharing could underpin new solutions where no single entity controls the flow of information (making it harder for one agenda or a swarm of bots to dominate everything). We might also see a renaissance of smaller forums, private groups, and niche communities, a fragmentation of the social web back into interest-based clusters. This could help isolate and reduce the impact of widespread bot spam (a bot network that floods X can reach millions, but the same spam dropped into a well-moderated small forum would just get removed by a vigilant admin). In the early days of the internet, decentralization was the default (thousands of independent sites rather than five big ones). The future might rediscover some of that model, aided by modern tech but guided by an old principle: don't put all your digital eggs in one basket.
User Education and Behavior Changes
One more crucial factor is us, the users. As awareness grows, people might simply get smarter about distinguishing real from fake, and alter their behavior accordingly. This has happened before in smaller ways. Email users learned to spot obvious spam and phishing attempts over time, reducing their effectiveness. Likewise, more internet users may develop a sort of sixth sense for botted content (“this comment section looks fishy, probably a bot brigade”) and not take it at face value. We might also see a cultural shift where authenticity is valued over virality. Users might gravitate more towards creators who show verifiable personal involvement, or towards platforms that foreground direct human communication (like live video chats, voice chats, in-person meetups facilitated by online connections) as opposed to text posts that could be generated. It’s possible that being a “real human on the internet” could even become a kind of status symbol or selling point. On the flip side, some people won’t care. They’ll consume content whether it’s made by AI or human as long as it’s entertaining or useful. But those who do care will create demand for more genuine interactions.
Think of how, in a world of digital music, live concerts and vinyl records made a comeback. In an internet full of AI articles, maybe hand-written newsletters or human-curated discussion circles will gain prestige. Human creativity and spontaneity could become the premium version of content. If that happens, market forces might adjust. Companies may have to highlight human-created options (“real user reviews” versus “AI generated summary”) to cater to that preference.
In a more optimistic scenario, five years from now we might have an internet where:
AI is used responsibly and mostly in support roles (e.g. helping human creators, not replacing them wholesale),
Content that is important (news, personal communications) is authenticated or at least traceable to real people,
Bots are still around but largely corralled to less visible corners,
Users have more choice of platforms (some big, some small) depending on what experience they want,
and the average person online feels they can trust that there’s a human on the other end of their interactions.
Of course, the pessimistic scenario is that the tools to authenticate and the will to decentralize don’t materialize, and by 2030 we do indeed have an “internet of bots”, a noisy, spammy environment where meaningful human discourse migrates to private enclaves or goes offline entirely.
The Dead Internet Theory has no doubt sparked very live discussions about the quality and reality of our online spaces, and has served as a strong reminder of what many people fear the internet could become. Taking a close look at those fears leaves us better positioned to address them, and gives the internet of the future a better chance of remaining a place with real voices, real creativity, and real connection, in spite of all the bots and algorithms buzzing in the background.
This idea started in my brain, was hashed out with AI, and then heavily edited by yours truly.