At the speed of the internet, it’s already yesterday’s news that Google laid off workers from its ad sales division in favour of using various LLMs and RNNs to optimize ad sales. From an immediate optimization perspective it is, in fact, a no-brainer. An LLM can generate hundreds of thousands of ad copy variations, reacting instantly to trends in clicks and searches. A human just can’t compete. And that’s on speed alone, never mind compensation (and therefore profit margin).
Thing is, this is a sort of digital tragedy of the commons. Google will be serving computer-optimized versions of its ad text, digitally regurgitating amalgamations of prior ads until some function has been maximized for the current profit potential. And that’s it. The model can’t create a new work. The model can’t invent a new ad concept. The model won’t coin a new analogy, allegory or sales pitch. There won’t be any new rainbows to taste. There will never be a new most interesting man in the world. We won’t find any new beef.
Well, someone else might. A smaller firm, either because it’s not as versed as Google in the art of large models, or because it needs to differentiate itself, or because for whatever reason it has promised not to use generative models. They’re the losers: they do the hard work and publish a unique piece of text, only to have the models vacuum up their creativity.
I think we’re going to be in a bit of a rut for the next while, as this percolates through the legal and creative systems. Right now, generative models can regurgitate enough content to seem new, but they’ll struggle to actually coin things, to generate, to lead. The economic activity that made GenAI possible will lock down, cutting off future training data and making fresh data prohibitively expensive or poor in quality.
In the end, the incentives are stacked against human creation. Making something new is hard. Writing this blog post took me probably more than an hour of thought, from ideation, to reading other people’s work, to trying to formulate my own opinion. That takes time. It isn’t profitable. And so it’ll be avoided when possible. To illustrate, I’ll end by cross-posting a blog post on this same topic that I asked ChatGPT to generate. It took just two minutes to spit out vaguely impressive-sounding garbage, the digital equivalent of high-fructose corn syrup: it tastes sweet, but in the end it’s just a sham.
Title: Navigating the Digital Tragedy of the Commons: The Generative AI Dilemma
Introduction

In the rapidly evolving landscape of digital content creation, the advent of generative artificial intelligence (GenAI) has ushered in a new era. This technology, capable of producing vast quantities of content at unprecedented speeds, has become a double-edged sword. While it promises efficiency and innovation, it also poses a significant challenge, akin to a digital tragedy of the commons. This phenomenon occurs when the overuse of a shared resource—in this case, the digital content space—leads to depletion or degradation. In the context of GenAI, the resource at risk is the originality and authenticity of human-generated content.
The Rise of Generative AI

GenAI tools have revolutionized content creation, offering businesses the ability to generate articles, images, and even code at a fraction of the time it takes humans. This efficiency is not just about speed; it’s also cost-effective, making it an attractive option for companies looking to maximize output while minimizing expenses. The allure of GenAI is undeniable, but it’s not without its consequences.
The Impact on Human Creativity

As companies increasingly rely on GenAI for content creation, there’s a noticeable shift away from human-generated works. This shift isn’t just about the quantity of content being produced; it’s about the quality and uniqueness of that content. Human creativity is characterized by its ability to think outside the box, to infuse personal experiences and emotions into the work. GenAI, while impressive, operates within the parameters of its programming and the data it’s been fed. It lacks the nuanced understanding and personal touch that human creators bring to the table.
The Oversaturation of AI-Generated Content

The market is rapidly becoming saturated with AI-generated content, leading to a homogenization of the digital landscape. This saturation is akin to overgrazing in a common pasture, where the resource (in this case, the audience’s attention and the freshness of content) is being depleted. The uniqueness of content is getting lost in a sea of AI-generated material, making it increasingly challenging for original human-created works to stand out.
The Long-Term Consequences

If this trend continues, we might witness a significant decline in human-driven creativity in the digital realm. The incentive for individuals to invest time and effort into creating original content diminishes when AI-generated content dominates the landscape. This could lead to a cultural shift where the value and appreciation of human creativity are undermined, potentially stifling innovation and diversity in content creation.
Conclusion

As we stand at this crossroads, it’s crucial to find a balance. GenAI has a role to play in supporting and enhancing human creativity, not replacing it. The challenge lies in ensuring that GenAI is used responsibly, complementing rather than overshadowing human efforts. By recognizing the potential pitfalls and taking proactive steps to mitigate them, we can prevent the digital tragedy of the commons and preserve the vibrancy and diversity of human-generated content in the digital age.