Some Random Thoughts on Generative AI
A Fool With A Tool, etc.
I opened up Bluesky last Sunday morning and saw this post about a Generative AI tool deleting someone’s production DB. It gets worse if you read the whole thread; the cluelessness of this guy makes me start to lose my faith in humanity…
Replit, in case you’re not familiar with it, is an AI tool that advertises itself as “the safest place for vibe coding. Vibe coding makes software creation accessible to everyone, entirely through natural language” because it “turns your ideas into apps”.
And then deletes all your data, apparently. Caveat Emptor.
The real danger of AI
Some of the recent incidents with Grok (“MechaHitler”, indeed) illustrate one of the real dangers of AI. It isn’t that Generative AI is going to evolve into some sort of autonomous super-intelligence that poses existential risks to humanity and to the planet. The danger is that the organizations and governments that own and control AI systems will tweak and transform their output in order to influence and control users. Elon Musk’s clumsy and heavy-handed attempts to force Grok to conform to his desired viewpoints are so laughably obvious that you might be tempted to dismiss this possibility. But many people don’t understand the probabilistic nature of Gen AI. Instead they view it as a sort of oracle, and thus open themselves up to being subtly influenced and manipulated by whoever controls the AI they’re using.
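To make the probabilistic point concrete, here’s a toy sketch - mine, not any vendor’s actual code - of how a language model picks its next word. It samples from a probability distribution rather than looking up an answer; the scores below are invented for illustration.

```python
import math
import random

# Invented next-token scores after a prompt like "The capital of France is"
logits = {"Paris": 9.0, "Lyon": 5.5, "beautiful": 5.0, "not": 3.0}

def sample_next_token(logits, temperature=1.0):
    """Softmax over the scores with a temperature, then a weighted random draw."""
    scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    return random.choices(list(scaled), weights=list(scaled.values()))[0]

# Run it a few times: usually "Paris", but not always - and whoever controls
# the training, fine-tuning, or system prompt can silently reshape these odds.
print([sample_next_token(logits) for _ in range(5)])
```

The output is a weighted dice roll, and whoever owns the weights owns the dice. That’s a very different thing from an oracle.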
Also, the news that Grok is now being enabled on some Teslas and that Grok will be used by the Department of Defense does not inspire me with confidence…
History doesn’t repeat itself, but it sure does rhyme
My best guess is that the Generative AI boom ends up a lot like the dotcom boom and bust - a lot of companies invest heavily in AI, fail to develop sustainable revenues, and crash and burn in spectacular fashion. It will be a bonanza for tech journalists who write books about the coming AI-pocalypse, but maybe not so good for the rest of us.
Companies like OpenAI and Anthropic have burn rates in the billions of dollars with no clear indication of when they might turn profitable. They need both to add many new paying customers and to charge those customers more to have any hope of getting there. And customers who’ve been getting access to LLMs for free or at very low cost may not like paying more - especially when the hype around things like vibe coding and Agentic AI isn’t living up to the reality. An early indicator is the backlash Cursor sparked among developers when it changed its pricing model.
A wider AI backlash has been brewing even as the hype is still growing. My experience trying to use AI in my own work can be summed up as “meh” - sometimes it’s helpful, usually it overcomplicates things, I always need to edit and improve its output, and sometimes it’s just annoying and breaks my concentration. My initial take was that it was the equivalent of an intern who’s memorized everything but doesn’t actually understand anything, and that still holds true. I’m certainly not willing to pay a premium price for this stuff anytime soon. Lots of companies will - for a while - until they see that the investment isn’t worth the return.
I wrote the two paragraphs above before I read the post about Replit deleting a production database. There is now a site called IncidentDatabase.AI that’s up to 1,147 incidents as I write this. I’m not a complete AI skeptic - I’m using AI to help write an application for me right now, after all - but I think it’s still very immature. And in the hands of fools like the Replit guy it’s like using a chainsaw while blindfolded. The adoption cycle of a significant technology, from introduction to maturity, usually takes at least a decade. We have to learn what the new technology is actually good for - what it does well and what it doesn’t - and we have to learn how to use it effectively. Maybe AI is so amazing that it accelerates this cycle, but the cycle still has to play itself out.
So I think the bursting of the AI bubble is inevitable, and it will have effects similar to the dotcom crash. Out of the ashes (companies going under, layoffs, recession, falling stocks, etc.), a few companies will survive and enough developers will have learned how to use AI effectively. And those uses will be both a lot more mundane and a lot more useful than the current hype. I think it was Esther Dyson who said something really pertinent during an earlier AI hype cycle, back in the late ’80s or early ’90s. I don’t remember the exact quote, but it was something like “when the things we call artificial intelligence work, they’re no longer artificial intelligence - they’re just something computers do”. And I think that’s what will actually happen: AI will be embedded in all our technologies, and we’ll stop calling it AI.
AI is no different from any of the other major advances in computing - it’s a rise in the level of abstraction, as profound as (but no more profound than) the microprocessor, high-level languages, or the internet. I expect to see things like the use of templated, structured English (or any other language) as an AI programming language and built-for-purpose small language models/agents for specific domains like Python programming or creating legal briefs or drug discovery or real estate marketing or predictive maintenance or any number of other things. Those small language models and agents won’t be noticeable because they’ll be embedded in whatever tool someone is using. The key thing is that they’ll still be tools in the hands of people, and even the highest-quality, most sophisticated tools still depend on the skill of the user. AI will be pervasive but in the background, automating the mundane and assisting us in our work. There will never be good substitutes for human judgement, decision-making, and taste. But there will be plenty of bad ones, and that will be a hard lesson for all the people gullible enough to uncritically swallow the AI hype.
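For what it’s worth, here’s a purely speculative sketch of what I mean by templated, structured English driving a domain-specific model. Everything in it - the template, the `small_legal_model` stub - is hypothetical, a stand-in for some future built-for-purpose tool.

```python
# Purely speculative: "structured English" as the programming interface,
# with a built-for-purpose small model hidden behind an ordinary function.

TEMPLATE = """\
TASK: Summarize the contract clause below for a non-lawyer.
CONSTRAINTS:
  - at most {max_sentences} sentences
  - flag any indemnification language
CLAUSE: {clause}
"""

def small_legal_model(prompt: str) -> str:
    """Stub standing in for a hypothetical domain-tuned small language model."""
    return "(summary would appear here)"

def summarize_clause(clause: str, max_sentences: int = 3) -> str:
    # The user never sees the model or the template; they just click a
    # "summarize" button in whatever tool they already work in.
    return small_legal_model(TEMPLATE.format(clause=clause, max_sentences=max_sentences))

print(summarize_clause("The vendor shall indemnify the client against..."))
```

The point of the sketch: the model disappears into the tool, and the “programming” is just structured, constrained English.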
A final word: