There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.
What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.
The fact that AI evangelists have the gall to call everyone who disagrees with them “luddites” is absolutely astounding to me. It’s a word I see people like you throw around over and over again.
And before you heap the same nonsense on me, I use AI and have for years. But the entire discourse by “advocates” is quarter-baked, pretentious, and almost religious. It’s bizarre. These are just tools, and people calling for us to think about how we use these tools as more and more ethical issues arise are not “luddites.” They are not halting progress. They are asking reasonable questions about what we want to unleash on ourselves. Meanwhile nothing is stopping you or me from using LLMs and running our own local instances of ChatGPT-like systems. Or whatever else we can come up with. So what is the problem?
Imagine if we had taken an extra five minutes before embracing Facebook and all the other social media that came to define “Web 2.0.” Maybe things could be slightly better. Maybe we wouldn’t have as big a radicalization and siloing problem. But we don’t know, because anyone who dares to even ask “should we do this?” in the tech world is treated like they need to be sent to a retirement home for their own safety. It’s anathema, it’s heresy.
So once again: What is the problem? What are those people doing to you? Why are they so threatening? Why are you so angry, and why are you insulting them?
I feel like we are just entering the new iteration of crypto bro culture. 
There are definitely people who are harmed by FUD like this. Take the current writers strike: 11,000 people have put down tools indefinitely, shutting down movie productions worldwide that employ millions of people and leaving them unemployed for who knows how long.
I stand with my colleagues in the WGA and SAG-AFTRA. WGA members’ support for the strike is near unanimous, as is SAG-AFTRA’s (97.91%). Do not speak on things you don’t understand, and definitely don’t leverage the collective action of those of us in the film industry against our own interests to make some oblique argument about AI.
I don’t have anything against you or your colleagues. You’ve got every right to strike if that’s what you want to do.
But there are millions of people being harmed by the strike. That’s a simple fact.
Journalists and commentators need to do their jobs and provide good, balanced information on critical issues like this one. FUD like Drew DeVault posted inflames the debate and makes it nearly impossible for reasonable people to figure out what to do about large language models… because like it or not, they exist, and they’re not going away.
PS: while I’m not a film writer, I am paid to spend my day typing creative works and my industry is also facing upheaval. I also have friends who work in the film industry, so I’m very aware and sympathetic to the issues.
Unrestricted AI usage without creative attribution and runaway studio power are harming them. The strike is a result of that. The strike isn’t happening because they’re luddites about AI. They know exactly what it’s capable of. Your argument isn’t grounded in reality; it’s just you piling assumption on top of assumption.
You aren’t dumb, clearly, yet you are acting ignorant of the issue and being so reductionist it’s borderline dishonest, especially if you are familiar with the industry and its stated grievances.