One of my former (and very long-term) freelance gigs, How Stuff Works, has replaced writers with ChatGPT-generated content and also laid off its excellent editorial staff.
It seems that going forward, when articles I wrote are updated by ChatGPT, my byline will still appear at the top of the article, with a note at the bottom saying that AI was used. So it will look as if I wrote the article using AI.
To be clear: I did not write articles using ChatGPT.
#AI #LLM #ChatGPT
This seems really short-sighted. Why would I go to How Stuff Works when I can just ask the LLM myself?
Maybe there’s just no possible business model for them anymore with the advent of LLMs, but at least if they focused on the “actually written by humans!” angle there’d be some hook to draw people in.
The thing is, the LLM doesn’t actually know anything, and lies about it.
So you go to How Stuff Works now, and you get bullshit lies instead of real information. You’ll also get nonsense that looks like language at first glance but is gibberish pretending to be an article, because sometimes the language model changes topics midway through and doesn’t correct itself. It can’t correct itself; it doesn’t actually know what it’s saying.
See, these language models are pre-trained; that’s the P in ChatGPT. They just regurgitate the training data, put together in ways that sort of look like more of the same training data.
There are some hard-coded filters and responses, but other than that, nope, just a spew of garbage out from the random garbage in.
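To make the “regurgitation” point concrete, here’s a deliberately tiny sketch of the idea: a bigram model rather than a real transformer, with a made-up training sentence. Real models predict the next token over enormous corpora, but the generation loop has the same shape, and crucially there is no step anywhere that checks whether the output is true.

```python
import random
from collections import defaultdict

# Toy "language model": a bigram table that only records which word
# followed which in the training text. No facts, just word adjacency.
training_text = "the engine turns the crankshaft and the crankshaft turns the wheels"

follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Pick any continuation seen in training: plausible-looking,
        # with no check that the sentence is true or even on-topic.
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
# e.g. "the crankshaft and the engine turns the wheels"
```

The output is fluent recombination of the training data, which is the whole trick, scaled up billions of times.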
And yet, all sorts of people think this shit is ready to take over writing duties for everyone, saving money and winning court cases.
Yeah, this is why I can’t really take anyone seriously when they say it’ll take over the world. It’s certainly cool, but it’s always going to be limited in usefulness.
Some areas I can see it being really useful are:
- generating believable text: scams, placeholder text, and general structure
- distilling existing information, especially if it can actually cite sources, though even then I’d take it with a grain of salt
That’s about it.
It isn’t going to take over; it’s being put in control by idiots.
LLM-generated scams are going to be such a problem. Quality isn’t even an issue there, as scammers specifically target people with poor awareness of these scams, and having a bot that responds with reasonable dialogue will make it that much easier for people to buy in.
AI tools can be very powerful, but they usually need to be tailored to a specific use case by competent people.
With LLMs it seems to be the opposite: people with no competence in ML are applying them to the broadest of use cases. It’s just that the output looks so good that they are easily fooled and lack the understanding to realize its limits.
But there is one very important use case too:
Writing stuff that is only read and evaluated by similar AI tools. It makes sense to write cover letters with ChatGPT because they are demanded but never read by a human on the other side of the job application. Since the weights and stuff behind these tools seem to be similar, writing the letter with ChatGPT helps it pass the automatic analysis.
Rationally that is complete nonsense, but you basically need an AI tool to jump through the hoops made by an AI tool applied by stupid people who need to make themselves look smart.
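As a sketch of the hoop-jumping being described: if the first “reader” is an automated keyword screen, fluent buzzword coverage is all that gets scored. Real applicant-tracking systems are proprietary, so this filter and its keywords are entirely hypothetical, but it shows why machine-written text can pass a machine reader:

```python
import re

# Hypothetical keyword screen, purely for illustration; real ATS scoring
# is proprietary and unknown. It counts required-keyword coverage only.
def keyword_score(cover_letter: str, required_keywords: set[str]) -> float:
    words = set(re.findall(r"[a-z]+", cover_letter.lower()))
    hits = required_keywords & words
    return len(hits) / len(required_keywords)

keywords = {"synergy", "stakeholder", "agile", "leadership"}
letter = "I bring agile leadership and close stakeholder alignment..."
print(keyword_score(letter, keywords))  # 0.75 -- buzzword coverage is all that counts
```

A model trained to produce statistically typical cover-letter text lands those keywords almost by construction.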
I’ve graded papers from students who obviously used ChatGPT to write them. They were a pass at best: zero critical synthesis of ideas or application of them to the topic. I’m sure ChatGPT has its uses, but people really overhype its writing ability. There’s more to writing than putting words in the right places.
Absolutely. Creating new documentation will always be a human sport.
It could become an AI sport when we actually have a general-purpose AI. Based on the people working on LLMs and GPT, that would take somewhere between six years and never.
It’s not easy to create a super AI that’s realistically smarter than humans in every aspect.
Just like the mutant Olympics that we have today.
I mean, I would say “regurgitating their training data” is putting it a bit too simply. But it’s true, we’re currently at the point where the AI can mimic real text. But that’s it: no one tells it not to lie right now. The programmatic goal of the AI is to be indistinguishable from real text, with no bearing on the truthfulness of the information whatsoever.
Basically we train our AIs to pretend to know, not to know. And sometimes it’s good at pretending, sometimes it isn’t.
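For what “indistinguishable from real text, with no bearing on truthfulness” means mechanically: the standard pre-training objective only scores the probability the model assigned to the token that actually came next in the training text. A toy sketch (the prompt and all probabilities below are invented for illustration):

```python
import math

# Sketch of the pre-training objective: the model is scored only on the
# probability it assigned to the token that actually came next in the
# training text. Truth never enters the equation.
def next_token_loss(predicted_probs: dict[str, float], actual_next: str) -> float:
    # Cross-entropy for a single step: -log P(actual next token)
    return -math.log(predicted_probs[actual_next])

# Invented model output for the prompt "The capital of Australia is":
predicted_probs = {"Sydney": 0.6, "Canberra": 0.3, "Melbourne": 0.1}

# If the training sentence happened to say "Sydney", the false answer
# gets the lower (better) loss and the true one is penalized.
print(next_token_loss(predicted_probs, "Sydney"))    # ~0.51
print(next_token_loss(predicted_probs, "Canberra"))  # ~1.20
```

So “pretending to know” is exactly what the loss function optimizes for.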
The “right” way to handle what the CEOs are doing would be to let go of a chunk of the staff, then let the rest write their articles with the help of ChatGPT. But most CEOs are a bit too gullible when it comes to the abilities of AI.
Literally predictive text but for whole articles.
It doesn’t know the limits of its knowledge, or indeed know anything. It just “knows” what an answer smells like. It even “knows” what excuses are supposed to look like when you call it out.
This is a very good write-up about how ChatGPT works.
The thing is, the LLM doesn’t actually know anything, and lies about it.
Just like your average human journalist. If you ever read an article in a non-specialist journal on a topic you are familiar with, you know. This actually seems to be where LLMs are very similar to how the human brain works: if we don’t know something, we come up with some bullshit.
Even middling human writers can comprehend their work as a whole, though. There is a cohesiveness even to the bullshit. The LLM is just putting down words that match the prompt. It’s RNG-driven, readable Lorem Ipsum.
If the results were still edited afterwards, there may be some merit to the output, but any company going full LLM isn’t looking for quality. They want to use it to churn out endless content that they simply can’t get from even a team of humans. More than could be edited even if they kept editors on staff.
Sure, but a lot of humans are rather bad writers.
but any company going full LLM isn’t looking for quality.
That is true for the 24-hour news cycle of online media, regardless of LLMs.
Bad writing is still a step above RNG junk, imo.
Yes, that was my point. Setting up your company to put out more content than can possibly be processed by humans is a glaring sign of their values, i.e. quantity far above quality.
I’ve read writing worse than GPT’s. I had to help someone write an essay, and I just wrote it for him in the end, because he absolutely lacked the skills to write a long, meaningful text. And at the same time, he’s a genius of a percussionist.
Do you think that person was signing up for jobs writing for blogs or content farms?
Have you read some low-quality journalism? The whole yellow press could be replaced with GPT and no one would ever see a difference.
So modern journalists were redundant all along?
But yeah, the quality of what passes as journalism now is often ridiculous. The only way to combat this is by having editors who are knowledgeable about topics. But it seems editors were the first people laid off when internet articles became a thing.
The 24-hour news cycle of online media creates junk journalism on a new level. Good journalism needs time and can’t spit out news articles every minute of the day. Editors won’t help, because it’s just not possible to do good journalism at that scale. But yeah, in general with AI, the jobs will shift more toward editing, which will be extremely soul-draining: going through tons of AI-generated bullshit.
It’s a combination of three things:
1- most people still google things;
2- the more content you have the more organic traffic you’re likely to attract from Google;
3- displaying ads on your website makes you money.
Websites full of LLM generated content are just the natural continuation of MFAs (Made For AdSense).
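Some hedged back-of-the-envelope arithmetic for why that pipeline favors volume over quality (every number below is invented):

```python
# Back-of-the-envelope math for the MFA model (all figures made up):
# revenue scales with pageviews, pageviews scale with article count,
# so marginal cost per article becomes the only lever that matters.
articles = 10_000
views_per_article_per_month = 50   # hypothetical long-tail search traffic
rpm = 2.00                         # hypothetical ad revenue per 1,000 views

monthly_views = articles * views_per_article_per_month
monthly_revenue = monthly_views / 1000 * rpm
print(f"${monthly_revenue:,.2f} / month")  # $1,000.00

# At near-zero marginal cost per LLM article, volume wins even at junk quality.
```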
This reminds me of the short story “The Great Automatic Grammatizator” by Roald Dahl. In the story, a machine is invented that can write great stories, but its creators go around buying the naming rights of authors so people will actually buy their books.
What?
I think I meant buy. I’ve edited the comment. That said, after rereading the story last night, the reason they buy the rights to authors’ names is to eliminate competition and maximize profits.
Here it is if you’re interested. It’s a great read.
Correct me if I’m wrong, but isn’t AI-generated content not copyrightable? Therefore nothing is stopping someone from taking all their content, rebranding it as “How Stuff Really Works” or something, and then stealing their business and ad revenue.
An LLM cannot create new concepts; it can only create a mishmash of the things it has been fed.
Humans aren’t much different. 99.9% of what we create is just a remix of existing parts/ideas. It’s why people spend 12-20 years pre-training on all the existing knowledge in the field they’re going to work in.
It’s completely different. We can come up with new ideas, language models can’t.
Isn’t that exactly how howstuffworks operates though?
You are what you eat. So kind of?
Just like Hollywood!