There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.

What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.

AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.

  • ReCursing@kbin.social · 1 year ago

    Top quality luddite opinions right here. Plenty of fear and opprobrium being directed against the technology, while taking the kleptocratic capitalism and kakistocracy as a given that can’t be challenged.

    • GenderNeutralBro@lemmy.sdf.org · 1 year ago

      That seems to be the theme of the era.

      Yes, it is incompatible with the status quo. That’s a good thing. The status quo is unsustainable. The status quo is on course to kill us all.

      The only real danger AI brings is it will let our current corrupt leaders and corrupt institutions be more efficient in their corruption. The problem there is not the AI; it’s the corruption.

      • Umbrias@beehaw.org · 1 year ago

        Improving human efficiency is essentially the purpose of technology after all. Any new invention will generally have this effect.

          • Umbrias@beehaw.org · 1 year ago

            What? It’s the sociological definition of technology. A cultural tool which is used by a community to make some task xyz easier, faster, or more efficient.

            Efficiency is an extremely broad term.

            What’s your counter definition of technology and efficiency that is leading you to disagree?

      • lol3droflxp@kbin.social · 1 year ago

        These are easily avoidable problems. There are always reputable authors on a given topic, and why would a self-published foraging book by some random person be any better than an AI one? You buy books written by experts, especially when it’s about life or death.

        • norb@lem.norbz.org · 1 year ago

          “Easily avoidable” if you know to look for them or if they’re labelled appropriately. This was just an example of a danger that autocomplete AI is creating today. Unscrupulous people will continue to shit out AI generated nonsense to try to sell when the seller does zero vetting of the products in their store (one of the many reasons I no longer shop at Amazon).

          Many people, especially beginners, are not going to take the time to fully investigate their sources of knowledge, and to be honest they probably shouldn’t have to. If you get a book about mushrooms from the library, you can probably assume it’s giving valid information as the library has people to vet books. People will see Amazon as being responsible for keeping them safe, for better or worse.

          I agree that generally there is a bunch of nonsense about ChatGPT and LLM AIs that isn’t really valid, and we’re seeing some amount of AI bubble happening where it’s a self feeding thing. In the end it will shake out, but before that all happens you have some outright dangerous and harmful things occurring today.

        • abraxas@beehaw.org · 1 year ago

          I think the idea is that someone buying a basic book on foraging mushrooms isn’t going to know who the experts are.

          They’re going to google it, and they’re going to find AI-generated reviews (with affiliate links!) of AI-generated foraging books.

          Now, if said AI is generating foraging books more accurate than humans, that’s fine by me. Until that’s the case, we should be marking AI-generated books in some clear way.

          • norb@lem.norbz.org · 1 year ago

            Now, if said AI is generating foraging books more accurate than humans, that’s fine by me. Until that’s the case, we should be marking AI-generated books in some clear way.

            The problem is, the LLM AIs we have today literally cannot do this because they are not thinking machines. These AIs are beefed-up autocompletes without any actual knowledge of the underlying information being conveyed. The sentences are grammatically correct and read (mostly) like we would expect human written words to read, however the actual factual content is non-existent. The appearance of correctness just comes from the fact that the model was trained on information that was (probably mostly) correct in the first place.

            I mean, we should still be calling these things algorithms and not “AI” as “AI” carries a lot of subtext in people’s minds. Most people understand “algorithms” to mean math, and that dehumanizes it. If you call something AI, all of a sudden people have sci-fi ideas of truly independent thinking machines. ChatGPT is not that, at all.

            • abraxas@beehaw.org · 1 year ago

              I agree. And ML may never be able to cross that line.

              That said, we’ve been calling it AI for decades now. It was weird enough to me when people started using ML more. I remember the AI classes I took in college, and the AI experts I met in my jobs. Then one day it was “just ML”. In most situations, it’s the same darn thing.

    • Gaywallet (they/it)@beehaw.org · 1 year ago

      taking the kleptocratic capitalism and kakistocracy as a given that can’t be challenged.

      It’s literally baked into the models themselves. AI will reinforce kleptocratic capitalism and kakistocracy, as you so aptly put it, because the very data it’s trained on is a slice of the society it resembles. People on the internet share bad, racist opinions and the bots trained on this data do the same. When AI models are put in charge of systems because it’s cheaper than putting humans in place, the systems themselves become entrenched in the status quo. The problem isn’t so much the technology itself, but how the technology is being rolled out, driven by capitalistic incentives, and the consequences that brings.

    • jatone@reddthat.com · 1 year ago

      *snicker* drewdevault is an avid critic of capitalism. that’s entirely the point of this post, actually.

    • lazyraccoon@lemmy.ml · 1 year ago

      Thank you for educating me twice: Kleptocracy and kakistocracy. Now I can refer to my government with astute Greek terminology!

    • acastcandream@beehaw.org · 1 year ago

      The fact that AI evangelists have the gall to call everyone who disagrees with them “luddites” is absolutely astounding to me. It’s a word I see people like you throw around over and over again.

      And before you heap the same nonsense on me, I use AI and have for years. But the entire discourse by “advocates” is quarter-baked, pretentious, and almost religious. It’s bizarre. These are just tools, and people calling for us to think about how we use these tools as more and more ethical issues arise are not “luddites.” They are not halting progress. They are asking reasonable questions about what we want to unleash on ourselves. Meanwhile nothing is stopping you or me from using LLMs and running our own local instance of ChatGPT-like systems. Or whatever else we can come up with. So what is the problem?

      Imagine if we had taken an extra five minutes before embracing Facebook and all the other social media that came to define “Web2.0.” Maybe things could be slightly better. Maybe we wouldn’t have as big of a radicalization/silo-ing issue. But we don’t know, because anyone who dares to even ask “should we do this?” in the tech world is treated like they need to be sent to a retirement home for their own safety. It’s anathema, it’s heresy.

      So once again: What is the problem? What are those people doing to you? Why are they so threatening? Why are you so angry and insulting them?

      I feel like we are just entering the new iteration of crypto bro culture. 

      • abhibeckert@beehaw.org · 1 year ago

        What are those people doing to you?

        There are definitely people who are harmed by FUD like this. For example, the current writers’ strike, which has 11,000 people putting down tools… indefinitely shutting down global movie productions that employ millions of people and leaving them unemployed for who knows how long.

        • acastcandream@beehaw.org · 1 year ago

          I stand with my colleagues in the WGA/SAG-AFTRA. Their support for the strike is near unanimous. As is SAG-AFTRA’s (97.91%). Do not speak on things you don’t understand, and definitely don’t leverage the collective action of those of us in the film industry against our own interests to make some oblique argument about AI.

          • abhibeckert@beehaw.org · 1 year ago

            I don’t have anything against you or your colleagues. You’ve got every right to strike if that’s what you want to do.

            But there are millions of people being harmed by the strike. That’s a simple fact.

            Journalists/etc need to do their job and provide good, balanced information on critical issues like this one. FUD like Drew DeVault posted inflames the debate and makes it nearly impossible for reasonable people to figure out what to do about Large Language Models… because like it or not, they exist, and they’re not going away.

            PS: while I’m not a film writer, I am paid to spend my day typing creative works and my industry is also facing upheaval. I also have friends who work in the film industry, so I’m very aware and sympathetic to the issues.

            • acastcandream@beehaw.org · 1 year ago

              Unrestricted AI usage without creative attribution and runaway studio power are harming them. The strike is a result of that. The strike isn’t happening because they’re luddites about AI. They know exactly what it’s capable of. Your argument isn’t grounded in reality and is just you piling assumption on top of assumption.

              You aren’t dumb, clearly, yet you are acting ignorant of the issue and being so reductionist it’s borderline dishonest. Especially if you are familiar with the industry and its stated woes.

  • lily33@lemm.ee · 1 year ago

    You could have said the same for factories in the 18th century. But instead of the reactionary sentiment to just reject the new, we should be pushing for ways to have it work for everyone.

    • Jummit@lemmy.one · 1 year ago

      I don’t see how rejecting 18th century-style factories or exploitative neural networks is a bad thing. We should have the option of saying “no” to the ideas of capitalists looking for a quick buck. There was an insightful blog post that I can’t find right now…

    • TwilightVulpine@kbin.social · 1 year ago

      Lets not forget all the exploitation that happened in that period also. People, even children, working for endless hours for nearly no pay, losing limbs to machinery and simply getting discarded for it. Just as there is a history of technology, there is a history of it being used inequitably and even sociopathically, through greed that has no consideration for human well-being. It took a lot of fighting, often literally, to get to the point we have some dignity, and even that is being eroded.

      I get your point: it’s not the tech, it’s the system. And while I’ve lost all excitement for AI, I don’t think that genie can be put back in the bottle. But if the whole system isn’t changing, we should at least regulate the tech.

      But AI will eliminate so many jobs that it will affect a lot of people, and strain the whole system even more. There isn’t a “just become a programmer” solution to AI, because even intellectually-oriented jobs are now on the line for elimination. This won’t create more jobs than it takes away.

      Which shows why people are so fearful of this tech. Freeing people from manual labor to go to intellectual work was overall good, though in retrospect even then it came at a cost of passionate artisans. But now people might be “freed” from being artists to having to become sweatshop workers, who can’t outperform machines so their only option is to undercut them. Who is being helped by this?

      • lily33@lemm.ee · 1 year ago

        Yes, I know about the exploitation that happened during early industrialization, and it was horrible. But if people had just rejected and banned factories back then, we’d still be living in feudalism.

        I know that I don’t want to work a job that can be easily automated, but intentionally isn’t just so I can “have a purpose”.

        What will happen if AI were to automate all jobs? In the most extreme case, where literally everyone lost their job, then nobody would be able to buy stuff, but also, no company would be able to sell products and make profit. Then, either capitalism would collapse - or more likely, it will adapt by implementing some mechanism such as UBI. Of course, the real effect of AI will not be quite that extreme, but it may well destabilize things.

        That said, if you want to change the system, it’s exactly in periods of instability that it can be done. So I’m not going to try to stop progress and cling to the status quo out of fear of what those changes might be - and instead join a movement that tries to shape them.

        we should at least regulate the tech.

        Maybe. But generally on Lemmy I see sooo many articles about “Oh, no, AI bad”. But no good suggestions on exactly what regulations we should want.

        • TwilightVulpine@kbin.social · 1 year ago

          Movements that shape changes can also happen by resisting or by popular pressure. There is no lack of well-reasoned articles about the issues with AI and how they should be addressed, or even how they should have been addressed before AI engineers charged ahead not even asking for forgiveness after also not asking for permission. The thing is that AI proponents and the companies embracing them don’t care to listen, and governments are infamously slow to act.

          For all that is said of “progress”, a word with a misleading connotation, once again this technology puts wealthy people, who can build data centers for it, at an advantage compared to regular people, who at best can only use lesser versions of it, if even that; they might instead just receive the end result of whatever the technology owners want to offer. Like the article itself mentions, it has immense potential for advertising, scams and political propaganda. I haven’t seen AI proponents offering meaningful rebuttals to that.

          At this point I’m bracing for the dystopian horrors that will come before it all comes to a head, and who knows how it might turn out this time around.

          • abhibeckert@beehaw.org · 1 year ago

            Like the article itself mentions, it has immense potential for advertising, scams and political propaganda. I haven’t seen AI proponents offering meaningful rebuttals to that.

            You won’t get a direct rebuttal because, obviously, an AI can be used to write ads, scams and political propaganda.

            But every day millions of people are cut by knives. It hurts. A lot. Sometimes the injuries are fatal. Does that mean knives are evil and ruining the world? I’d argue not. I love my kitchen knives and couldn’t imagine doing without them.

            I’d also argue LLMs can be used to fact-check and uncover scams/political propaganda/etc, and can lower the cost of content production to the point where you don’t need awful advertisements to cover the production costs.

            • TwilightVulpine@kbin.social · 1 year ago

              This knife argument is overused as an excuse to take no precautions about anything whatsoever. The tech industry could stand to be more responsible about what it makes rather than shrugging it off until aging politicians realize this needs to be addressed.

              Using LLMs to fact check is a flawed proposition, because ultimately what they provide are language patterns, not verified information. Never mind their many examples of mistakes; it’s very easy for them to provide incorrect answers that are widely repeated misconceptions. You may not blame the LLM for that, you can chalk that up to generalized ignorance, but it still ends up falling short for this use case.

              But as much as I dislike ads, that last one is part of the problem. Humans losing their livelihood. So, going back to a previous point, how does the lowered ad budget help anyone but executives and investors? The former ad workers get freed to do what? Because the ones focused on art or writing would only have a harder time making a career out of that now.

    • argv_minus_one@beehaw.org · 1 year ago

      You could have said the same for factories in the 18th century.

      Everyone who died as a result of their introduction probably would say the same, yes. If corpses could speak, anyway.

      • Echo Dot@feddit.uk · 1 year ago

        Well if you can find anyone who’s died because an AI wrote an article then I’ll concede you have a point.

        • flora_explora@beehaw.org · 1 year ago

          Did you read the whole article including the “flame bait”? The author gives an example there of someone committing suicide because an AI encouraged them…

          • Honytawk@lemmy.zip · 1 year ago

            Is that the AI’s fault, or the depressed and suicidal human’s fault?

            Do you not think that the person would have committed suicide whether they asked the AI or not? The AI might have sped up the decision, but it is the human who made it.

            It is not like the AI is out there trying to convince non-depressed humans to become depressed so that they go kill themselves…

            • flora_explora@beehaw.org · 1 year ago

              Well, in the linked article the wife of this person said that they wouldn’t have committed suicide without the AI facilitating it. So yes, I would say it is at least in part the AI’s fault. And no, I didn’t say it was the intention of the AI to do so. But that doesn’t mean it won’t do it at all.

              You seem to really wanna push AI and lose your empathy over this…

    • bstix@feddit.dk · 1 year ago

      If the technology actually existed to replace human workers, the human workers could chip in and buy the means of production and replace the company owners as well.

  • jarfil@beehaw.org · 1 year ago

    Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.

    Unlikely to replace the “most” competent humans, but probably the lower 80% (Pareto principle), where “crappy” is “good enough”.

    What’s really troubling is that it will happen all across the board; I’ve yet to find a single field where most tasks couldn’t be replaced by an AI. I used to think 3D design would take the longest, but no, there are already 3D design AIs.

    • potterman28wxcv@beehaw.org · 1 year ago

      I’ve yet to find a single field where most tasks couldn’t be replaced by an AI

      Critical-application development. For example, developing a program that drives a rocket or an airplane.

      You can have an AI write some code. But good luck proving that the code meets all the safety criteria.

      • FaceDeer@kbin.social · 1 year ago

        You just said the same thing the comment responding to did, though. He pointed out that AI can replace the lower 80%, and you said the AI can write some code but that it might have trouble doing the expert work of proving the code meets the safety criteria. That’s where the 20% comes in.

        Also, it becomes easier to recognize the possibility for AI contribution when you widen your view to consider all the work required for critical application development beyond just the particular task of writing code. The company surrounding that task has a lot of non-coding work that gets done that is also amenable to AI replacement.

        • PenguinTD@lemmy.ca · 1 year ago

          That split won’t work because the top 20% won’t want to spend their day job cleaning up AI code. It’s a much better time investment for them to write their own template-generation tool, so the 80% can write the key part of their task, than to take AI templates that may or may not be wrong and then hunt all over the place to remove bugs.

          • jarfil@beehaw.org · 1 year ago

            Use the AI to fix the bugs.

            A couple months ago, I tried it on ChatGPT: I had never ever written or seen a single line in COBOL… so I asked ChatGPT to write me a program to print the first 10 elements of the Fibonacci series. I copy+pasted it into a COBOL web emulator… and it failed, with some errors. Copy+pasted the errors back to ChatGPT, asked it to fix them, and at the second or third iteration, the program was working as intended.

            If an AI were to run with enough context to keep all the requirements for a module, then iterate with input from a test suite, all one would need to write would be the requirements. Use the AI to also write the tests for each requirement, maybe make a library of them, and the core development loop could be reduced to ticking boxes for the requirements you wanted for each module… but maybe an AI could do that too?
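
            A minimal sketch of that loop in Python, assuming the OpenAI SDK (`run_program` is a hypothetical sandbox runner standing in for the COBOL web emulator):

            ```python
            # Sketch of the write/run/fix loop described above. Assumes the OpenAI
            # Python SDK; run_program() is a hypothetical sandbox runner.
            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment

            def run_program(source: str) -> tuple[bool, str]:
                """Hypothetical: compile/run the code, return (ok, error_text)."""
                raise NotImplementedError

            def ask(messages: list) -> str:
                resp = client.chat.completions.create(model="gpt-4", messages=messages)
                return resp.choices[0].message.content

            messages = [{"role": "user", "content":
                         "Write a COBOL program that prints the first 10 Fibonacci numbers."}]
            source = ask(messages)

            for _ in range(5):  # cap the iterations
                ok, errors = run_program(source)
                if ok:
                    break
                # paste the errors straight back, as in the chat session above
                messages += [{"role": "assistant", "content": source},
                             {"role": "user", "content": f"That fails with:\n{errors}\nPlease fix it."}]
                source = ask(messages)
            ```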

            Weird times are coming. 😐

            • FaceDeer@kbin.social · 1 year ago

              I’m a professional programmer and this is how I use ChatGPT. Instead of asking it “give me a script to do [big complicated task]” and then laughing at it when it fails, I tell it “give me a script to do [the first step].” Then when I confirm that works, I say “okay, now add a function that takes the output of the first function and does [the next step].” Repeat until done, correcting it when it makes mistakes. You still need to know how to spot problems, but it’s way faster than writing it myself, not least because I don’t have to go rummaging through API documentation and whatnot.

              • amki@feddit.de · 1 year ago

                I mean that is exactly what programming is except you type to an AI and have it type the script. What is that good for?

                Could have just typed the script in the first place.

                If ChatGPT can use the API, it can’t be too complex; otherwise you are in for a surprise once you find out what ChatGPT didn’t care about (caching, usage limits, pricing, usage contracts).

                • abhibeckert@beehaw.org · 1 year ago

                  Could have just typed the script in the first place.

                  Sure - but ChatGPT can type faster than me. And for simple tasks, CoPilot is even faster.

                  Also - it doesn’t just speed up typing, it also speeds up basics like “what did bob name that function?”

                • FaceDeer@kbin.social · 1 year ago

                  it’s way faster than writing it myself

                  I already explained.

                  I could write the scripts myself, sure. But can I write the scripts in a matter of minutes? Even with a bit of debugging time thrown in, and the time it takes to describe the problem to ChatGPT, it’s not even close. And those descriptions of the problem make for good documentation to boot.

    • Remmock@kbin.social · 1 year ago

      Fashion designers are being replaced by AI.
      Investment capitalists are starting to argue that C-Suite company officers are costing companies too much money.
      Our Ouroboros economy hungers.

      • jarfil@beehaw.org · 1 year ago

        C-Suites can get replaced by AIs… controlled by a crypto DAO replacing the board. And now that we’re at it, replace all workers by AIs, and investors by AI trading bots.

        Why have any humans, when you can put in some initial capital and have the bot invest in a DAO that controls a full-AI company? Bonus points if all the clients are also AIs.

        The future is going to be weird AF. 😆😰🙈

          • TwilightVulpine@kbin.social · 1 year ago

            That’s where we need to ask how we define “better”. Is better “when the number goes bigger” or is better “when more people benefit”? If an AI can better optimize to better extract the maximum value from people’s work and discard them, then optimize how many ways they can monetize their product to maximize the profit they get from each customer, the result is a horrible company and a horrible society.

          • jarfil@beehaw.org · 1 year ago

            In theory yes… but what do we call “doing a better job”? Is it just blindly extracting money? Or is it something more, and do we all agree on what it is? I think there could be a compounded problem of oversight.

            Like, right now an employee pays/invests some money into a retirement fund, whose managers invest into several mutual funds, whose managers invest into several companies, whose owners ask for some performance from their C-suite, who through a chain of command tell the same employee what to do. Even though it’s part of the employee’s capital that’s controlling that company, if it takes an action negative for the employee like fracking under their home, or firing them, they’re powerless to do anything about it with their investment.

            With AI replacing all those steps, it would all happen much quicker, and —since AIs are still basically a black box— with even less transparency than having corruptible humans on the same steps (at least we kind of know what tends to corrupt humans). Adding strict “code as contract” rules to try to keep them in check would at first sight look like an improvement, but in practice any unpredicted behavior could spread blindingly fast over the whole ecosystem, with nobody having a “stop” button anymore. That’s even before considering coding errors and malicious actors.

            I guess a possible solution would be requiring every AI to have an external stop trigger, that a judicial system could launch to… possibly paralyze the whole economy. But that would require new legislation to be passed (with AI lawyers), which would likely come too late and not be fully honored by those trying to outsmart the system. Replace the judges with AIs too, politicians with AIs, talking heads on TV with AIs… and it becomes an AI world where humans have little to nothing to say. Are humans even of any use, in such a world?

            None of those AIs need to be an AGI, so we could run ourselves into a corner with nobody and nothing having a global plan or oversight. Kind of like right now, but worse for the people.

            Alternatively, all those AIs could be eco-friendly humans-first compassionate black boxes… but I kind of doubt those are the kind of AIs that current businesses are trying to build.

            • amki@feddit.de · 1 year ago

              Thing is nobody will do that because once AI finds a way to spazz out that is totally unpredictable (black box) everything might just be gone.

              It’s a totally unrealistic scenario.

    • amki@feddit.de · 1 year ago

      Unfortunately everything AI does is kind of shitty. Sure, you might have a query for which the chosen AI works well, but you just as easily might not.

      If you accept that it sometimes just doesn’t work at all, then sure, AI is your revolution. Unfortunately there are not too many use cases where this is helpful.

  • bedrooms@kbin.social · 1 year ago

    I disagree. If we replace this writer with ChatGPT4, it would generate a more balanced article.

    • Akrenion@programming.dev · 1 year ago

      More balanced articles are not necessarily better though. I’d rather read two conflicting opinions that are well thought out than a mild compromise with unknown bias.

    • Hirom@beehaw.org · 1 year ago

      More balanced than what?

      ChatGPT ingests lots of articles from the web and newspapers, identifies patterns in the text, and generates relevant replies based on what it ingested.

      I expect ChatGPT to perpetuate biases found in its training data, and don’t see how it’d improve balance.

      • bedrooms@kbin.social · 1 year ago

        What I mean is that the article was full of negative bias.

        ChatGPT 4, when used with care, can take into account different opinions, both positive and negative.

        • acastcandream@beehaw.org · 1 year ago

          It’s not “negative bias.” They have a negative opinion of AI and are expressing it. Unless you are prepared to describe your stance on AI as a “positive bias” and summarily discount every opinion you have on it.

          • bedrooms@kbin.social · 1 year ago

            If you disregard the positive side, that’s a negative bias. I’m not interested in a semantic fight with you.

            • acastcandream@beehaw.org · 1 year ago

              You’re literally making things up now and you’re making a semantic argument lol but sure have a good one then. Cheers mate

              • bedrooms@kbin.social · 1 year ago

                Maybe it was a poor choice of words, but I’m honestly tired of arguing with people online who can’t see biases. Cheers, indeed.

  • slaytswiftfan@beehaw.org · 1 year ago

    I’m getting so so tired of these “AI/ML bad, world is doom” articles being posted multiple times a day. who is funding these narratives??

    • jatone@reddthat.com · 1 year ago

      you clearly have no idea who drew devault is, and you entirely missed the point of what he posted.

      • JWBananas@startrek.website · 1 year ago

        I also have no idea who he is and I also missed the point. It’s just another “AI bad” article, even if the message this time is “AI bad, but not as bad as you think.”

        • jatone@reddthat.com · 1 year ago

          he’s pointing out how the technology is going to be used realistically by corporations. he’s not saying AI is bad inherently. he’s saying the outcome will be bad for society. which is very true.

    • Storksforlegs@beehaw.org · 1 year ago

      And it’s a hot topic, so there are a lot of articles about it since it generates traffic. And the bigger and doomier the article, the more hype.

      But lots of people are unhappy about AI; it’s not a narrative being funded by a secret cabal or something.

    • 1984@lemmy.today · 1 year ago

      There is a name for this debating technique where you go “sure, there was nothing good about Hitler - except he cared about dogs!”. Can’t remember. Is it strawman?

      I think we all understand that capitalism is mostly bad for humans, and really good for corporations and their owners. AI and robots will be exploited to replace people since they are massively more powerful and much cheaper.

      A few things will be better I guess, but most will be worse. People already are not actually needed to work this much anymore, and as soon as they can be replaced with something cheaper and more efficient they will. That is capitalism.

        • ironhydroxide@partizle.com · 1 year ago

          Eventually nobody.

          Capitalism isn’t about sustainability, it’s about making the most amount of profit in the shortest amount of time.

          Eventually you bleed everyone dry and nobody has a job. But for a short amount of time the shareholders will have had a huge number of 0’s and 1’s in a database somewhere equating to their “worth”.

        • 1984@lemmy.today · 1 year ago

          Humans are consumers, they will buy stuff. But most won’t work for corporations anymore since robots and AI are far more effective at most jobs.

          Humans will still buy the stuff robots produce. Maybe the money will come from governments as some kind of citizen coins, distributed differently based on some criteria. Not sure.

          • lol3droflxp@kbin.social · 1 year ago

            That would be UBI and that’s seen as an improvement by a lot of people so why stand in the way of that? Robots do the work, people get a budget they can spend on that work while they don’t have to do it.

            • SokathHisEyesOpen@lemmy.ml · 1 year ago

              That would be UBI and that’s seen as an improvement by a lot of people so why stand in the way of that?

              Because none of those people have policy changing influence. Nobody here is standing in the way of it, most people here are advocates for it. But we don’t write the laws, and we only get to vote on a very small percentage of them.

  • lloram239@feddit.de · 1 year ago

    This is a very one sided way to look at things. Yes, people will use AI to generate spam and stuff. What it is missing is that people will also use AI to filter it all away. The nice thing about ChatGPT and friends is that it gives me access to information in whatever format I desire. I don’t have to visit dozens of websites to find what I am looking for, the AI will do that for me and report back with what it has found.

    Simply put, AI is a possible path to the Semantic Web, which previously failed since ads and SoC were the drivers of the Web, not information.

    Sometimes I really wonder in what magical wonderland those people complaining about AI live, since as far as I am concerned, the Web and a lot of other stuff went to shit a long while ago, long before AI got any mass traction. AI is our best hope to drag ourselves out of the mud.

    The real problem is that AI isn’t good enough yet. It can handle Wikipedia-like questions quite well. But try to use it for product and price information and all you get is garbage.

    • acastcandream@beehaw.org · 1 year ago

      I have no desire to enter an AI arms race. I spent too much time as it is already tinkering with my privacy stuff to deal with Google and other malicious actors. Why the hell would I want to have to maintain yet another front in that obnoxious, daily battle? That is not a reason to have AI, it’s just selling the cure to a problem we are creating ourselves.

      • hglman@lemmy.ml · 1 year ago

        It’s either that or extreme fragmentation and/or de-informationalization.

          • hglman@lemmy.ml · 1 year ago

            It’s not a problem if it’s removed by improved AI; it would be a transient fear that never manifests.

              • acastcandream@beehaw.org · 1 year ago

                Remember the promises of social media and the complete ignorance we all shared around it?

                Either way I’m not an idiot. I use AI every day in my work, probably more than you do tbh. The problem is AI evangelists have an almost faith-like quality to them, very much like crypto bros. Y’all have this inability to critically assess it and treat everyone like they’re ignorant children who “just don’t get it” or “hate progress.”

      • lloram239@feddit.de · 1 year ago

        You never wanted to have the computer from Star Trek, the Holodeck or the Universal Translator? Modern AI provides a fundamental shift in how we can interact with data and allows us to do things that would have been impossible by classic means.

        And it’s not like you can escape it anyway, phone cameras use AI, spell checkers use AI, mobile phone keyboards use AI, it’s already everywhere and we have barely started.

    • botengang@feddit.de · 1 year ago

      which previously failed since ads and SoC were the drivers of the Web, not information.

      Can you elaborate on why you think the ads wouldn’t sneak in again? The semantic web is a fantastic concept, but I don’t immediately see the AI connection. AI doesn’t magically pay for authored content and there is still an incentive to somehow get ads into LLM answers.

      • lloram239@feddit.de · 1 year ago

        Can you elaborate on why you think the ads wouldn’t sneak in again?

        You can run an LLM at home on your own PC. Think of it less as a replacement for Google and more like the computer from Star Trek. You tell it what you want and it goes to search the net for you. What you see is just the answer, in a format specified by you, not the websites it came from.

        Google, Bing and Co. will of course add ads into their services, but that’s a short-term issue. AI will fundamentally reshape how we interact with computers and information in the long run.

        The semantic web is a fantastic concept, but I don’t immediately see the AI connection.

        The semantic web relies on humans doing the markup; that’s doomed to fail, nobody has the time for that, and even if they spend the effort, they would miss a whole lot of information that is in the text. An LLM can extract semantic information directly from the text without any markup, and you can query that information with natural language. That’s not only way easier on the creators’ side, but also way more powerful on the users’ end.
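
        As a rough illustration of that extraction step (a sketch, assuming the OpenAI SDK; the page text and field names are invented):

        ```python
        # Sketch: pulling semantic-web-style structured data out of plain text
        # with an LLM, no markup required of the author. Assumes the OpenAI
        # Python SDK; the page text and schema below are invented.
        import json
        from openai import OpenAI

        client = OpenAI()

        page_text = """Acme Widget Pro, 1000 g pack, $24.99, ships from Germany.
        Successor to the 2019 Acme Widget. Not compatible with the Widget Mini."""

        prompt = ("Extract the product facts from the text below as JSON with the "
                  "keys name, package_size_grams, price_usd, ships_from, predecessor. "
                  "Use null for anything not stated.\n\n" + page_text)

        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        facts = json.loads(resp.choices[0].message.content)
        print(facts["price_usd"])  # queryable structure, extracted from free text
        ```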

          • xavier666@lemm.ee · 1 year ago

            You can run an LLM at home on your own PC. You tell it what you want and it goes to search the net for you.

            Unless it’s open-source and connected to a proper crowdsourced dataset, hosted on a paid server managed by a community instead of a big corporation, I don’t see how ads are NOT getting in.

          • botengang@feddit.de · 1 year ago

          Thank you very much. My concern is rather in the direction of inserting ads or “promotional information” into the training material, much like SEO plagues search today. If the info is from the web it can still be malicious, even if you run your own LLM.

            • lloram239@feddit.de · 1 year ago

            They’ll certainly try, though it’ll be quite a bit trickier due to LLMs providing direct answers, not just a list of sites. You can’t really sneak a product in there when it doesn’t actually fit the question. I think the bigger problem is just the lack of good information out there. Finding trustworthy reviews these days is getting really hard, most of the time all you have is the product description and some Amazon reviews, which even when done well, fail at providing how product X compares to product Y. No matter how smart the AI will be, that always leaves a ton of room for error and misinformation.

            Hard to tell how things will end up. For the time being, LLMs are pretty much completely useless for product search, ChatGPT just doesn’t know enough and BingChat will just summarize the first three SEO-filled Bing search results. The deep knowledge LLMs have on Wikipedia-like topics is missing when it comes to products and services, and they can’t really do calculations either, so price information is almost always wrong. This will need some specific optimization.

              • david@feddit.uk · 1 year ago

              I don’t know why you want to use an AI to purchase goods and learn about products. That’s what the current www is really really strong at. Lots of people are spending an awful lot of money to make that information really easy to discover, and popular search engines definitely prioritise that information.

              Also, if an AI is to give you price and product information it’s going to have to be reading live web pages, which will of course be full of ads. SEO will become AIO/LLMO. There is no end to the time and money advertisers are prepared to pour into getting products in front of users. The irony is that you seem to want to view products and you have this weird perspective where you’re keen to avoid ads for products so that you can view marketing information about products without the ads.

              It’s already fairly hard to tell without knowing some good websites or reading through to conclusions and using some common sense whether a review website is honest or biased. I don’t know why you think an AI with access to the Internet will filter out fake reviews and content crafted to lead you to specific products over others.

              Also, downloading and configuring your own AI is unlikely to be the way the “AI revolution” comes. Amazon, Google, Microsoft, Apple and other mega corporations will be funding the “AI revolution” and will not sit idly by allowing their kingdoms to crumble.

                The number of people who will be saved from the corporations that run the online world by open source grass roots AI will be smaller than the number of people who are saved by Linux from proprietary products and SaaS.

                Yeah, everyone will get used to using an AI to interact with the web, but it will be freely supplied by a corporation, and I PROMISE you the enshittification of AI has been long planned before we even reach step one of making it awesome for the masses.

                • lloram239@feddit.de · 1 year ago

                That’s what the current www is really really strong at.

                  You must be using a different WWW than I am, since product search for me is absolutely terrible. Even the simplest of queries can’t be answered, e.g. something as trivial as “what’s the cheapest thing that matches query” fails due to some products coming in different package sizes (e.g. 100g vs 1000g). If you want to buy a movie or game, and want to know about sequels and prequels, you have to go to Wikipedia to find out, since I have yet to see a single shop that organizes that well. Or try to find the equivalent of a product in another country where the original product isn’t available. Or try to search for the cheapest way to buy multiple products at once, taking shipping cost into account. Even just figuring out the size or what’s actually in the box is often impossible; I have yet to see another site that gives you a full CAD model of the products like McMaster-Carr.

                  Product search on the Web is utter garbage. I am kind of surprised that nobody ever put serious effort into making it work well. Google’s product search is garbage and most other search engines don’t even have a specific product search. A product search engine that automatically bundles up information from different shops, Youtube videos and comments doesn’t exist as far as I know.
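
                  The package-size complaint above is just unit-price arithmetic that the search side never does for you; a toy sketch with made-up listings:

                  ```python
                  # Toy illustration: "cheapest match" is only meaningful after
                  # normalizing package sizes to a unit price. Listings are invented.
                  listings = [
                      {"name": "Brand A", "price": 3.50, "grams": 100},
                      {"name": "Brand B", "price": 24.00, "grams": 1000},
                  ]

                  cheapest = min(listings, key=lambda p: p["price"] / p["grams"])
                  print(cheapest["name"])  # Brand B: 0.024/g beats 0.035/g despite
                                           # the higher sticker price
                  ```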

                Lots of people are spending an awful lot of money to make that information really easy to discover

                  Amazon deliberately puts sponsored products on top to make it harder to discover what you want. Some small shops put effort into it and let you search products according to their specs, but that only works in that single shop; I have yet to see a search engine that can handle that across multiple shops and with any semblance of reliability.

                Also, if an AI is to give you price and product information it’s going to have to be reading live web pages, which will of course be full of ads.

                Yes, but that’s irrelevant as long as only the AI reads it. I don’t care what ads my adblocker reads either.

                I don’t know why you think an AI with access to the Internet will filter out fake reviews

                I am not looking for reviews, but for reliable and detailed product information. An LLM can help gather that information from multiple different sources and format it in a unified way. SEO has limited influence on that, as either the product has those specs or it has not, in which case the LLM should be able to find contradictions in the information and automatically write a letter to whatever consumer protection office is responsible for false advertisement.

                Also, downloading and configuring your own AI is unlikely to be the way the “AI revolution” comes.

                Given the way privacy is getting traction in the public consciousness, I wouldn’t be so sure. Look at how many people already use adblockers, around 40% or so, that’s quite a lot, many of them will be upgrading to some form of AI driven adblocking and information gathering sooner or later.

                  • david@feddit.uk · 1 year ago

                    You know that an LLM is a statistical word prediction thing, no? That LLMs “hallucinate”. That this is an inevitable consequence of how they work. They’re designed to take in a context and then sound human, or sound formal, or sound like an excellent programmer, or sound like a lawyer, but there’s no particular reason why the content that they present to you would be accurate. It’s just that their training data contains an awful lot of accurate data which has a surprisingly large amount of commonality of meaning.

                    You say that the current crop of LLMs are good at Wikipedia-style questions, but that’s because their authors have trained them with some of the most reliable and easy-to-verify information on the Web. A lot of that is Wikipedia-style stuff. That’s its core knowledge, what it grew up reading, the yardstick by which it was judged. And yet it still goes off on inaccurate tangents because there’s nothing inherently accurate about statistically predicting the next word based on your training and the context and content of the prompt.

                  Yes, LLMs sound like they understand your prompt and are very knowledgeable, but the output is fundamentally not a fact-based thing, it’s a synthesized thing, engineered to sound like its training data.