• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • I’m early Gen Z with a kinda poor family. So I had CRTs and old VHS tapes, but also grew up on the internet.

    I feel an extreme gap between me and people a few years younger. I graduated in 2018, so I was among the last people to have a traditional high school experience. Before Covid, Zoom, and ChatGPT.

    I also mostly grew up with computers instead of phones, so I’m only just now getting into TikTok. I’ll likely never truly revolve around it like many others do (both older and younger than me).




  • Oh, it gets worse with Shadiversity. Huge AI art guy, and his brother’s an actual artist too, so it’s hard seeing Shad brag to him. Very “anti-woke”, and he paints his conservative Mormon beliefs onto everything.

    The worst, unforgivable part is that the end of his book has impregnated rape victims step up to defend the rapist protagonist because he “gave them” a child, while the ones who didn’t get pregnant were jealous.

    He loves to bring up that the book is supposed to explore this immoral character. But this isn’t the protagonist’s viewpoint; this is just how Shad thinks the world works. This is how Shad believes rape victims think.

    Very sad to see, I followed him for swords and castles but Jesus Christ.










  • In fact, I think there’s a missed opportunity for EVs to partner with long-distance public transit.

    The main limitation of electric cars is range, but if people knew they could comfortably travel across the state, or several states, without their car, they might be more willing to use an electric car for city driving.


  • I’m curious, are there actually that many 42s in the system? (More than 69 sounds unlikely.)

    What if the LLM is getting tripped up because 42 is always referred to as the answer to “the Ultimate Question of Life, the Universe, and Everything”?

    So when you ask it something like “give a number between 1-100”, it answers 42 because that’s the answer to “Everything”, according to its training data.

    Something similar happened to Gemini. Google discouraged Gemini from giving unsafe advice because it’s unethical. Gemini then refused to answer questions about C++, because the language is often described as “unsafe” (referring to memory management). It apparently took “unsafe” in the everyday sense, so C++ became unethical too. It’s like those jailbreak tricks, but arising from its own training set.
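    The “pick a number” bias is easy to check empirically: collect a pile of model answers and compare each value’s count against what a uniform pick would predict. A minimal sketch with made-up data (the function and the toy tallies are mine, not from any real experiment):

    ```python
    from collections import Counter

    def overrepresented(responses, low=1, high=100, factor=5):
        """Return values that appear far more often than a uniform
        pick from [low, high] would predict."""
        counts = Counter(responses)
        expected = len(responses) / (high - low + 1)  # uniform count per value
        return sorted(v for v, c in counts.items() if c > factor * expected)

    # Toy data standing in for real model outputs: 42 dominates.
    fake_responses = [42] * 30 + list(range(1, 71))
    print(overrepresented(fake_responses))  # -> [42]
    ```

    Run against real sampled outputs, a heavy spike on 42 (or 69, or 7) would show exactly the training-data bias described above.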



  • Wirlocke@lemmy.blahaj.zone to Memes@lemmy.ml · Are we the baddies?
    edited 3 months ago

    Japan has a similar worldview to Americans because there have been multiple points in history where we brute-forced our ways onto them, conveniently at times when their old ways were losing faith.

    First forcing Japan’s borders open while they remained isolated with outdated weaponry, then again at the end of WW2.

    Capitalism was drilled into their culture until its teeth sunk in and they had their economic boom.



  • This specific instance probably.

    But the point is that so much of history ignores the female perspective (or the non-European perspective). Sometimes intentionally, like all the female scientists who contributed to foundational studies and didn’t get their names on the published papers.

    And this is really damaging; I have a family member who legitimately believes that men of European descent have been the smartest throughout history (when I brought up the Islamic Golden Age as a counterexample, he dismissed it as propaganda).

    American schools are so bad at teaching diverse history. So many still struggle with the basic truths about Columbus and the Natives.



  • The way I’ve come to understand it is that LLMs are intelligent in the same way your subconscious is intelligent.

    It works off of kneejerk “this feels right” logic; that’s why AI images look like dreams, realistic until you examine them closely.

    We all have kneejerk responses to situations and questions, but the difference is we filter them through our conscious mind, applying long-term thinking and our own choices to the mix.

    LLMs just keep getting better at the “this feels right” stage, which is why completely novel or niche situations can still trip them up; they haven’t developed enough “reflexes” for those problems yet.