• 0 Posts
  • 38 Comments
Joined 1 year ago
Cake day: July 28th, 2023




  • Writing boring shit is LLM dream stuff, especially tedious corpo shit. I have to write letters and such a lot, and it makes things so much easier having a machine that can summarise material and write it up in dry corporate language in 10 seconds. I already have to proofread my own writing, and there are almost always one or two other approvers, so checking it for errors is no extra effort.









  • AI models don’t resynthesize their training data. They use their training data to determine parameters which enable them to predict a response to an input.

    Consider a simple model (too simple to be called AI, but the underlying concepts are really very similar) - a linear regression. In linear regression we produce a model which follows a straight line through the “middle” of our training data. We can then use this to predict values outside the range of the original data - albeit with less certainty about the likely error.
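To make the analogy concrete, here's a toy sketch (the data and numbers are made up for illustration): a line fitted to a handful of points will happily predict at x = 100, far outside anything it was fitted on - it just gets less trustworthy out there.

```python
import numpy as np

# Toy training data: roughly y = 2x + 1 with a little noise
x_train = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_train = np.array([3.1, 4.9, 7.2, 8.8, 11.1])

# Fit a straight line through the "middle" of the data
slope, intercept = np.polyfit(x_train, y_train, deg=1)

# Extrapolate well outside the training range (1..5). The model
# still returns an answer - it just carries more uncertainty the
# further we stray from the data it was fitted on.
prediction = slope * 100.0 + intercept
print(prediction)
```

The fit recovers a slope near 2, and the prediction at x = 100 lands near 200 - a perfectly usable answer from data that never went past x = 5.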

    In the same way, an LLM can give answers to questions that were never asked in its training data - it’s not taking that data and shuffling it around, it’s synthesising an answer by predicting tokens. Also similarly, it does this less well the further outside the training data you go. Feed them the right gibberish and it doesn’t know how to respond. ChatGPT is very good at dealing with nonsense, but if you’ve ever worked with simpler LLMs you’ll know that typos can throw them off notably… They still respond OK, but things get weirder as they go.
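That "feed it gibberish and it falls apart" behaviour is easy to see in the simplest possible next-token predictor - a bigram model that just counts which word follows which. This is a deliberately crude sketch, nowhere near a real LLM, but the failure mode is the same shape: inputs inside the training distribution get plausible continuations, inputs outside it get nothing useful.

```python
import random
from collections import defaultdict

random.seed(0)

# Train a toy bigram "language model": record which word follows which.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def predict(word):
    # Inside the training data: pick a plausible next token.
    # Outside it: the model has nothing to condition on and degrades.
    options = follows.get(word)
    return random.choice(options) if options else "<no idea>"

print(predict("the"))    # a word actually seen after "the": cat or mat
print(predict("xyzzy"))  # gibberish input -> the model falls apart
```

A real LLM degrades far more gracefully than this (it backs off to weirdness rather than giving up), but the underlying point holds: prediction quality is tied to how close the input is to what it was trained on.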

    Now it’s certainly true that (at least some) models were trained on CSAM, but it’s also definitely possible that a model that wasn’t could still produce sexual content featuring children. Its training set need only contain enough disparate elements for it to correctly predict what the prompt is asking for. For example, if the training set contained images of children it will “know” what children look like, and if it contains pornography it will “know” what pornography looks like - conceivably it could mix these two together to produce generated CSAM. If I had to guess, it will probably look odd. Like LLMs struggling with typos, and regression models being unreliable outside their training range, image generation of something totally outside the training set is going to be a bit weird, but it will still work.

    None of this is to defend generating AI CSAM, to be clear, just to say that it is possible to generate things that a model hasn’t “seen”.






  • Yes, but the point is that I will encounter thousands of men in my life, and I may be assaulted by one of them, yes, but there will be thousands of them who won’t.

    Each individual encounter with a bear is much more likely to end in death - even though bears often won’t attack you. The proportion of encounters with bears that ends in an attack is much higher than the proportion of encounters with men.

    Look I have a healthy level of caution around men, I completely get why other women choose the bear. It’s a fairly silly hypothetical, and if asked in the street I would probably have answered bear with my tongue in my cheek too*. However, presented with an actual bear and an actual man I will obviously choose the man because they are much less dangerous than bears.

    The whole point of this “discourse” (at least as I understand it) is that men are a more realistic and present danger than bears. The fact that women choose the bear shows the level of threat that they feel on a daily basis. But if you actually compare the genuine level of danger presented by a single encounter with a random man versus a random bear, the man is undeniably the safer choice.

    *To be honest a lot of this conversation has made me rethink this position. I’m trans so I have experience of being perceived as a man… And honestly I do feel for the men who don’t understand. Fear of men is justified, but I worry that conversations like this slide us further and further away from reasonable caution and towards hatred, and that makes me sad and worried for how the future will look. The more that men feel attacked by women, especially feminism, the less they will support us - and we do need their support.




  • The last time elephants naturally lived in Europe was thousands of years ago. The climate was very different and there wasn’t the same level of human occupation. Yes, the vegetation and landscape would need to change, and I’m not sure why on earth you think the elephants would do it?? There aren’t a lot of elephant ecologists as far as I’m aware. Plus the effects of releasing elephants would go beyond the elephants themselves: other species that could be impacted by introducing elephants would need to be managed, to avoid further damage to the ecosystem.