• axont [she/her, comrade/them]@hexbear.net
    2 months ago

    Is this because AI LLMs don’t do anything good or useful? They get very simple questions wrong, will fabricate nonsense out of thin air, and even at their most useful they’re a conversational version of a Google search. I haven’t seen a single thing they do that a person would need or want.

    Maybe it could be neat in some kind of procedurally generated video game? But even that would be worse than something written by human writers. What is an LLM even for?

    • SSJMarx@lemm.ee
      2 months ago

      They have places they can be used, and I think that some of the smaller models might find their way into more niches as time goes on.

      But there just aren’t enough uses for OpenAI to make back its investment. The hope that LLMs would turn into a general AI is pretty much dead, and the results are in from the early adopters: LLMs more often increase workloads than decrease them.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
      2 months ago

      I think there are legitimate uses for this tech, but they’re pretty niche and difficult to monetize in practice. For most jobs, correctness matters, and if the system can’t be guaranteed to produce reasonably correct results then it’s not really improving productivity in a meaningful way.

      I find this stuff is great in cases where you already have domain knowledge and want to bounce ideas off it; the output it generates can spark an idea in your head. Whether it understands what it’s outputting really doesn’t matter in this scenario. It also works reasonably well as a coding assistant, where it can generate code that points you in the right direction, and it can be faster to do that than googling.

      We’ll probably see some niches where LLMs can be pretty helpful, but their capabilities are incredibly oversold at the moment.

    • autism_2 [any, it/its]@hexbear.net
      2 months ago

      I’ve been thinking AI-generated dialogue in Animal Crossing would be an improvement over the 2020 game.

      To clarify, I’m not asking for the writers at the Animal Crossing factory to be replaced with ChatGPT. Having conversations generated in real time, on top of the animals’ normal dialogue, just sounds like fun. Also I want them to be catty again because I like drama.

      • fox [comrade/them]@hexbear.net
        2 months ago

        Nah, something about AI dialogue is just soulless and dull. Instantly uninteresting. Same reason I don’t read the AI slop being published in ebooks. It has no authorial intent and no personality. It isn’t even trying to entertain me. It’s worse than reading marketing emails because at least those have a purpose.

        • It depends on the training data. Once you train on all available data, you get the most average output possible. If you limit the training data, you can partially avoid the soullessness, but the output gets more unhinged and buggy.

    • Owl [he/him]@hexbear.net
      2 months ago

      The LLM characters will send you on a quest, and then you’ll go do it, and then you’ll come back and they won’t know you did it and won’t be able to give you a reward, because the game doesn’t know the LLM made up a quest and has no way to detect that you completed something that was made up.
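      One common workaround for this mismatch is to let the LLM generate only the flavor text while the quest itself must come from a registry the game engine already tracks, so completion and rewards stay in normal game logic. A minimal sketch of that idea (all names here, like `QUEST_REGISTRY` and `render_quest_offer`, are hypothetical, not from any real game):

      ```python
      # Sketch: the engine owns the quests; the LLM only decorates them.
      # An LLM that references a quest id outside the registry is rejected,
      # so it can never hand the player an untrackable objective.

      QUEST_REGISTRY = {
          "catch_fish": {"goal": "catch 3 fish", "reward": "1000 bells"},
          "deliver_letter": {"goal": "deliver a letter to Tom", "reward": "furniture"},
      }

      def render_quest_offer(quest_id: str, llm_flavor_text: str) -> str:
          """Attach LLM-generated dialogue to an engine-known quest."""
          if quest_id not in QUEST_REGISTRY:
              # The LLM hallucinated a quest the game can't track; refuse it.
              raise ValueError(f"LLM referenced unknown quest: {quest_id}")
          goal = QUEST_REGISTRY[quest_id]["goal"]
          return f"{llm_flavor_text} (Objective: {goal})"
      ```

      The reward logic then only ever fires on registry quests the engine can verify, which sidesteps the problem above at the cost of the LLM never inventing genuinely new objectives.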

    • TrashGoblin [he/him, they/them]@hexbear.net
      2 months ago

      Cory Doctorow has a good write-up on the reverse-centaur problem and why there’s no foreseeable way for LLMs to be profitable. Because they’re error-prone, LLMs are really only suited to low-stakes uses, and people have found lots of low-stakes, low-value uses for them. But they need high-value use cases to be profitable, and every high-value use case anyone has identified for them is also high-stakes.