• 2 Posts
  • 19 Comments
Joined 16 days ago
Cake day: May 5th, 2024



  • Most human training is done through the guidance of another

    Let’s take a step back and not talk about training at all, but about spontaneous learning. A baby learns about the world around it by experiencing things with its senses. Babies learn a language, for example, simply by hearing it and making connections - they get corrected when they’re wrong, yes, but they are not trained in language until they’ve already learned to speak it. And once they are taught how to read, they can explore the world through signs, books, the internet, etc. in a way that is often self-directed. More than that, humans are learning at every moment as they interact with the world around them and with the written word.

    An LLM is a static model created through exposure to lots and lots of text. It is trained and then used. To add to the model requires an offline training process, which produces a new version of the model that can then be interacted with.

    you can in fact teach it something and it will maintain it during the session

    It’s still not learning anything. LLMs have what’s known as a context window that augments the model for a given session, but that window is just more text that gets used as part of the response process.
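
    To make that concrete, here’s a rough sketch (with a hypothetical generate() stand-in, not any real API) of what a chat session amounts to: the only “memory” is a growing string of text that gets sent back to the model with every prompt.

    # Minimal sketch: `generate` stands in for the model, which only ever
    # sees one block of text and returns more text.
    def generate(prompt_text: str) -> str:
        return "..."  # placeholder for the actual model call

    context_window = ""  # everything the model "remembers" this session

    def chat(user_message: str) -> str:
        global context_window
        context_window += f"User: {user_message}\n"
        reply = generate(context_window)  # the whole transcript goes in as plain text
        context_window += f"Assistant: {reply}\n"
        return reply

    # "Teaching" it something just means that text is still in the window;
    # a new session starts from an empty string and it's all gone.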

    They don’t think or understand in any way, full stop.

    I just gave you an example where this appears to be untrue. There is something that looks like understanding going on.

    You seem to have ignored the preceding sentence: “LLMs are sophisticated word generators.” This is the crux of the matter. They simply do not think, much less understand. They are simply taking the text of your prompts (and the text from the context window) and generating more text that is likely to be relevant. Sentences are generated word-by-word using complex math (heavy on linear algebra and probability) where the generation of each new word takes into account everything that came before it, including the previous words in the sentence it’s a part of. There is no thinking or understanding whatsoever.
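
    As a toy illustration of that word-by-word process (nothing like a real model’s code, just the shape of it): raw scores get turned into probabilities and the next word is picked, over and over.

    import math
    import random

    def softmax(scores):
        # turn raw scores into a probability distribution
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def score_next_words(words_so_far, vocab):
        # stand-in for the network: in a real LLM this is where the heavy
        # linear algebra happens, conditioned on everything generated so far
        return [random.random() for _ in vocab]

    def generate(prompt_words, vocab, length=10):
        words = list(prompt_words)
        for _ in range(length):
            probs = softmax(score_next_words(words, vocab))
            next_word = random.choices(vocab, weights=probs, k=1)[0]
            words.append(next_word)  # each new word just extends the text
        return words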

    This is why Voroxpete@sh.itjust.works said in the original post to this thread, “They hallucinate all answers. Some of those answers will happen to be right.” LLMs have no way of knowing if any of the text they generate is accurate, for the simple fact that they don’t know anything at all. They have no capacity for knowledge, understanding, thought, or reasoning. Their models are simply complex networks of words that are able to generate more words, usually in a way that is useful to us - but often, as the hallucination problem shows, in ways that are completely useless and even harmful.


  • the argument that they can’t learn doesn’t make sense because models have definitely become better.

    They have to either be trained on new data or have their internal structure improved. It’s an offline process, meaning they don’t learn through the chat sessions we have with them (if you open a new session, it will have forgotten what you told it in a previous one), and they can’t learn through any kind of self-directed research process like a human can.
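
    A hedged sketch of what that looks like in practice (made-up function names, not a real training pipeline): improving the model is a separate batch job that produces a whole new version, and every chat session starts from the same frozen model with an empty context.

    # Hypothetical, heavily simplified stand-ins for illustration only.
    def train(corpus: list) -> dict:
        # weeks of offline computation in real life; the result is a fixed model
        return {"trained_on": len(corpus)}

    original_corpus = ["lots", "and", "lots", "of", "text"]
    new_data = ["text", "written", "since", "the", "last", "run"]

    model_v1 = train(original_corpus)             # released and used as-is
    model_v2 = train(original_corpus + new_data)  # "getting better" = shipping a new version

    # At chat time the model never changes; each session only has its own context.
    session_a = []  # one conversation's context window
    session_b = []  # a brand-new session starts empty: nothing carries over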

    all of your shortcomings you’ve listed humans are guilty of too.

    LLMs are sophisticated word generators. They don’t think or understand in any way, full stop. This is really important to understand about them.


  • It’s definitely happening when I’m getting updates from lemmy.world, and while I don’t know how to get at the HTTP details you’re showing in your video, I do see a lot of 400s in the nginx log from Docker:

    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 200 0 "-" "Lemmy/0.19.3; +https://lemmy.world"
    proxy-1  | 135.181.143.221 - - [13/May/2024:23:03:43 +0000] "POST /inbox HTTP/1.1" 400 62 "-" "Lemmy/0.19.3; +https://lemmy.world"
    
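    In case it helps compare notes, here’s a rough snippet for tallying those status codes. It assumes the container output has been saved to a file, called proxy.log here, which is a placeholder name on my part.

    import re
    from collections import Counter

    # Count response codes for POST /inbox in an nginx access log
    # (proxy.log is a placeholder for wherever the Docker output was saved).
    pattern = re.compile(r'"POST /inbox HTTP/1\.1" (\d{3})')

    counts = Counter()
    with open("proxy.log") as log:
        for line in log:
            match = pattern.search(line)
            if match:
                counts[match.group(1)] += 1

    print(counts)  # e.g. Counter({'200': 4, '400': 4})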


  • Old posts aren’t federated. As new posts roll in, they’ll start appearing.

    Oh, that makes sense, and explains why I have content from the ones I subscribed to a couple of days ago but not the ones I just added. And your reply showed up on my server, and I’m posting and replying from my server, so things do seem to be flowing.

    I set up my lemmy log to go to a file as opposed to the console. Then it’s searchable, archivable, etc.

    Yeah, I was going to look into doing that as well. Are there any docs on how to do it, or is it something you did at the Docker level?