Hey everyone, I’ve been searching for a bit on getting local LLM inference to process legal paperwork (I am not a lawyer, I just have trouble getting through large documents to figure out my rights). This would help me have conversations with my landlord and various other people who withhold crucial information, such as your rights during a unit inspection, or accuse you of things you did not do, etc.

Given that there are thousands of pre-trained models, would it be better to train a small model myself on an RTX 4090 or a daisy chain of other GPUs? Is there a legal archive somewhere that I’m just not seeing, or where should I direct my energy? I think lots of us could benefit from a pocket law reference that can serve as an aid for deciding what to do next.

  • dartos · 9 points · edited · 10 months ago

    Generally, training an LLM is a bad way to provide it with information. “In-context learning” is probably what you’re looking for: basically, pasting the relevant info and documents into your prompt.

    You might try fine-tuning an existing model on a large dataset of legalese, but then it’ll be more likely to generate responses that sound like legalese, which defeats the purpose.

    TL;DR: Use in-context learning to provide information to an LLM. Use training and fine-tuning to change how the language the LLM generates sounds.
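    In code, in-context learning is nothing more than string assembly before the model is called. A minimal sketch, assuming a hypothetical local model behind some `llm_complete()` call (not shown); the lease excerpt and wording are purely illustrative:

```python
# In-context learning sketch: instead of training, the relevant document
# text is pasted directly into the prompt sent to the model.

def build_prompt(document: str, question: str) -> str:
    """Assemble a prompt that supplies the document as context."""
    return (
        "You are assisting a tenant with a housing question.\n"
        "Answer using ONLY the document below. If the answer is not "
        "in the document, say so.\n\n"
        f"--- DOCUMENT ---\n{document}\n--- END DOCUMENT ---\n\n"
        f"Question: {question}\n"
    )

lease_excerpt = (
    "Section 4: The landlord must give 24 hours written notice "
    "before any inspection of the unit."
)
prompt = build_prompt(
    lease_excerpt, "How much notice is required before an inspection?"
)
print(prompt)  # this string would then be sent to whatever local model you run
```

    The point is that the model never needs to have been trained on your lease; it only needs the relevant text inside its context window at inference time.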

    • inspxtr@lemmy.world · 2 points · 10 months ago

      I know nothing about “in-context learning” or legal stuff, but intuitively, don’t legal documents tend to reference each other, especially the more complicated ones? If so, how would you apply in-context learning if you don’t know which documents may be relevant?

      • dartos · 5 points · edited · 10 months ago

        Yes. You can craft your prompt so that if the LLM doesn’t know about a referenced legal document, it asks for it; you can then paste the relevant section of that document into the prompt to provide it with that information.
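        One way to set that up is an instruction block telling the model to request, rather than guess, any cited document it hasn’t seen. A sketch with entirely hypothetical wording and sentinel token:

```python
# Instruct the model to flag missing referenced documents instead of
# hallucinating their contents. The NEED_DOCUMENT sentinel is an
# illustrative convention, not a standard.

ASK_FOR_SOURCES = (
    "If the text cites another document (a statute, ordinance, or lease "
    "section) that has not been pasted into this conversation, do not "
    "guess its contents. Instead, reply exactly: "
    "NEED_DOCUMENT: <name of the missing document>."
)

def make_prompt(context: str, question: str) -> str:
    """Prepend the ask-for-sources rule to the pasted context and question."""
    return f"{ASK_FOR_SOURCES}\n\nContext:\n{context}\n\nQuestion: {question}"
```

        Your wrapper script can then watch replies for the sentinel, fetch the named document, paste it in, and re-ask.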

        I’d encourage you to look up some info on prompting LLMs and LLM context.

        They’re powerful tools, so it’s good to really learn how to use them, especially for important applications like legalese translators and rent negotiators.

        • inspxtr@lemmy.world · 1 point · 10 months ago

          Thanks for your answer! Is this the same as or different from indexing to provide context? I’ve seen people ingesting large corpora of documents/structured data, e.g. with LlamaIndex. Is that an alternative way to provide context, or something similar?

          • dartos · 2 points · 10 months ago

            Indexing tools like LlamaIndex use LLM-generated embeddings to “intelligently” search for documents similar to a search query.

            Those documents are usually fed into an LLM as part of the prompt (i.e. context).

    • gronjo45@lemm.ee (OP) · 2 points · 10 months ago

      I’ll read more into “in context learning” and see if I can figure out something useful from the vast corpora of datasets out there.

      I guess I can’t relegate my thinking entirely to a mathematically optimized black box, but one can hope it could help point me in the right direction to understand my rights in my housing complex.