Hey all, I am in the process of testing several models for fine-tuning, and this question cropped up.

I would like to add new facts to a foundation model and then train it for instruction following. The problem is, I will regularly have new data to add. I was wondering whether there is a chance that I could train a single LoRA for the instruction tuning and reapply it each time I finish a new facts fine-tune?
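
To make the idea concrete, here is a sketch of the workflow I have in mind, using Hugging Face PEFT. The model and adapter paths are placeholders, and I have not verified this end to end:

```python
# Sketch only: model/adapter paths are placeholders, not a tested recipe.
from transformers import AutoModelForCausalLM
from peft import PeftModel

# 1. Bake the latest "new facts" LoRA into the base weights.
base = AutoModelForCausalLM.from_pretrained("my-base-model")
with_facts = PeftModel.from_pretrained(base, "loras/facts-latest")
merged = with_facts.merge_and_unload()  # facts are now in the weights

# 2. Re-apply the one instruction-tuning LoRA on top,
#    without retraining it after each facts update.
model = PeftModel.from_pretrained(merged, "loras/instruction-tuning")
```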

  • Turun@feddit.de · 1 year ago

    At least in Stable Diffusion, LoRAs are composable: you can combine different LoRAs and have both effects applied to the resulting image.
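
    The LLM tooling has an equivalent: with Hugging Face PEFT you can load several LoRAs onto one model and merge them into a single adapter. The paths and adapter names below are made up for illustration:

    ```python
    # Illustrative only: adapter paths/names are invented.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("my-base-model")
    model = PeftModel.from_pretrained(base, "loras/facts", adapter_name="facts")
    model.load_adapter("loras/instruct", adapter_name="instruct")

    # Combine both adapters into one; "linear" assumes both LoRAs
    # were trained with the same rank.
    model.add_weighted_adapter(
        adapters=["facts", "instruct"],
        weights=[1.0, 1.0],
        adapter_name="combined",
        combination_type="linear",
    )
    model.set_adapter("combined")
    ```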

    • keepthepace@slrpnk.net (OP) · 1 year ago

      Yes, but my understanding is that they are commutative (i.e. the order in which they are applied does not matter)? If so, it looks like a “facts-adding” LoRA would induce the forgetting of formatting either way.
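
      My reasoning written out: merging two LoRAs just adds their low-rank updates $\alpha_i B_i A_i$ to the frozen weight matrix $W$, and addition commutes, so the merge order should not matter:

      $$W' = W + \alpha_1 B_1 A_1 + \alpha_2 B_2 A_2 = W + \alpha_2 B_2 A_2 + \alpha_1 B_1 A_1$$

      (That only covers merging already-trained LoRAs; training one LoRA on top of an already-merged model is a different operation and would not commute.)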

      And I am especially curious whether a facts-LoRA + an instructions-LoRA results in a model that can use the new facts when following instructions, or not. I’ll run experiments, but I would have loved it if people here already knew.