:sicko-blur:

    • laziestflagellant [they/them]@hexbear.net

      It can at least protect individual artists from having their future work made into a LoRA, which is happening to basically every NSFW artist in existence at the moment.

        • laziestflagellant [they/them]@hexbear.net

          Hm, yeah, reading more about it, it doesn’t function the same way as Glaze, and it’s possible it wouldn’t have a poisoning effect on a LoRA. It’s also unclear whether the effect could be subverted by upscaling and downscaling the images before putting them into training data.
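          For what it’s worth, the upscale/downscale round-trip being speculated about is trivial to perform; whether it actually strips the perturbation is the open question. A minimal sketch with Pillow (filenames are hypothetical placeholders):

          ```python
          from PIL import Image

          img = Image.open("possibly_poisoned.png")
          w, h = img.size

          # Upscale 2x, then back down to the original size: resampling
          # redistributes pixel values, which is the hypothesized way an
          # adversarial perturbation might get washed out.
          roundtrip = img.resize((2 * w, 2 * h), Image.LANCZOS).resize((w, h), Image.LANCZOS)
          roundtrip.save("rescaled_for_training.png")
          ```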

        • laziestflagellant [they/them]@hexbear.net

          A LoRA (‘Low-Rank Adaptation’) is a small add-on model, generally created for the Stable Diffusion family of image generators. Unlike the main image generator models, which are trained on datasets of hundreds of millions of images, a LoRA can be trained on a database of a few hundred images, or even a single-digit number of images.
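          The ‘low rank’ part is why they can be so small: instead of fine-tuning a full weight matrix, a LoRA learns two skinny matrices whose product gets added on top of the frozen weights. A tiny PyTorch sketch (the dimensions and rank here are made-up illustration values):

          ```python
          import torch

          d, k, r = 768, 768, 8         # layer dims and LoRA rank (r << d, k)
          W = torch.randn(d, k)         # frozen pretrained weight, never updated
          B = torch.zeros(d, r)         # trainable "down" factor, starts at zero
          A = torch.randn(r, k) * 0.01  # trainable "up" factor
          x = torch.randn(k)            # an activation flowing through the layer

          # Forward pass: base output plus the learned low-rank correction.
          # Only A and B (r*(d+k) numbers) need training, not all d*k of W.
          y = W @ x + B @ (A @ x)
          ```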

          They’re typically applied on top of the main Stable Diffusion models in order to get results of a specific subject (e.g. a LoRA trained on a specific anime character) or results in a specific art style (e.g. a LoRA trained on hypothetical porn artist Biggs McTiddies’ body of work).
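          For a sense of how that looks in practice, here’s a minimal sketch of loading a LoRA on top of a base checkpoint with Hugging Face’s diffusers library (the LoRA directory, file name, and prompt are hypothetical placeholders):

          ```python
          import torch
          from diffusers import StableDiffusionPipeline

          # The multi-GB base model does the heavy lifting...
          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5",
              torch_dtype=torch.float16,
          ).to("cuda")

          # ...and the LoRA layers a small set of low-rank weight deltas on top.
          pipe.load_lora_weights(
              "path/to/lora_dir", weight_name="hypothetical_style_lora.safetensors"
          )

          image = pipe("a portrait in the trained style").images[0]
          image.save("out.png")
          ```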

      • drhead [he/him]@hexbear.net

        I wouldn’t be confident about that. Usually people training a LoRA will train the text encoder as well as the U-Net that does the actual diffusion process. If you pass the model images that visually look like cats, are labeled as “a picture of a cat”, and that the text encoder is aligned towards reading as “a picture of a dog” (the part that Nightshade manipulates), then in theory you’d be reinforcing what pictures of cats look like to the text encoder, and it would end up moving the vectors of “picture of a cat” and “picture of a dog” until they are well clear of each other.

        Nightshade essentially relies on being able to line the U-Net up with the wrong spots on the text encoder, which shouldn’t happen if the text encoder is allowed to move as well.
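        To make that setup concrete, here’s a hedged sketch of “training the text encoder as well as the U-Net” using LoRA adapters from diffusers + peft (the target module names and hyperparameters are illustrative assumptions, not a real training recipe):

        ```python
        import torch
        from diffusers import StableDiffusionPipeline
        from peft import LoraConfig, get_peft_model

        pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

        # LoRA adapters on the U-Net's attention projections...
        unet = get_peft_model(
            pipe.unet,
            LoraConfig(r=8, lora_alpha=8, target_modules=["to_q", "to_k", "to_v"]),
        )
        # ...and on the text encoder's attention projections, so the caption
        # embeddings can move during training instead of staying frozen.
        text_encoder = get_peft_model(
            pipe.text_encoder,
            LoraConfig(r=8, lora_alpha=8, target_modules=["q_proj", "k_proj", "v_proj"]),
        )

        # One optimizer over both sets of adapter weights: if poisoned "cat"
        # images pull the U-Net toward dog-like features, gradients can also
        # push the encoder's "cat" vector away from "dog", rather than the
        # U-Net silently absorbing the mismatch.
        params = [
            p
            for p in (*unet.parameters(), *text_encoder.parameters())
            if p.requires_grad
        ]
        optimizer = torch.optim.AdamW(params, lr=1e-4)
        ```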