Not a member, but thought I’d share this for the artists here in case they haven’t seen the news.

  • RightHandOfIkaros@lemmy.world · 11 points · 7 months ago

    Only prolonging the inevitable with this. Kinda like DRM in video games, this is going to do literally nothing to the people who want the data, except maybe be a minor inconvenience for a month or two.

    Wasn’t the last attempt at this defeated by only 16 lines of Python code?

    • kromem@lemmy.world · 7 points · 7 months ago

      It’s not delaying anything. It won’t work outside of the paper.

      If you draw fantasy cats and bias them towards pointillism dogs, while someone else draws cubist cats and biases theirs towards anime dogs, the biasing data gets diluted because multiple axes are being flipped at once.

      And this assumes all artists drawing cats even agree on biasing towards dogs, rather than some cat artists biasing towards horses and others towards cows, which again dilutes any signal into noise.

      It had a measurable effect in what were effectively artificial lab conditions, which gave the authors a clickbaity pitch for the paper, but in the real world this is completely worthless right out of the gate.
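      The dilution argument can be sketched with a toy simulation. Represent each poisoned image as a unit-length "shift" in a small concept space (dimensions, counts, and the vector model are all made up for illustration, not anything from the Nightshade paper): if every artist shifts towards the same target, the net bias survives averaging; if artists pick different targets, the shifts mostly cancel.

```python
import math
import random

random.seed(0)

D = 64     # dimensionality of a toy "concept space" (made-up number)
N = 5000   # number of poisoned images in the aggregate dataset (made-up)

def rand_unit():
    """A random direction in the toy concept space, e.g. 'towards anime dogs'."""
    v = [random.gauss(0.0, 1.0) for _ in range(D)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def mean_magnitude(shifts):
    """Strength of the net bias left after averaging over the whole dataset."""
    mean = [sum(s[i] for s in shifts) / len(shifts) for i in range(D)]
    return math.sqrt(sum(x * x for x in mean))

dog = rand_unit()
coordinated = [dog] * N                       # every artist poisons cat -> dog
scattered = [rand_unit() for _ in range(N)]   # every artist picks a different target

print(f"coordinated net shift: {mean_magnitude(coordinated):.3f}")  # full strength
print(f"scattered net shift:   {mean_magnitude(scattered):.3f}")    # near zero
```

      With uncoordinated targets the average shift shrinks roughly like 1/√N, which is the "signal becomes noise" claim in miniature.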

    • Mx Phibb (OP) · 1 point · 7 months ago

      No idea, but it’s worth a shot; the worst it can do is nothing.

  • KISSmyOS@lemmy.world · 6 points · 7 months ago

    If this takes off, it will be illegal within a year.
    There’ll be a new Digital Sabotage Act, written “with input from industry leaders” and voted on without debate or enough time to read the bill.

    • AphoticDev@lemmy.dbzer0.com · 7 points · 7 months ago

      If this takes off, it will be bypassed within a month. Adversarial training is something Stable Diffusion users already invented, and we use it to make our artwork better by poisoning the dataset to teach the network what a wrong result looks like. They reinvented our wheel.
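      The "teach the network what a wrong result looks like" idea can be sketched as a contrastive score: rate a candidate by how close it sits to deliberately collected good examples versus known-bad ones. This is only a toy stand-in (plain 2-D vectors and made-up numbers) for how negative examples are used in Stable Diffusion workflows, not actual diffusion code:

```python
import math
import random

random.seed(1)

def centroid(points):
    """Component-wise mean of a set of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy "feature vectors": good results cluster near (1.0, 1.0),
# deliberately-labeled bad results cluster near (1.4, 0.6).
good = [[random.gauss(1.0, 0.3), random.gauss(1.0, 0.3)] for _ in range(200)]
bad = [[random.gauss(1.4, 0.3), random.gauss(0.6, 0.3)] for _ in range(200)]

g, b = centroid(good), centroid(bad)

def score(x):
    # Higher is better: close to the good centroid, far from the bad one.
    # Without the bad examples, there would be nothing to push away from.
    return dist(x, b) - dist(x, g)

print(score([1.0, 1.0]))   # in the good region -> positive
print(score([1.4, 0.6]))   # in the bad region  -> negative
```

      The point of the sketch: the "wrong" examples are not discarded, they become the repulsive half of the training signal, which is why a poisoned dataset can be turned into a quality filter rather than a trap.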

    • kromem@lemmy.world · 4 points · 7 months ago

      It’s fine. Outside of laboratory conditions it won’t work anyway (diverse image “reverse labels” would erase the signal-to-noise ratio of the biased pixels across aggregate real-world training data), so there’s no need to stress about any kind of reaction to it.

  • sgbrain7@lemm.ee · 1 point · 6 months ago

    I’ve heard of this recently! I like how they use the term “poison” because it makes me imagine that AI non-art is a bunch of evil aristocrats and Nightshade is the cyanide we’re slipping into their beverages.

    • Mx Phibb (OP) · 2 points · 6 months ago

      Chuckles, "I like that, but I suspect they’re calling it poison because of the phrase ‘poisoning the well’."