An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

  • GenderNeutralBro@lemmy.sdf.org · 70 points · 1 year ago

    This is not surprising if you follow the tech, but I think the signal boost from articles like this is important, because there are constantly new people just learning how AI works, and it’s very important to understand the bias embedded in these models.

    It’s also worth actually learning how to use these tools. People seem to expect them to be magic. They are not.

    If you’re going to try something like this, you should describe yourself as clearly as possible. Describe your eye color, hair color/length/style, age, expression, angle, and obviously race. Basically, describe any feature you want it to retain.
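    For instance, a prompt along those lines for a case like the one in the article might read (purely illustrative wording):

    ```
    professional LinkedIn headshot of a young East Asian woman, mid-20s,
    shoulder-length black hair, brown eyes, slight smile, business attire,
    facing the camera, soft studio lighting
    ```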

    I have not used the specific program mentioned in the article, but the ones I have used simply do not work the way she’s trying to use them. The phrase she used, “the girl from the original photo”, would have no meaning in Stable Diffusion, for example (which I’d bet Playground AI is based on, though they don’t specify). The img2img function makes a new image, with the original as a starting point. It does NOT analyze the content of the original or attempt to retain any features not included in the prompt. There’s no connection between the prompt and the input image, so “the girl from the original photo” is garbage input. Garbage in, garbage out.
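    To make that concrete, here’s a minimal sketch of an img2img call using the open-source diffusers library, assuming (as guessed above) that Playground AI wraps something like Stable Diffusion; the model name, file names, and parameters are illustrative:

    ```python
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Load a Stable Diffusion img2img pipeline (illustrative model choice).
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init_image = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

    # The model never "reads" the photo, so phrases like "the girl from the
    # original photo" have no referent. Every feature you want kept has to be
    # spelled out in the prompt, as in the example above.
    prompt = ("professional LinkedIn headshot of a young East Asian woman, "
              "mid-20s, shoulder-length black hair, brown eyes, business attire")

    # strength controls how much of the input survives: low values keep the
    # rough layout and colors, high values mostly regenerate from the prompt.
    result = pipe(prompt=prompt, image=init_image, strength=0.5).images[0]
    result.save("headshot.png")
    ```

    Notice that the input image only seeds the layout; everything semantic comes from the prompt string.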

    There are special-purpose programs designed for exactly this task of making photos look professional, which presumably go to the trouble of analyzing the original, inferring these features, and passing them through to the generator so they’re retained. (I haven’t tried them personally, so perhaps I’m giving them too much credit…)
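    If I had to guess, such a tool’s pipeline looks something like the sketch below: caption the input photo first, then fold the caption into the generation prompt so those features are passed through explicitly. Entirely speculative; the model choices are illustrative:

    ```python
    from PIL import Image
    from transformers import BlipProcessor, BlipForConditionalGeneration

    # Caption the original photo to extract its visible features
    # (illustrative captioning model).
    processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
    captioner = BlipForConditionalGeneration.from_pretrained(
        "Salesforce/blip-image-captioning-base"
    )

    image = Image.open("selfie.jpg").convert("RGB")
    inputs = processor(image, return_tensors="pt")
    out = captioner.generate(**inputs, max_new_tokens=40)
    caption = processor.decode(out[0], skip_special_tokens=True)

    # The caption (e.g. "a young woman with dark hair and glasses") becomes
    # part of the prompt, so those features survive the generation step.
    prompt = f"professional headshot, {caption}, business attire, studio lighting"
    ```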

    • CoderKat@lemm.ee · 23 points · 1 year ago

      If it’s Stable Diffusion img2img, then totally, this is a misunderstanding of how that works. The input image is only a noisy starting point, so it preserves rough composition and colors (variants like depth2img additionally condition on things like depth). The text-based prompt that the user provides is otherwise everything.

      That said, these kinds of AI are absolutely still biased. If you tell the AI to generate a photo of a professor, it will generate an old white dude maybe 90% of the time. The models inherit the biases of their training data, which in turn reflects society’s biases (or really, the biases of whatever subset of society produced that training data).

      Some AI systems actually try to counter this bias a bit by injecting details into your prompt if you don’t mention them. E.g., if you just say “photo of a professor”, it might quietly change your prompt to “photo of a female professor” or “photo of a Black professor”, which I think is a great way to tackle this. I’m not sure how widespread this approach is or how effective the prompt manipulation is.
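      For what it’s worth, a naive sketch of that kind of prompt injection might look like this (the descriptor pool and trigger words are made up for illustration):

      ```python
      import random

      # Hypothetical descriptor pool the injector samples from.
      DESCRIPTORS = ["female", "male", "Black", "East Asian", "South Asian",
                     "Hispanic", "white", "middle-aged", "elderly"]

      # If the user already specified demographics, leave the prompt alone.
      DEMOGRAPHIC_WORDS = {"man", "woman", "male", "female", "black", "white",
                           "asian", "hispanic", "latino", "latina", "old", "young"}

      def augment(prompt: str) -> str:
          if set(prompt.lower().split()) & DEMOGRAPHIC_WORDS:
              return prompt
          # Naive: only rewrites prompts of the form "photo of a <subject>";
          # a real system would rewrite more robustly.
          return prompt.replace("photo of a",
                                f"photo of a {random.choice(DESCRIPTORS)}", 1)

      print(augment("photo of a professor"))
      # e.g. "photo of a Black professor"
      ```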

    • Blackmist@feddit.uk · 3 points · 1 year ago

      I’ve taken a look at the website for the one she used and it looks like a cheap crap toy. It’s free, which is the first clue that it’s not going to be great.

      Not a million miles from the old “photo improvement” things that just run a bunch of simple filters and make over-processed HDR crap.