• lily33@lemm.ee
    11 months ago

    Is that effect any different than the one you’d get if you have biased references, or biased search results, when doing the research for your writing?

    • Mardukas@lemmy.world
      11 months ago

      Well of course it will be different. One has to do with another author publishing questionable data, and the other with misunderstanding someone else’s published data. In this case, the use of AI in writing is implied to result in authors not being in control of what they themselves publish.

      All of these are bad but do not necessarily arise on purpose. But let’s not add ways to muddy the already muddied waters of science.

    • Elephant0991@lemmy.bleh.au
      11 months ago

      Those seem like questions for more research.

      I bet it’s more pernicious because it is easy to incorporate AI suggestions. If you do your own research, you may have to think a bit about whether the references/search results are bad, and you still have to put the info in your own words so that you don’t offend the copyright gods. With the AI’s help, well, the spellings are good, the sentences are perfectly formed, the information is plausible, it’s probably not a straightforward copy — why not just accept it?

      • lily33@lemm.ee
        11 months ago

        I’ve just read the abstract of the study - but it doesn’t seem to be about people mindlessly copying the AI and producing biased text as a result. Rather, it’s about people seeing the points the AI makes, thinking “Good point!” and adjusting their own opinion accordingly.

        So it looks to me like it’s just the effect of some viewpoints getting more exposure.