• wewbull@feddit.uk · 17 points · 11 days ago

        An LLM’s “intent” is always to give you a plausible response, even if it doesn’t have the “knowledge”. The same behaviour in a human would be classed as lying, IMHO.

        • ContrarianTrail@lemm.ee · 5 points · 11 days ago

          But you wouldn’t call it lying if a person tells you something they think is true but turns out to be false. Lying means intentionally giving out false information. LLMs don’t have intentions.

          • ℍ𝕂-𝟞𝟝@sopuli.xyz · 6 points · 11 days ago

            Yeah I think it’s more fitting to use the term bullshitting.

            LLMs actually “know” that some of their answers have a low probability of being right, yet they give them out regardless and don’t mention their low confidence.
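
            A minimal sketch of what that claim looks like in practice, assuming a Hugging Face causal LM (the model name and prompt are placeholders, not anything from this thread): the model assigns a probability to every token it emits, and ordinary generation never surfaces that number to the user.

            ```python
            import torch
            from transformers import AutoModelForCausalLM, AutoTokenizer

            # Placeholders: any causal LM and prompt will do.
            model_name = "gpt2"
            tokenizer = AutoTokenizer.from_pretrained(model_name)
            model = AutoModelForCausalLM.from_pretrained(model_name)

            inputs = tokenizer("The capital of Australia is", return_tensors="pt")

            with torch.no_grad():
                out = model.generate(
                    **inputs,
                    max_new_tokens=5,
                    do_sample=False,
                    return_dict_in_generate=True,
                    output_scores=True,  # keep the per-step logits
                )

            # For each generated token, print the probability the model gave it.
            new_tokens = out.sequences[0][inputs.input_ids.shape[1]:]
            for tok_id, step_logits in zip(new_tokens, out.scores):
                p = torch.softmax(step_logits[0], dim=-1)[tok_id].item()
                print(f"{tokenizer.decode(tok_id)!r}  p={p:.3f}")

            # The tokens get emitted whether p is 0.95 or 0.05; the confidence
            # exists internally, but the caller never sees it unless they dig
            # it out like this.
            ```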

            • ContrarianTrail@lemm.ee · 1 point · 10 days ago

              Depends which definition of bullshit you use, I guess.

              Frankfurt determines that bullshit is speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn’t care whether what they say is true or false.

              Wiki

              • conciselyverbose@sh.itjust.works · 2 points · 10 days ago

                the bullshitter doesn’t care whether what they say is true or false.

                That’s another way to say “intent is irrelevant”.

                It’s also effectively the perfect definition of LLM output. Content for the sole purpose of looking the part with absolutely no consideration for reality.

                • ContrarianTrail@lemm.ee · 1 point · 10 days ago

                  …bullshit is speech intended to persuade…

                  Quoting out of context is not going to score you any points.

          • wewbull@feddit.uk · 1 point · 10 days ago

            …but if they don’t know, I expect them to say so. An LLM isn’t trustworthy until it can say “I don’t know”.
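
            A toy sketch of that behaviour (my own illustration, not something the thread proposes): gate the answer on per-token probabilities like the ones extracted above, and abstain whenever any of them falls below a floor. The 0.5 floor is arbitrary and uncalibrated.

            ```python
            def answer_or_abstain(answer: str, token_probs: list[float],
                                  floor: float = 0.5) -> str:
                """Return the answer only if every token cleared the floor."""
                # floor=0.5 is an illustrative threshold, not a calibrated one.
                if not token_probs or min(token_probs) < floor:
                    return "I don't know."
                return answer

            print(answer_or_abstain("Canberra", [0.92, 0.88]))  # -> "Canberra"
            print(answer_or_abstain("Sydney", [0.31, 0.12]))    # -> "I don't know."
            ```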

  • Deceptichum@quokk.au · 14 points · 11 days ago

    Yes.

    If it’s programmed by man, never assume it’s telling the truth.

    If it’s advanced and sentient, don’t trust it because it’s as trustworthy as any other person.

    Whether they’re programmed to lie or not doesn’t matter; nothing should be taken as fact so blindly.

  • nyan@lemmy.cafe · 11 points · 10 days ago

    From the viewpoint of an observing human, what’s the difference between the robot saying something which it believes to be true but isn’t (very common with current software, and unlikely to change even in the distant future, see “humans, purportedly intelligent”) and lying on purpose? If it lies on purpose, does the intent to lie come from the robot itself, or from its programmers? Ultimately, it seems like the presence and source of intent is the only difference. Regardless, a robot will never be right about everything it says, so its statements have to be weighed in a way similar to how one would weigh statements coming from a human.

    TL;DR: I expect robots to tell me untruths from time to time regardless of how I feel about it.

    • cheese_greater@lemmy.world · 4 points · 11 days ago (edited)

      The Simpsons already invented the greatest and only beauty machine anyone could ever need: the makeup gun.

  • Plopp@lemmy.world · 5 points · 11 days ago

    Depends on the circumstances. I’d be ok with it lying to me like “ooh baby yes I’m cumming”.

  • jwt@programming.dev · 5 points · 10 days ago

    “Absolute honesty isn’t always the most diplomatic nor the safest form of communication with emotional beings”.
    — TARS

  • Australis13@fedia.io · 4 points · 11 days ago

    Hell no. Do not give machines the ability to lie. We already have enough trouble with people using technology to deceive without it choosing to be deceptive on its own.

  • RangerJosie@lemmy.world · 3 points · 11 days ago

    I would.

    But I don’t want a caste of sentient slaves. I want partners with perspectives we meatbags couldn’t come up with on our own.

    • NaibofTabr@infosec.pub · 3 points · 11 days ago (edited)

      For a “robot” or other automated appliance to be able to perform tasks in the world, it must be able to perceive the world around it in some way. For it to interact with humans, it must perceive the humans (observe their actions, interpret their instructions, and understand their intentions). The direction our technology is headed in has shown us that any such device would primarily be a surveillance platform which collects data on its users. Any helpful tasks it might perform for the user would be the bait that gets them to swallow the hook, and not the device’s primary purpose.

      I don’t want a smart car or a smart TV, and definitely not a smart household appliance such as a refrigerator. Why would I want a self-propelled, self-aware surveillance platform under the control of a multi-billion-dollar corporation in my home? Or my workplace? Or anywhere?