• nyan@lemmy.cafe · 13 days ago

    From the viewpoint of an observing human, what’s the difference between a robot saying something it believes to be true but isn’t (very common with current software, and unlikely to change even in the distant future; see “humans, purportedly intelligent”) and lying on purpose? If it lies on purpose, does the intent to lie come from the robot itself, or from its programmers? Ultimately, the presence and source of intent seem to be the only difference. Regardless, a robot will never be right about everything it says, so its statements have to be weighed much as one would weigh statements coming from a human.

    TL;DR: I expect robots to tell me untruths from time to time regardless of how I feel about it.