• OhNoMoreLemmy@lemmy.ml
    2 months ago

    Words might have meanings, but researchers have been using “AI” to refer to toy neural networks for longer than most people on Lemmy have been alive.

    This insistence that AI must refer to human-type intelligence is also such a weird distortion of language. Intelligence has never been a binary, human-level indicator. When people say that a dog is intelligent, or that an ant hive shows signs of intelligence, they don’t mean it can do what a human can. Why should AI be any different?

    • raspberriesareyummy@lemmy.world
      2 months ago

      You honestly don’t seem to understand. This is not about the extent of intelligence. This is about actual understanding: being able to classify a logical problem / a thought into concepts, and processing it based on the properties of those concepts and their relations to other concepts. Deep learning, as impressive as the results may appear, is not that. You just throw training data at a few billion “switches” and flip them until you get close enough to a desired result, without being able to predict what the outcome will be if a tiny change happens in the input data.
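
      The “flip switches until you get close enough” process can be sketched in a few lines. This is a deliberately toy hill-climbing loop, not any real training algorithm, and the data points are invented:

```python
import random

# Toy illustration of "flip switches until close enough": one weight
# ("switch") is nudged at random, and any nudge that reduces the error
# on the training data is kept. No concepts, no causal model -- just
# trial and error against a score.
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # invented pairs following y = 2x

def error(w):
    return sum((w * x - y) ** 2 for x, y in data)

w = random.uniform(-1, 1)  # the "switch", randomly initialised

for _ in range(10_000):
    candidate = w + random.uniform(-0.1, 0.1)  # flip the switch a little
    if error(candidate) < error(w):            # keep it if we got closer
        w = candidate

# w ends up close to 2.0, yet nothing in the loop ever "understood"
# that the data encodes doubling.
```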

      • OhNoMoreLemmy@lemmy.ml
        2 months ago

        I mean that’s a problem, but it’s distinct from the word “intelligence”.

        An intelligent dog can’t classify a logic problem either, but we’re still happy to call them intelligent.

        • raspberriesareyummy@lemmy.world
          2 months ago

          With regard to the dog and my description of intelligence, you are wrong: based on everything we know and observe, a dog (any animal, really) understands concepts and causal relations to varying degrees. That’s true intelligence.

          As for artificial intelligence: even the most basic software can have some kind of limited understanding that actually fits this attempt at a definition - it’s just that the functionality will be so limited that it appears pretty much useless.

          Think of it this way:

          - Deterministic algorithm -> has concepts and causal relations (but no consciousness, obviously); the results are predictable (deterministic) and can be explained.
          - Deep learning / neural networks -> has neither concepts nor causal relations built in; the results are statistical (based on previously observed outcomes) and cannot be explained - there’s actually a whole sector of science looking into how to model such systems’ way to a solution.

          Addition: the input/output filters of pattern-recognition systems are typically fed through quasi-deterministic algorithms to “smoothen” the results (make the output more grammatically correct, filter words, translate languages).
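
          The deterministic case above - explicit concepts, explainable results - can be sketched like this (all property names, rules, and labels are invented for illustration):

```python
# A deterministic classifier: the concepts ("lays eggs", "has fur")
# and the causal rules connecting them to the result are explicit,
# so every output is predictable and can be explained by pointing
# at the rule that fired.
def classify_animal(properties):
    if properties.get("lays_eggs") and properties.get("has_feathers"):
        return "bird"
    if properties.get("has_fur"):
        return "mammal"
    return "unknown"

print(classify_animal({"has_fur": True}))                          # mammal
print(classify_animal({"lays_eggs": True, "has_feathers": True}))  # bird
```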

          If you took enough deterministic algorithms, typically tailored to very specific problems & their solutions, and were able to use those as building blocks for a larger system that is able to understand a larger part of the environment, then you would get something resembling AI. Such a system could be tested (verified) on sample data, but it should not require training on data.

          Example: You could program image recognition using math to find certain shapes, which in turn - together with colour ranges and/or contrasts - could be used to associate object types, for which causal relations can be defined, upon which other parts of an AI could then base decision processes. This process has potential for error, but in a similar way to how humans mischaracterise the things we see - we, too, sometimes fail to recognise an object correctly.
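
          A minimal sketch of that example - deterministic shape classification from corner points, plus an explicit shape-and-colour rule table. Every threshold, colour name, and object label here is an invented assumption:

```python
import math

def interior_angle(a, b, c):
    """Angle at corner b (degrees), from plain vector math."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def classify_shape(corners):
    """Deterministic shape rule: vertex count plus an angle check."""
    if len(corners) == 3:
        return "triangle"
    if len(corners) == 4:
        angles = [interior_angle(corners[i - 1], corners[i], corners[(i + 1) % 4])
                  for i in range(4)]
        # 5-degree tolerance is an invented threshold
        return "rectangle" if all(abs(a - 90) < 5 for a in angles) else "quadrilateral"
    return "unknown"

def classify_object(shape, colour):
    """Explicit, explainable association from shape + colour to object type."""
    rules = {("rectangle", "red"): "brick",
             ("triangle", "yellow"): "warning sign"}
    return rules.get((shape, colour), "unrecognised")

shape = classify_shape([(0, 0), (4, 0), (4, 2), (0, 2)])
print(shape, "->", classify_object(shape, "red"))  # rectangle -> brick
```

          Such a pipeline can be verified directly on sample inputs, rule by rule, with no training step involved.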