These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

“Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto”, per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

“(Jeff) Macpherson is a director and co-founder at Xagency.AI”, a tech startup which does, uh, lots of stuff with AI (see their wild services page) and which appears to have been announced on LinkedIn two months ago. The founders section lists other details, apart from J.M.'s “over 7 years in the tech sector”, which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

“Illustrator Martin Deschatelets” whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.

“Ottawa economist Armine Yalnizyan”, per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the “we” who have to adapt here?

AI is apparently “something that can tell you how many cows are in the world” (J.M.). Detecting a lack of results validation here again.

“At the end of the day that’s what it’s all for. The efficiency, the productivity, to put profit in all of our pockets”, from J.M.

“You now have the opportunity to become a Prompt Engineer”, from J.M. to the author and illustrator. (It’s worth watching the video to listen to this person.)

Me about the article:

I’m feeling that same underwhelming “is this it” bewilderment again.

Me about the video:

Critical thinking and ethics and “how software products work in practice” classes for everybody in this industry please.

  • 200fifty@awful.systems · 9 months ago

    The problem is I guess you’d need a significant corpus of human-written stuff in that language to make the LLM work in the first place, right?

    Actually this is something I’ve been thinking about more generally: the “ai makes programmers obsolete” take sort of implies everyone continues to use javascript and python for everything forever and ever (and also that those languages never add any new idioms or features in the future I guess.)

    Like, I guess now that we have AI, all computer language progress is just supposed to be frozen at September 2021? Where are you gonna get the training data to keep the AI up to date with the latest language developments or libraries?

    • gerikson@awful.systems · 9 months ago

      Correct, it presumes that everyone will be eagerly learning new languages, and new features to existing languages, and writing about them, and answering questions about them, at the same rate as before, despite knowing that their work will be instantly ingested into LLM engines and resold as LLM output. At the same time, the audience for this sort of writing will disappear, because they’re all using LLMs instead of reading articles, blog posts, and Stackoverflow answers.

      It’s almost as if no one has thought this through[1].

      Relatedly: https://gerikson.com/m/2023/09/index.html#2023-09-27_wednesday_04


      [1] unless the designers of LLMs actually fell for their own hype and believe they actually think.

      • 200fifty@awful.systems · 9 months ago (edited)

        When you put it that way, I can’t help but notice the parallels to Google’s generative AI search feature, which suffers from a similar problem of “why would people keep writing posts as the source material for your AI if no one is gonna read it other than the AI web scraper”

      • Steve@awful.systems · 9 months ago

        This makes sense. It’s kind of like crypto being deflationary. There is no incentive to make something new just to feed it. Software has eaten the world and now all it can do is keep eating its own shit over and over

        • gerikson@awful.systems · 9 months ago

          Yes, with the difference that crypto was never realistically going to replace normal currency. There’s a real risk that LLM-generated content kills the open web, though. Both by flooding the zone with generated shit, and by destroying the motivation of humans to add to the inputs.

          • Steve@awful.systems · 9 months ago

            do you have a rough outline of the steps you see toward the killing of the open web? Do you mean the effect of not realistically being able to stop the scraping of content?

            • gerikson@awful.systems · 9 months ago

              Basically, the incentives to publishing on the open web will go away.

              Because all open search results will be LLM-generated, output on the open web will drown in it, and people will flee to silos that can do a better job at keeping that crap out (similar to the tip to add “reddit” to google search queries to avoid the increasingly SEO’d search results). Said silos will either charge money for LLM-ingesters, or forbid them altogether. People will post to silos, either because that’s where the content is, or because the silo will claim their input is safe from LLM harvesting[1].


              [1] outside nerd circles I don’t think this is a big selling point
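
              For what it’s worth, the “forbid them altogether” option already exists at the crawler level. A minimal robots.txt sketch (GPTBot and CCBot are real LLM-associated crawler user-agents; compliance is entirely voluntary, so this only deters the polite scrapers):

              ```
              # Block known LLM-training crawlers, allow everything else
              User-agent: GPTBot
              Disallow: /

              User-agent: CCBot
              Disallow: /

              User-agent: *
              Allow: /
              ```

              Which rather supports the point: silos can enforce this server-side, while the open web can only ask nicely.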