These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

“Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto”, per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

“(Jeff) Macpherson is a director and co-founder at Xagency.AI”, a tech startup that appears to have been announced on LinkedIn two months ago and does, uh, lots of stuff with AI (see their wild services page). The founders section lists details beyond J.M.’s “over 7 years in the tech sector” which are interesting to read in light of J.M.’s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

“Illustrator Martin Deschatelets”, whose employment prospects are dimming this year (and who knows a bunch of people in the same situation), and who per LinkedIn has worked on some nifty things.

“Ottawa economist Armine Yalnizyan”, per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the “we” who have to adapt here?

AI is apparently “something that can tell you how many cows are in the world” (J.M.). Detecting a lack of results validation here again.

“At the end of the day that’s what it’s all for. The efficiency, the productivity, to put profit in all of our pockets”, from J.M.

“You now have the opportunity to become a Prompt Engineer”, from J.M. to the author and illustrator. (It’s worth watching the video to listen to this person.)

Me about the article:

I’m feeling that same underwhelming “is this it” bewilderment again.

Me about the video:

Critical thinking and ethics and “how software products work in practice” classes for everybody in this industry please.

  • zogwarg@awful.systems · 9 months ago

    I wouldn’t be so confident in replacing junior devs with “AI”:

    1. Even if it did work without wasting time, it’s unsustainable: senior devs aren’t born from the void and will eventually move on or retire, so junior devs need somewhere to acquire those skills.
    2. A junior dev willing to engage their brain would still iterate through to the correct implementation more cheaply (and potentially faster) than a senior dev spending time reviewing bullshit implementations and making arcane attempts at unreliable “AI” prompting.

    It’s copy-pasting from Stack Overflow all over again. The main consequence I see of LLM-based coding assistants is a new source of potential flaws to watch out for when doing code reviews.

    • Soyweiser@awful.systems · 9 months ago

      Isn’t the lack of junior positions already a problem in a few parts of the tech industry? Due to the pressures of capitalism (drink!) I’m not sure it will be as easy as this.

      • zogwarg@awful.systems · 9 months ago

        I said I wouldn’t be confident about it, not that enshittification would not occur ^^.

        I oscillate between optimism and pessimism frequently, and for sure many companies will make bad doo-doo decisions. Ultimately, trying to learn the grift is not the answer for me; I’d rather work for a company with at least some practical sense and a pretense at some form of sustainability.

        The mood comes; please forgive the following indulgent poem:
        Worse before better
        Yet comes the AI winter
        Ousting the fever

      • Aceticon@lemmy.world · 9 months ago

        The outsourcing trend wasn’t good for junior devs in the West, mainly in English-speaking countries (it was great for junior devs in India, though).

    • Aceticon@lemmy.world · 9 months ago

      It’s worse than “copy-pasting from Stack Overflow” because the LLM loses all of the answer’s trustworthiness context (i.e. the counts and ratios of upvotes and downvotes, and other people’s comments).

      That thing is trying to find the text tokens of an answer nearest to the text tokens of your prompt question in its n-dimensional text-token distribution space (I know it sounds weird, but that’s roughly how NNs work). Maybe you’re lucky and the highest-probability combination of text tokens sits right there “near” your prompt question’s text tokens (in which case straight googling would probably have worked too); maybe you’re not lucky and it picks up probabilistically close chains of text tokens that are not logically related; or maybe you’re really unlucky, your prompt question’s text tokens land in a sparsely populated zone of the n-dimensional text space, and what comes back starts from a barely related nearby cluster.
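      A rough sketch of that nearest-neighbour intuition, not the actual model internals: embed the prompt and a few candidate answers as vectors and rank the answers by cosine similarity. The sentences and the tiny 3-dimensional vectors are invented for illustration; a real LLM works over token sequences in a far higher-dimensional space.

      ```python
      import numpy as np

      def cosine(a, b):
          """Cosine similarity between two vectors."""
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      # Hypothetical embeddings: a prompt and three candidate answers.
      prompt = np.array([0.9, 0.1, 0.2])
      candidates = {
          "well-covered, genuinely related answer": np.array([0.8, 0.2, 0.1]),
          "superficially similar, off-topic answer": np.array([0.6, 0.6, 0.1]),
          "barely related nearby cluster": np.array([0.2, 0.9, 0.7]),
      }

      # Rank candidates by closeness to the prompt in this made-up space.
      for text, vec in sorted(candidates.items(), key=lambda kv: -cosine(prompt, kv[1])):
          print(f"{cosine(prompt, vec):.2f}  {text}")

      # If the prompt lands in a sparsely populated region, the "nearest" candidate
      # can still win this ranking while being only loosely related to the question.
      ```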

      But that’s not even the biggest problem.

      The biggest problem is that there is no real error margin in the output: the thing will give you the most genuine, professional-looking piece of output just as readily for what might be a very highly correlated chain of text tokens as for an association of text tokens that has only a low relation to your prompt question’s text tokens.
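
      A minimal, made-up illustration of that missing error margin: a softmax over logits always comes out as a clean probability distribution with a clear “winner”, whether the underlying logits reflect a strong correlation or a spurious association. The numbers are invented; no real model is being run here.

      ```python
      import numpy as np

      def softmax(logits):
          """Normalise logits into a probability distribution."""
          e = np.exp(logits - logits.max())
          return e / e.sum()

      # Pretend next-token logits in two situations.
      well_grounded  = np.array([6.0, 2.0, 1.0, 0.5])   # strongly correlated continuation
      spurious_match = np.array([2.1, 0.3, -0.5, 0.9])  # loosely related association

      print(softmax(well_grounded).round(3))   # peaked distribution, looks confident
      print(softmax(spurious_match).round(3))  # still has a clear top pick -- nothing
                                               # in the output flags it as unreliable
      ```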

    • wagesj45@kbin.social · 9 months ago

      “who don’t know how to implement it”

      He didn’t say anything about replacing them. The AI will certainly be able to do certain tedious tasks that currently get farmed out to junior devs, especially under the supervision of a developer. Junior devs who refuse to learn how to use and implement the AI will probably get left behind.

      AI won’t replace anyone for a long time (probably). What it will do is bring about a new paradigm for how we work, and people who don’t get on board will be left behind, like all the boomers who refuse to learn how to open PDF files, except it’ll happen much quicker than the analogue-to-digital transition did and the people affected will be younger.