• Aceticon@lemmy.world · 20 points · 5 months ago

      It’s even worse then: that means it’s probably a race condition, and do you really want to run the risk of having it randomly fail in Production or during an important presentation? Also, race conditions are generally way harder to figure out and fix than the more “reliable” kind of bug.
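
      For anyone who hasn’t hit one yet, here’s a minimal sketch in Python (purely illustrative, not from this thread) of why such a bug only fails sometimes: an unsynchronized read-modify-write whose result depends on thread scheduling.

      ```python
      import threading
      import time

      counter = 0

      def increment():
          global counter
          value = counter        # read
          time.sleep(0.001)      # widen the race window so the bug shows up
          counter = value + 1    # write back; other threads' updates are lost

      threads = [threading.Thread(target=increment) for _ in range(10)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()

      # All ten threads usually read 0 before anyone writes back, so this
      # prints far less than the expected 10 -- but not always the same number.
      print(f"counter = {counter}, expected 10")
      ```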

      • dev_null@lemmy.ml · 1 point · 5 months ago

        Or it was an issue with code generation, or something in the environment changed.

    • Octopus1348@lemy.lol · 7 points · edited · 5 months ago

      There was that kind of bug in Linux, and a person restarted it idk how many times (iirc around 2k) just to debug it.

    • KairuByte@lemmy.dbzer0.com · 2 points · 5 months ago

      Legit happens without a race condition if you’ve improperly linked libraries that need to be built in a specific order. I’ve seen more than one solution whose build had to be run multiple times, or built project by project, in order to work. A sketch of what that looks like is below.
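
      A toy sketch of that failure mode in Python (hypothetical project names, not any real build system): the projects are built in a fixed order that disagrees with the dependency graph, so each pass gets a bit further and rerunning eventually succeeds.

      ```python
      built = set()

      # 'app' depends on 'core', but the fixed build list has it first.
      projects = [
          ("app", {"core"}),
          ("core", set()),
      ]

      def build_all():
          ok = True
          for name, deps in projects:
              if deps <= built:          # all dependencies already built?
                  built.add(name)
                  print(f"built {name}")
              else:
                  print(f"FAILED {name}: missing {deps - built}")
                  ok = False
          return ok

      run = 1
      while not build_all():             # run 1 builds core; run 2 builds app
          run += 1
      print(f"succeeded on run {run}")
      ```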

      • abraxas@sh.itjust.works · 2 points · edited · 5 months ago

        Isn’t that the definition of a race condition, though? In this case the builds are racing, and success depends on them happening to finish at the right times.

        Or do you mean “builds 1 and 2 kick off at the same time, but build 1 fails unless build 2 is done; if you run it twice, build 2 does ‘no change’ and you’re fine”?

        Then that’s legit.

        • KairuByte@lemmy.dbzer0.com · 1 point · 5 months ago

          Yup, it’s that second one. 0% chance of success until all dependencies are built, then the final run has a 100% chance to work.
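
          The usual fix for that rerun loop is to derive the build order from the dependency graph instead of a fixed list. A sketch with the same hypothetical projects, using Python’s standard graphlib:

          ```python
          from graphlib import TopologicalSorter

          # Hypothetical projects: each maps to the set of projects it depends on.
          deps = {"app": {"core"}, "core": set()}

          # static_order() yields dependencies before dependents, so a single
          # pass in this order always succeeds.
          for name in TopologicalSorter(deps).static_order():
              print(f"building {name}")  # core first, then app
          ```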