• CodeMonkey@programming.dev · 10 months ago

      All the time. Causes include:

      • Test depends on an external system (database, package manager)
      • Race conditions
      • The failing run cleared bad state (the test expects its data not to be in the system and cleans it up on exit, so the retry starts clean)
      • The failing run set up an unknown prerequisite (Build 2’s tests depend on changes from Build 1, but the build system built them out of order)
      • External forces messing with the test runner (test machine going to sleep or running out of resources)

      We call those “flaky tests” and only fail a build if a given test cannot pass after 2 retries. (We also flag those test runs for manual review.)
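
      For anyone curious, here’s a minimal sketch of that retry policy (plain Python, illustrative only; not any particular CI system’s API):

      ```python
      def run_with_retries(test, max_retries=2):
          """Run a test, retrying on failure; flag flaky passes for review."""
          flagged_for_review = []
          for attempt in range(1 + max_retries):
              if test():                        # test() returns True on pass
                  if attempt > 0:               # passed only after a retry: flaky
                      flagged_for_review.append(test.__name__)
                  return True, flagged_for_review
          return False, flagged_for_review      # never passed: fail the build
      ```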

  • attero@feddit.de · 9 months ago

    The definition of insanity is doing the same thing over and over and expecting different results.

    • Aceticon@lemmy.world · 10 months ago

      It’s even worse, then: that means it’s probably a race condition, and do you really want to run the risk of having it randomly fail in Production or during an important presentation? Also, race conditions are generally way harder to figure out and fix than the more “reliable” kind of bug.
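
      The classic culprit is an unsynchronized read-modify-write. A toy sketch in Python (whether a given run fails depends entirely on how the threads get scheduled):

      ```python
      import threading

      counter = 0

      def work():
          global counter
          for _ in range(100_000):
              counter += 1    # read, add, store: not atomic across threads

      threads = [threading.Thread(target=work) for _ in range(2)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()

      # Sometimes passes, sometimes fails, depending on the interleaving.
      assert counter == 200_000, f"lost updates: counter == {counter}"
      ```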

    • Octopus1348@lemy.lol · 10 months ago

      There was a bug like that in Linux, and someone restarted it idk how many times (iirc around 2k) just to debug it.

    • KairuByte@lemmy.dbzer0.com · 10 months ago

      Legit happens without a race condition if you’ve improperly linked libraries that need to be built in a specific order. I’ve seen more than one solution where the build had to be run multiple times, or built project by project, before it would work.

      • abraxas@sh.itjust.works · 9 months ago

        Isn’t that the definition of a race condition, though? In this case the builds are racing, and your success is tied to them happening to finish in the right order.

        Or do you mean “builds 1 and 2 kick off at the same time, but build 1 fails unless build 2 is done; if you run it twice, build 2 reports ‘no change’ and you’re fine”?

        Then that’s legit.
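
        That second case is easy to reproduce. A toy model in plain Python (standing in for a real build system with a missing dependency edge between the two steps):

        ```python
        import os, tempfile

        os.chdir(tempfile.mkdtemp())   # clean workspace, no artifacts yet

        def build_a():                 # produces the artifact that b needs
            with open("a.out", "w") as f:
                f.write("artifact")

        def build_b():                 # fails unless a.out already exists
            if not os.path.exists("a.out"):
                raise RuntimeError("a.out missing")

        # Missing dependency edge: the build runs b before a every time.
        for attempt in (1, 2):
            try:
                build_b()
                print(f"attempt {attempt}: build passed")
            except RuntimeError:
                print(f"attempt {attempt}: build failed")
            build_a()   # a still gets built afterwards, leaving a.out on disk
        # attempt 1 fails; attempt 2 passes only because a.out survived run 1.
        ```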

  • Buttons@programming.dev · 10 months ago

    If that doesn’t work, sometimes your computer just needs a rest. Take the rest of the day off and try it again tomorrow.

      • CanadaPlus@futurology.today · 10 months ago

        I wonder if there’s an available OS that parity-checks every operation, analogous to what’s planned for quantum computers.

        • Danitos@reddthat.com · 10 months ago

          Unrelated, but the other day I read that the main computer for core calculations in Fukushima’s nuclear plant used to run a very old 4-core CPU. Every calculation was done on each core, and the results had to be exactly the same. If one of them differed, they knew there had been a bit flip and could discard that one core’s result.
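
          That’s essentially lockstep execution with voting. A toy sketch of the comparison step (plain Python, with repeated calls standing in for the four physical cores):

          ```python
          from collections import Counter

          def vote(results):
              """Keep the majority answer; discard disagreeing results as bit flips."""
              value, votes = Counter(results).most_common(1)[0]
              discarded = len(results) - votes
              if discarded:
                  print(f"discarded {discarded} result(s), presumed bit flips")
              return value

          # The same calculation "run on four cores"; one has a flipped bit.
          print(vote([42, 42, 42, 43]))   # prints the warning, then 42
          ```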

          • CanadaPlus@futurology.today · 10 months ago

            Interesting. I wonder why they didn’t just move it somewhere with less radiation? And clearly they have another, more trustworthy machine doing the checking somehow. A self-correcting OS would have to parity-check its parity checks somehow, which I’m sure is possible, but would be kind of novel.

            In a really ugly environment, you might have to abandon semiconductors entirely and go back to vacuum as the magical medium, since it’s radiation-proof (false vacuum apocalypse aside). You could make a nuvistor integrated “chip” that could do the same stuff; the biggest challenge would be maintaining enough emission from the tiny, quickly-cooling cathodes.