• jjjalljs@ttrpg.network · 1 day ago

    Yeah, we finally set up a workflow that makes production data available in a staging environment. This has saved a lot of trouble of the “well, it worked on my local where there were 100 records, but prod has 1037492 and it does not” variety.
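
    The gap is usually something boring like an unindexed scan that’s invisible at local scale. A toy sketch of the effect (SQLite in memory; the table and row counts are made up):

    ```python
    import sqlite3
    import time

    def timed_lookup(row_count: int) -> float:
        """Time an unindexed lookup against a table of row_count rows."""
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
        db.executemany(
            "INSERT INTO orders VALUES (?, ?)",
            ((i, f"cust{i % 1000}") for i in range(row_count)),
        )
        start = time.perf_counter()
        # full table scan: invisible at 100 rows, very visible at ~1M
        db.execute("SELECT COUNT(*) FROM orders WHERE customer = 'cust42'").fetchone()
        return (time.perf_counter() - start) * 1000

    for n in (100, 1_000_000):
        print(f"{n:>9} rows: {timed_lookup(n):.1f} ms")
    ```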

    • _stranger_@lemmy.world · 17 hours ago

      I once tanked a production service by assuming it could handle at least as much load as my laptop on residential sub-gigabit Internet could produce.

      I was wrong by at least an order of magnitude.

    • kryptonianCodeMonkey@lemmy.world · 1 day ago

      Same. Early on as a new dev, I failed to performance-check my script (as did my QA tester) before it was released to production, and that was my first rollback ever. It was very unoptimized and incredibly slow under one of our highest-density data streams. Felt like an idiot for having been fine with its 1-2 second execution time in the dev environment.

      • 🐍🩶🐢@lemmy.world · 22 hours ago

        I deal with this constantly. Profilers are your friend. I keep begging my team to test against the database dumps from production, but nope. Don’t feel bad about messing up, though. The number of fuck-ups I deal with in prod is exasperating. At least most of the things I break are a quick 5-minute fix and not weeks of rework.
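
        For anyone who hasn’t tried one, even the stdlib profiler catches most of this. Minimal sketch (the slow function and record count are invented):

        ```python
        import cProfile
        import pstats

        def build_report(records):
            # deliberately naive O(n^2) membership check that feels fine on 100 rows
            return [r for r in records if r in records]

        records = list(range(5_000))

        profiler = cProfile.Profile()
        profiler.enable()
        build_report(records)
        profiler.disable()

        # top offenders by cumulative time
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
        ```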

        The hardest thing to explain to the team is the concept of time. Once you’ve done controls programming and seen how much happens in 50-100 ms, it sinks in. Your thing takes 500 ms? 1 second? They think that’s acceptable on something dealing with fewer than 100 database records. 😭
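
        A dumb timer usually makes the point better than arguing does. Rough sketch (the 50 ms budget and the 500 ms stand-in are just illustrative):

        ```python
        import time
        from contextlib import contextmanager

        @contextmanager
        def latency_budget(label, limit_ms):
            # print how long a block took and whether it blew its allowance
            start = time.perf_counter()
            yield
            elapsed_ms = (time.perf_counter() - start) * 1000
            verdict = "ok" if elapsed_ms <= limit_ms else "OVER BUDGET"
            print(f"{label}: {elapsed_ms:.1f} ms ({verdict}, budget {limit_ms} ms)")

        with latency_budget("load <100 records", limit_ms=50):
            time.sleep(0.5)  # stand-in for the 500 ms query nobody blinks at
        ```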