• EatATaco@lemm.ee · 9 months ago

    I was an embedded developer for years, working on critical applications that could not go down. While I preferred to avoid dynamically allocating memory, since static allocation was much less risky, there were certainly times when it just made sense or was the only way.

    One case was when we were reprogramming the device, which was connected to an FPGA that also needed reprogramming. You couldn’t store both the FPGA binary and the new device binary in memory at once, but there was plenty of space to hold each one individually. So: allocate space for the FPGA image, program it, free that buffer, allocate space for the new processor code, verify, and flash.
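
    The sequence, sketched in C with made-up sizes and helper names standing in for the real routines (none of this is the actual code):

        #include <stdlib.h>

        #define FPGA_IMAGE_SIZE (512u * 1024u)   /* placeholder size */
        #define FW_IMAGE_SIZE   (384u * 1024u)   /* placeholder size */

        /* Hypothetical helpers standing in for the real load/program/flash routines. */
        extern void load_fpga_image(unsigned char *dst, size_t len);
        extern void program_fpga(const unsigned char *img, size_t len);
        extern void load_fw_image(unsigned char *dst, size_t len);
        extern int  verify_fw_image(const unsigned char *img, size_t len);
        extern void flash_firmware(const unsigned char *img, size_t len);

        void update_device(void)
        {
            /* Stage and program the FPGA image first. */
            unsigned char *buf = malloc(FPGA_IMAGE_SIZE);
            if (buf == NULL)
                return;  /* handle allocation failure */
            load_fpga_image(buf, FPGA_IMAGE_SIZE);
            program_fpga(buf, FPGA_IMAGE_SIZE);
            free(buf);

            /* The FPGA image is no longer needed, so reuse the heap
               for the new processor firmware. */
            buf = malloc(FW_IMAGE_SIZE);
            if (buf == NULL)
                return;  /* handle allocation failure */
            load_fw_image(buf, FW_IMAGE_SIZE);
            if (verify_fw_image(buf, FW_IMAGE_SIZE))
                flash_firmware(buf, FW_IMAGE_SIZE);
            free(buf);
        }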

    What am I missing? Have things changed?

    • owenfromcanada@lemmy.world · 9 months ago

      I’d effectively gain the advantage of dynamic allocation by using a union (or just a generic unsigned char buffer[16384] and using it twice). Mostly the same thing as a malloc.
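
      A rough sketch of that idea, with the size and names made up:

          #include <string.h>

          #define SCRATCH_SIZE 16384u

          /* One statically allocated region, reused for both images. */
          static union {
              unsigned char fpga_image[SCRATCH_SIZE];
              unsigned char fw_image[SCRATCH_SIZE];
          } scratch;

          void update_device(void)
          {
              /* Phase 1: stage the FPGA bitstream and program the FPGA. */
              memset(scratch.fpga_image, 0xFF, sizeof scratch.fpga_image);
              /* ... fill scratch.fpga_image and program the FPGA ... */

              /* Phase 2: the FPGA image is no longer needed, so the same
                 bytes now hold the new processor firmware. */
              memset(scratch.fw_image, 0xFF, sizeof scratch.fw_image);
              /* ... fill scratch.fw_image, verify, and flash ... */
          }

      Since the two phases never overlap in time, the same memory serves both, with the size fixed at build time instead of decided at runtime.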

    • verysuchaccount@lemmy.worldOP · 9 months ago

      You said it yourself:

      While I preferred to avoid dynamically allocating memory, since static allocation was much less risky, there were certainly times when it just made sense or was the only way.

      This is not a common attitude to have outside of embedded and similar areas. Most programmers dynamically allocate memory without a second thought and not as a last resort. Python is one of the most popular programming languages, but how often do you see Python code that is capable of running without allocating memory at runtime?

      • EatATaco@lemm.ee · 9 months ago

        I guess I was taking the meme too literally and assuming people would be disgusted by it. I think it’s a common practice, but obviously one to be used very judiciously.