• pycorax@lemmy.world · 6 months ago

    Do x86 CPUs with iGPUs not already use unified memory? I’m not exactly sure what you mean, but are you referring to the overhead of copying data from CPU memory to GPU memory on discrete graphics cards when performing GPU calculations?

    • sunbeam60@lemmy.one · 6 months ago

      Yes, unified, but extremely slow compared to an ARM architecture’s unified memory, as the iGPU sort of acts as if it were discrete.

      • pycorax@lemmy.world · 6 months ago

        Do you have any sources for this? I can’t seem to find anything specific describing the behaviour. It’s quite surprising to me, since the Xbox and PS5 use unified memory on x86-64, and it would be strange if it were extremely slow for such a use case.

        • sunbeam60@lemmy.one · 6 months ago

          It’s been a while since I’ve coded on the Xbox, but at least on the 360, the memory wasn’t really unified as such. You had 10 MB of EDRAM that formed your render target, and there were specialised functions to copy the EDRAM output to DRAM. So it was still separated: you could create buffers in main memory and access them in the shaders, but at some penalty.

          It’s not that unified memory can’t be built on x86; it’s just not the architecture of a PC, where peripheral cards communicate over the PCIe bus and pay a heavy penalty to touch main RAM.
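
          To make the distinction concrete, here’s a minimal sketch in CUDA (assuming an NVIDIA discrete card; buffer sizes and names are illustrative) of the explicit staging copy a PC-style discrete GPU forces on you, next to the single shared allocation a unified-memory design allows:

          ```cuda
          // Hedged sketch, not a benchmark: contrasts the explicit
          // host-to-device copy on a discrete card with a unified
          // (managed) allocation visible to both CPU and GPU.
          #include <cuda_runtime.h>
          #include <stdlib.h>

          int main(void) {
              const size_t n = 1 << 24;  /* ~16M floats, illustrative */
              float *host = (float *)malloc(n * sizeof(float));

              /* Discrete-card path: allocate VRAM, then push the data
                 across the PCIe bus with an explicit copy. */
              float *dev = NULL;
              cudaMalloc((void **)&dev, n * sizeof(float));
              cudaMemcpy(dev, host, n * sizeof(float),
                         cudaMemcpyHostToDevice);

              /* Unified-memory path: one allocation both sides can
                 touch; no explicit copy in the code, though on a
                 discrete card the driver still migrates pages over
                 PCIe behind the scenes. On truly unified hardware
                 (consoles, ARM SoCs) there is nothing to migrate. */
              float *shared = NULL;
              cudaMallocManaged((void **)&shared, n * sizeof(float));

              cudaFree(dev);
              cudaFree(shared);
              free(host);
              return 0;
          }
          ```

          The point of the sketch: even when the API hides the copy (the managed path), the PC topology still pays for the transfer; a console’s unified pool removes it entirely.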