The model, called GameNGen, was made by Dani Valevski at Google Research and his colleagues, who declined to speak to New Scientist. According to their paper on the research, the AI-generated game can be played for up to 20 seconds while retaining all the features of the original, such as scores, ammunition levels and map layouts. Players can attack enemies, open doors and interact with the environment as usual.

After this period, the model begins to run out of memory and the illusion falls apart.

  • xionzui@sh.itjust.works

    I mean, yes, technically you build and run AI models using code. The point is that there is no code defining the game logic or graphical rendering. It’s all a neural network’s statistical prediction of what should happen next in a game of Doom. The entirety of the game is learned weights within the model: nobody coded any part of the actual game, and no code was generated to run it. It’s entirely represented within the model (a rough sketch of the idea appears at the end of this thread).

    • huginn@feddit.it

      What they’ve done is flatten and encode every aspect of the Doom game into the model’s weights, which lets you play for a very limited time just by traversing the latent space.

      For a tiny, linear game like Doom that’s feasible… And a horrendous use of resources.

      • Todd Bonzalez@lemm.ee

        And a horrendous use of resources.

        This was a Stable Diffusion model trained on hundreds of thousands of images. That’s actually a pretty small training set, and a pretty lightweight model to train.

        Custom / novel SD models are created and shared by hobbyists all the time. It’s something you can do on a gaming PC, so it’s no worse a waste of resources than gaming itself.

        I’m betting Google didn’t throw a lot of money at the “get it to play Doom” guys anyway.
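To make the “no game code, just weights” point concrete, here is a minimal sketch of the idea, not GameNGen’s actual implementation: a toy action-conditioned next-frame predictor in PyTorch. Every name, shape and architectural choice below is an assumption for illustration; the real system is a diffusion model trained on recorded gameplay.

```python
# Toy sketch (assumptions throughout; not GameNGen's code): a neural net that
# predicts the next frame from recent frames plus the player's action.
# "Playing" is just rolling this prediction forward, frame by frame.
import torch
import torch.nn as nn

N_ACTIONS = 8        # assumed action vocabulary (move, turn, fire, ...)
CONTEXT_FRAMES = 4   # assumed number of past frames the model conditions on
H, W = 64, 64        # assumed (downscaled) frame resolution

class NextFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.action_emb = nn.Embedding(N_ACTIONS, 16)
        # Encode the stacked past frames (3 RGB channels per frame).
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * CONTEXT_FRAMES, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decode features plus the action embedding back into one RGB frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64 + 16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, past_frames, action):
        # past_frames: (B, 3*CONTEXT_FRAMES, H, W); action: (B,) int64
        feats = self.encoder(past_frames)
        a = self.action_emb(action)                        # (B, 16)
        a = a[:, :, None, None].expand(-1, -1, *feats.shape[2:])
        return self.decoder(torch.cat([feats, a], dim=1))  # (B, 3, H, W)

# Playing = autoregressive rollout: the model's own output becomes its input.
# There is no game logic anywhere; the behaviour lives in the weights.
model = NextFramePredictor()
history = torch.zeros(1, 3 * CONTEXT_FRAMES, H, W)         # blank starting context
for action in [0, 1, 1, 3]:                                # hypothetical button presses
    with torch.no_grad():
        frame = model(history, torch.tensor([action]))
    history = torch.cat([history[:, 3:], frame], dim=1)    # slide the context window
```

The rollout loop at the bottom is the whole “game”: frames come only from the network’s predictions, and the fixed-length sliding context window is why the illusion can only hold for a short stretch before it falls apart.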