- cross-posted to:
- gaming@lemmy.ml
Excerpts:
The Verbal Verdict demo drops me into an interrogation room with basic facts about the case to my left, and on the other side of a glass window are three suspects I can call one at a time for questioning. There are no prompts or briefings—I just have to start asking questions, either by typing them or speaking them into a microphone.
The responses are mostly natural, and at times add just a bit more information for me to follow up on.
Mostly. Sometimes, the AI goes entirely off the rails and starts typing gibberish.
There are, of course, still many limitations to this implementation of an LLM in a game. Kristelijn said that they are using a pretty “censored” model, and also adding their own restrictions, to make sure the LLM doesn’t say anything harmful. It also makes what should be a very small game much larger (the demo is more than 7GB), because it runs the model locally on your machine. Kristelijn said that running the model locally helps Savanna Developments with privacy concerns: if the LLM runs locally, the developer doesn’t have to see or handle what players are typing. It’s also better for game preservation, because if the game doesn’t need to connect to an online server, it can keep running even if Savanna Developments shuts down.
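The article doesn’t say which runtime Savanna Developments uses, but as a rough illustration of what “running the model locally” can look like in practice, here is a minimal sketch using llama-cpp-python with a quantized GGUF file; the model path, persona, and sampling parameters are placeholder assumptions, not details from Verbal Verdict.

```python
# Minimal local-inference sketch: the player's question never leaves the
# machine. llama-cpp-python and the GGUF file name are assumptions, not
# what Verbal Verdict actually ships.
from llama_cpp import Llama

# A quantized model on disk; multi-gigabyte files like this are why the demo is so large.
llm = Llama(model_path="suspect_model.gguf", n_ctx=2048, verbose=False)

def ask_suspect(persona: str, question: str) -> str:
    """Answer the player's question in character, entirely offline."""
    result = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
        max_tokens=128,
        temperature=0.7,
    )
    return result["choices"][0]["message"]["content"]

print(ask_suspect(
    "You are Marla, a nervous bartender being questioned about a robbery.",
    "Where were you on Friday night?",
))
```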
it’s pretty hard to “write” different voices for them. They all kind of speak similarly. One character in the full version of the game, for example, speaks in short sentences to convey a certain attitude, but that doesn’t come close to the characterization you’d see in a game like L.A. Noire, where character dialogue is meticulously written to convey personality.
Since I learned about LLMs when ChatGPT became popular, the one thing I’ve wanted to see is games where you can actually talk to NPCs (using a locally running LLM like here, not ChatGPT). It’s cool to see that we’re getting closer and closer to that.
This is the only actually good use of LLMs I can really think of. As long as there is a good way to keep them within the bounds of the actual story, it would be great for that.
I think they also have potential for creating lots of variations in dialogue, pre-generated into a database and manually checked by a writer for QC.
The problem with locally-run LLMs is that the good ones require massive amounts of video memory, so it’s just not feasible anytime soon. And the small ones are, well, crappy. And slow. And still huge (8GB+).
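To put rough numbers on that: the weights alone need roughly (parameter count × bytes per weight), before the KV cache and runtime overhead. A hedged back-of-the-envelope sketch, with the overhead figure an assumption rather than a measurement:

```python
# Rough VRAM estimate for running a model locally: weights plus a guessed
# fixed overhead for the KV cache and runtime. Ballpark only.
def estimate_vram_gb(params_billions: float, bits_per_weight: int,
                     overhead_gb: float = 1.5) -> float:
    weight_gb = params_billions * bits_per_weight / 8  # weights only, in GB
    return weight_gb + overhead_gb

for params, bits in [(7, 16), (7, 4), (70, 4)]:
    print(f"{params}B @ {bits}-bit: ~{estimate_vram_gb(params, bits):.1f} GB")
# 7B @ 16-bit: ~15.5 GB; 7B @ 4-bit: ~5.0 GB; 70B @ 4-bit: ~36.5 GB
```

Which is why even a heavily quantized small model still lands in the multi-gigabyte range the demo shows.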
That of course means you can’t get truly dynamic branching dialogue, but it can enable things like getting thousands of NPC lines instead of “I took an arrow to the knee” from every guard in every city.
It can be used to generate dialogue, too, so not just one-liners but “real” NPC conversations (or rich branching dialogue options for players to select).
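As a sketch of the pipeline described above (generate variants offline, have a writer approve them, ship only the approved rows), here is what that could look like with a stubbed generator and a SQLite table; the schema, the stub, and the guard line are illustrative assumptions, not anything from an actual game.

```python
# Offline "generate, then human-review" dialogue pipeline sketch.
# The variant generator is a stub; a real pipeline would prompt an LLM
# with the base line, the speaker, and a style guide.
import sqlite3

def generate_variants(base_line: str, n: int) -> list[str]:
    # Placeholder for an LLM call.
    return [f"{base_line} (candidate {i + 1})" for i in range(n)]

db = sqlite3.connect("npc_lines.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS lines ("
    "id INTEGER PRIMARY KEY, speaker TEXT, base_line TEXT, "
    "variant TEXT, approved INTEGER DEFAULT 0)"  # approved stays 0 until a writer signs off
)

base = "I used to be an adventurer like you."
for variant in generate_variants(base, n=50):
    db.execute(
        "INSERT INTO lines (speaker, base_line, variant) VALUES (?, ?, ?)",
        ("guard", base, variant),
    )
db.commit()

# Only writer-approved rows would ever ship with the game.
shipped = [row[0] for row in db.execute("SELECT variant FROM lines WHERE approved = 1")]
```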
I’m very skeptical that we’ll get “good” dynamic LLM content in games, running locally, this decade.
I’ve played Ace Attorney and the writers put a lot of love and personality into the characters. I’m sceptical that an AI could get close enough to any kind of writing style to “kill” writing in games like that.
Honestly, I’m getting fed up with AI doing a mediocre job of creating art and then people claiming it kills whole industries because it’s the “in” technology.
Yeah, the whole study of character is kind of outside the realm of an LLM. They can sort of regurgitate, but they don’t think about humans like humans do: the things we find funny or interesting about each other, the small nuances that potentially reveal a person’s history and internal experience.
An LLM, as sophisticated a piece of technology as it is, cannot really do that like a human writer can. The best it can do is mimic, in a limited capacity.
I don’t think we’ll see this any time soon, because corpos probably won’t listen to any creative who pitches it, but I want something where the LLM runs locally and is only used to interpret what you’re asking for, while the dialogue responses are all still written by a writer. Then the user interaction feels more intuitive, but the design of the story and mechanics can respond to the implied tone, questions, prompts, and keywords from the user.
Then you could have a dialogue tree that responds with a nicely constructed narrative, but a user who asked something casually vs. accusatorially might end up with slightly different information.
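A minimal sketch of that split, with the interpreter stubbed out as keyword matching where the locally run LLM would actually sit: the model only maps free-form player input to a small set of writer-defined intents and tones, and every line the player reads is hand-authored. The intents, tones, and dialogue below are made up for illustration.

```python
# The "LLM as interpreter only" idea: classify the player's input, then
# look up a writer-authored response. Keyword matching stands in for the
# local model here; the script itself is entirely hand-written.

# Writer-authored responses, keyed by (intent, tone).
SCRIPT = {
    ("alibi", "casual"):     "I was at the bar until close. Ask Marla, she'll tell you.",
    ("alibi", "accusatory"): "I don't like what you're implying. I was at the bar. Check it.",
    ("unknown", "any"):      "I'm not sure what you're getting at, detective.",
}

def interpret(player_text: str) -> tuple[str, str]:
    """Stand-in for the local LLM: map free text to (intent, tone)."""
    text = player_text.lower()
    intent = "alibi" if any(w in text for w in ("where", "friday", "night")) else "unknown"
    tone = "accusatory" if any(w in text for w in ("liar", "lying", "admit")) else "casual"
    return (intent, tone) if intent != "unknown" else ("unknown", "any")

def respond(player_text: str) -> str:
    return SCRIPT[interpret(player_text)]

print(respond("So where were you on Friday night?"))    # casual alibi answer
print(respond("Admit it, you're lying about Friday!"))  # accusatory alibi answer
```

The ("unknown", "any") row is the explicit fallback for input the interpreter can’t place, which is exactly the immersion trade-off the next comment raises.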
Unless you’re willing to put in some kind of response that basically says “I’m not going to respond to that” (and that’s a sure way to break immersion), this is effectively impossible to do well, because the writer has to anticipate every possible thing a player could say and craft a response to it. If you don’t, the system will fall back to a “nearest fit” that is not at all what the player was trying to say, and the reaction will be nonsensical from the player’s perspective.
L.A. Noire is a great example of this, although from the side of the player character: the dialogue was written with the “Doubt” option as “Press” (as in, put pressure on the other party). As a result, a suspect can say something, the player selects “Doubt”, and Phelps goes nuts making wild accusations instead of pointing out an inconsistency.
Except worse, because in this case, the player says something like “Why didn’t you say something to your boss about feeling sick?” and the game interpreted it as “Accuse them of trying to sabotage the business.”