I got a pingback recently letting me know that one of my posts on this blog had been linked to by another. Curious about who was using my post about stealth in Cyberpunk 2077 as a reference, I checked it out. The post seemed… odd. There was something off about its verbiage, and the links didn’t make sense. And then I realized: oh, duh, the whole thing is AI-generated slop.
Clicking around the website a bit confirmed that, yeah, all of its articles and ‘guides’ had been generated by a large language model. There are a couple of embedded YouTube videos, but even the images in the posts, rather than being screencaps from the game, are just generated images that look vaguely cyberpunk-y. There’s nothing really from the game; it all just looks kinda right.
And also kinda wrong. I read the post that linked to mine, curious if I could suss out what had been ‘borrowed’ from my blog post. But it kept mentioning being able to use the lighting in Cyberpunk 2077 for stealth. I played a lot of Cyberpunk last year, experimenting with both stealth and guns-blazing builds, and at no point did the lighting of the area I was in seem to have anything to do with stealth. Brightly lit room? Hide behind the table. Dim dock? Hide behind the table. It didn’t make a lick of difference whether I was in the shadows or not; the deciding factor was line of sight, lighting be damned.
But why was this ‘guide’ so sure that lighting was so important? Why did it think the player could hide in the shadows? Then it hit me: the LLM got confused by a metaphor. Surely someone, somewhere, described the player as being able to ‘hide in the shadows.’ It’s the kind of phrase you or I read and recognize isn’t about literally using shadows to hide, but a language model doesn’t understand that. The machine interprets the shadows literally, and so we have a guide to a game that’s just wrong.
Herein lies one of the big flaws of using LLMs to generate stuff: they don’t actually know anything. This machine hasn’t actually played Cyberpunk 2077; it’s just repeating words it’s heard, and like a babbling toddler, it doesn’t know what the words it’s saying mean, nor is it able to parse for itself what’s actually going on. They are so many empty words, strung together in a fashion that sounds right, heedless of whether they actually are.
While writing this post, I did quickly Google whether you could actually use darkness for stealth in Cyberpunk, just in case I had in fact missed that part of the game. I was greeted by Google’s ‘AI Overview’ telling me that yes, you could. Its citation? A forum post discussing rules for Dungeons & Dragons (not to belabor the point, but while Cyberpunk 2077 is based on a tabletop RPG, Dungeons & Dragons is not it). It would almost be funny just how wrong it is, if it weren’t also worrying that this is just kinda there. It’s one thing when it’s a mistake about a game, but with LLMs being hawked as the Next Big Thing for everything from customer service to therapy, it makes me pause and wonder if we’ve thought this through. Because LLMs sound authoritative, and they’re presented as such, but there is no actual thought behind the words. Listen too closely and you’ll end up wondering why hiding in the shadows in Cyberpunk 2077 isn’t working, and why Google’s telling you to eat rocks.