AI has super-charged indie game dev. Over the last two weeks I paired Unreal Engine 5 with a stack of AI tools to crank out a playable on-rails light-gun prototype—solo. From brainstorming and concept art to code, SFX and music, AI handled the heavy lifting so I could stay in the zone and keep the ideas flowing instead of drowning in busywork.
There's a quick-cut video of the build process below; the full walk-through sits at the end of the post.
Bottom line: AI touched every stage, turning a typical multi-person project into a one-man sprint.
I asked myself the basics up front: What's the core loop? Why is it fresh? Where will it ship?
Inspiration came from those goofy "DOOM x Animal Crossing" mashup videos, the Fall Guys / DOOM crossover, plus the arcade Castlevania rail shooter. The mash-ups sparked an idea: mix on-rails shooting with melee. Players ride a preset path, blast distant enemies with a light gun, then whip out a laser sword to slash or parry anything that gets too close. It's a combo you rarely see in classic rail shooters and perfect for a two-week scope.
After the concept clicked, I tossed the outline into Gemini and let it spit back a day-by-day schedule. Was it perfect? Nope. But it gave me a solid baseline and saved hours of calendar shuffling. I kept a living to-do list in Tencent Docs and tuned it nightly.
Goal One: Week 1 = stand up a tiny MVP level that proves the gun-plus-saber loop.
Gemini's first pass landed in a few hours instead of days and gave me a clear checklist to attack.
Every level follows this arc: light skirmishes kick things off, difficulty ramps up, and it all climaxes in a boss fight for a tight, high-energy finish.
Before I wrote a single line of code, I tossed the core feature list into ChatGPT for a sanity check. The AI came back with an MVP plan that boiled everything down to one short level—just enough to prove that rail-shooting plus melee actually feels good.
That lines up with the "iterate fast" mindset: nail the tiniest slice first, then expand. I use a back-of-the-napkin rule: if you can prototype a mechanic in a day, a full game around it will run about a year; two days of prototyping means closer to two years of production, and so on. In other words, keep the test build small enough to finish in a few days, so the whole demo can wrap up in two weeks.
Bottom line: Week 1's mission was a stripped-down level that locks in the core shoot-slash loop.
To lock in the game's musical vibe fast, I flipped the usual order—music first, visuals later. I turned to Suno AI, a powerhouse generative-music tool that writes and sings full tracks from any lyrics and style prompts you feed it.
One cool trick: Suno lets you drop "meta tags" right into the lyrics, wrapped in brackets—things like [Intro], [Male Voice], [Interlude], [Female Voice], [Guitar Solo]. Those tags steer the song's structure, vocals, instrumentation, and genre (pop, electronic, whatever) for instant fine-tuning. The full tag catalog lives on Suno's site.
Here's how I used it:
Prompt: Imagine you're a songwriter. In the style of Kendrick Lamar's "Not Like Us," write a track called "Cute Bomber." Please provide:
- The full lyrics.
- The backing-track details, expressed as Suno AI keywords (genre, mood, instruments, meta-tags, etc.).
After a quick A/B test, I grabbed the punchiest electronic segment as a placeholder theme. The track still needs a human pass for mixing and polish, but Suno AI let me nail the musical direction in record time.
I'm going for a mash-up of sci-fi flash and Saturday-morning cute. Picture chibi-size future soldiers battling a hellspawn invasion in a neon-soaked cyber-city—adorable on the surface, but still packed with high-octane edge.
Style touchstones: the bubbly animal shapes and bright palettes of Fall Guys, fused with hard-glow cyberpunk signage and circuitry. Blend those ingredients and you get a candy-colored battlefield that's equal parts charming and electric.
For concept art, I leaned on DeepSeek plus Midjourney.
Static concept art was just the start—I also put together a proof-of-concept opening cinematic. The whole thing came together with a relay of AI tools.
A few AI hand-offs later, the concept trailer was done—locking down the game's audio-visual vibe with a fraction of the usual effort.
Even as a solo dev, I spun up an SVN repo on my home NAS—just a Docker container and done. Unreal Engine plays nicely with SVN, the setup is painless, and it handles the fat binary files that come with game projects.
With proper version control in place, I can branch at will, drop checkpoints, and experiment without fear. Rollbacks and diffs are instant, which matters even more when you're living in Blueprints—UE5's editor has built-in diff support. Solid source control is the safety net that makes rapid iteration possible.
Because this is a rail shooter, UE5's Sequencer sits at the heart of everything. I dug into its event-trigger system and confirmed it's good for more than cutscenes—it can run moment-to-moment gameplay with frame-perfect timing, which is exactly what a rhythm-heavy shooter needs.
The timeline owns the player's rail path and every camera cut. Keyframes inside a Level Sequence asset lock in position and rotation, so the view glides along a preset route and transitions cleanly between angles—no extra code required.
I also need Sequencer to fire gameplay beats—spawning enemies, kicking off animations, whatever. The solution: custom event tracks. Drop an "EnemySpawn" or "MonsterAttack" marker on the timeline, bind it to a custom event in the level Blueprint, and when playback hits that frame the game logic fires automatically. It turns level scripting into a visual, easily tweakable flow chart.
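If you'd rather keep that spawn logic in C++ than in the level Blueprint, all a Sequencer event endpoint needs is a BlueprintCallable function to reach. Here's a minimal sketch (the class, property, and function names are placeholders, not from the actual project):

```cpp
// SpawnDirector.h - a bare-bones actor whose function a Sequencer "EnemySpawn"
// event endpoint can call. Names here are illustrative placeholders.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Templates/SubclassOf.h"
#include "SpawnDirector.generated.h"

UCLASS()
class ASpawnDirector : public AActor
{
    GENERATED_BODY()
public:
    // The enemy class to spawn when the timeline marker fires.
    UPROPERTY(EditAnywhere)
    TSubclassOf<AActor> EnemyClass;

    // Bind this to the event track's endpoint; when playback hits the marker,
    // Sequencer invokes it and the enemy appears exactly on cue.
    UFUNCTION(BlueprintCallable)
    void SpawnEnemyWave(FTransform SpawnPoint)
    {
        if (EnemyClass)
        {
            GetWorld()->SpawnActor<AActor>(EnemyClass, SpawnPoint);
        }
    }
};
```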
I ran another feasibility check—mixing my own experience with ChatGPT's advice—and locked in the tech approach. ChatGPT's top tip was: build the smallest possible loop first to prove the shoot-and-slash combo. Our MVP: one short level with rail movement, a handful of enemies, and a boss fight.
In UE5, I whipped up that prototype fast. Sequencer drove the first-person rail, timeline events spawned enemies that swooped in, and the player shot them down. Once the loop played smoothly, I had real data on pacing, difficulty, and fun—so I could confidently tackle full development.
Player Controller & Rail Movement
The player isn't free-roaming—instead, they're "locked" to a preset rail. I made a Character subclass but handed off all movement to Sequencer. In a Level Sequence asset, keyframes drive the Pawn's Location and Rotation over time. Binding that Sequence to the player automatically moves and turns them along the path. I also hooked up multiple cameras to the same timeline so the view cuts smoothly without extra code.
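If you want to kick the rail off from code instead of an autoplay flag, the gist looks something like this. It's a minimal sketch: ARailGameMode and the RailSequence property are placeholder names, and the player/camera bindings still live inside the Level Sequence asset itself.

```cpp
// RailGameMode.h - starts the rail sequence at level start (sketch only).
#include "CoreMinimal.h"
#include "GameFramework/GameModeBase.h"
#include "LevelSequence.h"
#include "LevelSequencePlayer.h"
#include "LevelSequenceActor.h"
#include "RailGameMode.generated.h"

UCLASS()
class ARailGameMode : public AGameModeBase
{
    GENERATED_BODY()
public:
    // The Level Sequence whose keyframes drive the player's rail path and camera cuts.
    UPROPERTY(EditAnywhere)
    ULevelSequence* RailSequence = nullptr;

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        if (!RailSequence) return;

        ALevelSequenceActor* SequenceActor = nullptr;
        ULevelSequencePlayer* Player = ULevelSequencePlayer::CreateLevelSequencePlayer(
            this, RailSequence, FMovieSceneSequencePlaybackSettings(), SequenceActor);

        if (Player)
        {
            Player->Play();   // from here on, Sequencer owns the pawn's transform and the camera
        }
    }
};
```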
Combat System
Input is simple: left-click to shoot, hold-and-drag to swing the laser sword.
Weapons live in two Blueprint components—one for the gun (trace, VFX, damage) and one for the sword (arc detection, slash VFX, block/parry).
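To give a feel for what the gun half does under the hood, here's a C++-flavored hitscan sketch. UGunComponent, GunDamage, and Range are my stand-in names; the project's actual version is a Blueprint component.

```cpp
// GunComponent.h - minimal hitscan: trace from the rail camera, apply damage.
#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "Kismet/GameplayStatics.h"
#include "GameFramework/PlayerController.h"
#include "GunComponent.generated.h"

UCLASS(ClassGroup=(Custom), meta=(BlueprintSpawnableComponent))
class UGunComponent : public UActorComponent
{
    GENERATED_BODY()
public:
    UPROPERTY(EditAnywhere)
    float GunDamage = 10.f;

    UPROPERTY(EditAnywhere)
    float Range = 10000.f;

    // Left-click handler: trace from the current camera and damage whatever we hit.
    UFUNCTION(BlueprintCallable)
    void Fire()
    {
        APlayerController* PC = GetWorld()->GetFirstPlayerController();
        if (!PC) return;

        FVector CamLoc; FRotator CamRot;
        PC->GetPlayerViewPoint(CamLoc, CamRot);

        FHitResult Hit;
        FCollisionQueryParams Params;
        Params.AddIgnoredActor(GetOwner());

        if (GetWorld()->LineTraceSingleByChannel(Hit, CamLoc, CamLoc + CamRot.Vector() * Range, ECC_Visibility, Params))
        {
            if (AActor* Target = Hit.GetActor())
            {
                UGameplayStatics::ApplyDamage(Target, GunDamage, PC, GetOwner(), nullptr);
            }
            // Muzzle flash / impact VFX would hook in here.
        }
    }
};
```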
Enemies & Level Logic
I built four enemy classes: melee grunts, flyers, ranged flyers, and a boss. The first three run on AIControllers with Behavior Trees—once you spawn them, the tree handles movement and attack decisions. The boss's phases and special attacks are driven directly by Sequencer events for tight timing.
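The "spawn it and let the tree drive" setup is basically one call once the controller possesses its pawn. A minimal sketch, with AGruntAIController and CombatBehaviorTree as placeholder names:

```cpp
// GruntAIController.h - hands control to a Behavior Tree on possession (sketch).
#include "CoreMinimal.h"
#include "AIController.h"
#include "BehaviorTree/BehaviorTree.h"
#include "GruntAIController.generated.h"

UCLASS()
class AGruntAIController : public AAIController
{
    GENERATED_BODY()
public:
    // Assigned per enemy type: melee grunt, flyer, or ranged flyer.
    UPROPERTY(EditAnywhere)
    UBehaviorTree* CombatBehaviorTree = nullptr;

    virtual void OnPossess(APawn* InPawn) override
    {
        Super::OnPossess(InPawn);

        // From this point the Behavior Tree owns movement and attack decisions,
        // so Sequencer only needs to spawn the enemy at the right beat.
        if (CombatBehaviorTree)
        {
            RunBehaviorTree(CombatBehaviorTree);
        }
    }
};
```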
To sequence the whole level, I created a LevelPoint actor (subclassing LevelSequenceActor). You place multiple LevelPoints along the rail; each has its own Sequence asset for spawning enemies, playing lines, camera shakes, etc. A LevelPoint defines its "next" node (or branches based on conditions). When the player reaches a LevelPoint, its Sequence plays; once its win condition is met (all enemies down or time's up), the system jumps to the next LevelPoint's Sequence. I even added custom tracks to Sequencer so it can check game state and decide when to advance—making the entire flow a modular, visual "node + timeline" storyboard.
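Boiled down to its skeleton, the idea looks roughly like this in C++ (my own naming, and the custom Sequencer tracks for state checks are omitted):

```cpp
// LevelPoint.h - one node in the "node + timeline" flow (rough sketch).
#include "CoreMinimal.h"
#include "LevelSequenceActor.h"
#include "LevelSequencePlayer.h"
#include "LevelPoint.generated.h"

UCLASS()
class ALevelPoint : public ALevelSequenceActor
{
    GENERATED_BODY()
public:
    ALevelPoint(const FObjectInitializer& Init) : Super(Init) {}

    // The node this point hands off to once its win condition is met.
    UPROPERTY(EditAnywhere)
    ALevelPoint* NextPoint = nullptr;

    void Enter()
    {
        // Enemy spawns, VO, and camera shakes all live in this point's Sequence asset.
        if (ULevelSequencePlayer* Player = GetSequencePlayer())
        {
            Player->Play();
        }
    }

    // Called by game logic when all enemies are down or the timer expires.
    void OnWinConditionMet()
    {
        if (NextPoint)
        {
            NextPoint->Enter();
        }
    }
};
```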
AI-Assisted Blueprint Scripting
ChatGPT was my go-to whenever a Blueprint question popped up ("How do I spawn enemies from a Sequencer event?" "How to convert mouse drag into a melee hit?"). It delivered step-by-step answers, pointed to docs or forum posts, and even sketched out pseudo–Blueprint nodes.
For instance, when I asked how to tweak a material at runtime, it told me to use a "Create Dynamic Material Instance" node and reminded me to store the reference. A few clicks later, my laser-sword swings lit up enemies exactly as planned.
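The C++ spelling of that same tip is just one call plus the stored reference. AEnemyCharacter, HitFlashMID, and the "GlowIntensity" parameter below are example names, not from the project:

```cpp
// EnemyCharacter.h - runtime material tweak via a dynamic material instance (sketch).
#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "EnemyCharacter.generated.h"

UCLASS()
class AEnemyCharacter : public ACharacter
{
    GENERATED_BODY()
public:
    // Store the reference so the dynamic instance is created only once.
    UPROPERTY()
    UMaterialInstanceDynamic* HitFlashMID = nullptr;

    void FlashOnHit()
    {
        if (!HitFlashMID)
        {
            // The C++ counterpart of the "Create Dynamic Material Instance" node.
            HitFlashMID = GetMesh()->CreateAndSetMaterialInstanceDynamic(0);
        }
        if (HitFlashMID)
        {
            HitFlashMID->SetScalarParameterValue(TEXT("GlowIntensity"), 5.f);
        }
    }
};
```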
Having an AI "tech consultant" on call cut through roadblocks and boosted both speed and confidence. Its advice sometimes needed a tweak or two, but overall it handled countless little details so I could keep building.
UI needed to match the game's cute cyberpunk look. I used GPT-4o to generate flat, in-game art assets in a "Splatoon" style—feeding it clear prompts (reference images, colors, layout notes, text descriptions) and iterating until I got a set of neon-graffiti UI pieces (e.g. pipeline-style health/energy bars, splashy score panels).
GPT-4o's image generation (released March 2025) proved powerful but imperfect.
Overall, GPT-4o is a game-changer—ideas now outpace technical hurdles, and a new, AI-driven workflow for 2D UI design is emerging.
Thanks to this AI pipeline—concept → 3D mesh → rig → animation—a process that normally takes weeks was done in days.
I had Gemini study the arcade Castlevania flow and spit out a node-based level roadmap. Its breakdown—intro → mobs → mid-boss → more mobs → final boss—nailed the feel, even specifying enemy types and counts at each stage. I then tweaked that script to fit my own design and finalized the level plan.
I used the LevelPoint system to assemble the level by dropping event nodes along the player's rail. Each LevelPoint ties to its own Level Sequence in Sequencer, where you choreograph everything—enemy spawns, camera shakes, dialogue, etc.
For example:
The "node + timeline" method turns level design into a visual edit: to tweak pacing or add a beat, just insert, remove, or adjust event markers in Sequencer.
I built an on-the-fly dialog system instead of using fixed VO scripts. At each story node, an AI crafts a fresh line, which is then converted to speech with TTS.
Here's how it works:
Dialogue Generation Flow
All of that happens in seconds, so every play-through hears fresh lines without any hand-written script or recorded VO.
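For a feel of the hand-off, here's a rough sketch using Unreal's HTTP module. The endpoint URL, JSON shape, and OnLineReady callback are just placeholders; swap in whichever LLM and TTS services you're actually calling.

```cpp
// DialogueDirector - request a generated line, then hand it to TTS (sketch only).
// Requires the "HTTP" module in the project's Build.cs dependencies.
#include "CoreMinimal.h"
#include "HttpModule.h"
#include "Interfaces/IHttpRequest.h"
#include "Interfaces/IHttpResponse.h"

class FDialogueDirector
{
public:
    void RequestLine(const FString& StoryBeat)
    {
        TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request = FHttpModule::Get().CreateRequest();
        Request->SetURL(TEXT("https://example.com/v1/generate-line"));   // placeholder endpoint
        Request->SetVerb(TEXT("POST"));
        Request->SetHeader(TEXT("Content-Type"), TEXT("application/json"));
        Request->SetContentAsString(FString::Printf(TEXT("{\"beat\":\"%s\"}"), *StoryBeat));

        Request->OnProcessRequestComplete().BindLambda(
            [this](FHttpRequestPtr Req, FHttpResponsePtr Resp, bool bOk)
            {
                if (bOk && Resp.IsValid())
                {
                    // Next step: send this text to the TTS endpoint (same pattern),
                    // then play the returned audio at the story node.
                    OnLineReady(Resp->GetContentAsString());
                }
            });

        Request->ProcessRequest();
    }

private:
    void OnLineReady(const FString& Line) { /* queue TTS + audio playback here */ }
};
```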
Challenges & Tweaks
Takeaways
Translated and republished with permission.