In my work as a 3D artist, I’ve found that smart retopology is the single most critical step in transforming a raw photogrammetry scan into a production-ready asset. It’s the process that bridges the gap between captured data and a usable, efficient 3D model. Through extensive trial and error, I’ve developed a hybrid workflow that leverages AI automation for speed and manual tools for precision, ensuring clean topology that’s optimized for texturing, animation, and real-time performance. This guide is for anyone—from indie developers to studio artists—who needs to clean up scanned data for games, film, or XR.
Photogrammetry delivers incredible surface detail, but the raw output is a data set, not a functional 3D model. Smart retopology is the intelligent reconstruction of that model's wireframe.
When I first import a raw scan, I'm typically faced with a multi-million-polygon mesh. It's dense, but the topology is a chaotic triangle soup with no consideration for edge flow. This causes several immediate problems: the file size is enormous, the mesh is often non-manifold (containing holes or flipped normals), and the UVs are either non-existent or a fragmented nightmare. In a real-time engine, this model would drag the whole scene's performance down. For animation, it would be impossible to deform cleanly. The high poly count is also deceptive: the density is uneven, wasting polygons on flat surfaces while undersampling complex curves.
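The manifold check above can be automated with a simple edge census. As a minimal illustrative sketch (not any particular tool's API), assuming the scan is a triangle mesh given as index triples: in a clean watertight mesh every edge is shared by exactly two faces, so edges used once are holes and edges used three or more times are non-manifold.

```python
from collections import Counter

def non_manifold_edges(faces):
    """Count how many faces share each edge of a triangle mesh.

    In a clean, watertight manifold mesh every edge belongs to exactly
    two faces. Edges used once are open borders (holes); edges used
    three or more times are non-manifold.
    """
    edge_use = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_use[tuple(sorted((u, v)))] += 1
    border = [e for e, n in edge_use.items() if n == 1]
    bad = [e for e, n in edge_use.items() if n > 2]
    return border, bad

# Two triangles sharing one edge: the shared edge is fine,
# but the four outer edges are open borders (a hole).
border, bad = non_manifold_edges([(0, 1, 2), (1, 3, 2)])
print(len(border), len(bad))  # 4 0
```

Real cleanup tools do the same census at scale, then decide per edge whether to fill, stitch, or delete.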
My retopology process is guided by three non-negotiable objectives. First, I must create a clean, quad-dominant mesh. Quads deform predictably and subdivide neatly, which is essential for animation and further sculpting. Second, I need intelligent polygon distribution. I aim to place edge loops where they matter—along sharp creases, major forms, and articulation points—and reduce density in flat areas. Finally, the mesh must be "watertight" and manifold, with a clean UV layout ready to transfer the high-resolution scan detail onto a low-resolution model via normal and displacement maps.
This is the practical sequence I follow for almost every photogrammetry asset. Consistency here saves countless hours downstream.
Before I touch a single polygon, I analyze the scan. I identify the asset's primary forms, its critical details (like engraved text or fabric folds), and its intended use. A prop for a cinematic background has different needs than a hero asset for a game or a character for rigging. I then set a target polygon budget. For a real-time game asset, this could be 5k to 50k tris depending on its screen size. For film, it might be higher, but the principle of efficiency remains. I also note areas that will need specific edge loops for deformation if the asset will be animated.
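To make the budgeting step concrete, here is a hypothetical heuristic (my own illustration, not a standard formula): interpolate a triangle budget from the fraction of the screen the asset typically occupies, using the 5k-50k real-time range mentioned above.

```python
def tri_budget(screen_coverage, lo=5_000, hi=50_000):
    """Hypothetical heuristic: linearly interpolate a triangle budget
    from expected screen coverage (0.0 = tiny background prop,
    1.0 = full-screen hero asset), clamped to the 5k-50k range
    cited above for real-time game assets."""
    coverage = max(0.0, min(1.0, screen_coverage))
    return int(lo + (hi - lo) * coverage)

print(tri_budget(0.1))  # small prop -> 9500
print(tri_budget(1.0))  # hero asset -> 50000
```

In practice the number also depends on platform and instancing, but writing the budget down before retopology keeps the density decisions honest.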
I never run retopology on the original multi-million-poly scan. First, I use a decimator to reduce the mesh to a more manageable size—often to 5-10% of its original count—while attempting to preserve silhouette and major details. This step is purely for performance during the next stages. I then run a cleanup pass to fix non-manifold geometry, remove floating debris, and fill any major holes. This prepped mesh becomes the "sculpt" or reference mesh for the retopology process.
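For intuition about what a decimator does, here is a deliberately crude sketch of vertex-clustering decimation in NumPy: snap vertices to a voxel grid, merge vertices sharing a cell, and drop faces that collapse. Production decimators use quadric error metrics instead, which preserve silhouette far better; this only illustrates the trade of detail for a lighter proxy mesh.

```python
import numpy as np

def cluster_decimate(vertices, faces, cell_size):
    """Crude vertex-clustering decimation: snap vertices to a voxel
    grid of the given cell size, merge vertices that land in the same
    cell (averaging their positions), and drop faces whose corners
    collapse together. Illustration only - not silhouette-preserving."""
    vertices = np.asarray(vertices, dtype=float)
    cells = np.floor(vertices / cell_size).astype(np.int64)
    uniq, remap = np.unique(cells, axis=0, return_inverse=True)
    remap = remap.reshape(-1)  # keep 1-D across NumPy versions
    # One representative vertex per occupied cell: the cell average.
    counts = np.bincount(remap)
    new_verts = np.zeros((len(uniq), 3))
    for axis in range(3):
        new_verts[:, axis] = np.bincount(remap, weights=vertices[:, axis]) / counts
    new_faces = remap[np.asarray(faces)]
    # Keep only faces whose three corners are still distinct.
    ok = (new_faces[:, 0] != new_faces[:, 1]) & \
         (new_faces[:, 1] != new_faces[:, 2]) & \
         (new_faces[:, 2] != new_faces[:, 0])
    return new_verts, new_faces[ok]
```

The cell size directly controls how aggressive the reduction is, which is why decimators expose a percentage or target count rather than a grid resolution.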
This is where I apply a blended approach. For organic forms and large, continuous surfaces, I use AI-powered retopology. In my workflow, I’ll feed the decimated scan into Tripo AI and define my target poly count and desired edge flow (e.g., "organic" or "hard surface"). It generates a clean base mesh in seconds, which is a phenomenal starting point. However, I never accept this as final. I then import this base mesh into a traditional 3D suite (like Blender or Maya) for manual refinement. I use tools like the Shrinkwrap modifier and manual poly modeling to pin and correct edge loops around hard edges, complex intersections, and areas where the AI's guess didn't match my artistic intent.
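The core of what a shrinkwrap does can be shown in a few lines. As a heavily simplified sketch (a real shrinkwrap modifier projects onto the reference *surface*, usually along normals, with several projection modes): move each low-poly vertex to the nearest point of the dense scan so the refined topology stays glued to the captured shape.

```python
import numpy as np

def shrinkwrap_snap(low_verts, ref_verts, max_dist=None):
    """Very simplified stand-in for a shrinkwrap modifier: snap each
    low-poly vertex to the nearest vertex of the dense reference scan.
    Real tools project onto triangles, not vertices, but the principle
    - conform the retopo mesh to the scan - is the same."""
    low = np.asarray(low_verts, dtype=float)
    ref = np.asarray(ref_verts, dtype=float)
    # Brute-force pairwise distances (fine for a demo; use a KD-tree at scale).
    d2 = ((low[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    snapped = ref[nearest]
    if max_dist is not None:
        too_far = d2[np.arange(len(low)), nearest] > max_dist ** 2
        snapped[too_far] = low[too_far]  # beyond range: leave vertex in place
    return snapped
```

The `max_dist` guard mirrors the distance limit most shrinkwrap tools offer, which prevents vertices from jumping across thin gaps to the wrong surface.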
With a clean low-poly mesh complete, I immediately unwrap it. A clean topology makes this step infinitely easier. I create UV islands with minimal stretching and efficient use of texture space. Once unwrapped, I ensure the low-poly mesh is perfectly aligned with the original high-poly scan. This setup is crucial for the final step: baking. I bake the high-poly detail (from the original or decimated scan) onto the low-poly mesh using normal, ambient occlusion, and displacement maps. The clean UVs and accurate cage/ray distance ensure a flawless bake with no artifacts.
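At the heart of a normal-map bake is a simple encoding: a unit direction with components in [-1, 1] is remapped to [0, 255] RGB per texel. A minimal sketch of that step, using object-space normals (a tangent-space bake adds a per-vertex basis change, but the encoding is identical):

```python
import numpy as np

def face_normal_rgb(v0, v1, v2):
    """Encode a triangle's unit normal the way a baker writes a
    normal-map texel: remap each component from [-1, 1] to [0, 255].
    Object-space variant; tangent-space baking transforms the normal
    into the surface's local basis first, then encodes identically."""
    n = np.cross(np.subtract(v1, v0), np.subtract(v2, v0))
    n = n / np.linalg.norm(n)
    return np.round((n * 0.5 + 0.5) * 255).astype(int)

# A face lying flat in the XY plane points straight up (+Z), which
# encodes to the familiar "normal map blue": [128 128 255].
print(face_normal_rgb([0, 0, 0], [1, 0, 0], [0, 1, 0]))
```

The cage and ray distance mentioned above control where those rays start and stop when sampling the high-poly surface; the encoding of whatever they hit is always this remap.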
Choosing the right tool for each part of the job is the mark of an efficient pipeline.
I lean on AI for the initial heavy lifting. It's unbeatable for quickly generating a sensible base topology for organic objects like rocks, trees, or terrain, and for establishing primary edge flow on complex shapes. It's my go-to when I have a batch of assets that need to be brought to a consistent, production-ready baseline quickly. The time savings here are measured in hours, not minutes.
AI still struggles with precise technical requirements. I always take manual control for: hard-surface modeling where perfect, straight edge loops and 90-degree angles are mandatory; defining exact edge loops for skeletal deformation (like around shoulders or knees); and fixing topological errors in areas of complex overlap or thin geometry, which AI often misinterprets.
My optimal workflow is a sandwich: AI in the middle, manual work on both ends. I manually prepare the scan (decimate, clean). I use AI to generate the 80% solution—the bulk of the retopology. Then I manually perfect the final 20%, focusing on functional and artistic precision. This blend gives me the speed of automation with the control of hands-on artistry.
These are the hard-won lessons that separate a usable model from a professional one.
My mantra is "density follows detail." I use more polygons to define a character's facial features or the intricate carving on a prop, and fewer on the flat plane of their back or the prop's handle. I constantly check my model against my initial target budget. A useful trick is to apply a checkerboard texture to the UVs early on; it instantly reveals stretching and shows if your texture space is being allocated efficiently relative to mesh density.
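The checkerboard trick above needs nothing more than a few lines of NumPy to generate. A minimal sketch (the helper name and sizes are my own):

```python
import numpy as np

def checkerboard(size=512, squares=16):
    """Generate the UV-debugging checkerboard mentioned above as an
    8-bit grayscale image. Applied to a model, uneven or skewed squares
    instantly reveal UV stretching and mismatched texel density."""
    cell = size // squares
    idx = np.arange(size) // cell
    board = (idx[:, None] + idx[None, :]) % 2
    return (board * 255).astype(np.uint8)

tex = checkerboard(8, 4)          # tiny 8x8 board with 2px squares
print(tex[0, 0], tex[0, 2])       # 0 255 - alternating cells
```

Save the array out as an image (or assign it directly in your DCC of choice) and every island's texel density becomes visible at a glance.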
The silhouette is king. I prioritize edge loops that define the object's outer shape. For hard edges, I always use supporting edge loops close to the crease if the model will be subdivided or have smooth shading applied; this prevents the edge from rounding out. I mark these edges as sharp in the mesh data to ensure they bake correctly.
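Marking sharp edges can be automated with a dihedral-angle test, the same check DCC tools use for auto-smoothing thresholds. A minimal sketch, assuming a triangle mesh as vertex positions plus index triples: flag any edge whose two adjacent face normals diverge by more than a threshold angle.

```python
import numpy as np

def sharp_edges(vertices, faces, angle_deg=30.0):
    """Flag edges whose adjacent face normals diverge by more than
    angle_deg - a dihedral-angle test like the auto-smooth threshold
    in common DCC tools. Returns the list of sharp edges as sorted
    vertex-index pairs."""
    verts = np.asarray(vertices, dtype=float)
    normals = {}
    edge_faces = {}
    for fi, (a, b, c) in enumerate(faces):
        n = np.cross(verts[b] - verts[a], verts[c] - verts[a])
        normals[fi] = n / np.linalg.norm(n)
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    cos_limit = np.cos(np.radians(angle_deg))
    sharp = []
    for edge, fs in edge_faces.items():
        # Only interior edges (exactly two faces) can be creases.
        if len(fs) == 2 and np.dot(normals[fs[0]], normals[fs[1]]) < cos_limit:
            sharp.append(edge)
    return sharp
```

Running this before a bake gives you a candidate list to confirm by eye, rather than hunting creases manually across the whole mesh.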
If the asset will be rigged and animated, topology is destiny. I ensure edge loops follow the natural lines of deformation: loops around eyes and mouth, loops following major muscle groups, and loops at joint intersections. Pole management is critical—I try to position poles (vertices where more than four edges meet) in areas of low deformation or hide them within flat geometry. A clean, flowing topology here means realistic, artifact-free animation.
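Pole hunting is easy to script: a pole is simply a vertex whose valence differs from four in a quad mesh. A minimal sketch (note this naive version also flags open-boundary vertices, whose valence is naturally below four):

```python
from collections import defaultdict

def find_poles(quad_faces):
    """List poles in a quad mesh: vertices whose valence (number of
    distinct connected edges) is not 4, mapped to their valence.
    Interior 3- and 5-poles are normal in retopo, but they belong in
    flat, low-deformation areas - not at joints. Boundary vertices of
    an open mesh are also reported, since their valence is under 4."""
    neighbors = defaultdict(set)
    for face in quad_faces:
        for i, v in enumerate(face):
            neighbors[v].add(face[(i + 1) % len(face)])  # next corner
            neighbors[v].add(face[i - 1])                # previous corner
    return {v: len(n) for v, n in neighbors.items() if len(n) != 4}
```

Dumping this list into a vertex group or selection lets you audit pole placement against the deformation map before handing the mesh to a rigger.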