Seed3D 1.0 is ByteDance’s 2025 single-image-to-3D diffusion-transformer model that outputs detailed explicit meshes. It pairs an image encoder with a latent 3D VAE to convert a single RGB input into shape tokens, iteratively denoises those tokens into high-fidelity geometry, and decodes the result through a mesh generator that ensures watertight topology suitable for physics simulation.
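The denoising stage described above can be sketched as a simple loop: start from Gaussian noise in the token space and repeatedly subtract the model's noise estimate. This is a minimal illustrative sketch, not Seed3D's actual sampler; `toy_denoiser`, the token dimensions, and the Euler-style update rule are all assumptions standing in for the real diffusion transformer, which also conditions on image-encoder features.

```python
import numpy as np

def toy_denoiser(tokens: np.ndarray, t: float) -> np.ndarray:
    """Stand-in for the diffusion transformer's noise prediction at
    timestep t. (Hypothetical: the real model is learned and conditions
    on features from the image encoder.)"""
    return tokens * 0.1 * t  # placeholder noise estimate

def denoise_shape_tokens(num_tokens: int = 256, dim: int = 64,
                         steps: int = 10, seed: int = 0) -> np.ndarray:
    """Sketch of iterative denoising from pure noise to shape tokens."""
    rng = np.random.default_rng(seed)
    tokens = rng.standard_normal((num_tokens, dim))  # start from Gaussian noise
    for step in range(steps, 0, -1):
        t = step / steps                      # normalized timestep 1.0 -> 0.1
        noise_pred = toy_denoiser(tokens, t)  # model's noise estimate
        tokens = tokens - noise_pred          # simple Euler-style update
    return tokens  # latent shape tokens, ready for the mesh decoder

shape_tokens = denoise_shape_tokens()
print(shape_tokens.shape)  # (256, 64)
```

The key structural point is that sampling is a fixed number of denoising steps over a fixed-size token grid, after which a single decode pass produces the mesh.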
Unlike multi-view reconstruction pipelines, Seed3D generates each asset from a single image in a deterministic feed-forward pass. It then applies multi-view texture synthesis to keep albedo, metalness, and roughness aligned across viewpoints, while preserving fine typographic and mechanical detail.
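To make the cross-view alignment idea concrete, here is a toy fusion rule: for each texel, blend the color each camera observes, weighted by how directly that camera faces the surface. This is an illustrative sketch only; Seed3D's texture synthesis is a learned model, not this fixed cosine-weighted average, and `blend_views` and its inputs are hypothetical.

```python
import numpy as np

def blend_views(view_colors: np.ndarray, view_cosines: np.ndarray) -> np.ndarray:
    """Cross-view texture fusion sketch: per texel, average the color each
    camera sees, weighted by the cosine between the view direction and the
    surface normal, so head-on views dominate and back-facing views drop out."""
    w = np.clip(view_cosines, 0.0, None)                    # discard back-facing views
    w = w / np.maximum(w.sum(axis=0, keepdims=True), 1e-8)  # normalize per texel
    return (w[..., None] * view_colors).sum(axis=0)         # weighted RGB blend

# 2 views x 2 texels x RGB: view 0 sees texel 0 head-on, view 1 sees texel 1
colors = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
                   [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]])
cosines = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
fused = blend_views(colors, cosines)
print(fused)  # texel 0 takes view 0's red, texel 1 takes view 1's green
```

The same weighting scheme applies unchanged to metalness and roughness maps, which is what keeps the three PBR channels mutually consistent across viewpoints.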
Every Seed3D output includes UV-mapped textures, PBR maps (albedo, metalness, roughness), and consistent real-world scale metadata that downstream platforms such as NVIDIA Isaac Sim can use to infer mass and friction, enabling immediate embodied-AI experimentation.
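As a sketch of how scale metadata enables physics inference, the snippet below reads a metadata sidecar and derives a plausible mass from the scaled bounding box and a material density. The JSON field names and the `estimate_mass` helper are illustrative assumptions, not a documented Seed3D schema or Isaac Sim API; a simulator would use the actual mesh volume and a per-material density table.

```python
import json

# Hypothetical metadata sidecar; field names are illustrative, not a
# documented Seed3D schema.
metadata_json = """
{
  "unit": "meters",
  "scale": 1.0,
  "bbox": [0.30, 0.20, 0.10],
  "material": {"albedo": "albedo.png", "metalness": "metalness.png",
               "roughness": "roughness.png"}
}
"""

def estimate_mass(meta: dict, density_kg_m3: float = 700.0) -> float:
    """Infer a physics-ready mass from scale metadata: approximate the
    asset's volume by its scaled bounding box, then apply a material
    density (700 kg/m^3 is roughly wood; hypothetical default)."""
    sx, sy, sz = (d * meta["scale"] for d in meta["bbox"])
    volume = sx * sy * sz  # m^3, bounding-box approximation
    return volume * density_kg_m3

meta = json.loads(metadata_json)
print(round(estimate_mass(meta), 3))  # 4.2 (kg)
```

Because the mesh is watertight and scaled in real-world units, a simulator can replace the bounding-box approximation with an exact volume integral, which is what makes the assets directly usable for rigid-body simulation.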