DreamText is a model for image-to-3D generation. It uses text descriptions to capture the content of an input image, then leverages that text to guide Stable Diffusion as a loss prior for a NeRF model. This text-based guidance stabilizes the optimization and improves the fidelity of the resulting 3D reconstructions. This webpage showcases rendered 3D reconstructions produced by our model alongside those of the state-of-the-art baseline RealFusion, so visitors can compare the strengths and capabilities of the two approaches side by side.
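Using a frozen diffusion model as a loss prior for a NeRF is commonly done with a score-distillation-style gradient (as in DreamFusion): noise the rendered image, ask the diffusion model to predict the noise, and push the residual back to the renderer. The sketch below is a minimal NumPy illustration of that idea, not DreamText's actual implementation; the `denoiser` callable, the weighting `w = 1 - alpha_bar[t]`, and the schedule are all illustrative assumptions.

```python
import numpy as np

def sds_gradient(rendered, denoiser, t, alpha_bar, rng):
    """Score-distillation-style gradient w.r.t. a rendered image.

    rendered  : image rendered from the NeRF (flattened array)
    denoiser  : placeholder for a frozen diffusion model's noise
                predictor eps(x_t, t) -- an assumption, not DreamText's API
    t         : diffusion timestep
    alpha_bar : cumulative noise schedule, alpha_bar[t] in (0, 1)
    """
    eps = rng.standard_normal(rendered.shape)           # sampled noise
    a = alpha_bar[t]
    x_t = np.sqrt(a) * rendered + np.sqrt(1 - a) * eps  # forward diffusion
    eps_pred = denoiser(x_t, t)                         # frozen model's guess
    w = 1 - a                                           # common weighting choice
    # The diffusion model's Jacobian is deliberately skipped; the
    # residual is passed straight back to the renderer's output.
    return w * (eps_pred - eps)

# Toy usage with a trivial placeholder denoiser that predicts zero noise.
rng = np.random.default_rng(0)
alpha_bar = np.linspace(0.999, 0.01, 1000)
img = rng.standard_normal(64)
g = sds_gradient(img, lambda x, t: np.zeros_like(x),
                 t=500, alpha_bar=alpha_bar, rng=rng)
print(g.shape)
```

In a full pipeline this gradient would flow into the NeRF's parameters through the differentiable renderer at every optimization step.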
Input images used to condition the image-to-3D generative models DreamText and RealFusion.