
Chipzilla shows off Latent Diffusion Model for 3D

28 June 2023


Uses generative AI to create realistic 3D visual content

Intel Labs and Blockade Labs have developed Latent Diffusion Model for 3D (LDM3D), a diffusion model that uses generative AI to create realistic 3D visual content.

Intel said that its LDM3D is the industry’s first model to generate a depth map using the diffusion process to create 3D images with 360-degree views that are vivid and immersive.

Chipzilla adds that LDM3D could revolutionise content creation, metaverse applications and digital experiences, transforming a wide range of industries, from entertainment and gaming to architecture and design.

Intel Labs boffin Vasudev Lal said that generative AI technology augments human creativity and saves time.

“Most of today’s generative AI models are limited to generating 2D images and few can generate 3D images from text prompts. Unlike existing latent stable diffusion models, LDM3D allows users to generate an image and a depth map from a given text prompt using almost the same number of parameters. It provides more accurate relative depth for each pixel in an image compared to standard post-processing methods for depth estimation and saves developers significant time to develop scenes,” Lal said.
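As a rough illustration of that single-pass, joint RGB-plus-depth generation, here is a minimal sketch assuming the Hugging Face diffusers integration of LDM3D and Intel's publicly released Intel/ldm3d checkpoint (neither is detailed in the article itself):

```python
# Minimal sketch: generate an RGB image and its depth map from one text
# prompt, assuming the diffusers StableDiffusionLDM3DPipeline and the
# "Intel/ldm3d" checkpoint are available (assumptions, not from the article).
import torch
from diffusers import StableDiffusionLDM3DPipeline

pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "a serene tropical beach at sunset"
output = pipe(prompt)

# One diffusion pass yields both modalities rather than running a separate
# depth-estimation stage afterwards.
rgb_image = output.rgb[0]
depth_image = output.depth[0]
rgb_image.save("beach_rgb.png")
depth_image.save("beach_depth.png")
```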

He thinks the research could revolutionise digital content interaction by enabling users to experience their text prompts in previously inconceivable ways.

“The images and depth maps generated by LDM3D enable users to turn the text description of a serene tropical beach, a modern skyscraper or a sci-fi universe into a 360-degree detailed panorama. This ability to capture depth information can instantly enhance overall realism and immersion, enabling innovative applications for industries ranging from entertainment and gaming to interior design and real estate listings, as well as virtual museums and immersive virtual reality (VR) experiences, he said.

LDM3D was trained on a dataset constructed from a subset of 10,000 samples of the LAION-400M database, which contains over 400 million image-caption pairs. The team annotated the training corpus using the Dense Prediction Transformer (DPT) Large depth estimation model, previously developed at Intel Labs.

The DPT-Large model provides highly accurate relative depth for each pixel in an image. The LAION-400M dataset was built for research purposes, so that model training can be tested at larger scale by the broader research community and other interested parties.
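For context, per-pixel relative depth annotation of this kind can be reproduced with the DPT-Large checkpoint Intel published on the Hugging Face hub; the exact annotation pipeline used for the LAION subset is not described in the article, so this is only a hedged sketch:

```python
# Hedged sketch of annotating one image with relative depth using DPT-Large
# via the transformers library and the "Intel/dpt-large" checkpoint.
import torch
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation

processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

image = Image.open("sample.jpg").convert("RGB")  # e.g. one LAION image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth  # relative depth per pixel

# Resize the predicted depth map back to the original image resolution.
depth = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],  # PIL size is (W, H); interpolate wants (H, W)
    mode="bicubic",
    align_corners=False,
).squeeze()
```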

The LDM3D model was trained on an Intel AI supercomputer powered by Intel Xeon processors and Habana Gaudi AI accelerators. The resulting model and pipeline combine a generated RGB image with its depth map to produce 360-degree views for immersive experiences.
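The article does not spell out how the RGB and depth outputs are fused, but the generic technique is straightforward: in an equirectangular 360-degree panorama, each pixel corresponds to a direction on a sphere, and scaling that direction by the pixel's depth lifts the flat image into 3D geometry. A minimal sketch (an illustration of the general idea, not Intel's actual pipeline):

```python
# Generic sketch: lift an equirectangular RGB panorama plus depth map into a
# coloured 3D point cloud. Each pixel maps to a spherical direction scaled by
# its depth value. This is not Intel's DepthFusion implementation.
import numpy as np

def panorama_to_points(rgb: np.ndarray, depth: np.ndarray):
    """rgb: (H, W, 3) uint8, depth: (H, W) float, both equirectangular."""
    h, w = depth.shape
    # Longitude spans [-pi, pi) left to right; latitude [pi/2, -pi/2] top to bottom.
    lon = (np.arange(w) / w - 0.5) * 2.0 * np.pi
    lat = (0.5 - np.arange(h) / h) * np.pi
    lon, lat = np.meshgrid(lon, lat)  # both (H, W)

    # Unit direction on the sphere for every pixel, scaled by per-pixel depth.
    x = depth * np.cos(lat) * np.sin(lon)
    y = depth * np.sin(lat)
    z = depth * np.cos(lat) * np.cos(lon)

    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colours = rgb.reshape(-1, 3)
    return points, colours
```

A point cloud produced this way can then be meshed or rendered in a real-time engine, which is what lets a viewer look around inside the generated scene rather than at a flat picture of it.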

To demonstrate the potential of LDM3D, Intel and Blockade researchers developed DepthFusion, an application that uses standard 2D RGB photos and depth maps to create immersive and interactive 360-degree view experiences.

DepthFusion uses TouchDesigner, a node-based visual programming language for real-time interactive multimedia content, to turn text prompts into interactive and immersive digital experiences. Because LDM3D is a single model producing both an RGB image and its depth map, it reduces memory footprint and improves latency.
