Researchers at Carnegie Mellon University are working to make a task once confined to science-fiction films an everyday occurrence. Using basic 3D models, the group hopes its new software can enable full 3D manipulation of traditional 2D photos.
Led by Associate Researcher Yasser Sheikh, the Carnegie Mellon team has created software that takes freely available stock 2D images of everyday objects, such as furniture, cars, clothing, and appliances, and turns them into 3D models. These models can then be manipulated in 3D space as the user desires. The software can also adapt the image's lighting and blending after the selected object is transformed.
The software relies on a massive library of stock 3D renderings and images to function. Once a user selects all or a portion of the 2D image to manipulate, the software searches the catalog to create a simple 3D rendering that the user can then transform. Once the model is created, the photo can be completely re-imagined.
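To give a rough sense of why a matched 3D model enables edits that flat 2D warping cannot, the toy sketch below (illustrative only, not the researchers' actual code; all function names are hypothetical) places a simple "stock model" in front of a pinhole camera, rotates it out of the image plane, and re-projects it to new pixel positions:

```python
import math

def rotate_y(point, angle):
    """Rotate a 3D point about the vertical (y) axis."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(point, focal=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-space 3D point to pixel coordinates."""
    x, y, z = point
    return (focal * x / z + cx, focal * y / z + cy)

# A toy "stock model": the eight corners of a unit cube, centered
# 5 units in front of the camera.
center = (0.0, 0.0, 5.0)
cube = [(sx, sy, sz + center[2]) for sx in (-0.5, 0.5)
                                 for sy in (-0.5, 0.5)
                                 for sz in (-0.5, 0.5)]

def rotate_about_center(p, angle):
    """Spin the object about its own vertical axis (an out-of-plane edit)."""
    local = (p[0] - center[0], p[1] - center[1], p[2] - center[2])
    rx, ry, rz = rotate_y(local, angle)
    return (rx + center[0], ry + center[1], rz + center[2])

original = [project(p) for p in cube]
turned = [project(rotate_about_center(p, math.pi / 4)) for p in cube]
```

After the 45-degree turn, every corner lands at a new pixel location, revealing geometry that was hidden in the original view; that is the kind of edit the hidden 3D structure makes possible.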
The researchers believe that as 3D scanning and printing become more widespread, more stock models will become available for the software to tap into, filling the gaps in its database. "In the real world, we're used to handling objects — lifting them, turning them around or knocking them over," Robotics Institute PhD student Natasha Kholgade told CNET.
The authors note that, as a result of the rising accessibility of 3D renderings of everyday objects, "Public repositories of 3D models are growing rapidly and several Internet companies are currently in the process of generating 3D models for millions of merchandise items, such as toys, shoes, clothing, and household equipment. It is therefore increasingly likely that for most objects in an average user photograph a stock 3D model will soon be available, if it is not already."
If this new software proves viable, the platform could be invaluable to the 3D printing community. If you want to read the full report from the research team, the document can be found here.