MIT scientists make Pokemon Go more interactive with Interactive Dynamic Video

Scientists at MIT have developed a new technique called Interactive Dynamic Video (IDV) that can take video of real objects and quickly turn it into interactive simulations that people, or 3D models, can manipulate. According to the researchers, the technique could also help simulate how real bridges and buildings might respond to natural disasters. The smartphone game Pokemon Go, which superimposes virtual images onto the real world, could benefit from it as well.

For example, while the Pokemon Go app can drop virtual characters into real-world environments, IDV goes a step further, enabling virtual objects like Pokemon to interact with their surroundings in specific, realistic ways, such as bouncing off the leaves of a nearby bush.

IDV lets users reach in and "touch" objects in videos. Using ordinary cameras and algorithms, IDV analyzes the tiny, nearly invisible vibrations of an object in a video to build a simulation that users can interact with virtually.
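The article does not describe the algorithm in detail, but the core idea of analyzing tiny vibrations can be illustrated with a minimal sketch: given a short grayscale video, take a temporal Fourier transform at each pixel and find its strongest oscillation frequency. This is a toy stand-in for the much richer modal analysis the MIT work performs, and the function name and interface here are assumptions for illustration only.

```python
import numpy as np

def dominant_vibration_map(frames, fps):
    """Toy sketch: find each pixel's dominant vibration frequency.

    frames: (T, H, W) float array of grayscale video frames.
    fps: frames per second of the video.
    Returns an (H, W) map of the strongest non-DC frequency (in Hz) per pixel.
    """
    # Remove the static scene so only temporal changes remain.
    frames = frames - frames.mean(axis=0, keepdims=True)
    # Temporal FFT at every pixel; magnitude spectrum per frequency bin.
    spectrum = np.abs(np.fft.rfft(frames, axis=0))
    spectrum[0] = 0  # ignore the DC (constant) component
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    return freqs[np.argmax(spectrum, axis=0)]

# Usage: a synthetic video where one pixel oscillates at 4 Hz.
T, H, W = 64, 4, 4
t = np.arange(T) / 32.0  # 64 frames at 32 fps
frames = np.zeros((T, H, W))
frames[:, 1, 1] = 100 + 10 * np.sin(2 * np.pi * 4.0 * t)
fmap = dominant_vibration_map(frames, fps=32.0)
print(fmap[1, 1])  # recovers the 4 Hz vibration
```

The real system goes much further, recovering spatial vibration modes and using them to predict how the object deforms under new, virtual forces, but the per-pixel frequency analysis above captures the starting point: motion too small to notice by eye is still measurable in the pixel data.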

CSAIL Ph.D. student Abe Davis, who is reported to be publishing the work this month as part of his final dissertation, said, "This technique lets us capture the physical behavior of objects, which gives us a way to play with them in virtual space."

Such interactive videos can help predict how an object will react to unknown forces, which could aid preparation for calamities. On the entertainment side, filmmakers could use the technique to produce new kinds of visual effects.


The technology also pairs well with CGI. Getting CGI characters to interact realistically with their real-world environments in films is difficult and expensive, requiring green screens and detailed models of virtual objects that must be synced with live performances.

IDV instead lets a videographer shoot video of an existing real-world environment and make minor edits, such as masking, matting, and shading, to achieve a similar or better effect in far less time and at lower cost.

This work by the Computer Science and Artificial Intelligence Laboratory at MIT was supported by the National Science Foundation in partnership with the Qatar Computing Research Institute.
