Microsoft Kinect + DSLR = DEPTH FILMMAKING

A very enterprising fellow by the name of James George has developed the RGBDToolkit, a workflow that allows you to marry the two tools with some very intriguing results. You shoot through a DSLR attached to a Kinect, and after calibrating both devices against a checkerboard, you can start creating some incredible imagery.
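For a rough idea of what that calibration step looks like, here is a minimal checkerboard-calibration sketch in Python using OpenCV. This illustrates the general technique rather than the RGBDToolkit's own calibration workflow, and the folder name and the 9x6 board size are assumptions for the example.

```python
# Minimal checkerboard-calibration sketch using OpenCV (illustration only;
# the RGBDToolkit has its own calibration workflow). The "calibration_frames"
# folder and the 9x6 board size are placeholder assumptions.
import glob

import cv2
import numpy as np

BOARD = (9, 6)  # inner corners per row and column of the printed checkerboard

# 3D positions of the board corners in the board's own plane (z = 0)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration_frames/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics for one camera (the DSLR, or the Kinect's depth camera); doing
# this for both devices is what lets their two images be aligned afterwards.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("camera matrix:\n", K)
```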

"The RGBDToolkit invites you to imagine the future of filmmaking. Repurposing the depth sensing camera from the Microsoft Kinect or Asus Xtion Pro as an accessory to your HD DSLR camera, the open source hardware and software captures and visualizes the world as mesmerizing wireframe forms. A CGI and video hybrid, the data can be rephotographed from any angle in post." (source: http://www.rgbdtoolkit.com)

On their website you can find tutorials and more about how this technology works: http://www.rgbdtoolkit.com/tutorials.html

Below you can see a two-minute video showing how this technology works.

You may now be thinking: what does this do for me? It can do many things for researchers, creating an environment where you can actually film a 3D human body, but at this point the tool is mostly doing a lot for experimental filmmakers, as it creates some very cool experimental imagery on its own. And here we are, with two off-the-shelf tools and free open-source software that let us start dipping our toes into that world!

Who knows? Perhaps volumetric scanning will become a common feature on future cameras, allowing for 3D projections, holograms and other assorted weirdness. That is the technology I'm hoping to achieve when my X100 becomes available to be manufactured, and I'm hoping that, with developments like this, my camera will come to life sooner than I thought.


The camera that can see around corners.

As I promised, in this post I'm going to show and explain how this technology has helped my progress in the creation of my new camera (the X100).

“New technology could lead to devices that can spot people hidden from view or inspect components deep inside machines”

This is how theguardian.com presents the new technology, but it's actually more than that. This camera is an evolution of the last technology I've been talking about (the camera that can see through skin). It opens doors for developers, helps people like me understand how the technology works, and helps me develop my idea, since it uses the same laser pulses and the speed of light.

My X100 is a revolutionary camera concept that uses a laser and the speed of light to analyse the environment in depth. The technology will see through skin, walls and glass, and some of the pulses will be reflected back to the camera, 100 to be exact. Why 100? Because we divide the human body into 100 points, so the camera can focus on those points while reconstructing the image.
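The X100 is only a concept, but the time-of-flight arithmetic it leans on is real: a point's distance is the round-trip time of the returning pulse multiplied by the speed of light and halved. Here is a toy sketch, with the 100 return times invented purely for illustration:

```python
# Toy sketch of the time-of-flight idea behind the X100 concept: a laser pulse
# goes out, bounces off one of the 100 body points, and the round-trip time of
# the returning pulse gives that point's distance, d = c * t / 2.
# The 100 return times below are invented purely for illustration.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

rng = np.random.default_rng(0)
# Pretend round-trip times (seconds) for a body roughly 2 m from the camera.
round_trip_times = rng.uniform(13e-9, 14e-9, size=100)

distances_m = C * round_trip_times / 2.0  # halved: the light travels out and back
print(f"nearest point:  {distances_m.min():.3f} m")
print(f"farthest point: {distances_m.max():.3f} m")
```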

Like the new technology, at the heart of my camera's technique is the ability to build up images from light waves that are scattered off surfaces like walls in almost every direction.

You may ask: what are the differences?

I can explain that in a single word: 'HOLOGRAPHY'. The X100 will be able to project a 3D hologram when you watch your footage back, or you can choose to look at the LCD screen on the camera instead.

Most ultra-fast imaging technologies aim to mitigate the effects of scattered light, focusing instead on just the first photons to reach the sensor. The difference here, says Raskar, "is that we actually exploit the scattered light". The system "can record images every 2 picoseconds, the time it takes light to travel just 0.6 mm. So it can record the distance travelled by each photon with sub-millimetre precision".
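The figures in that quote are easy to verify yourself: in 2 picoseconds light covers roughly 0.6 mm, which is where the sub-millimetre depth precision comes from.

```python
# Checking the figure in the quote: how far does light travel in 2 picoseconds?
c = 299_792_458  # speed of light, m/s
t = 2e-12        # 2 picoseconds, in seconds
print(round(c * t * 1000, 3), "mm")  # prints 0.6 mm
```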

This process currently takes several minutes, but researchers hope that in the future it will be reduced to 10 seconds, says nature.com.

The X100 is a prototype, but also a revolutionary way to see things differently. It is based on two technologies developed in 2012, plus holograms, which already exist in a 2D format and can be made with an A4 scanner. extremetech.com says that "The reason why you don't see holograms everywhere is not because we don't know how to make them, but because the technology required to do so is currently quite expensive."

The scanner used by the team (at Japan's Chiba University) is a regular 4800 dpi A4 paper scanner, and it is able to create images with a resolution of over two gigapixels. This technique is called a scannergram, and you can see how it works in the picture below.

[Image: how the scannergram works]
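The "over two gigapixels" claim is simple to check: an A4 page is 210 x 297 mm, and at 4800 dots per inch that works out to roughly a 39,700 x 56,100 pixel scan, about 2.2 gigapixels.

```python
# Rough arithmetic behind the "over two gigapixels" figure:
# an A4 page (210 x 297 mm) scanned at 4800 dots per inch.
MM_PER_INCH = 25.4
DPI = 4800

width_px = 210 / MM_PER_INCH * DPI   # about 39,700 pixels
height_px = 297 / MM_PER_INCH * DPI  # about 56,100 pixels
print(f"{width_px * height_px / 1e9:.2f} gigapixels")  # about 2.23
```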

 

"So far, the team used the rig to create 0.43-gigapixel holograms of tiny insects, a flea and an ant. Using a process the team called 'band-limited double-step Fresnel diffraction,' they were able to build the hologram in only 177 seconds, down from an original 350," says James Plafke for ExtremeTech.

In my next post you will see how my camera works, along with a pitch and visuals for how it will look, using footage from other projects.

 

260MC Future Media T1

“Art is making something out of nothing and selling it.”

Frank Zappa

The Pitch 

         260MC, the module which I thought would be my personal favourite this year, since it started in a very creative way, looking at the past and giving us tasks for which we had to draw inspiration from the arts, begins Term 2 by making us think of a revolutionary, innovative idea that explores how the contemporary media landscape is being shaped by technological and cultural change.

        In the first week of this term we had to come up with 3 'Elevator Pitches', and as my inspiration is always drawn from art and nature, I had a hard time trying to get outside my comfort zone and think about something to do with technology.

       That being said, my first idea was to create a futuristic, experimental gallery that would use technology to make a cultural change in the art world while, at the same time, trying to bring people's attention to it. My second idea was to build a community of arts students around the world who would be able to share ideas, donate and get involved in each other's projects, plus a radar for finding people with the same interests and a chat platform.

        My third idea, and the one that I'm going to develop, is a prototype and a new media product that will use the same principles as the real-world camera that can see around corners. My prototype is a bit more sci-fi, as I want to create a camera that is aware of the travel time of light, can analyse the subject and environment using laser pulses, and, when you watch the footage back, will display it as 3D holograms. Now you see why I said it's a sci-fi idea, but the new world is happening so fast, and we already know that 2D holograms are real and that Ramesh Raskar, the head of the Camera Culture research group at MIT, has opened doors for photography with his camera that can see around corners, so it will not take long for my prototype to come to life.

      If you want to know more about my new idea, which will change the way we film and watch films forever, please keep in touch and leave me feedback in the comments below.