12 Comments
artician says... They must have a string of at least a hundred cameras along that trajectory that synchronize their shutters (how they do it in film, too). Must be an expensive setup.
eric3579 says... See the video here for an explanation. Not too many cameras are actually needed:
http://www.replay-technologies.com/technology.html
Our technology works by capturing reality not as just a two dimensional, or stereoscopic representation, but as a true three dimensional scene, comprised of three dimensional "pixels" that faithfully represent the fine details of the scene. This information is stored as a freeD™ database, which can then be tapped to produce (render) any desired viewing angle from the detailed information.
This enables a far superior way of capturing reality, which allows breaking free from the constraints of where a physical camera with a particular lens had been placed, to allow a freedom of viewing which has endless possibilities.
The current working deliverable of this technology ("Watch As U Want") allows producers and directors to create "impossible" camera views of a given moment in time, as seen in the Yankee Baseball YES View. But we believe that ultimately the biggest freeD™ innovation, as display technologies get better and more advanced, will allow the user to get fully and interactively immersed in the content.
artician says... Aha, okay, that explains why it looked like CG. I didn't think it was faked, though; I thought it must have just been the novelty of seeing such a large space panned around in that manner.
Cool!
AeroMechanical says... The question is: how long did it take to render? Is it hours or even days on a large render farm for each clip? That might limit the practicality, at least for sports broadcasts.
On the other hand, I hope in 10 or 15 years, I can watch sports and put the camera wherever I want in real time or put on my VR headset and watch as though I were standing next to the pitcher or sitting on the wing of a race car. That probably will happen and that is an AWESOME prospect.
kymbos says... This will be used widely. Very useful for cricket, where the current technology and its use are a real problem.
ant says... "Woah," as Neo says! I remember when CBS(?)'s Super Bowl did this, but it was crappy and choppy!
http://vimeo.com/user12462930 has more!
Sniper007 says... What? Render time? I'd guess minutes at most. C'mon, this is 2013, not 1993.
But you're right: Oculus Rift + This = Star Trek Holodeck in real life. Sports is possibly the best application for this combination, since the area of play is limited and well defined. Let me build upon your vision of the future.
It would require the ability to change your camera angle even while the "video" is playing. They'd also need to thoroughly map all audio sources on the playing field. Heck, I'm sure there are tons of other massive technical hurdles that I haven't even thought of, but if you will, imagine this:
Go to an empty baseball field (or other large, flat area) during a time when you can be assured you'll be totally alone. You'd need to set up some kind of markers, four in total, non-coplanar. They would track your movement on the field in three dimensions. You might also set up a large circular fence around the outer edge of the field with sticks and string, to make sure you don't run into a tree or a building, since you'll be totally blind once you don your Oculus Rift. Then, put on the Rift, and play the video with your vantage point on the field as the camera angle. You'd be holding in your hand a remote control which can pause, rewind, or fast forward.
You could literally be IN THE GAME, AS IT PLAYS, with the ability to run alongside your favorite football player as he runs into the end zone, seeing everything he sees, hearing everything he hears. Or stand in the end zone, and watch your favorite plays from every imaginable angle as though you were really there. Rewind the "video" and watch again from the vantage point of the quarterback, or the referee, or the coach.
There is no higher form of sports immersion. It is Nirvana.
rebuilder says... Rendering would be pretty much instant, as they're simply mapping the shot footage onto generated 3D models as textures, in a sense.
It's generating the models that can be pretty intensive, and if they're actually getting reliable results of the quality seen here with only a few cameras, their software is pretty good. The automated scene-capture solutions I've seen have still been a bit off the mark, having trouble with reflective surfaces especially and generally being a bit unpredictable. This looks quite impressive.
They probably take advantage of having a fixed location, so they can calibrate the fixed camera positions in advance, improving the quality of the spatial interpretation and probably speeding things up as well.
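The pre-calibration rebuilder describes boils down to knowing each fixed camera's pinhole parameters ahead of time, so any 3D point in the scene can be projected into that camera's image. A minimal sketch of that projection, with entirely hypothetical calibration values (in practice K, R, and t would come from calibrating each stadium camera once):

```python
def project_point(K, R, t, X):
    """Project a 3D world point X into pixel coordinates: x ~ K (R X + t)."""
    # World coordinates -> camera coordinates.
    cam = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Camera coordinates -> homogeneous image coordinates via intrinsics K.
    uvw = [sum(K[i][j] * cam[j] for j in range(3)) for i in range(3)]
    # Perspective divide gives the pixel (u, v).
    return (uvw[0] / uvw[2], uvw[1] / uvw[2])

# Hypothetical calibration for one fixed camera:
K = [[1000.0, 0.0, 640.0],   # focal length ~1000 px,
     [0.0, 1000.0, 360.0],   # principal point at (640, 360)
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0],        # camera aligned with world axes
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 10.0]         # world origin sits 10 units in front of the camera

# The world origin projects straight to the image center.
print(project_point(K, R, t, (0.0, 0.0, 0.0)))  # -> (640.0, 360.0)
```

With a few such calibrated cameras, the same 3D point can be matched across images, which is what makes the spatial reconstruction tractable.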
Esoog says... I'm a newb at this kind of stuff. Why do they always have to freeze the frame to do the transition? What limitation prevents it from being done on the moving video?
shatterdrose says... Probably none, or simply render times. My guess is their goal is to allow a frozen moment, such as when the runner comes into home, to be shown from various angles so you can see that A) the ball was in the mitt but B) the runner's foot was already on the base. Mostly, by freezing the frame you can more clearly see the action; if the motion were still ongoing while the camera moved, it would be jarring to the viewer and make it even more confusing as to what really happened.
It's why JJ Abrams shakes the camera a lot: it helps hide the flaws in action scenes, such as when people don't actually stab or hit each other.
Jinx says... I'm watching Dota 2's International and I can move the camera around just like this. What's the big fuss?
Impressive tech.
fuzzyundies says... Graphics programmers call those 3D pixels "voxels". Once you have them in some sort of spatial acceleration structure like an octree, rendering is usually pretty quick. Making voxels out of a bunch of high-res camera images is the expensive part.