Unlimited Detail: Potential Next-Gen Graphics Technology?

In this video, Bruce Dell demonstrates the Unlimited Detail graphics engine, which renders streaming point cloud data rather than using traditional polygon rendering techniques.
Stormsinger says...

Sorry, this guy's just full of shit.

Point clouds don't scale as you zoom in, for one thing...that's a real serious flaw, as it means you need more points for a decent zoomable model than most computers have memory.

Second, polygon processing isn't the biggest bottleneck...creating the art is. Using point clouds isn't going to help at all in that area, it'll likely make it worse.

Third, every single graphics engine out there has 3D search algorithms and means of limiting the objects to be displayed; there's nothing new in that.

Bottom line, this screams snake-oil, and I'll believe it when I see it, just like the Phantom game console.

1stSingularity says...

Everything about this sets off my bullshit detector. First of all, cutting edge games can run to tens of gigabytes in size (most of that is textures, the things that color the polygons). The problem is that even bleeding edge computers only have a few gigabytes (nowadays probably between 2 and 12) of RAM, and only 1 to 2 gigs of high performance RAM used for rendering the graphics we are used to seeing (i.e., video RAM). Saying that this magically fake system allows for unlimited detail implies that they can somehow surpass the RAM barrier. I don't care what kind of search algorithms they use (and as Stormsinger pointed out, CURRENT GRAPHICS ALREADY DO THIS), they CANNOT go over the current-day RAM barrier.

Also, why in the HELL would someone do a technical demo like they were trying to explain things to a 6th grader? The more I think about it, the more this seems like an elaborate scam to try to get technical laymen to go to their website and/or buy their shit, stock, or something.

As for the hint that they are doing the same thing Google is... Wow. Google uses a metric shit-ton of the most technically advanced servers available to deliver its results quickly, not a single consumer computer/console. We are talking thousands upon thousands of business class servers. Do not compare your fake video to Google.

mrsid says...

He's trying very hard to make polygons look bad. Fact is, there's no problem with polygons; he just picked some really ugly examples. If he'd shown shots of Avatar, it would've been a less convincing argument.

darkpaw02 says...

It sounds pretty reasonable to me. Google does actually use some fairly advanced algorithms, not so much brute force computing power. The huge server farms are there because they have a huge user base.

And the idea that "unlimited" detail = "unlimited" memory requirements isn't quite right; have a look at some of the 64K demos from the PC demo scene. The example of the billions of animals stacked in pyramids is the answer to that one: it might be a few megabytes for the first animal, but the rest are just extra entries in an index pointing back to the first one.
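To make the "extra entries in an index" idea concrete, here's a rough sketch in C++. The struct names are invented purely for illustration; this isn't anyone's actual engine:

```cpp
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// The heavy data is stored exactly once...
struct Mesh {
    std::vector<Vec3>     vertices;   // could be millions of entries
    std::vector<uint32_t> indices;
};

// ...and every copy of the animal is just a tiny record pointing back at it.
struct Instance {
    uint32_t meshId;    // which mesh to draw
    Vec3     position;  // where to draw it
    float    scale;
};

int main() {
    std::vector<Mesh>     meshes(1);        // the one detailed animal model
    std::vector<Instance> pyramid;
    for (uint32_t i = 0; i < 1000000; ++i)  // a million "copies"
        pyramid.push_back({0, {float(i % 1000), float((i / 1000) % 1000), 0.0f}, 1.0f});
    // Memory cost: one mesh plus a million ~20-byte instances,
    // not a million copies of the mesh itself.
}
```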

Stormsinger says...

>> ^darkpaw02:
And the idea that "unlimited" detail = "unlimited" memory requirements isn't quite right; have a look at some of the 64K demos from the PC demo scene. The example of the billions of animals stacked in pyramids is the answer to that one: it might be a few megabytes for the first animal, but the rest are just extra entries in an index pointing back to the first one.


I'm not sure which way you intend your example here...-Polygons- supply that "few extra entries in an index" property...using point clouds, not so much. Yes, you could presumably create exact duplicates of objects by recording a displacement for a cloud, but you get exactly the same ability with polygons, and the objects are vastly smaller in memory footprint. IOW, duplicating an object is -not- increasing the number of details. Details only have a cost in -any- approach if they're unique.

A point cloud first requires you to determine a maximum zoom, i.e. how close to the object are you going to be allowed to move the camera. Then you must specify enough individual points to provide a seamless view of the object. If you then move closer, your object falls apart into dissociated points. That's inherent in the very definition of a point cloud, after all. With polygons, you can interpolate between your vertices for any level of zoom...and the only flaw is an increasingly pixelated image.
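For anyone who wants to see what "interpolate between your vertices" means in practice, here's a toy sketch. This is my own illustration, nothing to do with the engine in the video:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Any point on a triangle's surface can be generated on demand from just the
// three stored vertices, using barycentric weights u, v, w = 1 - u - v.
Vec3 pointOnTriangle(Vec3 a, Vec3 b, Vec3 c, float u, float v) {
    float w = 1.0f - u - v;                       // requires u, v >= 0 and u + v <= 1
    return { w * a.x + u * b.x + v * c.x,
             w * a.y + u * b.y + v * c.y,
             w * a.z + u * b.z + v * c.z };
}

int main() {
    Vec3 a{0, 0, 0}, b{1, 0, 0}, c{0, 1, 0};
    // However far you zoom in, you can always ask for a finer sample...
    Vec3 p = pointOnTriangle(a, b, c, 0.123f, 0.456f);
    std::printf("(%f, %f, %f)\n", p.x, p.y, p.z);
    // ...whereas a point cloud only contains the samples stored up front.
}
```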

Honestly, if points are so vastly superior, you can use polygons as points too...for a simple three-fold increase in memory. That would make it pretty trivial to prove his concept...but he isn't offering that proof. If memory serves (I'm not listening to the whole thing again), he's looking for investors. Proof is not an unreasonable requirement in this case. If he's being honest, a demo at one of the big conferences (as opposed to an unbelievable video clip on youtube) would net him all the funds he could need.

I still believe it smells exactly like the Phantom.

darkpaw02 says...

You are right Stormsinger, you could use exactly the same tricks to save memory with polygons, and everyone does. A polygon model + textures certainly could be smaller in memory than a very detailed coloured point cloud, but what we should be comparing is the computational power needed to render polygon vs point-cloud models of similar complexity.

That this could give the point-cloud approach an advantage in that case seems plausible to me.

I don't think zoom is an issue. Take the case where there is just one single point in the cloud. If you zoomed in to the point where that one point filled your screen, all that would happen is that the point cloud search performed for each screen pixel would return the exact same point each time, and therefore be coloured the same.
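A toy version of the case I'm describing; the brute-force search below is just a stand-in I made up, not whatever their engine actually does:

```cpp
#include <cstdio>
#include <vector>

struct Point { float x, y, z; unsigned colour; };

// Stand-in for the per-pixel "point cloud search": find the nearest stored point.
const Point* nearestPoint(const std::vector<Point>& cloud, float x, float y, float z) {
    const Point* best = nullptr;
    float bestDist = 1e30f;
    for (const Point& p : cloud) {
        float d = (p.x - x) * (p.x - x) + (p.y - y) * (p.y - y) + (p.z - z) * (p.z - z);
        if (d < bestDist) { bestDist = d; best = &p; }
    }
    return best;
}

int main() {
    std::vector<Point> cloud = { {0.0f, 0.0f, 5.0f, 0xFF0000} };   // a single red point
    // Fully zoomed in, every pixel's search lands on that same point,
    // so every pixel comes back the same colour.
    for (int px = 0; px < 3; ++px)
        for (int py = 0; py < 3; ++py) {
            const Point* hit = nearestPoint(cloud, px * 0.001f, py * 0.001f, 5.0f);
            std::printf("pixel %d,%d -> colour %06X\n", px, py, hit->colour);
        }
}
```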

MaxWilder says...

After a quick Google search, it appears that "Unlimited Detail" started making these claims back in March 2008, and nothing much has changed in the discussions that have popped up on various websites since then. I call vaporware.

Stormsinger says...

>> ^darkpaw02:
You are right Stormsinger, you could use exactly the same tricks to save memory with polygons, and everyone does. A polygon model + textures certainly could be smaller in memory than a very detailed coloured point cloud, but what we should be comparing is the computational power needed to render polygon vs point-cloud models of similar complexity.
That this could give the point-cloud approach an advantage in that case seems plausible to me.
I don't think zoom is an issue. Take the case where there is just one single point in the cloud. If you zoomed in to the point where that one point filled your screen, all that would happen is that the point cloud search performed for each screen pixel would return the exact same point each time, and therefore be coloured the same.


I'm not a graphics programmer, but I shared an office with several for the last 10 years, and a few things have rubbed off.

With today's cards, computational power is not really the limiting factor...it's bandwidth. Moving a point cloud around in memory is going to take vastly more bandwidth than moving the equivalent object made with polys.

As for the zoom, there is no object that is a single point, you're thinking about it from the wrong direction. Ever seen one of those 10x demos? Where the screen repeatedly changes scale by a factor of ten, zooming from a view of the galaxy, down to a grain of sand on a beach? That's the sort of zooming I'm talking about. You can do that with polys...you simply cannot represent the galaxy (even a simplified version) as a point cloud and have the ability to zoom down to that kind of level. It requires many orders of magnitude more data with points.

As I think about it, all that's really beside the point. Here's what you need to know on this topic.
1) What is a polygon? It's a collection of an -infinite- number of points, specified by a -few- vertices, possibly augmented by various forms of texture maps to provide colors. That makes it vastly more compact to represent.
2) There is absolutely nothing a point can do that a polygon cannot.
So, which approach is going to be more feasible and more flexible? In bandwidth and memory, polygons are a hands-down winner (rough numbers below). What's the advantage to points? None.
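Some back-of-the-envelope numbers on point 1, using figures I picked purely for illustration:

```cpp
#include <cstdio>

int main() {
    // One triangle: three vertices, three floats each.
    const long triangleBytes = 3 * 3 * sizeof(float);

    // The same surface stored as points: assume a 1m x 1m quad sampled every
    // millimetre, 16 bytes per point (position plus colour). These are my
    // assumptions, not anything measured from the video.
    const long samplesPerSide  = 1000;
    const long pointCloudBytes = samplesPerSide * samplesPerSide * 16L;

    std::printf("as polygons: %ld bytes\n", triangleBytes);
    std::printf("as points:   %ld bytes (~%.1f MB)\n",
                pointCloudBytes, pointCloudBytes / (1024.0 * 1024.0));
}
```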

And we're not even starting to look at more advanced issues like reflections. How can you possibly handle those with point clouds?

darkpaw02 says...

Obviously there is a limit to the number of points you can have in memory, but that's a limitation of memory, not the rendering process. I imagine that animation would be done by having two rendering passes, one for fixed background or terrain, and another for moving/animated objects. It may be that "moving" a point cloud is just as simple as moving a polygon model: set its location to somewhere else, then let the rendering system decide how to draw it there.

Given a triangle and an arbitrary point, it takes many operations to see if that point lies on the plane of the triangle. With a point cloud that would come down to asking "is there a point at location x,y,z?"
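Roughly the two queries I have in mind, sketched below. Both the point-on-plane test and the quantised hash lookup are my own illustrations, not anyone's real implementation:

```cpp
#include <cmath>
#include <cstdio>
#include <unordered_set>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                              a.z * b.x - a.x * b.z,
                                              a.x * b.y - a.y * b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Polygon side: does p lie on the plane of triangle (a, b, c)?
bool onTrianglePlane(Vec3 a, Vec3 b, Vec3 c, Vec3 p) {
    Vec3 n = cross(sub(b, a), sub(c, a));          // plane normal
    return std::fabs(dot(n, sub(p, a))) < 1e-5f;   // within tolerance of the plane
}

// Point-cloud side: is there a stored point at this (quantised) location?
// The quantisation and key scheme are invented purely for illustration.
long long keyFor(Vec3 p) {
    return std::llround(p.x * 1000) * 1000003LL * 1000003LL
         + std::llround(p.y * 1000) * 1000003LL
         + std::llround(p.z * 1000);
}
bool hasPointAt(const std::unordered_set<long long>& cloud, Vec3 p) {
    return cloud.count(keyFor(p)) != 0;
}

int main() {
    std::unordered_set<long long> cloud = { keyFor({0.2f, 0.2f, 0.0f}) };
    std::printf("on plane: %d, in cloud: %d\n",
                onTrianglePlane({0,0,0}, {1,0,0}, {0,1,0}, {0.2f, 0.2f, 0.0f}),
                hasPointAt(cloud, {0.2f, 0.2f, 0.0f}));
}
```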

In the video they seemed to say that they had duplicated the whole point cloud below the reflective water, which does seem like a nasty hack. That may have just been the easiest thing to do though.

Raaagh says...

>> ^Stormsinger:
Sorry, this guy's just full of shit.
Point clouds don't scale as you zoom in, for one thing...that's a real serious flaw, as it means you need more points for a decent zoomable model than most computers have memory.
Second, polygon processing isn't the biggest bottleneck...creating the art is. Using point clouds isn't going to help at all in that area, it'll likely make it worse.
Third, every single graphics engine out there has 3D search algorithms and means of limiting the objects to be displayed; there's nothing new in that.
Bottom line, this screams snake-oil, and I'll believe it when I see it, just like the Phantom game console.


Your points seem a bit wishy washy, and I don't find that they refute this presentation. But it does sound like bullshit.

It could be argued that polygons are the biggest limiting factor, especially if you are talking in the context of computation (which the presentation is). If there were infinite polygons, the workflow would be much simpler.

I don't get why you say point clouds don't scale as you zoom in; I'm sure you could do pretty awesome pruning, searching and grouping at different point resolutions. It does seem like each candidate point/group of points would have to be in volatile memory, with some groups/branches streamed off the HDD many times per frame. I just really need some numbers to believe the CPU-HDD bridge can handle that.

I would have been convinced by some indication of the processing required to find a single pixel.

Stormsinger says...

>> ^Raaagh:
Your points seem a bit wishy washy, and I don't find that they refute this presentation. But it does sound like bullshit.
It could be argued that polygons are the biggest limiting factor, especially if you are talking in the context of computation (which the presentation is). If there were infinite polygons, the workflow would be much simpler.
I don't get why you say point clouds don't scale as you zoom in; I'm sure you could do pretty awesome pruning, searching and grouping at different point resolutions. It does seem like each candidate point/group of points would have to be in volatile memory, with some groups/branches streamed off the HDD many times per frame. I just really need some numbers to believe the CPU-HDD bridge can handle that.
I would have been convinced by some indication of the processing required to find a single pixel.


I don't see how what I said was "wishy washy"...it was either right or wrong. But 3D graphics is a rather complex field, and I'm not sure I'm being as clear as I should, so let's see if I can make it simpler.

Forget zoom altogether...I'm apparently not expressing my view clearly there.

1) A point cloud requires vastly more memory than polygons. This is a simple fact, and not really subject for debate...3 points define a triangle while 1 triangle defines an infinite number of points. That's an infinite compression rate.
2) With today's video cards, converting polygons to pixels (points) is -not- the limiting factor. The bottleneck (what's slowing the process down) is moving the data to the video card in the first place, not processing.
3) So, a solution that trades increased memory requirements for reduced processing (like this video suggests) is exactly backwards, and provides no benefit in the real world. It would be like streamlining a car to reduce wind resistance when the engine can only reach speeds of 5mph...pointless.

And then there are other, more complex problems; polygons offer many other conveniences that points do not. Consider collision detection. Polygons are perfect for that: if something cuts through a polygon, you have a collision. Points are useless for such operations. If you have points A, B, and C, where's the surface you're supposed to collide with? In fact, that question is not even answerable, because a raw collection of points doesn't -have- that information.
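To make the collision point concrete, here's the sort of test a triangle gives you essentially for free: a standard Möller-Trumbore-style segment/triangle check, written by me for illustration only:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                              a.z * b.x - a.x * b.z,
                                              a.x * b.y - a.y * b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true if the segment from 'orig' along 'dir' (0 <= t <= 1)
// crosses triangle (v0, v1, v2).
bool segmentHitsTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (det > -1e-7f && det < 1e-7f) return false;   // parallel to the plane
    float inv = 1.0f / det;
    Vec3 t0 = sub(orig, v0);
    float u = dot(t0, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(t0, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = dot(e2, q) * inv;
    return t >= 0.0f && t <= 1.0f;                   // hit within the segment
}

int main() {
    // A segment dropping straight down through a floor triangle.
    bool hit = segmentHitsTriangle({0.2f, 0.2f, 1}, {0, 0, -2},
                                   {0, 0, 0}, {1, 0, 0}, {0, 1, 0});
    std::printf("collision: %s\n", hit ? "yes" : "no");
}
```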

SpeveO says...

I work at a game development studio, so you can imagine there was quite a bit of debate and doubt about this video and the technology. At the very least, what you have here is what seems to be a relatively efficient point cloud viewer.

I mean, I don't think anybody would ever expect a point cloud based technology to replace the gaming technology that has developed; it's more a case of technological amalgamation.

I think it's obvious that you would never use the point cloud as a raw source for your collision data, etc. I ultimately envision this kind of point cloud technology being used as a visual proxy, whose collision would still be defined by a low-res polygon mesh. I think its application would ultimately be restricted to static and non-deforming objects within a game, but this would require some kind of mixed rendering pipeline, and the technicalities of that are well beyond my area of expertise, so I don't know if that would even be possible.

I agree that selling this technology as some kind of holistic game development solution is total snake oil, but I still think that there could be some exciting potential use of point clouds in gaming tech in the future.

Raaagh says...

>> ^Stormsinger:
>> ^Raaagh:
Your points seem a bit wishy washy, and I don't find that they refute this presentation. But it does sound like bullshit.
It could be argued that polygons are the biggest limiting factor, especially if you are talking in the context of computation (which the presentation is). If there were infinite polygons, the workflow would be much simpler.
I don't get why you say point clouds don't scale as you zoom in; I'm sure you could do pretty awesome pruning, searching and grouping at different point resolutions. It does seem like each candidate point/group of points would have to be in volatile memory, with some groups/branches streamed off the HDD many times per frame. I just really need some numbers to believe the CPU-HDD bridge can handle that.
I would have been convinced by some indication of the processing required to find a single pixel.

I don't see how what I said was "wishy washy"...it was either right or wrong. But 3D graphics is a rather complex field, and I'm not sure I'm being as clear as I should, so let's see if I can make it simpler.
Forget zoom altogether...I'm apparently not expressing my view clearly there.
1) A point cloud requires vastly more memory than polygons. This is a simple fact, and not really subject for debate...3 points define a triangle while 1 triangle defines an infinite number of points. That's an infinite compression rate.
2) With today's video cards, converting polygons to pixels (points) is -not- the limiting factor. The bottleneck (what's slowing the process down) is moving the data to the video card in the first place, not processing.
3) So, a solution that trades increased memory requirements for reduced processing (like this video suggests) is exactly backwards, and provides no benefit in the real world. It would be like streamlining a car to reduce wind resistance when the engine can only reach speeds of 5mph...pointless.
And then there are other, more complex problems; polygons offer many other conveniences that points do not. Consider collision detection. Polygons are perfect for that: if something cuts through a polygon, you have a collision. Points are useless for such operations. If you have points A, B, and C, where's the surface you're supposed to collide with? In fact, that question is not even answerable, because a raw collection of points doesn't -have- that information.




The "wishy washy" comment was made about your arguments, as they are opinion, and mainly because you injected your own context into the debate.

1) Crucially, a line is an infinite number of points.
Possibly, when they talk of a "point cloud" they mean an effective point cloud once you do the calculations.
You don't need to keep "infinite" points in memory; you just need the functions to access them. It might be a series of functions defining a 2D/3D map; we've been doing this for years in software, and for hundreds if not thousands of years on paper (NURBS, splines, Bézier curves). See the curve sketch after point 3 below.

2-a) Converting polygons to pixels is a big computation step, specifically rendering the polygons.

2-b) Too many polygons would saturate the PCI bridge...if they hadn't first invented PCI Express, CrossFire and the next-gen PCIe standards. It's a roadmap coupling of processing unit and bridge that we have been seeing for the last 20 years. The new PCIe will have 8gb per second, which is EXPECTED to HANDLE the next-gen cards.

3) This is where we agree there could be a problem, but I am far less zealous about the impossibility of it (I still think it's bullshit).
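(The curve sketch promised under point 1: an illustrative cubic Bézier of my own, nothing to do with whatever Unlimited Detail actually does.)

```cpp
#include <cstdio>

struct Vec2 { float x, y; };

// Evaluate a cubic Bezier at parameter t in [0, 1] using the Bernstein form:
// four stored control points give you a point anywhere along the curve.
Vec2 cubicBezier(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t) {
    float s = 1.0f - t;
    float b0 = s * s * s, b1 = 3 * s * s * t, b2 = 3 * s * t * t, b3 = t * t * t;
    return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
             b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
}

int main() {
    Vec2 p0{0, 0}, p1{0, 1}, p2{1, 1}, p3{1, 0};   // four control points
    for (int i = 0; i <= 4; ++i) {                 // sample as finely as you like
        float t = i / 4.0f;
        Vec2 p = cubicBezier(p0, p1, p2, p3, t);
        std::printf("t=%.2f -> (%.3f, %.3f)\n", t, p.x, p.y);
    }
}
```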



RE: Collision detection
You seem to be straw-manning the issue; who mentioned that you would need to do away with polygon math in games? Collision detection in games does not occur with the rendered polygons (except in rare examples); it occurs with proxy objects (i.e. invisible, simple 3D shapes).

But just for the sake of shits n giggles: IF these guys have a magic algorithm for rendering, then it would be academic to create a low-res concave or convex proxy object. You could have points defining centres of clusters, etc.; it would be doable...IF there is a magic algorithm...
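Something like this is all a collision proxy amounts to (a sphere here for simplicity; the names are made up for illustration):

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

struct GameObject {
    // Visual data (polygon mesh or point cloud) would live elsewhere;
    // the collision test never looks at it.
    Vec3  proxyCentre;   // simple invisible shape used for physics
    float proxyRadius;
};

bool collides(const GameObject& a, const GameObject& b) {
    Vec3 d = { a.proxyCentre.x - b.proxyCentre.x,
               a.proxyCentre.y - b.proxyCentre.y,
               a.proxyCentre.z - b.proxyCentre.z };
    float r = a.proxyRadius + b.proxyRadius;
    return d.x * d.x + d.y * d.y + d.z * d.z <= r * r;
}

int main() {
    GameObject statue = { {0, 0, 0}, 1.0f };
    GameObject player = { {1.5f, 0, 0}, 0.6f };
    std::printf("colliding: %s\n", collides(statue, player) ? "yes" : "no");
}
```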

You invoke the "impossible" clause too much; if you are a software developer (like I am), I hope you start channelling more Kevin Garnett.


HOWEVER...I happen to agree that this is all too good to be true, and there doesn't seem to be any way around doing MASSIVE amounts of calculations. I hope to be proved wrong.

Also, if it's been languishing for years, it's almost certainly bullshit: game developers appropriate new technology with gusto, from what I have seen.

Stormsinger says...

>> ^Raaagh:
You invoke the "impossible" clause too much; if you are a software developer (like I am), I hope you start channelling more Kevin Garnett.

HOWEVER...I happen to agree that this is all too good to be true, and there doesn't seem to be any way around doing MASSIVE amounts of calculations. I hope to be proved wrong.
Also, if it's been languishing for years, it's almost certainly bullshit: game developers appropriate new technology with gusto, from what I have seen.


Nope, not a game developer any longer, and never a 3D programmer; as I said, I'm a database guy, I just worked alongside them. But I do clearly remember them complaining about the lack of memory bandwidth to the card, which really does imply that replacing polygons with points is going to make things worse rather than better.

I'll certainly agree with you on the last two points. If it sounds too good to be true...and all that.

Nebosuke says...

It doesn't seem that improbable to me. It seems like comparing a really big RAW logo to an SVG. Although this is a bad comparison, it seems reasonable. The last minute or so of the video is the important part.
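Rough numbers for the comparison I mean, picked purely for illustration:

```cpp
#include <cstdio>
#include <cstring>

int main() {
    // A "really big" uncompressed raster logo: 1024 x 1024 pixels, 3 bytes each.
    const long rasterBytes = 1024L * 1024L * 3L;

    // The same logo described as vector data: one circle, a handful of characters.
    const char* vectorLogo = "<circle cx='512' cy='512' r='400' fill='red'/>";

    std::printf("raster: %ld bytes, vector: %zu bytes\n",
                rasterBytes, std::strlen(vectorLogo));
}
```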
