the real anonymous sam wrote: @Praxis: Please bear w/ me here. I actually went to the link you provided and read everything, and you are WRONG. Nvidia was describing how the old generic cube-mapping technique is done; what I was referring to was “panoramically-composited depth-mapped cube-mapping”. That is in no way what Nvidia is describing there. Nintendo filed that patent long after 1999, and it is a “patented” technique, which means, and please for the love of god remember this, only Nintendo can use that technique the way it’s described in that patent. I rest my case that you have just been gathering needless info from many sources, putting it together, and claiming you know everything. If you actually think that G5s can’t be implemented into Alienware PCs, you’re simply an idiot. So go back and read Nintendo’s patent on “PANORAMICALLY-COMPOSITED DEPTH-MAPPED CUBE-MAPPING”, then relate that patent to the crap Nvidia link you related it to, and then you tell me who doesn’t have their story straight. And yes, you did say it was fake, moron. I quote, you said “cube-mapping crap-already proven false”. Wrong again. Take care.
S
Praxis wrote: I see; I thought by cube mapping you were referring to the industry's cube-mapping standard.
So yes, I apologize for misunderstanding you.
"So go back and read Nintendo’s patent on “PANORAMICALLY-COMPOSITED DEPTH-MAPPED CUBE-MAPPING”"
Alright then. I looked it up and found the patent you are referring to, but it is still not what you are describing. And I had several programmers and hardware enthusiasts assist me in reading through it.
"About cubemaps: Cubemaps are textures consisting of six textures from all six directions that the object can be seen from (right, front, left, back, up, down, though not necessarily in that order). They are useful for simulating reflections, or skies and environments if the camera is inside the object with the cubemap. It's used quite often to give a shiny feel, that looks better than the phong shading otherwise used for that, to objects in games. It's not a new "graphic gimmick".
That which the patent describes might already be possible to do, using some form of shading technique like normal mapping in addition to the cubemap. I figure it's supposed to look like a slightly more advanced layered panoramic view, so those parts close to the camera will move slightly differently in respect to those further back. It certainly wouldn't be "revolutionary" in any way that I can think of. If it's going to use actual depth maps in a game with six degrees of freedom, those maps are going to be quite huge. Well, unless they are rendering the depthmaps in real-time, I suppose, but what would be the point of doing that. You might just as well use high-res normal textures. "
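For reference, this is roughly what setting up an ordinary, industry-standard cube map looks like in code. A minimal OpenGL sketch of my own (assuming OpenGL 1.3+ headers), nothing Nintendo-specific about it:
Code:
#include <GL/gl.h>
#include <vector>

// Build a standard cube map texture from six pre-made face images.
// "faces" holds six RGBA images (+X, -X, +Y, -Y, +Z, -Z), each size x size.
GLuint makeCubeMap(const std::vector<const unsigned char*>& faces, int size)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_CUBE_MAP, tex);

    // The six face targets are consecutive enums, so we can just add i.
    for (int i = 0; i < 6; ++i) {
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA,
                     size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, faces[i]);
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
Sampling that texture with a reflection vector at render time is what gives objects the shiny look mentioned above.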
To quote the actual PATENT:
"Video game play rendered using a panoramic view of a cube map style rendering uses an associated depth map to supply three-dimensionality to the pre-rendered scene. The resulting panoramic rendering may be indistinguishable from rendering the original scene in real-time except that the background is of pre-rendered quality."
This is pretty simple to understand. It's, like it says, a 3D panoramic view.
Another quote, from the actual patent:
"13. A video game playing system comprising: means for loading a pre-determined environment-mapped image having multiple planar projected images and associated multiple depth map images; means for at least in part in response to real-time interactive user input, compositing at least one additional object into said mapped images, said compositing using said depth map to selectively render at least portions of said object into cube mapped image to provide a composited mapped image; and means for panoramically rendering said composited mapped image using a desired viewing angle and frustum to provide interactive video game play."
Note the last bit: panoramically rendering said composited mapped image using a desired viewing angle.
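The compositing step it describes boils down to an ordinary depth test against the stored depth map. Here is a rough sketch of the idea in plain C++ (my own illustration of the concept, not code from the patent; the struct and function names are made up):
Code:
#include <vector>

// One pre-rendered face: a colour image plus its matching depth map.
struct PrerenderedFace {
    std::vector<unsigned int> color;  // width*height pixels, rendered offline
    std::vector<float>        depth;  // distance from the viewpoint per pixel
    int width, height;
};

// Composite one fragment of a real-time object into the pre-rendered face.
// The stored depth decides whether the object is in front of or behind
// whatever the pre-rendered scene contains at that pixel.
void compositeFragment(PrerenderedFace& face, int x, int y,
                       unsigned int objColor, float objDepth)
{
    int idx = y * face.width + x;
    if (objDepth < face.depth[idx]) {   // object is closer than the background
        face.color[idx] = objColor;
        face.depth[idx] = objDepth;
    }
    // otherwise the pre-rendered pixel hides the object, so nothing changes
}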
This is for a 3D panoramic image with depth added. This has been done in the past without the depth. Want an example?
Zelda 64: Ocarina of Time. Remember the Marketplace? Well, this patent does something similar: a pre-rendered panorama like the Marketplace, with a fixed viewpoint in the middle and a pre-rendered environment around it. The difference is that it has depth, so objects stick out of it as if the scene were rendered in real time.
It allows normal objects to be put into this pre-rendered 3D panorama.
This is great in certain situations, but is not going to be used in most games. I wouldn't be surprised if this is for the next Zelda, or for a Revolution Zelda game. Imagine walking around in an area that looks almost lifelike, the only breaks in the illusion being the objects and characters...
But only in limited areas. Just like the Marketplace was an area of its own in Zelda 64, MOST of the game is rendered normally. This cube-mapping patent is just a pre-rendered 3D panorama with depth. It's cool, but it's not going to make a big difference. And it's certainly not going to let the Revolution produce far superior visuals to the opposition's in every game.
The applications are very limited. It'll be great in some areas though. But like it says, it's from a fixed viewpoint.
And...wow...
Quote:
"If you actually think that G5’s can’t be implemented into Alienware PC’s you’re simply an idiot."
Amazing. You have reached unsurpassed levels of ignorance, far beyond comprehension. Congratulations.
I'm stunned that someone can possibly believe that an Alienware PC can have an unreleased dual core G5 processor with which it is totally incompatible.
And again, I ask you. What operating system are you running on your Alienware?
@ Norikage:
Yep, your source is correct that the G5 will not fit in a laptop (the processor itself is simply too large), but he's talking about a Mac laptop (Macs use PowerPC processors). It's even more incomprehensible in an Alienware laptop, as a PowerPC processor is incompatible with EVERYTHING Alienware sells.
If Sam had a computer with a G5, he'd have a Mac. If he had a laptop with a G5, he'd have some top secret IBM/Apple prototype that hasn't been released. Not an Alienware.
Praxis wrote: To add to my previous post: Sam, this is what you claim cube-mapping is.
Quote:
"The patent Nintendo filed calls for the games graphic designers to pre-render the entire game environment of the whole game along with the NPC’s pretty much everything except the main interactive character/characters and basically converts it to real-time gameplay with no graphical fidelity loss whatsoever and allows a completely dynamic camera. The word “cube-mapping” stands for basically converting pre-rendered to real-time in an instant. This technique is actually dedicated to the hardware along with many other techniques that aid in this being accomplished."
Alright, I'm going to quickly compare YOUR determination of cube-mapping to the actual patent.
You said:
"The patent Nintendo filed calls for the games graphic designers to pre-render the entire game environment of the whole game along with the NPC’s pretty much everything except the main interactive character/characters"
Patent says:
"loading a pre-determined environment-mapped image having multiple planar projected images and associated multiple depth map images"
This is for the environment and the environment ONLY. No NPC's or any of that stuff. The only thing that is cube mapped is the environment.
You said:
"and basically converts it to real-time gameplay with no graphical fidelity loss whatsoever"
It doesn't "convert" objects from pre-rendered to real-time; it takes the pre-rendered backgrounds and renders it. This is like putting a pre-rendered background like every game has (look at the sky in any videogame. The sky image was pre-rendered, and that pre-rendered texture was slapped on and is displayed in realtime), only it's panoramic (like the Zelda 64 Marketplace) and has depth.
It's a nitpick on words, but a significant nitpick as "converting" from pre-rendered to real-time makes no sense from the viewpoint of a modeller. You can render any pre-rendered image in real time if you slap it on a wall or something; a pre-rendered image is just a picture once its been rendered.
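To show how mundane "displaying a pre-rendered image in real time" is, here's what drawing a pre-rendered backdrop looks like with old fixed-function OpenGL. A sketch of my own, assuming identity modelview/projection matrices so the quad fills the screen:
Code:
#include <GL/gl.h>

// Draw a pre-rendered background image behind everything else.
// No "conversion" involved -- the picture is just a texture like any other.
void drawPrerenderedBackground(GLuint backgroundTex)
{
    glDepthMask(GL_FALSE);                  // don't let the backdrop write depth
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, backgroundTex);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glDepthMask(GL_TRUE);
}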
You said:
"and allows a completely dynamic camera. "
The patent says:
"panoramically rendering said composited mapped image using a desired viewing angle and frustum"
In fact, it says this twice; once in the abstract and once in point 13.
Note the "desired viewing angle" part. The programmer sets the viewing angle.
You said:
"The word “cube-mapping” stands for basically converting pre-rendered to real-time in an instant."
The patent says:
"A video game playing method comprising: loading a pre-determined environment-mapped image having multiple planar projected images and associated multiple depth map images; at least in part in response to real-time interactive user input, compositing at least one additional object into said mapped images, said compositing using said depth map to selectively render at least portions of said object into said mapped image to provide a composited mapped image; and panoramically rendering said composited mapped image using a desired viewing angle and frustum to provide interactive video game play."
That's what cube-mapping is.
Furthermore...
The patent also says:
"at least one pre-rendered cube map;"
and
"The storage medium of claim 10 wherein said pre-rendered cube map comprises six images as if looking through faces of a cube with the viewpoint at the center of the cube. "
THAT is what a cube-map is. It's called cube-mapping because it uses cube-maps, six pre-rendered images arranged as the faces of a cube with the viewpoint at the center, and this patent adds depth maps to those images.
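If you want to picture the data involved, it's really just six colour images plus six depth images. This is a guess at a layout on my part; the patent doesn't spell out any particular data structure:
Code:
#include <vector>

// A depth-mapped cube map: six pre-rendered colour faces, each paired
// with a depth map that gives the panorama its depth.
struct DepthMappedCubeMap {
    std::vector<unsigned int> faceColor[6]; // right, left, up, down, front, back
    std::vector<float>        faceDepth[6]; // matching per-pixel depths
    int faceSize;                           // each face is faceSize x faceSize
};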
I think the patent contradicts what you say directly.
News flash! Anyone who thinks G5s cannot be used in Alienwares is an idiot!
I guess we're all a bunch of idiots, then. Too bad.