When NVIDIA launched the GeForce 2 in 2000, Jen-Hsun Huang said it was a "major step" towards achieving "Pixar-level animation" in real time, only to be criticized by Pixar's Tom Duff:
"These guys just have no idea what goes into `Pixar-level animation.' (That's not quite fair, their engineers do, they come and visit all the time. But their managers and marketing monkeys haven't a clue, or possibly just think that you don't.)
`Pixar-level animation' runs about 8 hundred thousand times slower than real-time on our renderfarm cpus. (I'm guessing. There's about 1000 cpus in the renderfarm and I guess we could produce all the frames in TS2 in about 50 days of renderfarm time. That comes to 1.2 million cpu hours for a 1.5 hour movie. That lags real time by a factor of 800,000.)
Do you really believe that their toy is a million times faster than one of the cpus on our Ultra Sparc servers? What's the chance that we wouldn't put one of these babies on every desk in the building? They cost a couple of hundred bucks, right? Why hasn't NVIDIA tried to give us a carton of these things? -- think of the publicity milage they could get out of it!
Don't forget that the scene descriptions of TS2 frames average between 500MB and 1GB. The data rate required to read the data in real time is at least 96Gb/sec. Think your AGP port can do that? Think again. 96 Gb/sec means that if they clock data in at 250 MHz, they need a bus 384 bits wide. NBL!
At Moore's Law-like rates (a factor of 10 in 5 years), even if the hardware they have today is 80 times more powerful than what we use now, it will take them 20 years before they can do
the frames we do today in real time. And 20 years from now, Pixar won't be even remotely interested in TS2-level images, and I'll be retired, sitting on the front porch and picking my banjo, laughing at the same press release, recycled by NVIDIA's heirs and assigns. "
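Duff's back-of-envelope figures are internally consistent; here is a quick Python check, using only the numbers he quotes above:

```python
# Re-running Tom Duff's own estimates, using only the figures he quotes.
cpus = 1000                       # his guess for renderfarm size
render_days = 50                  # his guess for total renderfarm time on TS2
cpu_hours = cpus * render_days * 24
movie_hours = 1.5                 # TS2 running time
slowdown = cpu_hours / movie_hours
print(f"{cpu_hours:,} CPU-hours -> {slowdown:,.0f}x slower than real time")

# Bandwidth: 500 MB per frame at 24 fps, converted to gigabits per second.
gbits_per_sec = 500 * 24 * 8 / 1000
print(f"{gbits_per_sec:.0f} Gb/s")   # matches his "at least 96 Gb/sec"
```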
Many did not find this response a very warm one, and people started blogging about it. Here is one example: http://industrialarithmetic.blogspot.com/2009/10/real-time-toy-story-3d.html
Now, I ventured to do some calculations of my own. Feel free to correct me if I am wrong, as I am no expert in the computer industry.
Machines used to render Toy Story:
87 dual-processor and 30 quad-processor 100-MHz SPARCstation 20s
Total number of processors = 294
According to http://ftp.sunet.se/pub/benchmark/aburto/flops/flops_2.tbl
The SPARCstation 20 (single processor) ran SunOS 5.4; its HyperSPARC @ 100 MHz delivered 27.5066 MFLOPS
Theoretical maximum performance of the setup used by PIXAR
294 * 27.5066 = 8086.94 MFLOPS
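This multiplication is easy to check (assuming, as above, that every processor hits the single-CPU benchmark figure):

```python
# Peak renderfarm throughput: dual- and quad-processor SPARCstation 20s.
processors = 87 * 2 + 30 * 4      # 294 CPUs in total
mflops_per_cpu = 27.5066          # HyperSPARC @ 100 MHz (flops benchmark)
peak_mflops = processors * mflops_per_cpu
print(processors, round(peak_mflops, 2))   # 294 8086.94
```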
Movie was rendered at 1526x922 pixels using Stochastic Anti-Aliasing
Scan-line rendering was used, with shadow mapping for shadows (no ray tracing)
Movie Length ~ 75 minutes
Number of rendered frames = 110064
Movie frame rate ~ 24 frames per second (110064 frames / 75 minutes ≈ 24.5 fps)
Rendering time = 46 days
Total data sent to renderer = 34 Terabytes
Factor by which more power is needed for this to be real time
46 × 24 × 60 min (rendering time) / 75 min (movie length) = 66240 / 75 = 883.2
So required computational power = 883.2 * 8086.94 MFLOPS = ~ 7.14 TFLOPS
This is without considering all the network bottlenecks.
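The same estimate in code (network bottlenecks still ignored):

```python
# Speed-up needed to turn 46 rendering days into 75 minutes of real time,
# and the FLOPS budget that implies for the renderfarm estimated above.
render_minutes = 46 * 24 * 60
movie_minutes = 75
factor = render_minutes / movie_minutes
required_tflops = factor * 8086.94 / 1e6   # MFLOPS -> TFLOPS
print(round(factor, 1), round(required_tflops, 2))   # 883.2 7.14
```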
Theoretical performance of the GeForce GTX 480 ~ 1.5 TFLOPS
(not considering ATI solution because it is less programmable)
If 4 cards are placed in quad SLI (e.g. on an EVGA SLI Classified motherboard, which can actually handle up to 7 cards):
4*1.5 = 6 TFLOPS
Now, hardware rendering is much faster than software rendering, and there is less of a network bottleneck here.
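How close does the quad-SLI setup get to the ~7.14 TFLOPS target? A rough comparison, taking the quoted ~1.5 TFLOPS per card at face value:

```python
# Quad-SLI peak vs. the estimated real-time requirement.
sli_tflops = 4 * 1.5                        # quoted GTX 480 figure per card
required_tflops = 883.2 * 8086.94 / 1e6     # from the renderfarm estimate
print(round(100 * sli_tflops / required_tflops))   # ~84 (percent of target)
```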
Per-frame data of Toy Story = 34 × 1024 × 1024 MB (total data sent to renderer) / 110064 frames ≈ 323 MB
Per-frame data of Crysis ~ 200 MB
Per frame data of Crysis 2 =
Polygons per frame of Toy Story = 5-6 million
Polygons per frame of Crysis ~ 1.7 million (Nanosuit alone = 67,000)
Polygons in the Nanosuit of Crysis 2 ~ 1 million
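The per-frame data figure above can be reproduced directly (34 TB taken as 34 × 1024 × 1024 MB):

```python
# Average scene-description data per rendered Toy Story frame.
total_mb = 34 * 1024 * 1024    # 34 TB expressed in MB
frames = 110064
mb_per_frame = total_mb / frames
print(int(mb_per_frame))       # 323
```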
Texture Streaming can now allow for extremely detailed textures
Example: RAGE from id Software
Something similar is probably used in Crysis 2
Global illumination is used in Crysis 2, which is probably better than the lighting system in Toy Story
The only lacking feature is probably stochastic anti-aliasing
So Tom Duff was probably wrong in choosing Moore's Law for his calculations...