Remember the Titan
re:tutor's math
The ability to grab that much GPGPU power for that little money makes me worry for crypto keys everywhere, first. People have never been able to get supercomputing-level power for so little $$ before.
Hammers can be used for good, such as hitting nails to build shelter to help others (like the GPU-rendered scenes in Harry Potter and the Goblet of Fire), and for bad, such as hitting heads to exact punishment (like cracking crypto keys). Tools will be put to good use by the good and to bad use by the bad, but it's not the tool that chooses; it's the heart behind the hand that wields it.
But then I drool over the VFX madness that people like ILM will be able to produce with power like that.
I worked at ILM when the first GPU-rendered scenes were done... they did part of the fire for the Inferi cave scenes in Harry Potter and the Goblet of Fire with a few GPUs sitting in some workstations in a corner of the datacenter. By the time I left, a decent part of the datacenter was being repurposed to house PCI enclosures for GPU rendering arrays. I'm sure that trend has only continued since!
Prior to the release of the Titan, Otoy [ http://render.otoy.com ], which owns the Octane renderer brand, gave this metric: "Octane Render uses the untapped muscle of the modern GPU compared to traditional, CPU based engines. With current GPU technology, Octane Render can produce final images 10 to 50 times faster than CPU unbiased render engines, or even more with multiple GPUs (depending on the GPU(s) used). Octane Render provides a true WYSIWYG (What You See Is What You Get) rendering environment that allows the user to focus on creating stunning images without bouncing back and forth between a modeling view and then waiting for a render to complete. The viewport on the screen IS the final render. Any changes to the scene are instantly updated on the screen allowing the user to tweak any setting and know immediately how the result looks."

Now, with the Titan having been released, that multiple only grows; so I agree with you that what you saw at ILM will only increase. The exception: if Intel can gain traction with the Xeon Phi and drops the current price of the Phi by at least half (a 60% drop would make it about even with the Titan), then there might be a period of time when a mixture of Phis and Titans does ILM's bidding. That's the worst I'd hope for; the worst that could actually happen is set forth below.
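A quick back-of-envelope check on that "60% drop" figure. This sketch assumes a Xeon Phi 5110P list price of roughly $2,649 and a GTX Titan launch price of $999; both numbers are my own assumptions, not taken from the comments above:

```python
# Assumed list prices (my assumptions, not from the thread):
phi_price = 2649    # Xeon Phi 5110P, approx. launch list price in USD
titan_price = 999   # GeForce GTX Titan launch price in USD

# A 60% price cut on the Phi:
discounted_phi = phi_price * (1 - 0.60)

print(round(discounted_phi))            # about 1060
print(round(discounted_phi) - titan_price)  # within roughly $60 of the Titan
```

So under those assumed prices, a 60% cut does indeed land the Phi in the same ballpark as the Titan, which is all the claim requires.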
We're seeing the makings, the true beginning, of an across-the-board Card War (seemingly a World War) with Intel vs. Nvidia on the main front, and AMD vs. Nvidia as the third, weaker axis. Intel and AMD also have CPUs in their arsenals, but Nvidia has no CPUs. CUDA itself, and later the GTX 400 and 500 series, were the meaningful opening shots that got Intel interested in battling, alongside its long-term CPU foe AMD, against Nvidia. Next, Nvidia shot the GTX 600 series at AMD. Then Intel fired back at Nvidia with the Xeon Phi. Now Nvidia has fired against AMD with the Titan: "The technology that powers the world's fastest supercomputer is now redefining the PC gaming experience. Introducing GeForce® GTX TITAN. Bring the powerful NVIDIA® Kepler™ architecture technology that drives the Oak Ridge National Laboratory's Titan supercomputer to your next gaming experience." But that shot was also directed at Intel, because the Titan comes with the added bonus that it doesn't only do games (like the CUDA-hobbled GTX 600 series) but also handles other compute tasks with great might. While it's true that the Titan does not compute as precisely as the Tesla line for things that require more pinpoint exactness (Nvidia had to protect those assets for its higher-priced products), the Titan computes a whole lot faster than any GTX 500 or 600 series card and has 6 GB of onboard RAM to boot.

Nvidia seems to have developed the Titan weapon wisely to do battle with both of its foes (AMD has no similar two-pronged artillery because OpenCL development is stagnant, and the Xeon Phi is still in its infancy, though the Phi has the advantage of being easily and quickly adopted and adapted), which means this war will take full shape rapidly. We, the consumers of the cards, will win as this war is being fought, and more so as it intensifies; but I shudder to think what will happen if Intel prevails completely.
Competition brought us the Titan, but a total victory for Intel would likely bring only stagnation and higher prices. Intel's next weapon will likely be a tie-in between a GPU-oriented CPU, namely Haswell, and the Xeon Phi, for a one-two punch. Will Nvidia bend? I hope not. I hope Nvidia has the smarts to be making its next weapon of mass destruction even more compute-intensive and price-competitive, with its coding modularized and popularized so that it can be adopted and adapted more quickly and broadly. Maybe Nvidia needs to take courses in "Apple 101 - How to Come Back from Near Death" and "Apple 102 - Making Coding for Your Developers Dead Simple."