OK, I'll stop talking about this now because:

A) We have to wait anyway until we get numbers from Apple.
B) I have explained my theory well enough, and so far I still think it might happen (maybe not 100x, but in the high nineties, maybe a 99.8x speedup) :)

Doc
 
OK, I'll stop talking about this now because:

A) We have to wait anyway until we get numbers from Apple.
B) I have explained my theory well enough, and so far I still think it might happen (maybe not 100x, but in the high nineties, maybe a 99.8x speedup) :)

Doc

You seem to have trouble understanding the concept of multiplication.

Snow Leopard will not bring 100x faster computers. It will not even bring 20x faster computers. It will probably bring +20% faster computers and, on the unrealistically optimistic end, +100% faster computers.

So 2x at most.
 
A little knowledge can be a dangerous thing.

Encryption/decryption is a lot like graphics code. It doesn't look like it to an end user, but this is about how the processor handles the code. GPUs are massively parallel stream processors. They're very good at threading and manipulating data streams. Not much of that power is relevant to, say, iCal or Safari. Most applications do not deal with data streams, and their number of threads can be easily handled with a good scheduler and a multi-core CPU.

For brute-force password cracking, you do lots of data-stream manipulation and can use any number of threads. That's why they got a massive increase in performance. Most applications don't do much data-stream manipulation and have a small number of threads (1 for the UI, and 1 or 2 for background tasks).

Areas where you will see noticeable improvement:
- Encryption and Decryption
- Compressing and expanding data
- Compiling
- Photo/Video editing (depending on coding of filters)
- Simulation and scientific software
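
To make the brute-force point above concrete, here's a rough C sketch (not ElcomSoft's actual code, and the numbers are made up) of why password cracking splits up so easily: every candidate is tested independently, so the keyspace can simply be carved into slices, one per worker. try_password() is a hypothetical stand-in for the real hash-and-compare step.

/* Brute-force search split across worker threads; each slice is independent. */
#include <pthread.h>
#include <stdio.h>
#include <stdint.h>

#define NUM_WORKERS 4
#define KEYSPACE    1000000ULL               /* pretend keyspace: one million candidates */

/* Hypothetical stand-in for "hash this candidate and compare with the target". */
static int try_password(uint64_t candidate) { return candidate == 765432; }

struct slice { uint64_t start, end, hit; };

static void *worker(void *arg) {
    struct slice *s = arg;
    s->hit = KEYSPACE;                       /* sentinel: nothing found in this slice */
    for (uint64_t n = s->start; n < s->end; n++)
        if (try_password(n)) { s->hit = n; break; }   /* no coordination with other workers */
    return NULL;
}

int main(void) {
    pthread_t tid[NUM_WORKERS];
    struct slice slices[NUM_WORKERS];
    uint64_t chunk = KEYSPACE / NUM_WORKERS;

    for (int i = 0; i < NUM_WORKERS; i++) {
        slices[i].start = (uint64_t)i * chunk;
        slices[i].end   = (i == NUM_WORKERS - 1) ? KEYSPACE : slices[i].start + chunk;
        pthread_create(&tid[i], NULL, worker, &slices[i]);
    }
    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_join(tid[i], NULL);
        if (slices[i].hit != KEYSPACE)
            printf("worker %d found candidate %llu\n", i, (unsigned long long)slices[i].hit);
    }
    return 0;
}

A GPU does the same thing, just with hundreds of hardware lanes instead of four threads, which is where the big ElcomSoft-style numbers come from.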
 
OK, I'll stop talking about this now because:

Please do; you're not making any sense. I tried to go easy on you before, but it appears that doesn't work.

A) We have to wait anyway until we get numbers from Apple.
B) I have explained my theory well enough, and so far I still think it might happen (maybe not 100x, but in the high nineties, maybe a 99.8x speedup) :)

Doc

A) You really think Apple is going to come out with numbers of this magnitude? GPUs have been used before for offloading certain calculations, yet somehow you just noticed GPUs exist and think you're going to offload all instructions to them.

B) All you have done is throw some GPU gibberish together and say it is going to do wonders.

Read this carefully:

No matter what Apple (or Microsoft, Sun, HP, Linux, or <insert your favorite OS manufacturer here>) does, they are not going to be able to offload all instructions to the GPU and be way ahead of the competitors. It's just not going to happen.
 
You seem to have trouble understanding the concept of multiplication.

Snow Leopard will not bring 100x faster computers. It will not even bring 20x faster computers. It will probably bring +20% faster computers and, on the unrealistically optimistic end, +100% faster computers.

So 2x at most.

Please tell me where you are pulling those numbers from?

Seriously, don't give the OP crap when you yourself are just pulling numbers out of the air. :rolleyes:
 
My "crap numbers" are based on what a respectably high tec company ElcomSoft achieved in GPU acceleration.

Yeah, in certain calculations. Why can't you accept that? They're not using a GPU to run an OS. It's not the company's numbers that are the problem; it's what you're interpreting those numbers to mean.
 
Or will Snow Leopard dramatically improve spell checking performance also?

Perhaps ;)

AppleInsider MDN

The link raises the point that even WPA2 may be at risk (WPA itself having recently been shown to be weakened). Harnessing GPU power can, as mentioned, give some incredible speed enhancements in specific areas that fit what the GPU can do.

As Saladinos mentions:

Areas where you will see noticeable improvement:
- Encryption and Decryption
- Compressing and expanding data
- Compiling
- Photo/Video editing (depending on coding of filters)
- Simulation and scientific software

To be honest, that's a pretty decent crowd, considering Apple is big in education and graphics.

The technology helps its Pro crowd a lot. Consumers will get the benefit of the graphics for games, plus the improvements that come from getting as much concurrency out of their CPUs as possible.

Very cost-effective, and upgradeable. And also, to an extent, parallelizable (is that a word? hehe). Whack 1-3 high-end graphics cards in there for starters.

As others have mentioned, a lot of the potential improvement has been known about and talked about for a while. The title is a little misleading; Tallest Skil is right - it's more that they'll be seeing a 1-2x improvement across the board, with big improvements in certain areas. E.g. see current Nehalem benchmarks for a rough idea of the improvement the upcoming Nehalem Macs will get compared to the current ones.
 
Is there a replacement for WPA2 coming anytime soon? WPA3 or whatever it might be called?

It is WPA-TKIP which has been cracked - but not completely - and there are replacements such as AES and CCMP which are more secure.

But let's also be realistic - are they really going to hack your wireless network? Really? Anything secure will be encrypted via SSL, so it isn't as though they can eavesdrop even if they had the key; worst case, they'll leech your connection.
 
password breaking vs. pixel pushing....

I would hazard a guess that the reason it is possible to increase password-cracking performance by such a margin is that it is a HIGHLY parallel task. Think about it - there are however many millions of possible passwords, and each one needs testing/calculating. When you think about what GPUs were specifically designed for - putting millions of pixels on a screen perhaps 100 times a second - you realise there is plenty of parallelism inherent in their design. For anything more linear, such as compiling an application, the benefit is likely to be minimal.

I'm no expert in CPUs/GPUs or anything of the sort, but this seems logical to me. There is no way we'll see a 100x performance increase on everything across the board; as others have mentioned, it's just not going to happen. Just look here to see what NVIDIA claims the performance increases are for various applications people have made using their CUDA platform (their GPGPU effort), which I believe is similar to OpenCL.

http://www.nvidia.com/object/cuda_home.html#
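
To illustrate that linear-vs-parallel distinction, here's a small hypothetical C sketch. In the first loop every iteration touches only its own element, so in principle a million of them could run at once; in the second, each step needs the previous step's result, so piling on processors doesn't simply help.

#include <stddef.h>

/* Each output element depends only on its own input element: the iterations
   are independent, which is exactly what a GPU's many processors want. */
void brighten(const float *in, float *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * 1.1f;
}

/* Each step needs the result of the previous step, so this chain can't simply
   be split across processors the same way. */
float running_state(const float *in, size_t n) {
    float state = 0.0f;
    for (size_t i = 0; i < n; i++)
        state = state * 0.9f + in[i];        /* depends on the previous iteration */
    return state;
}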
 
Hey, speaking of compression and encryption - maybe this will make FileVault usable! ;)

(I actually have never tried using it myself, just hear stories of laggier performance, etc.)
 
The only applications that will run faster are those that today run the activity meter to the top. How many times have you noticed the CPU(s) "maxed out"? If you are editing HD video, then quite a few times maybe, but if you are doing e-mail, not at all.

The classic example of this is the DVD player. With 10.6, will you be able to watch a 180-minute movie in only 18 minutes? I hope not. What's happened is that the CPU is already fast enough for most normal tasks.

What GPUs are good at are things like applying the same transformation matrix to a million vectors, or adding 5 to each of a million integers. This will not make your email go faster. But what it will do is enable new classes of tasks that were impractical before. For example, we have touch pads now in the MacBooks that can read gestures, but with enough "compute power" and a small web cam you could do the same gestures in the air, no pad. Voice input becomes reasonable, and so do other tasks like searching an iPhoto library to find a face that matches.

I don't expect this to help existing apps much at all; most just don't need it. It will enable new apps. It has always been that way. Look back over the last 60 years of computer history -- as compute power expands we put computers to more uses; we don't just do the old jobs faster.
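
For what it's worth, here is roughly what ChrisA's "same transformation matrix applied to a million vectors" example looks like as plain serial C (the types and names here are just illustrative). The point is only that no iteration reads any other iteration's result, which is exactly the shape of work that hundreds of GPU stream processors can chew through at once.

#include <stddef.h>

typedef struct { float x, y, z, w; } vec4;
typedef struct { float m[4][4]; } mat4;

/* Apply the same 4x4 matrix to every vector; every iteration is independent. */
void transform_all(const mat4 *m, const vec4 *in, vec4 *out, size_t n) {
    for (size_t i = 0; i < n; i++) {
        const vec4 v = in[i];
        out[i].x = m->m[0][0]*v.x + m->m[0][1]*v.y + m->m[0][2]*v.z + m->m[0][3]*v.w;
        out[i].y = m->m[1][0]*v.x + m->m[1][1]*v.y + m->m[1][2]*v.z + m->m[1][3]*v.w;
        out[i].z = m->m[2][0]*v.x + m->m[2][1]*v.y + m->m[2][2]*v.z + m->m[2][3]*v.w;
        out[i].w = m->m[3][0]*v.x + m->m[3][1]*v.y + m->m[3][2]*v.z + m->m[3][3]*v.w;
    }
}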
 
The only applications that will run faster are those that today run the activity meter to the top. How many times have you noticed the CPU(s) "maxed out"? If you are editing HD video, then quite a few times maybe, but if you are doing e-mail, not at all.

The classic example of this is the DVD player. With 10.6, will you be able to watch a 180-minute movie in only 18 minutes? I hope not. What's happened is that the CPU is already fast enough for most normal tasks.

Wait - Are you sure you get how this would work?

It's not that the CPU is getting maxed out and needs relief - it's that things that took 20 seconds to load in the past will now take 15. Things that didn't scroll super smoothly in the past will now scroll smoother. The slight hiccup you get when you use Exposé with a movie, a couple of web browsers, and some work open won't be there anymore. And so forth and so on.

And I'm all for that!
 
I don't believe any of this crap claiming 100x faster. I know that Snow Leopard will be a great upgrade to Leopard, but it is ridiculous how far the fanboyism has gone. I love my iMac and OS X, but I prefer to live in the real world rather than dreaming like these fanboys. 100x my ass :D
 
Everybody seems to forget the fact that almost all regular OS/application instructions work on data that is read in from memory or from disk. Here the bottleneck is the I/O from memory/disk, which is not going to speed up with the use of the GPU. No matter how fast your CPU is, even a 100GHz CPU will not make a huge difference to applications like Finder, Safari (web pages are stored in the memory cache; not much processing is needed after a page is loaded), Mail (network lag is much larger than disk access delays), compilers (seeking data from disk), etc. Even applications like Maya or video editors need to work on data read from disk/memory, so the processor (whether CPU or GPU) will spend most of its time waiting for the data to process. No matter how fast you make the processor by offloading work to the GPU, the data to be manipulated cannot all be stored in the small processor caches, so the processors end up waiting for data.
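
A quick back-of-the-envelope sketch of that argument, with completely made-up numbers: if the data has to come off the disk first, making the crunching 10x faster barely moves the total time.

#include <stdio.h>

int main(void) {
    double data_gb       = 4.0;    /* hypothetical: 4 GB of data to process        */
    double disk_gb_per_s = 0.05;   /* hypothetical: ~50 MB/s sustained disk reads  */
    double cpu_gb_per_s  = 0.2;    /* hypothetical CPU processing throughput       */
    double gpu_gb_per_s  = 2.0;    /* hypothetical GPU processing throughput       */

    /* Read everything, then crunch it (no overlap, to keep the sketch simple). */
    double t_cpu = data_gb / disk_gb_per_s + data_gb / cpu_gb_per_s;
    double t_gpu = data_gb / disk_gb_per_s + data_gb / gpu_gb_per_s;

    printf("CPU path: %.0f s, GPU path: %.0f s (10x faster crunching, only %.2fx overall)\n",
           t_cpu, t_gpu, t_cpu / t_gpu);
    return 0;
}

With those numbers the disk read alone takes 80 seconds, so the 10x faster GPU turns a 100-second job into an 82-second one - about 1.2x, not 10x.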
 
Why not, if you can replace supercomputers with PlayStations...

Someone already mentioned the key element in Snow Leopard, OpenCL:
Another powerful Snow Leopard technology, OpenCL (Open Computing Language), makes it possible for developers to efficiently tap the vast gigaflops of computing power currently locked up in the graphics processing unit (GPU). With GPUs approaching processing speeds of a trillion operations per second, they’re capable of considerably more than just drawing pictures. OpenCL takes that power and redirects it for general-purpose computing.
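
For the curious, here's a minimal sketch of what driving the GPU through OpenCL looks like, based on the published spec - the framework Apple actually ships in Snow Leopard may differ in the details, and error checking is omitted for brevity. It just implements the "add 5 to a million integers" example from earlier in the thread.

#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

#define N (1 << 20)   /* about a million integers */

/* Kernel source: every work-item adds 5 to one element - the classic case of
   the same tiny operation applied to a huge stream of data. */
static const char *kernel_src =
    "__kernel void add_five(__global int *data) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] += 5;\n"
    "}\n";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    cl_program program = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, &err);
    clBuildProgram(program, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(program, "add_five", &err);

    int *host = malloc(N * sizeof(int));
    for (int i = 0; i < N; i++) host[i] = i;

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                N * sizeof(int), host, &err);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    size_t global = N;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, N * sizeof(int), host, 0, NULL, NULL);

    printf("host[0] = %d (expect 5)\n", host[0]);

    clReleaseMemObject(buf);
    clReleaseKernel(kernel);
    clReleaseProgram(program);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    free(host);
    return 0;
}

The kernel is trivial; the interesting part is that the same host code fans the work out over however many stream processors the GPU happens to have.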
 
Encryption and decryption are actually very similar to the sorts of graphical tasks GPUs traditionally do. Modern GPUs are not set apart from CPUs because they can do "graphical" sorts of calculations. They are set apart from CPUs because they have a ton of parallel "stream" processors (hundreds of them) which can perform operations that can be segmented very easily. This is where vector calculations (graphics calculations which are essentially just physics problems) and encryption/decryption (non-graphics calculations) are similar. Each sort of calculation can be broken down into smaller calculations which can then be performed on different stream processors at the same time.

This is a vast oversimplification, but suppose you have a calculation which looks like this: [(A+B), (C+D), (E+F)]. So it's one problem composed of three problems. If you only had a single processing element, you could only perform the calculations in some sort of order like this:

Calculation 1: A+B
Result
Calculation 2: C+D
Result
Calculation 3: E+F
Result

If you have more than one processing element - let's say you have 3 processing elements - then you could divide the workload and perform each of the 3 calculations at the same time instead of one after the other. GPUs these days have hundreds of stream processors, and each one is suited to performing simple sorts of operations. So, for any problem, if it can be broken down and split up into many smaller problems, you can perform them all at the same time. This is true for graphical problems as much as it's true for non-graphical problems that satisfy the same conditions.
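
As a toy illustration (plain pthreads rather than a GPU), the three additions really can be handed to three workers and run at the same time, because no job needs another job's result:

#include <pthread.h>
#include <stdio.h>

struct job { double a, b, result; };

static void *add_pair(void *arg) {
    struct job *j = arg;
    j->result = j->a + j->b;     /* no job depends on any other job's result */
    return NULL;
}

int main(void) {
    struct job jobs[3] = { {1, 2, 0}, {3, 4, 0}, {5, 6, 0} };
    pthread_t tid[3];

    for (int i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, add_pair, &jobs[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);

    printf("results: %g %g %g\n", jobs[0].result, jobs[1].result, jobs[2].result);
    return 0;
}

On a GPU you'd have hundreds of such lanes instead of three threads, and each piece of work would usually be far bigger than a single addition.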

So the fact that a non-graphics application gets a huge speed-up is only evidence that the calculations of that application are easily segmented and broken down into smaller calculations which can be performed in parallel on the many stream processors of the GPU. It's not evidence that applications whose calculations cannot be broken down, so to speak, will get a speed boost. In fact, applications with calculations that are not sufficiently similar to "graphics" sorts of calculations will not get any speed boost at all.

Everybody seems to forget the fact that almost all regular OS/application instructions work on data that is read in from memory or from disk. Here the bottleneck is the I/O from memory/disk, which is not going to speed up with the use of the GPU. No matter how fast your CPU is, even a 100GHz CPU will not make a huge difference to applications like Finder, Safari (web pages are stored in the memory cache; not much processing is needed after a page is loaded), Mail (network lag is much larger than disk access delays), compilers (seeking data from disk), etc. Even applications like Maya or video editors need to work on data read from disk/memory, so the processor (whether CPU or GPU) will spend most of its time waiting for the data to process. No matter how fast you make the processor by offloading work to the GPU, the data to be manipulated cannot all be stored in the small processor caches, so the processors end up waiting for data.

Good point. This has already been shown to be a limiting factor on the potential speed gains from offloading tasks to the GPU. At a developer conference it was shown that in some video encoding scenarios, where video was being converted from one format to another, the GPU was outputting frames faster than modern I/O systems were capable of transferring and storing the information. So, yes, GPU acceleration is going to speed things up a lot. But the speed boost is limited by how fast our I/O systems are, and that's something that, it seems, engineers have been ignoring.
 
Wait - Are you sure you get how this would work?

It's not that the CPU is getting maxed out and needs relief - it's that things that took 20 seconds to load in the past will now take 15. Things that didn't scroll super smoothly in the past will now scroll smoother. The slight hiccup you get when you use Exposé with a movie, a couple of web browsers, and some work open won't be there anymore. And so forth and so on.

And I'm all for that!

Consider an app that takes 20 seconds to load: if the CPU only hits, for example, 5% during loading, getting a vastly faster processor isn't going to help load times. There's a bottleneck elsewhere.

What ChrisA said is accurate. It is only going to help when the CPU is the bottleneck and the instructions can be offloaded.
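
Putting rough numbers on that (this is just Amdahl's law applied to the hypothetical 5% figure above): even if the CPU-bound part became infinitely fast, the load could only get about 5% quicker overall.

#include <stdio.h>

/* Amdahl's law: if only a fraction p of the total time can be sped up by a
   factor s, the overall speedup is 1 / ((1 - p) + p / s). */
static double amdahl(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}

int main(void) {
    double p = 0.05;   /* hypothetical: 5% of the load time is CPU-bound */

    printf("CPU work 10x faster:      %.3fx overall\n", amdahl(p, 10.0));
    printf("CPU work infinitely fast: %.3fx overall\n", 1.0 / (1.0 - p));
    return 0;
}

That prints roughly 1.047x and 1.053x - which is why a bottleneck elsewhere swamps any processor improvement.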
 
Consider an app that takes 20 seconds to load: if the CPU only hits, for example, 5% during loading, getting a vastly faster processor isn't going to help load times. There's a bottleneck elsewhere.

What ChrisA said is accurate. It is only going to help when the CPU is the bottleneck and the instructions can be offloaded.

But won't the system be able to do things in parallel better?

So instead of taking 20 seconds doing things linearly at a 5% load, it can get it done in 10 by doing two things at the same time? (Just for example - not real numbers or times or anything.)

Truthfully (and rather obviously), I'm not a GPU or OS engineer or anything ... so I'm just speculating. But I do believe that, as end users, we will see some speed and performance benefits - not just when we're crunching the CPU at 100%.

I dunno .... I guess we'll have to wait and see what happens when Snow Leopard gets here.
 
cellocello: belvdr's point was that the app is waiting for your hard drive. No matter how much you're doing in parallel, if you're loading from disk you'll almost certainly be limited by that.

More generally speaking, GPU processing really isn't something most apps should even be considering. Unless you have a significant chunk of code that meets *all* of these requirements, it isn't going to help:

1) Few serial dependencies
2) Few/no branches
3) Long enough running time to be worth the overhead of shipping it all over PCI-E
4) No disk or network access

Most code in most apps is nowhere close to that.
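
To put a rough, entirely hypothetical number on requirement 3 above: offloading only wins when the compute time saved outweighs the round trip over PCI-E.

#include <stdio.h>

int main(void) {
    double data_gb       = 0.5;    /* hypothetical working set: 512 MB                 */
    double pcie_gb_per_s = 4.0;    /* hypothetical effective PCI-E bandwidth           */
    double t_cpu         = 2.0;    /* hypothetical time to do the work on the CPU (s)  */
    double t_gpu_compute = 0.1;    /* hypothetical time to do the same work on the GPU */

    double t_transfer = 2.0 * data_gb / pcie_gb_per_s;   /* copy over and copy back */
    double t_offload  = t_transfer + t_gpu_compute;

    printf("CPU: %.2f s, GPU incl. transfers: %.2f s -> offload %s\n",
           t_cpu, t_offload, t_offload < t_cpu ? "wins" : "loses");
    return 0;
}

Shrink the amount of work (or grow the data) and the transfer overhead quickly eats the gain, which is exactly why most small, branchy, I/O-bound app code doesn't qualify.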
 