
Hattig

macrumors 65816
Jan 3, 2003
1,457
92
London, UK
Someone in that article is using a misnomer. How is it that there are "hundreds of microprocessor cores" waiting to do my bidding, yet my processor is a "dual-core" one? That makes absolutely no sense. Part of me thinks someone means "transistors" or something else.

Modern graphics cards do have hundreds of "cores", but they're nothing like CPU cores. However, for what the GPU cores are good at, they make the CPU cores look anaemic. Think of a single GPU core as being like a single G4 AltiVec unit, give or take a few things.
 

iMacmatician

macrumors 601
Jul 20, 2008
4,249
55
Intel's Larrabee will come out eventually, but it might be 2010 or 2011 before its technology gets integrated into a chipset.
It's slated for a late 2009 release. Don't know about chipsets though.

AMD's current HD 4870 gets 1.2 TFLOPS, today, at a quite affordable price. Quite a lot of power consumption too, but a lot less than Larrabee is rumoured to consume.
I should have made this clear in my original post but...

The difference is that Larrabee's 960 gigaFLOPS is double-precision (used in CPUs), while the HD 4870's 1.2 teraFLOPS is single-precision. Regular GPUs take a big hit on double-precision. In the HD 4870's case, it's 1/5 the SP speed (for the GTX 280, it's 1/12). In other words, 240 double-precision gigaFLOPS.

Someone in that article is using a misnomer. How is it that there are "hundreds of microprocessor cores" waiting to do my bidding, yet my processor is a "dual-core" one? That makes absolutely no sense. Part of me thinks someone means "transistors" or something else.
That reminds me of this.

And yes, I'm going to keep this crusade up until everyone ridicules anyone who refers to a single ALU as a processor or a core. Either you count a single-bit XOR as a "processor/core", or you count something that actually is a processor as a processor.
 

macduke

macrumors G5
Jun 27, 2007
13,145
19,701
Ok...say they release iPhone software 3.0 next summer after the release of Snow Leopard. How could the iPhone benefit? Does it have a video chip that could even take advantage of this technology? Or would there have to be a new version of the iPhone? Oh geeze...let the rumors begin!!
 

Riemann Zeta

macrumors 6502a
Feb 12, 2008
661
0
Someone in that article is using a misnomer. How is it that there are "hundreds of microprocessor cores" waiting to do my bidding, yet my processor is a "dual-core" one? That makes absolutely no sense. Part of me thinks someone means "transistors" or something else.
The "cores" in an Intel dual-core CPU are full featured general purpose CPUs in and of themselves--they have multiple out-of-order integral instruction units, floating point units and vector units, as well as much of "other stuff" for memory management, bus interfaces, etc... A GPU "micro-core" aka a "shader" is basically just a single vector unit; its purpose is to do linear algebra really quickly--it takes in an N-dimensional vector (which is composed the coefficients of a linear equation, e.g. Ax1 + Bx2 + Cx3 + ...) and does a linear transformation on that vector. This is great for calculating where things are and how things move in 3D, but not every type of OS code benefits from parallel vector processing. CPUs can easily perform other non-vectorized functions while GPUs can't, so it is far easier for programmers to write general purpose code for CPUs.
 

commander.data

macrumors 65816
Nov 10, 2006
1,058
187
True - integrated GPUs aren't that powerful to begin with. However, Intel's QuickPath Interconnect technology will reduce the penalty integrated graphics must pay to access system RAM - this will make a performance difference, but how big remains to be seen.
That doesn't make any sense. QuickPath increases the latency for IGP memory access. The reason is that when the memory controller is integrated into the chipset northbridge where the IGP is located, the IGP gets direct memory access. But when the memory controller is located off-chip in the processor, the IGP has to make a request from the northbridge to the CPU's IMC over QuickPath, which will then send the information back to the northbridge and the IGP. No matter how efficient QuickPath is, that is still an extra hop and will increase memory access latency for the IGP.
 

emotion

macrumors 68040
Mar 29, 2004
3,186
3
Manchester, UK
Just when NVidia's CUDA is shaping up to be the industry's mammoth paradigm. Interesting. Widespread adoption will depend on what Intel do for Larrabee then...
 

diamond.g

macrumors G4
Mar 20, 2007
11,119
2,448
OBX
Just how much extra can they squeeze from the GPUs?

Doesn't just about everything (Quartz 2D, Core Image, Core Video) use the GPU already?
If you think that stuff even remotely stresses anything other than the GMA line (GMA950, X3100), you are mistaken. Programmable GPUs don't break a sweat when dealing with those things. You could probably show hundreds of thousands of Core Image/Video windows and still not stress the GPU (you would probably run out of video memory first).

That doesn't make any sense. QuickPath increases the latency for IGP memory access. The reason is that when the memory controller is integrated into the chipset northbridge where the IGP is located, the IGP gets direct memory access. But when the memory controller is located off-chip in the processor, the IGP has to make a request from the northbridge to the CPU's IMC over QuickPath, which will then send the information back to the northbridge and the IGP. No matter how efficient QuickPath is, that is still an extra hop and will increase memory access latency for the IGP.
That is, until they get the GPU embedded into the CPU (think Larrabee).
 

diamond.g

macrumors G4
Mar 20, 2007
11,119
2,448
OBX
Larrabee IS a CPU. It apparently uses revised Pentium cores, a 512-bit SIMD unit, and features present on GPUs but not on CPUs.
That doesn't change the fact that it will be integrated on/into Nehalem cores.
It won't be a separate socket. There is supposed to be that option, but I doubt Apple would go that route.
 

diamond.g

macrumors G4
Mar 20, 2007
11,119
2,448
OBX
No it won't (link please). Larrabee is discrete graphics.

Hmm, I could have sworn all fingers pointed to Intel basically using a stripped-down (core-wise) Larrabee for IGP duty. Shoot, it would be the smart thing to do, as it would basically be the reverse of what ATI/Nvidia do. It would get Larrabee into the hands of many and hopefully make the discrete part a bit more popular.


If I am wrong, sorry. I am still searching for anything, but am fast becoming sure of my wrongness.

EDIT: Found something on Electronista
 

winterspan

macrumors 65816
Jun 12, 2007
1,008
0
I'm hoping it will be the beginning of replaceable integrated graphics.
"replaceable integrated graphics" is an oxymoron. The only reason we call them "integrated" is because they ARE integrated into the motherboard and therefore are unable to be replaced with newer units. This allows
Apple to save money by having Intel build cheapo graphics directly into the motherboard.

Also, the GPU is a great place to look for hardware acceleration in video decoding. QuickTime X would be the perfect place for this, and such improvements would benefit iChat.
Yep, hardware-accelerated video decoding is already in place in all modern discrete graphics, although I'm unsure if Apple has been using it, or if that is indeed what QuickTime X is going to do for desktop OS X. I also believe Intel's integrated crap finally has integrated decoding as well with the new laptop platform.

Sounds great. But considering that all of NVIDIA's mobile GPUs are defective and will fail under their own massive thermal output, I hope Apple starts to go back to ATi...
That's a one-time manufacturing problem that is said to only affect a small number of the mobile GPUs out there. I've had 5+ over the years, and I haven't had an issue. I even recently replaced a GeForce Go 7900 with a mobile Quadro unit from eBay, and it has been working great.
AFAIK, considering Nvidia doesn't even have a fab, it could be the fault of the Chinese/Taiwanese fabrication facility.


Maybe this will trickle up to notebooks and desktops and we'll no longer have a two processor CPU-GPU combo, but one GPGPU or one CPU that's great at graphics!
Well, that is already starting to happen. Although not technically "merging" into one common unit, both Intel and AMD have projects on tap for 2009 that integrate CPU and GPU cores into the same package, and eventually even into the same die.
Although the first versions of these will only have the capabilities of integrated graphics, I'm sure they will eventually combine completely, and we'll no longer see separate high-end enthusiast graphics cards, as they'll be replaced by massively parallel hybrid processors that take care of all processing duties on the system.



Also, this stuff has been going on in Windows for at least a couple years. Folding@Home utilizes the GPU like this, and things like PS CS4 are supposed to be already utilizing the GPU on Windows.
Umm, not to rain on anyone's parade, but Nvidia and Windows have been doing this for a while now; not sure about ATI though.

Both Nvidia and ATI have their own GPGPU software development kits, but Nvidia's CUDA has definitely been mentioned a lot more, probably because they have been the dominant force in GPUs recently, as ATI didn't have competitive product offerings for a long time.
The SDKs have been available for Windows/Linux, but there haven't been ANY commercial consumer applications using them, so I definitely wouldn't say "Windows has been doing this for a while". The Adobe Photoshop GPU presentation was an unofficial look into Adobe's research labs, not something currently available for Windows, and Folding@home is obviously a specialist scientific case. GPGPU is still largely something for research, although I have seen some examples of internal use at corporations.
Publicly at least, Apple is already way out in front of Microsoft, having formed the OpenCL standards body to streamline GPGPU processing into a standard framework.
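
For the curious, this is roughly the shape the host-side code takes in a framework like OpenCL -- sketched against the OpenCL C API as it's shaping up, with error handling stripped; treat it as my own toy example, and the exact calls may end up differing:

Code:
/* Toy sketch of the host-side GPGPU pattern: copy data to the GPU, run
 * one tiny kernel per element, copy the results back. Based on the
 * OpenCL C API; error handling omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>   /* <OpenCL/opencl.h> on Mac OS X */

static const char *src =
    "__kernel void scale(__global float *data, float gain) {\n"
    "    size_t i = get_global_id(0);   /* which element am I? */\n"
    "    data[i] = data[i] * gain;\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float host[N];
    for (int i = 0; i < N; i++) host[i] = (float)i;

    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    /* Grab the first GPU the runtime knows about. */
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    /* The kernel source gets compiled for whatever GPU is present. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", &err);

    /* Ship the data across and run one work-item per element. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(host), host, &err);
    float gain = 2.0f;
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(float), &gain);
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(host), host, 0, NULL, NULL);

    printf("host[10] = %.1f\n", host[10]);   /* expect 20.0 */

    clReleaseMemObject(buf);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}

The kernel is written once in a C-like language, and the same host calls should work whether the GPU underneath is from Nvidia, ATI or Intel -- which is the whole point of a vendor-neutral standard.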


Keep in mind not all shaders are created equal. More shaders are still better, but I wouldn't compare different manufacturers on shader count alone.
Definitely! Intel's IGPs have come a long way, but they are still crap compared to anything from AMD/Nvidia.


True - integrated GPUs aren't that powerful to begin with. However, Intel's QuickPath Interconnect technology will reduce the penalty integrated graphics must pay to access system RAM - this will make a performance difference, but how big remains to be seen.
The IGP in current platforms has a direct connection to memory through the memory controller on the northbridge -- with Nehalem, the IGP will have to go through the northbridge over QuickPath to the CPU-integrated memory controller, which will then have to access the memory that is directly connected to the processor.
I can't imagine that this would improve the performance of the IGP, as the added latency would appear to cancel out any increase in memory bandwidth. I'm no expert, however, so I'll have to do some more research.


Would hate to leave out Toshiba's SpursEngine multimedia powerhouse. Toshiba has put it into one of their laptops, so why not Apple, who makes some serious multimedia applications? Didn't Apple mention some product transitions? It seems that you could have a board with integrated graphics, an Intel CPU and a SpursEngine to speed up multimedia.
The "Spurs engine" is stupid. First of all, they should have used a full CELL BE chip. Secondly, it doesn't matter anyways because who one wants a proprietary, hard-to-program co-processor? Just like the "Physx" add-in card, this will fail because it won't have any extensive community support. For speeding up parllel computations, You want a vendor-neutral standards-based approach that will be compatible with a variety of hardware AKA OpenCL.


Just how much extra can they squeeze from the GPUs? Doesn't just about everything (Quartz 2D, Core Image, Core Video) use the GPU already?
Indeed, but you are referring to graphics-related processing. The term "GPGPU" refers to using the GPU to do general-purpose calculations for tasks that are easily parallelized, like video processing/encoding/decoding/iDCT, audio encoding, digital image processing, and scientific simulations like fluid dynamics, protein folding, oil/gas geology, etc.
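
To make "easily parallelized" concrete, here's a toy loop in plain C (my own example, assuming 8-bit RGBA data): every pixel's result depends only on that pixel, so a GPU can hand each iteration to a different shader unit with no coordination at all.

Code:
#include <stddef.h>
#include <stdint.h>

/* Toy "easily parallelized" job: greyscale conversion of an RGBA image.
 * Each iteration touches only its own pixel, so on a GPU every iteration
 * can become its own work-item/thread. */
void to_grey(const uint8_t *in, uint8_t *out, size_t pixels)
{
    for (size_t i = 0; i < pixels; i++) {
        const uint8_t *p = in + 4 * i;
        /* integer approximation of the usual luma weights (77+150+29 = 256) */
        uint8_t y = (uint8_t)((77u * p[0] + 150u * p[1] + 29u * p[2]) >> 8);
        out[4 * i + 0] = y;
        out[4 * i + 1] = y;
        out[4 * i + 2] = y;
        out[4 * i + 3] = p[3];   /* leave alpha alone */
    }
}

Compare that with something like Huffman decoding, where each step depends on the previous one -- that kind of serial code stays on the CPU.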



The difference is that Larrabee's 960 gigaFLOPS is double-precision (used in CPUs), while the HD 4870's 1.2 teraFLOPS is single-precision. Regular GPUs take a big hit on double-precision. In the HD 4870's case, it's 1/5 the SP speed (for the GTX 280, it's 1/12). In other words, 240 double-precision gigaFLOPS.
You are correct, but FP64/double precision is only really important for scientific and industrial simulation/computation since they need that level of accuracy. I don't think it will matter for most consumer use of GPGPU.


That doesn't change the fact that it will be integrated on/into Nehalem cores. It won't be a separate socket. There is supposed to be that option, but I doubt Apple would go that route.
There seems to be a common misunderstanding that Larrabee will be introduced as a CPU-integrated graphics chip. Larrabee is going to be a PCIe add-on board, like most GPUs. Intel's first products that combine CPU and GPU cores in the same package or die are going to use graphics technology from their current motherboard-integrated graphics, NOT from Larrabee -- although this will no doubt change down the road sometime.
 

wrldwzrd89

macrumors G5
Jun 6, 2003
12,110
77
Solon, OH
That doesn't make any sense. QuickPath increases the latency for IGP memory access. The reason is that when the memory controller is integrated into the chipset northbridge where the IGP is located, the IGP gets direct memory access. But when the memory controller is located off-chip in the processor, the IGP has to make a request from the northbridge to the CPU's IMC over QuickPath, which will then send the information back to the northbridge and the IGP. No matter how efficient QuickPath is, that is still an extra hop and will increase memory access latency for the IGP.
Hmm. I would have thought the exact opposite. Thanks for the information!
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,676
The Peninsula
That... doesn't make sense. One of the defining points of integrated graphics is that it's built into the motherboard. If it's replaceable, it's just really wimpy dedicated graphics.

"Integrated" is built into the North Bridge (the memory controller of the chipset).

"Discrete" may be soldered to the motherboard, on a special form factor card like MXM, or on a PCIe card that's easily replaceable (except on an Apple).
 

diamond.g

macrumors G4
Mar 20, 2007
11,119
2,448
OBX
There seems to be a common misunderstanding that Larrabee will be introduced as a CPU-integrated graphics chip. Larrabee is going to be a PCIe add-on board, like most GPUs. Intel's first products that combine CPU and GPU cores in the same package or die are going to use graphics technology from their current motherboard-integrated graphics, NOT from Larrabee -- although this will no doubt change down the road sometime.

Ah, okay. I am really interested in seeing if Intel can actually hang with Nvidia/ATI in the discrete GPU market. The only twist I can think of is if Intel can get rasterization on the extinction list; maybe they would have a chance then. As it stands, Larrabee, while it sounds cool, just seems like it is going to get walked all over by ATI/Nvidia, much like Intel's first foray into the discrete graphics market.
 

layte

macrumors regular
Jul 23, 2008
205
13
Publicly at least, Apple is already way out in front of Microsoft, having formed the OpenCL standards body to streamline GPGPU processing into a standard framework.

DirectX 11 says hi.

http://www.gamasutra.com/php-bin/news_index.php?story=19522

Features include new shader technology that begins to allow developers to position GPUs as more general-purpose parallel processors, rather than being dedicated solely to graphics processing; better multi-threading capabilities; and hardware-based tessellation.
 

iMacmatician

macrumors 601
Jul 20, 2008
4,249
55
You are correct, but FP64/double precision is only really important for scientific and industrial simulation/computation since they need that level of accuracy. I don't think it will matter for most consumer use of GPGPU.
I think Larrabee's single-precision is 2x double-precision (although I'm not totally sure on this). That would give 1.92 teraFLOPS.
 

blitzkrieg79

macrumors 6502
Mar 9, 2005
422
0
currently USA
So... basically Intel/AMD is again behind IBM... As I said a few months ago, the Cell processor will be a blueprint for future processor designs: a main CPU surrounded by specialized cores... Nothing revolutionary, considering IBM has been doing this for a few years now... And with Cell 2 on the horizon, IBM will be on top of its game once again...
 