Dear Steve Jobs,

Thank you for having the courage, the massive ego, the marbles—whatever it was—to switch to Intel processors. The roadmap keeps getting better.

Sincerely,
The Macintosh Community
 
AMD/ATI isn't going anywhere just yet. R700 is proving to be quite a bit nicer than GT200 so far. Plus, Intel has quite a hill to climb in the GPU market.

Yeah.... I was not worrying about AMD, just expressing an opinion! And lately, Intel has developed a knack for thrashing the competition! :p
 
Amdahl's law (not the law of the poster here who calls himself Amdahl, but Gene Amdahl) is a law for vector processors, not for multi-processor machines, and quoting it in this context is misleading.

Huh? How are you comparing vector and multiprocessing? You can have a scalar processor with multiple vector processors for example...

Right from Wikipedia: "It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors."

On a vector processor, all the machine's vector capability was useless and wasted as soon as an application couldn't make use of it. However, if an application cannot make use of multiple cores, then _that_ application will be limited in speed, but the cores it cannot use are still available to other applications. You can make 100 percent use of an eight-core Mac Pro by running eight applications that are each totally incapable of using multiple cores.
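
For the curious, the formula itself is trivial; here's a quick sketch in C (my own illustration, not from anyone in this thread) showing how the theoretical speedup flattens out once the serial fraction dominates:

    #include <stdio.h>

    /* Amdahl's law: speedup = 1 / ((1 - p) + p / n),
       where p is the parallel fraction and n the core count. */
    static double amdahl(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        /* Example: a 90%-parallel workload. */
        printf("2 cores:  %.2fx\n", amdahl(0.90, 2));   /* ~1.82x */
        printf("8 cores:  %.2fx\n", amdahl(0.90, 8));   /* ~4.71x */
        printf("64 cores: %.2fx\n", amdahl(0.90, 64));  /* ~8.77x */
        return 0;
    }

Even with 64 cores, a 90%-parallel task never gets past 10x.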

They all need to access the same memory. Can two cores (sharing the same bus) write to memory at the same time?
 
should be OpenGL, not OpenCL here

http://www.apple.com/macosx/snowleopard/

OpenCL

Another powerful Snow Leopard technology, OpenCL (Open Computing Language), makes it possible for developers to efficiently tap the vast gigaflops of computing power currently locked up in the graphics processing unit (GPU). With GPUs approaching processing speeds of a trillion operations per second, they’re capable of considerably more than just drawing pictures. OpenCL takes that power and redirects it for general-purpose computing.
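
For a flavor of what that means in practice, here is a minimal kernel in the OpenCL C dialect; this is my own sketch of the style, not Apple sample code:

    /* An OpenCL kernel: each GPU work-item scales one array element,
       so thousands of them run in parallel instead of one CPU loop. */
    __kernel void scale(__global float *data, float factor) {
        size_t i = get_global_id(0);   /* this work-item's index */
        data[i] = data[i] * factor;
    }

The host program queues that kernel across N work-items and the runtime spreads them over the GPU's cores.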
 
http://en.wikipedia.org/wiki/Image:AmdahlsLaw.svg

Unfortunately, it's not that linear when you zoom out the graph a wee bit. :rolleyes:
Did you even read the article?

From what I've read so far, it seems that Larrabee requires 150 to 300 watts. We won't be seeing these things in MacBooks, Mac minis, or even iMacs.

Unless they lower the power requirements by about 90-95% we won't see Larrabee in anything but Mac Pros.
They can lower it significantly by reducing core clocks and core count.

But I don't think that Apple will go that way for the other Macs, because they are all targeted at the consumer segment, which arguably doesn't need many cores. The MacBook Pro, "pro" as it may be called, lacks a number of pro features, which implies to me that it won't be using Larrabee for any CPU-type tasks.
 
Did you even read the article?

They can lower it significantly by reducing core clocks and core count.

But I don't think that Apple will go that way for the other Macs, because they are all targeted at the consumer segment, which arguably doesn't need many cores. The MacBook Pro, "pro" as it may be called, lacks a number of pro features, which implies to me that it won't be using Larrabee for any CPU-type tasks.

See, the problem that Larrabee faces is one of core count and speed. To keep up with AMD/Nvidia, they have to clock high and have many cores. Of course, if they use a TBDR (tile-based deferred renderer) then they won't need to, but no one is sure what kind of renderer they are doing just yet.
 
I'm very disappointed in the graphics card on the MacBook. I would have thought that Apple would match the power of the Intel chip with a good graphics card. But the simplest of graphics often cause my MacBook to crash or throw errors. I'm totally not impressed.

This is good news, especially if they'll incorporate it into the MacBook; it needs a serious graphics card update.

What the hell has that got to do with the fact it uses a discrete graphics card? Those issues you are facing are due to driver issues, not the hardware. As for your problems, given I don't know what your configuration is, I don't know what the problem exactly is - but I have an X3100 MacBook, and I haven't yet experienced a single problem.
 
OK.... we need to clear things up here!
OpenCL: compute: use the GPU for CPU-style work!
OpenGL: graphics: use the GPU for drawing. (In crude terms.)
OpenGL is the most commonly used graphics library and has been in use since 1992 (Wiki).

Conclusion: the first OpenCL should actually be OpenGL.
The second OpenCL (right at the end) is aptly used!
 
Could Larrabee be used for intensive CPU tasks like Photoshop filters, etc?
 
Could Larrabee be used for intensive CPU tasks like Photoshop filters, etc.

It looks like each core implements the full x86 instruction set, so yes, but things might need to be modified (does it really support SSE, or something else?)
 
Could Larrabee be used for intensive CPU tasks like Photoshop filters, etc.

Simplifying it:
Apple is developing Grand Central for what you want! (the software approach)
Larrabee is going to be a hybrid CPU and GPU. (the hardware approach)

And together, the software and hardware combo will obviously be lethal! :D
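
Nobody outside Apple has written against Grand Central yet, but if it ends up exposing a parallel-for style call (pure speculation on my part, including the dispatch_apply name and the filter_row helper), a Photoshop-style filter could look roughly like this in C:

    #include <dispatch/dispatch.h>
    #include <stddef.h>

    void filter_row(float *row, size_t len);   /* hypothetical per-row filter */

    /* Speculative sketch: hand each image row to whatever cores are free. */
    void apply_filter(float *pixels, size_t rows, size_t row_len) {
        dispatch_apply(rows, dispatch_get_global_queue(0, 0), ^(size_t r) {
            filter_row(pixels + r * row_len, row_len);
        });
    }

The appeal would be that the library, not the app, decides how many threads to use on whatever hardware it finds.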
 
I'm very disappointed in the graphics card on the MacBook. I would have thought that Apple would match the power of the Intel chip with a good graphics card. But the simplest of graphics often cause my MacBook to crash or throw errors. I'm totally not impressed.

This is good news, especially if they'll incorporate it into the MacBook; it needs a serious graphics card update.

The MacBook currently only uses the Intel GMA X3100 graphics processor with 144MB of DDR2 SDRAM shared with main memory, so it is a simplified system for less intensive graphical tasks. That is made pretty clear in the specifications. It sounds like you are asking it to do more than it is capable of, or you have a lemon. Perhaps you should have a MacBook Pro, which does have a true graphics card in the form of the NVIDIA GeForce 8600M GT, last time I checked.
 
You know, everyone complained before about how Intel integrated graphics didn't have hardware shaders at all in the Extreme Graphics generation, and didn't have hardware T&L or vertex shaders in the GMA 900 and 950, forcing Intel to run shaders in software on the CPU. Yet, it seems to me, all the effort Intel made in developing an optimized way of running shaders in software, i.e. translating shaders to run on an x86 core, is critical to Larrabee. Only now Larrabee has the vector units and parallelism to get better performance. It'd be nice if some of the enhancements made in compilers and drivers for running shaders on x86 in Larrabee could filter back to the Intel IGPs in the MacBook, Mac mini, and iMac.

In terms of Larrabee replacing the CPU, I don't think it's likely. My understanding of Larrabee's cores is that they are fairly simple x86 cores optimized for floating-point/vector operations, similar to the SPEs in Cell, which were simple PPC cores optimized for floating-point/vector ops. You still need a regular CPU to handle other applications and to feed Larrabee. I'm pretty sure most desktop software, especially office apps, is barely dual-threaded, much less multithreaded, and relies mostly on integer ops, so it would perform terribly on Larrabee.

In fact, I believe one implementation of Larrabee will be as an IGP integrated on the CPU: probably initially a separate die in the CPU package, but eventually integrated into the same die. Mainstream Nehalems targeted for next year will already have an IGP integrated on the CPU, but that will probably be based on the existing GMA X4500 initially. Other implementations of Larrabee will probably come in a CPU package fitting into regular CPU sockets, connecting to the main CPU(s) over QuickPath as a co-processor, or placed in a PCIe slot as an accelerator card.
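
To make the "shaders on x86" idea concrete, here's a toy software pixel shader in C (my own illustration, nothing Intel-specific):

    /* A "shader" is just a small function applied to every pixel.
       On a CPU that's a plain loop; a GPU (or Larrabee, with its
       wide vector units and many threads) runs many iterations at once. */
    typedef struct { float r, g, b; } Pixel;

    static Pixel darken(Pixel p) {               /* the per-pixel "shader" */
        Pixel out = { p.r * 0.5f, p.g * 0.5f, p.b * 0.5f };
        return out;
    }

    void run_shader(Pixel *img, int n) {
        for (int i = 0; i < n; i++)
            img[i] = darken(img[i]);
    }

Intel's driver work, as I understand it, amounted to compiling shader programs into tight x86 loops like that one.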
 
AnandTech said:
It looks like future Intel desktop chips will be a mixture of these large Nehalem-like cores surrounded by tons of little Larrabee-like cores. Your future CPUs will be capable of handling whatever is thrown at them, whether that is traditional single-threaded desktop applications, 3D games, physics or other highly parallelizable workloads. It also paints an interesting picture of the future - with proper OS support, all you'd need for a gaming system would be a single Larrabee, you wouldn't need a traditional x86 CPU.
(source)

I'm really looking forward to this. A Larrabee core is much smaller (≈5x?) than a Merom core. Intel, since the Pentium M, has been steadily increasing their CPU sizes despite process shrinks. A quad-core Nehalem is nearly as big as a quad-core Kentsfield. Intel makes a new microarchitecture every two years, which brings additional features at the cost of increased die size. AMD, however, makes a new microarchitecture every four years, which delivers more cores despite less performance per core.

Fast forward to 2011. Intel's Sandy Bridge is expected to have 4-8 cores (6 for the DP server variant), while AMD's Sandtiger (Bulldozer cores) is expected to have 8-16 cores for the server versions. So for Intel to be competitive, 1 Sandy Bridge core = 1.33~2 Bulldozer cores. That's a large hurdle, given that 1 Bulldozer core ≥ 1 Nehalem core. While Sandy Bridge will be a power-oriented microarchitecture and will use high clock speeds (4 GHz) to increase performance, it has been hinted that Bulldozer will also use high clock speeds. So while Intel might win at single-threaded tasks, AMD is likely to have the multi-threaded advantage, and this may continue into the future. Unless…

Sandy Bridge was originally codenamed Gesher, Hebrew for "bridge." Maybe Sandy Bridge is a bridge from multi-core (2~12 cores) to many-core (≥12 cores) CPUs. Intel could group 4-8 Larrabee-type cores into a "node" (which would reduce the in-order performance hit) and integrate 4-8 nodes with comparatively sized Sandy Bridge cores into a very powerful single- and multi-threaded CPU with over 1 teraFLOP DP FP in 2013.
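
A quick back-of-the-envelope for that teraFLOP claim (every number here is my own guess, not an Intel figure): 32 Larrabee-type cores × 8 doubles per hypothetical 512-bit vector unit × 2 FLOPs per fused multiply-add × 2 GHz = 32 × 16 × 2×10^9 ≈ 1.0 teraFLOP DP. So the core counts above are at least in the right ballpark.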

Another project that will be unsupported...

Why not concentrate on the millions of multi-core processors that you have already sold into the market?
Intel can multitask too, you know.
 
If you look at pure processing power, it is clear that GPUs are way faster than CPUs. But the real problem is that we work with programs that can't take advantage of every single clock cycle yet.

Take Deep Blue, for example. Deep Blue and Deeper Blue were designed to do one thing: predict and make the best chess move. Deep Blue was capable of evaluating 200 million chess moves per second in 1997. This is 2008; you would think computers today would dwarf that. But the fastest quad-core desktop system can only evaluate about 8 million chess moves per second. That is a big jump from previous computers, but only 1/25th of Deep Blue's performance ten years ago. Now, I know most people are going to say a supercomputer today can outperform Deep Blue, but as a benchmark on a timeline it makes for an interesting comparison. Custom-built machines today are already surpassing, even doubling, that benchmark. Yes, GPU programming for main-processor usage is a hurdle, but it is something we need to explore.
 
Sandy Bridge was originally codenamed Gesher, Hebrew for "bridge." Maybe Sandy Bridge is a bridge from multi-core (2~12 cores) to many-core (≥12 cores) CPUs. Intel could group 4-8 Larrabee-type cores into a "node" (which would reduce the in-order performance hit) and integrate 4-8 nodes with comparatively sized Sandy Bridge cores into a very powerful single- and multi-threaded CPU with over 1 teraFLOP DP FP in 2013.
Well, it is already known that the Havendale and Auburndale Nehalem chips will have an on-package IGP. When Larrabee debuts, it will just replace Intel's existing IGP solution.
 
And how is this any different from what IBM has been doing for a couple of years with the Cell processor, and something that Apple ran away from??? A main CPU (multicore CPU/PPE) surrounded by specialized cores (SPEs/GPU shader units), and nothing will take advantage of it until the software is optimized for it. Seems like Intel is behind IBM yet again.
 
Huh? How are you comparing vector and multiprocessing? You can have a scalar processor with multiple vector processors for example...

Right from Wikipedia: "It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors."
I am not comparing vector processing and multiprocessing. I am saying that Amdahl's law applies to vector processors and nothing else, yet it comes up again and again quoted for multi-processor systems. Multi-processor systems work differently: with a multi-processor system, all the processors that cannot be used by one task are available for other tasks.

They all need to access the same memory. Can two cores (sharing the same bus) write to memory at the same time?

We are talking about the future here; there is no documentation for Larrabee available to the public. But each Nehalem processor has three independent memory channels on three buses; a system with four Nehalem processors can have twelve independent memory channels, performing up to twelve writes to actual RAM simultaneously.
 
Well, it is already known that the Havendale and Auburndale Nehalem chips will have an on-package IGP. When Larrabee debuts, it will just replace Intel's existing IGP solution.
No, it will not. Larrabee is a discrete GPU, separate from Intel's IGPs. IGPs are expected to exist at least through 2010.

And how is this any different from what IBM has been doing for a couple of years with the Cell processor, and something that Apple ran away from??? A main CPU (multicore CPU/PPE) surrounded by specialized cores (SPEs/GPU shader units), and nothing will take advantage of it until the software is optimized for it. Seems like Intel is behind IBM yet again.
The Cell is well suited to only a selection of tasks, not general-purpose work like a CPU. I believe Larrabee is much more powerful than Cell, too.
 
It looks like each core implements the full x86 instruction set, so yes, but things might need to be modified (does it really support SSE, or something else?)

Each core consists of a really primitive Pentium-class processor (which apparently has the best performance per watt when built with modern process technology), plus extensions for SSE through SSE4 and 64-bit to be software-compatible with the latest CPUs, PLUS at least 256-bit vector units for the massive performance that is needed in a GPU. Each core apparently does 4x hyperthreading; each thread executes instructions in-order, and the hyperthreading is used to cover latency, which allows it to reach a huge percentage of the core's theoretical performance. There are also some very specific additions to the instruction set that are specifically for use as a GPU, most likely hardware support for texture lookups.

So you'd want to adapt the software to use 256-bit vector instructions instead of 128-bit, and to use a massive number of threads.
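
Concretely, today's 128-bit SSE code handles four floats per instruction; this is the kind of loop that would need widening (plain SSE intrinsics, which exist today; the wider Larrabee equivalents are still unpublished):

    #include <xmmintrin.h>   /* SSE intrinsics: 128 bits = 4 floats */

    /* Adds two float arrays four elements at a time; remainder
       elements omitted for brevity. On Larrabee the same loop would
       use wider vectors (8+ floats per instruction). */
    void add_arrays(float *a, const float *b, int n) {
        for (int i = 0; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(a + i, _mm_add_ps(va, vb));
        }
    }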
 