
deconstruct60

macrumors G5
Mar 10, 2009
12,298
3,893
It's a general-purpose cache. What is noteworthy is its size! It is not SDRAM, however, but it's located on die so…

It isn't on die, and general purpose means "first come, first served": whatever makes demands of the cache gets use of the cache. If the GPU cores are making the vast majority of the demands on the cache, they are going to get most of the bandwidth. (Unless the screen is asleep in a blank state, the GPU is likely doing something.)



A lot of real-world algorithms make use of cache locality, which can have a huge impact on performance. Of course, with the usual frugal cache sizes there are limits to what is possible.

Most real-world algorithms are not dragging in as much data as the GPU cores will. Remember, the GPU cores outnumber the x86 ones by roughly an order of magnitude: tens of cores versus fewer than four x86 ones. Which ones are going to make more demands?

The L4 cache is large because a moderately high double-digit number of cores is trying to get to memory.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
It isn't on die, and general purpose means "first come, first served": whatever makes demands of the cache gets use of the cache. If the GPU cores are making the vast majority of the demands on the cache, they are going to get most of the bandwidth. (Unless the screen is asleep in a blank state, the GPU is likely doing something.)

http://www.realworldtech.com/intel-dram/


Most real-world algorithms are not dragging in as much data as the GPU cores will. Remember, the GPU cores outnumber the x86 ones by roughly an order of magnitude: tens of cores versus fewer than four x86 ones. Which ones are going to make more demands?

Depends on what you do and how your tasks are scheduled by the OS. If you do graphics work at the same time as something else equally memory-intensive, who knows what will happen; a likely scenario is that each will cause cache misses for the other. But who knows? Not you or I, at least. We will have to wait and see.


The L4 cache is large because a moderately high double-digit number of cores is trying to get to memory.

The L4 cache is large because Intel decided it should be, but being a level-4 cache, it makes sense for it to be larger than the level-3 cache, and so on. It fits the memory hierarchy and cache locality: capacities grow and latencies rise as you go from the L1 cache out to disk.
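A rough sketch of how you can actually see that hierarchy from code (the sizes, step count, and timing method here are illustrative assumptions, not a calibrated benchmark): chase pointers through a randomly permuted array and watch the average access time step up as the working set outgrows each cache level.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const long steps = 10000000;                  /* accesses per working-set size */
    for (size_t bytes = 1 << 14; bytes <= 1 << 27; bytes <<= 1) {
        size_t count = bytes / sizeof(size_t);
        size_t *next = malloc(count * sizeof *next);
        if (!next) return 1;
        for (size_t i = 0; i < count; i++) next[i] = i;
        /* Sattolo's shuffle: one big cycle, so the hardware prefetcher
           cannot predict the walk */
        for (size_t i = count - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }
        size_t idx = 0;
        clock_t t0 = clock();
        for (long s = 0; s < steps; s++) idx = next[idx];
        double ns = (double)(clock() - t0) * 1e9 / CLOCKS_PER_SEC / steps;
        /* idx is printed so the compiler cannot optimize the walk away */
        printf("%6zu KB working set: ~%6.1f ns/access (end=%zu)\n",
               bytes >> 10, ns, idx);
        free(next);
    }
    return 0;
}
```

On a chip with a 128 MB L4, the jump to full DRAM latency should, in theory, be pushed out to working sets larger than the 128 MB top of this loop.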
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,298
3,893

Not sure what your point with this cite is. From the cited article:

".. Based on our analysis, the most likely configuration is a 128MB DRAM that is mounted on the same package with at least 64GB/s of bandwidth (and probably more) to the SoC over a 512-bit bus ..."

In other words, the SoC (the CPU+GPU combo) is accessing this 128MB eDRAM; they are both mounted on the same package. Both the CPU and the GPU are going to be accessing it.
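The quoted figures are easy to sanity-check; a quick back-of-the-envelope sketch (the ~1 GT/s transfer rate below is an assumption for illustration, not from the article):

```c
#include <stdio.h>

int main(void) {
    const double bus_bits = 512;          /* bus width from the cited article       */
    const double transfer_rate = 1.0e9;   /* assumed ~1 GT/s, for illustration only */
    double bytes_per_transfer = bus_bits / 8;        /* 64 bytes per transfer       */
    double gb_per_s = bytes_per_transfer * transfer_rate / 1e9;
    printf("512-bit bus @ 1 GT/s = %.0f GB/s\n", gb_per_s);  /* = 64 GB/s, matching
                                                                "at least 64GB/s"   */
    return 0;
}
```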



Depends on what you do and how your tasks are scheduled by the OS.

L4 cache demands inside of the CPU/GPU package are not scheduled by the OS. The OS runs on top of that layer.



The L4 cache is large because Intel decided it should be, but being a level-4 cache, it makes sense for it to be larger than the level-3 cache, and so on.

Please. Anybody with a modicum of training in computer architecture would boost the cache when tens of cores are trying to pull data through two memory controllers. It is an obvious choke point to anyone who didn't sleep through a decent Computer Architecture 101 class.
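A back-of-the-envelope sketch of that choke point (the dual-channel DDR3-1600 figure and the 40-EU count are ballpark assumptions, not confirmed specs):

```c
#include <stdio.h>

int main(void) {
    const double dram_gb_s = 2 * 12.8;   /* two DDR3-1600 channels, ~12.8 GB/s each */
    const int cpu_cores = 4;             /* Haswell quad-core                       */
    const int gpu_eus   = 40;            /* Iris Pro 5200 execution units (assumed) */
    int agents = cpu_cores + gpu_eus;
    printf("%.1f GB/s shared by %d agents = ~%.2f GB/s each\n",
           dram_gb_s, agents, dram_gb_s / agents);
    /* bandwidth spread that thin is why a big shared L4 sitting in front
       of the memory controllers is the obvious move */
    return 0;
}
```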
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
Not sure what your point with this cite is. From the cited article:

A recent presentation from Intel at IDF Beijing indicates that the DRAM actually functions as another level in the memory hierarchy for both the CPU cores and graphics; essentially a bandwidth optimized L4 cache.




L4 cache demands inside of the CPU/GPU package are not scheduled by the OS. The OS runs on top of that layer.

What runs on the CPU/GPU is scheduled by the OS. If whatever runs on it uses memory, it will use the cache.


Please. Anybody with a modicum of training in computer architecture would boost the cache when tens of cores are trying to pull data through two memory controllers. It is an obvious choke point to anyone who didn't sleep through a decent Computer Architecture 101 class.

Yet Haswell is the first Intel CPU to use an L4 cache.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,298
3,893
and if the need arises in the future, it will be used again.

What is utterly lacking in the "move to ARM" rationale is any well-grounded need. There isn't one. There is a lot of 'Pinky and the Brain'-flavored stuff where Apple takes over the world by pushing OS X onto ARM, but those scenarios are about as well thought out as the schemes in the cartoon.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
What is utterly lacking in the "move to ARM" rationale is any well-grounded need. There isn't one. There is a lot of 'Pinky and the Brain'-flavored stuff where Apple takes over the world by pushing OS X onto ARM, but those scenarios are about as well thought out as the schemes in the cartoon.

I'm not talking about ARM.
 

Galatian

macrumors 6502
Dec 20, 2010
336
69
Berlin
Hopefully this will translate to around 6:30 hours of battery life on the MacBook Air 11" when web browsing. My 2011 version gets around 4:50 hours in my tests. :)

I'm looking forward to tests as well but don't forget: the CPU is just one part of the notebook that draws energy. The screen probably uses just as much.

Regarding the L4: I don't think it provides a substantial performance gain for the CPU. First of all, while those chips gain an L4 cache, they lose 2 MB of L3 (6 MB vs. 8 MB), and they are clocked lower than comparable non-Crystalwell chips. Maybe some highly optimized code will profit from it, but then again, which developer will write code for such a small user base? It will take years for TSX and AVX2 to be in use as it is.
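As a rough illustration of what "highly optimized code" means here, a hypothetical AVX2 snippet (the arrays and values are made up; it needs a Haswell-class CPU and a compiler flag like -mavx2, which is exactly why new ISA extensions take years to reach shipping software):

```c
#include <stdio.h>
#include <immintrin.h>   /* AVX2 intrinsics; requires a Haswell-class CPU */

int main(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int c[8];
    __m256i va = _mm256_loadu_si256((const __m256i *)a);
    __m256i vb = _mm256_loadu_si256((const __m256i *)b);
    __m256i vc = _mm256_add_epi32(va, vb);   /* eight 32-bit adds in one instruction */
    _mm256_storeu_si256((__m256i *)c, vc);
    for (int i = 0; i < 8; i++)
        printf("%d ", c[i]);
    printf("\n");
    return 0;
}
```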
 

iRun26.2

macrumors 68020
Aug 15, 2010
2,123
345
Yes, so you agree that ARM designs could ramp up the performance to equal or surpass Intel. Now factor in a couple of other facts:
- Apple already has a team of engineers who now have at least several years' experience designing ARM chips. The benefits then extend to the possibility of optimizing their chip designs for the OS, or vice versa.
- Designing the chips themselves, paying a small licensing fee to ARM, and having them built by a foundry like TSMC, GF, or even Samsung is MUCH cheaper than buying chips from Intel

The second point is the big one. Having that cost edge is what helps Apple maintain or support their margins. So the idea that Apple would like to be able to have an ARM-based Mac is not so crazy.

Software incompatibilities would make me cancel my purchase.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
Regarding the L4: I don't think it provides a substantial performance gain for the CPU. First of all, while those chips gain an L4 cache, they lose 2 MB of L3 (6 MB vs. 8 MB), and they are clocked lower than comparable non-Crystalwell chips.

Why not? It's not called the von Neumann bottleneck for nothing. Any algorithm that goes beyond big-O analysis and makes use of cache locality should, in theory, be able to benefit from more CPU cache.
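A minimal sketch of that effect, assuming a plain row-major C array (the 4096x4096 size is arbitrary): both loops do identical O(n^2) work, but the row-order walk uses every byte of each fetched cache line while the column-order walk wastes most of them.

```c
#include <stdio.h>
#include <time.h>

#define N 4096
static double grid[N][N];   /* 128 MB, row-major as C arrays always are */

int main(void) {
    double sum = 0.0;
    clock_t t0 = clock();
    for (int i = 0; i < N; i++)        /* row by row: consecutive addresses,  */
        for (int j = 0; j < N; j++)    /* so every fetched cache line is      */
            sum += grid[i][j];         /* fully used before it is evicted     */
    double rows = (double)(clock() - t0) / CLOCKS_PER_SEC;

    t0 = clock();
    for (int j = 0; j < N; j++)        /* column by column: each access jumps */
        for (int i = 0; i < N; i++)    /* N*8 bytes, so most of every cache   */
            sum += grid[i][j];         /* line pulled in goes unused          */
    double cols = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("row-major: %.2fs  column-major: %.2fs  (sum=%.0f)\n",
           rows, cols, sum);
    return 0;
}
```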
 

Nik

macrumors 6502a
Jun 3, 2007
669
1,255
Germany
One of the linked articles said that the 5200 comes only in a form factor intended for soldering to the motherboard. So, assuming that's true, they would be selling that version only to OEMs and not making that chip available to the public.

Thanks for this information :)

But that does not change the fact that Intel says Q3 for the 5200 graphics. :/
 

mattferg

macrumors 6502
May 27, 2013
380
22
Haswell (v3) is the generation after Ivy Bridge (v2; Ivy Bridge Xeons have been available for a while). In fact, if you click on the Intel announcement, it actually mentions the Xeon E3-1200 v3 as part of today's introduction.

The E5 needed for dual-CPU Mac Pros is still a couple of months off, but next week Apple could announce a new Mac Pro with E5/dual versions shipping later and E3 quads shipping immediately.

The one downside to the new generation of chips is that so far there is no version of the single-socket Xeon with more than four cores. Are those expected later? Or will people have to buy the dual-socket versions even for a single-socket six-core (or more) machine?

Sorry, you're mistaken. Haswell is the fourth generation, v4.

Nehalem - Sandy Bridge - Ivy Bridge - Haswell

Thus the Xeon E3-1200 v3 you mentioned is an Ivy Bridge chip.
 

iMikeT

macrumors 68020
Jul 8, 2006
2,304
1
California
I can't wait, as I might actually pull the trigger this summer to replace my PPC G4 notebook that's served me well for the past 8 years.
 

Tech198

Cancelled
Mar 21, 2011
15,915
2,151
Most people wouldn't keep their laptop that long.

The only thing I kept longer was my car, and that's about 20 years old.
 

cgk.emu

macrumors 6502
May 16, 2012
449
1
MR users: "Can't wait for Haswell!"
MR users... the day of release: "What rubbish these chips are!"

:confused:

----------

Please just give me a new Mac Pro. At least the latest MBP still works. My MP is no use at all.

Why? Does it not power on? Or does it not play Crysis as fast as you want it to? :rolleyes:
 

junctionscu

macrumors newbie
Nov 17, 2011
22
0
So you don't believe Apple is testing OS X on ARM in their labs in anticipation of future ARM chips that might be powerful enough to meet their needs?

If I understood your post correctly, you were saying that for ARM to design a chip that had the equivalent processing power of an Intel chip, it would lose its edge in energy efficiency. I would argue that 1) we won't know until they do and 2) even if what you're saying turns out to be true (that at the equivalent processing power, ARM=Intel in energy efficiency), using ARM chips still has a cost advantage over Intel.

ARM has already tried to create a CPU with the equivalent processing power of an Intel chip - the A15. As they scaled up in performance, they faced the same power/performance realities and lost their energy-efficiency edge. ARM power on the A15 is way higher than Intel's for the same performance level. ARM tried to remediate this with big.LITTLE, but we have yet to see if it works (meanwhile, it takes a lot of die space to put two separate sets of cores, A15 and A7, on the same die). I think that's why Samsung went with Intel instead of big.LITTLE on ARM, and why Apple did a custom design instead of using the A15. Good point about cost though.

We are all sure they are, but by the time ARM catches up to Intel's CPU performance, there is just as likely a chance for Intel to catch up to ARM's power consumption.

They've already caught up on power. The new Silvermont CPU Intel announced is even more power-efficient (better performance using less power). Of course, the mainstream Intel CPUs (like what's in the Surface Pro) use more power, but ARM isn't competing there yet.

----------

Yes, so you agree that ARM designs could ramp up the performance to equal or surpass Intel. Now factor in a couple of other facts:
- Apple already has a team of engineers who now have at least several years' experience designing ARM chips. The benefits then extend to the possibility of optimizing their chip designs for the OS, or vice versa.
- Designing the chips themselves, paying a small licensing fee to ARM, and having them built by a foundry like TSMC, GF, or even Samsung is MUCH cheaper than buying chips from Intel

I would argue that ARM designs can't match Intel performance without completely losing their power efficiency advantage. The A15 is an example of that. They still have a ways to go.

Apple's CPU team is really impressive, though. Isn't it amazing that a group of engineers at Apple (mainly from PA Semi) is able to make a better ARM CPU than ARM?

I think ARM messed up with A15, and they are trying to fix it with the A12 they announced yesterday (which is essentially an A6X). The A12 won't be ready for over a year though, which is why Apple, Qualcomm, and others have been doing their own chip designs based on ARM.

Agree, though, that Apple probably would prefer to do their own chips rather than pay Intel's prices. However, they don't want to give money to their competition (Samsung), or take a huge drop in performance per watt and a compatibility loss by switching to ARM in the Mac line. I really don't see any advantage in switching to ARM for Macs except that they would want to "control" it. But I also don't see them going out and making displays and other components. They compete on experiences, software, and industrial design.
 

iSayuSay

macrumors 68040
Feb 6, 2011
3,792
906
Can't wait for the Apple ARM version of the MacBooks

Intel has been focusing on power efficiency for some time now. It's only a matter of time before x86 matches ARM on power consumption.

By the time that comes, ARM could be irrelevant.
 

Crzyrio

macrumors 68000
Jul 6, 2010
1,587
1,110
Yes, so you agree that ARM designs could ramp up the performance to equal or surpass Intel. Now factor in a couple of other facts:
- Apple already has a team of engineers who now have at least several years' experience designing ARM chips. The benefits then extend to the possibility of optimizing their chip designs for the OS, or vice versa.
- Designing the chips themselves, paying a small licensing fee to ARM, and having them built by a foundry like TSMC, GF, or even Samsung is MUCH cheaper than buying chips from Intel

The second point is the big one. Having that cost edge is what helps Apple maintain or support their margins. So the idea that Apple would like to be able to have an ARM-based Mac is not so crazy.

Not saying it is crazy at all, but it is still a good 3-5 years down the road, and a lot can change in that time. I am saying it is crazy that people are guaranteeing it.
 