The 65W versions of Sandy Bridge still cost more... remember, last year the 65W quad-core CPUs (similar to the ones in the iMac) carried a price premium over the 95W parts, just as they do now.
I have been thinking the same thing for a while now. Constraints on the hardware side will force the people writing software to be more inventive and efficient.
Having (virtually) unlimited resources in the desktop realm doesn't exactly give you a lot of incentive to reinvent the wheel every time you pump out a new update. Things tend to bloat instead of getting rebuilt efficiently. Is there any reason why I should have to download 500MB updates to MS Office every time I open up a Word document on my iMac? Hardly seems efficient to me, and clearly not a viable option on iOS.
It's LTD - he just spouts off stuff.

I really don't understand what you mean. Are you saying workloads will become less intensive, that processors will become faster and more energy-efficient, or that software will become multi-threaded, letting it leverage multiple energy-efficient cores to get performance, making it both fast and less power-hungry?
I've read about ARM since its first use in the Newton, and in my understanding ARM is a pure RISC design: a very small core built with efficiency in mind. They don't have the branch prediction and deep execution pipelines of x86 processors, which limits their effective power in a desktop environment. It's like comparing a regular 3L V6 engine with a 1.6L turbo four-cylinder running at 11,000 RPM: both can achieve about the same horsepower, but the V6 can be pushed further by burning more fuel, while the four-cylinder has better fuel efficiency at low speed. And while ARM is already pushed to its limit, multiplying cores and extending the base ARM design could obliterate those limits in the near future.
The interesting part comes from Intel, who say ARM mobile CPUs are currently improving twice as fast as Moore's Law.
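As a rough sketch of what "twice as fast as Moore's Law" would mean, assuming the classic doubling every ~2 years and treating "twice as fast" as halving the doubling time (both assumptions, since the post doesn't define its terms; Python used purely as illustration):

```python
# Compound-growth sketch: how much performance (or transistor count)
# multiplies over a span of years for a given doubling time.
def growth_factor(years, doubling_time_years):
    return 2 ** (years / doubling_time_years)

moore = growth_factor(4, 2.0)        # classic Moore's Law: doubles every 2 years
twice_moore = growth_factor(4, 1.0)  # "twice as fast": doubles every year

print(moore)        # 4.0  -> 4x over 4 years
print(twice_moore)  # 16.0 -> 16x over 4 years
```

The gap compounds quickly: halving the doubling time squares the growth factor over any fixed span.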
Most won't admit it, but Apple shaped the product road-map of most of the computer companies that are still relevant today. That includes Google, Sony, HTC, Samsung, Motorola, etc...
It's LTD - he just spouts off stuff.
The whole idea of a "processor-intensive" application is that it typically loads most or all cores to nearly full load (or keeps them there). As processor technology continues to advance, yes, some processor-intensive applications become less so thanks to the architectural and/or speed enhancements made to the processor. However, as history has shown, when more powerful systems become available, many developers strive to take advantage of them, so we'll always have "processor-intensive" applications in some form.
For the average Mac user? LTD can make a case, albeit a weak one. For quite a few professional Mac users (and many professional non-Mac users), those comments are just drivel.
On what basis do you say they are 10x more efficient? They are slow, and hence burn less power and energy. There is nothing inherently inefficient about x86; the ARM ISA is equally bloated.
Intel's fabrication edge is not easy to duplicate either. ARM can't use the same tricks in the same time frame. Intel has dedicated fabs tailored for its chips, while ARM chips are built in shared fabs like TSMC's. TSMC stays two generations behind Intel in fab technology (and for good, fundamental economic reasons).
Great analogy with cars. Love it.
A major clarification here: ARM is just an ISA. You can implement it with any microarchitecture. Branch prediction and deep pipelines are part of the microarchitecture. The 486, for example, implemented the x86 ISA but had no branch prediction or deep pipeline. The ARM Cortex-A15 (http://www.arm.com/products/processors/cortex-a/cortex-a15.php) does have a deep pipeline and branch prediction. To use your analogy: the difference between x86 and ARM is what's on the dash. What's under the hood is independent of that.
The other thing I want to point out is that ARM is not RISC. RISC was about a simple instruction set; Alpha or SPARC were RISC. ARM is very bloated: they even have indexed addressing modes with writeback, which even x86 doesn't support, so that's a common misconception (I need to blog about this ASAP...).
ARM is growing twice as fast because there is more room for improvement in terms of performance, but increasing performance will almost certainly reduce power efficiency. I guess we will see where this ends up going. I have a feeling Intel and ARM will meet in the middle somewhere. It's good for us, though: prices will come down due to competition!
Looking at what the iPad 2 is capable of today, it's pretty astounding.
I could see why ARM would improve twice as fast as Moore's Law for a little while. My guess is that because it has only recently been seriously developed and pushed, it is more or less playing catch-up, using tricks and technology learned from the other CPU lines over the years. I'm willing to bet it will slow down and drop back to Moore's Law pace after a while.
He is just repeating Apple catchphrases and his Church of Apple worship.
I will tell you, multithreaded/multicore coding is hell to do and a huge pain in the ass to get working correctly, because so many more things can go wrong, plus you have to make sure threads aren't trying to write or change the same set of data at the same time. Single-threaded code is so much easier to design and write than multithreaded code.
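The "changing the same data at the same time" hazard can be made concrete with a small sketch (Python used purely as illustration; the thread isn't about any particular language):

```python
import threading

counter = 0
lock = threading.Lock()

def increment_unsafe(n):
    global counter
    for _ in range(n):
        counter += 1   # read-modify-write: not atomic, two threads can interleave

def increment_safe(n):
    global counter
    for _ in range(n):
        with lock:     # the lock serializes the read-modify-write
            counter += 1

threads = [threading.Thread(target=increment_safe, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; the unsafe version can silently lose updates
```

The unsafe version compiles and usually "mostly works," which is exactly why these bugs are such a pain: they only show up under the right interleaving.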
The ARM ISA is a RISC design just like PPC, SPARC or Alpha; the ARM acronym stood for Acorn RISC Machine, later Advanced RISC Machine. Over time, with Thumb, NEON, Jazelle and VFP, it has become bloated like you said, but the core is a straight, simple RISC processor based on the Berkeley RISC project.
ARM on wikipedia
LOL at worshipping Apple. I hear you about multi-threading; actually, that's what I studied in graduate school. I always say this: multi-threading takes the hardware's job of finding ILP and assigns it to the programmer in order to save power. No gain without pain, hence the toughness. I wrote a small article about it recently which may interest you (http://bit.ly/lkIair).
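The "hardware's job of finding ILP assigned to the programmer" point can be sketched like this: in the serial version, the out-of-order core alone has to discover independent operations; in the threaded version, the programmer has explicitly declared which chunks are independent. (Python purely as illustration; CPython's GIL means this shows the structure, not a real speedup.)

```python
import threading

data = list(range(1, 1001))

# Serial: the hardware alone must hunt for instruction-level parallelism.
serial_total = sum(data)

# Threaded: the programmer declares the independence by splitting the work.
partials = [0, 0, 0, 0]

def partial_sum(idx, chunk):
    partials[idx] = sum(chunk)   # each thread writes only its own slot: no sharing

chunks = [data[i::4] for i in range(4)]
threads = [threading.Thread(target=partial_sum, args=(i, c))
           for i, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(partials))  # 500500, same answer as serial_total
```

The dependence analysis (which slices are safe to run concurrently) is exactly the work that moved from the hardware to the programmer.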
I am still in school learning about all of it, but I know that when I had to multithread, it was a pain in the ass just getting it to work.
I do believe multicore had to happen, but a lot of work needs to be done to make the CPU do more of the work instead of us, the programmers, having to do it; as it stands, our programs are limited to the maximum speed of a single thread, which is roughly the speed of a single core.
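The "limited to the max speed of a single thread" point is Amdahl's Law: whatever fraction of a program stays serial caps the speedup no matter how many cores you add. A quick sketch (Python purely as illustration):

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup when serial_fraction of the work can't be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even with 90% of the work parallelized, 16 cores give well under 16x:
print(round(amdahl_speedup(0.10, 16), 2))   # 6.4
# And as cores -> infinity, the limit is 1 / serial_fraction = 10x.
```

That ceiling is why shrinking the serial portion (the single-thread part) matters more than piling on cores.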
The 65W versions of Sandy Bridge still do cost more...
An $11 difference at retail; for Apple it might be zero difference.
I understand where you are coming from. Actually, I dislike this whole RISC vs CISC debate, because the boundaries are hazy and there is no substance to it as such (this is coming from someone who architects processors for a living). You are right about ARM starting as RISC, but even the Wikipedia article points out that features have been added to the ISA since then, "To compensate for the simpler design, compared with contemporary processors like the Intel 80286 and Motorola 68020." These ISAs start as "RISC" and end up in the same place. Sun SPARC, when it started, did not even have a multiply instruction; you had to write a loop to perform a multiply. Then so many of those things got added over time. Hence my point about the debate being just silly.
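The "write a loop to perform multiply" approach can be sketched as shift-and-add (an illustration of the general technique ISAs without a multiply instruction fell back on, not actual SPARC library code):

```python
def soft_multiply(a, b):
    """Multiply non-negative integers using only shifts and adds,
    the way software multiply routines did on ISAs with no MUL instruction."""
    result = 0
    while b:
        if b & 1:           # if the low bit of b is set,
            result += a     # add the current shifted a into the result
        a <<= 1             # a * 2: shift toward the next bit position
        b >>= 1             # examine the next bit of b
    return result

print(soft_multiply(7, 6))  # 42
```

One loop iteration per bit of the multiplier, which is why hardware multipliers were such a welcome addition.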
Edit: it just occurred to me that you meant Intel will give Apple a discount. If that's the argument, I have no data to disagree with it, so I would concede!
I should have moderated myself more about how ARM is "better" than the others. I totally agree with you on the RISC vs CISC debate. The truth is that I never liked Intel processors much over the years, but I've got to admit x86 may be the best general-use processor. I still think it's silly for Intel to try to put x86 into a phone when other designs do the job just fine.
BTW, I've read your discussion about multithreading and multicore with Rodimus Prime. I'm not a programmer; I'm more of an OS guy (got my ACSA cert). But I would like to know: what do you think of Apple's way of solving those problems with C blocks (^) and Grand Central Dispatch?
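For reference, GCD's model is submitting units of work to queues and letting the runtime decide how to spread them across cores. A rough analogue of that queue-based style, with a thread pool standing in for a dispatch queue (an analogy sketch, not GCD itself):

```python
from concurrent.futures import ThreadPoolExecutor

# GCD-style: describe small units of work and hand them to a queue;
# the runtime, not the programmer, manages the threads underneath.
def work(item):
    return item * item

with ThreadPoolExecutor() as pool:            # ~ a concurrent dispatch queue
    results = list(pool.map(work, range(8)))  # ~ dispatch_apply over the items

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The appeal in both cases is the same: the programmer expresses what is independent, and the thread management (creation, scheduling, sizing the pool to the core count) is the runtime's problem.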
Well, true, but aren't these same "processor-intensive" processes* the ones with enough value behind them to fund finding ways to use more cores, as well as different kinds of cores like GPUs?
It seems to me that most of these processes are used in team situations, not just by one-off users. In that case it becomes a trade-off between having redundant capacity in each workstation plus a server, or finding a high-enough-bandwidth way to connect to a central cluster that lets each process use as much of the capacity as the office has.
Seeing that bandwidth keeps getting wider and wider, Intel could see a switch in the next few years to lighter clients and heavy central cores. In that case the best client product for them would be one of these 15W CPUs: enough to handle the user's personal demands (email, communication, interface), but with lots of bandwidth so it can be the data coordinator of the user's actions within a team/cluster environment.
It would seem that at 15W Intel has more room for hanging lots of bandwidth (to memory, to other processors, to displays, to storage, to ports) than a 1W ARM can match.
*It's not like the whole app is intensive; the interface is going to spend its time between waiting for the user to react and waiting for the intensive process to finish.
Slow as molasses web browsing?
So what's the bet that Apple exploits this by making thinner devices with thinner batteries and the same battery life, rather than keeping the same devices/batteries to get the "all day" power that these chips would now allow?