I don't want to derail the discussion, but I was curious:

With this announcement from Intel about the direction they want to go, the tight relationship Apple and Intel clearly have, and the not uncommon occurrence of Apple getting chips that Intel hasn't officially released yet, would it be entirely unrealistic to think that the MBAs rumored for June/July may see a processor better than the i7-2657 (in the 11)? Perhaps a 17W 1.66 or 1.83 (stretching it), or something that just runs its HD 3000 a little faster than 350/1000.

Obviously we're not going to see the 10-15W range they're talking about show up in Sandy Bridge but perhaps they've got something up their sleeve to show they're serious. It's probably a lot to ask for but it seems like a new MBA would be the ideal place for Intel to say, "See, we're already upping the speed in the ULV range."

I haven't closely compared the chips Apple has gotten early with what was available to other OEMs at the time so I'm not entirely sure how big the gap has been in the past. I expect it's minor.

Edit: I suppose if it happened it would be something like a 1.7 given the pattern of the i3/i5/i7 clock speeds.

Just wanted to pat myself on the back for amusing timing :)
Intel Preps 1.7GHz and 1.8GHz Processors Suitable for Next MacBook Air

Idly hypothesized it last night, announced this afternoon. Excellent. Now I hypothesize that Intel will mail me a million dollars tomorrow.
 
Nice. I am very curious: what library/framework are you using for multi-core? Is it pthreads? If so, I know that all the (void *) casting is a pain. OpenMP is restrictive, but cleaner.
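
For anyone curious, here's roughly what that (void *) juggling looks like with plain pthreads (compile with -pthread). The worker function and range_t struct are just made-up names for illustration:

#include <pthread.h>
#include <stdio.h>

/* Made-up payload struct: pthreads only hands a thread a single void *. */
typedef struct { int start; int end; long sum; } range_t;

/* The entry point must take and return void *, so every access goes through a cast. */
static void *worker(void *arg) {
    range_t *r = (range_t *)arg;          /* cast back from void * */
    for (int i = r->start; i < r->end; i++)
        r->sum += i;
    return NULL;
}

int main(void) {
    range_t r = { 0, 1000, 0 };
    pthread_t tid;
    pthread_create(&tid, NULL, worker, (void *)&r);   /* cast out to void * */
    pthread_join(tid, NULL);
    printf("sum = %ld\n", r.sum);
    return 0;
}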
It was Java, but I'm not really sure. I just got it working and called it good. It was the end of the semester and I was ready to be done. I'll look into it more next year, as I'll be doing more or less the same class in C#.
That's why Apple invented Grand Central Dispatch. Synchronizing threads is probably best left in the hands of the software, given the complexity involved, like you said, so I think GCD is probably one of the best parts of Snow Leopard.

Even with GCD (Apple's name for it), or whatever a given language offers for synchronizing threads, it is good practice to use the built-in libraries and not mess with them, as chances are we cannot do better.

But GCD does not change the fact that multithreading is a huge PITA. Making sure everything works, and then flushing out and cleaning up the final few bugs, is an even bigger PITA with multithreading, because when the program crashes it just kind of goes all at once, with no real way to trace back to exactly where it blew up.
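
For the curious, here's a rough sketch of what GCD looks like from plain C (libdispatch plus the blocks extension); the queue label and the summing work are just placeholders. The idea is that a serial queue takes the place of explicit locks:

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    /* A serial queue acts as the synchronization point: blocks submitted
       to it never run concurrently, so no explicit mutex is needed. */
    dispatch_queue_t work = dispatch_queue_create("com.example.work", DISPATCH_QUEUE_SERIAL);
    dispatch_group_t group = dispatch_group_create();

    __block long total = 0;
    for (int i = 0; i < 4; i++) {
        int chunk = i;   /* captured by value when the block is created */
        dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            long partial = 0;
            for (int j = chunk * 250; j < (chunk + 1) * 250; j++)
                partial += j;
            /* Funnel the result through the serial queue instead of locking. */
            dispatch_sync(work, ^{ total += partial; });
        });
    }

    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    printf("total = %ld\n", total);   /* 499500 */
    return 0;
}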
 
Our discussion inspired me to write a post titled "What makes parallel programming hard?" (http://bit.ly/ku6vJi). Take a look and suggest changes; criticism and comments from others are welcome too.
 
In the end, RISC wasn't about reducing the *number* of instructions - it was about reducing the *complexity* of the ISA. A simple ISA can be decoded very quickly, and makes multiple-issue and other optimizations easier to implement. (It was also very significant that the transistor count exploded during the timeframe of the RISC vs CISC debate. The Intel Pentium processors debuted with a transistor count of 3.1 million - a Core i7-970 has over a billion transistors.)

Yes, I agree that the argument is silly - especially since Intel figured out with the P6 (5.5 million transistors) how to decode x86 into RISC-like micro-ops internally.

I agree with you, obviously :) I do want to clarify, though, that I did not say RISC is about reducing the number of instructions. My Sun SPARC example showed this very point: it did not have a multiply instruction, which meant lots of add instructions were required instead, which amounts to "removing a complex instruction but increasing the number of instructions."
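
To illustrate that trade-off, here's a rough shift-and-add multiply in C, the kind of sequence a compiler has to emit when there's no hardware multiply instruction (the function name is made up):

#include <stdio.h>
#include <stdint.h>

/* Multiply without a hardware multiply instruction: one mul becomes
   a loop of adds and shifts. */
static uint32_t mul_by_add(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    while (b != 0) {
        if (b & 1)        /* low bit set: add the shifted multiplicand */
            result += a;
        a <<= 1;          /* shift multiplicand left */
        b >>= 1;          /* shift multiplier right */
    }
    return result;
}

int main(void) {
    printf("%u\n", mul_by_add(123, 45));   /* prints 5535 */
    return 0;
}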
 
Intel puts out a more energy-efficient version every mid-year: a new release in winter and a second minor release a few months later. Basically a die shrink and some optimization. They have been doing it since the 1990s.
 
This is incorrect. The whole app being intensive is irrelevant. What matters to a user is how much time the computer takes to respond when he/she clicks something. Thus, the fact that you have to wait for the user doesn't reduce the need to optimize the CPU-intensive parts.

When I think high-end workflow, I think graphics, CAD/BIM, 3D visualisation, video production, scientific modelling. Stuff that has the computer ticking along at high speed for hours or weeks. The sort of stuff where the guy/gal behind the machine is being paid serious money and is probably working to a time budget. So programmers are making choices that could well decide whether the user goes home that night or works right through.

If you consider that in those situations the hourly cost of software and hardware (spread over its working life) is around 5-10% of the hourly cost to the project of having a user on the team, then it's not just about being responsive to the user themselves, it's about being responsive to the team. The more time the processor can keep itself at reasonable capacity, the better. The more work that can run in the background, so the user can get on with the next task when they don't need the results immediately, the more value the system delivers. Waiting for the user is costly when that time could have been better spent doing work for the user to review later.

I know on tight deadlines we have 3-4 users running two or three times as many machines, not because the hardware has maxed out, but because a programmer has decided the machine can't move on until it's finished the task at hand. Generally it's because they want to lock the data instead of taking a snapshot and letting the user decide if it's OK to be a little out of date.
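
As a rough sketch of that snapshot-instead-of-lock idea (using GCD from C; the scene_t struct and the summing loop are just stand-ins for real project data and hours of work):

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical model data the user keeps editing in the foreground. */
typedef struct { size_t count; double items[1000]; } scene_t;

int main(void) {
    scene_t live = { .count = 1000 };
    for (size_t i = 0; i < live.count; i++) live.items[i] = (double)i;

    /* Take a snapshot instead of locking the live data: the long job works
       on the copy while the user carries on editing the original. */
    scene_t *snap = malloc(sizeof(scene_t));
    memcpy(snap, &live, sizeof(scene_t));

    dispatch_semaphore_t finished = dispatch_semaphore_create(0);
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        double total = 0;                        /* stand-in for hours of real work */
        for (size_t i = 0; i < snap->count; i++)
            total += snap->items[i];
        printf("background result (possibly slightly stale): %f\n", total);
        free(snap);
        dispatch_semaphore_signal(finished);
    });

    /* ... the user keeps working here instead of staring at a progress bar ... */

    dispatch_semaphore_wait(finished, DISPATCH_TIME_FOREVER);
    return 0;
}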
 
Apple needs to come out with a new MBA this Tuesday before I reluctantly purchase a higher-end 13" MBP....

Please!!
 
This is awesome news. Finally the world has seen sense. Though I do agree this should only be the case for most laptops.
 