Boom here, boom there, told ya, told him.
It's simple: Apple won't win over x86 if they don't produce a $200 laptop.
The cheapest entry point is what, $999?
Yeah, unless Apple produces a laptop or a phone at $100-$200, it won't win.

Apple only serves a niche market, and I don't think they'd want to hurt their brand value with a $100 phone and a $200 laptop.
 
That's what I'm trying to figure out as well. One difference: to be on par with the 13-inch in performance, you'd have to buy the 512GB Air, since the 256GB Air has only 7 GPU cores instead of 8. The Air also tops out at 400 nits of brightness versus 500 nits for the MBP, though I'm not sure how noticeable that difference is. Some are saying the fan in the MBP may make a big difference in sustained speed, since it can keep the processor cool; the Air is fanless, so once it reaches a set temperature it has to clock down to keep going. I'd like to make a purchase today and I'm stuck in this dilemma.
I’m going to order the MBP13, because as you say the MBP has the active cooling. I think the Air will be a banger of a machine, but the MBP13 is what, .3 of a pound heavier, has the TouchBar, and active cooling so if you’re rendering something (video in FCP, audio files in Logic) it won’t throttle as soon as the Air conceivably would.
 
You don’t run code “on Rosetta.” It’s a translator that does a one-time translation when an app is installed or first run, so that when you run the code it runs natively, not via an emulator.

It won’t be fully optimized, so I’d expect maybe a 10 percent penalty or so, depending on the code.

Caveat: certain types of code do have to be translated on-the-fly due to peculiarities in the x86 memory model, but it’s comparatively rare.
Yeah, I know you don't run code on Rosetta; a better choice of words would have been "via Rosetta." What do you base your 10 percent penalty on? Just curious.
 
Can someone explain the benefit of buying a MacBook Pro if the MacBook Air has the same chip, meaning the same CPU/GPU etc., and is much lighter and nicer looking? Why would anyone pay more for a heavier MacBook Pro, just for the extra battery life?
The main thing you're paying for with the MBP is the fan and the bigger battery. There are other minor differences: the Touch Bar, display brightness, microphones. But the main one is the fan, and the fan is only of real benefit when running applications that max out most or all of the cores for well over a minute or so.

If you don't need the nits, bar, and battery life, but do need sustained performance, you might be able to get the same performance out of the MBA by always running it on top of a Peltier cold pad or a block of ice. :)
 
Boom here, boom there, told ya, told him.
It's simple: Apple won't win over x86 if they don't produce a $200 laptop.
The cheapest entry point is what, $999?
Yeah, unless Apple produces a laptop or a phone at $100-$200, it won't win.

Apple only serves a niche market, and I don't think they'd want to hurt their brand value with a $100 phone and a $200 laptop.

What does "win" mean?

Yeah, Apple is *really* hurting as a company... :p




 
Still a bit unclear:

Where does Apple get its 2.8-3x speed increase from, when these suggest relatively modest (but still very nice) increases of only 20-30% in per-core speed?
Read the various notes at the bottom of the page here:

 
Thanks for your response to my question! Since it looks as if 16GB is the most one can get, they must be pretty confident the M1 can handle the workload for most users with no problem. That's the great unknown: is the M1 13" 8GB model equal to the 2020 13" model at 16GB? It seems to be leaning in that direction, in terms of ability, which makes the M1 Air vs. MBP choice even more of a gray area. What do you think? Most users (>90%?) don't use an MBP for heavy tasks; they play games, watch YouTube, do school work, etc.
You're forgetting they've only transitioned the lower-tier systems.

I'd expect a transition for the four-port MBP13 and a MBP16 in the future.
 
Same here. Having recently acquired a 16-core 2019 cheese grater, I won't be getting any newer Mac soon, but I'm keeping an eye on M1 developments.

It's also a shame that even if the M1 is that speedy, I still can't run Solidworks on it, so my choice stays with x86. After all, my Solidworks license costs almost as much as one of my tower workstations... so I only buy a computer if it can run Solidworks. In my position, the software dictates the hardware I choose.
And imagine if you need to buy Solidworks, AutoCAD, Matlab, and Comsol!
 
Looking at my MBP 2013 15”.....please die please die.....
You know it’ll never die ;) The power cord will need replacing, and maybe the battery, but Macs seldom die....

I’m still using my mid-2012 MacBook Air :)
I'm still using a MacBook Air 2013. Replaced the battery last year, but it's not as good as the original one. Also using a used MacBook Pro 2010 for Snow Leopard and PowerPC apps. Was given that recently. I think many Apple users hold onto their Macs very long. I reckon my MacBook Air will go through 10 years before I upgrade. So, yeah, while I am a loyal Apple user, Apple doesn't get that much money from me. Although, I do buy iPads. My phones are all used iPhones, though, like a 1st gen SE.
... and I thought it was just me!
 
Yeah, I know you don't run code on Rosetta; a better choice of words would have been "via Rosetta." What do you base your 10 percent penalty on? Just curious.
An optimal machine code translation would have to undo all the various optimizations that are important for x86-64 but are inappropriate (slow things down) on an arm64 or Apple Silicon target. And there's not enough information in the binary to allow a completely correct removal of all those inappropriate x86 optimizations; e.g. the LLVM bitcode-to-machine-code mapping isn't always reversible. And there are a lot of those inappropriate optimizations, due to differences in register pressure, memory ordering, etc. So my random guess is that the difference between Rosetta 2 translation and native code generation will be more than 10%.
 
Yeah, I know you don't run code on Rosetta; a better choice of words would have been "via Rosetta." What do you base your 10 percent penalty on? Just curious.

Experience with previous static code translators, dating back as far as the DEC Alpha days. Remember that a lot of code will not even need to be translated - a lot of the code that gets executed lives in SDKs, and that code will all execute natively. For the rest, what you are looking at is essentially the same thing a compiler does, but instead of converting human-readable text to object code, it's converting object code to object code. It won't be as optimized as a native compilation, because some intent is always lost in the initial compilation, but it won't be horrible.
 
An optimal machine code translation would have to undo all the various optimizations that are important for x86-64 but are inappropriate (slow things down) on an arm64 or Apple Silicon target. And there's not enough information in the binary to allow a completely correct removal of all those inappropriate x86 optimizations; e.g. the LLVM bitcode-to-machine-code mapping isn't always reversible. And there are a lot of those inappropriate optimizations, due to differences in register pressure, memory ordering, etc. So my random guess is that the difference between Rosetta 2 translation and native code generation will be more than 10%.

Maybe, but since most x86 code was also compiled with Xcode, Apple is well aware of typical “x86 optimizations” and can likely detect most of them.
 
An optimal machine code translation would have to undo all the various optimizations that are important for x86-64 but are inappropriate (slow things down) on an arm64 or Apple Silicon target. And there's not enough information in the binary to allow a completely correct removal of all those inappropriate x86 optimizations; e.g. the LLVM bitcode-to-machine-code mapping isn't always reversible. And there are a lot of those inappropriate optimizations, due to differences in register pressure, memory ordering, etc. So my random guess is that the difference between Rosetta 2 translation and native code generation will be more than 10%.
Or less depending.

Major software will make its way over. You can already run a lot of open source on ARM Linux platforms, so we will see ports (browsers, for instance).

Heck, how much stuff is browser-based these days anyway?
 
Impressively, my Late 2012 iMac is still chugging along nicely. For an eight year old machine, I'm surprised it's only half as fast as the new MBA. But if the new MBA is twice as fast as the previous one, then I'm really looking forward to what AS can do in an iMac!
 
The ram isn’t all on a single chip. And the ram is not on the same silicon die as the SoC. It’s merely in the same package. You don’t throw out the SoC if the ram doesn’t yield.

Uhm. I’m really sorry to say but you’re just wrong here.

Integrated memory is absolutely built onto the same wafer as the rest of the SoC. Multiple copies of the SoC are printed onto the wafer which is then cut into a single die for each chip.

There is often confusion about the use of the term "die" when discussing system-on-a-chip design: traditionally each die had a specialized purpose, whereas now one die houses components serving multiple discrete purposes, more tightly integrated (removing the waste of copper interconnects, silicon area, etc. that a longer traversal to a separate chip would require).

Testing these complex arrays of components requires more of a hybrid / real life simulation to suss out the most insidious of errors that can occur with poor yield quality.
 
I don't know. I was quoting someone else. Did you read my whole post where I quoted someone? Guess not.
No, I didn't; I just keep seeing M2, D1, Monkey-doo for these chips. I get it, it's speculation, but we have M1s now... confirmed. These are 10V versions of the A14 designs. All great, and how they go from here is just guessing.
 
Reduced instruction set required more intelligent software design. Why bother when you can have simple code executed on an expensive CISC processor? /s

Not that again. That's why you use a compiler: it does all the work for you, and that's ignoring that the ARM instruction set is a lot easier than x86's. My iOS code can be compiled for x86 and run on an x86 Mac with zero work on my side. And you have no idea what's going on inside the M1. AnandTech did a deep dive into the A14, and it's an absolute monster compared to any Intel processor. Vastly superior in every way.
 
The ram isn’t all on a single chip. And the ram is not on the same silicon die as the SoC. It’s merely in the same package. You don’t throw out the SoC if the ram doesn’t yield.
you keep bringing logic and facts to emotional reactions. :) How are they supposed to hate properly if we keep proving them wrong?
 
The last sentence is almost literally what the “I’m a PC” guy said at the end of the keynote. Don't you see how wrong you are when you're taking the very position everyone else uses as a strawman?
Looking at the benchmarks today I am impressed. However, I would still choose performance over battery life. But it appears Apple may be able to deliver both. I still want to see how this first generation works out in the wild before I jump on board, but the benchmarks make me optimistic.
 
I hope Rosetta 2.0 sticks around as long as Rosetta 1.0 (PowerPC to Intel HW switch) which lasted from Tiger 10.4 up to Lion 10.7 (~6 years). I always found Rosetta 1.0 to be very good.
 