Very excited about the possibilities. When devs write native code instead of targeting x86, of course the apps will FLY!
I think it will have something to do with the lack of the 32 GB of RAM that so few people actually need.
It's not really that hard. All modern CPUs do out-of-order execution. If you're already executing stuff in the "wrong" order, then you can do it concurrently.

Yes, we are sure.
Automatically parallelizing single-threaded code is one of the hardest problems in computer science. No way Rosetta is doing that.
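For anyone wondering why: the blocker is data dependencies, not instruction ordering. Here's a toy Python sketch (my own illustration, nothing to do with Rosetta's actual internals) of the difference between a loop a translator could safely split across cores and one it can't:

```python
# Toy illustration: why automatic parallelization is hard.
# The first loop is embarrassingly parallel; each iteration is independent.
# The second has a loop-carried dependency, so iterations cannot run
# concurrently without changing the result.

def independent(xs):
    # Each output depends only on its own input: safe to split across cores.
    return [x * 2 for x in xs]

def dependent(xs):
    # Each iteration reads the previous iteration's result: forced sequential.
    acc = 0
    out = []
    for x in xs:
        acc = acc * 31 + x  # depends on acc from the prior iteration
        out.append(acc)
    return out

print(independent([1, 2, 3]))  # [2, 4, 6]
print(dependent([1, 2, 3]))    # [1, 33, 1026]
```

Rosetta 2 translates x86-64 instructions to ARM ones; it doesn't (and realistically can't) turn one thread into several.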
Yes, this is what the graph is implying for single-threaded loads. However, we don't know how well they perform under multi-threaded loads, or whether the M1 will be thermally limited on long-running tasks (particularly in the Air). I suspect performance will also depend on the specific application, so we need to see real-world results for each one.

As a tech enthusiast, but someone who doesn't necessarily understand the finer side of this kind of thing, can anyone confirm if my understanding is correct: the M1 in the lowest configuration (a MacBook Air), emulating Intel-based macOS applications, performs better than the best high-end Intel iMac running x86 natively?
Is there a way to see how the iMac Pro and Mac Pro perform compared to the computers in the graphic?
Does anyone have an educated guess as to when we will see AS iMacs?
Ya know, your attitude is the worst.
Either that, or you don't know what "doubt" means, or, very likely, you simply can't handle criticism of Apple.
There's nothing wrong, at all, with doubting a company or someone. There's still plenty of doubt to be had.
Further, failing to virtualize x86 is not insignificant. Software in the real world isn't always $0.99 from the store on Macs; it can be very expensive. Asking people to pay for it twice? That's... a tough pill to swallow, and Apple needs to offer something huge in return. Or risk people jumping to Microsoft.
When you force people to jump into a new ecosystem, their options to jump into ANY ecosystem become possible and reasonable. It's a very risky move.
It's not really that hard. All modern CPUs do out-of-order execution. If you're already executing stuff in the "wrong" order, then you can do it concurrently.
Yes, but the issue here is that you are looking at the CPU only. Apple is delivering a package. Now, you may or may not have experience in this field, and as it's the internet anyone can claim to be anything, but you are narrowing the focus onto the CPU design, the specific core. The fact of the matter is that a computer is the sum of its parts, which you should well know. Apple packaging everything together yields efficiencies that other chips do not have and that are outside the norms of computer design. Perhaps I should have said it that way rather than give the impression I was talking merely about CPU core design. This influences benchmarks. Whether just the CPU is similar or not isn't really the point; it's the bigger picture.

Massive reorder buffer: UltraSparc V had that. I know, because I was the original designer of the reorder unit on that chip.
On-die memory: there is no on-die memory. It's in the package, but not on the die. This is easy to see from the actual die photographs that have appeared on Ars (I addressed this claim in another thread and posted the picture). There are a number of LPDDR4X channels with off-chip drivers, so you can even see how the die connects to off-die RAM. Here's the photo: https://images.anandtech.com/doci/16226/M1.png
"Width": what width are you referring to? There is nothing unusual about the execution width. It's, in fact, identical to that used in, say, Athlon-64 and Opteron. (I know, because I owned the integer execution unit for the first of those designs.)
Dedicated units: most chips are now designed as SoCs with on-chip encryption units, etc. AMD transitioned to that design methodology with Bulldozer. I know, because I left AMD right around when that started happening.
The CPU portion of the chip is very similar to every other CPU I ever designed. The SoC methodology is now very common.
What’s different here is competence, not some radical difference between M1 and x86 chips.
I expect that Microsoft will push its consumer "Cloud PC" service quite soon to address some of these limitations. Of course, you can already run virtual desktops in MS Azure, AWS, Shadow, and many more, but those are still focused more on the enterprise (and gamers, to a small extent), not "typical consumer PC users." Microsoft will want to capitalize on the increasing number of consumers who will not be running Wintel client machines (new Mac users, Chromebook & tablet users).

If you need Windows, then you will eventually likely need a Windows machine. Luckily for Apple, only 1% of users use Boot Camp, and something like 5% use VMs, so even if they lose those customers, they will more than make up for it with new buyers who want to run iOS software on their laptop or desktop.
I do game dev with Unity, and they are still at least a year away from a stable M1 build.

Well, I was waiting for this. Why on earth would you buy a 2020 Intel MacBook Air or Pro now?
Well, I think that all depends on whether or not you can do the things you want to do on it.

How can you say "for seemingly no good reason"?
It’s got way more performance than any of the competition, and two or three times the battery life.
Aren’t those good reasons?
Actually, it does.

Bl**dy Hell!!! Does. Not. Compute ...
We are not going to see that this year. 64 GB is not an entry-level requirement.

That's incredible. Hope they release a Mac with 64 GB of RAM soon!
I said it in another article's comment: Legacy support. Apple was able to shed all the x86 legacy BS and focus on just what's needed for today's OS and apps. Intel and AMD have to maintain compatibility with a slew of antiquated technologies.
I anticipate that within the year we'll see x86-64 processor emulation solutions that run Windows x64 (the return of SoftPC and Virtual PC!), and/or Microsoft will begin licensing Windows on ARM to consumers (and by that time they should have their 64-bit emulation worked out). One possible hurdle with the latter is how much faster Windows on ARM will run on Macs than on Microsoft's own Surface Pro X. 😬

I'm definitely not an Apple hater, and these M1s look genuinely amazing. But I rely on an x86 Windows app for work, and currently use Parallels for it. Something will have to give when I next need a new computer.
Good Lord, did we cross over a bridge today? Because the trolls are out in force.

No, we aren't sure; you are. And you're wrong, by the way. No one said Rosetta 2 can't run on multiple cores. The article just emphasises the single-core perf because that already exceeds all the Intels, so it makes for a funnier article. Reading comprehension, people.
There is a multi-core score on the benchmark site. It just happens to be beaten by Intel Macs, because apparently multi-core emulation doesn't scale nearly as linearly as a native run, where the multi-core score for a proper multi-threaded executable with easily parallelized tasks is almost exactly the single-core score times the core count. Here the single-core perf is 1313, so you'd expect a multi-core score of around 10504, yet in the real world it only scores 5888. So multi-core scalability isn't great, but it does exist.
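Back-of-the-envelope, using only the scores quoted above (the 8-core count is my assumption about the M1 config being benchmarked):

```python
# Sanity check of the scaling claim, using the Geekbench numbers quoted
# in this thread: single-core 1313, multi-core 5888.
single_core = 1313
multi_core = 5888
cores = 8  # assumption: the M1's 4 performance + 4 efficiency cores

ideal = single_core * cores      # 10504 if scaling were perfectly linear
efficiency = multi_core / ideal  # ~0.56, i.e. ~56% of ideal under emulation

print(f"ideal multi-core score: {ideal}")
print(f"scaling efficiency under Rosetta 2: {efficiency:.0%}")
```

Worth noting the M1 pairs four performance cores with four slower efficiency cores, so even native code won't hit the ×8 ideal; the emulated result just falls further short of it.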
Apple will work hard either to improve on that, or push ARM ports for the most important apps. Or both.
Well, I was waiting for this. Why on earth would you buy a 2020 Intel MacBook Air or Pro now?
This is all fake news until production units are in the hands of real users and YouTubers. Does anyone *actually* believe the gimped M1 with 8 GB or 16 GB of RAM is going to process video faster than an Intel blowtorch? Probably not. But the marketing is cute.