You are the most toxic poster on this site.
That's a bit harsh with reference to @cmaier. He's probably just frustrated by the tendency of partisan posters who proclaim "it ain't so", rather than calmly reading and analysing available performance data. I find zealots of every stamp to be quite tiresome.

I don't know why people get so riled up about a piece of technology. It either meets your requirements or it doesn't. Just base your purchasing decisions on a rational analysis of the data and your needs.

I run machines with different operating systems on different platforms, according to the job at hand, and there is no emotion involved in the decision. Things like availability, stability, support, software compatibility, migration effort, cost, long-term support etc. are far more important. I like Macs as client machines because they offer a slightly better working experience for my work - others may disagree and have different preferences. The overall user experience is what counts here.
 
The problem is, will Rosetta work with VMware Fusion 12 running a Windows XP VM? I need to run one small program that only runs on Windows XP.
No, Rosetta won't work with virtualization software like VMware or Parallels to run x86 Windows VMs.

If you have a single application, especially for only occasional use, perhaps you could consider running it on a Cloud platform? (e.g. AWS, MS Azure, Google etc.)
 
How much of M1 is patented and how much is free for other chip makers to copy? Won't pc chip makers catch up by copying Apple? How long will Apple have this lead?
 
If you need windows, then you will eventually likely need a windows machine. Luckily for Apple, only 1% of users use bootcamp, and something like 5% use VMs, so even if they lose those customers, they will more than make up for it with new buyers who want to run iOS software on their laptop or desktop.
Windows on ARM is bound to come at some point too, hopefully?
 
No, honestly, the only complaint here is how closed this system is going to become, and for seemingly no good reason.
But the M1 looks like a legit beast.
It's actually going to be more open: you're allowed to install the OS with no security enforced on one section of the SSD, run another section in secure mode, and switch between them. Why do people always make assumptions about Macs instead of investigating?
 
Most of you seem to be unable to do basic research (no offence). Windows virtualisation WILL work, as per Apple's own words!
I don't think this is correct. Can you please supply the Apple quotation that you are referring to?

My understanding is that Apple Silicon will support virtualization of ARM binaries for different operating systems. So you will be able to run Linux for ARM, which was demo'd at WWDC.

Technically, you *should* be able to run Windows-on-ARM, but this will require Microsoft to licence it either to Apple or to the hypervisor vendors (e.g. VMware, Parallels or Oracle for VirtualBox). Currently, MS only has OEM agreements with a small number of ARM-laptop vendors.

When we say "Windows" we need to specify ARM or x86_64. Windows for ARM can emulate x86 (and soon x86_64) applications, so you could, in theory, run Windows-on-ARM in a VM and then emulate x86_64 applications.

AFAIK, Apple has made no comment about this possibility.
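
For the curious, the mechanism behind the Linux demo is Big Sur's new Virtualization framework, which the hypervisor vendors can build on. Here is a minimal sketch of booting an ARM Linux guest with it, assuming you already have an arm64 kernel and initrd on disk; the paths are placeholders, error handling is trimmed, and a real app also needs the virtualization entitlement and a run loop:

```swift
import Foundation
import Virtualization

// Minimal sketch: boot an ARM Linux guest via Apple's Virtualization framework.
// Kernel/initrd paths are placeholders; a real app also needs a run loop and
// the com.apple.security.virtualization entitlement.
let bootLoader = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/path/to/vmlinuz"))
bootLoader.initialRamdiskURL = URL(fileURLWithPath: "/path/to/initrd")
bootLoader.commandLine = "console=hvc0"

let config = VZVirtualMachineConfiguration()
config.bootLoader = bootLoader
config.cpuCount = 2
config.memorySize = 2 * 1024 * 1024 * 1024   // 2 GiB

// Wire the guest console to this process's stdin/stdout.
let console = VZVirtioConsoleDeviceSerialPortConfiguration()
console.attachment = VZFileHandleSerialPortAttachment(
    fileHandleForReading: FileHandle.standardInput,
    fileHandleForWriting: FileHandle.standardOutput)
config.serialPorts = [console]

do {
    try config.validate()
    let vm = VZVirtualMachine(configuration: config)
    vm.start { result in
        if case let .failure(error) = result {
            print("VM failed to start: \(error)")
        }
    }
} catch {
    print("Invalid VM configuration: \(error)")
}
```

Whether anyone gets to do the same with Windows-on-ARM guests still comes down to Microsoft's licensing, as above.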
 
How much of M1 is patented and how much is free for other chip makers to copy? Won't pc chip makers catch up by copying Apple? How long will Apple have this lead?

Apple has many patents, but so, too, do Intel and AMD and Samsung and Qualcomm, etc.

The reasons M1 kills everyone else are:

1) TSMC 5nm (a generation ahead of anything Intel is shipping in any quantity. Is AMD on TSMC 5nm yet? They may be)
2) Arm vs x86 - no matter what you do, you will have to pay a penalty to use x86, because of the instruction decoders which take area and add pipe stages, which then make the branch predictors have to be more complicated to make up for the higher missed-branch penalty. There are also various other complications caused by x86 addressing modes, etc. (a toy software analogy of this decode bottleneck follows after this list)
3) better designers - many of the designers came from places like DEC/PA Semiconductor, Exponential/Intrinsity, AMD, etc. I don’t know what design methodology they use now, but based on results I’d have to guess they are using techniques similar to those used at DEC, Exponential, and AMD (at least the AMD of the mid-1990s, which is all I know about). These sorts of techniques rely much more on talented designers and less on software that does the design for you. Instead, you use software to check the design and offer suggestions, etc. It takes a bit longer, but in all our testing it always bought you a 20% performance/watt improvement.
4) their own innovations in microarchitecture. We can only guess what these are, because they don’t publish papers on this stuff.
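
To make point 2 a bit more concrete, here is a toy software analogy (my own illustration, nothing to do with how any real decoder is built): with fixed 4-byte instructions, every decode slot knows exactly where its instruction starts; with variable-length x86 encodings, slot N+1 can't even find its first byte until slot N's length is known, which is exactly the serial dependency the extra predecode hardware and pipe stages exist to hide.

```swift
// Toy analogy only: why fixed-width instructions are easy to decode in parallel
// and variable-length ones are not. Real decoders are hardware, not loops.

// ARM-style fixed 4-byte instructions: slot i reads bytes [4*i, 4*i+4)
// independently of every other slot, so an 8-wide decode is 8 independent slices.
func fixedWidthSlices(_ bytes: [UInt8], width: Int = 4, slots: Int = 8) -> [ArraySlice<UInt8>] {
    (0..<slots).compactMap { i in
        let start = i * width
        guard start + width <= bytes.count else { return nil }
        return bytes[start..<(start + width)]
    }
}

// x86-style variable-length instructions: the start of instruction i+1 is only
// known once instruction i's length has been determined, so locating the slice
// boundaries is an inherently serial scan.
func variableLengthSlices(_ bytes: [UInt8],
                          lengthOf: (ArraySlice<UInt8>) -> Int,
                          slots: Int = 8) -> [ArraySlice<UInt8>] {
    var slices: [ArraySlice<UInt8>] = []
    var cursor = 0
    while slices.count < slots, cursor < bytes.count {
        let len = max(1, lengthOf(bytes[cursor...]))  // must finish before the next slot can start
        slices.append(bytes[cursor..<min(cursor + len, bytes.count)])
        cursor += len
    }
    return slices
}
```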
 
That is super zippy for entry level machines.

Those scores don’t mean I can load up one of my 50-60 layer After Effects projects though. Even a dedicated ultra fast desktop with an RTX card struggles with this load. Lots of RAM and a discrete GPU are the only choice until SOCs take a much more gigantic leap forward.
The Octane GPU rendering guys were saying that they could fit a scene with 100+ GB of texture files into those 16 GB of unified memory. If they get into it, they could handle those 50-60 layers, but I doubt Adobe will... many years later, FCPX/Motion still renders faster than real time while Adobe chugs along. And I love AE to death - great piece of software.
 
In graphic (not compute) tasks, the M1 could be on par with the Radeon Pro 570X (and way ahead of the 560X). The TBDR architecture of the M1 (for which Metal has been tailored) benefits graphics more than it benefits compute. Also, Apple GPUs can use 16-bit AND 32-bit numbers in shaders, for precision and to boost efficiency, which PC GPUs can't.

That's great! I suspected that since it can render more pixels/s:

M1 41 GPixel/s, 82 GTexel/s
Pro 560X 16.06 GPixel/s, 64.26 GTexel/s
Pro 570X 35.36 GPixel/s, 123.8 GTexel/s
Pro 580X 38.4 GPixel/s, 172.8 GTexel/s
Pro 5300 52.8 GPixel/s, 132 GTexel/s
Pro 5500 XT 56.22 GPixel/s, 168.7 GTexel/s
Pro 5700 86.4 GPixel/s, 194.4 GTexel/s
Pro 5700 XT 95.94 GPixel/s, 239.8 GTexel/s
The TDP for the 560X is 75 watts. The M1 GPU's TDP is under 15 watts. That is a staggering difference, one that can’t be ignored or diminished.
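
Just to put numbers on that, using the fill rates above (note the ~15 W figure for the M1 GPU is an estimate from this thread, not an official Apple spec):

```swift
import Foundation

// Rough fill rate per watt, from the numbers quoted above.
// The 15 W M1 GPU figure is a thread estimate, not an Apple spec.
let m1     = (gpixelsPerSec: 41.0,  watts: 15.0)
let pro560 = (gpixelsPerSec: 16.06, watts: 75.0)

let m1PerWatt     = m1.gpixelsPerSec / m1.watts         // ≈ 2.7 GPixel/s per watt
let pro560PerWatt = pro560.gpixelsPerSec / pro560.watts // ≈ 0.21 GPixel/s per watt

print(String(format: "M1 delivers roughly %.0fx the fill rate per watt of the 560X",
             m1PerWatt / pro560PerWatt))                // ≈ 13x
```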
 
An iPad running LumaFusion on 4 GB of RAM is already exporting video faster than a Windows workstation running Premiere Pro.

I believe in Apple’s ability to integrate their hardware and software, in Intel’s inability to innovate meaningfully in this space, and in the competition’s lack of incentive to properly optimise their offerings for the Windows platform.
lmao...LumaFusion and premiere pro aren’t even in the same league.
 
If that were the case, we'd see auto parallelizers already. They've already done all they reasonably can and decided that each thread only runs on one core.
Some here are focusing on processor technology and don't realize that what they are claiming would require magic. At a high level, many programs do an action, wait on an answer, and do another action based on the answer. This coding style is very common and cannot be parallelized, because a processor cannot guess the future, and it has nothing whatsoever to do with processor optimization. The processor could be instantaneously fast and it would make no practical difference if the bottleneck is elsewhere.

I will grant that a benchmark for a processor is unlikely to be dependent on disk seeks or to wait on a network service, but they opened up the discussion rather wide.
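
A trivial sketch of the point, purely as an illustration: a chain where each step consumes the previous step's answer can't be spread across cores no matter how fast the processor is, while genuinely independent work can.

```swift
import Dispatch

// Serial by nature: every step needs the previous step's answer first,
// so no compiler or CPU trick can split this chain across cores.
func serialChain(start: Int, steps: Int, next: (Int) -> Int) -> Int {
    var value = start
    for _ in 0..<steps {
        value = next(value)   // data dependency on the previous iteration
    }
    return value
}

// Independent work items: trivially parallel, because no item needs another's result.
func parallelMap(_ inputs: [Int], _ transform: (Int) -> Int) -> [Int] {
    var results = [Int](repeating: 0, count: inputs.count)
    results.withUnsafeMutableBufferPointer { buffer in
        DispatchQueue.concurrentPerform(iterations: inputs.count) { i in
            buffer[i] = transform(inputs[i])   // each index is written by exactly one thread
        }
    }
    return results
}
```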
 
lmao...LumaFusion and premiere pro aren’t even in the same league.

And they don’t need to be.

Premiere Pro may be undeniably powerful, but that is also its weakness, because it continues to be horribly unoptimised for pretty much every platform (often relying on pure specs to bulldoze its way through), and its full-featuredness just means added bloat and complexity for those who don't need the added functionality.

I believe there continues to be a sizeable user base who will benefit from more lightweight video editors like LumaFusion and Final Cut Pro. Apple is uniquely positioned to cater to this user base, which is currently overserved by Premiere.
 
Apple has many patents, but so, too, do Intel and AMD and Samsung and Qualcomm, etc.

The reasons M1 kills everyone else are:

1) TSMC 5nm (a generation ahead of anything Intel is shipping in any quantity. Is AMD on TSMC 5nm yet? They may be)
2) Arm vs x86 - no matter what you do, you will have to pay a penalty to use x86, because of the instruction decoders which take area and add pipe stages, which then make the branch predictors have to be more complicated to make up for the higher missed branch penalty. There are also various other complications caused by x86 addressing modes, etc.
3) better designers - many of the designers came from places like DEC/PA Semiconductor, Exponential/Intrinsity, AMD, etc. I don’t know what design methodology they use now, but based on results I’d have to guess they are using techniques similar to those used at DEC, Exponential, and AMD (at least the AMD of the mid-1990s, which is all I know about). These sorts of techniques rely much more on talented designers and less on software that does the design for you. Instead, you use software to check the design and offer suggestions, etc. It takes a bit longer, but in all our testing it always bought you a 20% performance/watt improvement.
4) their own innovations in microarchitecture. We can only guess what these are, because they don’t publish papers on this stuff.
Thanks! I was actually hoping you would answer. :) I'm sure their chip design is classified and patented to some degree but can't other ARM manufacturers like Qualcomm and Samsung just open it up and copy parts of it or would Apple say " I see what you did there" and sue them? I mean others will reach 5nm and Qualcomm and Samsung also make ARM. I guess for Apple it all comes down to better designers, innovations and patents in the future in order to keep their lead. :)
 
Granted I’m no designer, but isn’t the big difference that the AS chip is using vertical layers and a SoC, as opposed to a traditional layout across a PCB?

Curious about your thoughts, given your background.
I think it is more about ditching the legacy. Intel and friends run CISC, complex instructions that have to be translated down into simpler RISC-like micro-operations before they execute, whereas ARM runs natively on RISC. Intel is also stuck with all kinds of issues related to memory management, instruction ordering and speculative execution that are less of a problem with ARM. Large portions of an Intel or AMD chip are dedicated to dealing with piles of legacy stuff; that is much less of an issue with ARM.
 
I do game dev with Unity and they are still at least a year away from a stable M1 build.
They say player builds themselves are already (or will soon be) working.
The editor itself is farther off, but it's my understanding that Rosetta 2 will just launch it on an M1 Mac?
As a Unity tinkerer myself, I'm really curious about the usability of Unity as of next week.
Also, would normal macOS player builds (non-ARM builds) just launch under Rosetta translation?
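
In case it helps anyone testing next week: a process can ask at runtime whether Rosetta 2 is translating it, via the sysctl.proc_translated sysctl that Apple documents for exactly this. A small sketch (the function name is just mine):

```swift
import Darwin

// Returns true when the current process is running under Rosetta 2 translation.
// The sysctl doesn't exist on Intel Macs, so a failed lookup means "not translated".
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == 0 else {
        return false
    }
    return translated == 1
}

// e.g. log it at startup so you know which flavour of player build launched
print(isRunningUnderRosetta() ? "Rosetta-translated x86_64 build" : "native build")
```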
 
Thanks! I was actually hoping you would answer. :) I'm sure their chip design is classified and patented to some degree but can't other ARM manufacturers like Qualcomm and Samsung just open it up and copy parts of it or would Apple say " I see what you did there" and sue them? I mean others will reach 5nm and Qualcomm and Samsung also make ARM. I guess for Apple it all comes down to better designers, innovations and patents in the future in order to keep their lead. :)

Maybe. I can honestly say that when we designed chips we never took apart anybody’s chips to see how they did the design. We *did* read academic papers and attend conferences and such where we would learn things. If we thought the things had merit, we’d try them out and sometimes do them. Never saw anything where we went “aha! This is the key to our success!”

Chip design is the art of making a million little decisions, none of which is going to make or break the design.

We had a running joke - any time we’d try and simulate the effect of some major change in the architecture we’d predict that it would be a 2% effect. And usually it was :)
 
What are your thoughts on the relative memory needs for the x64 world vs the M1 UMA? Will the same amount of memory be comparable?

I’m not very qualified to guess. I don’t know a lot about GPU architecture. It does seem like if you are dedicating some RAM that would otherwise be available for the CPU to GPU usage, then that’s less RAM for the CPU to use. On the other hand, the CPU might have been using an equivalent amount of memory (or more) anyway, for use in communicating back and forth with the GPUs, so UMA would be an improvement.

Anyway, the tl;dr is that it’s outside my experience, so I would only be guessing.
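
That said, one piece that is documented rather than guessed at is what "unified" means at the Metal API level: a buffer created with shared storage is a single allocation that both the CPU and GPU touch directly, rather than a copy in system RAM plus a copy in VRAM that gets blitted back and forth. A minimal sketch of the idea (real Metal calls, but just an illustration, not a claim about how any particular app manages memory):

```swift
import Metal

// On a unified-memory machine the CPU and GPU see the same allocation,
// so a .storageModeShared buffer needs no staging copy or blit to "upload".
guard let device = MTLCreateSystemDefaultDevice(),
      let buffer = device.makeBuffer(length: 1024 * 1024,
                                     options: .storageModeShared) else {
    fatalError("No Metal device available")
}

// CPU writes directly into the same memory the GPU will read.
let values = buffer.contents().bindMemory(to: Float.self, capacity: 256 * 1024)
for i in 0..<(256 * 1024) {
    values[i] = Float(i)
}
// ...then bind `buffer` to a compute or render encoder as usual; no copy needed.
```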
 
Apple has many patents, but so, too, do Intel and AMD and Samsung and Qualcomm, etc.

The reasons M1 kills everyone else are:

1) TSMC 5nm (a generation ahead of anything Intel is shipping in any quantity. Is AMD on TSMC 5nm yet? They may be)
2) Arm vs x86 - no matter what you do, you will have to pay a penalty to use x86, because of the instruction decoders which take area and add pipe stages, which then make the branch predictors have to be more complicated to make up for the higher missed branch penalty. There are also various other complications caused by x86 addressing modes, etc.
3) better designers - many of the designers came from places like DEC/PA Semiconductor, Exponential/Intrinsity, AMD, etc. I don’t know what design methodology they use now, but based on results I’d have to guess they are using techniques similar to those used at DEC, Exponential, and AMD (at least the AMD of the mid-1990s, which is all I know about). These sorts of techniques rely much more on talented designers and less on software that does the design for you. Instead, you use software to check the design and offer suggestions, etc. It takes a bit longer, but in all our testing it always bought you a 20% performance/watt improvement.
4) their own innovations in microarchitecture. We can only guess what these are, because they don’t publish papers on this stuff.

Interesting. That's what gave us a huge advantage at the fabless semiconductor startup I worked at: we had our own custom-designed and hand-laid-out (Magic!) libraries of basic functions (logic blocks, adders and carry look-ahead, registers, multipliers, etc.), plus our own software simulator and verification tools. Fabs were ES2 in France and Atmel. It put us ahead of our competitors in performance, power dissipation, die size, and cost.
 
Well I was waiting for this. Why on earth would you buy a 2020 Intel MacBook Air or Pro now?
Not speaking for myself, but for the friend who talked me into an iMac in 2008. He is a small business owner who needed a new portable for remote work last summer, and he had no idea what the AS MacBooks would be like or what potential problems would show up, so he didn't try to push his older one to last another 3-6 months. If it makes economic sense he can buy a new AS MacBook Pro in a couple of years and probably still get something selling his 2020 MacBook Pro. In the meantime, the problems that pop up (and there will be problems) have time to be solved, either with code updates or with hardware updates if necessary.
 