One good reason to switch to AMD, and a really good one, would be the PCIe lane count; that's a big plus.
You don't even need X399. Just put it all in the CPU, with lanes to spare, for a clean and lean motherboard design.
 
Could you elaborate on that? I'm not sure you know a thing about what you write...

Switching from Intel to AMD doesn't add cost if you are due to update the motherboard anyway (as with the Mac Pro). It doesn't imply refactoring or recompiling macOS either (at least 99% of it; maybe some library still uses soon-to-be-deprecated CPU extensions that AMD has discarded and Intel is discarding on the same schedule), and developers don't need to recompile anything (did you know AMD and Intel binaries are the same thing?). This is a very different scenario from switching CPU platforms (as PowerPC to x86 was): Intel and AMD are both x86.

You can't just put Ryzen in a Mac and expect it to work flawlessly just because they run the same binaries when their architectures are vastly different. I don't expect Apple to add extra resources for optimizing macOS for the Zen architecture, especially when their priorities lie elsewhere.

The whole PC market isn't shrinking; the actual phenomenon is that people don't upgrade their systems as quickly as before because there are no meaningful updates. Until this year Intel ruled the market alone, and its processors barely evolved over the last five years, which means a five-year-old PC is almost 80% as capable as a new one. Without updates, people delay purchases. This creates an artificial sense of a shrinking market, but it is actually a latent-demand scenario.

Bring new CPUs and GPUs to market more often, with meaningful upgrades, and you'll see the money flow again.

I believe next year will see PC demand grow again.

The primary reason for the PC market decline is that smart devices can do most of the basic tasks that previously required a PC. Also, even if you get "meaningful upgrades", would that mean much to most people out there? How many would care unless they actually work with applications that utilize more cores or take advantage of more GPU power?
If you compare Sandy Bridge to Kaby Lake, the performance difference is pretty big, but it is simply hard to tell the two apart when it comes to most basic tasks. Upgrading a PC or Mac was attractive back in the day because the perceived difference in performance was pretty big, but these days, not anymore.
So no, I don't expect PC demand to ever grow again, in a traditional sense.
 
I certainly hope they don't switch to an AMD CPU in the big Mac Pro.

Epyc / Threadripper are great on price/performance but are still outclassed from a pure performance standpoint.

As we all know price/performance has never been a concern for Apple in the past, so why now?
But the price gap is big and Zen 2 is in the works. Also, more PCIe lanes at all levels.
 
It's almost trivially simple to disprove this by creating binaries that will run on Intel but not AMD - and vice versa.
Will Xcode output such binaries without manual coaxing, though?
While I imagine there may be special cases where such a program makes sense, in real life a program built for AMD64/x64 simply works on pretty much any compatible CPU unless you go out of your way to make it not run on a certain brand of processor.
 
Will Xcode output such binaries without manual coaxing, though?

While I imagine there may be special cases where such a program makes sense, in real life a program built for AMD64/x64 simply works on pretty much any compatible CPU unless you go out of your way to make it not run on a certain brand of processor.
I wouldn't consider setting an optimization flag to be "manual coaxing" or "going out of my way".
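For example (a minimal, hypothetical sketch; it assumes an AVX-512-capable Intel chip, and Ryzen as shipped in 2017 has no AVX-512): one target flag is enough to produce a binary that dies with an illegal-instruction fault on any CPU lacking the feature, no "manual coaxing" beyond the flag itself.

// avx_only.cpp - build with: clang++ -O2 -mavx512f avx_only.cpp -o avx_only
// Runs only where AVX-512F exists; elsewhere it traps with SIGILL.
#include <immintrin.h>
#include <cstdio>

int main() {
    // -mavx512f tells the compiler it may emit AVX-512 instructions freely.
    __m512 a = _mm512_set1_ps(1.0f);
    __m512 b = _mm512_set1_ps(2.0f);
    __m512 c = _mm512_add_ps(a, b);
    float out[16];
    _mm512_storeu_ps(out, c);
    std::printf("%f\n", out[0]);
    return 0;
}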
 
Even if you run a compiler in full compatibility "mode", is that really a guarantee that it will run on all x86 CPUs? Each x86 CPU is still very different, and there are a lot of vendor-specific software and hardware implementations, along with all the instructions for running on those specific implementations.

There are millions of undocumented instructions on every x86 CPU, and those differ vastly between AMD and Intel. The documented ones are similar, similar in the sense that different hardware implementations of the same instruction run the same code; but since the implementations are different (hardware optimisations to execute an instruction faster or better), there can be hardware bugs, so an x86 instruction can work on all Intel CPUs and none of the AMD CPUs depending on which bug we are talking about, and your code can fail or work on any given x86 CPU with varying success.

So how can binaries be stated as being the same for AMD and Intel if your code targets a vendor-specific instruction, and that instruction is needed for your code to execute? Sure, there are a lot of binaries that work on both AMD and Intel, but it's not a 100% guarantee, so some flexibility in the notion that "Intel and AMD binaries are the same" is probably needed.

Maybe I am stupid and missed what the x86 discussion here is about, but yeah.
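To make the vendor-specific point concrete, here is a minimal sketch (GCC/Clang on x86-64, using the cpuid.h helper) of the kind of check any vendor-gated code path has to start from; the vendor string comes back as "GenuineIntel" or "AuthenticAMD":

// vendor.cpp - build with: clang++ -O2 vendor.cpp -o vendor
#include <cpuid.h>
#include <cstdio>
#include <cstring>

int main() {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {};
    if (__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
        // CPUID leaf 0 returns the vendor string in EBX, EDX, ECX order.
        std::memcpy(vendor + 0, &ebx, 4);
        std::memcpy(vendor + 4, &edx, 4);
        std::memcpy(vendor + 8, &ecx, 4);
    }
    std::printf("CPU vendor: %s\n", vendor);
    return 0;
}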
 
It's almost trivially simple to disprove this by creating binaries that will run on Intel but not AMD - and vice versa.

Yes, you can deliberately create AMD/Intel-incompatible binaries, but that is not that simple, and it certainly is not the compiler's default behavior.
You can't just put Ryzen in a Mac and expect it to work flawlessly just because they run the same binaries when their architectures are vastly different. I don't expect Apple to add extra resources for optimizing macOS for the Zen architecture, especially when their priorities lie elsewhere.

Your claim ignores a simple fact: the ISA. AMD x86 and Intel x86 are both the same ISA and the same PC architecture; both use PCIe as the peripheral interface and the JEDEC memory interface architecture, the same endianness, the same registers, etc. Said in words you may understand, it's like two cars sharing the same carburetor and the same gearbox mounts: all you need is to buy the engine and adapt the mounts. No sir, the biggest challenge for Apple is to adapt their EFI; while not trivial, they also had to do the same for the new Xeon-W platform (which involves even more changes than AMD would, though AMD doesn't yet support Intel's Optane since it is proprietary).

PS: I have motherboard designs of my own on my resume; I could tell you a thing or two about it if you like.

The primary reason for the PC market decline is that smart devices can do most of the basic tasks that previously required a PC. Also, even if you get "meaningful upgrades", would that mean much to most people out there? How many would care unless they actually work with applications that utilize more cores or take advantage of more GPU power?
If you compare Sandy Bridge to Kaby Lake, the performance difference is pretty big, but it is simply hard to tell the two apart when it comes to most basic tasks. Upgrading a PC or Mac was attractive back in the day because the perceived difference in performance was pretty big, but these days, not anymore.
So no, I don't expect PC demand to ever grow again, in a traditional sense.

Have you tried to edit a video on an iPhone? Is it possible? Yes. Practical? Maybe. The best way? NO.

This is just one use case. You may replace a PC with a tablet (usually more expensive), but ordinary people know the difference between doing things on a phone and doing them on a PC; even navigating the web, the most common use case for phones and tablets, is still a better, more complete experience on a PC.

And of course you name a cause yourself: the same basic tasks, on low-end specifications. With stagnant PC specifications, software developers don't have an incentive to create new applications that require more powerful hardware. Those apps will start to appear in the coming months, driving more buyers to update their PCs; the most popular category: video games.
 
Even if you run a compiler in full compatibility "mode", is that really a guarantee that it will run on all x86 CPUs? Each x86 CPU is still very different, and there are a lot of vendor-specific software and hardware implementations, along with all the instructions for running on those specific implementations.

There are millions of undocumented instructions on every x86 CPU, and those differ vastly between AMD and Intel. The documented ones are similar, similar in the sense that different hardware implementations of the same instruction run the same code; but since the implementations are different (hardware optimisations to execute an instruction faster or better), there can be hardware bugs, so an x86 instruction can work on all Intel CPUs and none of the AMD CPUs depending on which bug we are talking about, and your code can fail or work on any given x86 CPU with varying success.

So how can binaries be stated as being the same for AMD and Intel if your code targets a vendor-specific instruction, and that instruction is needed for your code to execute? Sure, there are a lot of binaries that work on both AMD and Intel, but it's not a 100% guarantee, so some flexibility in the notion that "Intel and AMD binaries are the same" is probably needed.

Maybe I am stupid and missed what the x86 discussion here is about, but yeah.

OMG ... another one....

Please, at least take the time to support your blatant claims with factual data.

Did you know how many PC/Linux applications run the same binaries on both Intel and AMD? Almost all of them.

Yes, you can optimize a binary targeting a specific CPU feature exposed by some specific x86 extension, but new x86 extensions are uncommon and mostly very specialized, and Intel and AMD actually exchange their extension specifications in order to keep both architectures compatible...

Did you know Apple developed, years ago, a FULL OS X port to the ARM architecture? They even built MacBook prototypes running on ARM A?? chips (the same as in the iPhone). That was expensive; ARM and x86 use different instruction sets, endianness, etc., and Apple did all that work without breaking the bank, just to test how feasible an ARM Mac is.
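To illustrate the point about shared extensions, a minimal sketch (using the GCC/Clang __builtin_cpu_supports builtin): one binary probes extensions at runtime, by feature rather than by vendor, so the same executable behaves sensibly on any Intel or AMD x86-64 part.

// features.cpp - build with: clang++ -O2 features.cpp -o features
#include <cstdio>

int main() {
    // Feature checks are per-extension, not per-vendor: the same test
    // passes on any x86-64 CPU, Intel or AMD, that implements the extension.
    std::printf("SSE4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    std::printf("AVX:    %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
    std::printf("AVX2:   %s\n", __builtin_cpu_supports("avx2")   ? "yes" : "no");
    return 0;
}
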
But it is a normal and common occurrence for developers to go beyond least-common-denominator optimizations.
as it is normal in these cases to ship as many different versions of the binaries as needed to support every foreseeable specific architecture.
 
as it is normal in these cases to ship as many different versions of the binaries as needed to support every foreseeable specific architecture.

It's been my experience that where your product has such requirements, you will have a build environment that can integrate new builds rather effortlessly. From dev, QA, etc. onwards to production, scaling horizontally like this isn't as big a deal as it's made out to be.

This whole point of contention seems like a bit of a red herring to me, honestly, especially given Apple has not been shy in the past about compiling and maintaining software in house for different architectures, as stated already.
 
OMG ... another one....

Please, at least take the time to support your blatant claims with factual data.

Did you know how many PC/Linux applications run the same binaries on both Intel and AMD? Almost all of them.

Yes, you can optimize a binary targeting a specific CPU feature exposed by some specific x86 extension, but new x86 extensions are uncommon and mostly very specialized, and Intel and AMD actually exchange their extension specifications in order to keep both architectures compatible...

Did you know Apple developed, years ago, a FULL OS X port to the ARM architecture? They even built MacBook prototypes running on ARM A?? chips (the same as in the iPhone). That was expensive; ARM and x86 use different instruction sets, endianness, etc., and Apple did all that work without breaking the bank, just to test how feasible an ARM Mac is.
I don't really know what set off your weird rant. I even said that there are a lot of binaries that work on both Intel and AMD, so I have no idea why, in your rant, you kind of claim I had no idea that binaries are used on various architectures. I even said it, so yeah.

Intel and AMD exchange SOME, not all, extensions; they exchange those that they think would improve the architecture or should be in the x86 feature set, and those become the "main" feature sets. But as I said, there are millions (yes, millions; no idea why you underlined it and made it bold) of instructions that are specific to a vendor, a model, a generation and even a stepping (fixed hardware bugs), and those are not shared between AMD and Intel. And as I said in my previous post, even though they share the documented x86 instructions, they execute them differently and build their hardware differently.

For instance, even though it is the same instruction, Intel and AMD can handle NULL selectors differently on some instructions; to make your code binary compatible you have to know this and handle the "issue" so your code ends up binary compatible on both AMD and Intel.

AMD and Intel can handle selectors differently too, which would make binaries incompatible if not handled properly.

Not that long ago Intel had that TSX bug the media made a huge fuss about a few years back. It only affected some CPUs, and they had to turn the feature off in an update; if I remember correctly they even fixed it in some CPU models, which means not even the same Intel models are 100% binary compatible: one stepping is not compatible with another stepping of the same CPU model when it comes to that TSX bug.
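The defensive pattern for exactly this situation is to probe the feature bit at runtime rather than assume it. A minimal sketch, assuming GCC/Clang's cpuid.h helpers and the documented RTM bit in CPUID leaf 7:

// tsx_check.cpp - build with: clang++ -O2 tsx_check.cpp -o tsx_check
#include <cpuid.h>
#include <cstdio>

static bool has_rtm() {
    unsigned int eax, ebx, ecx, edx;
    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return (ebx >> 11) & 1;   // CPUID.(EAX=7,ECX=0):EBX bit 11 = RTM
    return false;
}

int main() {
    std::printf("RTM (TSX) available: %s\n", has_rtm() ? "yes" : "no");
    return 0;
}

A microcode update that disables TSX simply clears that bit, so code that checks it keeps running instead of faulting.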

There are bugs in some AMD CPUs, there are bugs in some Intel CPUs, they execute instructions differently, they have different instruction sets, they have undocumented instructions (which are not uncommon to be added, some even added or removed on a CPU-by-CPU basis), etc., etc.

Most stuff is binary compatible when it comes to x86, but as a developer you CAN run into binary incompatibilities, and then it is up to the developer to make the code binary compatible. Your Linux example, for instance: those programs are binary compatible because the developers MADE them binary compatible, not because x86 is binary compatible by design.

And yes, everything I have written has been 100% fact, not "blatant claims" as you proclaim.

Your last paragraph is absurd. This IS common knowledge, everyone knows it, and yet you puff up your chest and proclaim it like you are the chosen one meant to know it and tell the world about it.
 
as it is normal in these cases to ship as many different versions of the binaries as needed to support every foreseeable specific architecture.
If you're actually doing that, you should look at https://stackoverflow.com/questions...s-on-non-haswell-processors/23677889#23677889 . You compile different routines with different optimizations, and link them into one binary which chooses the right routine at runtime.

And saying "every architecture" is absurd. One only optimizes the parts of the program that use the most CPU time, and only for the things which make a big difference. If your program is doing encryption/decryption, you'd make two versions of the core data handling routine - one standard, one using AES instructions. If vector instructions help, then maybe four variants of the core routine - standard, AVX, AVX2 and AVX512.

And you wouldn't need four copies of the source - make the routine name a placeholder identifier and pass the definition from the compile command:

void *foobar (...)
then
cc -c foobar.c -o foobar.o
cc -c foobar.c -Dfoobar=foobar_AVX -mavx -o foobar_AVX.o
cc -c foobar.c -Dfoobar=foobar_AVX2 -mavx2 -o foobar_AVX2.o
cc -c foobar.c -Dfoobar=foobar_AVX512 -mavx512f -o foobar_AVX512.o
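And the runtime dispatch itself is a few lines. A minimal self-contained sketch (GCC/Clang builtins; the variant names here are hypothetical stand-ins for the per-ISA objects built above):

// dispatch.cpp - build with: clang++ -O2 dispatch.cpp -o dispatch
#include <cstdio>

// Stand-ins for the per-ISA variants; in a real build each would live in
// its own object file compiled with -mavx, -mavx2, -mavx512f.
static void foobar_base(void)   { std::puts("baseline"); }
static void foobar_avx(void)    { std::puts("AVX variant"); }
static void foobar_avx2(void)   { std::puts("AVX2 variant"); }
static void foobar_avx512(void) { std::puts("AVX-512 variant"); }

using foobar_fn = void (*)(void);

static foobar_fn pick_foobar() {
    // Widest supported routine wins; the check is by feature, not vendor.
    if (__builtin_cpu_supports("avx512f")) return foobar_avx512;
    if (__builtin_cpu_supports("avx2"))    return foobar_avx2;
    if (__builtin_cpu_supports("avx"))     return foobar_avx;
    return foobar_base;
}

int main() {
    pick_foobar()();
    return 0;
}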
 
I don't really know what set off your weird rant. I even said that there are a lot of binaries that work on both Intel and AMD, so I have no idea why, in your rant, you kind of claim I had no idea that binaries are used on various architectures. I even said it, so yeah.

Intel and AMD exchange SOME, not all, extensions; they exchange those that they think would improve the architecture or should be in the x86 feature set, and those become the "main" feature sets. But as I said, there are millions (yes, millions; no idea why you underlined it and made it bold) of instructions that are specific to a vendor, a model, a generation and even a stepping (fixed hardware bugs), and those are not shared between AMD and Intel. And as I said in my previous post, even though they share the documented x86 instructions, they execute them differently and build their hardware differently.

For instance, even though it is the same instruction, Intel and AMD can handle NULL selectors differently on some instructions; to make your code binary compatible you have to know this and handle the "issue" so your code ends up binary compatible on both AMD and Intel.

AMD and Intel can handle selectors differently too, which would make binaries incompatible if not handled properly.

Not that long ago Intel had that TSX bug the media made a huge fuss about a few years back. It only affected some CPUs, and they had to turn the feature off in an update; if I remember correctly they even fixed it in some CPU models, which means not even the same Intel models are 100% binary compatible: one stepping is not compatible with another stepping of the same CPU model when it comes to that TSX bug.

There are bugs in some AMD CPUs, there are bugs in some Intel CPUs, they execute instructions differently, they have different instruction sets, they have undocumented instructions (which are not uncommon to be added, some even added or removed on a CPU-by-CPU basis), etc., etc.

Most stuff is binary compatible when it comes to x86, but as a developer you CAN run into binary incompatibilities, and then it is up to the developer to make the code binary compatible. Your Linux example, for instance: those programs are binary compatible because the developers MADE them binary compatible, not because x86 is binary compatible by design.

And yes, everything I have written has been 100% fact, not "blatant claims" as you proclaim.

Your last paragraph is absurd. This IS common knowledge, everyone knows it, and yet you puff up your chest and proclaim it like you are the chosen one meant to know it and tell the world about it.

Do you know what the purpose of GCC/Clang (LLVM) compiler directives is?

Have you ever linked a C app?

Did you know that, being based on LLVM, Xcode's Clang inherited (even if never explicitly used) cross Intel/AMD optimizations? I know what I wrote, since my main income source is a compute application I wrote in C++. I build it on my Macs (I have a '13 tcMP, a first-gen iMac 5K and a 15" MBP '17; I compile on both), and I can build POSIX-compatible binaries that run on macOS/Ubuntu (maybe on Windows too) and are optimized for AMD/Intel up to 128-bit AVX. Those binaries don't even need segregated files: a single file hosts code optimized for, and compatible with, both CPU vendors (AMD and Intel actually are the same platform: x86-64). The people who maintain GCC/Clang take care of these peculiar issues on each CPU generation (even from the same CPU vendor you don't have the same features available across the whole line).
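To give a concrete (and deliberately minimal) sketch of what 128-bit vector code of that kind looks like: the _mm_* intrinsics below are plain C++, compile once, and run unchanged on any Intel or AMD x86-64 CPU that reports the feature; with -mavx the compiler emits the VEX-encoded 128-bit forms.

// vec128.cpp - build with: clang++ -O2 -mavx vec128.cpp -o vec128
#include <immintrin.h>
#include <cstdio>

int main() {
    __m128 a = _mm_set_ps(4.f, 3.f, 2.f, 1.f);
    __m128 b = _mm_set_ps(8.f, 7.f, 6.f, 5.f);
    __m128 c = _mm_add_ps(a, b);   // 128-bit add; emitted as vaddps under -mavx

    float out[4];
    _mm_storeu_ps(out, c);
    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}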

Do you know which platforms are really different? NVIDIA and AMD/ATI. But even then, I've written accelerators using exactly the same 'C' code, just taking care of the specific way to squeeze the most out of every core available.


So take care the next time you do a quick Google search for arguments when you don't actually know a thing about the von Neumann architecture.
 
There is an established method of overcoming any difficulties, in the eventuality that Apple were to decide to switch from Intel to AMD CPUs for some future Mac model(s): it's called a SOFTWARE UPDATE. Either by the 3rd-party vendor, or by Apple itself.
No need to "re-invent the wheel".
However, I consider the chances of Apple actually jumping ship and going for any AMD CPUs not all that likely.
A new AMD Mac mini seems more likely than AMD in their higher-priced machines that run macOS.
 
This debunks every anti-AMD compatibility theory:

https://www.reddit.com/r/hackintosh/comments/689xt3/ryzen_hackintosh_success/

a Hackintosh running natively (not emulated, not in a virtual machine, not through a hypervisor) on AMD Ryzen:

http://www.insanelymac.com/forum/topic/325514-amd-high-sierra-kernel-release-and-testing/

Ryzen-specific video:
Which means what? Vanilla macOS High Sierra binaries running on AMD Ryzen.
There are no 128-bit AVX instructions - just 256-bit and 512-bit.

How embarrassing. :eek:
I wrote code for 128-bit. AVX supports 128/256-bit; AVX2 goes up to 512. I'm not embarrassed.
https://en.wikipedia.org/wiki/Advanced_Vector_Extensions
If you're actually doing that, you should look at https://stackoverflow.com/questions...s-on-non-haswell-processors/23677889#23677889 . You compile different routines with different optimizations, and link them into one binary which chooses the right routine at runtime.
You can imagine if I posted all of that here; somebody might have a stroke.
A new AMD Mac mini seems more likely than AMD in their higher-priced machines that run macOS.
The Mac mini and Mac Pro are the least risky places to introduce a newer architecture; I'd later also include the base 21" iMac among the first AMD Macs. Reason: lower volume, lower risk (support/RMAs, etc.).
 
And AVX2 does not support 512-bit registers - that's why the next extension was called AVX-512.
I'm not even considering rewriting the algorithms for 256-bit; I don't care much about AVX beyond 128. Actually, I'm moving the project to CUDA; in the long run it will run on a single GTX 1080 in a tenth of the time and at a quarter of the hardware cost.
 
I'm not even considering rewriting the algorithms for 256-bit; I don't care much about AVX beyond 128. Actually, I'm moving the project to CUDA; in the long run it will run on a single GTX 1080 in a tenth of the time and at a quarter of the hardware cost.
I thought the point of letting the compiler handle it was exactly that you didn't have to rewrite for different instruction sets.

But moving to CUDA seems to be a good move regardless.
 
So take care the next time you do a quick Google search for arguments when you don't actually know a thing about the von Neumann architecture.

Oh boy, here we go...

(You might lose some optimizations but macOS will at least run on AMD. The problem is still that AMD chipsets as they are today don't fit Apple's needs. For things like: Thunderbolt. AMD could change that but Ryzen as it ships today isn't happening.)

(Meantime Apple could literally drive down the street to Fry's and start grabbing Xeons off the shelf and they'd be ready to go.)
 
I just read the news about Apple readying a Pencil for the iPhone...

Seems somebody leaked that a long time ago... (it was leaked by DNG)...

So better get ready for that strange 3-GPU all-AMD modular Macintosh Professional...
I thought the point of letting the compiler handle it was exactly that you didn't have to rewrite for different instruction sets.
AVX instructions are tricky: you can pass the switches to the compiler, but if your inlines are not optimized for that peculiar SMT logic it won't work. I found it more worthwhile to move the whole project to more generic SMP logic (actually downgrading it to 64 bits), which optimizes it for deeper multi-core execution.

It still makes more sense to use AVX for things like codecs, but I found it much more convenient to offload that SMT-style processing to GPUs, or even to bare CPU cores, than to use AVX explicitly.
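As a minimal sketch of that "plain multi-core instead of wide vectors" approach (standard C++, no intrinsics; the loop and sizes are made up for illustration):

// mt_add.cpp - build with: clang++ -O2 -std=c++11 -pthread mt_add.cpp -o mt_add
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

// Scalar kernel; each thread works on its own slice of the arrays.
static void add_range(const float *a, const float *b, float *out,
                      size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i)
        out[i] = a[i] + b[i];
}

int main() {
    const size_t n = 1 << 20;
    std::vector<float> a(n, 1.f), b(n, 2.f), out(n);

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 1;

    std::vector<std::thread> pool;
    const size_t chunk = n / workers;
    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = w * chunk;
        size_t end = (w + 1 == workers) ? n : begin + chunk;
        pool.emplace_back(add_range, a.data(), b.data(), out.data(), begin, end);
    }
    for (auto &t : pool) t.join();

    std::printf("out[0] = %g\n", out[0]);
    return 0;
}
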
For things like: Thunderbolt.
FYI, Thunderbolt 3 is independent of the processor architecture as long as at least 2 PCIe 3.0 lanes are available. Furthermore, from Jan 1 '18 Intel frees the TB3 rights, so anyone can build/sell their own TB3 controllers/cables/etc. and even put the TB3 brand on them without paying royalties to Intel.
 
Oh boy, here we go...

(You might lose some optimizations but macOS will at least run on AMD. The problem is still that AMD chipsets as they are today don't fit Apple's needs. For things like: Thunderbolt. AMD could change that but Ryzen as it ships today isn't happening.)

(Meantime Apple could literally drive down the street to Fry's and start grabbing Xeons off the shelf and they'd be ready to go.)
AMD Threadripper / Epyc don't really need a big chipset, with all of their PCIe lanes: 64 and 128.
 