The 00s are calling, asking for their MHz/GHz war back...

If that is the only "innovation" Intel has to show, they are truly f**ked
 
Keep in mind that the node names aren’t comparable among foundries and have little connection to reality at this point. No dimension in Intel’s 14 nm node is actually 14 nm and same with the other manufacturers. Intel’s 14 nm is closer to Samsung/GloFo’s 7 nm node in terms of gate lengths/wire sizes/etc. Intel is behind in adopting EUV lithography so instead they’re brute forcing their features with multiple patterning. The technology is far more complex and harder to compare than just a single marketing number.
 
Keep in mind that the node names aren’t comparable among foundries and have little connection to reality at this point. No dimension in Intel’s 14 nm node is actually 14 nm and same with the other manufacturers. Intel’s 14 nm is closer to Samsung/GloFo’s 7 nm node in terms of gate lengths/wire sizes/etc. Intel is behind in adopting EUV lithography so instead they’re brute forcing their features with multiple patterning. The technology is far more complex and harder to compare than just a single marketing number.

Sounds like a lot of the same excuses they ran into with NetBurst and Pentium 4

AMD passed them back then with far more efficient CPUs that performed similarly with far less heat and far lower clocks.

So Intel's response? Make the pipeline even more complicated in the HOPE of reaching 5+ GHz. Unfortunately, we all remember the "Prescott" models, infamously referred to as "Pres-Hots" because, running at their rated frequencies, they'd idle at 70+°C even on normal air coolers.
Well, AMD is not able to support TB3, and now Intel is going to support TB4 only for Intel CPUs... I don't see any hope for AMD on Mac.

This is no longer true

Intel dropped licensing costs and restrictions on the Thunderbolt standard and there are now AMD based boards available with Thunderbolt.

Here's an example: https://www.extremetech.com/computi...thunderbolt-3-only-supports-intel-cpu-coolers

though the pickings are still somewhat slim, and manufacturers implement their own solutions.

I'm not sure TB4 will be Intel-only. If anything, AMD is currently better positioned to get on board with TB4 quickly, since AMD has already made the jump to PCIe 4; Intel is still on PCIe 3.

IIRC, to do TB4 on Intel, you'll either need the faster PCIe 4 to keep Thunderbolt on its current four lanes, or use up more PCIe 3 lanes to reach the new bandwidth. Not everyone will be willing to do that, because there are only so many PCIe lanes available, especially on Intel's lower-end CPUs with reduced lane counts.
 
Sounds like a lot of the same excuses they ran into with NetBurst and Pentium 4

AMD passed them back then with far more efficient CPUs that performed similarly with far less heat and far lower clocks.

So Intel's response? Make the pipeline even more complicated in the HOPE of reaching 5+ GHz. Unfortunately, we all remember the "Prescott" models, infamously referred to as "Pres-Hots" because, running at their rated frequencies, they'd idle at 70+°C even on normal air coolers.

I think there are plenty of similarities between what's happening now and the P4 debacle, yes. To me it comes down to Intel's manufacturing hubris more than anything else. Intel thought they could brute-force their way forward, but progress has been harder than they initially expected, and others have now caught up with Intel's 2-4 year process lead. However, Intel's 14nm+++ node isn't nearly as bad as you think; it's still quite competitive. No doubt AMD's design chops are well beyond Intel's, and this is becoming more and more apparent now that AMD is decoupled from GloFo's fabs and is using TSMC.
 
I think there are plenty of similarities between what's happening now and the P4 debacle, yes. To me it comes down to Intel's manufacturing hubris more than anything else. Intel thought they could brute-force their way forward, but progress has been harder than they initially expected, and others have now caught up with Intel's 2-4 year process lead. However, Intel's 14nm+++ node isn't nearly as bad as you think; it's still quite competitive. No doubt AMD's design chops are well beyond Intel's, and this is becoming more and more apparent now that AMD is decoupled from GloFo's fabs and is using TSMC.

Anyone who says Intel's chips aren't competitive from a performance standpoint is fanboying for AMD. I'm a fanboy myself, but I'd rather rely on facts than bold claims.


Given that, it's not really the silicon Intel is pumping out that's the problem here; it's their business hubris in assuming the "INTEL" label alone justifies the higher cost of that performance.

Then consider that Intel is still doing things like separate SKUs just for unlocked CPUs, separate SKUs for ECC support, and charging significantly more for those features, on top of an already more expensive part per unit of performance.

Intel's CPUs are still GOOD. Intel's business model is lazy and consumer-hostile, and the executives operating this way need to be turfed.

It's like the nonsense they pulled when releasing the 10xxx CPUs to compete against AMD's Zen 2. AMD had announced its date months in advance. Intel then, two weeks beforehand, suddenly rushed out test units to "influencers" and announced the SAME DAY, knowing that the influencers, to maximize YouTube monetization, would have to pick one of the two releases to cover on day one and wait a second day for the other chipset. That's just a low move to manipulate the market's perception and "attack" AMD's release.

Intel needs a massive and serious shakeup in their business ethics and behaviour.
 
No you don't. You just call Intel MKL and it takes care of it for you with hand-written assembly code optimized for each individual processor. 90% of all serious computational software does this.

AMD couldn't compete, so they gave up on their imitation (ACML). The funny thing was that MKL and the Intel compilers are so good, MKL was found to run better on AMD processors than AMD's own software.

Even if you write it yourself, you simply put your AVX256 and AVX512 behind different code paths, that is, with a big if statement. There is no need for "different binaries". That is plain wrong.

100% wrong.
Compiler flags are applied to the whole file, not to individual blocks. You cannot compile a single binary that contains both AVX256 and AVX512 paths; you have to compile them separately and link them together.

And AMD doesn't need Intel MKL halving its performance: AMD can use OpenBLAS, which is almost 2x as fast as running MKL.
 
Anyone who says Intel's chips aren't competitive from a performance standpoint is fanboying for AMD. I'm a fanboy myself, but I'd rather rely on facts than bold claims.


Given that, it's not really the silicon Intel is pumping out that's the problem here; it's their business hubris in assuming the "INTEL" label alone justifies the higher cost of that performance.

Then consider that Intel is still doing things like separate SKUs just for unlocked CPUs, separate SKUs for ECC support, and charging significantly more for those features, on top of an already more expensive part per unit of performance.

Intel's CPUs are still GOOD. Intel's business model is lazy and consumer-hostile, and the executives operating this way need to be turfed.

It's like the nonsense they pulled when releasing the 10xxx CPUs to compete against AMD's Zen 2. AMD had announced its date months in advance. Intel then, two weeks beforehand, suddenly rushed out test units to "influencers" and announced the SAME DAY, knowing that the influencers, to maximize YouTube monetization, would have to pick one of the two releases to cover on day one and wait a second day for the other chipset. That's just a low move to manipulate the market's perception and "attack" AMD's release.

Intel needs a massive and serious shakeup in their business ethics and behaviour.
It is Ice Lake that would be almost half as fast as AMD in integer.

Although the H chips might also be a bit better than their counterparts (nothing shown).
 
These discussions are funny.
The only thing that really matters is performance with real workloads, not synthetic benchmarks.
The other thing, which doesn't matter unless you know the design rules, is minimum feature size.
Minimum features are used for making RAM in a process, not logic.
So what if TSMC advertises a 5nm process? That is the smallest feature size, not all features.
At higher frequencies those wires need to be spaced at larger distances. The wires need to be larger to carry more current. So your minimum features don't mean crap when you've got a heavily loaded net that needs to travel some distance. Now you worry about crosstalk, capacitance, and mutual inductance. Signal-integrity issues just get worse.

So what if Intel is shipping 14nm++ or whatever? Is the thermal envelope adequate for the application, and does it meet the performance goal?

Those things matter; minimum feature size doesn't matter a whole lot.
As for an ARM processor in an iPad: when you start adding support for lots of DDR channels, PCIe 4.0, and all the peripheral stuff, it will generate a lot more heat. PCIe is power hungry with its differential I/O. The need to drive board traces that are multiple inches long requires beefy drivers.

Anyway, you can do lots with ARM processors, and Ampere and others show you can, but let's be clear: an A13 mobile processor is not up to the task in its current form.

The A13 has a PCIe x4 interface for NVMe storage, and a dual-channel LPDDR4X memory controller.
I don't think quad-channel is mainstream on desktop yet, and as an iGPU-only platform, x4 is not a bad number: connect that x4 to a PCH and you've got a working platform.

And BTW, Amazon already has a 32-core ARM CPU with a lot of PCIe lanes competing with EPYC at only 105W.
The DT in HEDT means DESKTOP.

You would buy such a laptop just to program AVX-512.

I imagine the Ice Lake architecture was also intended for desktops.

You do not have to own a CPU that runs AVX-512 to program for AVX-512.
Cross-compiling has been around for a very long time.

You are not coding assembly directly, just asking the compiler to do the magic.
Don't waste your time.
People here are fixated on the smallest feature size and don't understand that density and power efficiency are what matter.

Let's hope Intel's 7nm is more power efficient than TSMC N7/N7+.

Intel 10nm has the same density as TSMC N7, but the resulting frequency/power efficiency is awful. AMD can reach 4.5GHz under reasonable power, while Intel Ice Lake runs at 1.x GHz using more power than Intel 14nm at 4GHz.
 
I guess you don't know everything.

Officially, AMD CPUs do not support TB3. ASRock has two motherboards with TB3, but ASRock is historically well known for making special types of motherboards... Those AMD motherboards with TB3 aren't normal motherboards. How come others are still not able to support TB3? Since Ice Lake and Tiger Lake have TB3 and TB4 integrated, it won't be possible to see a separate controller.

Tell me if you know any other brands that support TB3 for AMD CPUs, but I already doubt it.

You can throw a Titan Ridge Thunderbolt 3 add-in card into any AMD motherboard and it will work.
TB3 only needs PCIe lanes and USB connections. There's just currently no third-party Thunderbolt chipset, so you have to go with an Intel Titan Ridge, and that's the same chip you'd need even on an Intel platform.
 
You do not have to own a CPU that runs AVX-512 to program for AVX-512.
Cross-compiling has been around for a very long time.

You are not coding assembly directly, just asking the compiler to do the magic.
Of course people also use inline instructions, not just relying on the compiler.

How do you debug if you cannot run it?
 
Of course people also use inline instructions, not just relying on the compiler.

How do you debug if you cannot run it?

How many programmers need inline assembly? Or rather: how many programmers use a programming language that supports inline assembly?
 
That is how some libraries are programmed.

That's not answering my question:
how many programmers actually use a programming language that supports inline assembly?

Most programmers today just use compiled libraries instead of building them, let alone optimizing them.

And as AMD is leading in performance now, building Intel-optimized binaries with hand-written assembly is no longer a top priority. And Intel has MKL for you to link against, so even optimizing for Intel does not require an AVX-512 enabled CPU.
 
That's not answering my question:
how many programmers actually use a programming language that supports inline assembly?

Most programmers today just use compiled libraries instead of building them, let alone optimizing them.
It does not matter how many. Many of the ones that don't could just buy an ARM tablet.
 
It does not matter how many. Many of the ones that don't could just buy an ARM tablet.

For what?
Buying an ARM tablet to optimize Intel server chip performance?
That's as hilarious as buying AMD to optimize Intel server chip performance.

You should buy something similar to your target system if you're hand-writing assembly code.
If the target system is an AMD EPYC, will you buy an Intel CPU to debug and test your assembly code?

And how many matters a lot: will you force a frontend web developer to buy an AVX-512 enabled laptop just to build a website?
 
For what?
Buying an ARM tablet to optimize Intel server chip performance?
That's as hilarious as buying AMD to optimize Intel server chip performance.

You should buy something similar to your target system if you're hand-writing assembly code.
If the target system is an AMD EPYC, will you buy an Intel CPU to debug and test your assembly code?

And how many matters a lot: will you force a frontend web developer to buy an AVX-512 enabled laptop just to build a website?
You could use a tablet to program in Java or pure .NET if you had the tools.

Who said anything about buying AVX-512 if you know you won't need it?
 
You could use a tablet to program in Java or pure .NET if you had the tools.

Who said anything about buying AVX-512 if you know you won't need it?

So AVX-512 isn't a big advantage for PCs, since most users won't need it.
A CPU that runs just as fast without AVX-512 is even better for compatibility reasons.
 
So AVX-512 isn't a big advantage for PCs, since most users won't need it.
A CPU that runs just as fast without AVX-512 is even better for compatibility reasons.
Of course AMD is now better than Intel in most cases.

Intel would still win in eGPU, ultra-mobile, and current pure dGPU gaming.
 
100% wrong.
Compiler flags are applied to the whole file, not to individual blocks. You cannot compile a single binary that contains both AVX256 and AVX512 paths; you have to compile them separately and link them together.

And AMD doesn't need Intel MKL halving its performance: AMD can use OpenBLAS, which is almost 2x as fast as running MKL.

Wrong.


This option tells the compiler to generate multiple, feature-specific auto-dispatch code paths for Intel® processors if there is a performance benefit. It also generates a baseline code path. The Intel feature-specific auto-dispatch path is usually more optimized than the baseline path. Other options, such as O3, control how much optimization is performed on the baseline path.


With the GNU C++ front end, for x86 targets, you may specify multiple versions of a function, where each function is specialized for a specific target feature. At runtime, the appropriate version of the function is automatically executed depending on the characteristics of the execution platform. Here is an example.

#include <assert.h>

__attribute__ ((target ("default")))
int foo ()
{
  // The default version of foo.
  return 0;
}

__attribute__ ((target ("sse4.2")))
int foo ()
{
  // foo version for SSE4.2
  return 1;
}

__attribute__ ((target ("arch=atom")))
int foo ()
{
  // foo version for the Intel ATOM processor
  return 2;
}

__attribute__ ((target ("arch=amdfam10")))
int foo ()
{
  // foo version for the AMD Family 0x10 processors.
  return 3;
}

int main ()
{
  int (*p)() = &foo;
  assert ((*p) () == foo ());
  return 0;
}

In the above example, four versions of function foo are created. The first version of foo with the target attribute "default" is the default version. This version gets executed when no other target specific version qualifies for execution on a particular platform.
 
Of course AMD is now better than Intel in most cases.

Intel would still win in eGPU, ultra-mobile, and current pure gaming.

eGPU is CPU-independent.

Intel is losing ultra-mobile, as the 4800U destroys Intel's top-end 1065G7.

The 3950X has the same gaming performance as the 9900KS, even at 1080p high-frame-rate levels.
Wrong.





That works, but it still doesn't make sense to add AVX-512 to Zen just to debug assembly code.
The performance characteristics are totally different, and we're not even talking about CPU errata.
 
Of course AMD is now better than Intel in most cases.

Intel would still win in eGPU, ultra-mobile, and current pure gaming.

Unfortunately for Intel, those claims are no longer guaranteed to be true. Zen 2 really, REALLY changed the playing field for performance.

Zen 2's IPC is at least on par with Intel's 9xxx series CPUs. It'll be interesting to see if the 10xxx series is better or similar.

Given that Zen 2 CPUs have equivalent IPC but offer more cores at a lower price, while providing features like ECC and overclocking by default in every CPU, Intel needs a massive shift in its pricing and retail experience.

And one place AMD has done even better than Intel is backwards compatibility. Almost any AM4 socket will take almost any AM4-based CPU with just a BIOS update, meaning even if you're on a first-gen Ryzen CPU, you can easily slot in a Zen 2 CPU without any other system upgrade and get Zen 2 performance (though no PCIe 4).

Intel on the other hand has been less consistent with their sockets.

So in current pure gaming: it's a wash right now. Both Intel and AMD offer similar performance, with AMD getting a slight edge in multithreaded titles.

In eGPU: not really relevant. As long as you have Thunderbolt, eGPU is an option even with AMD, and Thunderbolt 3 is becoming more common on AMD devices.

In ultra-mobile: that's where we'll find out. Prior to today, AMD was still selling Zen 1 based mobile parts. Today's announcement of Zen 2 based mobile parts should shake things up a lot.

Simply put: Intel no longer has the raw power advantage it was relying on to maintain its position in the market despite its boorish business practices. They will need to fundamentally change how they operate if they don't want to get hammered in 2020, even with competitive CPUs.

This is the nature of competition and why it's so good.
 
eGPU is CPU-independent.

Intel is losing ultra-mobile, as the 4800U destroys Intel's top-end 1065G7.

The 3950X has the same gaming performance as the 9900KS, even at 1080p high-frame-rate levels.
1. Show me an AMD laptop with Thunderbolt.

2. The 4800U is mobile; ultra-mobile would be less than 12W.

3. Nobody knows in what percentage of games each is faster.
 