Trying to force PCIe v4 on TB isn't going to fix that. The very bottom-end GPU cards are hard to position as "exciting" eGPU solutions because the performance gap between them and the internal integrated GPUs is going to become increasingly narrow. That says next to nothing about eGPUs and the ability to sell them; you just can't sell them built around "too cheap" video cards.

You are spinning a position that doesn't have any real technical foundation and throwing out misdirection.

There is extremely little tactical or strategic need for Thunderbolt to move off of PCIe v3 before implementations can actually take full advantage of the x4 PCIe v3 bandwidth they already get (the first iteration of TBv3 was pragmatically capped at around 22Gb/s, which is well short of 32Gb/s; get to the point of dynamically flow-controlling a full 30Gb/s and then there is some "need" to move on; otherwise it's just chasing "PCIe v4" tech-porn chatter that isn't going to help Thunderbolt).
That would not be enough for 40GbE either, which is now quite affordable.
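(For reference, the raw numbers behind that, using public spec figures; my arithmetic, not from the posts above:)

```c
/* Back-of-envelope link budget: PCIe 3.0 x4 vs. TB3's observed cap vs. 40GbE.
 * Spec numbers: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding. */
#include <stdio.h>

int main(void) {
    double line_rate = 8.0;            /* GT/s per PCIe 3.0 lane */
    double encoding  = 128.0 / 130.0;  /* 128b/130b encoding overhead */
    int    lanes     = 4;

    double pcie3_x4 = line_rate * encoding * lanes;   /* ~31.5 Gb/s */
    printf("PCIe 3.0 x4 effective : %.1f Gb/s\n", pcie3_x4);
    printf("TB3 PCIe data cap     : ~22 Gb/s\n");     /* as noted above */
    printf("40GbE line rate       : 40 Gb/s\n");      /* exceeds both */
    return 0;
}
```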
 
No, I was just trying to point out a funny observation. As did others. Don't be so fragile.

Feeling smart? Of course you know what he meant, so why be a jerk?


 
False. There is no USB 3.1 anymore. All of USB 3.x has been renamed to USB 3.2, with Gen 1 and Gen 2.

That is from an Intel representative as an official answer. It's technically wrong/outdated, but it gives us enough info about what's happening.

Under the USB 3.1 standard, the fastest speed you can get is what is today called "USB 3.2 Gen 2", at 10Gb/s.
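(For anyone keeping score, here's the renaming as a quick lookup; my summary of the USB-IF naming, not from the Intel statement:)

```c
/* USB 3.x naming: old spec name -> current (USB 3.2) name and speed. */
struct usb_name { const char *old_name; const char *new_name; int gbps; };

static const struct usb_name usb_names[] = {
    { "USB 3.0 / USB 3.1 Gen 1", "USB 3.2 Gen 1",   5  },
    { "USB 3.1 Gen 2",           "USB 3.2 Gen 2",   10 },
    { "(new in USB 3.2)",        "USB 3.2 Gen 2x2", 20 },
};
```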
 
In the $1,100 & up range for Intel laptops from major players, Thunderbolt is not all that rare.

In the sub-$900 range it is. Until very recently, that sub-$800 zone is where the overwhelming majority of AMD laptops were. That is going to change a bit with the new Ryzen 4000 options, but the "low priority" was more about a "cheap as possible" priority than about whether Thunderbolt (TB) was useful or not. TB isn't aligned with race-to-the-bottom system pricing.



Atom was never most of the ultra-mobile space at all. The Celeron/Pentium chips aren't Atom-based. A very large fraction of Chromebooks aren't on Atom anymore either.

Even though AMD was (is?) largely relegated to the lower half of the laptop segment, they didn't have much leverage in the Chromebook space. How much work the chip vendor puts into reference designs and system R&D support matters. Intel largely swapped out Atom for Core-based designs in the lower-to-mid-range Chromebook space and kept ARM implementations from sweeping them away. There aren't zero ARM solutions there, but ARM didn't sweep Intel away like a plague of locusts either.

Thunderbolt is not that rare, but most implementations are wired to PCH lanes and thus not ideal for eGPU.
And most U-series CPUs were configured with a 2GT/s link for power saving, further hurting performance.
 
[Image: slide from AMD's CES 2020 client update deck]


Not bad for 15 watts from AMD....
The iGPU would also blow away the pathetic Skylake one that Apple/Intel has used for the last 4 years...
AND it supports LPDDR4
 
That is from an Intel representative as an official answer. It's technically wrong/outdated, but it gives us enough info about what's happening.

Under the USB 3.1 standard, the fastest speed you can get is what is today called "USB 3.2 Gen 2", at 10Gb/s.


How come AnandTech mentioned USB 3.2 Gen 2? Also, there is USB 3.2 Gen 2x2, which is 20Gb/s. Clearly, Intel doesn't know the USB standard. Since they're going to use PCIe 4.0, there is no reason not to support 80Gb/s.
 
VS19 does not auto-vectorize to 512-bit; you're forced to hand-code.

Almost nobody hand-codes AVX-512.
Do you really know what assembly code is?

And why would I care about performance on Xeon if my target is EPYC?
If Intel keeps losing for a decade, AVX-512 will die just like 3DNow!, SSE4a, and FMA4 did.

How come AnandTech mentioned USB 3.2 Gen 2? Also, there is USB 3.2 Gen 2x2, which is 20Gb/s. Clearly, Intel doesn't know the USB standard. Since they're going to use PCIe 4.0, there is no reason not to support 80Gb/s.

Nobody mentioned "USB 3.2 Gen 2".

They only mentioned USB 3.1, which is an outdated version of the specification. The fastest speed you can get under the USB 3.1 standard has already been renamed "USB 3.2 Gen 2", and since its old name "USB 3.1 Gen 2" is deprecated, I just use the new name for it. It's not about "USB 3.2 Gen 2x2".

There's a huge problem with an 80Gb/s duplex link, and it's called "copper wire". If 80Gb/s were possible now, they could just go 8x instead of 4x. DisplayPort 2.0 using its 80Gb/s in a single direction only also shows that copper wire is a big problem now.
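(The lane math behind that, using public Type-C/DP figures; my sketch, not from the post:)

```c
/* A USB-C cable has 4 high-speed pairs. USB4/TB3 split them 2+2 for a
 * duplex link; DP 2.0 UHBR20 points all 4 one way. Same copper, so the
 * one-way 80 Gb/s doesn't imply an 80 Gb/s duplex link is easy. */
#include <stdio.h>

int main(void) {
    int gbps_per_pair = 20;  /* ~20 Gb/s per pair (USB4 Gen 3 / UHBR20) */
    printf("USB4/TB3 : 2 pairs x %d = %d Gb/s each direction\n",
           gbps_per_pair, 2 * gbps_per_pair);
    printf("DP 2.0   : 4 pairs x %d = %d Gb/s one direction only\n",
           gbps_per_pair, 4 * gbps_per_pair);
    return 0;
}
```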

From their article it looks like Thunderbolt 4 is Intel's marketing name for their USB4 implementation.

No information yet, but I also suspect Thunderbolt will skip PCIe 4.0, as it burns too much power and the signal cannot travel far enough. PCIe 5.0 will use QAM signaling and will probably be the better choice.
 
If you hand-coded AVX2, you can hand-code AVX-512. It is fun.
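(To give a concrete idea of what "hand coding" looks like either way, a minimal sketch with intrinsics; a hypothetical example, not anyone's production code:)

```c
/* Same loop in AVX2 and AVX-512F intrinsics: the 512-bit version is a
 * near-mechanical widening. Build with e.g. gcc -mavx2 -mavx512f.
 * Remainder handling omitted to keep the sketch short. */
#include <immintrin.h>

void add_avx2(float *a, const float *b, int n) {       /* 8 floats/iter */
    for (int i = 0; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(a + i, _mm256_add_ps(va, vb));
    }
}

void add_avx512(float *a, const float *b, int n) {     /* 16 floats/iter */
    for (int i = 0; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(a + i, _mm512_add_ps(va, vb));
    }
}
```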
 

It's possible, but are you really doing that?
Assembly-level performance optimization is not just writing assembly, debugging it, and being done.

It requires long profiling sessions on different CPUs to make sure your code isn't actually hurting performance, since different CPUs behave differently. And you have to write five different versions of AVX-512 code to have it run on all Intel AVX-512-enabled CPUs.
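(For what it's worth, compilers can take some of that versioning pain away; a sketch using GCC's target_clones attribute, assuming a reasonably recent GCC on x86:)

```c
/* GCC emits one clone per listed target and picks the best match at
 * load time via an IFUNC resolver, so one source function can cover
 * AVX-512, AVX2, and baseline CPUs without hand-written dispatch. */
__attribute__((target_clones("avx512f", "avx2", "default")))
void scale(float *x, float s, int n) {
    for (int i = 0; i < n; i++)
        x[i] *= s;   /* auto-vectorized differently in each clone */
}
```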
 

In other words: you have to hand-code AVX-512.

Whether that involves writing inline assembly by hand, or having the compiler generate it for you and then verifying that the generated code 1) works on the CPU you're targeting and 2) doesn't cause performance regressions elsewhere, you're still putting a significant amount of effort into it. By hand.
 
It might not involve a lot of code.
 
Suitable? Yes. However, Apple uses 65W TDP S-series CPUs in the Mac mini; these are 45W TDP H-series parts for the 16" MacBook Pro.

I always thought that it would be great if Apple went back to these 15" MacBook Pro-class mobile CPUs for what became the 2018 iMac.

Going forward, they'd get great economies of scale from these and perhaps a little more headroom for a cooling solution, or even a better iGPU of sorts if Intel's Xe (DG1) could be a contender for replacing AMD in the 2020 Apple 16" MacBook Pros, although there are rumours swirling around that all is not well in that camp.

Perhaps Intel will heavily discount the GPU for Apple to use in low-end stuff or selected SKUs?
 

There's almost no difference between Intel's mobile and desktop CPUs.
Intel's CPUs, on its matured 14nm node, don't differ that much from bin to bin.
A 45W TDP just means Intel heavily limits sustained performance to meet the power target. Those chips will burn the same amount of energy if you make them run at the same all-core boost as their desktop counterparts.
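(You can see that sustained limit directly on Linux; a sketch reading the PL1 power limit through the standard powercap/RAPL sysfs interface, assuming the intel_rapl driver is loaded:)

```c
/* Reads the package PL1 (sustained) power limit in microwatts from the
 * intel_rapl powercap driver. Path is the stock sysfs location. */
#include <stdio.h>

int main(void) {
    const char *path =
        "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw";
    FILE *f = fopen(path, "r");
    long uw;
    if (f && fscanf(f, "%ld", &uw) == 1)
        printf("PL1 sustained limit: %.1f W\n", uw / 1e6);
    if (f) fclose(f);
    return 0;
}
```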
In other words: you have to hand-code AVX-512.

Whether that involves writing inline assembly by hand, or having the compiler generate it for you and then verifying that the generated code 1) works on the CPU you're targeting and 2) doesn't cause performance regressions elsewhere, you're still putting a significant amount of effort into it. By hand.

Using a compiler is not hand-coding. As a programmer, I trust my test cases, my compiler, and my CPU. If something goes wrong I will investigate it, but that's not my daily work.

Almost nobody writes assembly code today, and if my target is EPYC, why would I care about AVX-512?
 
You are saying AMD should support AVX-512 just to debug the assembly code, even if it's slow and useless in production.
It is not useless: if they implement all of AVX-512 over 256-bit units, they can run any program.
 

Intel itself does not implement all of AVX-512 on any of its CPUs.

They all support only a subset of the full AVX-512 instruction set.

So technically there's no Intel CPU that can run any x86 target program yet. Why should AMD do this?

If I write a program using both AVX512ER and AVX512VNNI, we cannot find a CPU that could run it today. None of the Intel CPUs that support AVX512ER support AVX512VNNI.
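(This is easy to demonstrate: each AVX-512 subset has to be feature-checked separately; a sketch using the GCC/Clang CPU-feature builtin, assuming a recent compiler:)

```c
/* Prints whether the running CPU has each subset. On every Intel CPU
 * shipped so far, at most one of these two reports 1, which is the
 * point being made above. */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();
    printf("AVX512ER   : %d\n", __builtin_cpu_supports("avx512er"));
    printf("AVX512VNNI : %d\n", __builtin_cpu_supports("avx512vnni"));
    return 0;
}
```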
 
That is the point. I would prefer to buy a 256-bit CPU that implements all of AVX-512 rather than an incomplete 512-bit CPU.
 

That's the point: Intel doesn't do that, and there's no reason for AMD to do it either, since it's useless for production.

Oh, and you have to license AVX-512 from Intel to use it, which further increases the cost of the CPU.

And as your logic goes: why not also include Arm instructions to make it more universal? Why not add POWER and MIPS and RISC-V to the mix?
 