That's the point: Intel doesn't do that and there's no reason for AMD to do that since it's useless for production.

Oh, and you would have to license AVX512 from Intel to use it, which further increases the cost of the CPU.

And following that logic: why not also include ARM instructions to make it more universal? Why not add POWER, MIPS, and RISC-V to the mix?
It is not useless. AMD would have an inexpensive and clean answer to the Intel mess.

Now you're just being silly.
 
  • Like
Reactions: DanBig
It is not useless. AMD would have an inexpensive and clean answer to the Intel mess.

Now you're just being silly.

AMD's new instruction set does not have to be compatible with the mess Intel created right now.

And licensing that mess from Intel only increases the cost with no benefit to the product.

Just like how AMD64 killed Intel's IA-64 and forced Intel to license AMD64 for their CPUs, a future "AVX512-A" could be a totally different instruction set and force Intel to implement it for compatibility reasons.
 
AMD's new instruction set does not have to be compatible with the mess Intel created right now.

And licensing that mess from Intel only increases the cost with no benefit to the product.

Just like how AMD64 killed Intel's IA-64 and forced Intel to license AMD64 for their CPUs, a future "AVX512-A" could be a totally different instruction set and force Intel to implement it for compatibility reasons.
AMD has created instruction sets in the past that failed because Intel did not adopt them.

I don't want more mess.

I look for compatibility when buying a computer.
 
And following that logic: why not also include ARM instructions to make it more universal? Why not add POWER, MIPS, and RISC-V to the mix?

Are you comparing AVX-512 to completely different architectures? Why?

AMD's new instruction set does not have to be compatible with the mess Intel created right now.

And licensing that mess from Intel only increases the cost with no benefit to the product.

Just like how AMD64 killed Intel's IA-64 and forced Intel to license AMD64 for their CPUs, a future "AVX512-A" could be a totally different instruction set and force Intel to implement it for compatibility reasons.

You mean like the huge success that was 3DNow?
 
  • Like
Reactions: Adult80HD
Are you comparing AVX-512 to completely different architectures? Why?



You mean like the huge success that was 3DNow?

For that person asking AMD to implement AVX512 and calling it "compatibility": even Intel is not fully implementing AVX512 right now.

I mean a huge success like AMD64.
It makes no sense to create something redundant.

To you, AMD is the nonsensical redundancy.

For me, that's called choice.

If the performance is better, it does not matter which instruction set it supports. I will change my compiler flags to use it.
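
To be concrete, for auto-vectorized code that really is just a recompile. A rough sketch (real GCC/Clang flags shown; a flag for a hypothetical "AVX512-A" would work the same way):

[CODE]
/* What "change the compiler flags" means for plain C: the compiler
   picks the SIMD width, the source does not change.

   gcc -O3 -mavx2    -c saxpy.c   -> 256-bit AVX2 code paths
   gcc -O3 -mavx512f -c saxpy.c   -> 512-bit AVX-512 code paths
*/
void saxpy(float a, const float *x, float *y, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];   /* auto-vectorized at -O3 */
}
[/CODE]

Same source, different flag, different SIMD width.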
 
Last edited:
AMD64 succeeded because people wanted x86.

Nobody needs an AVX-512 replacement.

Nobody needed AVX-512 in the first place.

If Intel knew a better way to increase 256-bit AVX performance, then AVX512 would not exist.

Just like if they knew how to keep increasing single-core performance forever, multi-core CPUs would not exist.

AVX512 and multi-core exist because they cannot optimize the hardware any further and decided to pass the burden to software developers.

If AMD is faster without AVX512 support compared to an Intel CPU with AVX512, that's a win rather than a loss for AMD.
 
For that person asking AMD to implement AVX512 and calling it "compatibility": even Intel is not fully implementing AVX512 right now.

I don't know what Intel's plans are with the portions of AVX512 that are effectively Xeon Phi-only, but other than that, Intel is absolutely implementing AVX512. Ice Lake implements 14 of the 20 AVX-512 subsets, and Tiger Lake will add a 15th.
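
If anyone wants to see which subsets their own chip exposes, here's a rough sketch using GCC/Clang's __builtin_cpu_supports (only a handful of the subsets checked, not all of them):

[CODE]
/* Sketch: check at runtime which AVX-512 subsets this CPU exposes,
   using the feature names as GCC documents them. */
#include <stdio.h>

int main(void)
{
    printf("AVX-512F:  %d\n", __builtin_cpu_supports("avx512f")  != 0);
    printf("AVX-512VL: %d\n", __builtin_cpu_supports("avx512vl") != 0);
    printf("AVX-512BW: %d\n", __builtin_cpu_supports("avx512bw") != 0);
    printf("AVX-512DQ: %d\n", __builtin_cpu_supports("avx512dq") != 0);
    return 0;
}
[/CODE]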

I mean a huge success like AMD64.

Yes, but that's clearly the exception rather than the rule. Mostly, AMD either misfires or just catches up with Intel's proposals.

AMD has a shot at leapfrogging Intel with AVX512.
If AMD is faster without AVX512 support compared to an Intel CPU with AVX512, that's a win rather than a loss for AMD.

That's not how arch extensions work.

PowerPC was way ahead of x86 on vectorization with AltiVec, but it didn't do them any good for general-purpose calculations.
 
  • Like
Reactions: Adult80HD
Nobody needed AVX-512 in the first place.

If Intel knew a better way to increase 256-bit AVX performance, then AVX512 would not exist.

Just like if they knew how to keep increasing single-core performance forever, multi-core CPUs would not exist.

AVX512 and multi-core exist because they cannot optimize the hardware any further and decided to pass the burden to software developers.

If AMD is faster without AVX512 support compared to an Intel CPU with AVX512, that's a win rather than a loss for AMD.
So you want either double the number of 256-bit execution units or 10GHz CPUs.
 
I don't know what Intel's plans are with the portions of AVX512 that are effectively Xeon Phi-only, but other than that, Intel is absolutely implementing AVX512. Ice Lake implements 14 of the 20 AVX-512 subsets, and Tiger Lake will add a 15th.



Yes, but that's clearly the exception rather than the rule. Mostly, AMD either misfires or just catches up with Intel's proposals.

AMD has a shot at leapfrogging Intel with AVX512.

I mean there's no rule. Anything could happen, and AVX512 is a mess right now.

An Ice Lake CPU runs AVX512 with a huge clock offset that drops the CPU to around 1.x GHz. AVX512 doesn't give it any advantage compared to the same 15W Ryzen 4800U.

He acts like a low-level engineer trying to optimize assembly code for CPUs, and asking for an ultimate CPU with every instruction he needs makes no sense.

The same instruction may perform totally differently on different architectures.
 
Nobody mentioned "USB 3.2 Gen 2".

They only mentioned USB 3.1, which is an outdated specification name. The fastest speed you can get from the USB 3.1 standard has already been renamed "USB 3.2 Gen 2", and since its old name "USB 3.1 Gen 2" is deprecated, I just use the new name for it. It's not about "USB 3.2 Gen 2x2".

There's a huge problem with an 80Gb/s duplex link, and it's called copper wire. If 80Gb/s were possible now, they could just go 8x instead of 4x. DisplayPort 2.0 using 80Gb/s in a single direction also shows that copper wire is a real constraint now.

From their article it looks like Thunderbolt 4 is Intel's marketing name for their USB4 implementation.

No information yet. I also suspect Thunderbolt will skip PCIe 4.0, as it burns too much power and the signal cannot travel far enough. PCIe 5.0 will use QAM signaling and will probably be the better choice.

"Intel did confirm that they were referencing USB 3.2 Gen 2"
Somebody did. As you can see, a lot of people are already confused about the USB 3.x naming, so I highly doubt what they said.

There is NO reason to keep it at 40Gb/s again.
 
So you want either double the number of 256-bit execution units or 10GHz CPUs.

So you have no knowledge about how CPUs and software work.

If I had a Pentium MMX that could overclock to 1000GHz, it would run your SIMD code faster than an AVX512 Xeon.

It's just that that's not possible, so Intel started making new instructions and multi-core CPUs.
Nobody asked for multi-core, as it is much harder to write code for and sometimes even mathematically impossible to parallelize.
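
Just to put rough numbers on that trade-off (purely illustrative; nobody ships a 1000GHz core):

[CODE]
/* Back-of-the-envelope sketch of the clock-vs-width trade-off above.
   Numbers are illustrative only: an imaginary 1000GHz core doing
   2 elements per cycle vs. a 3GHz core doing 16 elements per cycle. */
#include <stdio.h>

int main(void)
{
    double narrow = 1000e9 * 2;   /* hypothetical ultra-high-clock core */
    double wide   =    3e9 * 16;  /* AVX-512: 16 x 32-bit per register  */

    printf("narrow core: %.0f elements/s\n", narrow);  /* 2.0e12 */
    printf("wide core:   %.0f elements/s\n", wide);    /* 4.8e10 */
    return 0;
}
[/CODE]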
 

The name of the website where this article is published is beyond ironic.

I don't see any way Apple implements this CPU in the current 27" iMac chassis. The base TDP for the Core i9-10900K is 125W, and if it can ramp up to 240% of that, the current iMac has neither the power supply nor the thermal envelope to accommodate it. Intel has completely thrown TDP considerations out the window at this point, even more so than in the past, when they at least tried to pretend that they cared. After the W-3175X was introduced, the final nail was hit in that coffin, IMHO.
 
Last edited:
  • Wow
  • Like
Reactions: DanBig and MikeZTM
"Intel did confirm that they were referencing USB 3.2 Gen 2"
Somebody did. As you can see, a lot of people are already confused about the USB 3.x naming, so I highly doubt what they said.

There is NO reason to keep it at 40Gb/s again.

USB 3.2 Gen 2 is 10Gb/s, so four times that is 40Gb/s.

USB 3.2 Gen 2 is the new name for "USB 3.1 Gen 2".

There is physically no way to double the bandwidth of Thunderbolt now, as that's the current limit for signal integrity, unless they go full fiber and rename Thunderbolt back to its original codename, "Light Peak".
 
Last edited:
He acts like a low-level engineer trying to optimize assembly code for CPUs, and asking for an ultimate CPU with every instruction he needs makes no sense.
If you had ever had to run heavy calculations, you would know why you would do it.
So you have no knowledge about how CPUs and software work.

If I had a Pentium MMX that could overclock to 1000GHz, it would run your SIMD code faster than an AVX512 Xeon.

It's just that that's not possible, so Intel started making new instructions and multi-core CPUs.
Nobody asked for multi-core, as it is much harder to write code for and sometimes even mathematically impossible to parallelize.
You are exceedingly arrogant.
 
If you had ever had to run heavy calculations, you would know why you would do it.

I know I should never touch assembly code myself and should trust Intel MKL or the OpenBLAS library for my compute needs instead of reinventing the wheel and getting it wrong. You are competing with years of work done by teams of Intel engineers.

I know how to read assembly code but do not see any useful case for writing it myself, as I would almost certainly end up with slower code than the standard libraries.

I trust other people's work, and it makes my life easier.
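
For what it's worth, this is roughly what "trust the library" looks like in practice (assuming OpenBLAS or MKL provides the CBLAS header; the SIMD dispatch happens inside the library):

[CODE]
/* Sketch: one call into the CBLAS interface instead of hand-written
   AVX intrinsics. Link with -lopenblas (or MKL's equivalent).
   Tiny 2x2 matrices, just to show the call. */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    double A[4] = {1, 2, 3, 4};   /* 2x2, row-major */
    double B[4] = {5, 6, 7, 8};
    double C[4] = {0, 0, 0, 0};

    /* C = 1.0 * A * B + 0.0 * C; the library picks the best SIMD path */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);

    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 19 22 / 43 50 */
    return 0;
}
[/CODE]

Whether that ends up on AVX2 or AVX512 code paths is the library's problem, not mine.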
 
Last edited:
I know I should never touch assembly code myself and should trust Intel MKL or the OpenBLAS library for my compute needs instead of reinventing the wheel and getting it wrong.

I know how to read assembly code but do not see any useful case for writing it myself.
Just because you don't see the utility does not mean someone else does not need it.
 
"Intel did confirm that they were referencing USB 3.2 Gen 2"
Somebody did. As you can see, a lot of people are already confused about the USB 3.x naming, so I highly doubt what they said.

There is NO reason to keep it at 40Gb/s again.

There might not be a reason, but it's guaranteed...Intel is simply going to rehash Thunderbolt 3, merge in the USB4 goodies (what few there are), and spit it back out as Thunderbolt 4 under the guise of keeping the average consumer from being confused. Said another way, we can just call it Thunderbolt 3+.
 
Just because you don't see the utility does not mean someone else does not need it.

I think it's clear: if you are trying to optimize your assembly for Intel AVX512, then get an Intel Xeon.
If you are that picky about performance on all CPUs, then get all of them for testing.

There's no shortcut for that. Every CPU behaves differently, even when running the same instructions.

Binary compatibility with something that is not yet commonly supported does not make any sense.
 
You will never need it if your CPU runs faster than a CPU that has it.

If you are a real low-level engineer, you have plenty of computers at your disposal and would never ask this question.
You know nothing about my needs.
 