Links to the talkchess threads have been posted. Please read through the discussion before posting. I will post the links again for your convenience.
I wonder if the Nvidia GPUs use their tensor cores for this. If they do, it would be like comparing the M1 NPU to standard GPU compute shaders.
I'd like to see an integrated GPU compared to the M1, for instance Intel Xe.
So you are happy with subpar chess software performance, compared to cheaper alternatives...
No one would take away your "experiences" from you...
I am sure there are also people very happy with a dual-core i3 computer and satisfied with that level of performance... I would not question that.
It is just a question of your expectations and needs. But the discussion of "satisfaction" is subjective... Performance benchmarks like the ones discussed for compiled open-source chess engines, and comparisons with recent CPUs from AMD and Intel, make a more interesting topic and can be discussed much more objectively. It makes a lot more sense to focus on hard performance facts than on personal experiences and personal bias regarding the hardware choices you make.
Except it's missing ML cores, memory bandwidth, and media encode engines, and has less GPU performance, etc.
And comparing total system prices is pointless: much of the cost of an Apple device is in the nicer peripherals.
The M1 does speculative/preemptive execution too, and while it doesn't have as bad an effect, it's not nothing and could be a security problem. I think just about all modern CPUs have it.

Edit: Intel also tried its best to hide poor pipelining by doing crazy preemptive branch prediction/speculative execution... and we all know how well that worked out. Now we have Spectre + Meltdown.
Then you think wrong. Apple never referred to Geekbench.
Anyway, I've yet to see anyone demonstrate how Geekbench is "questionable".
Unlike Stockfish, Geekbench is designed as a cross-platform benchmark tool, and its results are consistent with SPEC tests.
So you're essentially saying "any benchmark claims by Apple, that I don't even know what they are, are crap!"
Yeah, the M1 GPU is slower than an RTX 3070. News at 11. That's the case for all integrated GPUs that come in PC ultrabooks.
It would be better for everybody to start reading from page 1.

I am trying to summarise after 20 pages of arguing, and I have concluded that if you want to play chess, you should buy a chessboard.
For anything else, an M1 based computer will do just fine.
Phew
A lot of things are easier to speed up when you have specially designed engines on the Pro and Max chips.

I'm not sure which code you refer to, because the M1 performs very well in many apps and several industry-standard benchmarks. SPEC tests, for instance. It's the same code compiled for different architectures.
You find a piece of code on which the M1 is slow and claim that the hardware is inherently slower. Then how do you explain the M1 outstanding performance in other tasks, if it's slower? Magic?
It seems you don't understand what optimising the code can do.
It can speed things up a lot. For instance, x265 got >50% faster after Apple included some optimisations for the M1.
It is called optimization of hardware. So now you are going to complain that Apple has some sort of unfair advantage.
Please, post precise links to performance comparisons, with settings etc. These discussions are terrible. It's mostly posters trashing on Apple and its "fanboys". I'm not going to read through all this garbage to find something relevant.
That's some pretty low-quality content there, which you're bringing here yourself.
I don't see the relevance of this. If you care about GPU performance and have $1500 to spend on a laptop, you compare an M1 Mac against a similarly priced $1000 5800H + RTX 3070 machine.
For example, take a look at:
Acer Nitro 5 | 15.6-inch | AMD Ryzen 7 5800H | Nvidia RTX 3070 | 1080p | 144Hz | 16GB RAM | 1TB SSD | £1099.97
The 5800H supports 16 threads on 8 full-speed cores. Much more powerful even than the latest Max and Pro CPUs, for less money.
If you don't care about GPU performance, then both Xe and the M1 are "fine", and Xe-only alternatives will be much cheaper, of course.
Not at all the same. Intel actually has a separate branch target decoder, etc., much more convoluted than anything Apple is likely doing.
Until someone publishes about a new side channel attack, I wouldn’t assume any new processor suffers from security leaks due to speculative execution.
Why don't you read the threads referred to, and the links in them? All your questions about the settings and compiles are answered there.
Or better yet, why don't you just try it yourself and post some real results, if you think the figures are somehow wrong or unrepresentative?
The standard benchmark is run with Stockfish's built-in bench option after compiling the source from GitHub: https://github.com/official-stockfish/Stockfish/blob/master/src/benchmark.cpp
Which results do you think are wrong?
This is a pretty good description of running the included standard SF bench.
Similar procedures on all laptops, according to the chess forum.
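For anyone who wants to reproduce the numbers, the procedure sketched in the thread looks roughly like this (assuming a working compiler toolchain; the exact ARCH value depends on your CPU and Stockfish version, so check `make help` in the source tree rather than trusting the example value below):

```shell
# Clone and build Stockfish from source
git clone https://github.com/official-stockfish/Stockfish.git
cd Stockfish/src

# Pick the ARCH for your CPU, e.g. x86-64-avx2 on a recent AMD/Intel
# chip, or apple-silicon on an M1 (run `make help` for the full list)
make -j profile-build ARCH=x86-64-avx2

# Run the built-in benchmark (implemented in benchmark.cpp); it
# searches a fixed set of positions and reports total nodes searched
# and nodes/second at the end
./stockfish bench
```

Because `bench` uses a fixed set of positions and a fixed depth, the node count is deterministic for a given Stockfish version; nodes/second is the figure to compare across machines.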
You would maybe gain a few percent of performance by optimizing by hand, compared to the compiler, which already optimizes for the CPU.