Correct me if I'm wrong here: the i5-8265U is quad core with a base frequency of 1.6 GHz, and even the i7-8565U has a base speed of 1.8 GHz.
Both of the above processors score around 14k in multicore on average, which I confirmed on Geekbench.
So how can the 8257U, with a base frequency of 1.4 GHz, score 17-18k? Are the SSD speeds also taken into consideration by Geekbench?
 
Correct me if I'm wrong here: the i5-8265U is quad core with a base frequency of 1.6 GHz, and even the i7-8565U has a base speed of 1.8 GHz.
Both of the above processors score around 14k in multicore on average, which I confirmed on Geekbench.
So how can the 8257U, with a base frequency of 1.4 GHz, score 17-18k?

Those are Whiskey Lake. This one is Coffee Lake. It'll be hard to compare, since Coffee Lake doesn't have any other 15W parts.

Are the SSD speeds also taken into consideration by Geekbench?

No.
 
If by "majority of cards" you mean Nvidia, then realize that since Mojave, Apple no longer supports OpenGL. Nvidia is either having difficulty writing Metal drivers, or they have no intention of doing so. Maybe they need technical resources that they're not getting from Apple to overcome whatever issues are holding up macOS drivers, but if so they haven't said so publicly. It could be that Nvidia's simply not willing to support Metal; who knows.
So basically they haven't worked this out. It's OK to decide on a different technology, but "caring" is not what I'd call that. Especially as support for deep learning etc. on non-Nvidia hardware is way behind.

re: the mini, with the upgraded cooling there’s a 65W thermal budget for CPU+GPU. How would you suggest it be allocated, rather than with all 65 Watts dedicated to the CPU/iGPU, as Apple did?
Again, thank you for confirming the non-caring. The decision to go with that cooling was completely up to them; it's not as if any design decision for an upgraded mini was forced on them up front.
 
So basically they haven't worked this out. It's OK to decide on a different technology, but "caring" is not what I'd call that. Especially as support for deep learning etc. on non-Nvidia hardware is way behind.

You want to use a laptop with a 15W CPU for "deep learning"?

Are you sure you're in the right thread?
 
You want to use a laptop with a 15W CPU for "deep learning"?

Are you sure you're in the right thread?

No, I wouldn't want to do that. :)

It was the message "Apple cares deeply about GPUs" that got me laughing and got me started. If that was limited to their mobile lineup, that *might* be true. But with regard to their desktops, this is quite a mixed bag.
 
It is interesting that Intel is making customized variants of its chips exclusively for Apple. I would really like to know what’s up with that.

Because they aren't particularly custom at all.

The actual silicon in the chip package is exactly the same as what is in the chip package in the four-port MBP 13". It is just run at a different clock speed.

https://ark.intel.com/content/www/u...-8257u-processor-6m-cache-up-to-3-90-ghz.html

https://ark.intel.com/content/www/u...-8279u-processor-6m-cache-up-to-4-10-ghz.html

Exact same package size (46x24). Same eDRAM for the GPU (128 MB). Same number of PCIe lanes (x16).
The device ID on the iGPU is different (probably settable at the factory).


It is a 'custom' product SKU (a particular set of features turned on) and perhaps binned slightly differently, but it is the same die as all of the other ones in the Iris class. And that it is the same die is obvious if you just look at the CPU and PCH embedded in the chip package.


This chip is clocked differently, so Intel charges a different price and Apple has a different thermal envelope to put it in. But there are no significant design changes here at all.
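
If you want to check which variant your own machine actually got, you can read the CPU brand string. A minimal Swift sketch using the standard Darwin sysctlbyname call (the sample output in the comment is just what I'd expect on this model, not something verified here):

```swift
import Darwin

// Read the CPU brand string via sysctl. On the 2019 two-port MBP 13" I'd
// expect something like "Intel(R) Core(TM) i5-8257U CPU @ 1.40GHz".
func cpuBrandString() -> String? {
    var size = 0
    // First call asks the kernel for the required buffer size.
    guard sysctlbyname("machdep.cpu.brand_string", nil, &size, nil, 0) == 0 else {
        return nil
    }
    var buffer = [CChar](repeating: 0, count: size)
    guard sysctlbyname("machdep.cpu.brand_string", &buffer, &size, nil, 0) == 0 else {
        return nil
    }
    return String(cString: buffer)
}

print(cpuBrandString() ?? "unknown CPU")
```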
 
Interesting that when I priced out a base 1.4 and upgraded it to 1.7 GHz, 16 GB, and 1 TB it cost the same as the 2.4.

Good to see the base model updated; shame about the Touch Bar, two TB3 ports, and keyboard.
 
As I said, even for 'general use' 4 cores are always better than two, but you're only going to see those 50-100% speed increases suggested by the multi-thread Geekbench scores on jobs like video transcoding that have been heavily optimised for multicore.
Audio work also does well with several cores. Basically each audio chain/track can be processed by a different thread; that comes naturally from the audio workflow. Unless you have one giant CPU-hogging soft synth or whatever, you're going to see a proportional boost in performance with more cores (and with as many cores as possible; Ableton Live supports 64 threads, for example). Also, audio work needs quite a lot of CPU headroom for smooth realtime processing and to keep latencies low.
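
A rough sketch of what that per-track parallelism looks like. The Track type and the gain-only "effect chain" are made up for illustration, and GCD's concurrentPerform stands in for a real audio engine's worker threads:

```swift
import Foundation

// Toy illustration: each track's effect chain is independent, so tracks can
// be rendered on separate threads, and only the final mixdown is serial.
struct Track {
    var samples: [Float]
    var gain: Float
}

// Stand-in for a real per-track chain (EQ, compressor, soft synth, ...).
func process(_ track: Track) -> [Float] {
    track.samples.map { $0 * track.gain }
}

func renderBuffer(tracks: [Track]) -> [Float] {
    var outputs = [[Float]](repeating: [], count: tracks.count)
    let lock = NSLock()
    // One unit of work per track; GCD spreads the iterations across cores.
    DispatchQueue.concurrentPerform(iterations: tracks.count) { i in
        let rendered = process(tracks[i])
        lock.lock(); outputs[i] = rendered; lock.unlock()
    }
    // Serial mixdown: sum every track at each frame.
    let frames = tracks.first?.samples.count ?? 0
    return (0..<frames).map { f in outputs.reduce(Float(0)) { $0 + $1[f] } }
}

let tracks = (0..<8).map { _ in Track(samples: [0.1, 0.2, 0.3], gain: 0.5) }
print(renderBuffer(tracks: tracks))
```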
 
They've done it in the past. Remember when Steve Jobs announced the first MacBook Air? He mentioned they'd been working with Intel in order to have a custom chip that fit the tiny internals of the MacBook Air.

Intel didn't just work on that chip for Apple. Other system vendors had input into it also. The input may not have been completely even. Rumors are that Apple has been the major vendor pushing Intel forward on iGPU improvements. And Apple is the main buyer of the Iris variants of the lineup.

But this current chip is FAR from being custom. It is the exact same package and silicon that is in the four-port MBP 13". The 'settings' on the firmware/chip are different to hit a different TDP envelope. That is all. Intel merely turned the 'knobs' on what was already there.
 
This chip is clocked differently, so Intel charges a different price and Apple has a different thermal envelope to put it in. But there are no significant design changes here at all.
Since the thermal envelope is the same as the four Thunderbolt version (I think it is?), it practically means that it won't run at the base clock unless under very heavy load, and will probably turbo most of the time? Am I missing something? That would explain why it has a similar score in benchmarks.
Base clock is supposed to be the "guaranteed speed", but if it can turbo it will; and it can also go below the base clock if it's too hot or under light load. So really, the base clock doesn't say much beyond a commercial promise.

edit: nope, I was wrong; the two Thunderbolt port model has only one fan, while the four Thunderbolt model has two. So it's fitting that the base clock of the former would be lower. I didn't think it would be a completely different design.
see
https://fr.ifixit.com/Vue+Éclatée/MacBook+Pro+13-Inch+Two+Thunderbolt+Ports+2019+Teardown/124676
and
https://fr.ifixit.com/Tutoriel/Vue+éclatée+du+MacBook+Pro+13-Inch+Touch+Bar+2018/111384
 
Since the thermal envelope is the same as the four Thunderbolt version (I think it is?)

This one is a 15W TDP part and the four-port uses a 28W one.


it practically means that it won't run at the base clock unless under very heavy load, and will probably turbo most of the time? Am I missing something? That would explain why it has a similar score in benchmarks.

So yes, as long as you're throwing 10-12W of 'work' at the two, the results will largely be the same. In single-threaded work (where 3 of the core complexes on the chip are largely asleep) it is basically the same chip with the same power applied (the max Turbo values aren't that far apart). So if most of the workload can be done with just 2 cores (like on the previous model), then this two-port model is the substantively more affordable one (mainly another approximately $300 for two more TB ports).


Base clock is supposed to be the "guaranteed speed", but if it can turbo it will; and it can also go below the base clock if it's too hot or under light load. So really, the base clock doesn't say much beyond a commercial promise.

The four-port model has two (I think slightly smaller) fans. It will hold up slightly better if pushed extremely hard. ($300 better... maybe, depending on the workload's value.)
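
A back-of-the-envelope illustration of why the two bins end up in the same place: the sustained clock is set by the power cap, not by the base-clock bin. All the constants here are invented for illustration; they are not Intel's real numbers:

```swift
import Foundation

// Toy model: dynamic power scales roughly with f * V^2, and voltage has to
// rise with frequency, so power grows superlinearly with clock. Under the
// same power cap, the 1.4 GHz-base and 2.4 GHz-base bins of the same die
// settle at roughly the same sustained clock. Every constant is made up.
func packageWatts(atGHz f: Double) -> Double {
    let volts = 0.6 + 0.12 * f          // crude invented V-f curve
    return 4.8 * f * volts * volts      // ~ (cores * C) * f * V^2
}

func sustainedGHz(capWatts: Double, maxTurbo: Double = 3.9) -> Double {
    var f = maxTurbo                    // start at max turbo...
    while f > 0 && packageWatts(atGHz: f) > capWatts {
        f -= 0.01                       // ...and back off until under the cap
    }
    return f
}

// Same cap -> same sustained clock, regardless of the base-clock bin.
print(String(format: "15W cap sustains ~%.2f GHz", sustainedGHz(capWatts: 15)))
print(String(format: "28W cap sustains ~%.2f GHz", sustainedGHz(capWatts: 28)))
```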
 
This one is a 15W TDP part and the four-port uses a 28W one.

So yes, as long as you're throwing 10-12W of 'work' at the two, the results will largely be the same. In single-threaded work (where 3 of the core complexes on the chip are largely asleep) it is basically the same chip with the same power applied (the max Turbo values aren't that far apart). So if most of the workload can be done with just 2 cores (like on the previous model), then this two-port model is the substantively more affordable one (mainly another approximately $300 for two more TB ports).

The four-port model has two (I think slightly smaller) fans. It will hold up slightly better if pushed extremely hard. ($300 better... maybe, depending on the workload's value.)

Not sure I understand the thing about 2 cores; I still need all the cores I can get. But yeah, I figured the four Thunderbolt model also has two fans. About the TDP, I don't understand if it's a logic limitation baked into the processor, or just a guideline for manufacturers to scale their cooling systems. Wouldn't this 1.4 GHz processor perform exactly the same as the 2.4 GHz one with the same cooling system?
 
You want to use a laptop with a 15W CPU for "deep learning"?

Are you sure you're in the right thread?

Deep learning utilizes the GPU, not the CPU. Just look at the Nvidia Jetson Nano, with a very beefy GPU but an old ARM Cortex-A57 CPU that was introduced in 2012.
 
No, I wouldn't want to do that. :)

It was the message "Apple cares deeply about GPUs" that got me laughing and got me started. If that was limited to their mobile lineup, that *might* be true. But with regard to their desktops, this is quite a mixed bag.

I was the one who wrote that... and I stand by it. If it hadn't been for Apple pushing Intel, as well as some GPU engineers inside of Intel pushing for space on the CPU die, we wouldn't have anywhere near the performance level of the Iris GPUs that we have now or that have come before. This article is one of many that lays it down - https://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested

If your prism is simply gaming performance, that's your prerogative. Frankly, I couldn't care less... and arguing about deep learning is pointless, as Apple is focused on its own Core ML and Metal technologies and on-device ML with the A11 and A12 Bionic chips, and CUDA does not have a place on the Mac platform. Besides, why buy a Mac just to put an NVIDIA GPU in it and boot into Windows or Linux, which are clearly more entrenched in the AI/DL space?

NVIDIA is focused on making CUDA the only standard for GPU-accelerated AI and ML, which is their right to try, and Apple has the right to say "screw that", which is evident from the lack of NVIDIA support in Mojave and Catalina.
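
You can even see that driver reality from the app side: Metal only enumerates GPUs whose vendors ship Metal drivers, so an unsupported NVIDIA card on Mojave/Catalina simply never shows up in this standard query (macOS-only sketch):

```swift
import Metal

// List every GPU this Mac exposes through Metal. A card whose vendor ships
// no Metal driver simply never appears here, regardless of how capable the
// silicon itself is.
for device in MTLCopyAllDevices() {
    let kind = device.isLowPower ? "integrated" : "discrete"
    let removable = device.isRemovable ? ", eGPU" : ""
    print("\(device.name) [\(kind)\(removable)]")
}
```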

I am thoroughly convinced that if Intel offered an Iris Pro or Iris Plus version of their 65W TDP S-series desktop CPUs for the 8th and 9th Gen, Apple would be using them in the mini. But Intel killed that off after Broadwell (i7-5775R/5775C) while they slacked off on Iris to go chase NVIDIA down the GPU rabbit hole. Apple kept the same chassis for the mini for a variety of reasons: some reasonable, some cheap, some most likely concessions by the hardware engineering team to management for letting them build a new mini after 4 years of neglect by Apple management. There was no way a dGPU and a decent CPU were fitting inside that chassis, dissipating all that heat, and still having enough power from that tiny PSU to make it happen. Kaby Lake-G just isn't performant enough, topped out at 4 cores, and seems more like a curiosity piece than a genuine choice. I'm glad Apple chose the 65W desktop CPUs, and it would be nice if they had Iris iGPUs as well. But they don't, and hopefully Apple can work some magic in the drivers to make them perform at their best. In the meantime, you can add an eGPU if that is really a necessity, or choose an iMac or the Mac Pro.

That's my 2¢.
 
No, I wouldn't want to do that. :)

It was the message "Apple cares deeply about GPUs" that got me laughing and got me started. If that was limited to their mobile lineup, that *might* be true. But with regard to their desktops, this is quite a mixed bag.

I think "mixed bag" is a fair assessment.

We've been seeing some improvements like eGPU support and the MPX slots in the Mac Pro (maybe Nvidia or a third party can make a card for that?), and they're quite competitive on their mobile GPU front.

But as far as Macs are concerned, most models just don't have much of a beefy GPU to speak of, so you pretty much rely on eGPU for that. And then there's stuff like dropping OpenGL/OpenCL, not implementing Vulkan, (apparently) refusing to sign Nvidia's kernel extensions, …
Kaby Lake-G just isn't performant enough,

I wonder what happened to that experiment. Was a major vendor asking for it? Was it a toy project? A stopgap?
 
I think "mixed bag" is a fair assessment.

We've been seeing some improvements like eGPU support and the MPX slots in the Mac Pro (maybe Nvidia or a third party can make a card for that?), and they're quite competitive on their mobile GPU front.

But as far as Macs are concerned, most models just don't have much of a beefy GPU to speak of, so you pretty much rely on eGPU for that. And then there's stuff like dropping OpenGL/OpenCL, not implementing Vulkan, (apparently) refusing to sign Nvidia's kernel extensions, …

I wonder what happened to that experiment. Was a major vendor asking for it? Was it a toy project? A stopgap?

I think it was a one-off on Intel's part to see if they could make an attractive CPU-with-dGPU package that fit within two power envelopes, allowing them to sell a fairly integrated solution to their OEM customers. Especially considering how they let Iris Pro wither away slowly.

The 65W TDP parts (GL) were presumably meant to appeal to laptop vendors and the 100W TDP parts (GH) to SFF desktops and all-in-ones. The problem is that it wasn't very fast overall, and it was marketed as an 8th Gen CPU when it is really a 7th Gen core with a Vega GPU tacked on, which I think went over like a lead balloon with OEMs. Everything value-oriented uses iGPUs and everything performance-oriented tends to be NVIDIA or higher-end AMD, while gaming rigs (desktop and portable) are where the OEMs are probably able to make some margin back, and NVIDIA is dominant in gaming and pretty aggressive with the Max-Q GPUs. I think Kaby Lake-G was a solution in search of a problem that no one, even Apple, had. Intel then relegated it to the Hades Canyon NUC, which basically served as a concept car, for lack of a better term.

Intel may have learned some valuable lessons, though, as far as EMIB goes, and will apply that to Sunny Cove CPUs when they end up grafting 10nm iGPUs onto the same package as a 14nm++++++ (Infinity and Beyond) CPU. I am typing on the run or I would cite a couple of AnandTech and Tom's Hardware articles to back up my theory a bit better. Feel free to challenge or destroy my reasoning!
 
All valid points, except the main sticking point for me (and many others) is that the base model comes with a ridiculous 128GB SSD, and upgrading to a more reasonable 256GB is $200. $200 for what is about $30 worth of parts. Want a 500GB SSD? Then add $400 for a $50 part. 500GB SSDs (fast ones) are available everywhere for Windows systems for $50-$100. Memory? 8GB for $200? And old DDR3 at that! Laughable.

And of course, with everything soldered to the system board, no option to EVER upgrade it. So you have to buy it now and you have to buy it from Apple. Two words -- RIP. OFF.

So, sure, Apple is ripping off their customers LESS now, but they should stop gouging them for necessary upgrades. Who expects to make do with a 128GB SSD for 5+ years without being encumbered by needing internet access to store things in slow iCloud?

Get rid of the 128GB base configuration, make it 256GB, and either start at 16GB memory or make it a $100 upgrade.

Well, not to say your points aren't valid, but you kind of proved my original point.
 