
hajime

Owned an HP Z workstation (with dual Xeon 12-16 cores and a professional GPU) and an rMBP 2012 15" a few years ago. Did not see a noticeable overall performance gain from the workstation compared with the rMBP. Perhaps rendering was just a bit faster. How fast is the current maxed-out MBP 2017 15" compared with the best desktops on the market for simulations, Matlab/Simulink and 3D CAD applications?
 
The GPUs aren't very good for 3D CAD. However, with the stupid-fast SSDs they have, they feel VERY fast in day-to-day heavy tasks.
 
Generally it's (relatively) very slow if you compare it to a similarly priced desktop PC in terms of computing power and graphics.

If you don't plan to carry it around, it's not the best choice; you're better off with an iMac or a PC workstation.
 
Over 15 years ago, I used two types of computer: a powerful self-built desktop PC and lightweight laptops (e.g. PowerBook Duo 280c, ThinkPad X40). For the past 10 years, I have mainly used the highest-end MBP, as it satisfies my needs and I work from place to place.

Given that MBPs are so expensive these days and my 17" MBP is 3 kg, I'll probably just get the maxed-out MBP rather than an ultra-portable plus a powerful desktop.

Compared with the MBP 2010 17", how much faster is the MBP 2017 15"?
 
Compared to something like a ThinkStation P910 with Quadros (and 4-25 times the RAM), the MacBook Pro should in theory be laughably massacred if the apps being used can take advantage of the multiple cores/CPUs! (Although it seems very possible that both the coming iMac Pro and the modular Mac Pro will give many high-end Windows workstations a decent run for the money!)

I can't speak to how well MATLAB uses multiple cores, but in the case of the current version of Stata there is a huge difference when running one of the multicore versions vs. the single-core version.
 
Compared with the best desktops on the market? It's not even close.

It depends what you're doing, though, and how your workload scales.

But the GPUs in the MacBooks are trash by comparison.
CPUs in any portable are massively limited compared to anything desktop-class i7 or faster.
Storage options are limited.

You need to look at the applications you plan to use, however. If they can't take advantage of many threads, or are storage- or network-limited, there may not be such a big difference.

If they are heavily multithreaded and CAN take advantage of the resources, the difference can be absolutely huge.
 
Just like your 2012 MBP offered only a fraction of the performance of your workstation (regardless of whether you noticed this or not), a modern workstation is much faster than the 2017 MBP, although the difference in performance might have shrunk a bit. For one, the GPU in the MBP got a substantial boost, even though it's still about 4-5 times slower than high-end desktop cards.
 
After I replaced the hard drive of my MBP 2010 17" with an SSD, I don't feel any difference in performance between it and the two other previous work computers (rMBP 2012 15", rMBP 2014 15"). Sure, the rMBPs have higher-resolution displays and are lighter. However, I don't feel a noticeable difference in performance.

Some Matlab toolboxes take advantage of multiple cores, but not all. I guess that might be one of the reasons.

The previously used HP workstation had the best Quadro and, as I recall, 64GB of RAM as well.
 
If you don't notice any performance difference, then you are not doing anything performance-intensive, or your workflows/software are very inefficient. That's all there is to it.
 
Do we consider getting the same things done 5-10 seconds faster a performance difference?
 
As I said: if you need portability, get the MBP; otherwise get a desktop.
I use a DIY hackintosh at home (much faster than top-spec iMacs) and a maxed-out 13" TB MBP when I'm on the go.

Assuming you've got the fastest CPU in the 2010 17" (the 2.8 GHz dual-core i7-640M): a single core of the 2017 15" i7 outperforms both 2010 cores combined. So you should see a 200%+ increase in processing power when your application can utilize all cores.
The 15"'s SSDs are stupidly fast compared to SATA-based SSDs.
 
What is the baseline? That is, how long does it take on your old machine?

Objectively, the new machines are faster; they are capable of performing more work in the same time. Whether your software can utilise that properly is another question. If most of your work is done with interactive applications (you mentioned CAD as an example), then it's entirely possible that even an older GPU is more than sufficient for a smooth user experience. After all, whether the GPU draws a frame in 0.016 sec or in 0.0001 sec makes no practical difference: all you care about is getting a smooth picture and a fast response to your actions (and that's limited by the screen refresh rate to begin with). Similarly, if you run statistical modelling, it doesn't really matter whether the analysis is done in 1 sec or 0.2 sec (5 times faster); for all intents and purposes, you get the results back in an instant.
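
To make the refresh-rate point concrete, here is the arithmetic as a tiny Python sketch (60 Hz is assumed as the typical laptop panel refresh rate):

# At 60 Hz the display shows a new frame every 1/60 s, so any GPU that
# renders faster than that budget looks identical to the user.
refresh_hz = 60
frame_budget = 1 / refresh_hz  # ~0.0167 s between displayed frames
for gpu_frame_time in (0.016, 0.0001):  # the two frame times above
    print(f"{gpu_frame_time:g} s/frame -> within budget: {gpu_frame_time <= frame_budget}")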
 
As I recall, using a very expensive HP workstation with a top-of-the-line Quadro GPU only produced slightly nicer-looking objects after rendering. In terms of CAD, I guess one could get faster results in rendering only. I think I compared the rendering speed between the HP workstation and the rMBP 2012 15"; the former completed the task only a few seconds faster.

For simulations that could take weeks or months to complete, such a "small" increase in performance could add up to big savings in time.
 
Perhaps your work is not the type that takes advantage of higher-performing CPUs and GPUs.

In engineering and CS work, desktop workstations have several advantages: faster CPUs, more cores (= more threads), a fast bus to peripherals, and more powerful GPU cards. The power of the GPU cards can especially be seen doing parallel matrix operations in a tool like Matlab. Here are some performance numbers comparing a Xeon-class CPU with an Nvidia GPU that supports CUDA:

Gigaflops on CPU: 241.482550
Gigaflops on GPU: 1199.151846

The user-experienced performance difference when you do something like this is very dramatic. Here are some timings from an ML/AI model I just ran:
  • 80 seconds per cycle on my 2015 15" rMBP
  • 40 seconds per cycle on a fast i7 desktop CPU (Windows 10)
  • 1-2 seconds per cycle on the Nvidia 1070 GPU (Windows 10)
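
If you want to see where numbers like the gigaflops figures above come from, here is a minimal sketch of the same kind of measurement in Python/NumPy (my figures came from Matlab; this is an analogous, illustrative benchmark, not the code that produced them):

# Time a large dense matmul and convert to GFLOPS; an n x n by n x n
# multiply costs roughly 2 * n^3 floating-point operations.
import time
import numpy as np

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

a @ b  # warm-up run so the timed run excludes one-time setup costs

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

print(f"CPU matmul: {2 * n**3 / elapsed / 1e9:.1f} GFLOPS")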
 
Thanks. Speaking of ML/DL models, have you tried an eGPU under macOS and Windows/Boot Camp?
How is the performance of a system with an eGPU (e.g. a GTX 1080 Ti with an AKiTiO Node) under macOS/Windows?
 
Sorry. No experience with that.

In your experience, do you think the MBP 2017 15" is good enough to do ML/DL stuff? Do you recommend getting a PC instead? I'm considering replacing my MBP 2010 since it is very heavy.
 
As others have said, the difference in both CPU and GPU power between an MBP and a workstation like the one you mentioned can be very dramatic (unless the latter is a really obsolete machine, perhaps).

If you can't notice a difference, it can typically mean that your tasks are not exploiting multiple CPUs/cores/threads, or that your execution times are so short that the speedup factor becomes irrelevant. But if that's the case, you probably don't need a desktop workstation in the first place.

The main drawback of the current MBPs for ML/DL is the lack of CUDA-compatible (i.e. Nvidia) GPUs, which essentially locks you out of GPGPU acceleration (OpenCL support for this kind of stuff is, imho, absolutely crappy). Things could soon change with the upcoming official eGPU support, though. The other factor you might want to consider is that DL models (neural nets, etc.) can take a long time to train: if you often need to leave your machine crunching numbers for a few hours, I'd say a desktop/workstation is definitely a much better choice than a laptop.
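
As a quick sanity check, this is roughly how you can ask TensorFlow whether a CUDA-capable GPU is visible (a sketch assuming a recent TensorFlow 2.x install; older 1.x builds exposed a different API):

# Check whether TensorFlow can see any GPU; with no CUDA device it
# silently falls back to (much slower) CPU execution.
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    print(f"Found {len(gpus)} GPU(s): {[g.name for g in gpus]}")
else:
    print("No CUDA-capable GPU visible; training will run on the CPU.")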
 
I use my 2015 MBP for machine learning all the time. It does OK with most traditional ML. However, with deep learning models you use a lot of data (N) and potentially thousands of feature dimensions (M), so you are working with operations on big N x M matrices. For that it struggles, as the performance numbers I posted show. This is mostly due to the lack of an Nvidia GPU with CUDA support, which many neural network frameworks (TensorFlow, etc.) can use to speed up their calculations.

So I find myself using my MacBook Pro for coding models and making sure they work properly with smaller test datasets. But for NN training runs with more data, I use my desktop PC with an Nvidia 1070 card.

Another option is to use cloud-based services like AWS or Azure to do your big runs. But those charge by the minute, so it can get a bit pricey. Schools tend to get a lot of free cloud compute time, though, so check into it.
 
With rendering, usually the more cores, the faster it is.

So a 4-core machine will render an image almost 4x slower than a 16-core machine. It depends on the application, of course, and how well it's set up for multithreading.
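
The "almost" matters: no render is 100% parallel, so Amdahl's law eats into the ideal 4x. A rough sketch of the arithmetic (the 95% parallel fraction is purely an assumed figure for illustration):

# Amdahl's law: speedup over one core when a fraction p of the work
# can run in parallel across the given number of cores.
def speedup(cores, p):
    return 1 / ((1 - p) + p / cores)

p = 0.95  # assumed parallel fraction, for illustration only
for cores in (4, 16):
    print(f"{cores} cores: {speedup(cores, p):.2f}x over one core")
# With p = 0.95 this gives ~3.5x on 4 cores and ~9.1x on 16 cores,
# i.e. roughly 2.6x between the two machines rather than the ideal 4x.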

If you had a 16-core machine and a 4-core machine and you really haven't noticed any difference, then you will certainly not notice any difference now (unless the software was the bottleneck and it's fixed now).
So my suggestion is: get the machine that fits your needs (size-, price-, etc.-wise) and don't worry about this.

Usually, people who have a 16-core machine know why they have it and why they need it, and therefore they would not even touch a 4-core machine. Regardless, your needs might not be so demanding, so don't waste money :)

For me though, no machine is fast enough, at least not for Maya: there is always a more complex model, a more complex scene, etc. A never-ending story :)
 
I used over 120 Unix machines for GA and NN work almost 20 years ago. It took three months to complete the task. I haven't played with DL, so I have no idea of the computing requirements.

For learning ML and DL, is 16GB sufficient? Supposing the eGPU works, how many cores and how much memory do you recommend?

BTW, which TensorFlow setup do you use: Docker or something else?
 
That is a lot of hardware!

The biggest change that has occurred recently is a dramatic increase in the amount of data available and the amount of computing horsepower available, largely due to gamers' demands for faster and faster GPUs.

With CUDA and a good DL framework like TensorFlow, you can take advantage of the GPU and push much of the matrix operations off to the GPU card and its CUDA cores (assuming this works with an eGPU). This reduces the burden on your machine and the need for ultra-fast CPUs. I find I can get by with consumer-level Core i7 CPUs for the home/student/small-business models I work with. However, this, and everything else in DL, is highly dependent on the type and amount of data you are processing. Can you tell me a little bit about what you are doing with DL?
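
For illustration, this is roughly what pushing a matrix multiply onto the GPU looks like with TensorFlow's device placement (a sketch assuming the TensorFlow 2.x API; it falls back to the CPU when no GPU is present):

# Run a large matmul on the GPU if TensorFlow can see one, else the CPU.
import tensorflow as tf

device = '/GPU:0' if tf.config.list_physical_devices('GPU') else '/CPU:0'
with tf.device(device):
    a = tf.random.normal((4096, 4096))
    b = tf.random.normal((4096, 4096))
    c = tf.matmul(a, b)  # executes on the CUDA cores when a GPU is available
print(f"matmul ran on: {c.device}")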
 
Which OS do you recommend for ML/DL work?

At this stage, I am just trying to learn about ML and DL.
 
OS does not matter for getting started.

Just use Python as your language. Figure out the basics of the language. Learn about tools like Anaconda for creating virtual environments.

Then, for standard ML you can use scikit-learn.
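
To give a feel for it, a minimal scikit-learn example looks something like this (it uses the iris dataset bundled with scikit-learn, so nothing beyond the library itself is assumed; the model choice is just for illustration):

# Minimal scikit-learn example: train and evaluate a classifier on the
# bundled iris dataset (assumes scikit-learn is installed).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")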

For deep learning, I would suggest you use TensorFlow as a framework. It is a Python library that you import just like other Python libraries.

This is my recommended way to proceed.

But if you want to start with DL first, you can go to TensorFlow.org and download Docker containers running TensorFlow samples.

I am not a big fan of starting with DL, but some people try that route.
 