Added the Cinebench score of the M380 from this thread. That means we have an almost full list.
It shows that while the difference from the 390 all the way to the 395X is only about 7%, the difference between the 380 and the 390 is a whopping 20%. At the very least, if you care at all about graphics (games, video/audio/photo editing), bump up to the 390.
 
I just looked at the Final Cut Pro X system requirements on the Apple site (I'm having trouble copying the link on this pesky iPad). It stated that only 1 GB of VRAM is required for 4K editing (3D titles need the same). That seems very low, especially compared to Premiere, but if it's even remotely accurate then it's good news. As I'm more of an audio guy and I'm only starting to incorporate video work (1080p-based so far), I'd love to hear from someone who's familiar with working with 4K footage in Final Cut on a recent iMac.
 
Just to muddy the waters further, I tested my custom PC...

Unigine: 63 fps
Cinebench: 160.58 fps


Cinebench's OpenGL test is known to be CPU-bound. I wouldn't put any stock in it at all.
 

Attachments: cinebench.png, unengine.png
I just looked at the Final Cut Pro X system requirements on the Apple site... I'd love to hear from someone who's familiar with working with 4K footage in Final Cut on a recent iMac.
That's good to know as I use FCPX only and not Premiere for my work (I hate Adobe's subscription BS!). That tells me to just save the $300 and get the 395 rather than the X. I'll put the $300 towards the i7 upgrade, which is worth it for the Hyper-Threading.
 
...I'd love to hear from someone who's familiar with working with 4K footage in Final Cut on a recent iMac.

I do extensive 4k multicam editing with FCP X on a 2013 top-spec iMac 27 and a 2015 top-spec iMac 27. 4k is extremely demanding and ideally you want the fastest available computer. Even with my 4GHz i7 iMac, SSD and Thunderbolt drive array, it is usually necessary to generate proxy media to obtain good editing performance. Fortunately, with FCP X the proxy workflow is seamlessly integrated and easy to use.

Applying effects to a 4k timeline is much slower than 1080p -- there's four times the data. Some of these effects are CPU-bound, others are GPU-bound. Either way, you want the fastest possible CPU with the most cores and the fastest available GPU.
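(For what it's worth, the 4x figure is just the pixel count: UHD is 3840 × 2160 = 8,294,400 pixels per frame, versus 1920 × 1080 = 2,073,600 for 1080p.)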

There is no question that for 4k you ideally want the 4GHz CPU and the M395X. You could probably get by without the SSD, since the media is usually too large to fit on internal storage anyway; then again, that same argument says go ahead and get the SSD, since the bulky media will live on external storage either way. Typically you'd use an external Thunderbolt drive array.
 

I really wish someone would run some FCPX benchmarks between the M395 and M395X. The M395 is basically an M395X with very little cut off and 2 GB less VRAM. Reading up on FCPX shows that it's not as demanding on VRAM as Premiere is. If that's the case, then you're wasting serious cash on the M395X and would be better served putting the money towards the i7 and the 1 TB SSD.
 
That's good to know as I use FCPX only and not Premiere for my work (I hate Adobe's subscription BS!)...

I'm in the same boat as you as I only use FCPX. I totally agree about the Adobe subscription being pure BS!


I do extensive 4k multicam editing with FCP X on a 2013 top-spec iMac 27 and a 2015 top-spec iMac 27... Typically you'd use an external Thunderbolt drive array.

Thank you very much for your detailed response! It's great to know that the new iMac can handle editing 4k footage. I'd be quite happy to work with proxies whilst editing in FCPX; I'd imagine proxy versions of 4k footage would still look reasonably good while editing? Yeah, I've decided to max out everything on the iMac.

Thanks for your excellent detailed response.
 
New benchmark at barefeats featuring all available GPU options (sadly without comparison to older models).
Wow, the poor 4K iMac is such a dog with its Iris "Pro". iGPUs aren't going to replace dGPUs anytime soon (I hope!). The M380 is pretty pathetic too; it's still a fine GPU for the Facebook crowd... The M395 really looks to be the sweet spot. Hope they do FCPX benchmarks soon.
 
New benchmark at barefeats featuring all available GPU options (sadly without comparison to older models).

Cobbled together from barefeats tests done previously, my own tests (I did run FurMark on my M290X), and so on. Apparently no one cared about the M290 machine, which is understandable. The T-Rex offscreen result is a little strange, but I don't think it makes much difference. Note that some of these tests also involve the CPU.

[Attachment: Screen Shot 3.png -- compiled benchmark table]


Also, the M380 has 12 compute units; the M390, 16; the M395, 28; the M395X, 32. If you need OpenCL, it's a pretty clear progression.
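If anyone wants to check those compute-unit counts on their own machine, here's a minimal OpenCL device query in C (just a sketch; on OS X it should compile with `clang -framework OpenCL`):

```c
/* Minimal OpenCL query: prints each GPU's name and compute-unit count. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void) {
    cl_platform_id platform;
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(1, &platform, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        fprintf(stderr, "No OpenCL platform found\n");
        return 1;
    }

    cl_device_id devices[8];
    cl_uint num_devices = 0;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &num_devices) != CL_SUCCESS) {
        fprintf(stderr, "No GPU devices found\n");
        return 1;
    }

    for (cl_uint i = 0; i < num_devices; i++) {
        char name[256] = {0};
        cl_uint compute_units = 0;
        /* Device name and CL_DEVICE_MAX_COMPUTE_UNITS are standard OpenCL queries. */
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name) - 1, name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_MAX_COMPUTE_UNITS,
                        sizeof(compute_units), &compute_units, NULL);
        printf("%s: %u compute units\n", name, compute_units);
    }
    return 0;
}
```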
 
This is a great thread. The conclusion I draw is that you either go for one of the X variants (or the 395), or you wait a while for GPUs that can handle 5K decently. From my experience, the 290X is a great card but it is not ready yet for the 5K resolution, and I'd guess none of the mobile cards available today are either. Still, it is the lowest card to show decent/usable performance for demanding tasks.

That's assuming you want to use it for demanding applications. I'm not talking about web surfing or YouTube-ing.
 
My wallet likes this. :)
Seems like the smart money is on the 390; above that it's a lot of dollars for very little extra FPS.

Probably because the M380 is so paltry by comparison.

The way I see it: Left 4 Dead begs to be run at 5K, not 1440p; Tomb Raider shows a definite improvement with the M395; T-Rex doesn't matter; and FurMark shows the M395 to be awesome -- though the practical application of that is unknown. LuxMark has the M395 split the difference, and the FCPX benchmark gives a palpable advantage to the M395.
 

Hey, thanks a lot for your post. It's awesome. Hope you don't mind that I added it to the OP.
It has actually made me reconsider: until now I thought the 390 was the sweet spot for price/performance, and anything above it was a large cost for a small performance increase. I still think that about the 395X, but your post makes me reconsider whether the 395, with its ~$150 price increase, is worth it.
 
Added the Cinebench score of the M380 from this thread... At the very least, if you care at all about graphics, bump up to the 390.
I finally ordered a 395X and I'll return my M380. Your thread was pretty useful for me :)
 
Still can't get over the fact that the 395X is exactly the same as the 295X. An entire year and no graphics improvements or upgrades. It kind of makes me resent paying this much money. I'm still in two minds about whether I should cancel my order and wait for next year, but all I ever seem to be doing is waiting another year with Apple because it's always something. I don't mean to bitch, but it's a lot of money and one would expect marvellous things... :rolleyes:
 

I don't normally defend Apple, but it does have a better screen and processor. Don't get me started on the 1TB Fusion Drive, though.
 

It only has, what, a 5% gain over the 295X? The one benefit is that at least it runs cooler than the 295X.
 
Your instinct is correct. Here's the problem: GPUs have been stuck on TSMC's 28nm lithography since 2012. The reports I heard indicated that TSMC's 20nm node was only suitable for LP (low power) applications such as iPhone 6. Now we've moved on to TSMC's 16nm and Samsung's 14nm in iPhone 6S. Broadwell and Skylake are on Intel's 14nm. GPUs however still remain stuck on TSMC 28nm.

Part of the problem here is Apple. I'm sure Nvidia would love to get in on these new 14nm or 16nm lithographies, but Apple has most likely bought all of the capacity for a while.

Most major jumps in performance per watt occur on lithography steps. Sometimes you can see major improvements with only an architecture change (see Maxwell 2 and the GTX 980). Normally your best bet is to wait on shrinks to upgrade.

My last upgrade cycle was the 2012 15" Retina MacBook Pro, which included an Ivy Bridge i7-3820QM (22nm shrink) and a Kepler GT 650M (28nm shrink). Shortly after was a 2012 27" iMac, which has an Ivy Bridge i7-3770 (22nm shrink) and a Kepler GTX 680MX (28nm shrink). These machines are not due for an upgrade until GPUs shrink again. It is interesting because I just recently had my 15" rMBP replaced under warranty. The new one has a Haswell i7-4980HQ (22nm) and a Cape Verde R9 M370X (28nm). Benchmarking the old and new rMBPs side by side showed very little performance improvement.

This may be less impactful on desktops, where performance per watt is less of a problem. However, the iMac is not a traditional desktop. As soon as you begin to put it under load, it throttles. This is why we don't see much improvement between the high-end models in successive generations. They are effectively reaching the TDP of the chassis and throttling back to a certain performance level, just like a laptop. The iMac will benefit significantly from a die shrink.

This 27" 5K iMac is very nice - Skylake is 14nm and the new display with P3 gamut is great. Certainly if you need a computer now, this is a nice one to buy. However, if you can afford to wait, then waiting is a good idea. Don't believe what people tell you about waiting forever. There are points of inflection in computer progress where buying is of much greater value. Node shrinks are generally one of those times.


 

Thanks for the detailed answer. So when do you think mobile graphics chips will get a die shrink, or at least one that will make it to the iMac? Are Nvidia's mobile chips comparable to these AMD chips in terms of speed, process node, etc.? I keep hearing about this 990M...
 
This is only speculation, but it seems like we will get both Nvidia's Pascal and AMD's Arctic Islands microarchitectures in H2 2016. I'm unsure who will be providing the process; it seems a sure bet that TSMC's 16nm FinFET will be involved, and Samsung/GlobalFoundries may very well also be providing chips on their 14nm node.

This time next year will be very interesting. We won't have a new lithography from Intel, but H2 2016 should also bring us Kaby Lake (arch rev). I expect new iMacs with a GPU shrink, and it doesn't matter as much if Apple just continues to ship Skylake. If I were going to put my dead presidents down on a Mac, I'd wait until then.
 
I finally ordered a 395X and I'll return my M380. Your thread was pretty useful for me :)

You're welcome. Thanks for the sacrifice so we could get the benchmarks while you had it!
I still think the 395X is overrated though; look at @jerwin's table from barefeats' benchmarks. The 395 is the sweet spot.
It's what I'd get if I were getting an riMac (I'm going to try to wait it out for Thunderbolt 3).
But if you have the money, of course go for it!
 
I still wonder whether Apple's choice in GPUs has more to do with pushing pixels than with gaming. I bought the big one, mostly because I'm always driving three huge displays. I've noticed that doing this can put a strain on some cards. Maybe that was a consideration for Apple, but I'm no GPU guru.

 
This is only speculation, but it seems like we will get both Nvidia's Pascal and AMD's Arctic Islands microarchitectures in H2 2016... If I were going to put my dead presidents down on a Mac, I'd wait until then.

Could you speculate on what kind of gains we could see in the new mobile GPUs? And why do people keep saying Nvidia is ahead of AMD in GPUs? I googled the 990M but couldn't find much info; is it even out? In other words, how does the 395X compare to Nvidia's latest available offering?

Just found this statement via a Google search:

Performance
Radeon R9 M395X 4GB is a Direct Rebrand of Radeon R9 M295X 4GB.
Performance is Comparable to the desktop Radeon R9 280. Therefore, even GeForce GTX 970M is still faster.

[Attachment: image.jpeg]

EDIT: Ok so I just answered my own question...

[Attachment: image.jpeg]
 