The future of "Pro"

Discussion in 'Mac Pro' started by Michael73, Nov 8, 2013.

  1. Michael73 macrumors 65816

    Joined:
    Feb 27, 2007
    #1
    Will it be a large box or a small cylinder?

    As the owner of an '08 MP, I wonder what stands me in better stead for a 5-6 year ownership horizon? I could go to my local Fry's and get a 12-core MP for $2,599 today, or next month I could get a nMP and spend a little more than double that for an 8-core, 16GB, 1TB & D500 rig.

    I've pretty much talked myself into the nMP, even going so far as to buy a Drobo last weekend to handle my external storage needs, but every time I see the insane deal from Fry's on that 12-core, I'm tempted.

    Thoughts?
     
  2. maflynn Moderator

    maflynn

    Staff Member

    Joined:
    May 3, 2009
    Location:
    Boston
    #2
    What do you use your computer for?

    It's hard to say which model is better if your needs are not stated. :)
     
  3. Michael73 thread starter macrumors 65816

    Joined:
    Feb 27, 2007
    #3
    Good point. I have an interactive marketing business. Most of my work is done within CS6 (PS and DW) and I make heavy use of VMWare running Win7. I also do some occasional video work. Other than that, heavy Office user (Word, Excel & PowerPoint).

    My frustration is really around the virtualization piece. I can't use Boot Camp since I need both OSs open at once. Once Win7 x64 is open, performance really slows down on both sides, most noticeably in PS but also in other applications like OmniGraffle if I have big documents open with lots of layers.
     
  4. Macsonic macrumors 65816

    Macsonic

    Joined:
    Sep 6, 2009
    Location:
    Earth
    #4
    Having 2 OSes running actually consumes more RAM, so there's some slowdown in processing. Photoshop is also a memory eater, so your 12GB of RAM may be a little short for your needs. Try opening Activity Monitor to see the total RAM usage.
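    If you want actual numbers rather than eyeballing Activity Monitor, `vm_stat` in Terminal reports the same counters. A quick Python sketch that parses its output (the sample text below is made up for illustration; the field names and 4096-byte page size match a typical Mac of that era):

```python
import re

# Sample `vm_stat` output, made up for illustration -- run `vm_stat` in
# Terminal to get your own numbers.
SAMPLE = """\
Mach Virtual Memory Statistics: (page size of 4096 bytes)
Pages free:                         102400.
Pages active:                      1572864.
Pages inactive:                     524288.
Pages wired down:                   393216.
Pageins:                            250000.
Pageouts:                             1200.
"""

def parse_vm_stat(text):
    """Return ({counter name: pages}, page size in bytes)."""
    page_size = int(re.search(r"page size of (\d+) bytes", text).group(1))
    counters = {}
    for line in text.splitlines()[1:]:
        name, value = line.rsplit(":", 1)
        counters[name.strip()] = int(value.strip().rstrip("."))
    return counters, page_size

counters, page_size = parse_vm_stat(SAMPLE)
in_use = counters["Pages active"] + counters["Pages wired down"]
print(f"In-use RAM: {in_use * page_size / 2**30:.1f} GiB")
print(f"Pageouts so far: {counters['Pageouts']}")
```

    A steadily climbing "Pageouts" number while you work is the telltale sign you're short on RAM.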
     
  5. AidenShaw macrumors P6

    AidenShaw

    Joined:
    Feb 8, 2003
    Location:
    The Peninsula
    #5
    How many cores do you give the VM?

    I'll assume that you have a handle on the memory issue - it should be obvious if you're short of memory.

    Are the VMDK file(s) on the same spindle as the other apps/data? Are you snapshotting the VMDK file(s)?

    I run lots of VMware VMs on my workstation, and don't see much contention (other than the obvious issues).
     
  6. Michael73 thread starter macrumors 65816

    Joined:
    Feb 27, 2007
    #6
    2 Cores and 2GB of RAM. I've played around with those settings and that seems to be the sweet spot if there is such a thing ;)

    Yep. I keep an eye on my Page Ins and Outs and there isn't an issue there.

    Yes. Fusion by default nests its VMs in a folder called (unsurprisingly) Virtual Machines, which lives inside your Documents folder. My boot drive (in addition to the OS and apps) contains my Documents folder. Second, "yes," I'm taking snapshots. I can't remember how often those occur…I set it and forget it.

    Would you recommend moving the VM off the same spindle as everything else to speed things up? Now that I think about it, that seems logical.

    The interesting thing is that I don't remember this being that much of an issue with WinXP which is in the same VM folder on the same spindle.
     
  7. deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #7
    If you are running a 2008 2.8GHz 8-core model then you are pretty far back in hardware virtualization support. Here is a chart that Intel presented for Haswell:

    (chart: VM round-trip latency by CPU generation)
    http://www.anandtech.com/show/6355/intels-haswell-architecture/11

    Five bars back from Haswell is a yellowish, substantially taller bar, which is probably indicative of your machine. The Fry's special is the red bar three back. The Mac Pro 2013 is the light blue bar just before Haswell.

    Those are basically indicators of the overhead of running virtualization. An active VM itself is going to consume far more resources, but the chart shows there is newer hardware support that you just don't have in either your 2008 machine or the effectively de facto 2010 technology foundation of that Fry's special.


    RAM is probably one of the primary root-cause issues while running RAM-hungry apps in each of the OS instances. If Win7 is just for reference browsers to check work, then you can try throttling that virtual machine's resource usage a bit as a short-term workaround.

    It isn't just how much RAM you're using but where you're at in terms of page outs / page ins. If you're churning on those heavily then you basically need more RAM (although I guess OS X 10.9's memory compression should help with that also).


    The Mac Pro 2013 6-core config is probably substantially faster than your 8-core model. If the Fry's machine is a sub-2.6GHz 12-core, the 6-core model is pretty close. There is a pretty hefty jump to get up to 8 cores and headroom over just about any old 12-core model.



    You seem to have a couple of laptops. That may be a contributing factor. If you're going to share devices between several Macs, Thunderbolt may work better.

    If you go the MP2013 route you may want to BTO the entry config. Most likely you'll want another set of DIMMs in there to push substantially past 16GB. The entry's 12GB are easier/cheaper to replace.

    ----------

    The page in/out rate of the guest OS is also something to keep an eye on. Although if you move its virtual drive storage to another storage device, that would decouple the impact on the host OS X and its apps. XP's smaller footprint may have fit better into the limited box you were putting it in.
     
  8. Michael73 thread starter macrumors 65816

    Joined:
    Feb 27, 2007
    #8

    Very interesting! It would appear that the difference between the Fry's machine and the nMP with reference to VM cycles is that the nMP is double the speed(?) or, more correctly, halves the cost of those trips. What that buys me in terms of real-world speed, I don't really know. :confused:

    Unfortunately not. I do use it for that, but as I also do some SEO work for my clients, there are some programs in that vein that just don't run on OS X. Also, for some reason I've never really understood, there must be some subtle differences between Excel for Mac '11 and Excel for Win '13 such that, when dealing with files from certain clients, editing spreadsheets in the Win version is frequently far less hassle than in the Mac version.

    I haven't upgraded my MP to 10.9 yet. My other machines, yes. I use my mobile machines less than the MP…it's the workhorse. If there are kinks or incompatibilities in the new OS, I can deal with them far more easily on the MBP than on my MP. In that sense, I use the MBP as a test bed. I haven't run into any issues yet (knock on wood) on the MBP, so I'll probably upgrade the MP just prior to getting the new machine to make the migration go more smoothly.

    What I have a difficult time with is knowing that 2 of the cores are immediately unavailable once the VM is opened. So the question becomes: if I buy a hex-core, are the 4 cores left enough for PS and all the other things I've got going on the OS X side?

    To clarify, those devices are used solely as surrogates for my MP when I'm on the road or can't use my MP. They aren't being used concurrently or for sharing.
     
  9. deconstruct60, Nov 8, 2013
    Last edited: Nov 8, 2013

    deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #9
    When your guest OS makes kernel system calls that require privileged instructions/access, those invoke a "trap," because the hypervisor has to 'fake' that these are going to the raw hardware. The time it takes to emulate going into and getting out of that trap is what is being reduced. Basically it is an expensive call. Early on this would soak up around 3,000 cycles (out of the maybe 2.0GHz, i.e. cycles per second, your processor operates at). That is roughly 3,000 instructions your processor could have been executing but isn't; your application is stalled waiting for that trap to finish. The processors are very fast, so it isn't seconds. But if you invoke system calls very rapidly this starts to add up. Most of the time, most apps stay in user mode where this overhead doesn't occur.
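    To put those figures in perspective, here's a back-of-envelope calculation. The 3,000-cycle trap cost and 2.0GHz clock are the numbers above; the syscall rate is a made-up illustration, not a measurement:

```python
TRAP_CYCLES = 3000          # rough cost of one virtualization trap (figure above)
CLOCK_HZ = 2.0e9            # 2.0GHz core = 2 billion cycles per second

trap_seconds = TRAP_CYCLES / CLOCK_HZ          # time one trap steals from the app
print(f"one trap: {trap_seconds * 1e6:.1f} microseconds")

# Hypothetical syscall-heavy guest making 50,000 privileged calls per second:
SYSCALLS_PER_SEC = 50000
overhead = trap_seconds * SYSCALLS_PER_SEC     # fraction of one core lost to traps
print(f"core time lost to traps: {overhead:.1%}")
```

    Each trap is tiny, but at a high syscall rate the lost fraction of a core becomes very noticeable, which is why newer silicon that cuts the trap cost matters.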

    It is a bigger pain if you load up multiple virtual machines, since multiple VMs are more likely to invoke more system calls.

    The other huge jumps are in SIMD code that can leverage AVX. SSE2-SSE4 from previous generations are now dinosaurs. Similarly with AES encryption… way faster on modern Intel cores. If you pass on all of that to go with a 2010-tech machine for the next 5-6 years, it is a question of whether you're eventually going to use those features or not, and how much.





    It depends upon how the interaction goes with VMware Fusion (or other VM software), but the part of the guest OS that isn't being used much could get compressed, and that would take some of the memory pressure off. But if the page rate is under control in both host and guest, that won't help so much.


    Well, at about $500/core to move from 6 to 8, it is something where you have to evaluate the productivity improvement and the alternatives.


    Right. So if you wanted to take a project on the road, you could just shut down/dismount a TB drive and take it with you; work on it remotely and then just plug back in. You don't have to use that methodology, but it is a way of sharing the device between Macs that you use one at a time. It depends upon the size of the data that goes on the road.

    A single HDD with USB 3.0 would work also (and you can add that to the previous Mac Pro as well).
     
  10. bernuli macrumors 6502

    Joined:
    Oct 10, 2011
    #10
    I do a lot of VMing with VMware and have found that using multiple HDs prevents slowdowns. It's not uncommon for me to have 3 OSes running at the same time. Here is how I have it:

    3.33 GHz 6-Core Intel Xeon
    48 GB RAM
    HD bay 1 - SSD booting OSX 10.8.5
    HD bay 2 - 1 TB WD Black for /Users
    HD bay 3 - 2 TB WD Black for Win XP and OS X VMs
    HD bay 4 - 1 TB WD Black for Win 7 VMs

    There is no reason not to get max RAM, as OS X will cache as much of the virtual disk as it has room for.

    B
     
  11. ActionableMango macrumors 604

    ActionableMango

    Joined:
    Sep 21, 2010
    #11
    I agree. Virtual machines and the host OS are both much faster if the virtual image file is on a different hard drive than the host OS.

    Perhaps this is not so noticeable if using SSDs, but it made a big difference when using rotational drives.
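    If you want to try moving an existing Fusion VM off the boot drive, it's just a standard file move; nothing Fusion-specific is required. A minimal Python sketch (the function name and the example paths are mine, purely hypothetical; quit Fusion before moving anything):

```python
import os
import shutil

def relocate_vm(src, dst):
    """Move a VM bundle to another volume, leaving a symlink at the old path."""
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.move(src, dst)      # copies across volumes, then removes the original
    os.symlink(dst, src)       # old path still resolves for Fusion and backups

# Hypothetical example -- substitute your own bundle and volume names:
# relocate_vm(os.path.expanduser("~/Documents/Virtual Machines/Win7.vmwarevm"),
#             "/Volumes/SecondDrive/Virtual Machines/Win7.vmwarevm")
```

    The symlink means anything that remembered the old location (Fusion's library, backup scripts) keeps working without reconfiguration.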
     
  12. Larry-K macrumors 68000

    Joined:
    Jun 28, 2011
    #12
    Hey, if you go to Texas to get one, bring me one back, I'll cover you.
     
  13. Michael73 thread starter macrumors 65816

    Joined:
    Feb 27, 2007
    #13
    So here's the thing…going from a traditional spinning platter to an SSD is going to be a huge boost in performance. I could get some of this today by putting SSDs in my current MP. But…

    Suppose I get a nMP, and instead of having my VM on the internal PCIe storage, I put it on an external Drobo via Thunderbolt. What will give better performance…keeping the VM on the extremely fast internal PCIe drive, which is also the OS X boot drive, or shunting it off to run from an external array (most likely a RAIDed 7200rpm array)?
     
  14. ActionableMango macrumors 604

    ActionableMango

    Joined:
    Sep 21, 2010
    #14
    I have no experience with that, so I couldn't tell you which is faster.

    However, I suspect both scenarios will be very fast. The trouble in the past was with two operating systems running off the same rotational drive, which probably hurt performance due to all the seeking the drive had to do across different sectors.

    Either method you suggest will not suffer from that problem.
     
  15. Lesser Evets macrumors 68040

    Lesser Evets

    Joined:
    Jan 7, 2006
    #15
    nMP. Simple.

    Apple is notorious for ditching older tech sooner and sooner. There is also the consideration of Moore's Law breaking down soon, which would make a 2014 machine (which is what the nMP basically is) quite useful for the rest of the decade. I have high expectations that this new MP will have 6 good years, if not more. Unless you are video editing or doing CGI, I think these newer towers will bring us up into the 202x years, where process shrinks will probably skid to a serious slowdown, and then a new Pro would last for a decade or more.

    I purchased a 1,1 and it is STILL powerful for everything except 1080p editing. I'll probably use it until late 2015. That's 9 years, though it is now trapped on 10.7 for life. I'd actually suggest a 2,1 of the new Mac Pro… the 7,1? The 1,1s seem notoriously dull and underdeveloped.
     
  16. AidenShaw macrumors P6

    AidenShaw

    Joined:
    Feb 8, 2003
    Location:
    The Peninsula
    #16
    Are you saying that yields on the E5v2 chips won't be reasonable until after 1 January?

    I'd say that's a fair view. I'm not getting "soon" delivery estimates on the E5v2 servers that I've ordered.

    In any event "Early 2014 Mac Pro" sounds much better than "Very, very Late 2013 Mac Pro".
     
  17. studio347 macrumors newbie

    Joined:
    Nov 16, 2010
    #17
    The nMP looks better. It would make getting used to the newer tech a bit easier.
    When tech is changing very fast, it is a good idea to keep up with it if possible and reasonable….
     
  18. tripitz macrumors newbie

    Joined:
    Oct 16, 2013
    #18
    Every time I read the posts in here, people make it seem like the nMP is such an amazing game changer because of the CPU improvements. Maybe this needs to be said again, because the OP seems to have misinterpreted the chart posted earlier: the nMP is not Haswell based. In reality, it is built on the Ivy Bridge/Sandy Bridge design, which is really only 1 true architecture change away from the old MP!

    What is very likely to happen is that within 1-2 years of release, a Haswell-based nMP is going to come around that will yield an additional 10-20% improvement. Is everyone going to dump the nMP then for incremental gains? By that logic, the gains then will be about the same as the gains now!

    Also, a lot of the gains come through improved use of more powerful video cards. That is somewhat nullified by the fact that we have a LOT more options available now than ever (e.g., the Mac GTX 680, EFI'd GTX 7xx series, etc.).
     
  19. Michael73 thread starter macrumors 65816

    Joined:
    Feb 27, 2007
    #19
    I did not misinterpret the graphic, and deconstruct60 was very clear that "The Mac Pro 2013 is the light blue before Haswell." When you say that the nMP is "1 true architecture change away from the old MP," you mean the most recent Mac Pro. Those of us using MP 3,1s and older are likely to see significant benefit. We will probably never know, but my hunch is that a significant number of MP owners are sitting on older-generation machines for which the jump to the nMP is going to make a lot of difference.

    Does it bother me that the nMP isn't using Haswell? Not particularly. My perception is that speed changes of 10% often aren't very noticeable, but get two, three or four generations out and the upgrades are huge. To answer your (somewhat) rhetorical question about whether in a year or two people will dump the nMP for the next gen's 10%-20% gain, my guess is few will. I think the time horizon people keep these machines is 3-5 years.
     
  20. RoastingPig macrumors 68000

    RoastingPig

    Joined:
    Jul 23, 2012
    Location:
    SoCal
    #20
    the future is probably cloud computing but we need to rip out all that copper and go fiber optic
     
  21. Michael73 thread starter macrumors 65816

    Joined:
    Feb 27, 2007
    #21
    Half of me thinks you're right; the other half wonders how cloud computing would work in areas where there is no internet connection.

    You can talk about fiber optics, but how does cloud computing work when you're in the clouds (aka on a plane)? How do you get fast enough connections while at sea? Who decides to subsidize the infrastructure needed in sparsely populated areas like Montana and Wyoming, and in mountainous regions like the Rockies and Appalachians?

    The future may indeed be in the cloud, but I think we're talking two or three decades (at a minimum?) before the infrastructure is strong enough to support it. Even in areas where internet speeds are high, I'm not seeing strong adoption of netbooks...
     
  22. brand macrumors 601

    brand

    Joined:
    Oct 3, 2006
    Location:
    127.0.0.1
    #22
    I can do 10Gb Ethernet over copper.
     
  23. deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #23
    If E5v3 doesn't come until Q1 '15, then everything in 2014 will be E5v2 based. Although on paper Intel announced E5v2 in September, there are still vendors slowly rolling out BTO configs. E5v2 missed most of 2013. The new Mac Pro is certainly going to miss the vast majority of 2013.

    [ Right now roadmaps say Q3 '15 for E5v3 (e.g. http://www.cpu-world.com/news_2013/2013073001_Launch_schedule_of_Intel_Xeon_server_processors.html ), but roadmaps had E5v1 in Q3 '11 and it slid into Q1 '12. It wouldn't be the first time that happened. Depending upon how the desktop/laptop/mobile 14nm rollout goes, these next-gen Xeon E5s could slide a quarter or two. ]
     
  24. blanka macrumors 68000

    Joined:
    Jul 30, 2012
    #24
    In the end all Macs without screens will be Apple TV-sized boxes.
    At the moment Moore's Law runs much faster than our computer use can keep up with. The next 16nm "Rockwell" processors will bring the current 12-core into an Ivy Bridge-like quad-core TDP. With the move to 11nm you will have so much power in so little space that main board sizes will move towards Raspberry Pi dimensions.
     
  25. Sean Dempsey macrumors 68000

    Sean Dempsey

    Joined:
    Aug 7, 2006
    #25
    You sound like you need a mid-range iMac.

    I use a 2011 iMac with an SSD and 16 gigs of RAM, it's the i7 3.4GHz, and I do everything you've mentioned plus about 10x more. I run VMs, compile code, run multiple local servers and environments, etc., etc.

    And you're wondering if you should get an nMP or a Fry's machine? (BTW, what is a Frys?)

    Anyways - it sounds like you could throw a rock in an Apple store and be fine with whatever computer you hit.
     