
iPersian · macrumors regular · Original poster · Oct 23, 2012 · Copenhagen, Denmark
I'm wondering if anyone can tell me whether the lower-speed GPU (iGPU) affects rendering AND modelling.

Will the lower speed slow down modelling only, or will it affect rendering too?

I'm interested in an rMBP (my MBP is from 2006 and always crashing!), but I wonder how this will affect my work in Cinema 4D.
 
Well, obviously it will perform much better in Cinema 4D than your ancient MBP ^^ As to the rest - nobody can say until the actual parts come out and benchmarks are done. My guess is that it will perform just as well as the model with the 650M.
 
OK, but it's more of a general question:

Is all 3D work, modelling and rendering, done by the GPU, or by the CPU?

I believe rendering is CPU and modeling is GPU (possibly CPU?), but I'm not too sure. For sure, the current rMBP and the next rMBP are miles better than what you have; you definitely won't notice any slowdowns.
 
I believe rendering is CPU and modeling is GPU (possibly CPU?), but I'm not too sure. For sure, the current rMBP and the next rMBP are miles better than what you have; you definitely won't notice any slowdowns.

OK, thanks. I know it will be much, much faster than my 2006 machine. I had an rMBP and returned it within the 14 days due to a ghosting issue and lag when moving windows around while the iGPU was running instead of the dGPU.

I'm an architect and hence do a lot of work in Adobe apps and 3D, and hopefully will be running Windows for Revit/3ds Max, so I'm wondering how much an iGPU-only machine will affect my work.
 
OK, thanks. I know it will be much, much faster than my 2006 machine. I had an rMBP and returned it within the 14 days due to a ghosting issue and lag when moving windows around while the iGPU was running instead of the dGPU.

I'm an architect and hence do a lot of work in Adobe apps and 3D, and hopefully will be running Windows for Revit/3ds Max, so I'm wondering how much an iGPU-only machine will affect my work.

If I'm correct, the iGPU on the next rMBP is better for applications that lean on GPU compute (OpenCL - the Iris Pro can't run NVIDIA-only CUDA), and applications such as Photoshop will benefit more, but graphics performance will suffer by about 20%. This is from looking at a standard-issue Iris Pro. The 650M is much better for gaming and something like Final Cut Pro X (simply anything graphical), but hopefully someone can jump in and give more insight.
 
OK, thanks. I know it will be much, much faster than my 2006 machine. I had an rMBP and returned it within the 14 days due to a ghosting issue and lag when moving windows around while the iGPU was running instead of the dGPU.

I'm an architect and hence do a lot of work in Adobe apps and 3D, and hopefully will be running Windows for Revit/3ds Max, so I'm wondering how much an iGPU-only machine will affect my work.

Seems like you'd want the dGPU.
 
Is all 3D work, modelling and rendering, done by the GPU, or by the CPU?

There is no general answer to this question. Fact is - if you have complex models, then you will need a good GPU, because it's the part responsible for visualising the 3D data. For rendering, it depends on the application - some have GPU acceleration and some don't.

If I'm correct, the iGPU on the next rMBP is better for applications that lean on GPU compute (OpenCL - the Iris Pro can't run NVIDIA-only CUDA), and applications such as Photoshop will benefit more

Again, this depends on the particular case. The Iris Pro packs some serious computing power, but it is bandwidth-limited. So if your algorithm needs to spend more time computing data than accessing it, then you will see a speedup with Haswell. However, if the computation is fairly simple, the 650M will probably be faster.
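
To make that concrete, here is a back-of-the-envelope roofline sketch in Python. All of the peak-throughput and bandwidth numbers are rough assumptions for illustration, not measured specs:

```python
# roofline sketch: which GPU wins depends on arithmetic intensity,
# i.e. how many flops you perform per byte you fetch from memory.
# the peak/bandwidth figures below are ballpark assumptions, not specs.

def attainable_gflops(intensity, peak_gflops, bandwidth_gbs):
    """Attainable throughput = min(compute ceiling, bandwidth ceiling)."""
    return min(peak_gflops, bandwidth_gbs * intensity)

iris_pro = dict(peak_gflops=800, bandwidth_gbs=50)  # eDRAM-assisted, assumed
gt_650m  = dict(peak_gflops=700, bandwidth_gbs=64)  # GDDR5, assumed

for intensity in (1, 4, 16, 64):  # flops per byte
    iris = attainable_gflops(intensity, **iris_pro)
    nv   = attainable_gflops(intensity, **gt_650m)
    print(f"{intensity:>3} flop/byte: Iris {iris:>4.0f} vs 650M {nv:>4.0f} GFLOPS")
```

With these assumed numbers, the 650M leads at low intensity (simple, memory-hungry computation) and the Iris Pro leads once the work is compute-heavy - the same trade-off described above.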
 
OK, but it's more of a general question:

Is all 3D work, modelling and rendering, done by the GPU, or by the CPU?

modeling is all cpu and runs on a single core.. rendering used to be all cpu but could utilize all available cores.. rendering is now starting to get dished out to the gpu, which results in much faster rendering than the cpu alone.
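
here's a quick python sketch of why rendering scales like that, if it helps.. every tile of the image can be computed independently, which is exactly what lets a renderer eat all available cores. (shade() is a made-up stand-in for real ray-tracing math):

```python
# toy illustration of why cpu rendering scales across cores: every tile
# of the image can be computed independently of the others.
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 256, 256, 64

def shade(px):
    x, y = px
    return (x * 31 + y * 17) % 256          # fake "pixel colour"

def render_tile(origin):
    ox, oy = origin
    return [shade((ox + x, oy + y)) for y in range(TILE) for x in range(TILE)]

if __name__ == "__main__":
    tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
    with Pool() as pool:                     # one worker per core by default
        results = pool.map(render_tile, tiles)   # tiles computed in parallel
    print(f"rendered {len(results)} tiles")
```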

no modeling apps (that i'm aware of) are dumping the calculations off on the gpu yet.. i talked to a rhino dev a few hours ago and was told they're experimenting with possibilities (which i take to mean -- it's not impossible as of right now).

but really.. if you're going to be doing a lot of modeling, get the fastest clock out there even if it means fewer cores.. you'll feel that type of speed throughout the modeling session.. renders would take longer, but at least you're not sitting there interacting with the render app the whole time.

personally, i'd be hesitant to buy a computer which only has iris right now simply because of the lack of field-test info.. past intel gpus scored badly in user feedback, so there always had to be a discrete gpu.. hopefully though, intel has it fine-tuned by now and iris will work out.. because having two gpus in a laptop while only being able to use one at a time is kinda silly.. they're doing it now because it's a stopgap.. when the onboard graphics get good enough, they'll drop (and should drop) the discrete one.
 
modeling is all cpu and runs on a single core.. rendering used to be all cpu but could utilize all available cores.. rendering is now starting to get dished out to the gpu, which results in much faster rendering than the cpu alone.

no modeling apps (that i'm aware of) are dumping the calculations off on the gpu yet.. i talked to a rhino dev a few hours ago and was told they're experimenting with possibilities (which i take to mean -- it's not impossible as of right now).

but really.. if you're going to be doing a lot of modeling, get the fastest clock out there even if it means fewer cores.. you'll feel that type of speed throughout the modeling session.. renders would take longer, but at least you're not sitting there interacting with the render app the whole time.

personally, i'd be hesitant to buy a computer which only has iris right now simply because of the lack of field-test info.. past intel gpus scored badly in user feedback, so there always had to be a discrete gpu.. hopefully though, intel has it fine-tuned by now and iris will work out.. because having two gpus in a laptop while only being able to use one at a time is kinda silly.. they're doing it now because it's a stopgap.. when the onboard graphics get good enough, they'll drop (and should drop) the discrete one.

seems like flatfive and leman do not agree on this ;-)

this seems more complicated than i/we thought.
 
seems like flatfive and leman do not agree on this ;-)

this seems more complicated than i/we thought.

No, we do agree actually :) I was talking about the actual visualisation of the model (i.e. translating the 3D data into the picture you see on screen while you work with the software) - this is done via the 3D API, which is accelerated by the GPU. The process of modelling itself (i.e. when you are editing the model) is done on the CPU. There are ways to accelerate that on the GPU nowadays, but I have no clue whether (or which) software actually takes advantage of it.
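
A minimal sketch of that split, assuming the Python moderngl package and a driver that allows an offscreen GL context (any OpenGL binding would look much the same): the model lives in a plain CPU-side array, and only the finished edit is re-uploaded to the GPU buffer the viewport draws from.

```python
# cpu edits the geometry; the gpu only holds a copy for display.
import numpy as np
import moderngl

ctx = moderngl.create_standalone_context()      # offscreen GL context

# cpu side: the model is just an array of vertex positions
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]], dtype="f4")

vbo = ctx.buffer(vertices.tobytes())            # upload to gpu memory

# "modelling": the edit itself runs on the cpu...
vertices[:, 0] += 0.5                           # move every vertex along x

# ...and only the finished result goes back to the gpu for drawing
vbo.write(vertices.tobytes())
print("edited on cpu, re-uploaded", vbo.size, "bytes to the gpu")
```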
 
No, we do agree actually :) I was talking about the actual visualisation of the model (i.e. translating the 3D data into the picture you see on screen while you work with the software) - this is done via the 3D API, which is accelerated by the GPU.

right.. most modern 3d applications have different visualization modes (other than wireframe, for instance).. soft shadows, shading, etc -- most of that stuff is coming from OpenGL and relies heavily on your graphics card..

that said, it doesn't necessarily mean the pricier your gpu, the better the performance to expect in these areas.. (in fact, i've read reports of GeForce cards outperforming or at least equalling Quadros in rhino).. you should be more interested in finding a card with robust OpenGL support rather than fps-type stuff.. (i.e. a gaming card probably won't be your best bet)

it's best to talk to other people using your specific software in order to get a feel for which card(s) you should run.. (just beware of the people with $1000+ cards touting 'you need this one!'.. of course they need to justify their investment ;) )

of course, with iris being new, it's going to be tough to get decent feedback.. we need some guinea pigs.. @ipersian, go ahead and buy an iris mbp when it comes out and let us know what you think :D
 
that said, it doesn't necessarily mean the pricier your gpu, the better the performance to expect in these areas.. (in fact, i've read reports of GeForce cards outperforming or at least equalling Quadros in rhino).. you should be more interested in finding a card with robust OpenGL support rather than fps-type stuff.. (i.e. a gaming card probably won't be your best bet)

Robust OpenGL support is just as important for games; in fact, many 3D editing tools use only a very limited subset of OpenGL operations. The main problem is that professional software has quite different demands than games. For instance, professional tools often use wireframe mode, which is not well optimised on gaming cards. This is the main difference between professional and gaming cards - the professional ones have drivers which are better suited to CAD needs. Usually, the hardware is actually identical.
 
OK, thanks. I really hope the coming rMBPs will be good enough. I didn't know how much is done by the CPU.

yeah, and also remember most of the cpu stuff is being done on a single core because it's all sequential calculations which must happen in order.. so clock speed is advantageous over the # of cores..


for example:
here's activity monitor during an intense boolean operation in rhino..

[screenshot: activity monitor during the boolean operation -- booleanCPU.jpg]


only one core chugging along while the rest sit idle.. gpu is also inactive at this point.. so the faster your clock, the faster these types of operations are going to complete..
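
to make that concrete, here's a tiny made-up python sketch of why the chain can't be split across cores.. refine() just stands in for one step of the real geometry code:

```python
# why that boolean pins a single core: each step consumes the previous
# step's output, so there's nothing left over to hand to other cores.
def refine(mesh):
    return mesh + 1              # pretend: one expensive geometry step

mesh = 0
for _ in range(4):               # step n can't start until step n-1 is done
    mesh = refine(mesh)

# unlike the render tiles earlier, this chain can't be split up --
# only a higher clock makes it finish sooner
print(mesh)
```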
 
No, we do agree actually :) I was talking about the actual visualisation of the model (i.e. translating the 3D data into the picture you see on screen while you work with the software) - this is done via the 3D API, which is accelerated by the GPU. The process of modelling itself (i.e. when you are editing the model) is done on the CPU. There are ways to accelerate that on the GPU nowadays, but I have no clue whether (or which) software actually takes advantage of it.

In other words: when it comes to rendering, the CPU does the work, accelerated by the GPU?

And when modelling, it's the CPU for calculations/operations, but the "screen" part - meaning shading etc. - is done by the GPU?

Have I understood you correctly? If that's the case, it's actually important to have a good GPU!! ;-(
 
In other words: when it comes to rendering, the CPU does the work, accelerated by the GPU?

If the software supports GPU acceleration, yes. I do not know whether any popular packages support it, though. The Iris Pro is rather good at computing stuff, as the ray-tracing benchmarks from AnandTech show.

And when modelling, it's the CPU for calculations/operations, but the "screen" part - meaning shading etc. - is done by the GPU?

Basically, yes, but do not overestimate how much work goes into shading and the like. Often, 3D packages use very simple rendering techniques to display the viewport - no advanced lighting or shading - so for small to moderately complex scenes, it's rather cheap. Complex models are a different thing, though.
 
In other words: when it comes to rendering, the CPU does the work, accelerated by the GPU?

And when modelling, it's the CPU for calculations/operations, but the "screen" part - meaning shading etc. - is done by the GPU?

Have I understood you correctly? If that's the case, it's actually important to have a good GPU!! ;-(

hmm.. it depends on what you mean by 'rendering'.. leman and i are reading that as creating 2D output of the model via (usually) another app.. such as V-Ray/Thea/Indigo/Maxwell..

that's an entirely different process with different types of calculations which can be divvied up across multiple cores, and more & more, devs are figuring out ways to get much faster renders by putting the calculations on the gpu side of things (nothing to do with graphics display on screen -- the gpu can also act as an actual calculator, in a similar way to how a cpu is used..).. as OpenCL develops and more devs catch on, we're going to see a lot more processes being handed off to the gpu (the dual gpus in the new mac pro are saying exactly this)
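
for a taste of what 'gpu as calculator' looks like, here's a minimal OpenCL example in python.. this assumes the pyopencl package and an OpenCL-capable device/driver, and the kernel is just a toy element-wise add:

```python
# hand a calculation to the gpu via OpenCL: copy data over, run a kernel
# once per element across the gpu's cores, copy the result back.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()          # picks an available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)    # read the result back to the cpu
print(np.allclose(out, a + b))          # True
```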

but the gpu power necessary to display the raw geometry on screen is not all that demanding.. or - if you spend lots of cash on a high-end gpu, you won't tell much of a difference, because the mid-grade cards can handle the workload just fine..

----------

or - if you spend lots of cash on a high-end gpu, you won't tell much of a difference, because the mid-grade cards can handle the workload just fine..
(selfquote)

well, to say the same thing a little differently..

in a complex scene with lots of geometry (trees, for instance) and shadows on etc., you will tax the gpu a lot more, BUT the cpu is also being pushed to its limit at this stage and will top out before you can top out the gpu..

your model will become sluggish because of the cpu.. and the gpu has plenty of reserve to keep up with that sluggishness.. if you had a blisteringly fast cpu (i.e. one that's definitely not available anytime soon) which could handle any sized model with no sweat, then you might start seeing some breakdown on the gpu side of things, and a more powerful card would benefit you greatly.
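
a toy model of that bottleneck logic, with made-up timings.. whichever side finishes last sets the pace you feel:

```python
# the frame rate you experience is gated by the slower of the two stages;
# speeding up the other one buys you nothing. all timings are made up.
def frame_ms(cpu_ms, gpu_ms):
    return max(cpu_ms, gpu_ms)   # the slower stage sets the pace

print(frame_ms(cpu_ms=40, gpu_ms=12))   # 40: cpu-bound -> faster gpu, no gain
print(frame_ms(cpu_ms=5,  gpu_ms=12))   # 12: now a faster gpu would help
```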
 