Well, the PC + tablet combination allows you to have the best of both worlds: touch responsiveness and mouse precision. How do you know it is difficult? Microsoft has been doing this for years, since 2012, with the desktop UI and the Modern UI. All they need is optimization.

Let's just say we know it's a stupid idea to have two things for the same purpose in one computer. It is too obvious to need explaining.

Bejeesus, you're spouting nonsense.
The mouse and the touchscreen complement each other. It is a GOOD combination that lets users operate the tablet comfortably both while holding it and while it sits on a flat surface.

The Surface does not restrict you to a single input mode at a time. My God.

The Microsoft Surface is about wrong execution, not about fusing incompatible technologies.

You do not get the word "complement", do you?

The HD 5200 is NOT as good as the GT 650M; you'd be an idiot to buy a machine with lower performance than its predecessor.

And 2012 to 2013 is ONE year; Apple has been doing it for FIVE years.
 
I completely agree with you. We pros don't care that much about battery life. While rendering a 3D scene, a fully charged rMBP 15" will drain in 30-45 minutes.
Maybe you don't care about battery life, but I know that some/many professional users do.

Please, word processor users (Microsoft Word / Pages / email / web browsing / basic Photoshop / video editing, "even smartphones can edit photos & videos"), stop calling yourselves PROs.
So what legitimises what you do as "Pro" over what somebody else does? Income quantity? The amount you stress the hardware? It is absolutely ridiculous to say that just because somebody edits videos they do not qualify as a professional user who is part of the addressable market. Or somebody who writes in Word and sends emails is not a professional. What you said was bad and you should feel bad.


Really? :cool:

Your point is to compare new technology with old technology.

If you use common sense, you will compare the (latest) Iris Pro with the (latest) Nvidia GTX 780M.
That is a horrible comparison. A 780M is a completely different animal from an Iris Pro or a GT 650M. It's somewhat difficult to compare them in a balanced, weighted manner. Generally speaking, the metric is performance per watt.

The GTX 780M is a 100W chip, the GT 650M is around 45W, and the entire Haswell chip is 47W. However, this is complicated because the Haswell chip contains a lot of other crap such as VRMs, and an entire i7 CPU. There is also a lot of dynamic clocking going on under that TDP, and the TDP itself is a little bit elastic these days. So we'll be generous and say that the Iris Pro GT3e consumes 30 of those Watts.

That would mean the GT3e is about 15% slower than a stock 650M but uses roughly 30% less power, which would mean the Intel architecture is more efficient than the Nvidia architecture. This all scales pretty evenly to the 750M, which is apparently a 50W part.
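
Here is that perf-per-watt argument as a quick back-of-the-envelope calculation, just to make the arithmetic explicit (the wattages and the ~15% figure are the rough estimates above, not measured numbers):

[CODE]
# Rough perf/W comparison using the estimates from the post above.
# None of these numbers are benchmarks; they are the assumed figures.
gt650m_perf, gt650m_watts = 1.00, 45   # stock GT 650M as the baseline
gt3e_perf, gt3e_watts = 0.85, 30       # Iris Pro ~15% slower, ~30 W of the 47 W package

gt650m_ppw = gt650m_perf / gt650m_watts   # ~0.022 performance per watt
gt3e_ppw = gt3e_perf / gt3e_watts         # ~0.028 performance per watt

print(f"GT 650M: {gt650m_ppw:.4f} perf/W")
print(f"GT3e:    {gt3e_ppw:.4f} perf/W")
print(f"GT3e perf/W advantage: {gt3e_ppw / gt650m_ppw - 1:.0%}")   # roughly +28%
[/CODE]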

Anyway, the point is that you clearly lack a basic understanding of computers, so as a pro you should know when to opt out of a discussion.
 
So I think we all want the next rMBP 15" to have 10-12 hours of battery life,
a GeForce 760M with 1 GB of VRAM,
and Wi-Fi ac.
 
Seems like grasping at straws to dodge the real question?

----------



That seems to be the scenario. We've yet to see if it's true, how powerful the 5300 is compared to Nvidia's 7xx M chips, and whether a BTO discrete GPU is in or not.

----------



For the 13" I'm in, but please with a quad core as well,
though I don't think so...

Air: dual core + low-end Iris models
13": dual core + 5100
15": quad core + 5200 (Pro "5300" as BTO)

For a 13" retina with a quad core + 5200 I could accept not having a discrete GPU,
but Apple doesn't get it and will lose lots of users like me.

For me?

No, I'm done buying Apple for a few years so I'm indifferent to what they do with either desktop or portable.

The 15" models had meh graphics before and will still have meh graphics, but with better battery life.
 
I pray to God for an Iris Pro 5200 in the new retina MBP 13" series...

excellent news!

It's only available on the quad core Haswells. Apple is trying to make the 13" rMBP thinner (according to rumors). That seems unlikely if they are also trying to increase battery life. I'm waiting for the Haswell 13" rMBPs and am resigned to them being dual core.
 
My Late 2008 Unibody MBP has a discrete GPU. It requires you to log off and back on to use it, and it makes the fans spin at 6000 rpm non-stop, while the computer performs slightly better at a very small number of things. Battery life also drops by 2 hours immediately. Since I never bothered switching to it, except a couple of times, I didn't notice that mine was faulty right from the start and kernel panicked about once every 1-2 weeks. I noticed when it was too late, out of warranty, so I've never really used it.

So yeah, if there are no nVidia chips in my computer, I won't mind!

Fewer things in a computer = fewer things that can fail.

ME TOO! My late 2008 doesn't like its discrete graphics anymore either.
 
But who cares if the new graphics are integrated? If they're faster than the current non-integrated graphics, who cares? Integrated or not, the point is that they're faster than the current ones.

(I'm not an expert, so maybe my comment is very stupid, but YOU GET my point.)

It only matters if you have the slightest desire to play any games with your $3000 computer. Most people only buy a fancy computer once every 2-3 years, so they want it to be as well-rounded as possible. The new rMBP will *NOT* have dedicated graphics. This will only matter, according to AnandTech, for graphics performance in games at higher resolutions. For normal work the Iris Pro does quite well.

It's really unfortunate that the new rMBP will not have dedicated graphics, since I also like playing games in Windows at high resolution. Just another reason I'm hugely glad I took a gamble and got a refurb 15" rMBP from Apple for $1599; I couldn't be happier. The only thing I'll be missing out on is Wireless AC, which is a huge bummer since I had already bought a new Wireless AC router... but it's something I'll have to deal with.
 
This simply does not have to be the case.

For the CPU, the dGPU, and its VRAM to talk to each other, traffic has to go over an external bus.

If all of that can be integrated on a single die, so long as the resources are spent, the communication can be WAY faster.

The only issue so far is that Intel and the rest haven't been willing to spend the die resources on the GPU side of the processor. But the demand for more CPU resources just isn't there these days; it's all about the GPU.

If you think I'm talking crap, just look at what happened to external math coprocessors, external MMUs, external L2 and L3 CPU caches, and external memory controllers. They're all on the CPU die now because it is faster.

Until recently, Intel hadn't bothered to really try to build a proper GPU and hadn't spent the die area and transistors on it (which is why their GPUs have been, to be blunt, crap).

Now that they've decided to focus on it, AMD and Nvidia should be scared.
AMD is the reason Intel is beefing up its iGPU! Just look at AMD's APUs. If Intel had bought ATI, AMD would be completely dead. At the moment we are nowhere near an iGPU beating a dedicated GPU except at the low end.
 
ok.....

Anyway, the only applications said article tested were GAMES, and GAMES ARE NOT VERY "pro". :rolleyes:
Cherry-picking results doesn't prove the Iris's superiority. I read the same AnandTech article, and there were plenty of instances where the 650M came out ahead. If I'm not mistaken, the conclusion was that the Iris is a major boost in performance but overall falls shy of beating a discrete card; however, when you factor in the power and cost savings, the Iris is the winner.
 
My Late 2008 Unibody MBP has a discrete GPU. It requires you to log off and back on to use it, and it makes the fans spin at 6000 rpm non-stop, while the computer performs slightly better at a very small number of things. Battery life also drops by 2 hours immediately. Since I never bothered switching to it, except a couple of times, I didn't notice that mine was faulty right from the start and kernel panicked about once every 1-2 weeks. I noticed when it was too late, out of warranty, so I've never really used it.

So yeah, if there are no nVidia chips in my computer, I won't mind!

Fewer things in a computer = fewer things that can fail.

The problem with that logic is that the older integrated graphics were also by Nvidia. Apple never used GMA in MacBook Pros. They used it in the white MacBooks, and it was horrible. Apple has also used AMD; the entire 2011 line was AMD. The low end of the early 2011s was pretty bad.
 
Maybe you don't care about battery life, but I know that some/many professional users do.


So what legitimises what you do as "Pro" over what somebody else does? Income quantity? The amount you stress the hardware? It is absolutely ridiculous to say that just because somebody edits videos they do not qualify as a professional user who is part of the addressable market. Or somebody who writes in Word and sends emails is not a professional. What you said was bad and you should feel bad.

Excuse me, he is not just a pro but a 'REAL PRO' by his own admission. Personally, I like the extra battery life, as I do plenty of photo editing work on planes, on trains, in airports, etc. But then I am only a pro, not a real pro.
 
The problem with that logic is that the older integrated graphics were also by NVidia. Apple never used GMA in macbook pros. They used it in the white macbooks, and it was horrible. Apple has also used AMD. The entire 2011 line was AMD. The low end of the early 2011s was pretty bad.

I've got the high end of the 2011 MBPs, and I'm telling you now, the AMD GPU is pretty bad.

It will randomly not work with the internal display in Boot Camp - it boots Windows with lines down the screen (a driver issue, not initialising it properly; closing and opening the lid will make it work, after which it is FINE) - and the GPU switching is entirely retarded (things turn it on that run just fine on an MBA, which has no discrete GPU), etc.

The GPU makes the machine run hot, chew through battery, and spin up noisy fans, and an HD 5200 is about as quick (or faster).

The GMA range has not been produced for a good 5 years or so now and dates from the bad old days when Intel really couldn't be bothered making a proper GPU.
 
Haswell GT2 (20 execution units) is 177 mm².

The GT2 GPU is exactly 1/3 of the die area, so around 59 mm² for the GPU.

Double that GPU: 177 + 59 = 236 mm².

The Haswell GT3e die is 264 mm² (according to AnandTech), so there is an additional 28 mm².

Maybe the memory bus for Crystalwell? Each Haswell core is around 16 mm², and it doesn't sound very likely for an interface described as "high speed but narrow" to occupy almost the same space as two cores. I think there is a real possibility that there are additional execution units - mind you, the EUs are only a fraction of the area dedicated (what a bold word for an iGPU, I know :p) to the GPU. I may be wrong, but they could easily put in, say, 8 additional EUs for 48 total, just as an example. It makes a lot of sense to have some level of redundancy in such a big die, so it would be no surprise if some of them are deactivated for improved yield.
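
To make that die-area estimate easier to follow, here it is as a tiny calculation (the mm² figures are just the rough numbers quoted above and from AnandTech, not official Intel die-shot measurements):

[CODE]
# Die-area back-of-the-envelope from the figures above (rough estimates, not die-shot data).
gt2_die = 177.0                 # Haswell GT2 die, 20 EUs, in mm^2
gt2_gpu = gt2_die / 3           # GPU assumed to be a third of the die -> ~59 mm^2

gt3_expected = gt2_die + gt2_gpu   # doubling the GPU area -> ~236 mm^2
gt3e_die = 264.0                   # GT3e die per AnandTech

unaccounted = gt3e_die - gt3_expected   # ~28 mm^2 left over
haswell_core = 16.0                     # one Haswell CPU core, roughly

print(f"GT2 GPU area:     {gt2_gpu:.0f} mm^2")
print(f"Expected GT3 die: {gt3_expected:.0f} mm^2")
print(f"Unaccounted:      {unaccounted:.0f} mm^2 (~{unaccounted / haswell_core:.1f} CPU cores' worth)")
[/CODE]

Whether those ~28 mm² are the Crystalwell interface, extra (possibly fused-off) EUs, or something else is exactly the open question.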

The GT2 die; the EUs (in blue) are on the far left:

[Image: Intel-Haswell-Core.jpg]
 
Okay this sounds like Apple is definitely looking to drop the dGPU.

Well, if they could match the 650M's performance that would be pretty amazing, but will they be able to?


Interestingly, they did the same with the 650M. They actually overclocked it so it had the same performance as the GTX 660M. So of course they're not going to settle for just the Iris Pro.

Nothing at all I've seen from any integrated graphics so far would indicate this is a good idea, least of all when you're driving high resolutions...

Ugh...
 
Really? :cool:

Your point is to compare new technology with old technology.

If you use common sense, you will compare the (latest) Iris Pro with the (latest) Nvidia GTX 780M.

Hey, the iPad beats a '90s supercomputer (so then we are stupid for buying an MBP / MP; instead we should be buying an iPad).

http://www.tuaw.com/2011/05/09/ipad-2-would-have-bested-1990s-era-supercomputers/

what?

The 650M tested is the one in the current-gen rMBP, while the Iris Pro 5200 is the proposed one for the next revision. It is exactly relevant to everything discussed in this thread.
 
Not having a GPU from AMD or Nvidia means that Resolve 10 probably will not work.

And neither will many other PRO programs. This is what absolutely scares me. If it doesn't have a dGPU, then you might as well throw in the towel for the editorial world: editing, motion graphics, 3D, color correction, etc.
 
The problem with that logic is that the older integrated graphics were also by Nvidia. Apple never used GMA in MacBook Pros. They used it in the white MacBooks, and it was horrible. Apple has also used AMD; the entire 2011 line was AMD. The low end of the early 2011s was pretty bad.

Apple has used Intel graphics in the 13" MacBook Pro since the 2011 models.

Intel graphics have also improved immensely since those days, and are now far superior to the old Nvidia 9400M and 320M integrated graphics.
 
If you think the 2013 MBPs will be slower than the 2012 MBPs, well, just wait for the benchmarks.
If they truly go sans dGPU, then there will be some tasks where the Haswell MBPs will be slower than the first-gen MBPs, unless Intel pulls a rabbit out of its hat.


I agree that you are confused. Increased integration is the major source of performance gains with integrated circuits. It is also the major source of cost reduction. Going from a dGPU to an iGPU will reduce costs.
Except for the fact that the Haswell CPU with Iris 5200 costs as much as, if not more than, a Haswell with a dGPU.

It's getting there. I will not object at all. It's just that the demand for a dGPU option is high, ESPECIALLY in a machine costing $1500+, whether you want to admit it or not. Have a non-dGPU version and one with it. Satisfy both camps. I'm not sure why the pro-non-dGPU crowd is so angrily against the dGPU camp. More choices would appear to be better for consumers, and I think that is the way Apple will go this round.
 
I understand that fear, but if this Iris Pro ends up being faster than the dedicated card Apple would have used anyway, then does it really matter?

Yes, it does, because MANY professional programs/plug-ins rely on a dGPU to actually work. Many AE plug-ins, color correction tools, and some Adobe products require it or run faster with it, etc.

----------

It only matters if you have the slightest desire to play any games with your $3000 computer. Most people only buy a fancy computer once every 2-3 years, so they want it to be as well-rounded as possible. The new rMBP will *NOT* have dedicated graphics. This will only matter, according to AnandTech, for graphics performance in games at higher resolutions. For normal work the Iris Pro does quite well.

Then AnandTech isn't very bright, because it DOES MATTER for the editing/graphics/3D world that does use the rMBP, especially freelancers who have to go on site.

Many AE plug-ins REQUIRE a dedicated GPU. If it doesn't have a dGPU, then the machine becomes useless for many professionals in these fields.

----------

It's getting there. I will not object at all. It's just that the demand for a dGPU option is high, ESPECIALLY in a machine costing $1500+, whether you want to admit it or not. Have a non-dGPU version and one with it. Satisfy both camps. I'm not sure why the pro-non-dGPU crowd is so angrily against the dGPU camp. More choices would appear to be better for consumers, and I think that is the way Apple will go this round.

I hope Apple has a version with a dGPU and this is only for the base model.

I have emails in to a bunch of AE plug-in companies asking whether their products will even work without a dGPU, and I've sent them the article.

We'll see what they say.
 
This simply does not have to be the case.

For the CPU, the dGPU, and its VRAM to talk to each other, traffic has to go over an external bus.

If all of that can be integrated on a single die, so long as the resources are spent, the communication can be WAY faster.

The only issue so far is that Intel and the rest haven't been willing to spend the die resources on the GPU side of the processor. But the demand for more CPU resources just isn't there these days; it's all about the GPU.

If you think I'm talking crap, just look at what happened to external math coprocessors, external MMUs, external L2 and L3 CPU caches, and external memory controllers. They're all on the CPU die now because it is faster.

Until recently, Intel hadn't bothered to really try to build a proper GPU and hadn't spent the die area and transistors on it (which is why their GPUs have been, to be blunt, crap).

Now that they've decided to focus on it, AMD and Nvidia should be scared.
What you've stated is true, and that is where the tech is going. It's just that Apple would have you believe we've already arrived, when in reality we've merely come close to matching nearly two-year-old tech (it will be two years old while Apple is still selling the non-dGPU MBP in 2014). Sorry, but that is generally not the way tech works. You don't jump up and down for trying, and failing, to match two-year-old tech.

When Intel comes out with 14nm Broadwell and Nvidia is still stuck at 28nm, then we'll probably have reached parity. Until then, it's substandard. And pushing substandard hardware for big $$$ should be ridiculed, not lauded.
 
Yes, it does, because MANY professional programs/plug-ins rely on a dGPU to actually work. Many AE plug-ins, color correction tools, and some Adobe products require it or run faster with it, etc.

But you understand that all those professional applications can just use the Intel graphics processor instead, right? It supports OpenCL just like the Nvidia 650M and the AMD cards do.

In the message you quoted, I clearly asked whether it matters if the Intel GPU is faster. And you just said it does matter by saying something that makes no sense at all.

If the Intel GPU is faster, it will be better than having a slower dedicated GPU. We just need to wait and see whether it actually is faster or not.
 
But you understand that all those professional applications can just use the Intel graphics processor instead, right? It supports OpenCL just like the Nvidia 650M and the AMD cards do.

In the message you quoted, I clearly asked whether it matters if the Intel GPU is faster. And you just said it does matter by saying something that makes no sense at all.

If the Intel GPU is faster, it will be better than having a slower dedicated GPU. We just need to wait and see whether it actually is faster or not.

Many of the plug-ins do not support OpenCL.

What don't you understand? If it doesn't have an Nvidia or AMD chip in it, the plug-ins and some programs simply will not run/work.

So YES, it DOES MATTER.
 
Many of the plug-ins do not support OpenCL.

What don't you understand? If it doesn't have an Nvidia or AMD chip in it, the plug-ins and some programs simply will not run/work.

So YES, it DOES MATTER.

Developers can just update their plugins. I don't see the problem. The Intel GPU supports all the same technologies as the dedicated cards.
 
Yes, it does, because MANY professional programs/plug-ins rely on a dGPU to actually work. Many AE plug-ins, color correction tools, and some Adobe products require it or run faster with it, etc.
So I think you're making this sound more dire than it really is:

- After Effects leverages CUDA to do rendering in a few places, and a few AE third party plug-ins do too. These will only work on Nvidia GPUs (not AMD or Intel).
- Adobe Premiere used CUDA as of CS5 to do many things. As of CS6, almost all of those things are OpenCL compatible, meaning they will work fine on AMD and Intel GPUs.
- AE and Premiere also use OpenGL to do many things, which will all work fine on an Intel GPU.
- DaVinci Resolve is probably the color suite you're talking about? It uses OpenCL for its GPU compute and therefore works fine on Intel GPUs as well.
- CUDA adoption has been limited, and there aren't that "many" programs that require it.
- dGPU has nothing to do with it; CUDA is single-vendor. Only Nvidia cards support it. You could use an AMD dGPU and it wouldn't support CUDA either. In fact, because the new Mac Pro appears to be using AMD FirePro cards, expect much software that was CUDA-only to shift to OpenCL, also thanks to OpenCL's increasing maturity.

- Most importantly, most everything will fall back to software rendering if required. It will be slow and inefficient, but it will still work.


Then AnandTech isn't very bright, because it DOES MATTER for the editing/graphics/3D world that does use the rMBP, especially freelancers who have to go on site.

Many AE plug-ins REQUIRE a dedicated GPU. If it doesn't have a dGPU, then the machine becomes useless for many professionals in these fields.
I think you're confusing dGPU with Nvidia GPU again. It wouldn't matter if there was an AMD dedicated GPU or an Intel integrated GPU in there. It still wouldn't support CUDA.


I hope Apple has a version with a dGPU and this is only for the base model.

I have emails in to a bunch of AE plug-in companies asking whether their products will even work without a dGPU, and I've sent them the article.

We'll see what they say.
They will say, "Sure, if the dGPU supports OpenGL version X or later." Alternatively, they will say, "Sure, so long as your integrated GPU is made by Nvidia (9400M?)" - but probably not, because it probably isn't.

This isn't rocket science, you know.



Many of the plug-ins do not support OpenCL.

What don't you understand? If it doesn't have an Nvidia or AMD chip in it, the plug-ins and some programs simply will not run/work.

So YES, it DOES MATTER.
Alright dude.

- OpenCL is a vendor-agnostic compute framework. It can compile your code as required for whatever available target architectures conform to its specification. That could be ARM, x86, an AMD GPU, PowerPC, an Nvidia GPU, or Intel's GPU architecture (and other kinds of processors too).
- CUDA is another compute framework, but this one was created by Nvidia and is much more closely tied to their GPU architecture. It only works on Nvidia GPUs and will not work with AMD or Intel GPUs, or really any other devices at all.
- OpenGL is an API for rendering imagery, 2D or 3D. It has a software renderer, so your windows would still draw if there were NO GPU at all in your computer (but that would be miserable). Fortunately, OpenGL is accelerated by all mainstream video cards today, including Intel's.
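
If it helps, here is a minimal sketch of what "vendor-agnostic" looks like in practice, using the pyopencl Python bindings (assuming pyopencl and an OpenCL runtime are installed; this is an illustration, not code from any of the plug-ins being discussed):

[CODE]
# Enumerate whatever OpenCL devices are present; the code doesn't care whether
# the GPU underneath is Intel, AMD, or Nvidia.
import pyopencl as cl

for platform in cl.get_platforms():            # e.g. "Apple", "Intel", "NVIDIA CUDA"
    for device in platform.get_devices():
        print(platform.name, "->", device.name)

# Build a trivial kernel for whichever device gets picked; the same OpenCL C
# source compiles for any conforming back end.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, """
__kernel void scale(__global float *x) {
    int i = get_global_id(0);
    x[i] *= 2.0f;
}
""").build()
[/CODE]

A CUDA kernel, by contrast, goes through Nvidia's toolchain and only ever runs on an Nvidia GPU, which is why CUDA-only plug-ins care about the GPU vendor rather than about integrated vs. discrete.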
 
The problem with that logic is that the older integrated graphics were also by Nvidia. Apple never used GMA in MacBook Pros. They used it in the white MacBooks, and it was horrible. Apple has also used AMD; the entire 2011 line was AMD. The low end of the early 2011s was pretty bad.
GMA was not that horrible
 