You said I was blatantly wrong, and then agreed with me. Haswell is a chip that has been around for 2 years, hence "2 years old". The GPU chip has been around for 3 years, whether or not they've slightly optimized it via software, hence "3 years old".

It is what it is. I've actually purchased a new 15" because I can't wait for Skylake. But I'm under no pretensions that Apple is using the "latest and greatest" or "highest-end" chips.

As with FCP7, Aperture and Shake, they've done the math and realized the leading edge performance-wise is not where they need to be. And judging by their market cap (and my purchase!), they're exactly right.

Oh, in that way. I misunderstood you. Well, what would you have them do instead? Use non-existent Broadwell chips? And how do you get the GPU to be three years old? As far as I am aware, no Ivy Bridge part shipped with a 40 EU GPU (or more). Or no, wait, you mean the R9 there, right? Well, it's not only software optimisations that've gone into that chip. The GPU architecture itself is three years old, fair enough, but the hardware around the primary die has been optimised. As far as I know, there have actually been tweaks to the core GCN architecture branch as well, even though they aren't shown in the GPU numbering. The biggest optimisations have been to heat generation and power consumption (two sides of the same coin), but there have also been improvements, especially on the memory front, that have helped performance. This is evident just from looking at benchmarks (and software, to prove it isn't all soft optimising), but we sadly have little detail on what they've actually done (trade secrets).

FCPX has more than caught up with FCP7 at this point, and is absolutely wonderful. It's much faster than 7 on similar hardware, and, going back to our earlier topic, has much better GPU acceleration.
Shake has somewhat been replaced by Motion.
Aperture died because Apple felt like it wouldn't compete with Lightroom anyway, and Photos could be the hobby replacement.

I want to believe.

This is wrong and just upside down. Apple's 750M is clocked at 925MHz with Turbo disabled. The reference default clock is 967MHz, and Turbo adds about 15% on top of that, which means a stock 750M runs at around 1100MHz in practice. Apple's part is the complete opposite of overclocked; it is roughly 20% underclocked.
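A quick sanity check on that arithmetic, using only the clocks quoted above (a throwaway Swift snippet, nothing authoritative):

```swift
// Rough arithmetic behind the clock discussion above (figures as quoted, nothing more).
let appleClock = 925.0        // MHz: Apple's 750M, Turbo disabled
let referenceBase = 967.0     // MHz: stock 750M default clock
let turboGain = 1.15          // Turbo adds roughly 15% on top of the base

let typicalBoosted = referenceBase * turboGain            // ≈ 1112 MHz
let deficitPercent = (1 - appleClock / typicalBoosted) * 100

print("Stock 750M with Turbo: ~\(Int(typicalBoosted)) MHz")
print("Apple's part sits about \(Int(deficitPercent))% below that")
```

It lands around 17% against the boosted clock, which is the same ballpark as the ~20% figure.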
There really is no greater focus on GPU performance. A 15W CPU + 840M/940M performs much better than the 28W chip in the 13" MBP if you bother to compare notebooks that come in both options.
The latest AMD M370X is quite clearly not top of the line, just cheaper. Maxwell is the most efficient architecture at 28nm and Apple does not see the need for the best efficiency.
Apple once found GPUs lacking, but even an Intel HD 4400 is faster than the 320M from once upon a time. These days Apple seems to consider GPUs good enough and doesn't focus on them any more than necessary.

Where did you get the data on the clocks? I'm now not so certain anymore, but fairly sure I read the Apple 750M had a base clock of 1GHz, with Turbo able to get it to 1250MHz.
A 15W CPU + a dGPU may result in more graphics perf, but the 940M has a TDP of 30W. 30+15 = way more than 28W. And the 28W chip also results in a faster CPU when the CPU isn't taxed on the GPU side as well. If we're fair though, the Iris Pro 6100 is really, really good. The biggest problem it has is memory speed, but the rMBP uses fast system memory, which works to its advantage, making the difference smaller than you make it out to be.
Granted. The 370X isn't top of the line. But neither was the 750M before it. Or even the highest-end BTO options before the retina. AMD's 370X is however a really good card. And unifying all Macs under the AMD GPU (except Intel-only machines) isn't that bad an idea, since AMD will have a lead with Vulkan, as it's built around Mantle, and their GPUs were first around the platform. Furthermore, they're still better at OpenCL than nVidia, which is a big one for Apple. If we're talking FP64, nVidia doesn't stand a chance actually.
Remember, when I say a focus on GPU, I don't mean on MacBook Airs. I mean on the machines that have GPU work as a focus, such as high-end rMBPs and Mac Pros. And the higher-end iMacs for that matter. And especially software.

Which is why they don't offer dGPUs in all the rMBPs -- you can already buy a 15" without one. But for the pros who actually need it -- like can't work without it -- they will absolutely continue to offer dGPUs in their rMBPs. Telling pros they can either work from a station or not work at all would kill their pro line of software and hardware in one fell swoop.

And this is exactly the point I was trying to make.

Just about every game out there. 3DMark Fire Strike, Cinebench R15 OpenGL, and in compute it is faster on most tasks as well.
An 850M beats an M370X in all tasks, and a 950M/960M is even faster.
AMD reduced power consumption enough to beat Kepler now, but there are no significant architecture changes that put it into the same ballpark as what Nvidia did with Maxwell.

Not OpenCL compute. DirectCompute (not on OS X) and CUDA, fair enough, but not OpenCL. And not FP64... not at all.
Now, I won't make it sound like I completely disagree with you, as I would've preferred a Maxwell card, but it's not like the AMD GPU is bad. Far from it.

It could also be that Apple thought AMD was a better fit. At this point, we are talking 35-40W GPUs. The much faster Nvidia GPUs that often get compared are 50-70-100-120W. Not comparable at all.


Gamers are obsessed with Nvidia, so I would like to see an Nvidia option and an AMD option. For a lot of non-game benchmarks, AMD has generally been competitive or better. (At least, until the Titan X ($1000) came along on the desktop.)**

**(This is a total aside, but Nvidia needs to get over its dumb idea that people should pay extra for 64-bit FP. In the engineering/scientific world, 32-bit (only) FP went out with tricorn hats. 32-bit is often still used when there is a specific performance (esp. memory) requirement, but typical numerical analysis defaults to 64 bits.)
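To make the single-vs-double precision point concrete, here's a throwaway Swift snippet (the values are arbitrary, chosen only to show the drift):

```swift
// Sum 0.1 ten million times in single and double precision.
// The exact true answer is 1,000,000.
var single: Float = 0
var double: Double = 0
for _ in 0..<10_000_000 {
    single += 0.1
    double += 0.1
}
print("Float32: \(single)")   // drifts visibly away from 1,000,000 (~7 significant digits)
print("Float64: \(double)")   // stays at ~1,000,000 (~15-16 significant digits)
```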

Uhmmm. This is a sad thing, but the R9 370X has a max TDP of 75W. Like the Scenario TDPs from Intel, it won't really reach that, but it's definitely not a 35W unit.
AMD has a better foot in the door with the professional market, correct. Quadros aren't half bad though. If you're comparing FirePro with Quadro, in most cases nVidia still wins, although by way less. If you compare Radeon with GeForce on pro tasks, however, Radeon comes out favourably, which is why I agree somewhat with your first quoted paragraph.
Although I must correct you that the Titan X delivers way worse FP64 performance than the older Titan. The Titan Z would be your best bet if that's what you're going for. The Titan X is completely graphics oriented.

How can Apple not upgrade to the Broadwell chip for the 15" when the 13" has better specs (newer chip, faster RAM) than the 15" flagship rMBP? The upgrade to Skylake is probably going to be a complete redesign of the MBP, but doing the upgrade to the Broadwell chip would make it comparable to the 13" and could be accomplished with the current form.

The 15" is already way better than the 13". The die shrink didn't make that massive a difference. The GPUs are way better in the 15" (if not Radeon, the Iris Pro eDRAM makes a huge difference. More than makes up for:) and the memory in the 13" mostly benefits its GPU. Not so much that CPU, which isn't memory starved very often.
As others have stated, Skylake also won't require a design change. Just a new logic board.

OMG no. No more "thinner" crap please. I want MORE POWER (especially GPU) not THINNER. Apple, please keep your thin notebooks in a crapperrific thin lineup like that new ghastly 12-inch Macbook with one connector (including power)....god awful POS that it is. That notebook might as well be a glorified iPad crossover and given how useless the keyboard is, they might as well have included touch screen control. No 15"+ Macbook Pro should resemble that monstrosity, IMO.



And yet Ethernet is still faster and more reliable than ANY WiFi connection in existence and a desktop simply doesn't need WiFi. It needs speed and reliability. I can't get the theoretical limits of WiFi in the same room as the router even. The car is over 100 years old. That doesn't make it obsolete. It's called UPDATES to technology. You might as well say the transistor is obsolete since it's OLD OLD OLD. Show me something better.



Adapters for a dead technology is fine. Show me the money. You'll still need whatever replacement connectors on the new machine. That Macbook has ONE connector for EVERYTHING including power and that is not only stupid, it's ASININE. At the very least, they should have included more USB-C connectors (dead minimum of two to even be functional in the slightest). WTF is the point of having the thinnest lightest most compact notebook around if the design is stymied by having to carry around a whole fracking BAG of adapters to do anything with it? It defeats the point of portable.

My 2008 MBP was portable. It had every connector a person could want on it including a removable battery! (2 USB, 1 FW800, 1 FW400, 1 Gigabit Ethernet, full-size DVI, Audio IN & OUT on separate plugs that also did digital and a full blown expansion port that let me add USB3 to it years later that later models couldn't utilize no matter what). THAT was a GREAT design. Everything that has come since has been a compromise in one or more areas.



iPhone sized computers? Hell, a Mac Mini isn't much larger than an iPhone already (I think an AppleTV Gen2/3 is already smaller overall). Google glasses you wear. We already have the iPhone for that matter (it's a computer believe it or not). The fact is an equivalent desktop of the same time frame will ALWAYS be FASTER, often MUCH FASTER. While Joe "I don't care about computers" may be happy with just a tiny POS pocket computer, no serious computer "nerd/hobbyist/enthusiast" would ever find having JUST one of those acceptable. The fastest thing out there isn't fast enough and probably never will be.

Have you seen the Apple TV? It's way bigger than an iPhone! (Although it doesn't have to be). Although I agree with the rest of your last paragraph.

On the peripheral discussion:
When did you ever plug anything into that 2008 MBP? Cause I know it's very rare I plug things into mine. My iMac neither. Only thing I miss from older models is Line-in audio.
Don't understand your wi-fi situation. I have 60Mbps at home, and I can get all of that through 802.11n, and I have managed 250Mbps on 802.11n at another location. I could not make use of that bandwidth with the hardware I have right now anyway (on the sending side I could saturate it, but the receiver wouldn't be able to keep up).
And my desktops definitely need wi-fi, because I can't drag wires through my house. (as in the people I live with won't have it).
Ports I personally would need:
Card reader, audio (preferably both ways), charging, 1 USB (Type A), and a Thunderbolt port or two. Actually, two Thunders. One TB2 and one TB3. Ever thought about how TB is a confusing acronym? It could also be terabyte. Anyhow, I realise others may need other ports, but for my sake, you may remove every other port. Or if you give me the adapter, take the USBs from me as well, and let's do it over TB. I only plug things in like twice a month at most.

I still haven't gotten why people hate the keyboard so much. I haven't tried it yet though. Although I agree the Pros shouldn't look like it. I'd say the silver one looks alright, but for crying out loud the gold is gross.
Can't imagine I'd like space grey either, although I haven't seen it.



So can anybody tell me, would I be able to upgrade my Haswell MBP to a Broadwell GPU?
Well, there's more than one Broadwell GPU, and there's more than one Haswell GPU. There are obviously things you can't do, but if you just wait for a minute, I'll fix it for you...
There. Your MacBook Pro now has the Broadwell GPU. Well, almost, it's still on a 22nm process, but the GPU is essentially the same. 48 EUs on both Haswell and Broadwell (GT3 and GT3e, aka Iris and Iris Pro).
 
Not OpenCL compute. DirectCompute (not on OS X) and CUDA, fair enough, but not OpenCL. And not FP64... not at all.
Now, I won't make it sound like I completely disagree with you, as I would've preferred a Maxwell card, but it's not like the AMD GPU is bad. Far from it.

I would be hesitant to run something on these notebooks that leverages OpenCL for an extended period of time. You're likely to drain your battery that way, even plugged in.
 
AMD's 370X is however a really good card. And unifying all Macs under the AMD GPU (except Intel-only machines) isn't that bad an idea, since AMD will have a lead with Vulkan, as it's built around Mantle, and their GPUs were first around the platform. Furthermore, they're still better at OpenCL than nVidia, which is a big one for Apple. If we're talking FP64, nVidia doesn't stand a chance actually.

And don't forget the support for 5K displays, which is currently an AMD-only feature in mobile GPUs.
 
I would be hesitant to run something on these notebooks that leverages OpenCL for an extended period of time. You're likely to drain your battery that way, even plugged in.

Isn't it the same with something that uses the same amount of the GPU? And who said anything about an extended period of time? What about race to sleep? OpenCL is very useful for Final Cut, and it won't exactly run forever. But even with extended workloads, I don't see how OpenCL would drain more power than graphics-based work, since it doesn't use all the parts of the GPU.

And don't forget the support for 5K displays, which is currently an AMD-only feature in mobile GPUs.

That's a brilliant point. Perhaps a bigger reason than any I've stated.
 
Uhmmm. This is a sad thing, but the R9 370X has a max TDP of 75W. Like the Scenario TDPs from Intel, it won't really reach that, but it's definitely not a 35W unit.

I stand corrected. Some early descriptions put the power at 35-40W. Since then, the numbers have gone up, and I've seen 50, 60, and even 75W (source). Ouch!

AMD has a better foot in the door with the professional market, correct. Quadros aren't half bad though. If you're comparing FirePro with Quadro, in most cases nVidia still wins, although by way less. If you compare Radeon with GeForce on pro tasks, however, Radeon comes out favourably, which is why I agree somewhat with your first quoted paragraph.
Although I must correct you that the Titan X delivers way worse FP64 performance than the older Titan. The Titan Z would be your best bet if that's what you're going for. The Titan X is completely graphics oriented.

It depends on the models being compared, of course. The point I was trying to make is that Nvidia seems to deliberately withhold high-performance FP64 and provide it only for the super high-priced models. Nvidia does seem to perform better across the spectrum in games, especially games that combine high-res backgrounds and high-speed action requiring high FPS. In the engineering and scientific world, 64-bit FP has long been the norm, with 32-bit reserved for when you really need to save memory. In other words, there is nothing "special" about FP64, despite what Nvidia thinks, but Nvidia clearly shines for a particular type of game.

Have you seen the Apple TV? It's way bigger than an iPhone! (Although it doesn't have to be).

Seen one? I have one! It doesn't weigh enough! One of the requirements of a hobby TV device is to plug it into an HDMI cable. The HDMI cable weighs enough, and is stiff, and it pulls the ATV3 off the table, requiring that the whole thing be tied down. Sure, you could shrink it further, but it ought to be bigger and heavier, not smaller. It makes sense for cell phones to be small. It doesn't make sense for devices that need connectors to be that small. Sometimes smaller is not better.

On the peripheral discussion:
When did you ever plug anything into that 2008 MBP? Cause I know it's very rare I plug things into mine. My iMac neither. Only thing I miss from older models is Line-in audio.

I plug my MBP in every work day using the TB-ethernet dongle. And, my USB keyboard, and mouse, and DVI dongle for my high-res display. I only use the built-in keyboard and trackpad when I'm in a meeting or working remotely. The Apple trackpads are good -- far, far better than most Windows/PC trackpads. The keyboards have gotten progressively more annoying since 2008.

I still haven't gotten why people hate the keyboard so much. I haven't tried it yet though. Although I agree the Pros shouldn't look like it. I'd say the silver one looks alright, but for crying out loud the gold is gross.
Can't imagine I'd like space grey either, although I haven't seen it.

I learned how to touch type 50 years ago. Laptop keyboards pretty much all are irritating.
 
Have you seen the Apple TV? It's way bigger than an iPhone! (Although it doesn't have to be). Although I agree with the rest of your last paragraph.

Have you seen my signature? I own three AppleTVs. Define "bigger". Gen2 and Gen3 ATVs are shorter than ALL iPhone/iPod Touch models ever made (3.9 inches for the ATV vs a typical ~4.4 inches for early iPhones and iPod Touch models) and significantly shorter than a Gen5 iPod Touch and newer iPhones (5.4 inches), let alone the new iPhone 6S, which is almost as wide (3.9 ATV vs 3.6 iPhone 6S) and 6.2 inches in length, nearly 50% longer than current Apple TVs. Yes, they are thinner, but the thickness of an ATV doesn't mean much in a home environment. It is not going in your pocket, after all.

On the peripheral discussion:
When did you ever plug anything into that 2008 MBP?

Do you want every date? :confused:

Cause I know it's very rare I plug things into mine. My iMac neither. Only thing I miss from older models is Line-in audio.

This is because so many people assume their habits are EVERYONE'S habits. My 2008 MBP is currently sitting on top of my Roland Digital Piano in my living room. Currently plugged into it are the power plug to the charger at the nearest outlet beside the piano, a FireWire cable to the FW400 port that connects it to a PreSonus, which has its own connections running to the Roland (for sound capture) and the stereo (to output it to my Carver ribbon-based high-end playback system), and an XLR connection to a microphone mounted on a boom stand (for vocals and acoustic guitar capture). It has additional connections for another XLR device and twin RCAs (e.g. for my electric guitar).

Those items typically stay in the room for recording. It's currently being used for transferring records to digital via the PreSonus at 24/192 and then transferred wirelessly to my MacMini for processing (click/noise removal and slicing into a continuous digital album format with tracks, etc. and then added to iTunes as Apple Lossless with album covers, etc.) When I'm set up for recording my own music, an additional MidiMan MIDI interface is connected to the left USB port that runs to the Roland Piano for keyboard/pedal control and I typically connect a Microsoft 5-button mouse to the right USB port for easier navigation in Logic Pro. When I need to backup the internal 500GB 7200 RPM hard drive, I connect an external 500GB Western Digital drive via the FW800 port (no need to disconnect the FW400 cable to the PreSonus as it has ports for both).

When not being used for music, it is docked in the den with its own desk with a 27" monitor (DVI connection used on the side), Klipsch THX 2.1 speakers (mini-RCA jack used), and a USB hub on the right USB port to which a full-sized keyboard, a Microsoft 5-button mouse and a Logitech web camera with microphone (that mounts on the 27" monitor) are always connected. A Gigabit Ethernet cable connects to the MBP's Ethernet port for Gigabit transfers over the local network (much faster than WiFi), which was particularly handy when I still used an upgraded PowerMac G4 (1.8GHz G4 with 1.5GB RAM, a SATA card with dual 1.5TB 7200 RPM drives, a USB2 card and Radeon 9800 graphics with its own FW400 built in, running Leopard) as the house server for the house-wide AppleTV system. I would convert DVDs to AppleTV format on either the MBP or my Windows machine (or often BOTH at the same time) and then transfer the files over to the PowerMac via Gigabit (the Windows machine also has Gigabit Ethernet). This transfers large files typically 5-8x faster than 100 Ethernet (let alone WiFi), and was limited more by the PowerMac's hard drive, which was SATA and still did 120MB/sec at the time.

The expansion port on the MBP has a USB 3.0 card installed in it much of the time these days. That allows me to connect a 3TB USB 3.0 drive directly for file transfers, if desired where I can typically get 120MB/sec over USB3 (and around 85MB/sec over FW800 for the backup drive). If I need more battery power, I can swap a backup battery out in no time.

So, a BETTER question would be WHICH PORT HAVEN'T I USED??? The answer would be I've used ALL of them except the microphone jack (don't need it with the web cam as it has its own over USB and the PreSonus has professional XLR mic connections on it for music recording).

Don't understand your wi-fi situation. I have 60Mbps at home, and I can get all of that through 802.11n, and I have managed 250Mbps on 802.11n at another location. I could not make use of that bandwidth with the hardware I have right now anyway (on the sending side I could saturate it, but the receiver wouldn't be able to keep up).

That's because you apparently don't do anything of consequence with your computer. You connect to the Internet. Nothing I've talked about above has even MENTIONED the Internet. I bought that computer for music recording and production (discussed above). I've also used it for video editing (I used to have a professional Panasonic 8-head VCR digital capture (FW400) and a Hi8 camcorder with a FireWire 400 transfer system to convert and transfer all my old 80s and 90s home videos to it and edit them with Final Cut Pro at the den desk). Once I finished with all of them, I sold the system. Moving giant video files around to my server storage (4.5TB back in 2009 on the G4 PowerMac and up to 11TB now on the Mini) is where WiFi simply doesn't cut it. Gigabit Ethernet blows it away (typically 3-6x faster than 802.11n depending on signal levels). Instead of a half hour to move one large movie file over, it takes 5 minutes (files transfer at about 850-900Mbps real world). It's almost as fast as connecting the USB 3.0 media drives directly via USB 3.0 (the Windows machine doesn't do USB 3.0, but it does have Gigabit). Also, WiFi does VERY poorly at maintaining fast connections over time with a large file (e.g. noise/interference levels change, often slowing/speeding the transfer along the way, particularly further away from the router). An Ethernet cable to the other side of the house on the bottom floor is just as fast as a 6-foot cable in the den straight from the router.
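For anyone who wants to sanity-check those transfer times, a quick back-of-the-envelope Swift sketch (the 20 GB file size and the link speeds are illustrative assumptions, roughly matching the numbers in this thread):

```swift
import Foundation

// How long a hypothetical 20 GB video file takes over different links.
let fileBits = 20.0 * 8 * 1e9   // 20 GB expressed in bits

let links: [(name: String, bitsPerSecond: Double)] = [
    ("802.11n, decent real-world signal (~150 Mbps)", 150e6),
    ("Gigabit Ethernet, real world (~900 Mbps)",      900e6),
    ("USB 3.0 external drive (~120 MB/s)",            120 * 8 * 1e6),
]

for link in links {
    let minutes = fileBits / link.bitsPerSecond / 60
    print(String(format: "%@: %.1f min", link.name, minutes))
}
// Gigabit Ethernet moves the file in roughly 3 minutes; typical 802.11n takes closer to 18.
```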

WiFi is handy for flexibility (you can connect to it on your backyard porch with no wires), but pales compared to a desk that has a wired Gigabit Ethernet connection available on it. And if your desk/workstation isn't being moved around the house, why not take 2 minutes to run an Ethernet cable to it and get that speed and consistency? It's also more secure for a LAN network, IMO, as any broadcast signal can potentially be intercepted and decrypted/hacked. It's much harder to hack wired LANs without breaking into the house first (assuming you're not sending the info over the Internet).

And my desktops definitely need wi-fi, because I can't drag wires through my house. (as in the people I live with won't have it).

The solution is to get your own house. Most of my Gigabit network is around the den behind the desks (desks surround 80% of the room with three permanent workstation setups plus two network printers including inkjet and laser). I still have WiFi for the rest of the house (iOS/ATV devices, guests, etc. or sitting outside with the notebook). In other words, it's not an either/or issue. The notebook picks Gigabit first when available and defaults back to the 2.4GHz WiFi network otherwise (with 5GHz an option if I'm not too far away from the router, as 5GHz doesn't do as well through lots of walls/floors).
 
Isn't it the same with something that uses the same amount of the GPU? And who said anything about an extended period of time? What about race to sleep? OpenCL is very useful for Final Cut, and it won't exactly run forever. But even with extended workloads, I don't see how OpenCL would drain more power than graphics-based work, since it doesn't use all the parts of the GPU.

I'm unfamiliar with race to sleep, but I'll look that up later. It is useful for Final Cut. Again perhaps I'm missing some implementation details here. I'm also not sure what you mean by graphics based work. 2D work taxes it very little. 3D applications with a full viewport depend on the type of interaction. They do not consume as much power as a long stream of computation, because it isn't fully utilized over every cycle.

That's a brilliant point. Perhaps a bigger reason than any I've stated.

I thought they both claimed 5K support on both 15" models, but I just checked it again. Only the AMD model supports it.
 
I stand corrected. Some early descriptions put the power at 35-40W. Since then, the numbers have gone up, and I've seen 50, 60, and even 75W (source). Ouch!



It depends on the models being compared, of course. The point I was trying to make is that Nvidia seems to deliberately withhold high-performance FP64 and provide it only for the super high-priced models. Nvidia does seem to perform better across the spectrum in games, especially games that combine high-res backgrounds and high-speed action requiring high FPS. In the engineering and scientific world, 64-bit FP has long been the norm, with 32-bit reserved for when you really need to save memory. In other words, there is nothing "special" about FP64, despite what Nvidia thinks, but Nvidia clearly shines for a particular type of game.



Seen one? I have one! It doesn't weigh enough! One of the requirements of a hobby TV device is to plug it into an HDMI cable. The HDMI cable weighs enough, and is stiff, and it pulls the ATV3 off the table, requiring that the whole thing be tied down. Sure, you could shrink it further, but it ought to be bigger and heavier, not smaller. It makes sense for cell phones to be small. It doesn't make sense for devices that need connectors to be that small. Sometimes smaller is not better.



I plug my MBP in every work day using the TB-ethernet dongle. And, my USB keyboard, and mouse, and DVI dongle for my high-res display. I only use the built-in keyboard and trackpad when I'm in a meeting or working remotely. The Apple trackpads are good -- far, far better than most Windows/PC trackpads. The keyboards have gotten progressively more annoying since 2008.



I learned how to touch type 50 years ago. Laptop keyboards pretty much all are irritating.

From the top...

Yep. It's pretty bad on that front. I used the same source for my data. It must be clarified though that in real-world scenarios, it won't hit that. It'll perform like a 50W device 95% of the time. (I have, however, forgotten where I read this.)

Very true. But with the exception of the original Titan series, they're not just withholding it from all but the high end. Even the high end doesn't get it... Quadro only.

Alright, you have somewhat of a point here. I've never had that problem with my TV. The HDMI never tips it or anything, but my brother has had your issue. He ended up using adhesive putty to secure it to the soundbar it's on. However, even with this in mind, we both agree it would be prettier if it were smaller, and the point of a device like the Apple TV is also to look good, since it's visible to everyone, and front and center in your living room.

Apple trackpads are more than good. They're brill.
Your keyboard and mouse -> Bluetooth. Cable to the screen -> the screen houses Ethernet etc. Problem solved with just a single cable.

I don't see your problems with lappy keyboards. When moving from a 2011 model to the retina, I had to adjust to the key travel, and it pissed on my beans to begin with, but now that I'm used to it, I kinda like it, and everything else feels weird. It's all about what you're used to.


Have you seen my signature? I own three AppleTVs. Define "bigger". Gen2 and Gen3 ATVs are shorter than ALL iPhone/iPod Touch models ever made (3.9 inches for the ATV vs a typical ~4.4 inches for early iPhones and iPod Touch models) and significantly shorter than a Gen5 iPod Touch and newer iPhones (5.4 inches), let alone the new iPhone 6S, which is almost as wide (3.9 ATV vs 3.6 iPhone 6S) and 6.2 inches in length, nearly 50% longer than current Apple TVs. Yes, they are thinner, but the thickness of an ATV doesn't mean much in a home environment. It is not going in your pocket, after all.


Do you want every date? :confused:


This is because so many people assume their habits are EVERYONE'S habits. My 2008 MBP is currently sitting on top of my Roland Digital Piano in my living room. Currently plugged into it are the power plug to the charger at the nearest outlet beside the piano, a FireWire cable to the FW400 port that connects it to a PreSonus, which has its own connections running to the Roland (for sound capture) and the stereo (to output it to my Carver ribbon-based high-end playback system), and an XLR connection to a microphone mounted on a boom stand (for vocals and acoustic guitar capture). It has additional connections for another XLR device and twin RCAs (e.g. for my electric guitar).

Those items typically stay in the room for recording. It's currently being used for transferring records to digital via the PreSonus at 24/192 and then transferred wirelessly to my MacMini for processing (click/noise removal and slicing into a continuous digital album format with tracks, etc. and then added to iTunes as Apple Lossless with album covers, etc.) When I'm set up for recording my own music, an additional MidiMan MIDI interface is connected to the left USB port that runs to the Roland Piano for keyboard/pedal control and I typically connect a Microsoft 5-button mouse to the right USB port for easier navigation in Logic Pro. When I need to backup the internal 500GB 7200 RPM hard drive, I connect an external 500GB Western Digital drive via the FW800 port (no need to disconnect the FW400 cable to the PreSonus as it has ports for both).

When not being used for music, it is docked in the den with its own desk with a 27" monitor (DVI connection used on the side), Klipsch THX 2.1 speakers (mini-RCA jack used), and a USB hub on the right USB port to which a full-sized keyboard, a Microsoft 5-button mouse and a Logitech web camera with microphone (that mounts on the 27" monitor) are always connected. A Gigabit Ethernet cable connects to the MBP's Ethernet port for Gigabit transfers over the local network (much faster than WiFi), which was particularly handy when I still used an upgraded PowerMac G4 (1.8GHz G4 with 1.5GB RAM, a SATA card with dual 1.5TB 7200 RPM drives, a USB2 card and Radeon 9800 graphics with its own FW400 built in, running Leopard) as the house server for the house-wide AppleTV system. I would convert DVDs to AppleTV format on either the MBP or my Windows machine (or often BOTH at the same time) and then transfer the files over to the PowerMac via Gigabit (the Windows machine also has Gigabit Ethernet). This transfers large files typically 5-8x faster than 100 Ethernet (let alone WiFi), and was limited more by the PowerMac's hard drive, which was SATA and still did 120MB/sec at the time.

The expansion port on the MBP has a USB 3.0 card installed in it much of the time these days. That allows me to connect a 3TB USB 3.0 drive directly for file transfers, if desired where I can typically get 120MB/sec over USB3 (and around 85MB/sec over FW800 for the backup drive). If I need more battery power, I can swap a backup battery out in no time.

So, a BETTER question would be WHICH PORT HAVEN'T I USED??? The answer would be I've used ALL of them except the microphone jack (don't need it with the web cam as it has its own over USB and the PreSonus has professional XLR mic connections on it for music recording).



That's because you apparently don't do anything of consequence with your computer. You connect to the Internet. Nothing I've talked about above has even MENTIONED the Internet. I bought that computer for music recording and production (discussed above). I've also used it for video editing (I used to have a professional Panasonic 8-head VCR digital capture (FW400) and a Hi8 camcorder with a FireWire 400 transfer system to convert and transfer all my old 80s and 90s home videos to it and edit them with Final Cut Pro at the den desk). Once I finished with all of them, I sold the system. Moving giant video files around to my server storage (4.5TB back in 2009 on the G4 PowerMac and up to 11TB now on the Mini) is where WiFi simply doesn't cut it. Gigabit Ethernet blows it away (typically 3-6x faster than 802.11n depending on signal levels). Instead of a half hour to move one large movie file over, it takes 5 minutes (files transfer at about 850-900Mbps real world). It's almost as fast as connecting the USB 3.0 media drives directly via USB 3.0 (the Windows machine doesn't do USB 3.0, but it does have Gigabit). Also, WiFi does VERY poorly at maintaining fast connections over time with a large file (e.g. noise/interference levels change, often slowing/speeding the transfer along the way, particularly further away from the router). An Ethernet cable to the other side of the house on the bottom floor is just as fast as a 6-foot cable in the den straight from the router.

WiFi is handy for flexibility (you can connect to it on your backyard porch with no wires), but pales compared to a desk that has a wired Gigabit Ethernet connection available on it. And if your desk/workstation isn't being moved around the house, why not take 2 minutes to run an Ethernet cable to it and get that speed and consistency? It's also more secure for a LAN network, IMO, as any broadcast signal can potentially be intercepted and decrypted/hacked. It's much harder to hack wired LANs without breaking into the house first (assuming you're not sending the info over the Internet).



The solution is to get your own house. Most of my Gigabit network is around the den behind the desks (desks surround 80% of the room with three permanent workstation setups plus two network printers including inkjet and laser). I still have WiFi for the rest of the house (iOS/ATV devices, guests, etc. or sitting outside with the notebook). In other words, it's not an either/or issue. The notebook picks Gigabit first when available and defaults back to the 2.4GHz WiFi network otherwise (with 5GHz an option if I'm not too far away from the router, as 5GHz doesn't do as well through lots of walls/floors).

That's a lot to respond to...
I rarely pay much attention to signatures, especially the ones that are just listings of devices. I'm sure you can understand why. Once you've read one: "I have 3 Macs!" You've read them all.
My definition of "bigger", is neither width, height, or length. It's overall volume... Why do I want it smaller? Because it would look better, and it's right there, on the TV stand.

No, the next few paragraphs you wrote will do. But out of curiosity, could you actually give me the dates?

I won't comment on very much of what else you wrote, but I included it in the quote, because it emphasises my following point:

"This is because so many people assume their habits are everyone's habits."
... Cause you totally didn't just do that yourself... How many people do you think have your use case compared to mine?... I thought so. So what should Apple focus on? Pleasing the most, or the niche? Right... Then the niche can get hubs and adapters. Two Thunderbolt ports can deliver all you need and more with hubs, so you're set, and I get a better computer that weighs less and has a smaller environmental impact.

I'm actually a bit hurt you'd make assumptions, and say I use my computer for essentially nothing. I'm a user of both FCP and Logic myself actually, well, used to be of Logic, now only FCP. I have backups running over WiFi, and yes, they can take a few days, but I just let it run and pay no mind. I doubt I'll lose that data in the four-five days it takes to back up. And between devices on my desk I run cables, but not to a router or anything like that. For device to device networks, Thunderbolt is way better than Ethernet anyway, so why you need it, I still don't see. Yeah, alright, 100G Ethernet beats Thunderbolt, but you don't have that I assume.

Did you just say the solution is to get my own house? Right, alright, I get you. Your solution is to become a brazillionaire, and have Apple build you a custom laptop, with 100 ports, for a trillion dollars... Did you get that joke, and the point it was trying to make?

I'm unfamiliar with race to sleep, but I'll look that up later. It is useful for Final Cut. Again perhaps I'm missing some implementation details here. I'm also not sure what you mean by graphics based work. 2D work taxes it very little. 3D applications with a full viewport depend on the type of interaction. They do not consume as much power as a long stream of computation, because it isn't fully utilized over every cycle.

I thought they both claimed 5K support on both 15" models, but I just checked it again. Only the AMD model supports it.

Now, my understanding of GPGPU at a hardware level is not as great as I'd want it to be, but surely CL doesn't really use, let's say, the TMUs in a GPU? So it shouldn't necessarily drain more power than, let's say, a game, since neither uses the full GPU every cycle, right?
Race to sleep is just the philosophy of doing a task as fast as possible, boosting as much as possible, and for a limited time, totally overshooting any clock you could realistically sustain yourself at, so that you can stop working quickly again.
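To put rough numbers on the race-to-sleep idea, a toy Swift model (every figure is invented purely for illustration, not measured):

```swift
// Compare total energy for "finish fast, then sleep" vs "run slow the whole time".
struct Strategy {
    let name: String
    let activePower: Double   // watts while working
    let activeTime: Double    // seconds needed to finish the task
    let idlePower: Double     // watts while idle/asleep
}

let window = 60.0  // seconds we observe the machine for

let strategies = [
    Strategy(name: "race to sleep",   activePower: 28, activeTime: 10, idlePower: 1),
    Strategy(name: "slow and steady", activePower: 10, activeTime: 60, idlePower: 1),
]

for s in strategies {
    let idleTime = max(0, window - s.activeTime)
    let joules = s.activePower * s.activeTime + s.idlePower * idleTime
    print("\(s.name): \(joules) J over \(Int(window)) s")
}
// race to sleep:   28*10 + 1*50 = 330 J
// slow and steady: 10*60        = 600 J
```

Finishing fast and then sleeping can end up costing less energy overall, which is the whole point.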

Since you seem to know GPUs, do you have any clue how Metal will work on Macs? And what it will mean for Vulkan support and existing OpenGL implementations?
 
So, in the new OS X 10.11 preview, OpenCL and Vulkan became Metal for Mac. That's it.

I doubt that's how it works. I assume the Metal API closely resembles the iOS version, and not Vulkan. It would surely not be OK for Apple to rename Vulkan something else and call it their own.
Besides, they wouldn't have had time to build the system on top of it if it were all Vulkan. Its finalisation was way too recent for that.
 
Now, my understanding of GPGPU at a hardware level is not as great as I'd want it to be, but surely CL doesn't really use, let's say, the TMUs in a GPU? So it shouldn't necessarily drain more power than, let's say, a game, since neither uses the full GPU every cycle, right?
Race to sleep is just the philosophy of doing a task as fast as possible, boosting as much as possible, and for a limited time, totally overshooting any clock you could realistically sustain yourself at, so that you can stop working quickly again.

I'm not sure regarding TMUs. There are probably valid reasons to use such a thing. GPUs are leveraged in a lot of computation involving large matrices, so the algorithms don't differ that much. I just don't know enough about how they're implemented to provide any kind of answer there.

Since you seem to know GPUs, do you have any clue how Metal will work on Macs? And what it will mean for Vulkan support and existing OpenGL implementations?

"Know" is pretty generous. I know a fair amount about graphics at a general level. I know very little about GPUs, but I've tested the last several generations of notebooks, and I've personally experienced battery drain while plugged in. That is always frustrating. What I was referring to before was if you use something such as Maya, Cinema 4D, or anything else with a 3D viewport that supports modern shading features, you won't necessarily encounter that problem. If you're leveraging something that uses the gpu for certain render passes, such as the raytracer in after effects, that really drains the battery to the point where I wouldn't want to deal with it. That kind of stuff can take a long time, so in some cases you might return to find it in sleep mode.

I don't know that they'll implement Metal on Macs, as their hardware control isn't quite at the same level there. It's worth noting that iOS devices never supported OpenCL. Apple did implement compute shaders some time ago, relatively soon after they showed up in the OpenGL ES specification, yet no OpenCL. Compute shaders seem to use most of the hardware.

Amusingly I wrote most of this post earlier and just spotted the news that Metal has arrived on OSX. Did I mention that I hate proprietary graphics APIs?
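Since Metal on OS X just came up: for anyone wondering what a compute dispatch even looks like, here's a minimal Swift sketch (the kernel name and the doubling operation are made up for illustration; it only shows the general shape of the API, not how Apple's own frameworks use it):

```swift
import Metal

// Tiny made-up kernel: double every float in a buffer on the GPU.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void double_values(device float *data [[buffer(0)]],
                          uint id [[thread_position_in_grid]]) {
    data[id] *= 2.0;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "double_values")!)

var input: [Float] = [1, 2, 3, 4]
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

let queue = device.makeCommandQueue()!
let commands = queue.makeCommandBuffer()!
let encoder = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(MTLSize(width: 1, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: input.count, height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()

let result = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
print((0..<input.count).map { result[$0] })   // [2.0, 4.0, 6.0, 8.0]
```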
 
Metal could be a licensed Mantle, just re-branded with renamed APIs.. who knows. If not, Apple built its API in record time.

Quite possible. Apple could also have worked on it for longer than we think. If it's re-branded Mantle though, it's re-branded Vulkan essentially. But they'd have had to do work to make it work with more GPUs than Mantle.

I'm not sure regarding TMUs. There are probably valid reasons to use such a thing. GPUs are leveraged in a lot of computation involving large matrices, so the algorithms don't differ that much. I just don't know enough about how they're implemented to provide any kind of answer there.



"Know" is pretty generous. I know a fair amount about graphics at a general level. I know very little about GPUs, but I've tested the last several generations of notebooks, and I've personally experienced battery drain while plugged in. That is always frustrating. What I was referring to before was if you use something such as Maya, Cinema 4D, or anything else with a 3D viewport that supports modern shading features, you won't necessarily encounter that problem. If you're leveraging something that uses the gpu for certain render passes, such as the raytracer in after effects, that really drains the battery to the point where I wouldn't want to deal with it. That kind of stuff can take a long time, so in some cases you might return to find it in sleep mode.

I don't know that they'll implement Metal on Macs, as their hardware control isn't quite at the same level there. It's worth noting that iOS devices never supported OpenCL. Apple did implement compute shaders some time ago, relatively soon after they showed up in the OpenGL ES specification, yet no OpenCL. Compute shaders seem to use most of the hardware.

Amusingly I wrote most of this post earlier and just spotted the news that Metal has arrived on OSX. Did I mention that I hate proprietary graphics APIs?

I like you...
I can understand your hatred for proprietary APIs. Or at least graphics ones. I don't know much about Metal, but hopefully it's very similar to Vulkan, so it won't be problematic to develop for both.
I was so surprised when I saw the Metal for Mac thing too, because of exactly that reason. Expected it to be an Imagination GPU thing only. Oh, and are you sure iOS 7+8 didn't have CL support? Fairly sure they did.
Proprietary or not, it's better than nothing though. We need something to compete with DX12, and if we can't have Vulkan, I'll take this.

I've experienced the battery drain while plugged in as well, but that's been because of OpenGL + CPU for me, never because of CL tasks. Usually, they seem to require less of the GPU than heavy GL tasks. But granted, I use few things that push CL for a long time. Although it also pisses on my chips when the battery goes down even though it's plugged in... Gets worse if you've got peripherals plugged in, such as a Thunderbolt display and a Thunderbolt drive.

Fair point. Off the bat, I can't imagine a scenario where a TMU would be used (my understanding is perhaps more limited than yours), but that's not to say there couldn't be one. At least for parts of the unit.
 
Quite possible. Apple could also have worked on it for longer than we think. If it's re-branded Mantle though, it's re-branded Vulkan essentially. But they'd have had to do work to make it work with more GPUs than Mantle.
As Craig said in his keynote, Metal for Mac does what both OpenCL and OpenGL do. That was the goal of Vulkan in the end. So it is more than just 3D rendering. Most of Apple's promised speed improvements come through this combination and how apps use them.
 
As Craig said in his keynote, Metal for Mac does what both OpenCL and OpenGL do. That was the goal of Vulkan in the end. So it is more than just 3D rendering. Most of Apple's promised speed improvements come through this combination and how apps use them.

Yeah, but, GL has been around "forever", and OpenGL and OpenCL are open standards, as Vulkan was intended to be. But, Metal, besides being ungoogleable, is Apple. So much for writing code to an open standard.

Or, did I miss the announcement of Metal for Windows and Metal for Linux?
 
Yeah, but, GL has been around "forever", and OpenGL and OpenCL are open standards, as Vulkan was intended to be. But, Metal, besides being ungoogleable, is Apple. So much for writing code to an open standard.

Or, did I miss the announcement of Metal for Windows and Metal for Linux?

Well, Swift for Linux + open source happened, so maybe soon it will happen for Metal... Hope.
 
Where did you get the data on the clocks? I'm now not so certain anymore, but fairly sure I read the Apple 750M had a base clock of 1GHz, with Turbo able to get it to 1250MHz.
Well, I happen to own one with a 750M and the clocks are as stated. Apple has it underclocked and without Turbo. With overclocking you can push Apple's base clock to 1066MHz, but a normal 750M can be pushed much, much higher, towards 1300MHz.
A 15W CPU + a dGPU may result in more graphics perf, but the 940M has a TDP of 30W. 30+15 = way more than 28W. And the 28W chip also results in a faster CPU when the CPU isn't taxed on the GPU side as well. If we're fair though, the Iris Pro 6100 is really, really good. The biggest problem it has is memory speed, but the rMBP uses fast system memory, which works to its advantage, making the difference smaller than you make it out to be.
That is not true, as I have repeatedly said. There are notebooks that come in variants with either an Iris chip or the 15W CPU + 940M combination, and BOTH have the same maximum power consumption, about 50W at the wall in a 13" notebook. The 940M versions actually have 5W lower power consumption under normal load, and that is at twice the performance output, which means the CPU has twice the work to do as well compared to the Iris chip. The x40M chips are 15W GPUs, not 30W.
 
Well, I happen to own one with a 750M and the clocks are as stated. Apple has it underclocked and without Turbo. With overclocking you can push Apple's base clock to 1066MHz, but a normal 750M can be pushed much, much higher, towards 1300MHz.
That is not true, as I have repeatedly said. There are notebooks that come in variants with either an Iris chip or the 15W CPU + 940M combination, and BOTH have the same maximum power consumption, about 50W at the wall in a 13" notebook. The 940M versions actually have 5W lower power consumption under normal load, and that is at twice the performance output, which means the CPU has twice the work to do as well compared to the Iris chip. The x40M chips are 15W GPUs, not 30W.

Game Debate's GPU data states the opposite, but I don't take them to be 100% reliable. Do you have a source? I would like evidence, so I can safely say I've been corrected, and not give false information in the future. Thank you.

And also, what did you use to see the clocks on the 750? Can I have a screenshot or something?
I believe you, but evidence is always nice.
 
Well, I happen to own one with a 750M and the clocks are as stated. Apple has it underclocked and without Turbo. With overclocking you can push Apple's base clock to 1066MHz, but a normal 750M can be pushed much, much higher, towards 1300MHz.
That is not true, as I have repeatedly said. There are notebooks that come in variants with either an Iris chip or the 15W CPU + 940M combination, and BOTH have the same maximum power consumption, about 50W at the wall in a 13" notebook. The 940M versions actually have 5W lower power consumption under normal load, and that is at twice the performance output, which means the CPU has twice the work to do as well compared to the Iris chip. The x40M chips are 15W GPUs, not 30W.

When it comes to these mobile GPUs, it seems quite difficult to compare power. For desktop CPUs and GPUs, some reviews go to considerable lengths to instrument power consumption. It seems pretty difficult to do that for mobile GPUs, and, in the case of iGPUs, there seems to be combined CPU/GPU power management, at least on newer chips like the Core M series. There are also so many slightly different GPU variants. It seems like a daunting problem to come up with meaningful results for a (consumer) review.
 
It is no different on the desktop; they also just compare total system power consumption. The only difference on the notebook is that total power includes the display.
The other difference is that neither AMD nor Nvidia officially publish TDP ratings for mobile GPUs. But you can infer it from the power consumption of a few notebooks with the same hardware.
i.e. The MBP consumes 80W with the Iris Pro in use. With the 750M in use it is slightly below that, at 77W; with the 370X it is 90W. Intel's TDP is fairly accurate (you can also read it out in detail with the Intel Power Gadget). You can infer that the dGPU must have less than 47W power consumption, because otherwise it would not make any sense. The Intel CPU does not get anywhere close to its max when the dGPU emits heat, but it is more than 0W for the CPU, so the GPU is responsible for 47W minus whatever the CPU uses, or 55W minus that if the CPU runs in its higher TDP state. Anyway, it has to be somewhere around 30-40W.
You can easily test it yourself by just using iStat Menus or something similar and testing on battery. You can read out the battery power draw. And Apple gives access to full speed on battery and does not throttle the hardware like some other manufacturers do for better run time under full load.
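If you'd rather not install iStat Menus, here's a rough Swift sketch of the same battery-draw reading (it assumes the AppleSmartBattery ioreg service exposes "Voltage" and "InstantAmperage" in millivolts/milliamps; treat the result as an estimate, not a calibrated power meter):

```swift
import Foundation

// Pull one reading out of `ioreg -rn AppleSmartBattery` by key name.
// Negative currents are printed as huge unsigned values, so fold the bit pattern back.
func smartBatteryValue(_ key: String, in dump: String) -> Int64? {
    for line in dump.components(separatedBy: "\n") where line.contains("\"\(key)\" =") {
        let raw = (line.components(separatedBy: "=").last ?? "")
            .trimmingCharacters(in: .whitespaces)
        if let unsigned = UInt64(raw) { return Int64(bitPattern: unsigned) }
        if let signed = Int64(raw) { return signed }
    }
    return nil
}

let task = Process()
task.launchPath = "/usr/sbin/ioreg"
task.arguments = ["-rn", "AppleSmartBattery"]
let pipe = Pipe()
task.standardOutput = pipe
task.launch()
task.waitUntilExit()

let dump = String(data: pipe.fileHandleForReading.readDataToEndOfFile(), encoding: .utf8) ?? ""
if let milliVolts = smartBatteryValue("Voltage", in: dump),
   let milliAmps = smartBatteryValue("InstantAmperage", in: dump) {
    let watts = abs(Double(milliVolts) * Double(milliAmps)) / 1_000_000   // mV x mA -> W
    print(String(format: "Battery is currently supplying roughly %.1f W", watts))
} else {
    print("Couldn't read the battery keys; the service layout may differ on this machine.")
}
```

Run it unplugged, since on mains the battery isn't supplying the system and the reading tells you nothing.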
Notebooks with a 960M have typical (3DMark load) power consumption of 90-100W and can peak at 120-130W, which puts them well ahead of what Apple uses, and is why Apple underclocked the 750M.
Also, the 370X runs hotter, which is why notebookcheck found some serious heat throttling issues on the MBP.

http://www.notebookcheck.com/Test-Apple-MacBook-Pro-Retina-15-Mid-2015.144038.0.html said:
Having already observed occasional GPU throttling in games, it is hardly surprising that the losses in the stress test turn out even more drastic. After an hour of load with Prime95 and FurMark (Windows), the CPU reports a modest 1.2 GHz, while the GPU clocks at an equally meagre 400 MHz. Even though devices from Asus, Acer & Co. also throttle to a greater or lesser degree, none of the direct competitors loses this much performance. Under OS X the MacBook fares considerably better, although the tools used there (Cinebench and Unigine Heaven) do not stress the hardware quite as heavily.
The findings speak for themselves: the notebook they had showed some serious throttling. To be fair, even the Asus UX501 with its much better cooling system throttles under full stress, but only to 1.8GHz, and the 960M stays speedy (power consumption is at 130W though, not 90W).
 
It is no different on the desktop; they also just compare total system power consumption. The only difference on the notebook is that total power includes the display.
The other difference is that neither AMD nor Nvidia officially publish TDP ratings for mobile GPUs. But you can infer it from the power consumption of a few notebooks with the same hardware.

EDIT: The (German) review is very comprehensive, and, thankfully for me, Google Translate does a decent job. It basically covers the power pretty well.
 
I tried on mine to just test power consumption on battery without the power plug. You'd catch any extra power draw. There might be some short power spike that you cannot read but the average consumption can be easily tested. 80W and 77W is what my battery says. No power plug.
 
I tried on mine to just test power consumption on battery without the power plug. You'd catch any extra power draw. There might be some short power spike that you cannot read but the average consumption can be easily tested. 80W and 77W is what my battery says. No power plug.

I was thinking about the inverse case (w/o battery), but you can't easily test that anymore. In any case, it is interesting that being plugged in/unplugged makes no difference.
 
I tried on mine to just test power consumption on battery without the power plug. You'd catch any extra power draw. There might be some short power spike that you cannot read but the average consumption can be easily tested. 80W and 77W is what my battery says. No power plug.

Wait, how did you test that? A wattmeter wouldn't work unless it's plugged in.

Power draw should change significantly when you reach 2% battery. I've experienced very, very noticeable underclocking during extreme low power situations.
 
nooooooooooo my hope for Skylake MBP just died
I will wait for Skylake, but I am sure Broadwell was delayed so Haswell could oversell, and while Intel said Skylake will not be delayed, those are lies. Skylake will be delayed so Broadwell can oversell first, and so on, because it's unlikely they'd bring both at once; everyone is waiting for Skylake, and for Intel that would mean a loss...
 