And even if publishing deadlines prevented them from fully resolving the problem (even that is questionable, since they hadn't ruled out their own procedure as the cause), they should at least have communicated with Apple and drawn no material conclusion from the data without first understanding its relevance to consumers.
That is EXACTLY where CR messed up.

If they couldn't figure out why they were getting such wildly inconsistent results, and ESPECIALLY since they all but admitted that it was MOST LIKELY a software issue with Safari, in NO WAY should they have drawn a "Do Not Recommend" conclusion without first letting Apple see if it could figure out what went wrong.
To be fair, approaching Apple first would have likely looked even worse in the press. Can you even imagine the following scenario?

Consumer Reports goes to Apple with a lousy grade in hand. Some secretive discussion goes on behind closed doors. Consumer Reports amends its grade a few days later and issues some mumbo jumbo about a Safari bug which has now been solved.

The cynic would conclude that some sort of bribery had gone on behind the scenes and that CR's palm was being greased. This would have affected their credibility and their future profitability.
They're doing that anyway; so what's the difference?

Haters will hate.
 
Why?

Because they've never had a bug in their software, right?

Because, for a variety of reasons, many M$ users know about the cache and clear it out themselves anyway. They give themselves a fresh start once in a while.

Especially the ones who work in M$-based shops. "They will never know I went to a questionable site" (being nice, I'll assume it was an accident) is a usual reason I see for this.

We know this from logging/monitoring elsewhere, but it's cool that they use this to clean out the caches once in a while. When I see the one hit, I just go, yeah... it was probably a misclick, or the site had a bad ad, and not the user being bad.

It's also low-level stuff for when CAC-based access to sites arises: web and/or CAC-based VPN, etc. Stuff more commonly seen on Windows systems. Even I don't have a CAC reader or its software installed on my personal MBP (the only computer in the house). Work needs me... call me and let's make this a legit overtime thing. I will happily go to work if I'm officially on the clock. Off hours, I don't do the "hey, can you check this for two minutes" bit.

Clear the cache and maybe reboot... yeah, users hate hearing this, but it does the trick often enough that we say to do it first before we start digging deeper.
 
You had all these problems and still buy Apple?
You sound NUTS.

Trust me. It's not. I had the shocking 2006 model. I had the kernel-panicking, GPU-ridden 2008 model. I also had the image-retention, screen-peeling, pixel-blowing 2012 model. I loved them all, but this is a huge step up for me and the smoothest first-gen MacBook Pro I have used.

Although Apple fixed them all no questions asked (the 2012 one 4 times!), visits to the Genius Bar are hardly what I consider a Pro feature.
 
Yes, they should have. I see the same behaviour when dealing with companies in the US at work. They trust their system 100%, even when someone with half a brain can tell the result is completely wrong. You should always be critical of your methods and models; they can be proven incorrect at any point.

Even though the fluctuations were the result of a software bug and not methodology, as Apple admitted when it fixed the bug. OK, I get it. Makes perfect sense.
Because any software developer knows that bugs happen and aren't always spotted by them, and that old features can develop bugs as the code behind them receives changes from other updates that impact them. It seems reasonable that Apple didn't turn on developer settings when they personally did all their battery testing. It's not like they didn't own up to there being a bug that needed to be fixed.

I wasn't intending to rip on Apple. They totally owned up to it. Their explanation regarding the implications for most users was reasonable, as was the reason CR used that method. Just replying to people who didn't read either of the articles. :)
 
You had all these problems and still buy Apple?
You sound NUTS.

No product is without its problems. What matters is how the company takes care of you, and Apple does just that. Every issue I had was repaired under AppleCare with no questions asked (and with absolutely amazing Genius personalities), and I was treated with respect. I was just dispelling the belief that no previous generation had any reported flaws.

So far, this generation MacBook Pro has proven to be the most reliable for me. More importantly, I know if I do have a problem, Apple will take care of me.
 
You're totally right. Consumers would never drive their car straight into a wall, so why do people crash-test cars? Horrible methodology.

Actually, people do. That's precisely why cars are tested that way -- not that anyone does it on purpose, but to cover the incidental event that can happen to anyone. We don't drive "by rails". I fully recognize that what you're implying is that we don't compute that way either, but CR is supposed to not be that way -- they have to test with one consistent method, every time. Otherwise they get wildly varying results, as they themselves detail at great length in defending this result.

Consumer Reports had an ethical responsibility to test the hardware as best they could. They published a "not recommended" based upon inconsistent findings, without consulting Apple about the possible "why" until after the report was published, and after they had changed default Safari settings. I won't deny that the blame goes both ways here, but an average consumer wouldn't notice nearly as much. Even in the original article they noted that there was no issue when using Chrome. So instead of attempting to remediate the problem, they posted anyway.

I'm not really trying to cover either side -- I just can't help but feel it's in poor taste not to allow the vendor in question - in this case Apple - a chance to respond to a negative review prior to publishing, in an attempt to resolve it, especially since the issue here is clearly fixable and not related to hardware. It is, after all, a hardware review. In their own words, they try to normalize it as much as possible, but when they can't understand why it's deviated, they'll just publish anyway? In any other avenue it would really smell like clickbait. This article and thread are a great example of how it has become so.
 
If they had a bit more courage they could have removed the headphone socket and made the battery a little bigger!!
 
It is their job to understand what's going on with their test. They do guess at the problem. Then they make a "Don't Buy" recommendation for hardware based on a minor software bug. It is a disservice to Consumer Reports subscribers.

Imagine you put an Alienware gaming machine on your Christmas list. You don't get it because CR rates the whole machine as "DON'T BUY" because IE11 has a temporary bug.

Is it a temporary bug, though? Thus far all we have is Apple's word on that. Meanwhile, plenty of other people are doing non-Safari battery tests and often getting poor results. Oh, and there was that whole thing where CR is redoing the test, and buying a $2,500+ computer hoping an unknown bug will get fixed is maybe, uh, unwise... but hey, do what you want with your money.
 
Trust me. It's not. I have had the shocking 2006 model. I've had the kernel panicking GPU ridden 2008 model.

Still, you have made the worst picks possible. Not counting the 2006 model: if you had owned the 2009 model and then the Late 2013/Mid 2014 one, you would be very disappointed with your new 2016 model. Instead, you have chosen products that were full of faults. I always tell people, "Do not buy the first generation or redesign of anything from Apple!"

Sorry man. No hard feelings. I just do a lot of thinking when it comes to stuff like this. Most people just don't.
 
To be fair, approaching Apple first would have likely looked even worse in the press. Can you even imagine the following scenario?

Consumer Reports goes to Apple with a lousy grade in hand. Some secretive discussion goes on behind closed doors. Consumer Reports amends its grade a few days later and issues some mumbo jumbo about a Safari bug which has now been solved.

The cynic would conclude that some sort of bribery had gone on behind the scenes and that CR's palm was being greased. This would have affected their credibility and their future profitability.

True, CR would not want to appear to give anyone special favors. Their reputation revolves around the fact that they test products that they themselves buy off the shelf. No freebies, no ads, no manufacturer influences at all.

Also wanted to note, again, that CR is a non-profit consumer testing organization.

Many younger people don't know that Consumer Reports is the reporting arm of an overall non-profit named Consumers Union, which was created back in the 1930s (long before the US had more consumer-friendly laws) to work for a safe and fair marketplace, and to empower the consumer.

CU/CR was a primary public consumer guardian, decades before warranty laws began to really protect us.

When I was a kid in the 50s and 60s, our home got these monthly magazines: National Geographic, Popular Science, Good Housekeeping, and Consumer Reports :)
 
I don't use Safari, but I consistently get 2.5-3.5 hours of battery life using Chrome. I wouldn't expect Apple to tune battery usage for every third-party app, but Chrome is obviously a common choice. The battery draining three times faster is ridiculous, and one wonders whether it's related to browser competition. I doubt it will, but I'm hopeful the "bug fix" helps my situation as well.

Hidden flag was probably acting up.

The "if detect Chrome, then discharge rate multiply x50" was inadvertently triggered by the bug in Safari. :p:confused:
 
I'm not really trying to cover either side -- I just can't help but feel it's in poor taste not to allow the vendor in question - in this case Apple - a chance to respond to a negative review prior to publishing, in an attempt to resolve it, especially since the issue here is clearly fixable and not related to hardware.
As mentioned elsewhere, CR did attempt to communicate with Apple, and Apple effectively said "go to a Genius Bar." In any case, it appears to be a no-win scenario for CR where Apple fans are concerned. If they had worked with Apple beforehand, then there would be reports of collusion and bribery. Plus there would be questions about why, or whether, they do this with the thousands of other products they test. It would destroy their credibility.

I don't get why people don't seem to understand this. CR has absolutely no motive nor responsibility to work with vendors to troubleshoot problem products before reporting on them. That's utterly ridiculous. Put it into a different context:

A health inspector inspects a restaurant and discovers a major violation. Should they work with management to fix all of that before reporting a perfect score? No: they report the violations, the restaurant fixes the problems, and then it gets re-inspected.

Let's say an investigative reporter discovers an employee doing something unsanitary or dangerous. Is the reporter supposed to go to the management and work with the company to resolve the issue and then report that everything was super?

The logic that CR should go to Apple (or any other manufacturer) with a list of issues before reporting them doesn't hold. CR isn't a third party quality testing group hired by the manufacturer to test products.
 
Actually, people do. That's precisely why cars are tested that way -- not that anyone does it on purpose, but to cover the incidental event that can happen to anyone. We don't drive "by rails". I fully recognize that what you're implying is that we don't compute that way either, but CR is supposed to not be that way -- they have to test with one consistent method, every time. Otherwise they get wildly varying results, as they themselves detail at great length in defending this result.

Consumer Reports had an ethical responsibility to test the hardware as best they could. They published a "not recommended" based upon inconsistent findings, without consulting Apple about the possible "why" until after the report was published, and after they had changed default Safari settings. I won't deny that the blame goes both ways here, but an average consumer wouldn't notice nearly as much. Even in the original article they noted that there was no issue when using Chrome. So instead of attempting to remediate the problem, they posted anyway.

I'm not really trying to cover either side -- I just can't help but feel it's in poor taste not to allow the vendor in question - in this case Apple - a chance to respond to a negative review prior to publishing, in an attempt to resolve it, especially since the issue here is clearly fixable and not related to hardware. It is, after all, a hardware review. In their own words, they try to normalize it as much as possible, but when they can't understand why it's deviated, they'll just publish anyway? In any other avenue it would really smell like clickbait. This article and thread are a great example of how it has become so.

They used it across product lines in order to obtain consistent results, not just randomly in this case for lulz. This was the only time a bug came up. For me, the crash-testing idea relates, given that loading pages without a cache is more robust and less likely to inflate expectations about how their findings map to users' experiences.

If you look at the first article, it mentions that CR contacted Apple for comment and Apple sent them a form response to see AppleCare before they published. So it's not like they didn't allow the vendor a chance to respond.
 
It is their job to understand what's going on with their test. They do guess at the problem. Then they make a "Don't Buy" recommendation for hardware based on a minor software bug. It is a disservice to Consumer Reports subscribers.

Imagine you put an Alienware gaming machine on your Christmas list. You don't get it because CR rates the whole machine as "DON'T BUY" because IE11 has a temporary bug.

It is their job to create a consistent test for all laptops. It is not their job to spend time figuring out why, although I'm sure they spent some time to make sure they were configuring the test correctly. The system failed the test and that's how it works. While CR can assume what the issue is, it's not their responsibility to dig through code to find out.

The Alienware analogy really doesn't apply, but a MS device would, since the company owns the hardware and the software.
 
As mentioned elsewhere, CR did attempt to communicate with Apple, and Apple effectively said "go to a Genius Bar." In any case, it appears to be a no-win scenario for CR where Apple fans are concerned. If they had worked with Apple beforehand, then there would be reports of collusion and bribery. Plus there would be questions about why, or whether, they do this with the thousands of other products they test. It would destroy their credibility.

I don't get why people don't seem to understand this. CR has absolutely no motive nor responsibility to work with vendors to troubleshoot problem products before reporting on them. That's utterly ridiculous. Put it into a different context:

A health inspector inspects a restaurant and discovers a major violation. Should they work with management to fix all of that before reporting a perfect score? No: they report the violations, the restaurant fixes the problems, and then it gets re-inspected.

Let's say an investigative reporter discovers an employee doing something unsanitary or dangerous. Is the reporter supposed to go to the management and work with the company to resolve the issue and then report that everything was super?

The logic that CR should go to Apple (or any other manufacturer) with a list of issues before reporting them doesn't hold. CR isn't a third party quality testing group hired by the manufacturer to test products.
Again, it amazes me how the Apple fanbois miss these basic simple truths.
 
It is their job to create a consistent test for all laptops. It is not their job to spend time figuring out why, although I'm sure they spent some time to make sure they were configuring the test correctly. The system failed the test and that's how it works. While CR can assume what the issue is, it's not their responsibility to dig through code to find out.

The Alienware analogy really doesn't apply, but a MS device would, since the company owns the hardware and the software.
This is a tired and faulty argument. Leading browsers are free and easily available. Consumer Reports tested Chrome themselves -- third-party software -- with no issues. Yet they made a "Don't Buy" recommendation for an entire machine based on results clearly tied to a single piece of software.
 
Again, it amazes me how the Apple fanbois miss these basic simple truths.

The article:
Apple declined to comment on our test results until they better understand the issue, but emailed this statement: “Any customer who has a question about their Mac or its operation should contact AppleCare.”

This thread's response:
They were told to go to a Genius Bar; you're a fanboi.

Both are wrong, and both are presumptuous. One could just as easily presume the answer to CR was that Apple wanted to better understand the issue first, and that CUSTOMERS who have the issue should contact AppleCare -- two entirely different avenues of support. Again, CR didn't do more; they had a deadline to meet and published anyway.

I'm not defending either. We don't have solid intel on what happened directly between CR and Apple. I'm just saying, this isn't a review of hardware. This is a review of what happens to hardware when software is manipulated and stressed.
 
Instead, you have chosen products that were full of faults

The GPU issue was present in other MacBook Pros, and the 2012-2015 screen-peeling issues extended throughout all refreshes, not just the first generation. So no, I would not be disappointed with my 2016.

I prefer first generations because I like to get the latest tech immediately. I always buy first generations from Apple because they always take care of problems, so there's no downside for me personally. Each new iPhone iteration is technically a "first generation" (at least relative to "first generation" MacBook Pros), so by your logic, nobody should buy a new iPhone.

I also do a lot of thinking about purchases like these, and I'm the most confident about this MacBook Pro for me.
 
Consumer Reports purchases products at retail, just like we do.

They then test those products.

They do not and should not have those products "repaired" or tweaked by the manufacturers prior to reporting, as that would violate the very reason that they buy anonymously at retail.

They purchase the products at retail so that they get exactly what you and I would get.

If they purchased a product from the manufacturer, and disclosed that they were going to test it and report on it, then the manufacturer would make sure that one particular unit was perfect and probably better than those available at retail.

Telling Apple that they're having problems with the machine, and asking Apple to address those problems, would create the same situation. Apple would know that it's for the purpose of reviewing, and individually service that machine to pass inspection.

Doing it the way they did, Apple was forced to release a fix that is available to all consumers. And Consumer Reports will retest the machine with that public bug fix being installed.

What they avoided, was having given a positive review on a configuration that consumers wouldn't receive.

This is an important process to achieve unbiased and accurate evaluations.

Imagine all the cars that they test. What if consumer reports had all the manufacturers fix the cars until they were perfect before they published their reviews? Then every car out there would get an amazing review. But all of us would be buying cars that were not built to the same standards.

What if the manufacturer knew the car was going to be subjected to a crash test and reinforced that particular car to survive it? And what if you buy that model based on the best crash-test results, and your whole family dies when the car flies apart going over a speed bump and slings your family members into the objects around it?

I'd rather have Consumer Reports reviewing the random sample as they received it, instead of a product that's been fixed to pass the test.
 
This is a tired and faulty argument. Leading browsers are free and easily available. Consumer Reports tested Chrome themselves -- third-party software -- with no issues. Yet they made a "Don't Buy" recommendation for an entire machine based on results clearly tied to a single piece of software.

While you may be tired of it, it's an accurate statement. You must have missed the fact that Consumer Reports does not test using Chrome for the battery tests. Specifically:

Consumer Reports said:
Here’s how our battery test works: We download a series of 10 web pages repeatedly, starting with the battery fully charged, and ending when the laptop shuts down. The web pages are stored on a server in our lab and transmitted over a dedicated WiFi network. We conduct our battery tests using the browser that is native to the computer’s operating system—Safari, in the case of the MacBook Pro laptops.

Emphasis is mine.

EDIT: I should add that CR could certainly begin using one standard browser across all platforms. I see nothing wrong with that idea, but as it stands, that's not how they currently do things.
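For the curious, the loop CR describes (cycle a fixed set of locally served pages until the machine dies) is easy to approximate. Here's a minimal Python sketch of just that loop; the page URLs are made up, and the fetch and battery-reading functions are injected so the cycling logic can be exercised without a lab server or real hardware:

```python
import itertools

# Hypothetical stand-ins for CR's 10 lab-hosted test pages (their real URLs aren't public).
PAGES = [f"http://lab-server.local/pages/{i}.html" for i in range(10)]

def run_battery_test(fetch_page, battery_percent, pages=PAGES):
    """Re-download the pages in a fixed rotation until the battery is
    effectively dead; return how many page loads completed.

    fetch_page(url)    -> downloads one page (with caching disabled,
                          every call hits the network)
    battery_percent()  -> current charge level, 0-100
    """
    loads = 0
    for url in itertools.cycle(pages):
        if battery_percent() <= 1:   # laptop is about to shut down
            break
        fetch_page(url)
        loads += 1
    return loads
```

In a real run, fetch_page would drive the native browser (Safari, in CR's Mac tests) and battery_percent could come from something like psutil's sensors_battery(); the point is just that the workload is the same fixed rotation on every machine tested.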
 
While you may be tired of it, it's an accurate statement. You must have missed the fact that Consumer Reports does not test using Chrome for the battery tests. Specifically:



Emphasis is mine.

EDIT: I should add that CR could certainly begin using one standard browser across all platforms. I see nothing wrong with that idea, but as it stands, that's not how they currently do things.

The problem with using one standard browser across platforms, is that people would start blaming the browser, instead of the computer's design configuration.

So, in this case, Mac fans would simply blame Consumer Reports for using Chrome or Firefox if the computer failed to deliver the expected performance.

In this particular case, it would have benefited Apple if the test had been conducted with a browser that wasn't made by Apple.

But, the computer should be reviewed based on how it shipped, and what its default browser is.

Just like we wouldn't want a review on a Ford with a Chevy engine installed. While it is possible to install said engine, and may provide a better review, it is not how the average car buyer will receive the car.

Likewise, a Chevy with a Ferrari engine may get amazing performance reviews due to a light body and performance engine, and may break quarter mile records, but again, it's not how you would receive the car if you purchased it.

The review must be based on how the product shipped and what it shipped with.

So if they reviewed an HP computer, and HP elected to uninstall/disable Microsoft Edge/Internet Explorer and made Firefox the default browser, then the review must be made with Firefox. That way the review reflects the choices the manufacturer made in producing/designing that configuration.

Otherwise, we could just as well get reviews that say we installed Ubuntu on the new MacBook Pro and Dell model X, and found that the MacBook Pro failed to function properly, the touch bar didn't work, Bluetooth wouldn't work, and the display looked terrible. But the Dell worked great (because Linux had the right drivers).

In reality, if we wanted to only compare hardware, Linux or Windows on all machines would be the equalizer (until Apple allows MacOS to run on PCs). But then the Mac would always be last in the results.

So, testing the machines as they shipped is the only fair way to do it.

CR had established a fair and unbiased evaluation system to score the machines.

The alternative is to go with reviews like MacWorld, who traditionally rate every Apple product as being amazing and the best. I always laughed at how they rated every model equally. Because obviously, a low end Mac can't be equal to a high end Mac. But their rating system didn't account for comparing the machines against each other. Each machine was just given a 4 or 5 star rating by default.

But how can a Mac Mini and a Mac Pro get the same rating??? Well, it's because MacWorld is provided review machines by Apple, and they get advertising money from Apple, and their business depends on Apple. And you don't bite the hand that feeds you.

MacWorld is dependent enough on Apple, that they work for Apple more than they work for us.
 
Probably not.

According to people who were downloading the beta builds of Sierra, Apple was, by turns, taking out and then putting back the Time Remaining indicator in those builds. And of course, that was LONG before the whole kerfuffle about battery life started with the new MBPs. Because it really doesn't work very well unless you sit there doing the same thing, hour after hour.

Obviously, you can still see an ESTIMATED time remaining in Activity Monitor, plus I am pretty sure that the FREE "Coconut Battery" shows that, and much more. Here's a screenshot of the Coconut Battery menu bar info and drop-down:

[screenshot: Coconut Battery menu bar drop-down]


And here's the Coconut Battery URL:

http://www.coconut-flavour.com/coconutbattery/
Thanks for the update and clarification!
Thanks, will download.
 
Clearly not a Pro machine if you are required to use Safari to get best battery life.

EDIT: Could we stop ranting about how Chrome is better? I'm just trying to point out that Apple is no longer designing this laptop to have good power for Pro software like Adobe products. I would not consider Chrome a Pro product.

Should we instead rant on how Safari is so much worse? Ok.
Since when are all apps created equal?

They aren't. That's the reason nobody wants to use Safari. It rarely works.
Do you really care about battery life for whatever your pro work is?
If you are working on anything intensive, you have to plug in the power anyway.

So in your opinion a pro laptop is basically a desktop with the added ability to take it with you for some light browsing on Safari? Sorry, I don't see it that way.
No, it is not as simple as that. That's why you have a head on your shoulders. The results were suspicious, to say the least, and if they had followed the scientific method they would not have published those results.

Please, read the actual test before you embarrass yourself any further. =)
 
The real questions:

1) When performing tests on other laptops, do they disable the cache also?
2) What happens if you do as Apple recommends and re-enable the cache, but use dynamic web pages in the test?
3) What happens when you use another web browser in the tests with cache disabled -- Firefox or Chrome?

Anyway, I do think that they are "testing it wrong", especially if they are turning off the cache. Set up your test environment correctly so that the content you browse is dynamic, and test using all default settings in the browser/OS. However, I'm really not about to take Apple's word on this one, as it almost appears they are trying to influence the test environment (i.e. make sure you load the same web page so that it never has to access the internet, but said with a marketing twist). That said, there are also enough reports of poor battery life to indicate there are still real issues with this line of laptops.

Is it really that hard to actually read the report? You'd have answers to most of your questions. 1) Yes; that is why they also did it on the Mac. 3) They did test Chrome. Feel free to read it to find out what they found.

As for the cache: it makes sense to disable it completely to make sure it doesn't taint the results, if what they are testing is how much the battery life is affected when browsing completely different web pages where you wouldn't hit the cache at all. They wanted to test the hardware, not the software. And they did it exactly the same way with every machine, Apple or not.
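On the "dynamic pages" idea: if you wanted every load to miss the cache without flipping any developer settings, the usual trick is to make each URL unique with a throwaway query parameter. A rough Python sketch (the nocache parameter name is just an illustration; servers ignore unknown query strings):

```python
import itertools
import urllib.parse

_counter = itertools.count()

def bust_cache(url):
    """Return `url` with a unique query parameter appended, so the browser
    sees a never-before-requested resource and can't serve it from cache."""
    sep = "&" if urllib.parse.urlparse(url).query else "?"
    return f"{url}{sep}nocache={next(_counter)}"
```

This effectively simulates "completely different web pages" on every load while still serving the same content from the lab server.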
No, Apple crafted that line. It's been used by several Apple folk; maybe it's part of the onboarding process to state things like this.

"This is the best meeting we ever had."
"This is the best campus we ever built."
and so on...

"If we keep repeating it maybe we at least believe it ourselves at some point..."
How is that a problem if it delivers the advertised battery life?

That's a big if. And that's the problem. It doesn't. Go buy one and try to get any real work done on it. It fares quite a bit worse than the 2015 model, at least with the fully loaded 15" model. From what I've heard, the 13" is even worse.
Why would CR disable caching when browsing to perform their test? By default, people leave caching on. Unless they're trying to find the worst case with battery life.

Also, yes, I know that the issue was not that they disabled the cache, but a bug triggered by that function being disabled.

Why are people commenting on a test they obviously haven't read? If you read it, you might understand why.
I got about 7 hours yesterday with my 2016 13" MBP doing browsing, and watching some video. I know this because I was stuck in an airport all day and didn't realize that I packed my charger in the luggage that I checked in.

A question: why did you buy a MacBook -> PRO <- to do some browsing and watch some (cat?) videos?

My current rMBP can idle on the coffee table for 2 full days if I dim the display really low and don't use it at all! Omg that battery life! Too bad it can't handle a quarter of that on a relatively moderate load. The 2016 model couldn't do even that. I guess I should stop trying to work with it and just browse and watch videos. :p
It is fairly simple. If your computer has a 45 W TDP CPU and a 75 Wh battery, don't expect it to last 2 hours on the heaviest of workloads: 75 Wh ÷ 45 W is only about 1.7 hours at sustained full draw. No MacBook Pro has ever lasted more than a mere few hours under full load. That kind of math applies to any laptop...

If a laptop has mediocre battery life to begin with, does it make sense to make the battery even smaller in the next version? Or maybe try to actually improve it in a machine professionals use in the field, sometimes in situations where you can't be plugged in all the time.
 