"A battery is going to discharge at a consistent rate, so running the exact same test 3 times in a row, as CR indicated they did, and getting significantly divergent results would indicate to me, as a computer programmer, that something abnormal was going on. Batteries don't change their discharge rate to that extent, and computer programs don't produce different results unless there is a bug (software or otherwise). I would have investigated further before writing the article, especially since CR changed a non-user-facing setting to conduct their test."

Thanks for the comments, but I didn't see the physics law you referenced. Not trying to be a shill, but I have a technical background and don't remember running across a law like the one you mentioned. Of course, I'm a senior now and may have forgotten it.
Here is a quote from CR: "In a series of three consecutive tests, the 13-inch model with the Touch Bar ran for 16 hours in the first trial, 12.75 hours in the second, and just 3.75 hours in the third."
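For a sense of how strange that spread is: runtime is roughly battery energy divided by average power draw, so each trial implies a very different draw for the same nominal workload. Here's a quick back-of-the-envelope sketch; the ~49.2 Wh battery figure for the 13-inch Touch Bar model is an assumption on my part, so treat it as approximate:

```swift
import Foundation

// Back-of-the-envelope: implied average power draw for each CR trial,
// assuming a ~49.2 Wh battery (approximate spec for the 13" Touch Bar model).
let batteryWattHours = 49.2
let trialHours = [16.0, 12.75, 3.75]

for (index, hours) in trialHours.enumerated() {
    let impliedWatts = batteryWattHours / hours
    print(String(format: "Trial %d: %.2f h implies ~%.1f W average draw",
                 index + 1, hours, impliedWatts))
}
// Trial 1: ~3.1 W, Trial 2: ~3.9 W, Trial 3: ~13.1 W.
// The third trial implies over 4x the draw of the first for the same
// scripted workload, which is exactly what pointed to a software problem.
```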
Big companies have gone after CR before and lost...

Apple: We are a trillion-dollar company (soon to be), you know... we can also afford better lawyers...
CR: I think we get it.
If you go to System Profiler and choose Power, it can show the wattage being consumed (or at least it used to show this). I'd be interested to see how high the wattage is with and without Safari loaded. You can press Command+R to refresh the wattage calculation.

I have no idea what "Energy Impact" means in technical terms.
And yes, "System Profiler" is still in macOS.
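If you'd rather poll this programmatically than through System Profiler, here's a minimal sketch using IOKit's public power-sources API. One caveat: this call reports charge level and estimated time remaining, not instantaneous wattage, so it complements the System Profiler reading rather than replacing it:

```swift
import IOKit.ps

// Minimal sketch: read battery state via IOKit's power-sources API.
// Reports charge percentage and estimated time remaining, not wattage.
let snapshot = IOPSCopyPowerSourcesInfo().takeRetainedValue()
let sources = IOPSCopyPowerSourcesList(snapshot).takeRetainedValue() as Array

for source in sources {
    guard let description = IOPSGetPowerSourceDescription(snapshot, source as CFTypeRef)
            .takeUnretainedValue() as NSDictionary as? [String: Any] else { continue }

    let current = description[kIOPSCurrentCapacityKey] as? Int ?? 0
    let maximum = description[kIOPSMaxCapacityKey] as? Int ?? 100
    let minutes = description[kIOPSTimeToEmptyKey] as? Int ?? -1
    print("Charge \(current)/\(maximum), time to empty: \(minutes) min (-1 = still estimating)")
}
```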
As for Safari, no, I didn't activate Flash Player. I've only browsed typical websites, and YouTube for maybe 15 minutes since I unplugged the charger.
If using a repeatable process to test similar, but not identical, products would cause you to flunk, then I would suggest the problem lies with your school and not the experiment.
It does not matter if they go into Safari and change a setting, so long as they go into every browser on every machine they test and make the same change. That's what makes the test worthwhile. It isn't Consumer Reports' responsibility to make sure that Apple gives them bug-free software before running their tests.
It can be to the right consumer.
As someone who would be carrying a laptop around wherever I go, why wouldn't I want it to be lighter and more portable?
You could say "then get a MacBook Air if you want a thin and light laptop". I would reply "I could, but then I would be compromising power for portability." What if I want both?
What Apple has done here is try to make the tradeoffs more palatable for us average consumers by creating a product which offers the best of both worlds. In the past, I would never have considered a 15" MBP due to its size and weight alone. Now, the 15" MBP is both powerful and more portable than before. The drawbacks don't really bother or affect me.
I am currently still using a 11" MBA, but the new MBP's weight feels reasonable enough that I am actually tempted to pick one up as my next replacement laptop if and when my current Air bites the dust.
"When Consumer Reports tests cars, do they disable traction control and anti-lock braking systems on the cars before testing for performance, handling and braking? Do they add a thousand pounds to a McLaren P1 before testing it to compensate for all that lightweight carbon fiber body work? That is effectively what they're doing on these computer tests."

No, those are false equivalences you made. In both cases (and others as well) they set out a test plan and apply it equally to all vendors they test in the category. I'm not sure why people have such a hard time grasping this. Just because it isn't how you would choose to test doesn't make it invalid.
The point I was attempting to make, but I guess failed in doing so, is that the test made no sense. You cannot, under the circumstances they described, get 4 hours one time and 19 the next. A CPU cannot suddenly require less electricity to do the exact same task (again, everything else being equal) unless there is a software bug. I guess it would be a law of physics that governs the amount of energy a particular task requires (I'm sure there is something).
One other thing: I have seen batteries change their discharge rate, significantly at times, depending on temperature, state of charge, and age. Given these were new batteries, I think we can take age off the table. Or maybe these batteries weren't assembled correctly, like the recent issue with the iPhone batteries.
CR is secretive about their testing methodology, as they should be, because they don't want manufacturers gaming the system. Because of this, they are asking us, as consumers, to take a huge leap of faith in accepting their results. For example, do they regularly update their tests to reflect the ever-changing ways we use computers as the years go on? So, when they show test results that are completely illogical, I feel they owe us a higher standard of due diligence.

Being professionally involved in testing of various types of products, I have similar concerns with CR's tests.
1) Tests should be repeatable, with the same result under the same conditions. CR's tests are not: the variance in the first tests was very large (quantified in the sketch after these points). This indicates a fault in the product or a problem with the test methodology.
Now that Apple has fixed a bug, are the results consistent? I have not seen any mention of the variance in the new tests, so to me it is unclear.
2) In testing you would look for "expected behavior". We know that switching off the cache should lead to higher power consumption compared to normal use, hence it is very peculiar that CR gets much longer battery times than Apple does. We also know that Chrome generally consumes more power than Safari. Why is CR getting the opposite results?
The overall impression is that CR's test methodology cannot be trusted.
On a side note, the "browser tests" seem to be more sensitive to which pages are loaded and with what frequency. For comparison purposes, the "play movie in iTunes" test may be the better option.
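On the repeatability point above, the spread in the original trials is easy to quantify. A small sketch, using the three figures from the CR quote earlier in the thread:

```swift
import Foundation

// Spread of CR's three original trials, in hours.
let trials = [16.0, 12.75, 3.75]

let mean = trials.reduce(0, +) / Double(trials.count)
let variance = trials.map { ($0 - mean) * ($0 - mean) }.reduce(0, +) / Double(trials.count)
let stdDev = variance.squareRoot()
let cv = stdDev / mean  // coefficient of variation

print(String(format: "mean %.2f h, std dev %.2f h, CV %.0f%%", mean, stdDev, cv * 100))
// -> mean 10.83 h, std dev 5.18 h, CV 48%
// A coefficient of variation near 50% on a fixed, scripted workload points
// to either a product fault or a methodology problem, as argued above.
```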
Yeah, I don't get how this is possible. 15-18 hours? I can't imagine the machine will even run that long at idle with the screen off.
Here is why the 15+ hour test results are useless: they are UNREACHABLE!
Why would I read a review saying it gets 15 hours of battery life when I get around 5-7? People read these things to know whether to buy them. You know, to actually USE the computer. I do not want to know Laptop A can survive 3,000 hours if it does nothing. I want to know real-life battery results. How long does it last while USING IT compared to other laptops?
The bug, fixed by Apple in macOS Sierra 10.12.3 beta 3, is not one the average user will encounter, as most people don't turn off the Safari caching option ...
I'll only believe it when the battery time indicator is back.
Yep, it isn't a useful measure in and of itself, but the real value is how it compares against other laptops. I think any rational person wouldn't expect 15 hours of battery for typical usage. However, if under the same conditions Laptop ABC rated 10 hours, then I'd expect that the MacBook Pro likely has better battery management and capacity. See the difference?
To be fair, we are seeing one side of the conversation, where they get the opportunity to edit and position their side. Besides, it makes sense. A small-time blog site like 9to5 doesn't have much standing in this case. Apple eventually reached out to CR, and they no doubt had a lot of discussions about it. Quite a difference.
However, it is safe to say many customers have experienced abnormal battery results, both highs and lows.
I find it so funny that a well-known tech site sent out an email saying they wanted Consumer Reports to retest, and they replied no, they're confident in their test and stand by their findings. Then Apple contacts them and all of a sudden they are willing to re-test, and when they do, the numbers are off the charts. It just sounds so fishy to me, and these are unrealistic numbers for the average consumer. Isn't that what their business is all about, the consumer? 15 hours on the 15"? Come on now!
"Not at all. Apple designs hardware and software together, unlike any other vendor."

Actually, Microsoft basically does the same now. It doesn't really matter, though. From the consumer's perspective, they really don't care who made what. They're buying a widget and want to know what to expect. What Apple says or does or claims is rather irrelevant. The actual value from the test CR does isn't very helpful, as I said elsewhere. But by keeping a consistent methodology that's repeatable across brands, it helps the consumer make a comparison across brands, and that's what the consumer typically does.
Good for Consumer Reports, whose rigorous and documented testing methodology brought to light an issue that had been reported anecdotally by users. Apple may say that "people don't use that setting," but in fact there was a real bug.
Consumer Reports isn't perfect, and their perspective on products doesn't always match mine (especially when it comes to computers and AV equipment). But they are disciplined, they document their work, and they are willing to retest when new information comes to light or changes are made.
Yeah - I think it is pretty disappointing. The 15" MBPro could have stayed pretty much the same size/weight, and nobody would have cared, because there would have been a new 14" ultra-thin model to rave about if you needed ultra portability. Apple missed a trick, IMO.
The worst part is that there is NO WAY IN HELL that the MBPro will ever get thicker again. Apple will NEVER release an MBPro that is even 1 mm thicker.
This means MBPro users are now stuck. Battery capacity and ports can only get worse from here on in the Pro model (unless there is a new tech change).