I'd still say this will push more people to buy the iMac over the Mac Pro.

For one, people go where the specs are :D But the Mac Pro is also getting good.
 
Intel plans to move up the launch of its 14-nanometer Coffee Lake processors, introducing them in August of 2017 instead of January 2018. According to DigiTimes, the launch is being moved up because of "increasing competition from AMD's Ryzen 7 and Ryzen 5 processors."

The site says Intel will release several K-series Core i3, i5, and i7 processors starting in August, along with its Z370 chipsets. Additional CPUs will come at the end of 2017 or early in 2018.

Intel also plans to unveil its Basin Falls platform, with Skylake-X and Kaby Lake-X processors, at Computex 2017, which runs from May 30 to June 3, two months earlier than originally scheduled.


Intel's Skylake-X series features 140W processors in 6, 8, and 10-core configurations, while the Kaby Lake-X series features a 112W quad-core processor. Intel also plans to release a 12-core Skylake-X processor in August. Intel's Basin Falls platform could potentially be used in future Mac Pro machines and the rumored high-end server-grade iMac.

Coffee Lake chips appropriate for Apple machines were originally set to launch somewhere around the second quarter of 2018, so if rumors of Intel's updated timeline are true, the launch could be moved forward to either late 2017 or early in 2018.

Coffee Lake chips are manufactured on Intel's 14-nanometer process and will be the fourth processor family built on it, after Broadwell, Skylake, and Kaby Lake.

Apple is rumored to have new machines in the works for 2017, including new iMacs, which are likely to use Kaby Lake chips.

Article Link: Intel Rumored to Debut Basin Falls Platform in May, Launch Coffee Lake Chips in August

Something slipped at Intel. They fell off their "Tick/Tock" cycle a few years ago with the Haswell/Broadwell procs, and I think that's been causing them a lot of issues. Their innovation cycle was broken and they haven't been able to get back into it. The first QPI-based Nehalem/Sandy Bridge/Ivy Bridge cycles were each a huge push forward: multi-core Hyper-Threading, variable-speed procs, and Thunderbolt/PCIe 3.0, respectively. The recent revs have each taken longer with less advancement. With Apple already using AMD's GPUs, a CPU jump wouldn't be that serious, just a deal renegotiation.
 
It's not just tick-tock. Features that don't really have anything to do with the semiconductor process, like more advanced I/O and video support, have also really slowed down. I'm not sure why; these features would do more to spur sales, in my opinion, than yet another tiny bump in processor speed.
 
Something slipped at Intel. They fell off their "Tick/Tock" cycle a few years ago with the Haswell/Broadwell procs, and I think that's been causing them a lot of issues. Their innovation cycle was broken and they haven't been able to get back into it.

I think part of it is the difficulty of shrinking the dies smaller and smaller. As someone else noted, we're entering the realm where quantum mechanics starts to rear its head. And these new fabs are massively expensive, many billions of dollars. With no real competition, Intel has been able to push out the new process enhancements to reduce their capital expenditures.

Now that AMD is looking competitive, Intel is going to have to spend the money (on fabs and process R&D) again.
 
They don't need X-class, but may still pick up core count a bit without changing thermal design power (TDP) limits.

With the desktop versions of gen 8 (Coffee Lake), mainstream i7 versions are supposed to pick up 6 cores. That appears to be a bit of Intel swapping integrated graphics processor (iGPU) space on the die for more x86 core space (they will only be GT2 graphics). That actually makes sense in the context that Ryzen 5 and Ryzen 7 devote no die space to an iGPU at all. (AMD's integrated options are coming later in the year. Not sure Intel has an answer for those any time in the next 9-10 months.)

The mainstream gen 8 should have a Xeon E3 equivalent that should also pick up the 6 cores: the E3 12xx v7 series. If Intel rapidly moved up the timetable, they may be able to shrink that schedule as well. So if the desktop parts moved up approximately 5 months (January '18 to August '17), then perhaps the Xeon E3 v7 moves up the same amount to land in October '17 (from March '18). Xeon processors usually get a larger amount of beta testing and defect quality control, though, so Intel may not be able to claw back almost half a year.

The E3 v6 would be a safer choice for Apple if shooting for a fixed timeline. If the E3 v6-to-v7 timeline has seen a major disruption, then Apple's E3 plans are probably screwed up relative to the original projected timeline. November-December has a decent chance of being more realistic if this schedule shift extends to the E3. (Not sure it does when the mainstream top end is shifting to 6 cores. Apple could sell the 6-core sizzle as the hype tag line. But yeah, a 6-core E3 v7 with 32-64GB of ECC RAM plus a 5K display, for less than most Mac Pro prices, would draw in a decent number of "pro" customers at the lower end of the Mac Pro market.)

There is a new PCH chipset with gen 8 (the 300 series), which technically should not throw Apple's work for a loop if they were already deep into a design using the v6 (and/or gen 7 Kaby Lake). Likewise, the E3 v7 has a similar chipset bump, but Apple could just skip some of the more advanced features if caught off guard.

All of that said, Intel knocking 5 months off the release timeline is kind of dubious. The approximately 3 months (a quarter) for Skylake-X is far more tractable; Intel probably had some slack in the schedule to get around any last-minute significant bugs that might pop up. 5 months is suggestive of throwing something out there that isn't fully baked and vetted. Yes, gen 8 is just an incremental change, but there is a decent amount of new stuff in the chipset (a bump to USB 3.1 gen 2, among a few other things).

Skylake-X <==> Xeon E5 1xxx v5 ~ 140-160W TDP

mainstream Kaby Lake <==> Xeon E3 12xx v6 ~ 70-80W (maxes out at 4 cores)
mainstream Coffee Lake <==> Xeon E3 12xx v7 ~ 70-85W (maxes out at 6 cores)

Xeon doesn't necessarily mean the E5 class. There are four ranges of Xeon: D, E3, E5, and E7. Not being limited to the E5 leaves plenty of other options both up and down the TDP range.

The iMac is likely to select from the last two product-equivalency classes above. Even if the enclosure is modified to get a bigger TDP envelope, that headroom would probably get consumed by a more "desktop"-like GPU rather than by pushing up the CPU core count.
What is awkward for the 21.5" iMac is Intel's 'retreat' on iGPUs with Coffee Lake. Apple would probably have to shift to dGPUs there too.

Even though Intel has retreated on Iris Pro iGPUs (eDRAM) moving forward, remember that the 21.5" iMac (both non-Retina and Retina 4K) currently uses Broadwell CPUs (5575R, 5675R and 5775R).

I suspect Apple wants to get one more revision out of the current 21.5" and 27" chassis, and they will end up using Core i5 and i7 Skylake CPUs (6585R, 6685R and 6785R) for the 21.5", with Kaby Lake chips (i5-7500 and i7-7700K) for the 27", when they update them in the fall. The "R"-series CPUs suitable for the 21.5" use the Intel Iris Pro 580 iGPU, while the 27" would use Intel HD Graphics 630 along with a discrete GPU, perhaps a Radeon RX 470 (120W TDP). My hope is that Apple will offer 32GB of RAM and up to 1TB of PCIe flash for the 21.5", which would make it a decent machine for FCP X, among other apps.

If Apple does that, that would buy them time to find a suitable discrete GPU for the next-generation 21.5" iMac and give it a decent swan song, specs-wise.
 
Again, most readers don't seem to realize how big a jump Coffee Lake is. It's Intel's biggest jump in multi-core tech in a very long time.

Not to mention the motherboard's integrated USB 3.1 and Wi-Fi might add some good power savings for mobile.

Whether their graphics will take the same leap is not clear, but it doesn't seem likely, which is a shame.
 
Even though Intel has retreated on Iris Pro iGPUs (eDRAM) moving forward, remember that the 21.5" iMac (both non-Retina and Retina 4K) currently uses Broadwell CPUs (5575R, 5675R and 5775R).

I suspect Apple wants to get one more revision out of the current 21.5" and 27" chassis, and they will end up using Core i5 and i7 Skylake CPUs (6585R, 6685R and 6785R) for the 21.5", with Kaby Lake chips (i5-7500 and i7-7700K) for the 27", when they update them in the fall.

Last time I looked around, the only major user of the Iris Pro eDRAM parts was Apple. So if Apple isn't shipping a new 21.5" until fall (to match up with some 27" iMac move), then it doesn't really pay for Intel to make them in volume. Not sure this isn't a chicken-or-egg situation: there are no gen 7 (Kaby Lake) versions because the major buyer hasn't ordered any yet. If 7xxxR parts show up in August, that would be a decent indicator that Apple really is the volume driver on those.

eDRAM versus HBM2 means Intel either needs to step up their "Pro" iGPU game (go to a discrete die for the iGPU), or really just walk away from the very high-end iGPU market. AMD is going to be applying major heat in that product zone by the end of the year too, and baseline incremental improvements to Intel's current iGPU tech aren't going to cut it.


If Apple does that, that would buy them time to find a suitable discrete GPU for the next-generation 21.5" iMac and give it a decent swan song, specs-wise.

I don't think the 21.5" is going away any time soon. If Apple has to go discrete graphics they they'll just raise the price to cover it. The "Pro" iGPU on 21.5" to help with pricing. Intel could just deliver a bigger package that does a better job next year. AMD is certainly trying to do that later this year. A 1-2GB HBMv2 cache versus what 0.25-0.5GB eDRAM can do is a big difference (in cost and performance. If can iterate on the cost to get it down then may have something very close to what the 21.5" iMac needs to hit Apple's bill-of-material cost targets )
 
Wait, whaaat? They're idiots because they make their computers thinner? I bet 99% of iMac users 1) do nothing that generates heat, and 2) want a thin, sexy computer.
That's what the MacBook, MacBook Air and 13" MBP are for. Why gimp the 15" MBP and eliminate the 17"? Oh wait, Apple already admitted that they botched the professional market.
 
Coffee Lake: I want a 6-core 95W CPU for socket 1151 with the same single-core performance as an i7-7700K. :cool:
 
That's what the MacBook, MacBook Air and 13" MBP are for. Why gimp the 15" MBP and eliminate the 17"? Oh wait, Apple already admitted that they botched the professional market.
Apple is only updating the MacBook and the MacBook Pro, therefore I'm not shocked if each caters to 50% of customers.
Do you really think it's good from a business perspective to have one of two computers cater to 1% of customers? (Yeah, I don't think many people want thicker computers.)

"Pro" in the name just means it's more pro than the non-Pro computer.
 
How about 8? Or 6... please?

More cores = more heat.
The benches I saw had them doing great in 3D and multi-core, most beating Intel, but not so good in single-core. But just to F Intel, I'd love to see Apple go AMD.

You saw them doing well with video rendering. They don't do that well with 3D.

90% of Mac users aren't going to be using more than 4 cores. Moving to Ryzen makes 0 sense for Apple. Intel makes great CPUs, and they're a very stable platform.
I think the BIOS problems are mostly overblown. There were some disasters, but it is largely an ongoing optimization process.

I am not sure Vega is late, I would say rather that people are impatient.

You can game very nicely on Ryzen. Freaks can buy half the cores for the same money if they want.

Except Intel's half the threads are matching AMD's double the threads, for the same price, in most people's workloads. The 1500X is only slightly worse than the 7600K at stock clocks, but as soon as you push the 7600K to 5GHz, it's game over for AMD.

Make no mistake about it: with the 1080 and 1070 being almost a year old now, and the 1080 Ti offering incredible performance, Vega will either have to offer unreal performance or incredible value. Because Vega's die size is huge, yields are already going to work against AMD offering incredible value. Taunting Volta doesn't make it any better for them.
It's an entirely new architecture; it's going to have a little bit of growing pains. The BIOS screwiness has settled down with recent rounds of updates. Didn't you hear? It's a brand new architecture.

Gaming performance improves with higher RAM speed, as the CCX-to-CCX communication also improves with higher clock speeds. Having an 8C/16T powerhouse that can be easily overclocked with a couple of clicks, for $320, is insane. Heck, Intel was forced to move up their schedule in response to the pressure.

If not for Thunderbolt, I'm sure Apple would love to use the Ryzen 6C/12T and/or 8C/16T CPUs at 65W TDP.

It doesn't matter if it's a new arch. Apple doesn't want to deal with it at all.

Except that the overclocking basically doesn't matter, because every single Ryzen chip comes out of the box extremely close to its clock limit. Most people can barely get them up to 4GHz, and anything above that takes more voltage than AMD says is safe for day-to-day usage. Ryzen is a very poor overclocker.

Apple and Apple users don't care in the slightest about core count. 80-90% of Apple users don't use more than 4 cores, and they'll see much more of a difference from higher clock speeds and IPC improvements.
 
It doesn't matter if it's a new arch. Apple doesn't want to deal with it at all.
Of course it matters whether it's a new architecture or not. It takes a little time to get ramped up, but Apple sure as %^&$ ramps up for all contingencies. Apple has gotten OS X/macOS running on all different kinds of chips to make sure they can pivot if circumstances change. They had OS X running on Intel WELL before 2006, for example.

You are talking a whole lotta smack, with history not backing you up.
Except that the overclocking basically doesn't matter, because every single Ryzen chip comes out of the box extremely close to its clock limit. Most people can barely get them up to 4GHz, and anything above that takes more voltage than AMD says is safe for day-to-day usage. Ryzen is a very poor overclocker.
The Ryzen 7 1700 has a base clock of 3.0GHz and can be overclocked, with one button push, to 3.75GHz, quite easily and stably. That's a 25% overclock, and that's before upping RAM voltages, etc. People can easily get it going higher than that, yes with more power drain, like any higher overclock. But 3.75GHz, 25%, with one button push.

So again, stating things that are absolutely incorrect.

Apple and Apple users don't care in the slightest about core count. 80-90% of Apple users don't use more than 4 cores, and they'll see much more of a difference from higher clock speeds and IPC improvements.

You are correct: the ones who buy MacBooks and 13" MacBook Pros generally don't care about core counts. But the extreme backlash over the lack of "pro" machines in Apple's lineup shows that some really do care about it. And when it comes to productivity, more cores go a lot further than higher clock speeds. I think you are confusing gaming (higher clock speeds) with productivity (more cores).

I bet it made you really mad when Apple admitted they made mistakes when it came to the "Pro" market.
 
Isn't that what Grand Central Dispatch is for? Whatever happened to it… Is it not around in macOS anymore?

https://en.wikipedia.org/wiki/Grand_Central_Dispatch

GCD is not a silver bullet to make multithreading work well in any app. It's a tool developers can use to make multithreading easier. It's not necessarily always possible or worth the effort to do so.
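To make that concrete, here's a minimal sketch (my example, not from the thread) of the kind of data-parallel loop GCD makes easy; the `brighten` function and pixel buffer are made-up illustrations:

```swift
import Foundation

// Hypothetical workload: brighten a pixel buffer, fanned out across all
// cores with DispatchQueue.concurrentPerform.
func brighten(_ value: UInt8) -> UInt8 {
    value >= 235 ? 255 : value + 20
}

var pixels = [UInt8](repeating: 128, count: 1_000_000)
let cores = ProcessInfo.processInfo.activeProcessorCount
let chunk = (pixels.count + cores - 1) / cores   // ceiling division

pixels.withUnsafeMutableBufferPointer { buffer in
    // Each iteration owns a disjoint slice, so the concurrent writes don't race.
    DispatchQueue.concurrentPerform(iterations: cores) { i in
        let start = min(i * chunk, buffer.count)
        let end = min(start + chunk, buffer.count)
        for j in start..<end {
            buffer[j] = brighten(buffer[j])
        }
    }
}

print("first pixel:", pixels[0]) // 148
```

The catch, as the post says, is that this only helps when the work actually decomposes into independent pieces; code full of shared mutable state or sequential dependencies doesn't get this speedup for free.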

Of course it matters whether it's a new architecture or not.

Well, Ryzen is a different microarchitecture. Otherwise, it's x86-64 just like Intel Core. There are specific optimizations, but no recompilation is needed.

It takes a little time to get ramped up, but Apple sure as %^&$ ramps up for all contingencies. Apple has gotten OS X/macOS running on all different kinds of chips to make sure they can pivot if circumstances change. They had OS X running on Intel WELL before 2006, for example.

But that was in no small part because NeXTSTEP had been running on Intel even before the Apple-NeXT merger.

But the extreme backlash over the lack of "pro" machines in Apple's lineup shows that some really do care about it. And when it comes to productivity, more cores go a lot further than higher clock speeds.

That depends a ton on the workload. Many, many workloads don't parallelize well.
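As a rough illustration of why (my numbers, not the poster's): Amdahl's law caps the speedup from N cores when only a fraction P of a job parallelizes:

```latex
\[
S(N) = \frac{1}{(1 - P) + P/N}
\]
```

With P = 0.5 and N = 8, that's only S ≈ 1.78x, and even infinitely many cores cap out at 2x, while a 25% clock bump speeds up the whole job by 1.25x.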
 
Well, Ryzen is a different microarchitecture. Otherwise, it's x86-64 just like Intel Core. There are specific optimizations, but no recompilation is needed.
I was originally responding to the issues encountered at launch for Ryzen (motherboard BIOS/UEFI, RAM compatibility, non-optimized applications, etc.), not to Ryzen being something other than x86-64.

But that was in no small part because NeXTSTEP had been running on Intel even before the Apple-NeXT merger.
Didn't you just talk about x86-64 above? So other than some optimizations, it should work A-OK. Apple did pretty well by getting OS X to run on ARM processors, slapping on a touch UI, and renaming it iOS, right?

That depends a ton on the workload. Many, many workloads don't parallelize well.
Sure, but then again, you don't need individual applications to parallelize perfectly if you can run multiple VMs and applications, transcoding, etc. all at the same time. You know what allows that? More cores and more RAM. Amazing, ain't it?
 
I was originally responding to the issues encountered at launch for Ryzen (motherboard BIOS/UEFI, RAM compatibility, non-optimized applications, etc.), not to Ryzen being something other than x86-64.

Right.

Didn't you just talk about x86-64 above? So other than some optimizations, it should work A-OK.

Yes.

Apple did pretty well by getting OS X to run on ARM processors, slapping on a touch UI, and renaming it iOS, right?

I wasn't arguing that Apple couldn't do it. Not sure what you're saying.

macOS will run on Ryzen right now. No recompile or anything needed. Optimizations are a different matter.

Sure, but then again, you don't need individual applications to parallelize perfectly if you can run multiple VMs and applications, transcoding, etc. all at the same time.

Which is hardly a workload most people will do.

You know what allows that? More cores and more RAM. Amazing, ain't it?

Yes, but last I checked, the human brain doesn't lend itself well to this sort of multitasking. How do more cores and more RAM help a single thing get done faster? Oftentimes, they don't.
 
Which is hardly a workload most people will do.
It's work that 'pros' (towards the upper end of the definition) can certainly have going on. Running multiple VMs alone would gladly eat up the extra cores and RAM.

Yes, but last I checked, the human brain doesn't lend itself well to this sort of multitasking. How do more cores and more RAM help a single thing get done faster? Oftentimes, they don't.
Kick off an hour-long render job and NOT use your computer for anything else, OR kick off an hour-long render AND use your computer?
 
Of course it matters whether it's a new architecture or not. It takes a little time to get ramped up, but Apple sure as %^&$ ramps up for all contingencies. Apple has gotten OS X/macOS running on all different kinds of chips to make sure they can pivot if circumstances change. They had OS X running on Intel WELL before 2006, for example.

You're completely missing the point. Why would Apple switch over to a Ryzen platform that's having microcode problems, has lower IPC but more cores, and comes with a reputation among consumers as being lower quality? AMD's problems don't matter to Apple, because Apple is already using CPUs on a very mature platform, with IPC and core counts that better align with the processing requirements of their users, and with a brand that's synonymous with quality. You probably don't realize that, especially in an age where CPUs have pretty much become "good enough," a brand's reputation matters a lot more to consumers than how good something actually is. That's why the GTX 970 sold like hotcakes and the 390 probably sold 10x fewer units (despite the 390 being the obviously superior card).

Until Ryzen is at least close to as stable as Intel's platforms, Apple will continue to not care about it.

The Ryzen 7 1700 has a base clock of 3.0GHz and can be overclocked, with one button push, to 3.75GHz, quite easily and stably. That's a 25% overclock, and that's before upping RAM voltages, etc. People can easily get it going higher than that, yes with more power drain, like any higher overclock. But 3.75GHz, 25%, with one button push.

So again, stating things that are absolutely incorrect.

You know full well that base clocks don't mean anything in this day and age, with coolers being so good. In any situation where the 1700 is being stressed, even on the stock cooler, it's going to be at its boost clock the whole time. AMD lists the 1700's boost as 3.7GHz, but I'm not going to try and ******** you. You and I both know that's likely the single-core boost. Likely the 8-core boost clock is 3.5, maybe 3.6GHz, but for the sake of consistency, let's assume a 3.5GHz 8-core boost clock. That means your precious "3.75GHz OC" is a little over a 7% overclock, which is pitiful.

Hell, I'll even be charitable and say we're going to a 3.9GHz OC on the 1700 (which is about average for a top-end OC on the 1700). That's a little over 11%. That's likely not even worth it from a power-usage perspective, since you're likely going to need to push close to 1.4V on the core.
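For reference, the percentages work out as claimed against the assumed 3.5GHz all-core boost; the 25% figure quoted earlier is measured against the 3.0GHz base clock instead:

```latex
\[
\frac{3.75}{3.5} - 1 \approx 7.1\%, \qquad
\frac{3.9}{3.5} - 1 \approx 11.4\%, \qquad
\frac{3.75}{3.0} - 1 = 25\%
\]
```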

So yes, Ryzen is an atrocious overclocker.


You are correct: the ones who buy MacBooks and 13" MacBook Pros generally don't care about core counts. But the extreme backlash over the lack of "pro" machines in Apple's lineup shows that some really do care about it. And when it comes to productivity, more cores go a lot further than higher clock speeds. I think you are confusing gaming (higher clock speeds) with productivity (more cores).

I bet it made you really mad when Apple admitted they made mistakes when it came to the "Pro" market.

Extreme backlash? Pfff. A few hundred users on some forums and Reddit doesn't an "extreme backlash" make. The Mac hasn't breached 20% of Apple's revenue breakdown since Q2 2011. They don't care about the backlash.

Your generalization about cores and clock speeds is ridiculously broad. It entirely depends on your workload. Practically everything you do in Photoshop is faster with a 7700K than with a 6900K. Hell, even the 6850K is faster than the 6900K in Photoshop. If you work with AutoCAD or SolidWorks or practically any other CAD software, the 7700K will be significantly faster than both the 6900K and the 1800X. Rendering previews of 4K H.264 in Premiere Pro is faster with the 7700K than with all the previously mentioned CPUs. Exporting 4K TIFF and H.264 to 1080p H.264 is faster on the 7700K than on everything but the 6850K. Let's not even talk about data visualization in Python.

Productivity isn't clear cut. You can't make sweeping generalizations about it and claim any sort of accuracy. Here's one that is accurate and true, though: I think you're confusing your fantasy land (/r/amd) with the real world (everywhere else).

Why would it make me mad for Apple to admit they made a mistake by building an unserviceable, non-upgradeable can of garbage? Thunderbolt ports != expandability.
 
You're completely missing the point. Why would Apple switch over to a Ryzen platform that's having microcode problems, has lower IPC but more cores, and comes with a reputation among consumers as being lower quality?
Microcode issues at launch, WITH A NEW ARCHITECTURE. I'm not seeing any problems two months after launch. I guess you missed the times in history where AMD caught Intel with its pants down. Here's another example: >4-core chips at a reasonable price. Just because you are super angry with AMD for some reason does not negate the quality of the >4-core chips AMD released.

BTW, in the lab, Apple is always getting their OSes running on all kinds of hardware, just to keep their options open. So guess what? Apple has macOS running on a new Ryzen CPU right now!!!! The thought may just bring you nightmares!!

So yes, Ryzen is an atrocious overclocker.
You would be flat-out incorrect. The Ryzen 7 1700 and Ryzen 5 1600 are the sweet spots of AMD's recent releases. Both can be EASILY overclocked, with one button press, zero issues. Just because you refuse to accept it does not negate the facts.

Enjoy overclocking your 7700K CPU. Oh wait, Intel is telling people not to, because of overheating issues on their more expensive unlocked CPU. All Ryzens are unlocked by default.

Hmmm.

Your generalization about cores and clock speeds is ridiculously broad. It entirely depends on your workload. Practically everything you do in Photoshop is faster with a 7700K than with a 6900K. Hell, even the 6850K is faster than the 6900K in Photoshop. If you work with AutoCAD or SolidWorks or practically any other CAD software, the 7700K will be significantly faster than both the 6900K and the 1800X. Rendering previews of 4K H.264 in Premiere Pro is faster with the 7700K than with all the previously mentioned CPUs. Exporting 4K TIFF and H.264 to 1080p H.264 is faster on the 7700K than on everything but the 6850K. Let's not even talk about data visualization in Python.

Productivity isn't clear cut. You can't make sweeping generalizations about it and claim any sort of accuracy. Here's one that is accurate and true, though: I think you're confusing your fantasy land (/r/amd) with the real world (everywhere else).

You're correct about one thing: "Productivity isn't clear cut." But you seem to want Apple to limit choices instead of offering more choices. You seem to have issues with AMD and can't accept that people might want more than a quad-core CPU. I'd prefer more choices over fewer choices. Right now AMD is giving that; Intel is charging out the wazoo for anything more than quad core.

Your frothing-at-the-mouth anger toward AMD, and inability to conduct a civil conversation, is causing me to check out. Enjoy your Intel-only CPU life.
 
I'll wait until Intel goes 10nm or better. 10nm on a phone makes a huge difference compared to, for example, 16nm. When the 16nm iPhone 7 Plus runs out of battery, a 10nm phone still has 39% battery left, with an even bigger 6.2" display.

 
I think part of it is the difficulty of shrinking the dies smaller and smaller. As someone else noted, we're entering the realm where quantum mechanics starts to rear its head. And these new fabs are massively expensive, many billions of dollars. With no real competition, Intel has been able to push out the new process enhancements to reduce their capital expenditures.

Now that AMD is looking competitive, Intel is going to have to spend the money (on fabs and process R&D) again.

Aye, it's the time-money-product triangle of product development. I think they decided to slow things down to save money and engineering time (which means money :)). I think you're right: while AMD isn't competitive with Intel as a company yet, the appearance factor (AMD is now in the lead) may give Intel a kick in the rear to reinvest in product.
 
I'll wait until Intel goes 10nm or better. 10nm on a phone makes a huge difference compared to, for example, 16nm. When the 16nm iPhone 7 Plus runs out of battery, a 10nm phone still has 39% battery left, with an even bigger 6.2" display.


Let’s also not downplay that Google has been making "better battery life" a slide on every single release of Android.
 