From an iPhone? No. Right tool for the job.
Imagine a possible future when Final Cut Express for iPadOS allows you to add your iPhone 12 to a list of network renderers for your A14X iPad Pro, with both devices using 802.11ax, making it feasible to actually dole out the work to another device wirelessly. Science fiction? Possibly.
 
That's quite a ridiculous flaw of the OS, that apps need to be removed from memory to save power, but it's really due more to a lack of adequate RAM.

No, it’s a ridiculous flaw of physics. Keeping things in memory means that memory must be refreshed, because the memory is dynamic RAM. Refreshing memory requires power, and only horribly inefficient designs, like what Samsung does, would include more memory than is necessary. Thankfully, unlike the terrible engineers at Samsung, Apple’s engineers chose to make a much smarter design decision.
Imagine a possible future when Final Cut Express for iPadOS allows you to add your iPhone 12 to a list of network renderers for your A14X iPad Pro, with both devices using 802.11ax, making it feasible to actually dole out the work to another device wirelessly. Science fiction? Possibly.

Not really science fiction, but not a good idea, either. If the phone is on battery, that would obviously kill it pretty quickly. And if not, it would still be pretty inefficient, because the phone's thermal solution will always require extensive throttling. So the iPhone wouldn't contribute all that much vs. what the iPad Pro is already doing.
 
Not really science fiction, but not a good idea, either. If the phone is on battery, that would obviously kill it pretty quickly. And if not, it would still be pretty inefficient, because the phone's thermal solution will always require extensive throttling. So the iPhone wouldn't contribute all that much vs. what the iPad Pro is already doing.
Fair enough...I’m projecting my own desires into a scenario that looks good on the outside, but may not be on the inside.
 
You can never get enough memory on the die. And the bigger the memory, the slower it is (because the address lines get bigger, etc.). And you don't necessarily want to make the RAM from the same process as the CPU. To get sufficient density you want DRAM, and processes for DRAM are optimized quite differently than processes for logic and SRAM.
Still valid; that is why I said L3 and L4 caches. L3 is SRAM, L4 is DRAM. I was hesitant to get into too many technical details.

Note that I also mentioned increasing the number of cores, either CPU or GPU. The L4 cache can lie on the outskirts of the cores, which should be arranged in a loop. Inside the loop, we put the L3 cache and other small supporting blocks such as the hardware decoder, motion co-processor, etc. Between the cores, we put the L2 cache. Within the core, we augment the L1-Instruction cache and L1-Data cache, which are private to the individual cores and only addressable by the corresponding cores.

The compute, decode, and graphics cores should share the L4 cache via the loop-shaped bus. The L3 cache should only be addressable by cores within the same efficiency zone, which share the same variable clock speed. Those L3 caches should be grouped into blocks, which can then be assigned to efficiency zones based on computational needs.

Therefore, L1 and L2 caches are core addressable. L3 cache is zone addressable. L4 cache is globally addressable.

The die should be significantly bigger than the current CPU architecture equivalent, but still smaller than the combined size of the typical CPU and RAM silicon modules. It should be significantly more performant than the equivalent, especially at sustained load.

Around the die, four soldered-on PCIe NVMe chips should be directly connected to the bus as L5 Express storage. On the two diagonal corners, there should be two encryption co-processors, each controlling two of the NVMe chips, the L6 storage and below, and the motherboard firmware. The other two edges should have two dedicated neural-network processors.
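To summarize the addressability scheme described above in one place, here is a purely illustrative Swift sketch. The level names, technologies, and scopes are taken from the post itself; the modelling is editorial and does not describe any real silicon.

```swift
// Illustrative summary of the proposed cache/storage hierarchy above.
enum Addressability {
    case singleCore      // private to one core (L1, L2)
    case efficiencyZone  // shared by cores on the same variable clock (L3)
    case global          // shared by compute, decode and graphics cores (L4, L5)
}

struct Level {
    let name: String
    let technology: String
    let scope: Addressability
}

let proposedHierarchy: [Level] = [
    Level(name: "L1-I / L1-D",      technology: "SRAM", scope: .singleCore),
    Level(name: "L2",               technology: "SRAM", scope: .singleCore),
    Level(name: "L3",               technology: "SRAM", scope: .efficiencyZone),
    Level(name: "L4",               technology: "DRAM", scope: .global),
    Level(name: "L5 (NVMe on bus)", technology: "NAND", scope: .global),
]
```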
 
False claims. Samsung includes more memory to prevent apps from restarting and to use less power when resuming them.
There has to be a balance between what is a reasonable amount of DRAM in a device, the user’s needs (NOT their wants and desires, which engineering can never satisfy), power draw, et al. Exactly what @cmaier is describing. Samsung putting 16GB in a phone is meant to look good on a marketing tear sheet, woo the spec hounds and compensate for any deficiencies in Android, Samsung’s own shortfalls, etc.
 
Meanwhile, my 2017 iPad Pro handles things just fine.

Honestly, the only thing that tempts me to buy a 2018 iPad Pro is the pencil.

Don't care about the performance bump nor the camera.
Yep, you won’t need a new iPad. LUCKY!
 
I'm sorta wondering when Apple starts building their own lightweight high-performance low-power ARM-based servers for their server farms. They could literally make one that works exactly how they want. And be increasingly less beholden to Azure/Amazon for services.

How do you know they’re not already?
 
False claims. Samsung includes more memory to prevent apps from restarting and to use less power when resuming them.
False statistical assumption. The huge apps that most people don't restart most of the time can use more energy from near-constant dynamic memory refresh than the energy occasionally used to read them back in from static storage.
If only Apple would license the A14 out to server makers. A 50-core A14!

Or you can rent a 64-vcore ARM processor from Amazon. By the hour. Already.
Out of curiosity, what do people do with their phones that benefits from these processor speeds? I can understand that it is useful in an iPad. Probably it's just me getting old; I can't be arsed to use a phone for anything processor-intensive.

Yup. It's bored gen-X-ers who give up on any web site or app that takes longer than a few hundred milliseconds to load, run, or render. Apple needs to give them multicores with 3GHz+ to keep them from swiping to the next thing, or driving their car into a light pole.
 
Concurrency works ok now through dispatch queues. These proposals are nice to make it easier, but not necessary to take full advantage of all those cores.

They’re not necessary, but as can be seen with async/await, making concurrency and parallelism easier to write can go a long way towards making it more common.

You said 16 cores — that’s useless with the status quo of “concurrency works OK”. Ten of those will be bored much of the time.

(Maybe Apple should look into something akin to Turbo Boost?)
 
They’re not necessary, but as can be seen with async/await, making concurrency and parallelism easier to write can go a long way towards making it more common.

You said 16 cores — that’s useless with the status quo of “concurrency works OK”. Ten of those will be bored much of the time.

(Maybe Apple should look into something akin to Turbo Boost?)
Sure. My main point is that concurrency is as “easy” (or hard) on Arm multi-core as it is on anything else.
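To make the comparison concrete, here is a minimal, hypothetical Swift sketch of the same fan-out work written both ways: first with Grand Central Dispatch, then with an async/await task group. The function names, renderSlice, and the slice counts are invented for illustration and are not any real rendering API.

```swift
import Foundation

// Hypothetical stand-in for one slice of a render job; not a real API.
func renderSlice(_ index: Int) -> Int {
    (0..<100_000).reduce(index) { ($0 &+ $1) & 0xFFFF }
}

// Dispatch-queue style: concurrentPerform fans the slices out across
// the available cores for us.
func renderWithGCD(sliceCount: Int) -> [Int] {
    var results = [Int](repeating: 0, count: sliceCount)
    results.withUnsafeMutableBufferPointer { buffer in
        DispatchQueue.concurrentPerform(iterations: sliceCount) { i in
            buffer[i] = renderSlice(i)   // each index is written exactly once
        }
    }
    return results
}

// async/await style (Swift 5.5+): a task group expresses the same
// parallelism with structured concurrency.
func renderWithTaskGroup(sliceCount: Int) async -> [Int] {
    await withTaskGroup(of: (Int, Int).self, returning: [Int].self) { group in
        for i in 0..<sliceCount {
            group.addTask { (i, renderSlice(i)) }
        }
        var results = [Int](repeating: 0, count: sliceCount)
        for await (index, value) in group {
            results[index] = value
        }
        return results
    }
}
```

Either version will keep as many cores busy as you hand it slices; the task-group version just makes the structure (plus cancellation and error handling) easier to express, which is the point about adoption above.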
 
Android phones have more memory largely because most apps are Java, ergo JIT, and ergo require more RAM.

Bad decisions beget bad decisions. JIT begets need for faster processors (without ability to deliver them) and more RAM, which begets need for more battery, etc. etc.
 



Apple's unreleased A14 chip is rumored to be the first Arm-based mobile processor to officially exceed 3GHz, according to a new report by Research Snipers.


Apple's A14 processor, the successor to the A13 chip in both the iPhone 11 and iPhone 11 Pro, is expected to debut this fall in Apple's "iPhone 12" models. The report highlights the suspected Geekbench 5 score of the A14 chip, with a frequency reaching 3.1GHz. This would be 400MHz higher than Apple's current A13 Bionic chip with a frequency of 2.7GHz.

At such a frequency, the chip's Geekbench 5 scores have surged. The report mentions a single-core score of 1658 for the A14 (up 25% from the A13) and a multi-core score of 4612 points (up 33% from the A13). The extra processing power will be helpful in running simultaneous workflows, navigating through apps, and more.
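As a quick sanity check (editorial arithmetic, not from the report), the implied A13 baselines behind those percentages can be recovered from the quoted numbers alone:

```swift
// Implied A13 Geekbench 5 baselines, derived only from the figures
// quoted above (1658 at +25%, 4612 at +33%).
let a14Single = 1658.0
let a14Multi = 4612.0

let impliedA13Single = a14Single / 1.25   // ≈ 1326
let impliedA13Multi = a14Multi / 1.33     // ≈ 3468

print(impliedA13Single, impliedA13Multi)
```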

Apple chipmaker TSMC is expected to ramp up production of Apple's 5nm-based A14 chipsets as early as April of this year.

In addition to the A14 chip, recent rumors have mentioned that both Arm processors with Mac Pro-level performance and a Mac with an Apple-designed Arm processor are in the works.

Article Link: Apple's A14 Chip Rumored to Become First Arm-Based Mobile Processor to Exceed 3GHz
If A14 Geekbench scores have started to leak, it won't be long before we get some leaks of the future Laptop ARM CPUs too. I am really curious to see what that's going to look like from a performance viewpoint.
I am planning on replacing my MBP13 very soon so I probably won't wait until then but it's interesting nonetheless.
 
I'd rather have more DRAM than more MHz. The Apple mobile devices of mine that have become prematurely obsolete did so because of DRAM starvation rather than MHz starvation. Apple knows this; that's why they market MHz while keeping DRAM minimal, so people will upgrade more often.

A classic example: while the iPhone 4S had 512MB of DRAM and reloaded apps like crazy, the competition, Samsung, had 2GB of DRAM (4X more), which allowed more advanced OS features like split-screen multitasking, PiP, etc., and is still usable to this day. iPhones now have 4GB while the competition has 16GB (4X more).

I'd take 4X more DRAM future proofing over a minor MHz bump. Arguing against more DRAM is like arguing against more pay. Who does that?
 
Imagine a possible future when Final Cut Express for iPadOS allows you to add your iPhone 12 to a list of network renderers for your A14X iPad Pro, with both devices using 802.11ax, making it feasible to actually dole out the work to another device wirelessly. Science fiction? Possibly.

Not science fiction at all insofar as that sort of thing has been going on with 'regular' computers for years, although there's a slight matter of network bandwidth if you want to do it wirelessly.

The question is why you'd want to use an iPhone of all things (expensive, redundant display, limited battery power, extremely thermally limited) as a 'remote' render engine when (say) a Mac Mini, a cheap-as-chips generic PC or even a rack of 20 Raspberry Pis would run rings around it for less money.
 
Not science fiction at all insofar as that sort of thing has been going on with 'regular' computers for years, although there's a slight matter of network bandwidth if you want to do it wirelessly.

The question is why you'd want to use an iPhone of all things (expensive, redundant display, limited battery power, extremely thermally limited) as a 'remote' render engine when (say) a Mac Mini, a cheap-as-chips generic PC or even a rack of 20 Raspberry Pis would run rings around it for less money.

It was hypothetical as much as anything...imagine being out with just your iPad Pro, iPhone and, possibly, your video camera/DSLR. You’re done with your shoot and you’re already editing the video on your iPad Pro in Final Cut Express (or whatever Apple ends up calling it) or in LumaFusion (or Premiere Rush, blech!) and you need a little extra oomph for the render portion. Turn on Wi-Fi, add the iPhone as a network render node and continue working.

Your examples assume that I am going back to an office where I have access to that sort of kit, when in reality I may be cutting in the field, somewhere remote enough that I have cellular coverage but no convenient access to a desktop. I have my iPhone with me at all times, so I can put it to work.

We’re past the point where journalists, videographers, YouTubers, et al. should be concerned with getting back to an “office” to cut video, title it, narrate it, upload it and move on to the next project.
 
If A14 Geekbench scores have started to leak, it won't be long before we get some leaks of the future Laptop ARM CPUs too. I am really curious to see what that's going to look like from a performance viewpoint.
I am planning on replacing my MBP13 very soon so I probably won't wait until then but it's interesting nonetheless.

It's fake btw.
 
Android phones have more memory largely because most apps are Java, ergo JIT, and ergo require more RAM.

I think it's a bit more a matter of cause and effect.

Apple can optimise their OS better to take advantage of less RAM, and I suspect they do this to justify including less RAM in their devices, which in turn lets them cut down on costs.

I have seen the speed tests and acknowledge that the iPhone Pro Max could probably do with a little more RAM, but the reality is that most people are not constantly opening and closing huge apps.

Conversely, Android OEMs have no such control over their software, and thus rely on increasing RAM to get around any inefficiencies in the OS.

Just different ways of skinning the same cat.
 