I don't think Johny or anyone at Apple designs stuff based on what customers ask for! Customers didn't ask for the 3.5mm jack to be removed, customers didn't ask for all-USB-C MacBooks, customers didn't ask for the camera bump, customers didn't ask for countless other things.

I never said Jony listens to what customers ask for. I said Jony thinks about what users need. Huge difference.

Plenty of customers asked for a physical keyboard when the iPhone first came out. Apple decided users don't need a physical keyboard on a phone. And Apple was right.

"For most of us on this forum, the needs are often not met, but for the average dumb 55 year old, it's enough.
I think its got more to do with pocket size and portability than age but for this average dumb 56 year old I'd really like for the IOS apps to take more advantage of the current processors(more multitasking) than shrink the size of the phone.:)

Apple makes the APIs available for all third-party developers to take advantage of the cores. It's not up to Apple to implement them; it's up to third-party apps to implement them.
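
For illustration, a minimal sketch of what "taking advantage of the cores" looks like with the public API (the inputs and the per-element work are made-up stand-ins):

import Dispatch

// Fan a data-parallel job out across the available cores.
// concurrentPerform blocks until every iteration has finished, and
// each iteration writes to its own slot, so no locking is needed.
let inputs = Array(1...1_000)
var squares = [Int](repeating: 0, count: inputs.count)
squares.withUnsafeMutableBufferPointer { out in
    DispatchQueue.concurrentPerform(iterations: inputs.count) { i in
        out[i] = inputs[i] * inputs[i] // stand-in for real per-element work
    }
}
print(squares.prefix(5)) // [1, 4, 9, 16, 25]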
 
I knew I'd find you here.

Spot on about transistor and gate sizes.
I'd also like to add that the dominating factor is not transistor size; it's wire and signal integrity.
Smaller wires have less current capacity, and even if feature sizes shrink, the metal does not scale the same way.
For longer distances you may need fatter wires to reduce inductance and carry the current.
You may also need additional spacing due to signal-integrity issues (aggressor nets).

When I'm doing development, I don't worry a lot about the gates. I now worry about the wires between those gates and flops. I look at the timing reports to see if I have heavily loaded nets that are going to be slow, or gates where the delay is much higher than expected.

Even if you make transistors faster, metal doesn't get faster in a shrink. Actually, metal might be slower. Yeah, I know the presumption is that with things closer in the shrink, we can ignore that metal is thinner/slower with lower current capacity.
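
To put that in first-order terms (textbook wire scaling, not data from any particular node): wire delay goes as the RC product,

t_wire ≈ 0.4 · R_wire · C_wire,  where R_wire = ρ·L / (W·T)

Halve the width W at fixed thickness T and resistance per unit length doubles, and the effective ρ gets worse at narrow widths because surface scattering and the barrier liner take a fixed bite out of the cross-section. A local wire that shrinks with the gates roughly breaks even on RC while the gates themselves get faster, and a global wire that can't shrink gets outright slower.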
I haven't seen chips get smaller. I'm actually seeing more and more chips (not mobile) reach the reticle limit.

This isn't a free lunch. That's all I'm saying.
There are lots of challenges down in the single digits.
I'd like to see the performance of process monitors (ProcMon) and ring oscillators. Some skew lots with sample structures would be nice too.

I was there too: 0.6, 0.35, and 0.25 micron.
I remember them saying we would hit a brick wall at 0.1 micron.
I remember them saying the same at 90nm, then 48nm, and on and on...

I've seen optical processes use the capillary properties of water, along with its refractive properties, to get better performance from the lenses that focus the lithography (immersion lithography). We now have EUV being developed for 2020 and beyond.

There are barriers and we keep jumping over them in different ways.

I guess I'm really showing how long I've been around this stuff and how much of a true geek I really am.

Yep. Moving wires closer together is never a good thing, especially when metal thickness doesn't scale with metal width, and you end up creating giant capacitors. The wire lengths never go down enough to keep up with the decreased gate caps.
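
For anyone following along, the "giant capacitors" arithmetic in parallel-plate form (a rough approximation, not a field-solver result):

C_couple ≈ ε · T · L / S

Hold the thickness T (for resistance and current capacity) while the spacing S shrinks, and the sidewall term grows to dominate the total wire capacitance. Worse, when the neighboring aggressor switches in the opposite direction, the Miller effect makes it look like 2·C_couple to the victim net.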
 
Agreed... iPhones have been getting thicker since the 6. It would be nice to have a thinner iPhone after 5 years.
I hope you’re kidding... the phones don’t need to be any thinner than they already are, especially at the expense of better battery life...
 
Please give some examples of “real world applications” that “don’t exist in iOS world”.

I realize this is all irrelevant to the article, but you’ve aroused my curiosity.
Cinebench R15 and R20, which are based on the real-world Cinema 4D application.
I guess you didn't know Geekbench is based on real-world application.
OK, what's the name of that application then?
 
Apple makes the APIs available for all third-party developers to take advantage of the cores. It's not up to Apple to implement them; it's up to third-party apps to implement them.

It's also up to Apple to make concurrency (and by extension, parallelism) easier to use.

Apple needs to make the APIs good, and developers need to implement them well.
I guess you didn't know Geekbench is based on real-world application.

It's not, unless you really stretch "real-world".
 
So how much will NVidia overcharge for the 2180 Ti? At least we can save on heatsink compound.
 
Yep. Moving wires closer together is never a good thing, especially when metal thickness doesn't scale with metal width, and you end up creating giant capacitors. The wire lengths never go down enough to keep up with the decreased gate caps.

That's why I withhold judgement on anything until test chips and some characterization data comes back.
I'd like to see some skew lots to see where fast/fast, slow/fast, fast/slow, and slow/slow end up.
Those corners are important to see where you can end up with typical process variation.
Let's look at the leakage of LVT and HVT and see if the static power drain is really any better.
Let's look at gate pwr/MHz.
And like I said before, I want to see the ProcMon and a ring oscillator to see how fast this really is.
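
For reference, the first-order reason the LVT/HVT comparison is worth watching (the standard subthreshold model, not numbers for this process):

I_leak ∝ exp( (V_gs − V_th) / (n·kT/q) ),  kT/q ≈ 26 mV at room temperature

Leakage is exponential in threshold voltage, so every subthreshold swing's worth of V_th (roughly 60-100 mV) you give back for speed costs about a decade of static current. That's why the LVT vs. HVT leakage numbers on a new node tell you quickly whether the static-power story holds up.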

I'm not saying it won't deliver, I'm just not celebrating until I see it deliver.
I've been burned before.
 
I never said Jony listens to what customers ask for. I said Jony thinks about what users need. Huge difference.

Plenty of customers asked for a physical keyboard when the iPhone first came out. Apple decided users don't need a physical keyboard on a phone. And Apple was right.


Apple makes the APIs available for all third-party developers to take advantage of the cores. It's not up to Apple to implement them; it's up to third-party apps to implement them.
I agree, and that's why I said iOS apps.
 
Cinebench R15 and R20, which are based on the real-world Cinema 4D application.
OK, what's the name of that application then?
Oh, you thought there was only one application in Geekbench?
It's also up to Apple to make concurrency (and by extension, parallelism) easier to use.

Apple needs to make the APIs good, and developers need to implement them well.

It's not, unless you really stretch "real-world".
You don't use any of the apps in Geekbench?
 
It's also up to Apple to make concurrency (and by extension, parallelism) easier to use.

Apple needs to make the APIs good, and developers need to implement them well.

GCD is extremely easy to use. I'm not a C programmer, but I picked it up very quickly.
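
For what it's worth, the pattern that sold me on it looks like this (a sketch; the queue choices and the work are placeholders):

import Dispatch

// The everyday GCD move: hop off the main thread for heavy work,
// then hop back to the main queue with the result.
DispatchQueue.global(qos: .userInitiated).async {
    let sum = (1...1_000_000).reduce(0, +) // stand-in for real work
    DispatchQueue.main.async {
        print("done: \(sum)") // in an app, update the UI here
    }
}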
I agree, and that's why I said iOS apps.

Right, but you said that should be done instead of shrinking the size of the phone, which iOS app developers have no control over.
 
That's why I withhold judgement on anything until test chips and some characterization data comes back.
I'd like to see some skew lots to see where fast/fast, slow/fast, fast/slow, and slow/slow end up.
Those corners are important to see where you can end up with typical process variation.
Let's look at the leakage of LVT and HVT and see if the static power drain is really any better.
Let's look at gate pwr/MHz.
And like I said before, I want to see the ProcMon and a ring oscillator to see how fast this really is.

I'm not saying it won't deliver, I'm just not celebrating until I see it deliver.
I've been burned before.

Let’s you and me design a chip. Some sort of A.I. deal. I hear that’s popular.
 
You don't use any of the apps in Geekbench?

Are you counting AES and HTML parsing as "apps"? Because that's quite a stretch, and it only proves the original point: those benchmarks are fairly synthetic.

They do in some cases use libraries that are also used by real-world applications. Regardless, they do not actually benchmark those applications.
GCD is extremely easy to use. I'm not a C programmer, but I picked it up very quickly.

And yet the inventor of Swift feels that concurrency is an area that needs significant improvement. https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782
 
And yet the inventor of Swift feels that concurrency is an area that needs significant improvement. https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782

Swift developers can use GCD through its C API, which has matured. The API is good. GCD's APIs haven't changed much in the past several years, while Swift only recently got ABI stability.

You're simply pointing out that Swift is still new. It's not like the engineers working on Swift were taken away from designing the iPhone's battery. It has nothing to do with it.
 
Oh, you thought there was only one application in Geekbench?
I actually didn't; it's the way you wrote it that suggested it.
Last I checked, "real-world application" is singular.

Anyway, Geekbench is not based on real-world applications. Geekbench executes in succession what it believes are relevant real-world tasks (or bits of code executed in real-world tasks), and then it creates an arbitrary score based on its own closed parameters.
Cinebench is literally based on a real-world application.
 
Swift developers can use GCD through its C API, which has matured. The API is good. GCD's APIs haven't changed much in the past several years, while Swift only recently got ABI stability.

Nobody is arguing that GCD isn't a pretty good start. But compared to, say, C#'s and TypeScript's async/await, it's quite verbose.
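
To make the verbosity gap concrete (a sketch: the GCD half is real, compilable code, while the async/await half follows the shape proposed in the manifesto, not anything shipping in Swift today):

import Dispatch

// GCD today: two dependent async steps means nested completion
// handlers, plus a manual hop back to the main queue.
func makeThumbnail(completion: @escaping (String) -> Void) {
    DispatchQueue.global().async {
        let image = "image"              // stand-in: download
        DispatchQueue.global().async {
            let thumb = image + "-thumb" // stand-in: resize
            DispatchQueue.main.async { completion(thumb) }
        }
    }
}

// The async/await shape reads top to bottom instead:
//   func makeThumbnail() async -> String {
//       let image = await download()
//       return await resize(image)
//   }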
 
Let’s you and me design a chip. Some sort of A.I. deal. I hear that’s popular.

Yep, it's all the rage.
Sort of like the ".com" rage. Lots of hype.
The faster you can spend the money, the more they throw at you...
People act like AI is new. My first experience with AI was the IEEE Micro-Mouse competition.
My undergraduate project team took our project after I graduated and won the regional competition.
(I graduated mid-year.)

But seriously: I have thought about it.
RISC-V seems to be the processor of choice.
Tensilica and ARM just cost too much.
They have a cache-coherent interconnect called TileLink.
Not sure about the compiler eco-system though.

Got any proposals and ideas?
We should talk.
 



Paving the way for a 5nm A14 chip in 2020 iPhones, TSMC has announced the release of its complete 5nm chip design infrastructure.

[Image: Apple A12 Bionic chip]

TSMC's continued packaging advancements, coupled with Apple's industry-leading mobile chip designs, are beneficial for the performance, battery life, and thermal management of future iPhones. That will continue with the 5nm process: TSMC's 5nm process is already in preliminary risk production, and the chipmaker plans to invest $25 billion towards volume production by 2020.

TSMC has been Apple's exclusive supplier of A-series chips since 2016, fulfilling all orders for the A10 Fusion chip in the iPhone 7 and iPhone 7 Plus, the A11 Bionic chip in the iPhone 8, iPhone 8 Plus, and iPhone X, and the A12 Bionic chip in the latest iPhone XS, iPhone XS Max, and iPhone XR.

TSMC's packaging offerings are widely considered to be superior to those of other chipmakers, including Samsung and Intel, so it's not surprising that its exclusivity is poised to continue with A13 chips in 2019 and A14 chips in 2020.

TSMC has been gradually shrinking its process nodes over the years as it continues to refine its manufacturing: the A10 Fusion is 16nm, the A11 Bionic is 10nm, and the A12 Bionic is 7nm. A13 chips will likely be 7nm+, benefiting from the process simplification of EUV lithography.

Article Link: TSMC Paves Way for 5nm A14 Chip in 2020 iPhones
More room for the battery?
 
Yep, it's all the rage.
Sort of like the ".com" rage. Lots of hype.
The faster you can spend the money, the more they throw at you...
People act like AI is new. My first experience with AI was the IEEE Micro-Mouse competition.
My undergraduate project team took our project after I graduated and won the regional competition.
(I graduated mid-year.)

But seriously: I have thought about it.
RISC-V seems to be the processor of choice.
Tensilica and ARM just cost too much.
They have a cache-coherent interconnect called TileLink.
Not sure about the compiler eco-system though.

Got any proposals and ideas?
We should talk.

I have no ideas at all. I did tweet around 6 months ago that I had the urge to design a RISC-V CPU, though.
 
Doesn't mean it's difficult to use to the point where developers avoid using concurrency altogether.

No, but just difficult enough that concurrency isn’t used as much as it ought to be. Single-core gains like in the 90s aren’t happening. Multi-core code is still hard.
 
No, but just difficult enough that concurrency isn’t used as much as it ought to be. Single-core gains like in the 90s aren’t happening. Multi-core code is still hard.

Can't think of a single popular app that should be using concurrency but currently isn't.
 