I have read that webpage several times looking for a contact email, with no luck. All the contacts are via Twitter, and I don’t have a Twitter account because Twitter asks for your mobile number.

Don’t bother to copy the “Contact Us” link, because that doesn’t work either. It takes you to a site called “Future”. There’s no way I can contact AnandTech like in the old days anymore.

And no, writing a comment isn’t an option either, because I just made an account, and the comment is taken as spam (thus, it is not published).
 
A company that downplays its product performance and doesn't advertise results obtained under theoretical, perfect lab conditions? It has my respect....

Apple has no need to boast about anything and zero latitude for misrepresentation. Their products are the most discussed, analyzed, and scrutinized on the planet. Even before the official product launch a few early shipped units will be torn down (with x-ray pics), analyzed, benchmarked, and tested for all to see. The exterior will be subjected to razor blades engraving elephants, flames, drop tests, blenders, etc. You'll know if the screen scratches at 6 with deeper grooves at 7. Forensic analysis on the battery, camera, SoC, etc. will be complete and up on YouTube, all before the official launch date. If forensic analysis isn't your cup of tea there are even bubbly blonde YouTube influencers talking up the products on the company's behalf. When you make a good product it's better to just let others do the talking for you. Only companies that make poor products need to misrepresent.
 
  • Haha
Reactions: GuruZac
So far so good on all. I did a mini review of sorts in the iPad forum, and I am more than pleased with the capabilities this device has out in the field for my workflow. Since then, I have learned to enjoy using it for basic media consumption and some of the more graphics-intensive iOS games. The "jelly" issue just isn’t something that impacts my photo or video editing. I do see it (when swiping through photos); however, the M1 MacBook Pro does it even more and still doesn’t bother me. It doesn’t impact my video editing one bit.

Never had an iPad this small before, but with full iPadOS and the speed of this system, multiple applications and full drag-and-drop support are awesome for a portable workflow. Since I often shoot events with a photo vest or cargo pants on, the pockets on both offer plenty of space for this device. I am honestly considering an iPhone 13 mini for my main phone now; the Pro models not having USB-C kills any pro productivity for me (the camera won’t connect via Lightning).

biggest benefits
- USB-C
- screen size
- processor speed
- build quality
- full iPadOS support
- screen image quality
- on screen keyboard is nice (typed this post using it)

biggest drawbacks
- screen could be brighter
- 512GB should be a storage option
- Touch ID button location is not ideal.
- could use a touch more battery life


Thanks a bunch for your assessment - appreciate it! I also checked out the mini review linked above. Looks good. Will check it out at the local Apple Store and likely purchase one.
 
Each time I click on Andrei's name, it links to all his articles. I’ve tried everything on that website and I don’t get anywhere. Where I come from, trying this hard isn’t what we call lazy. Your attitude, however, is pretty rude. Thanks for the help anyway.

There is an Email link on the author's page:

[screenshot: andrei.png]


I agree that it could be a little easier to find. :)
 
  • Like
Reactions: Populus
I remember some Nvidia engineer doing a presentation a few years ago stating that (in terms of performance and efficiency):

"Compute is cheap, memory access is expensive"

I.e., once you can feed the GPU/CPU the relevant instructions/data, getting the work done inside the chip is trivial (from an engineering perspective). The issue is keeping the processor actually working.

These days it's all about memory system throughput and limiting your accesses to DRAM (or even worse, swap or network). Hence the massive improvements in cache. This has been the case for years, and increasingly so. Memory and system bus speeds are improving far less quickly than on-chip processing power.

People may not like it with the M1 (for example), but in that form factor what Apple is doing makes total sense: increasing cache sizes and sticking DRAM as close as possible to the CPU/GPU die (i.e., on the package). We're at the point now where waiting for (and spending the power to push) electrons to physically travel over wires at the scale of a system board is becoming a massive impediment.
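
To make that concrete, here's a toy C sketch (my own illustration, nothing from the Nvidia talk): both loops do exactly the same number of additions, but the strided one misses cache on nearly every access and typically runs several times slower.

/* Minimal sketch: same number of additions, very different memory behaviour.
   Sequential access streams through cache lines; the strided walk touches a
   new cache line (and defeats the prefetcher) on almost every access.
   Numbers are entirely machine-dependent; this only illustrates the principle. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 26)   /* 64M ints, ~256 MB: far larger than any cache */
#define STRIDE 4099   /* odd stride, so the walk still visits every element */

static double seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    int *a = malloc((size_t)N * sizeof *a);
    for (size_t i = 0; i < N; i++) a[i] = 1;

    double t0 = seconds();
    long long sum1 = 0;
    for (size_t i = 0; i < N; i++) sum1 += a[i];   /* cache friendly */
    double t1 = seconds();

    long long sum2 = 0;
    size_t idx = 0;
    for (size_t i = 0; i < N; i++) {               /* cache hostile  */
        sum2 += a[idx];
        idx = (idx + STRIDE) % N;
    }
    double t2 = seconds();

    printf("sequential: %.3fs  strided: %.3fs  (sums %lld %lld)\n",
           t1 - t0, t2 - t1, sum1, sum2);
    free(a);
    return 0;
}

Compile with something like cc -O2 and the gap should be obvious; the exact ratio depends entirely on the cache hierarchy of whatever you run it on.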
 
I remember some Nvidia engineer doing a presentation a few years ago stating that (in terms of performance and efficiency):

"Compute is cheap, memory access is expensive"

I.e., once you can feed the GPU/CPU, getting the work done is trivial. The issue is keeping the processor actually working.

These days it's all about memory system throughput and limiting your accesses to DRAM (or even worse, swap or network). Hence the massive improvements in cache.

People may not like it with the M1 (for example), but in that form factor what Apple is doing makes total sense: increasing cache sizes and sticking DRAM as close as possible to the CPU/GPU die. We're at the point now where waiting for (and spending the power to push) electrons to physically travel over wires at the scale of a system board is becoming a massive impediment.

That's interesting... thank you!
 
Apple has no need to boast about anything and zero latitude for misrepresentation. Their products are the most discussed, analyzed, and scrutinized on the planet. Even before the official product launch a few early shipped units will be torn down (with x-ray pics), analyzed, benchmarked, and tested for all to see. The exterior will be subjected to razor blades engraving elephants, flames, drop tests, blenders, etc. You'll know if the screen scratches at 6 with deeper grooves at 7. Forensic analysis on the battery, camera, SoC, etc. will be complete and up on YouTube, all before the official launch date. If forensic analysis isn't your cup of tea there are even bubbly blonde YouTube influencers talking up the products on the company's behalf. When you make a good product it's better to just let others do the talking for you. Only companies that make poor products need to misrepresent.
You're right, Apple doesn't need to engage in misrepresentation. And yet they do, and in bizarrely obvious ways: so obvious that it gains them nothing, costs them credibility, and just plain embarrasses them.

Take, for instance, when Apple made the laughable claim that the Pro Display XDR was competitive with a $40,000+ HDR mastering monitor, when they knew it was not capable of meeting minimum Dolby Vision HDR monitor specifications, and that this would become obvious once the monitor hit the market. SMH.
 
The A15 is impressive even running in low power mode. The four efficiency cores are very powerful and use only about 3 watts. This video demonstrates using low power mode to play games, and the A15 still maintains a perfect 60 fps while using less power and generating less heat. Pretty flipping amazing, IMO.

 
The problem hasn't been hardware or performance for a while now. It's about utilizing it. iOS, or even iPadOS, is simply not using all that performance. No pro apps. Locked-down OS. At this point, all these advances in performance are almost a mockery, as if they want to say, "Yes, we have the best chip - but you can't really use it to its max potential".

On the flipside, if Apple makes full use of the performance, people will say it's sluggish and bloated.

They make a vastly faster chip than anyone else - people complain.

They don't - people complain.

Take what they make and enjoy it. You can always find something to complain about no matter what happens, and you can always choose not to, no matter what happens.
 
  • Like
Reactions: Wizec
On the flipside, if Apple makes full use of the performance, people will say it's sluggish and bloated.

They make a vastly faster chip than anyone else - people complain.

They don't - people complain.

Take what they make and enjoy it. You can always find something to complain about no matter what happens, and you can always choose not to, no matter what happens.
It is a conundrum, isn’t it?
 

He is the reason Intel was on top of the game for over a decade, and he is the reason Apple is where it is: a king in silicon.

Srouji strutting like a BOSS in that M1 announcement. Not since the original iPhone had I gotten chills like I did watching that announcement.

What he's brought to Apple is not unlike Apple launching the Macintosh!

Cannot wait to see what he has in store for us:
iPhones with better or newer technology on the PCB itself (elements used for cooling),
incredible peak performance of an upcoming iMac Pro and Mac Pro.

Those of you who remember the Power Mac G5 leak and how it shook up the industry, with Intel racing to catch up some 8 months later and petitioning to have the 'World's Fastest Computer' ads removed after finally catching up... I'm salivating at the prospect of a 3+ year time frame for Intel and AMD to catch up.

Furthermore, I'm expecting integrated GPU performance to take leaps and bounds, to the point where game developers will start putting time and effort into bringing top-end games to macOS.

This article is one of the reasons why I’ll continue to use Apple products for the foreseeable future.

It's not just the silicon performance but the speed and efficiency of the kernel and UI on top of it as well.

Those NeXT gods really gave Mac OS X some legs, man!
 
Launching apps has little to do with CPU performance and a lot to do with the OS itself. If apps launch faster on Android, it’s because the OS is designed for faster app launch, or maybe the app is not optimized for quick launch, or both. Anyway, where did you get that from? Hot app launch on my 11 (non-Pro) is instant; how can one meaningfully do faster than that?
I misread the results in the AnandTech graph. The Snapdragon little cores are in no way faster than the A15's. My assumption was wrong. But some apps open faster on Android, some on iOS.
 
Why?

”You get a bigger speed improvement by using CISC and caches than RISC and caches, because the same size cache has more effect on high density code that CISC provides.”

Intel and AMD chips are RISC.
Intel and AMD chips are NOT RISC!
Using some RISC code internally for a specific task doesn't make the entire chip RISC!

The links you've provided just show how long Intel & AMD have failed to really improve the x86 architecture on its own merits, and how it is now falling behind (or has been since they implemented RISC code within).

What you're saying is like claiming you won a 9.52-second 1/4-mile drag race on the ICE engine alone, when your car has two functioning electric motors that helped its launch and acceleration for half the race.
 
Why?

”You get a bigger speed improvement by using CISC and caches than RISC and caches, because the same size cache has more effect on high density code that CISC provides.”

Intel and AMD chips are RISC.

1) I designed many RISC and CISC chips, including designing chips at AMD, and I was one of the core team that designed AMD64 (now x86-64). I assure you, AMD chips are not RISC. Nor are Intel.

2) The same size *instruction* cache has more effect on CISC. But the data cache, not the instruction cache, is where the action is. And the effect of code density with a given cache size is easily overwhelmed by the terrible penalty that you pay in the instruction decoders and scheduling logic for CISC. I’d much rather increase the size of my instruction cache by 10% and eliminate multiple pipe stages and 20% of the logic in the core, which is what you can do with RISC, instead of buying a tiny gain in instruction latency.

The reason CISC tightly encodes instructions has nothing to do with performance, and never has. The reason CISC exists is that you had very little memory, so you wanted to fit as much functionality into every byte as possible. That is no longer a problem, and hasn’t been in 25 years.
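
To illustrate the decoder point with a toy example (a made-up length-prefixed encoding, not real x86): with fixed-length instructions the start of instruction i is a trivial formula, so a wide front end can locate many instructions at once; with variable-length instructions every boundary depends on having decoded everything before it, which is part of the extra pipeline and logic cost mentioned above.

/* Toy illustration only: each fake "instruction" below starts with a length
   byte. The point is the serial dependency, not the encoding itself. */
#include <stdio.h>
#include <stddef.h>

/* Fixed-length (RISC-style): instruction i starts at 4*i, known immediately,
   so many instructions can be picked out of the fetch block in parallel. */
static size_t fixed_start(size_t i) { return 4 * i; }

/* Variable-length (CISC-style): the start of instruction i depends on the
   lengths of all previous instructions, so boundary finding is a serial scan
   (real decoders throw speculative hardware at this problem). */
static size_t variable_start(const unsigned char *code, size_t i) {
    size_t off = 0;
    for (size_t k = 0; k < i; k++)
        off += code[off];   /* must know instruction k before locating k+1 */
    return off;
}

int main(void) {
    /* fake variable-length stream with lengths 3, 5, 2, 7, 4 */
    const unsigned char code[] = {3,0,0, 5,0,0,0,0, 2,0, 7,0,0,0,0,0,0, 4,0,0,0};

    for (size_t i = 0; i < 5; i++)
        printf("insn %zu: fixed offset %zu, variable offset %zu\n",
               i, fixed_start(i), variable_start(code, i));
    return 0;
}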
 
Not really, only by Intel marketing. Most CPU guys disagree.

They're a hybrid which has some advantages:

  • RISC back end to get high instruction throughput
  • CISC decoder front end to make more efficient use of CPU instruction cache and memory bandwidth via code density.
The CISC front end is essentially code compression for a bunch of smaller RISC style instructions that they are decoded to in microcode.

RISC isn't all advantages - there are tradeoffs - ~1.5x increase in code size due to more instructions required to do the same thing.

The Intel/AMD hybrid approach gets around that. This wasn't something originally intended however, it's more a case of accidental benefit due to needing to go RISC on the back end to improve throughput. And yes, a true RISC design could just likely fit more cache on the same die to offset this somewhat.

However, ARM and PPC aren't truly RISC any more either. All modern processors include things like media encoding and matrix math instructions and other stuff that isn't strictly compatible with the RISC philosophy.
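
As a toy sketch of that "code compression" idea (the op names and the expansion below are made up for illustration; real x86 micro-op formats are undocumented and far more involved):

/* Illustration only: one dense, memory-destination macro-op being cracked by
   the front end into simpler load/compute/store steps. Not a real decoder. */
#include <stdio.h>

int main(void) {
    const char *macro_op = "add [mem], reg";   /* one instruction in the I-cache */
    const char *uops[] = {
        "u0: tmp <- load [mem]",
        "u1: tmp <- tmp + reg",
        "u2: store tmp -> [mem]",
    };

    printf("macro-op: %s\n", macro_op);
    for (size_t i = 0; i < sizeof uops / sizeof uops[0]; i++)
        printf("  %s\n", uops[i]);
    return 0;
}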
 
They're a hybrid which has some advantages:

  • RISC back end to get high instruction throughput
  • CISC decoder front end to make more efficient use of CPU instruction cache and memory bandwidth via code density.
The CISC front end is essentially code compression for a bunch of smaller RISC style instructions that they are decoded to in microcode.

RISC isn't all advantages - there are tradeoffs - ~1.5x increase in code size due to more instructions required to do the same thing.

The Intel/AMD hybrid approach gets around that. This wasn't something originally intended however, it's more a case of accidental benefit due to needing to go RISC on the back end to improve throughput. And yes, a true RISC design could just likely fit more cache on the same die to offset this somewhat.

However, ARM and PPC aren't truly RISC any more either. All modern processors include things like media encoding and matrix math instructions and other stuff that isn't strictly compatible with the RISC philosophy.

No, they aren’t a hybrid. There is no such thing as a “RISC” back end. These are not chips with some sort of instruction translator that translates to pure risc instructions and then sends them to a risc CPU. The microcode instructions do not solve CISC’s problems, and the microcode instruction stream is not at all RISC. CISC complexity is found throughout the entire pipeline, in every unit. Nobody who actually designs CPUs would describe x86 chips as anything other than CISC.

And ARM is truly RISC. The RISC philosophy has nothing to do with the number of instructions, but with the complexity of instructions, where complexity is defined in a very specific way (only read or write to a register except for limited LD/ST instructions, no variable-length instructions (though multiple fixed instruction lengths are permissible), no complicated memory addressing, etc.).

I designed chips at AMD, Exponential (PowerPC), Sun (SPARC), etc. Nobody I ever worked with would consider calling ARM CISC or x86-64 RISC (or “hybrid”).
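
To make the LD/ST point concrete, here's a small C example; the instruction sequences in the comments are roughly what typical compilers emit for each ISA, and the exact registers and scheduling will vary by compiler and optimization level:

/* The statement below reads memory, modifies the value, and writes it back. */
#include <stdio.h>

void bump(int *counter, int delta) {
    *counter += delta;
    /* x86-64 can encode this read-modify-write as a single memory-destination
       instruction, roughly:
           add   dword ptr [rdi], esi
       A load/store ISA like AArch64 only does arithmetic on registers, so the
       same statement becomes roughly:
           ldr   w8, [x0]
           add   w8, w8, w1
           str   w8, [x0]                                                    */
}

int main(void) {
    int c = 40;
    bump(&c, 2);
    printf("%d\n", c);   /* prints 42 */
    return 0;
}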
 
  • Like
Reactions: I7guy
They're a hybrid which has some advantages:

  • RISC back end to get high instruction throughput
  • CISC decoder front end to make more efficient use of CPU instruction cache and memory bandwidth via code density.
The CISC front end is essentially code compression for a bunch of smaller RISC style instructions that they are decoded to in microcode.

RISC isn't all advantages - there are tradeoffs - ~1.5x increase in code size due to more instructions required to do the same thing.

The Intel/AMD hybrid approach gets around that. This wasn't something originally intended however, it's more a case of accidental benefit due to needing to go RISC on the back end to improve throughput. And yes, a true RISC design could just likely fit more cache on the same die to offset this somewhat.

However, ARM and PPC aren't truly RISC any more either. All modern processors include things like media encoding and matrix math instructions and other stuff that isn't strictly compatible with the RISC philosophy.
Also, regarding code size, when was the last time you ran out of instruction RAM? 1997? This is not an issue in modern computing. The memory used by data is many times that used by instructions.
 
The M1 GPU uses tile-based deferred rendering (TBDR) whereas most mainstream PC GPUs use immediate mode rendering (IMR). The two rendering modes are fundamentally incompatible, which is likely why eGPU support is not an option on M1 Macs even though the Mac can “see” the GPU.

AMD and nVidia GPUs will never be available on M-series Macs due to this incompatibility. Apple will more likely release their own higher-powered GPU options which, given the inherent efficiencies of TBDR over IMR, should keep pace with nVidia’s offerings.
Tile-based deferred rendering and immediate mode rendering aren’t incompatible. Where did you hear this?

This has no impact if you use another GPU. Intel's latest Xe iGPU is tile-based with full compatibility.
If Apple doesn’t provide a GPU that can compete at parity with a 3090, they will kill the entire Mac lineup unless they allow external GPUs.
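
For anyone unfamiliar with the two approaches, here's a toy C model (nothing like a real GPU; rectangles stand in for triangles and a counter stands in for the fragment shader) of why TBDR can do less shading work than immediate-mode rendering when there is overdraw:

/* Toy comparison of shading work under IMR vs TBDR. Illustration only. */
#include <stdio.h>

#define W 64
#define H 64
#define TILE 16

typedef struct { int x0, y0, x1, y1; float z; } Rect;   /* half-open box */

static int covers(const Rect *r, int x, int y) {
    return x >= r->x0 && x < r->x1 && y >= r->y0 && y < r->y1;
}

/* IMR: fragments are depth-tested and shaded as primitives arrive. With
   back-to-front submission every fragment passes the test, so occluded
   fragments still get shaded and are later overwritten. */
static long imr(const Rect *prims, int n) {
    static float depth[W * H];
    long shades = 0;
    for (int i = 0; i < W * H; i++) depth[i] = 1e9f;
    for (int p = 0; p < n; p++)
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                if (covers(&prims[p], x, y) && prims[p].z < depth[y * W + x]) {
                    shades++;                       /* "fragment shader" runs */
                    depth[y * W + x] = prims[p].z;
                }
    return shades;
}

/* TBDR: per tile, resolve which primitive is visible at each pixel first
   (hidden-surface removal in on-chip tile memory), then shade each pixel
   exactly once. */
static long tbdr(const Rect *prims, int n) {
    long shades = 0;
    for (int ty = 0; ty < H; ty += TILE)
        for (int tx = 0; tx < W; tx += TILE)
            for (int y = ty; y < ty + TILE; y++)
                for (int x = tx; x < tx + TILE; x++) {
                    int vis = -1; float z = 1e9f;
                    for (int p = 0; p < n; p++)     /* visibility pass */
                        if (covers(&prims[p], x, y) && prims[p].z < z) {
                            z = prims[p].z; vis = p;
                        }
                    if (vis >= 0) shades++;         /* shade once */
                }
    return shades;
}

int main(void) {
    /* three stacked rectangles, submitted back to front (worst case for IMR) */
    Rect prims[] = { {0,0,64,64, 3.0f}, {8,8,56,56, 2.0f}, {16,16,48,48, 1.0f} };
    printf("IMR shader invocations:  %ld\n", imr(prims, 3));
    printf("TBDR shader invocations: %ld\n", tbdr(prims, 3));
    return 0;
}

With those three overlapping rectangles, the immediate-mode path shades every covered fragment of every rectangle, while the tile-based deferred path shades each screen pixel once; that gap is the efficiency argument usually made for TBDR.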
 