Just because there isn't an API directly targeting user data doesn't mean that hackers using Obj-C to access the full array of APIs, public and private, with no oversight, won't figure out how to compromise user data.
Only the naive think any OS is hack-proof when there is no oversight on what is installed...
It means exactly the opposite of what you posted: sandboxing employs memory barriers that do exactly what you claim is NOT done. It does not sound like you have any development experience in the Apple ecosystem.
As to 'hack proof', of course no system is absolutely and completely hack-proof, but Linux and macOS seem to do just fine without the nanny'isms (oversight) you want employed. Two or three years ago, oversight was an important part of the process, yes, but not today. You should update your knowledge.
There would be no increase in piracy, so no need to consider it. There is no increased piracy on macOS, and its sandboxing is far weaker than iOS's. There is no increased piracy on Linux, which often has sandboxing (SELinux) turned off.
So to summarize: today there is absolutely no need for the same oversight in iOS that was present years ago when iOS was immature. Even back then, Apple's oversight was limited in what it caught; mostly it caught developers after the fact. The reason it did not work well back then is that there were too many ways to obfuscate code. That is why we have Apple's sandboxing today.
Sandboxing means the Apple approval process does not need to catch these issues. Apple had to do this because, let me repeat, the approval process was not able to catch developers misusing Apple's devices. By and large it was marketing and scare tactics that kept developers honest. Sure, Apple scanned developers' code, but that scanning was easy to defeat, and only relatively unknowledgeable developers got caught.
Even today I can get all kinds of code past the review process, but I cannot violate the sandbox's memory barriers (at least not without NSA-level knowledge).
You obviously haven't used an Android device before in your life.
Probably should have said "Of course on Android it does NOT, IMO, work as well as on iOS." The problem with Android is that there are so many OS and API versions in the wild that don't have security updates applied, or can't be updated because the device manufacturers don't bother.
For example, on SDK 23 (Android 6) and below (my quick count: 46.8% of active devices), an app could flag its files as world-readable (MODE_WORLD_READABLE), letting every other app on the device read that data. But get this: even after SDK 24 started rejecting that flag, apps targeting an older SDK could keep using it. This is why there are so many hacks on Android that don't appear on iOS. By contrast, about 83% of iOS devices are on the current version of iOS and 12% are one version back, so 95% are getting iOS security updates.
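To make that old permission model concrete: Android's file sandbox is built on ordinary per-app Unix UIDs and POSIX permission bits, and MODE_WORLD_READABLE roughly amounted to setting the "others can read" bit on an app's private file. Here's a sketch in plain Python (filenames and the exact mode values are illustrative, not Android's literal internals):

```python
import os
import stat
import tempfile

# Each Android app runs under its own Unix UID, so "other apps" are just
# "other UIDs" as far as the filesystem is concerned. MODE_PRIVATE behaves
# roughly like 0600 (owner only); the old MODE_WORLD_READABLE added the
# world-readable bit, roughly like 0644.
tmp = tempfile.mkdtemp()

private_file = os.path.join(tmp, "private.db")          # illustrative name
world_file = os.path.join(tmp, "world_readable.db")     # illustrative name

with open(private_file, "w") as f:
    f.write("secret")
os.chmod(private_file, 0o600)   # MODE_PRIVATE-style: owner only

with open(world_file, "w") as f:
    f.write("exposed")
os.chmod(world_file, 0o644)     # MODE_WORLD_READABLE-style: anyone can read

def others_can_read(path):
    """True if UIDs other than the owner can read the file."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

print(others_can_read(private_file))  # other apps (UIDs) are blocked
print(others_can_read(world_file))    # any app on the device could read this
```

The point of the flag change in SDK 24 was to stop apps from ever setting that world-readable bit on their own data; apps that want to share files now have to go through an explicit mechanism instead.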
Android 9 (SDK 28) or better has much stronger security, enforcing a dedicated sandbox for each app. The problem is that very few people in the real world are actually running Android 9, even though it launched in August 2018. Once adopted, this version should be pretty secure. I could not quickly find any adoption numbers today; the latest version with stats is Android 8.1 (SDK 27), at only 7.5% adoption (released December 2017). What does that tell you?
Technically, Apple's current sandboxing may be inferior to Android 9's (SDK 28), which, BTW, is built on SELinux. But Apple is better at making sure all devices can run the latest software. Once Android 9 (SDK 28) or better is rolled out, I don't think I would have a problem with Android security, and I will probably switch, provided that Google has not added a bunch of back doors to support its selling of user data.
So the Android world is completely different for a number of reasons, but none of them has anything to do with the relative approval processes. Android users are 5 to 10 years behind Apple users in adopting good security measures. That is the difference. Note that I did NOT say the Android OS itself is that far behind.