It is reasonable to think that, but no. Since iOS (and iPadOS) has been a smartphone OS from the very beginning, and continues to be one, Apple relies on app developers to include support for what is typically OS-level functionality on traditional operating systems.
Things like clipboard support, background processing, multi-window, etc. have to be included by the app developers. This is why features like multi-window are of limited value for some people: not all apps support them. The larger the iPad, the more frustrating it is.
Your examples aren’t great examples of “OS-level functionality” to begin with. They are (clipboard aside) examples of areas where iOS takes on more functionality at the OS level, rather than just leaving it up to developers the way the desktop does.
Clipboard support has always been something developers have to integrate with, even on macOS, if they aren’t using the standard text controls, which integrate with it by default. Win32 and Carbon (remember that?) devs had it worse, although WPF/.NET folks might/should have it about the same.
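For a sense of what that integration involves on iOS, here is a minimal sketch of a custom view talking to the system pasteboard itself; CopyableNoteView and its text property are made up, but UIPasteboard and the standard responder edit actions are the real pieces:

```swift
import UIKit

// A hypothetical custom control that isn't a UITextField/UITextView,
// so it has to wire itself up to the pasteboard explicitly.
final class CopyableNoteView: UIView {
    var text: String = ""

    override var canBecomeFirstResponder: Bool { true }

    // Advertise which edit actions we support so the system menu shows them.
    override func canPerformAction(_ action: Selector, withSender sender: Any?) -> Bool {
        action == #selector(copy(_:)) || action == #selector(paste(_:))
    }

    override func copy(_ sender: Any?) {
        // Explicitly hand our content to the system pasteboard.
        UIPasteboard.general.string = text
    }

    override func paste(_ sender: Any?) {
        if let incoming = UIPasteboard.general.string {
            text = incoming
            setNeedsDisplay()
        }
    }
}
```

A standard text control does all of this for you; it only becomes your problem once you build the control from scratch.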
Multi-window created a bunch of work because devs assumed they were working in a single-window environment, and tied a bunch of UI state to singletons or their AppDelegate. So now they have to go fix all their bad decisions. New projects get the scene-based delegation model for multi-window by default, meaning they will tie their state to the SceneDelegate instead of the AppDelegate, unless the dev actively sabotages themselves.

The irony here is that I remember some of the cruft floating around from the desktop when it started going multi-window on the PC side, back when Windows used to give an app a single window. A lot of the same bad decisions were made then too, and had to be fixed, just as with iOS. I’ve worked on projects with code that dates back that far, and seen the things they did to support multiple windows. History repeating, sadly. But one key difference between desktop and iOS is that on the desktop, the app effectively manages the windows; iOS takes on more responsibility for window management than AppKit does.
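To make that concrete, here is a minimal sketch of the scene-based model; selectedFolder is a made-up example of the kind of per-window UI state that used to get parked on a singleton or the AppDelegate:

```swift
import UIKit

// The old habit: window-level UI state parked on a singleton,
// which falls apart once two windows exist at the same time.
final class AppState {
    static let shared = AppState()
    var selectedFolder: String?   // one value shared by every window
}

// The scene-based model: each window (scene) gets its own delegate,
// so per-window state naturally lives here instead.
final class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?
    var selectedFolder: String?   // one value per scene, not per app

    func scene(_ scene: UIScene,
               willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = UIViewController() // placeholder root
        window.makeKeyAndVisible()
        self.window = window
    }
}
```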
Backgrounding on the desktop wasn’t so much an OS-level feature as a side effect of nobody caring about battery management when a lot of that stuff was written, and of desktops not having the memory/power to spare on process management the way the mainframes of the time did. You didn’t have battery-powered computers when it was all designed. iOS could behave like macOS here if Apple wanted; the behavior of forcing backgrounded processes to be idle was bolted onto the normal process management Darwin uses. Making apps white-list their background behavior is new, but I’m not sure it accounts for “putting the onus on the developer” beyond the fact that they should probably be thinking about how they use battery in the background instead of just spinning up threads and hoping for the best. It would have been nicer if the API for it was a little more general-purpose, I’ll agree to that.
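As a rough sketch of what that white-listing looks like today with BGTaskScheduler (the "com.example.refresh" identifier is made up, and it would also have to be listed under BGTaskSchedulerPermittedIdentifiers in Info.plist):

```swift
import BackgroundTasks
import UIKit

final class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Registration has to happen before launch finishes, and the identifier
        // must match one declared in Info.plist, or the system won't run the task.
        BGTaskScheduler.shared.register(forTaskWithIdentifier: "com.example.refresh",
                                        using: nil) { task in
            // Do a bounded amount of work, then tell the system we're done.
            // This is what replaces "spin up threads and hope for the best."
            task.setTaskCompleted(success: true)
        }
        return true
    }

    // Ask the system for a future refresh window; it decides when (if ever) to run it.
    func scheduleRefresh() {
        let request = BGAppRefreshTaskRequest(identifier: "com.example.refresh")
        request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60)
        try? BGTaskScheduler.shared.submit(request)
    }
}
```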
It would work perfectly in all apps if iOS were designed properly. Android has had full and global support for mice and trackpads for many years. Android apps don't need to be "aware" of the presence of a mouse. That is handled by the OS.
When a mouse is detected, Android displays a mouse pointer. But when it is not, no mouse pointer. It's seamless.
I find it odd that people look at what Apple is doing with mouse support in iOS as if no other mobile OS provided mouse support before.
I find it funny you suggest that iOS wasn’t designed properly here. It is very similar to the Android model, but it also builds in the Apple TV-like pointer animations (which are new) and I-beam support as OS-level features, rather than something app developers have to implement themselves like on the desktop. The problem is more that the OS also lets you hang yourself if you choose to. If I create my own button class instead of subclassing UIButton, it means I lose out on anything Apple does with UIButton (Reeder). Using UITextInput instead of UITextField means I’m now responsible for interacting correctly to enable the I-beam (Office, maybe?). Using a custom view with custom gestures instead of a scroll view means I’m now responsible for making sure all gestures are handled in my new custom view (Reeder again).
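For illustration, this is roughly what “being responsible for the I-beam yourself” looks like via UIPointerInteraction (iOS 13.4+); CustomEditorView is hypothetical, and a real UITextField/UITextView gets all of this for free:

```swift
import UIKit

// A from-scratch editor view: UIKit won't show the I-beam over it automatically
// the way it does for the standard text controls, so the view has to opt in
// to pointer interactions itself.
final class CustomEditorView: UIView, UIPointerInteractionDelegate {

    override init(frame: CGRect) {
        super.init(frame: frame)
        addInteraction(UIPointerInteraction(delegate: self))
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Ask the system to render a text-style beam while the pointer is over us.
    func pointerInteraction(_ interaction: UIPointerInteraction,
                            styleFor region: UIPointerRegion) -> UIPointerStyle? {
        UIPointerStyle(shape: .verticalBeam(length: 20), constrainedAxes: [])
    }
}
```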
I’m not really sure I’d blame Apple for people doing this stuff. The apps getting bitten are the ones pushing the boundaries of UI on the platform, and building up controls/views from scratch to do it (which is generally a bad practice, but done anyway for various reasons). By doing that, they are taking on more responsibility themselves, and as a result, are more likely to be caught off guard when Apple does add new functionality to the platform.
And really, mouse input to an app is mostly indistinguishable from touch or Pencil input; it largely boils down to the UITouchType on the touch event. Most apps shouldn’t be messing with UITouchType, or having special behavior based on it. If they are, that’s where bugs can creep in where the mouse can’t click on things.
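A rough sketch of how that goes wrong, assuming a made-up DrawingCanvas view (in Swift the property is exposed as UITouch.TouchType, with .indirectPointer as the pointer case on iOS 13.4+):

```swift
import UIKit

// A handler written before pointer support existed might only accept .direct
// and .pencil touches. Mouse events arrive as .indirectPointer, so forgetting
// that case is exactly the "mouse can't click on things" bug.
final class DrawingCanvas: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            switch touch.type {
            case .direct, .pencil:
                beginStroke(at: touch.location(in: self))
            case .indirectPointer:
                // Without this case, clicks from a mouse/trackpad are silently dropped.
                beginStroke(at: touch.location(in: self))
            default:
                break
            }
        }
    }

    private func beginStroke(at point: CGPoint) {
        // Drawing logic would go here.
    }
}
```

Apps that never branch on the touch type at all sidestep the problem entirely, which is the point: the mouse mostly just looks like another touch.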