> What exactly does 100% focus pixels mean?

Most cameras use only a subset of the sensor's pixels for metering/focus (otherwise the computational load would delay snapping a pic). But with enough horsepower you can use every pixel. That's only useful when you can select a subset to actually focus on (a face, etc.); otherwise you get all sorts of weird focus problems.
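To make the "subset" point concrete, here's a toy contrast-detect sketch (my own illustration, not any camera's actual firmware; `capture_at`, `lens_positions`, and `face_box` are made-up names): scoring sharpness only inside a chosen region of interest is what keeps the per-frame compute manageable.

```python
import numpy as np

def focus_score(frame: np.ndarray, roi=None) -> float:
    """Contrast-detect focus metric: variance of the Laplacian.

    frame: 2-D grayscale sensor readout.
    roi:   (top, left, height, width) window, e.g. a detected face.
           None means score every pixel, which costs far more compute.
    """
    if roi is not None:
        top, left, h, w = roi
        frame = frame[top:top + h, left:left + w]
    # Discrete Laplacian: sharp edges give a large second derivative.
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1)
           - 4 * frame)
    return float(lap.var())

# Hypothetical usage: sweep the lens, keep the sharpest position.
# best = max(lens_positions, key=lambda p: focus_score(capture_at(p), roi=face_box))
```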
On my Canon 5D Mark IV, they use what are called dual pixels: coincident pairs of photodiodes across the entire sensor. If you aren't in dSLR mode but instead using it like a mirrorless camera, either for stills or 4K video, the camera uses the on-sensor pairs to get incredibly fast focus. You can even do clever tricks like recording the two halves of each pair to separate shots, so you can "fix" slight focus errors when you have the aperture wide open. You can't fix really bad focus errors, but if you're shooting at, say, F1.4 (my 50mm is a 1.4), your depth of field is less than tip-of-nose to eye, and if I didn't quite get it right, I can gently adjust between the pairs to bring the eyes into focus. I don't use that often, but it's pretty cool.

The cooler bit: I've shot some friends' concerts at clubs, and at one club the entire room was lit by one strand of purple Christmas lights (ugh). I couldn't see through the darkness to get the focus dot on the singer's eyes, so instead I put it in mirrorless mode, in what I like to call "Jesus take the wheel" mode: you select the thing you want to focus on, press the trigger, and it waits until it has the subject fully in focus, then snaps. The shots look amazing despite ISO 12400 and F3.5 at 1/125 while she bounced all over the place.

So I applaud Apple for using the entire sensor, giving the algorithm far more info to play with.
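For the curious, here's a toy sketch of the phase-detect idea behind those on-sensor pairs (my own simplification, not Canon's actual pipeline): the two sub-images see the scene through opposite halves of the lens pupil, so their relative shift tells the camera how far out of focus it is and in which direction to drive the lens.

```python
import numpy as np

def dual_pixel_disparity(left: np.ndarray, right: np.ndarray,
                         max_shift: int = 8) -> int:
    """Estimate defocus from the two sub-images of a dual-pixel sensor.

    left/right: the same scene seen through the two halves of the lens
    pupil. In perfect focus they align; defocus slides one horizontally
    relative to the other, and the sign of that slide says whether the
    lens is front- or back-focused (the essence of phase detect).
    """
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(right, s, axis=1)
        score = float((left * shifted).sum())  # crude cross-correlation
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift  # 0 -> in focus; sign -> direction to move the lens
```

The "adjust between the pairs" trick works off these same two sub-images: because each records a slightly different view, keeping both lets software nudge apparent focus a little after the fact.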