If it's the chances of a "hash collision" - i.e. your cat photo having the same hash as a known illegal image - then it's a well-defined mathematical property of the hash algorithm and will be tiny. One of Apple's blunders was talking about AI detection of nude photos sent to kids in the same breath - that part is far more sketchy and prone to overstatement by the AI writers. Hash matching is not AI/Machine Learning.
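To put a rough number on "tiny", here's a back-of-envelope sketch assuming an idealised hash whose outputs are uniformly distributed. Apple's NeuralHash is a perceptual hash, so the real figures differ, and the 96-bit width, database size and library size below are just example values, not Apple's specifics:

```python
# Idealised model: each hash is a uniformly random value in 0 .. 2**bits - 1.
def p_any_match(bits: int, db_size: int, photos: int) -> float:
    """Probability that at least one of `photos` innocent images
    collides with any of `db_size` database hashes."""
    p_single = db_size / 2**bits          # one photo vs. the whole database
    return 1 - (1 - p_single) ** photos   # across the whole photo library

print(p_any_match(bits=96, db_size=1_000_000, photos=10_000))
# ~1.3e-19 - vanishingly small under the uniform-hash assumption
```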
Also, they're talking about human confirmation of any matches - done properly that should reduce the probability of false positives to practically zero - but the "done properly" bit is the kicker. Ideally, the confirmation should be a blind test in which the "matched" images are mixed into a stream of other random images (matching and non-matching) - otherwise the reviewer will look at every image expecting to see a match. Even if they're comparing the image with the one it supposedly matches, you want them to be hitting "false, false, false, false, true, false, false, false..." rather than vice versa.
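A minimal sketch of what "done properly" could look like - purely illustrative, not Apple's actual review process; the flagged/decoys inputs and the 9:1 decoy ratio are made up for the example:

```python
import random

def build_blind_queue(flagged, decoys, decoys_per_flag=9):
    """Interleave each flagged image with known non-matching decoys and
    shuffle, so the reviewer's default verdict is "no match" and their
    accuracy can be audited against the hidden ground-truth labels."""
    queue = [(img, True) for img in flagged]
    queue += [(img, False) for img in
              random.sample(decoys, decoys_per_flag * len(flagged))]
    random.shuffle(queue)
    # The reviewer sees only the image; the hidden boolean is used
    # afterwards to check how often they called it correctly.
    return queue
```

The point is simply that the reviewer never knows in advance whether the image in front of them is supposed to be a hit.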
The greatest risk, really, is that the agencies responsible for supplying the list of hashes will get careless or over-zealous in what they deem "illegal" and the checkers will be obliged to report anything that matches, even if it appears to be a picture of a fully-dressed kid wearing a "The President is a Big Silly" T-Shirt.
iPhones are continually downloading software updates - Apple can implement any technology they want at any time. If Apple wanted to "spy" on your phone, there has been nothing technical stopping them for years - only the law.
The real issue here is not the technology, but the fact that ticking "I accept" on iCloud now includes granting Apple permission to do on-device checking against a third-party hash list. There's no immediate practical upshot - any cloud service will check your photos on its servers anyway - but you've crossed a line in the sand (and maybe waived a constitutional right in the US) by granting Apple that permission.