I guess my point is that people other than software engineers and mathematicians needed to look at this: ethicists, civil rights experts, behavioural scientists, and political scientists. Most of the information coming from Apple has been technical, as though only engineers were involved in the proposal, and the public comments by Apple executives just look outright naive.
I understand your point about human review of the 'hits' from the system. However, most of us will never have CSAM on our phones or computers, but we might well have pictures of our kids, nephews and nieces, etc. The moment the threshold is crossed and an Apple employee reviews a photo that turns out to be a false positive, privacy has been shattered. And remember, if somebody takes a series of pictures of the same subject and one is flagged as a false positive, the others are likely to be as well. Apple hasn't told us what the false positives from this system look like. Are the false positives going to be mostly pictures of kids? Is Apple going to alert users that pictures were reviewed by a human, and which pictures were reviewed? If they don't, it will be creepy to think that somebody could look at your private photos without you ever being informed.
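To make the correlated-false-positive worry concrete, here's a rough sketch using a toy average-hash rather than Apple's NeuralHash (whose internals aren't public in this form, so this is only an analogy): near-duplicate shots of the same subject tend to land within a small Hamming distance of each other, so whatever pushes one frame past a match threshold is likely to push its siblings past it too. The filenames are hypothetical.

```python
# Illustrative only: a toy average-hash, NOT Apple's NeuralHash.
# Shows why near-duplicate photos (same subject, slightly different
# exposure/framing) hash to nearby values, so one false positive is
# likely to repeat across the whole series.
from PIL import Image, ImageEnhance

def average_hash(img: Image.Image, size: int = 8) -> int:
    """64-bit perceptual hash: shrink, grayscale, threshold at the mean."""
    small = img.convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    # 'shot1.jpg' is a hypothetical photo; shot2 simulates a second frame
    # of the same subject (slightly brighter, slightly cropped).
    shot1 = Image.open("shot1.jpg")
    shot2 = ImageEnhance.Brightness(shot1).enhance(1.1).crop(
        (5, 5, shot1.width - 5, shot1.height - 5)
    )
    h1, h2 = average_hash(shot1), average_hash(shot2)
    # A small distance (typically well under 10 of 64 bits) means the two
    # frames look equivalent to the matcher: if one collides with a
    # database entry, its siblings probably do too.
    print(f"hamming distance: {hamming(h1, h2)} / 64")
```

Under that (admittedly simplified) model, a single unlucky collision isn't an isolated event: the burst of ten near-identical photos you took at a birthday party could all count toward the threshold together.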