No, I do not think that should be taken for granted. For one thing, the system is supposed to trigger only once around 30 pictures have hash matches, in order to reduce false positives. That means moderately careful CSAM consumers are safe even if they occasionally get sloppy and let such a picture into their iCloud. (How long until governments demand a lowering of the threshold?
"Apple allows 29 CSAM-pics per user!" Bad PR.)
Things get really murky after the system triggers. Apple's reviewers are supposed to provide another safeguard against false positives, but they only get to see degraded variants of the user's pictures, and they do not get to see the CSAM originals from the database. How are they supposed to decide when their review version is not blatantly obvious? Are these pixels a teenager or a young adult? Discard, or forward to NCMEC and let them sort it out?
Which leads to NCMEC, the big unknown: a semi-private organization, shielded from transparency requirements and oversight, yet legally the sole US authority over its database of alleged CSAM pictures, and in practice the leading authority worldwide. To my knowledge there is no independent auditing of the "quality" of that database. We do get one hint, though: when NCMEC receives pictures from companies, it decides whether to forward them to the relevant authorities, and the Swiss federal police report that 90% of these NCMEC notifications are not actually CSAM and get discarded. So NCMEC is apparently quite clueless about CSAM, which is not a good sign for their database. They just spam, spam, spam police agencies. The police have to review every such notification, tying up their resources. Investigating actual CSAM cases takes time, and arguably NCMEC's spam may hamper actual investigations.
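To see what that rate means in plain numbers, here is a back-of-the-envelope calculation; the report volume is invented, and only the 90% discard rate comes from the Swiss figure above:

```python
# Back-of-the-envelope review workload. The yearly volume is
# hypothetical; only the 90% discard rate is taken from the Swiss
# federal police figure quoted above.

reports_received = 10_000                    # hypothetical yearly volume
discarded = reports_received * 90 // 100     # 9,000 not actually CSAM
actual_csam = reports_received - discarded   # 1,000 genuine cases

# Every notification has to be reviewed, so each genuine case costs
# nine additional dead-end reviews.
print(f"{discarded} dead-end reviews for {actual_csam} real cases "
      f"({discarded // actual_csam} dead ends per genuine case)")
```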
Unfortunately, NCMEC's spam is useful for agencies that demand further erosion of privacy. The German federal police, for example, regularly tout the inflated NCMEC numbers as a reason why lawmakers should mandate generous data retention for ISPs. Basically:
"We get so many reports, often ISPs have deleted critical data by the time we start investigating a case!"
TLDR: Even setting aside concerns about privacy and future expansion, I remain unconvinced that the proposed system would do more good than harm when it comes to fighting CSAM.