Not only that. Overarching systems like this open the door to blackmail. Somebody could inject matching hashed pictures on the server side, and suddenly you are flagged. Imagine the risks if you're an opposition politician or a journalist.
That is certainly a possibility: Apple has put the system in place, and such an attack is technically doable.
That said, it would require broad technical capabilities at the level of the NSA or the Chinese cyber-warfare groups:
1 - an attacker would have to breach your iCloud account (OK, that one is probably easy)
2 - determine a number of the original images Apple used to create the on-device hash-table (these cannot be reverse-engineered from the hash-table itself)
3 - obtain the on-device hash-table (a hacker can't substitute a third-party hash-table, since Apple can't decrypt safety vouchers derived from one)
4 - breach your iPhone to capture your account's safety-voucher master key and produce the partial keys for the safety vouchers
5 - build the encryption key (based on the malicious hash-table match) for the outer layer of the safety voucher
6 - get access to both the neural network and the hash function used for on-device matching, in order to create the NeuralHash
7 - build the partial key for, and encrypt, the inner layer of the safety voucher (which contains the NeuralHash)
8 - assemble the safety vouchers correctly and upload them, plus the malicious images, 100% aligned with the protocol Apple's servers expect, while masquerading as your iPhone, iPad or Mac (a rough sketch of the voucher structure follows below)
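For readers unfamiliar with the voucher layout that steps 5-8 refer to, here is a minimal sketch of the two-layer construction, loosely following Apple's published technical summary. All names (SafetyVoucher, makeVoucher, and the parameters) are illustrative, and the HKDF call is a stand-in for the real private-set-intersection step that only yields the outer-layer key when the NeuralHash matches a blinded hash-table entry:

```swift
import Foundation
import CryptoKit

// Illustrative only: struct/function names are made up, and HKDF over the
// NeuralHash stands in for the private-set-intersection key derivation.
struct SafetyVoucher {
    let outerCiphertext: Data  // server can only open this on a hash-table match
}

func makeVoucher(visualDerivative: Data,     // low-res copy of the image
                 neuralHash: Data,           // output of the on-device NeuralHash
                 accountKey: SymmetricKey,   // per-account inner-layer key
                 accountKeyShare: Data       // one threshold share of that key
) throws -> SafetyVoucher {
    // Inner layer (step 7): NeuralHash + visual derivative, sealed under the
    // account key. The server needs a threshold of shares to rebuild that key.
    var innerPlaintext = neuralHash
    innerPlaintext.append(visualDerivative)
    let inner = try AES.GCM.seal(innerPlaintext, using: accountKey)

    // Outer-layer key (step 5): derived here from the NeuralHash itself, so
    // a voucher built from a non-matching hash stays opaque to the server.
    let outerKey = HKDF<SHA256>.deriveKey(
        inputKeyMaterial: SymmetricKey(data: neuralHash),
        outputByteCount: 32)

    // Outer layer: the key share rides along with the inner ciphertext
    // (step 8 is getting this exact layout right for Apple's servers).
    var outerPlaintext = accountKeyShare
    outerPlaintext.append(inner.combined!)
    let outer = try AES.GCM.seal(outerPlaintext, using: outerKey)

    return SafetyVoucher(outerCiphertext: outer.combined!)
}
```

The point of the sketch: the attack in steps 1-8 amounts to reproducing every input to a function like this, plus the surrounding upload protocol, without holding any of Apple's secrets.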
As I said, certainly doable. Edit: the reach and longevity of such an attack vector would probably be limited (headlines like "Apple's CSAM system captures another opposition journalist in Belarus" are audit triggers, if nothing else).
I'm personally more worried about government actors forcing Apple to use the system for other regime-mandated searches.