I think most people are not objecting to the child-protection features, just the scanning of pictures in iCloud Photos, and I think that is what the researchers were talking about.

From what I’ve read here: they don’t. The problem with their service was that it used an external server to scan for content. That’s not the case in Apple’s implementation at all. All communication remains completely end-to-end encrypted. Malicious users can still send offensive material to whomever they want, and no one except the receiving user will know about it.

However, with the new service, if parents choose to enable it, children’s accounts will scan received images after decrypting them but before displaying them, and present the minor with a content warning. If the kid is below the age of thirteen, the parents can choose to get a warning that improper material was sent to their child. None of this is enabled by default. No external parties are alerted; neither the service (iMessage) nor its provider (Apple) gets a notification at all.

So E2E messaging is still safe, but children get an optional layer of protection from creeps. Also, older minors can avoid unsolicited dick pics without their parents knowing about it (just in case some moronic parents try to blame their kids merely for receiving that kind of harassment; sadly, victim blaming is not unheard of).
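To make the opt-in flow concrete, here’s a rough sketch of the decision logic as I understand it. All the names and structure here are my own invention for illustration, not Apple’s actual code; the point is just that every branch runs on-device and nothing is ever reported to Apple:

```python
from dataclasses import dataclass

@dataclass
class Account:
    is_child: bool
    age: int
    safety_enabled: bool        # opted in by parents; off by default
    parent_alerts_opted_in: bool  # separate opt-in, only matters under 13

def handle_received_image(account: Account, image_is_explicit: bool) -> list:
    """Return the on-device actions taken for a decrypted incoming image.

    Hypothetical model: nothing here ever leaves the device, and Apple
    is never among the notified parties.
    """
    # Default behavior: adults, or children without the feature enabled,
    # just see the image as before.
    if not (account.is_child and account.safety_enabled):
        return ["display"]

    if image_is_explicit:  # on-device check, after decryption, before display
        actions = ["show_content_warning"]
        # Parents are only alerted for under-13s who opted in to alerts;
        # older minors keep this private.
        if account.age < 13 and account.parent_alerts_opted_in:
            actions.append("notify_parents")
        return actions

    return ["display"]
```

Note that in this model a 15-year-old with the feature enabled gets only the content warning, never the parental notification, which matches the privacy point above about older minors.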
EDIT: pcmxa beat me to it.