The on-device check computes the NeuralHash of your picture. NeuralHash is not a traditional hash function; it tries to capture semantic similarity. Apple does not tell us the tolerances involved, but they must be non-zero; otherwise Apple could just use traditional hashes. If the NeuralHash of your picture is sufficiently close to the NeuralHash of one of the 200,000 NCMEC pictures (maybe one of a teenager in front of a similar wallpaper), your picture goes to an Apple reviewer.
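To make "sufficiently close" concrete, here is a minimal sketch of what such a fuzzy comparison could look like, assuming the match is a Hamming-distance test between fixed-length perceptual hashes. The hash length, the threshold, and the function names are my illustrative guesses, not anything Apple has published.

```swift
// Count the bits that differ between two equal-length hashes.
func hammingDistance(_ a: [UInt8], _ b: [UInt8]) -> Int {
    precondition(a.count == b.count, "hashes must be the same length")
    return zip(a, b).reduce(0) { $0 + ($1.0 ^ $1.1).nonzeroBitCount }
}

// A picture is flagged if its hash lands within `threshold` bits of any
// database hash. The default of 8 is an arbitrary placeholder; Apple
// publishes no such number.
func isFlagged(_ photoHash: [UInt8], database: [[UInt8]], threshold: Int = 8) -> Bool {
    database.contains { hammingDistance(photoHash, $0) <= threshold }
}
```

The larger that threshold, the more robust the match is against crops and re-encodings, and the more innocent near-misses it sweeps in; with a threshold of zero, a traditional hash would do.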
What happens next, Apple does not really explain. The Apple reviewer does not have access to the NCMEC pictures, so they will just look at yours, supposedly only a low-resolution derivative at that. "Hmm, the hash match is close, and this could be a kid, hard to tell. Better show it to NCMEC, let them sort it out." Is this feasible? I don't know; Apple won't tell. But the Apple reviewers must be making some decision here; otherwise their presence in the process would be pointless.
So maybe NCMEC gets your picture. Here we really do not know what happens. Presumably they compare it to their database and see that it is not one of theirs. Maybe NCMEC drops the case, and the only damage is that two or more strangers have stared at your private picture. But maybe NCMEC says, "wait, this is not in our DB, but she might be underage anyway! Better forward this to the police, let them sort it out." Is this feasible? Only Apple or NCMEC knows.
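We can only speculate about that comparison, but one plausible reason a forwarded picture gets cleared is that NCMEC checks against exact cryptographic digests rather than perceptual ones; a NeuralHash near-match would fail such a check. A hypothetical sketch, with invented names:

```swift
import CryptoKit
import Foundation

// Pure assumption about NCMEC's side: an exact lookup keyed on a
// cryptographic digest. Unlike NeuralHash, SHA-256 has zero tolerance,
// so a picture that merely *resembles* a database image will not match.
func isKnownImage(_ imageData: Data, knownDigests: Set<Data>) -> Bool {
    knownDigests.contains(Data(SHA256.hash(data: imageData)))
}
```

If that is roughly what happens, every picture that reaches NCMEC and fails the exact check is, by construction, a false positive of the fuzzy on-device match.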