NeuralHash

NeuralHash is a perceptual hashing function that maps images to numbers. Perceptual hashing bases this number on features of the image rather than on the precise values of its pixels. The system computes these hashes by using an embedding network to produce image descriptors and then converting those descriptors to integers using a Hyperplane LSH (Locality-Sensitive Hashing) process. This process ensures that different images produce different hashes.

The embedding network represents images as real-valued vectors and ensures that perceptually and semantically similar images have descriptors that are close in terms of angular distance (equivalently, cosine similarity). Perceptually and semantically different images have descriptors that are farther apart, i.e., at larger angular distances. The Hyperplane LSH process then converts these descriptors to unique integer hash values.

For every image processed by this system, regardless of resolution or quality, the hash must uniquely represent the content of the image, and it must be significantly smaller than the image itself so that it is efficient to store on disk or send over the network. The main purpose of the hash is to ensure that identical and visually similar images result in the same hash, while images that differ from one another result in different hashes. For example, an image that has been slightly cropped or resized should be treated as identical to its original and produce the same hash.
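The Hyperplane LSH step can be illustrated with a short sketch. The code below is a minimal, illustrative implementation and not Apple's actual NeuralHash: the descriptor dimension (128), the hash length (96 bits), and the random seed are assumptions chosen for the example. The idea it shows is that each hash bit records which side of a random hyperplane the descriptor falls on, so descriptors separated by a small angular distance agree on most or all bits.

```python
# Minimal sketch of Hyperplane LSH (illustrative; not Apple's implementation).
import numpy as np

rng = np.random.default_rng(seed=0)

DESCRIPTOR_DIM = 128   # assumed embedding size (illustrative)
NUM_BITS = 96          # assumed hash length in bits (illustrative)

# Each row is the normal vector of one random hyperplane through the origin.
hyperplanes = rng.standard_normal((NUM_BITS, DESCRIPTOR_DIM))

def hyperplane_lsh(descriptor: np.ndarray) -> int:
    """Map a real-valued descriptor to an integer hash.

    Each bit records which side of one random hyperplane the descriptor
    falls on (the sign of its dot product with that hyperplane's normal).
    Descriptors at a small angular distance land on the same side of most
    hyperplanes and therefore agree on most, often all, bits.
    """
    bits = (hyperplanes @ descriptor) >= 0   # one boolean per hyperplane
    value = 0
    for bit in bits:
        value = (value << 1) | int(bit)      # pack the bits into one integer
    return value
```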
"Visually similar images" means images that were modified but show the same content. The technical document gives the example of an RGB photo converted to a black-and-white photo: NeuralHash would still produce a match. However, two photos taken of the same object from different angles capture different content and therefore produce different NeuralHash values.
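Continuing the sketch above, the following toy comparison mirrors that matching behavior. The descriptors here are synthetic vectors standing in for embedding-network outputs (no real images or the real network are involved): a small perturbation plays the role of a benign edit such as grayscale conversion, and an unrelated vector plays the role of a photo of the same object from a different angle.

```python
# Continues the hyperplane_lsh sketch above; all descriptors are synthetic.
original = rng.standard_normal(DESCRIPTOR_DIM)

# A small perturbation stands in for a benign edit (e.g. grayscale
# conversion or a slight crop): the angular distance stays small.
edited = original + 0.01 * rng.standard_normal(DESCRIPTOR_DIM)

# An unrelated vector stands in for genuinely different content (e.g. the
# same object photographed from another angle).
different = rng.standard_normal(DESCRIPTOR_DIM)

print(hyperplane_lsh(original) == hyperplane_lsh(edited))     # usually True
print(hyperplane_lsh(original) == hyperplane_lsh(different))  # virtually always False
```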