A RAW file isn’t a photo. It’s not even an image. It’s data, and I mean “raw” data. Each of the square pixels on your camera sensor picks up a certain amount of light (0 means it was pitch black), and this “intensity” is recorded for each pixel and saved in a file. Again, just numbers. Even without getting too technical, I think you can see that a single recorded intensity like 37431 can’t be turned into a colour on its own. Alone, it represents some greyscale value ranging from pitch black to perfect white.
Whatever software you use to view/open the “photo” — your iPhone itself, Lightroom, Photoshop, etc — takes the RAW file’s values, runs some serious interpolation over all that data, and creates a colour (or b&w) photo out of it. The result of this interpolation is what gets saved as a JPEG. If you’re “looking” at a RAW file, it’s basically a JPEG you’re viewing. A live feed on a camera or iPhone screen is this data interpolation being done in real time.
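To make that interpolation less abstract, here’s a deliberately crude sketch of the idea. All the numbers and the sensor layout are made up for illustration: real sensors place a colour filter over the pixels (commonly an RGGB “Bayer” pattern), so each pixel records the intensity of only one colour, and the converter interpolates the other two from neighbouring pixels with far cleverer maths than this.

```python
import numpy as np

# Hypothetical 14-bit raw intensities (0..16383), one number per pixel,
# from a tiny made-up 4x4 sensor with an RGGB Bayer filter on top:
raw = np.array([
    [ 9000, 12000,  9100, 11900],   # R  G  R  G
    [12100,  3000, 12050,  3100],   # G  B  G  B
    [ 8900, 11950,  9050, 12010],   # R  G  R  G
    [11980,  3050, 12020,  2990],   # G  B  G  B
], dtype=np.float64)

# Crudest possible "demosaic": collapse each 2x2 RGGB block into one
# RGB pixel (real converters interpolate much more carefully).
r = raw[0::2, 0::2]                           # top-left of each block
g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2   # average the two greens
b = raw[1::2, 1::2]                           # bottom-right of each block
rgb = np.stack([r, g, b], axis=-1)            # shape (2, 2, 3)

# Scale the 14-bit data down to the 8-bit range a JPEG uses:
rgb8 = np.clip(rgb / 16383 * 255, 0, 255).astype(np.uint8)
print(rgb8.shape)   # (2, 2, 3): a tiny colour image, built from raw numbers
```

Only after a step like this does the data become something you could call a picture.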
RAW files are easier to edit than JPEGs, especially when it comes to colour and white balance, because all the raw data is still there. When you edit a RAW “photo”, you’re just telling the software to do the RAW-data-to-photo conversion differently.
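As a toy illustration of “doing the conversion differently”: white balance in a RAW converter amounts to per-channel gains applied to the original sensor values before anything is baked into 8-bit RGB. The gains and pixel values below are invented, not any real camera’s numbers.

```python
import numpy as np

# One linear RAW pixel (made-up 14-bit values, already demosaiced to R, G, B):
raw_rgb = np.array([[[6000.0, 9000.0, 4000.0]]])

def develop(raw, wb_gains, max_val=16383):
    """Re-run the RAW->photo conversion with a given white balance."""
    balanced = raw * wb_gains                  # gains applied to raw data
    return np.clip(balanced / max_val * 255, 0, 255).astype(np.uint8)

# Two different "develops" of the same raw data (hypothetical gains):
daylight = develop(raw_rgb, np.array([2.0, 1.0, 1.5]))   # warmer rendering
tungsten = develop(raw_rgb, np.array([1.2, 1.0, 2.4]))   # cooler rendering
print(daylight[0, 0], tungsten[0, 0])   # two different photos, same data
```

Nothing was “undone” to change the white balance; the conversion simply ran again with different settings, which is why it costs no quality.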
Since every piece of software that can read RAW files has its own instructions for converting the data into an image, the way a “RAW photo” looks will vary between programs. That’s not true of actual images (JPEGs, TIFFs).
In comparison, a JPEG is a very explicit set of Red, Green, and Blue (RGB) values assigned to each pixel, and should look exactly the same in every photo app, every browser, and every OS (as long as it’s viewed on the same computer monitor, same iPhone, or same tablet). An actual photo can’t be interpreted any other way because the colour of each pixel is already set. That limits what can be done to the colours of a photo file: not only has an RGB value been assigned to each pixel, but JPEGs store only 8 bits per colour channel (256 levels each), whereas RAW files typically record 12 or 14 bits per pixel, far more tonal information.
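That bit-depth gap is why shadow recovery works so much better on RAW files. A rough sketch with invented numbers: a run of slightly different dark tones survives in 14-bit raw data but collapses to a couple of levels once squeezed into 8 bits, and no amount of brightening brings the lost gradations back.

```python
import numpy as np

# 64 slightly different shadow intensities as a 14-bit RAW would record
# them (values are made up; 14-bit means 0..16383):
raw = np.arange(100, 164)

# The same shadows after being baked into 8-bit JPEG values (0..255):
jpeg = (raw / 16383 * 255).astype(np.uint8)

print(len(np.unique(raw)))    # 64 distinct tones in the RAW data
print(len(np.unique(jpeg)))   # only 2 distinct tones survive in 8-bit

# Brighten the shadows 4 stops (multiply by 16) in each version:
pushed_raw  = (raw * 16 / 16383 * 255).astype(np.uint8)
pushed_jpeg = np.minimum(jpeg.astype(int) * 16, 255).astype(np.uint8)

print(len(np.unique(pushed_raw)))    # a smooth gradient reappears
print(len(np.unique(pushed_jpeg)))   # still 2 tones: visible banding
```

Brightening the RAW version recovers a smooth ramp of tones; brightening the JPEG just makes its two surviving levels brighter, which shows up as banding.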