Cameras need a lot of correction to resemble what the eye sees. Your eye continuously processes imagery and auto-corrects it using context. A camera, for example, can adjust its exposure (the amount of light it receives), but it does so in a dumb, linear way: to capture detail in bright areas you can lower the exposure, but then the shadows go completely dark. Your brain, on the other hand, picks out detail in shadowed areas while also keeping bright areas from blowing out to pure white, and that's essentially what HDR tries to achieve. Your brain also does a lot of white balance correction, because it knows what is supposed to be white.
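To make the HDR and white-balance ideas concrete, here's a rough sketch of both in Python. It assumes 8-bit RGB images loaded as numpy arrays (e.g. via imageio); the weighting scheme is a simplification for illustration, not any real camera's pipeline.

```python
import numpy as np

def blend_exposures(short_exp, long_exp):
    """Blend a short (dark) and a long (bright) exposure of the same scene.

    Pixels that are well exposed (near mid-gray) in each frame get more weight,
    so highlights come mostly from the short exposure and shadows from the long
    one -- a simplified version of the HDR idea described above.
    """
    short_f = short_exp.astype(np.float64) / 255.0
    long_f = long_exp.astype(np.float64) / 255.0

    def well_exposedness(img):
        # Weight peaks at mid-gray (0.5) and falls off toward black and white.
        gray = img.mean(axis=2, keepdims=True)
        return np.exp(-((gray - 0.5) ** 2) / (2 * 0.2 ** 2))

    w_short = well_exposedness(short_f)
    w_long = well_exposedness(long_f)
    total = w_short + w_long + 1e-8

    fused = (short_f * w_short + long_f * w_long) / total
    return (np.clip(fused, 0.0, 1.0) * 255).astype(np.uint8)

def gray_world_white_balance(img):
    """Scale each channel so the image's average color comes out neutral gray,
    a crude stand-in for the white balance your brain does automatically."""
    img_f = img.astype(np.float64)
    channel_means = img_f.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / (channel_means + 1e-8)
    return np.clip(img_f * gain, 0, 255).astype(np.uint8)
```

Real HDR merging and tone mapping are much more involved (per-pixel weights across many exposures, local contrast handling), but the core trick is the same: take detail from whichever exposure captured it best, then map the result back into a displayable range.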
Anyway, without any post-processing, the images will look nothing like what your eye sees.