Easy: displaying a webpage takes up a lot of RAM. The text has to be rasterised (usually by the CPU), and the CSS needs to be rasterised too, i.e. the curves, shadows, colours, etc., so they can be stuck together to make the final webpage. Any images (JPEGs, PNGs, etc.) need to be decompressed into memory so they can be rendered; a 1024x1024 RGB JPEG that is only 60KB on disk actually uses about 3MB of memory once it's decompressed. All of these elements are held in memory separately and then composited together before being sent to the GPU for display on screen.
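To put a number on it, here's a quick sketch (Swift, purely for illustration) of the arithmetic behind that figure: the decoded size depends only on the pixel dimensions and the bytes per pixel, not on the size of the compressed file.

```swift
// A minimal sketch of why a small JPEG balloons in memory once decoded.
// The dimensions and bytes-per-pixel values are illustrative assumptions.
let width = 1024
let height = 1024

// Decoded RGB is 3 bytes per pixel; most compositors actually keep RGBA (4 bytes).
let rgbBytes  = width * height * 3   // 3,145,728 bytes ≈ 3 MB
let rgbaBytes = width * height * 4   // 4,194,304 bytes ≈ 4 MB

print("60 KB JPEG decoded as RGB:  \(rgbBytes / 1024 / 1024) MB")
print("60 KB JPEG decoded as RGBA: \(rgbaBytes / 1024 / 1024) MB")
```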
All of this is on top of the memory required by the JS engine and the CSS and HTML processors.
Think of it this way: each box on a webpage is effectively an individual rendered RGBA image that has to be held in memory, ready to be recomposited at any moment when the user scrolls or a piece of JS moves an element. It all adds up to a lot of memory being used! (See the sketch below.)
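As a hypothetical illustration (the layer sizes below are made up, not measurements from a real page), here is roughly how those per-layer RGBA backing stores add up:

```swift
import Foundation

// A rough, hypothetical estimate of compositor layer memory:
// each layer is kept as a full RGBA bitmap, i.e. 4 bytes per pixel.
struct Layer {
    let width: Int
    let height: Int
    var bytes: Int { width * height * 4 }
}

let layers = [
    Layer(width: 750, height: 5000),  // long scrolling document body
    Layer(width: 750, height: 300),   // fixed header
    Layer(width: 750, height: 1334),  // full-screen hero image
    Layer(width: 300, height: 250),   // an ad iframe
]

let totalMB = Double(layers.reduce(0) { $0 + $1.bytes }) / 1_048_576
print(String(format: "~%.1f MB just for these layer backing stores", totalMB))
```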
On iOS, Safari dumps this rasterised cache in low-memory situations and instead keeps a single static greyscale representation of the webpage. If you want to see the page again it has to be reloaded and re-rasterised, using up all that memory again.
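For a sense of the pattern involved, an app can do the same thing itself: keep rasterised bitmaps in a cache, throw them away when the system posts a memory warning, and re-render on demand. This is a hedged sketch of a generic app-side cache, not Safari's actual implementation; UIApplication.didReceiveMemoryWarningNotification and NSCache are real APIs, but the TileCache type is invented for this example.

```swift
import UIKit

// Illustrative only: drop expensive rasterised bitmaps under memory pressure,
// and re-render them later when they're needed again.
final class TileCache {
    // NSCache already evicts under pressure, but we also clear it explicitly
    // when the system posts a memory warning.
    private let tiles = NSCache<NSString, UIImage>()
    private var observer: NSObjectProtocol?

    init() {
        observer = NotificationCenter.default.addObserver(
            forName: UIApplication.didReceiveMemoryWarningNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            // Throw away the rasterised tiles; they can be re-drawn later.
            self?.tiles.removeAllObjects()
        }
    }

    func tile(for key: String, render: () -> UIImage) -> UIImage {
        if let cached = tiles.object(forKey: key as NSString) {
            return cached
        }
        let image = render()            // expensive re-rasterisation
        tiles.setObject(image, forKey: key as NSString)
        return image
    }
}
```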
There is work being done to move some of this onto the GPU, but I don't believe we're there yet; a lot of it is still done on the CPU and held in main system memory. And since iOS devices use a unified memory architecture, the CPU and GPU share that same pool of RAM, so all of this has to live together in one place, which is why pages get evicted and reloaded so often.
References:
http://www.chromium.org/developers/design-documents/gpu-accelerated-compositing-in-chrome