Yeah, I think I remember hearing about Linux back in the 90s. A bunch of the other CS guys liked to tinker around with it when we were in college. I actually make a living as a software engineer, but I have no clue how Linux works. In Windows, if you try to allocate a ridiculous amount of RAM (beyond the 4GB virtual memory space on 32-bit), it will simply fail. At no point does the OS randomly pick some arbitrary process to simply terminate. If it needs more physical RAM for the active process, it just swaps the memory of a non-active process out to disk. How on earth could a system work where your application is constantly in danger of being randomly terminated through no fault of its own? What criteria would be used to choose the victims? That is the most ridiculous concept I have ever heard. I would be shocked if this was actually how Linux worked by default.

There's this upstart OS called Linux, you may have heard of it.
In a desktop or server it's rare to really run out of RAM, since you can make use of disk for swap. Linux by default (probably other systems too, but Linux is what I know) will allow allocations that it doesn't have the space for. Part of the reason for this is that applications regularly ask for more RAM than they ever actually use, so the kernel oversells its capacity (kind of like your bank does with money). This leads to a need to kill processes if space really does run out.
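A minimal sketch of what that overselling looks like in practice, assuming a 64-bit Linux box with the default overcommit policy (vm.overcommit_memory = 0): keep asking malloc() for chunks and never write to them.

```c
/* Sketch: under default overcommit the kernel will usually hand out far more
 * untouched virtual memory than the machine's RAM plus swap.  Real pages are
 * only committed when they're written to -- that's the point at which the
 * OOM killer may have to pick a victim, so this demo never touches them. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = 1024UL * 1024UL * 1024UL; /* 1 GiB per request */
    size_t granted = 0;

    for (int i = 0; i < 256; i++) {      /* request up to 256 GiB in total */
        if (malloc(chunk) == NULL)
            break;                        /* the kernel finally said no */
        granted++;
    }

    printf("kernel granted %zu GiB of untouched allocations\n", granted);
    return 0;  /* the leak is deliberate; everything vanishes on exit */
}
```

On a typical desktop the reported total comes out well past the physical RAM and swap combined, because none of those pages have actually been backed yet.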
You can change this behavior so that the kernel will guarantee that RAM will be available if an allocation succeeds, but there's not much point. Very few applications are really written to handle a failed memory allocation well; most just abort -- well-written apps at least do so cleanly.
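For reference, the knob is the vm.overcommit_memory sysctl (setting it to 2 makes the kernel do strict accounting and refuse allocations it can't cover). Here's a rough sketch of what "handling it cleanly" means, assuming strict accounting is on so malloc() can actually return NULL; the 512 MiB buffer is just a hypothetical working set.

```c
/* Sketch: check every allocation and shut down gracefully on failure,
 * instead of being killed later when the memory is first used. */
#include <stdio.h>
#include <stdlib.h>

static void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        /* Clean abort: report the problem and exit with an error status. */
        fprintf(stderr, "out of memory requesting %zu bytes, shutting down\n", n);
        exit(EXIT_FAILURE);
    }
    return p;
}

int main(void)
{
    char *buffer = xmalloc(512UL * 1024UL * 1024UL); /* hypothetical 512 MiB */
    buffer[0] = 'x';  /* with strict accounting, this page is guaranteed to exist */
    free(buffer);
    return 0;
}
```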
Whatever the reason, the iPod app terminating while using Safari is a bug.