Wow, thanks for the sudden rush of information! I'll try a bunch of this tomorrow and report back.
Nope, by default virtual memory is unlimited. It might be that you are hitting an addressable RAM limit or perhaps some kind of per-process limit.
You might have better luck posting on Matlab-specific forums about this; it seems like the kind of thing other people there might have run into.
The error box I get is an OS X error, not a Matlab "out of memory" error, so something is triggering OS X to warn and prompt me to start force quitting tasks. It's a very similar warning box to the "your startup disk is almost full" error.
interesting topic, one that grabs my attention
You cannot change the swap size in OS X; it is sized automatically as needed.
You have clearly exceeded the free space on your drive by running these massive simulations, and really the ONLY way I can see you fixing it is by getting a larger startup drive - unless of course you can rewrite how OS X handles page files and whatnot.
I've still got 300GB free-- it seems that I'm a long way away from being full unless...
Not sure if this is helpful or not, but from what I understand, OS X automatically defragments files 20MB and smaller. Anything larger than that, such as in your case, may be fragmented all over the place with many gaps in between sectors.
This could very well be the case. Each swap file after the first 4 or so is about 1GB in size. If there weren't 1GB of contiguous disk space, would that cause this problem? I wouldn't expect finding a free GB out of 300 would be hard, but doing it the 65th time might just be too much.
I used to do that ritualistically in the Windows world but haven't thought about it in over a decade now... Anybody have a favorite defragmenter?
Have you run the tools (from the Terminal) like top, vm_stat or vmmap?
They can tell you how big a VM region is. I don't know that the swap file has to be in unfragmented disk sectors, as suggested earlier. I haven't heard of multiple swap files either.
Total speculation on my part, but maybe the OS message is not correlated with the underlying condition. It says you have exceeded the disk space, but maybe it reports that when you have "excessive" paging. Tools like vm_stat or top can give you some indication of that sort of activity.
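In case it helps when you try them: a rough sketch of turning vm_stat-style counts into something readable. The two lines in $sample below are made-up illustrative values, not real measurements (and vm_stat's exact labels vary a bit by OS X release); the point is that vm_stat reports counts of 4096-byte pages, so multiplying by the page size gives bytes.

```shell
# Made-up vm_stat-style lines for illustration only:
sample='Pages free:     76800.
Pages active:  524288.'
# vm_stat counts 4096-byte pages; convert each count to MiB:
printf '%s\n' "$sample" |
  awk '{ gsub(/[.:]/, ""); mib = $3 * 4096 / 1048576; print $1, $2 ":", mib, "MiB" }'
```

Watching the "Pageouts" counter climb while the warning box is up would be one way to support the excessive-paging theory.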
Thank you for both these pointers-- top I'm aware of, but the two vm specific tools I hadn't seen before. I'll see what I can learn when I'm back in the office tomorrow. vmmap looks like more than I'll be able to digest, but vm_stat looks manageable.
I've thought about the possibility that OS X is misdiagnosing the problem, or is just getting concerned by the rate of consumption rather than the total amount and trying to give me time to respond before it's too late. "At this rate, you've got 10 minutes to do something before the world ends" kind of thing...
It is doing massive paging. 1k+ pages continuously on MenuMeters for minutes.
The argument against this theory though is that once it stops consuming memory, and even after having killed non-Matlab processes to appease the gods, OS X won't stop posting the warning until I free memory within Matlab (or restart it).
You can easily test some of your hypotheses by customizing /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist and using the -S option to set the size of each paging file. Keep in mind OS X needs a block of contiguous disk space for each swap file.
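For reference, the change would look something like this inside the plist's ProgramArguments array. This is a sketch based on the 10.x-era layout: the swapfile path and the byte value (2147483648 = 2GB) are illustrative, so check the arguments already in your own file before editing, and keep a backup copy first.

```xml
<key>ProgramArguments</key>
<array>
    <string>/sbin/dynamic_pager</string>
    <string>-F</string>
    <string>/private/var/vm/swapfile</string>
    <string>-S</string>
    <string>2147483648</string>
</array>
```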
Also, are you sure Matlab isn't somehow triggering the error by asking for some ridiculous amount of memory (e.g. copy-on-write) which the OS refuses to give?
There's the ticket! I'll give this one a shot tomorrow too, hopefully if I set the paging files to 2GB, everything will get happy. If fragmentation is the culprit, I'd expect it to fall over sooner with a 2GB swap file.
I don't think it's a major allocation request that trips the alarm-- I can set simulation parameters such that it will run, but OS X won't allow me to dismiss the force-quit dialog. I think that means that I'm right up against the line at that point-- OS X fulfilled the memory request but doesn't think it can fulfill the next one.