Thanks for the informative post.
- Is this a 64GB iMac vs a 16GB MBP?
- Can you say how big the data sets were in both cases (small vs large)?
Hi niray9,
To answer your questions: (The answer to your query 2 is somewhat technical... sorry about this, but you asked, so I assume you are interested.)
1. 32GB iMac, 8GB MBP (remember, these computations were performed around 8 years ago).
2. The variable n is the actual "size" of the problem, i.e., the number of variables being solved for. The iterative-improvement and Monte Carlo algorithms were solving for all n of these variables. Because the commercial GE routines (MATLAB, NumPy, and Sage) employed sparse-matrix techniques, the effective "size" of the problems they solved was somewhat smaller than n (the sparse-matrix recoding was performed during initialization rather than inside the GE algorithm itself).

The Monte Carlo algorithm scaled as O(n^1.83) (big-Oh notation), so in the long run it would win out over all of the other algorithms, whose time complexities were larger. [Naive theory would predict O(n^2) for the Monte Carlo, but it scaled slightly better than that because the average random walk was shorter than length n.] The MATLAB GE scaled as O(n^2.83), suspiciously close to O(n^log2(7)) = O(n^2.81), the complexity of the Strassen algorithm, rather than the O(n^3) at which full GE should scale; MATLAB was therefore probably using Strassen. Since MATLAB was closed-source commercial code, I couldn't read the source to determine precisely what it was doing, and the documentation at the time didn't fully explain how the algorithm operated.

Note that the plots in my earlier post are log(t) versus log(n) plots, so log(n)=10 means n=10^10.
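The scaling exponents quoted above come from the slopes of those log(t) versus log(n) plots. As a sketch of how such an exponent can be estimated, here is a least-squares fit in NumPy; the timing numbers below are made up for illustration (chosen to follow an n^1.83 trend), not my original measurements:

```python
import numpy as np

# Hypothetical (illustrative) timing data: problem sizes n and runtimes t in seconds.
n = np.array([1e3, 1e4, 1e5, 1e6])
t = np.array([0.02, 1.35, 91.2, 6150.0])

# On a log-log plot, t = C * n^p becomes a straight line:
#   log10(t) = p * log10(n) + log10(C)
# so a degree-1 polynomial fit recovers the empirical exponent p as the slope.
p, c = np.polyfit(np.log10(n), np.log10(t), 1)
print(f"empirical scaling exponent: {p:.2f}")  # -> empirical scaling exponent: 1.83
```

The same fit applied to the MATLAB GE timings is what yields the suspicious 2.83 exponent.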
Hope this helps,
Solouki
Hi collin_,
You wrote:
A bit off-topic but... what does macOS do when there isn't enough RAM? Based on what I've read here, it seems that people just experience stuttering? That doesn't seem so bad, but I'm assuming worse things could happen (such as not being able to open certain files, or even a system crash)? Does it try to use the SSD as RAM, or what?
I believe almost all modern operating systems (Windows, Linux, macOS, etc.) use what are termed swap files or page files. These are large "files" on the computer's disk (e.g., its SSD) that the operating system reserves so that it can write the contents of RAM out to disk when free RAM runs low. This frees up RAM for the running processes (both OS and user processes). So when there isn't enough RAM available, the OS swaps some of the contents of RAM out to the SSD and continues to run; in other words, the OS does not crash when free RAM runs low.

But swapping RAM to SSD takes time, since SSD read/write speeds are considerably slower than RAM read/write speeds. So when a program begins swapping routinely to the page file, its execution slows down drastically. This is what occurred in the bottom plot of my earlier post (#49): that plot shows MATLAB behaving most erratically and poorly for log(n)>10 when solving large discretized partial differential equation problems. It doesn't mean the code or the OS crashed, but rather that the execution times blow up, so that in practice these large problems cannot be solved effectively given a finite amount of time and resources.
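A back-of-the-envelope calculation shows why memory pressure is unavoidable at those sizes. The numbers below are illustrative, keyed to the 32GB iMac mentioned earlier, not an actual measured footprint of any of the codes:

```python
# Illustrative RAM check (assumed sizes, not measured footprints).
ram_bytes = 32 * 1024**3            # 32 GB of physical RAM, as on the iMac above

n = 10**10                          # problem size where log10(n) = 10
bytes_per_float64 = 8
vector_bytes = n * bytes_per_float64    # storage for ONE length-n solution vector

print(f"one length-n vector needs {vector_bytes / 1024**3:.1f} GB")
# -> one length-n vector needs 74.5 GB
if vector_bytes > ram_bytes:
    print("exceeds physical RAM, so the OS must page to the SSD and runtimes blow up")
```

Even a single dense length-n vector overflows RAM at that scale, before counting any matrix storage, so heavy paging (and the erratic timings in the plot) is exactly what one would expect.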
Hope this helps,
Solouki