This has me baffled. I've got a program that is basically CPU bound; the only system calls I know of are mainly to malloc and free, and even those should be limited. When I run it under Linux (in a virtual machine) I get 1.8 seconds of CPU time (as measured by times(), adding system and user) and 2.0 seconds of real time. When run under OS X it takes 12 seconds of real time, and the CPU time is 5.98 seconds system and 13.20 seconds user.
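For reference, this is roughly how I'm measuring the times, a minimal sketch where do_work() is just a stand-in for the real workload:

```c
#include <stdio.h>
#include <sys/times.h>
#include <unistd.h>

/* Stand-in for the real CPU-bound workload. */
static void do_work(void)
{
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        sum += i;
}

int main(void)
{
    struct tms start, end;
    long ticks = sysconf(_SC_CLK_TCK);    /* clock ticks per second */

    clock_t real_start = times(&start);   /* times() also returns elapsed real ticks */
    do_work();
    clock_t real_end = times(&end);

    printf("user:   %.2f s\n", (double)(end.tms_utime - start.tms_utime) / ticks);
    printf("system: %.2f s\n", (double)(end.tms_stime - start.tms_stime) / ticks);
    printf("real:   %.2f s\n", (double)(real_end - real_start) / ticks);
    return 0;
}
```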
Keep in mind that the program is a single task/thread, so I am baffled why the user CPU time is greater than the real time. I'm also baffled why it takes roughly five times as long to run. I've tried both gcc and clang, and both 32-bit and 64-bit builds (64-bit is faster and is what the quoted times are from, but the Linux build is 32-bit).
One other baffling item: the program appears to use 2 MB of RAM (believable), but shows as 20 MB of virtual memory in Activity Monitor.
I don't know how to use the analysis tools, especially with a command-line application.
Any thoughts or answers?