The GUI benchmarking tools available for old versions of Mac OS X are poorly suited for complex storage setups, such as SSDs, RAID, and network storage, due to the lack of options to set queue depth, thread count, etc. Thankfully, there is a way to use fio, a modern and flexible CLI alternative that comes with a bundled tool for visualizing test results. I'd like to share a quick tutorial on how to build fio for PPC-based Macs, along with my benchmarking setup.
Fio is available as a formula for Tigerbrew, but it won't build, for more than one reason. I've opened an issue, and hopefully we'll eventually be able to simply run
brew install fio
after the formula has been updated. Until then, fio has to be built from source. Xcode is required, as well as a recent enough version of GCC:
brew install gcc48
While you're at it, you might want to upgrade to the latest GCC 14, using GCC 4.8 as a bootstrap compiler. Beware, though, that even on a G5 Quad it takes ~3 hours to build:
brew install gcc
Build fio from source:
Code:
git clone https://github.com/axboe/fio
cd fio
git checkout fio-3.7
./configure --cc=gcc-4.8
make
To produce visual graphs for benchmarks, gnuplot is also a requirement:
brew install gnuplot --without-lua
Jobs for fio can be specified entirely with CLI arguments, but I prefer to configure them in a separate file named jobs.ini:
Code:
[global]
ioengine=posixaio
gtod_reduce=1
direct=1
log_avg_msec=1000
per_job_logs=0
thread=1
overwrite=1
[create_workfile]
size=16g
filename=workfile
create_only=1
[create_dummy]
size=16g
filename=dummy
create_only=1
unlink=1
[lin_read_128k_qd1]
stonewall
filename=workfile
readwrite=read
bs=128k
iodepth=1
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
[lin_read_128k_qd4]
stonewall
filename=workfile
readwrite=read
bs=128k
iodepth=4
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
[lin_read_128k_qd32]
stonewall
filename=workfile
readwrite=read
bs=128k
iodepth=32
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
[lin_write_128k_qd1]
stonewall
filename=workfile
readwrite=write
bs=128k
iodepth=1
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
[lin_write_128k_qd4]
stonewall
filename=workfile
readwrite=write
bs=128k
iodepth=4
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
[lin_write_128k_qd32]
stonewall
filename=workfile
readwrite=write
bs=128k
iodepth=32
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
[ran_read_4k_qd1]
stonewall
filename=workfile
readwrite=randread
bs=4k
iodepth=1
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
[ran_read_4k_qd4]
stonewall
filename=workfile
readwrite=randread
bs=4k
iodepth=4
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
[ran_read_4k_qd32]
stonewall
filename=workfile
readwrite=randread
bs=4k
iodepth=32
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
[ran_write_4k_qd1]
stonewall
filename=workfile
readwrite=randwrite
bs=4k
iodepth=1
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
[ran_write_4k_qd4]
stonewall
filename=workfile
readwrite=randwrite
bs=4k
iodepth=4
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
[ran_write_4k_qd32]
stonewall
filename=workfile
readwrite=randwrite
bs=4k
iodepth=32
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
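The benchmark sections above differ only in readwrite, bs, and iodepth, so the pattern extends easily. For example, a hypothetical QD8 random-read section (not part of my setup) would look like this; remember to add a matching --section= invocation to the script as well:

```ini
[ran_read_4k_qd8]
stonewall
filename=workfile
readwrite=randread
bs=4k
iodepth=8
ramp_time=5s
runtime=15s
time_based=1
write_bw_log
write_iops_log
write_lat_log
```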
To automate benchmarking, I launch the following shell script from the same directory where the fio binary is built.
Bash:
#!/bin/sh
fio='./fio'
plot='tools/fio_generate_plots'

$fio jobs.ini --section=create_workfile
sleep 15
$fio jobs.ini --section=create_dummy
sleep 15

$fio jobs.ini --section=lin_read_128k_qd1
sleep 15
$fio jobs.ini --section=lin_read_128k_qd4
sleep 15
$fio jobs.ini --section=lin_read_128k_qd32
sleep 15
$plot lr > /dev/null 2>&1
rm *.log

$fio jobs.ini --section=ran_read_4k_qd1
sleep 15
$fio jobs.ini --section=ran_read_4k_qd4
sleep 15
$fio jobs.ini --section=ran_read_4k_qd32
sleep 15
$plot rr > /dev/null 2>&1
rm *.log

$fio jobs.ini --section=lin_write_128k_qd1
sleep 15
$fio jobs.ini --section=lin_write_128k_qd4
sleep 15
$fio jobs.ini --section=lin_write_128k_qd32
sleep 15
$plot lw > /dev/null 2>&1
rm *.log

$fio jobs.ini --section=ran_write_4k_qd1
sleep 15
$fio jobs.ini --section=ran_write_4k_qd4
sleep 15
$fio jobs.ini --section=ran_write_4k_qd32
$plot rw > /dev/null 2>&1
rm *.log

rm workfile *lat.svg
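For planning purposes, the timed portion of the run is easy to estimate: the script runs twelve benchmark jobs (5 s ramp plus 15 s of measurement each) and sleeps 15 s thirteen times. A back-of-the-envelope sketch, which deliberately excludes the time to create the two 16 GB files since that depends entirely on the storage under test:

```shell
# Rough wall-clock estimate for the benchmark phase only.
# Assumptions: 12 timed jobs (5 s ramp + 15 s runtime each),
# 13 cool-down sleeps of 15 s; workfile/dummy creation excluded.
jobs=12
seconds_per_job=$((5 + 15))
sleep_total=$((13 * 15))
total=$((jobs * seconds_per_job + sleep_total))
echo "approx ${total}s (~$((total / 60)) min) excluding file creation"
```

So a full pass takes a little over seven minutes of pure benchmarking, plus however long the two 16 GB files take to lay down.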
Note the sleep commands and the create_dummy section: those are there primarily for benchmarking modern solid-state storage, which may not like sustained workloads without breaks in between and also exhibits aggressive read caching. What I end up with is bandwidth and IOPS over time graphs for each type of workload (linear vs. random, read vs. write) at varied queue depths.
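If you just want a headline number without plotting, the bandwidth logs are easy to process by hand: each line is a comma-separated sample of time in ms, value in KiB/s, data direction (0 = read, 1 = write), and block size, with one averaged sample per second thanks to log_avg_msec=1000. A quick sketch with made-up sample values (the log contents below are hypothetical, for illustration only):

```shell
# Hypothetical bandwidth log in fio's format:
# time_ms, bandwidth_KiB/s, direction (0=read, 1=write), block_size
cat > sample_bw.log <<'EOF'
1000, 102400, 0, 131072
2000, 98304, 0, 131072
3000, 106496, 0, 131072
EOF

# Mean bandwidth across all samples, in KiB/s
avg=$(awk -F',' '{ sum += $2 } END { printf "%d", sum / NR }' sample_bw.log)
echo "average: ${avg} KiB/s"
```

The same approach works for the IOPS logs, which share the column layout.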