Of course the number of drives you can lose depends on the kind of setup you're running.
I'm currently using raidz2 for an 8-drive setup. Originally I had it configured as RAID6, and personally I think single parity for a setup of 8 drives or more is a little too optimistic.
I hope I can.
I tested ZFS first with the current FreeNAS build as a VM before I installed it on a physical machine. I configured it with 8 virtual drives and set up a raidz2. Then I copied about 100GB of data onto that pool (mounted via AFP) and started disconnecting drives at random. First drive out, everything's fine. Second drive out, 6 left, which is supposed to still work, and it indeed does.
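For reference, what I did is roughly equivalent to the following commands (pool and device names are just examples from my VM; I actually disconnected the virtual disks rather than offlining them):

    # create the 8-disk raidz2 pool
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
    # simulate the first two failures
    zpool offline tank da3
    zpool offline tank da5
    # pool should show up as DEGRADED but still be usable
    zpool status tank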
Alright, then I replaced the two drives; ZFS automatically resilvered the array (it took a second or so, way too little time to copy any data at all) and it was labeled online again.
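Done from the command line instead of the FreeNAS GUI, the replacement would look roughly like this (again, example device names):

    # rebuild onto the replacement disks
    zpool replace tank da3 da8
    zpool replace tank da5 da9
    # check whether the resilver has actually finished
    zpool status tank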
Then I popped out another drive and that did the job. Array offline.
Did you actually wait for the system to show that the array was rebuilt? A few seconds sounds way too short to resilver the virtual drive.
In my environment it takes about 5-10 minutes to resilver a drive with ~5GB of used data on a physical machine. That's a fresh OS install being mirrored.
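You can see exactly where a resilver stands from the command line; something like this (pool name is just an example):

    # the scrub/scan line reports "resilver in progress" with a percentage
    # and "resilvered ... with 0 errors" once it has completed
    zpool status tank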
No "restoring" process in the classical sense, is it?
What I'd expect is that resilvering actually makes the old data available on all available discs, but apparently it does not.
I'm not sure if FreeNas screwed it, but I hope that you can clear that up!
Haven't tested the resilvering process on a physical machine yet.
I would say that your procedure is what's flawed here: you pulled enough drives to toast the array because you pulled them before the array's integrity had been restored.
"zpool status" is your friend.