From my understanding there's a night and day difference between booting and browsing through photos. When you boot, the OS needs to access thousands of small files basically at random, so an SSD can speed up the process by (let's say) 5x or more. That's a massive tangible benefit. What in a photographer's workflow is sped up nearly that much by an SSD?
Quite a few things in a photographer's workflow can be sped up considerably by an SSD, especially tasks that involve batch processing of files.
For example, here are some excerpts from a long post that I wrote about performance (https://forums.macrumors.com/threads/1293809/):
As usual, the Mini with a Vertex 3 SSD takes the lead. Looking at the Mini with a mechanical hard drive, the extra RAM speeds up the operation by 4 seconds, so there is some caching going on. The 2009 MBP is not far behind the Mini server, despite its age and slow CPU; the upgraded RAM and Momentus XT HDD are helping to keep it in the running. Again, the difference between the dual-core MBA with its slow SSD and the quad-core Mini with its fast SSD is minimal at 2 seconds.
The CPU takes a back seat in this test and the key factor is storage system speed. If the workload increases (more images, or higher-resolution images), then a faster SSD and more RAM will help. As long as your computer is not running out of RAM, having more generally does not make it faster. In this case the storage is the engine, the RAM is the tyres and the CPU is the suspension.
This is actually the most intensive benchmark that I ran in terms of storage and RAM; the CPU plays an insignificant role. The test uses Pixelmator actions in Automator to add the watermark, and this uses a lot of memory, resulting in large amounts of page outs and heavy swapping to the HDD or SSD.
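To give an idea of the kind of job this is, here is a minimal sketch of a batch watermarking script. It is not the Pixelmator/Automator action from the test, just a rough Python equivalent using Pillow; the folder names and the watermark overlay file are made up for the example.

```python
# Rough stand-in for the watermarking batch job (assumes Pillow is installed;
# the paths and the watermark overlay are hypothetical).
from pathlib import Path
from PIL import Image

src = Path("~/Pictures/export").expanduser()        # hypothetical input folder
dst = Path("~/Pictures/watermarked").expanduser()   # hypothetical output folder
mark = Image.open("watermark.png").convert("RGBA")  # hypothetical overlay image

dst.mkdir(parents=True, exist_ok=True)

for photo in sorted(src.glob("*.jpg")):
    img = Image.open(photo).convert("RGBA")
    # Paste the watermark 20 px in from the bottom-right corner.
    pos = (img.width - mark.width - 20, img.height - mark.height - 20)
    img.alpha_composite(mark, dest=pos)
    img.convert("RGB").save(dst / photo.name, quality=90)
```

Every image has to be decoded into a full-resolution bitmap in memory before it can be composited and re-encoded, which is why a run over thousands of photos leans so heavily on RAM and on the disk.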
When we run out of memory we end up with disk thrashing, because the virtual memory subsystem is paging large amounts of data in and out at the same time as the disk is trying to read and write the data required for the user's operation.
The Mini with 4 GB of RAM and a 7200 RPM HDD suffers badly during this test and takes more than twice as long as the 2009 MBP, which has been upgraded to 8 GB of RAM. If your typical workflow involves using large amounts of memory (restart, then check your page outs versus page ins after a typical work day – search this forum for more info if you're not following), then upgrading the RAM is the best upgrade you can make.
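If you're not sure how to check, OS X's vm_stat command reports the page in / page out counters since boot. A quick sketch follows; the exact label text may differ between OS versions, so treat the parsing as an assumption.

```python
# Print page ins vs page outs from vm_stat (ships with OS X / macOS).
import subprocess

stats = {}
for line in subprocess.check_output(["vm_stat"], text=True).splitlines():
    key, _, value = line.partition(":")
    value = value.strip().rstrip(".")
    if value.isdigit():
        stats[key.strip()] = int(value)

print(f"Page ins:  {stats.get('Pageins', 0)}")
print(f"Page outs: {stats.get('Pageouts', 0)}")
# A large page out count relative to page ins at the end of a typical work day
# means the machine is swapping, and more RAM would be the better upgrade.
```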
Take a look at the performance of the Mini with 8 GB of RAM and a 7200 RPM HDD versus the Mini with 4 GB of RAM and a Vertex 3 SSD - the Mini with more RAM is faster, despite its slower, mechanical HDD.
Even with the excessive disk usage of the virtual memory system during this test, the slow Samsung SSD in the MBA again manages to keep up with the much faster Vertex 3 and is only 3 seconds slower.
The clear winner is the Mini with 8 GB of RAM and an SSD. What would be interesting to see is how a Mini with 16 GB of RAM and a mechanical HDD would perform. Considering that we end up with around 11 GB of page outs after this test, even when using 8 GB of RAM, I believe it would be faster than the SSD-equipped Mini with 8 GB of RAM.
Personally I do development and using an SSD speeds up my project build times by about 5x because that requires similar accessing of hundreds of small files in quick succession. A build that used to require 45 seconds might be finished in less than 10 seconds. That dramatically changes my workflow because now I can make a smaller change to my code and test it out almost immediately whereas before I had to group more changes together and couldn't test/refine each one individually.
So do I - been writing code for over 20 years now, but I work on design and verification of enterprise systems these days.
It depends on which IDE is being used, but in general the storage subsystem makes very little difference to build times, except for the very first build. As you've said, you have many little source files. A decent IDE, such as Xcode, aggressively caches all of those little source files in memory.
Compiling and building source code is CPU-bound.
That is the number one performance factor. There are benchmarks out there (on macperformanceguide, for example) that show what happens when you run a build from Xcode off the hard drive, then off an SSD, and finally off a RAM drive. You would expect the RAM drive to be stupidly fast, right? Well, it's actually the same speed as the SSD, and if you run the build consecutively, thanks to caching, the hard drive is not lagging behind either.
The good news is that my new PC significantly outperforms my old one. The somewhat surprising news is that the HDD is consistently marginally quicker than the SSD.
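You can reproduce this kind of comparison yourself by timing the same clean build with the project sitting on each type of storage. Here is a rough sketch, assuming an Xcode project that builds with xcodebuild's defaults; the volume paths are placeholders.

```python
# Time a clean command-line build of the same project from different volumes.
import subprocess
import time

def timed_clean_build(project_dir):
    start = time.time()
    subprocess.run(["xcodebuild", "clean", "build"],
                   cwd=project_dir,
                   stdout=subprocess.DEVNULL,
                   check=True)
    return time.time() - start

# Hypothetical copies of the same project on an HDD, an SSD and a RAM disk.
for location in ["/Volumes/HDD/MyProject",
                 "/Volumes/SSD/MyProject",
                 "/Volumes/RAMDisk/MyProject"]:
    print(f"{location}: {timed_clean_build(location):.1f} s")
```

Run it a couple of times per location; after the first pass the OS file cache levels the playing field, which is exactly the point.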
Here is an old post that I wrote about this topic:
theSeb said:
Creating these precompiled files is not a massive operation in terms of disk usage. Don't forget we're dealing with source code files, which are usually only a couple of kilobytes each.
Throwing numbers around without any context does not really prove any point. Sure, Samsung 830-based flash storage can do around 450 MB/s reads, but it achieves those speeds when reading large sequential files (gigabytes' worth). A 3.5" 7200 RPM drive will do around 150-170 MB/s in those situations; a 2.5" 7200 RPM drive around 90-110 MB/s.
The flash storage won't hit anywhere close to those numbers when reading small source code files and the other resources needed for compilation. RAM is more than 10 times faster than 450 MB/s, and yet benchmarks clearly show that there is no advantage in build times when building off an SSD or a RAM drive.
Therefore, to suggest that compilation times will be faster on the 2012 MBA than on the 2011 MBA because of the SSD speed difference is a fallacy. The benchmarks also show that even a mechanical HDD does not slow down compile times, so why would there be a difference between the 2011 MBA's SSD and the 2012's? Xcode compile times are bound by the CPU, and by RAM to a smaller extent. Again, though, it does not matter that you're running your browser and other things at the same time. Consider how big your source code actually is. I've worked on quite a lot of large-scale enterprise applications and even then the source code is in megabytes, not gigabytes. It will easily fit into RAM, and your storage speed is not a factor in compile times. Even if you're paging out, the differences between the 2011 and 2012 SSDs will not be noticeable.
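If you want to sanity-check the "megabytes, not gigabytes" claim on your own project, here is a quick sketch; the project path and the extension list are just examples.

```python
# Total up the size of the source files in a project tree.
from pathlib import Path

project = Path("~/dev/MyProject").expanduser()   # hypothetical project root
extensions = {".c", ".h", ".m", ".mm", ".swift", ".cpp", ".hpp"}

total = 0
count = 0
for f in project.rglob("*"):
    if f.is_file() and f.suffix in extensions:
        total += f.stat().st_size
        count += 1

print(f"{count} source files, {total / 1024 / 1024:.1f} MB in total")
# Anything in the low tens of megabytes sits comfortably in the file cache
# after the first build, so storage speed barely shows up in compile times.
```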
This does not apply only to Xcode; you'll find the same thing with Visual Studio. Compile times are bound by the CPU, and you can prove this to yourself by googling. Example:
http://stackoverflow.com/questions/8...tories-no-theo
Will everything feel more responsive with an SSD as opposed to an HDD? Yes, of course it will. But, again, that sort of thing won't be noticeable when going from a 2011 MBA to a 2012 MBA just because the SSD is faster. Your application might open 0.5 seconds faster. That's why SSD benchmarks are massively intensive operations with lots of things happening at the same time, to try and show differences between SSDs. Otherwise you'll find that they all perform nearly the same in things like UI responsiveness (opening and closing apps).
I am going to repeat what I've said previously. Unless you're copying or working with large files (e.g. a batch job to process lots of RAW photo files, or similar), you won't notice a huge difference between the 2011 MBA SSD and the 2012 MBA SSD.
This is a slightly old benchmark, but it illustrates the point:
Anandtech said:
Even going back two generations of SSDs, at the same capacity nearly all of these drives perform within a couple of percent of one another. Note that the Vertex 3 is even a 6Gbps drive and doesn't even outperform its predecessor.
In doing these real world use tests I get a good feel for when a drive is actually faster or slower than another. My experiences typically track with the benchmark results but it's always important to feel it first hand. What I've noticed is that although single tasks perform very similarly on all SSDs, it's during periods of heavy I/O activity that you can feel the difference between drives. Unfortunately these periods of heavy I/O activity aren't easily measured, at least in a repeatable fashion. Getting file copies, compiles, web browsing, application launches, IM log updates and searches to all start at the same time while properly measuring overall performance is near impossible without some sort of automated tool.
But for the life of me I can't figure out what in a photographer's workflow would be limited by hard drive speed, since you can already scroll/browse through 10s of gigabytes of photos as fast as you can move your mouse because of thumbnail databases. (And how often do you really need to do something like that anyway?!)
As I mentioned above, if someone is a real professional photographer, then the ability to batch process thousands of photos as quickly as possible is very important. If you are just mucking about with a camera and browsing the files afterwards, hard drive speed is not so important.