You already have that option: buy a regular MBP and put the price difference toward an SSD. I don't really see what the issue is here.
Done already, check my signature...
A file cache works at the block level; your boot example would only work if you booted very recently, so that all the boot-related files are still in the cache. Assuming the cache is not full, you get SSD write speeds on small chunks of data that fit comfortably in the cache, and you get SSD read speeds only if the data is currently in the cache, which it will be only if you recently read or wrote it.
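Roughly like this, as a toy model (the block-level LRU cache below is a made-up simplification for illustration, not any particular drive's firmware):

```python
from collections import OrderedDict

CACHE_BLOCKS = 4  # pretend the cache holds 4 blocks (made-up number)

cache = OrderedDict()  # block number -> data, kept in least-recently-used order

def read_block(block):
    if block in cache:
        cache.move_to_end(block)       # refresh its LRU position
        return "SSD-speed hit"         # only because it was touched recently
    if len(cache) >= CACHE_BLOCKS:
        cache.popitem(last=False)      # evict the least recently used block
    cache[block] = "data from platters"
    return "HDD-speed miss"

print(read_block(1))  # HDD-speed miss: first access comes off the platters
print(read_block(1))  # SSD-speed hit: still in the cache
for b in (2, 3, 4, 5):
    read_block(b)      # fill the cache and push block 1 out
print(read_block(1))  # HDD-speed miss again: "recently" has expired
```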
Those "old" drives are the latest tech.
"Boot time" is really a nonsense measurement in this day and age.
My Win7 x64 laptop might reboot once or twice a month - it's usually just going between sleep and awake. Do I care if it takes one minute or two minutes to reboot - no!
Don't waste SSD space on files that might be accessed once or twice a month - let the drive decide how to make the best use of the SSD cache.
When I got my latest work laptop, I bought a Momentus XT 750 GB with my own money, and put the company drive in the static bag in a drawer.
I've bought Momentus XT 500 GB drives for two of my other laptops.
Money well spent - you've now heard "good things" about hybrid drives.
Ah, gotcha. I assumed it worked on commonly accessed files, using something like SuperFetch in Win7. That explains why the idea never took off.
Though I'm still wondering how you'd get around the one problem I'm seeing with Fusion drives. I'd want the OS, programs, and games on the SSD, and my movies, music, and project files on the HDD. Is there a way to set what goes where within the OS, or is it determined solely by install order?
First, I meant to say disk cache. I'm not sure what the deciding factor is for Fusion, or whether you can influence it in any way; probably not. I would guess that time of last access, access frequency, and file type would be among the parameters that decide which disk a file lands on. But I'm guessing here.
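If I had to guess at the logic, it'd be something like this (pure speculation; every input and threshold below is invented, not anything Apple has documented):

```python
import time

def pick_tier(last_access, accesses_per_week, file_type):
    """Hypothetical Fusion-style placement; every input and threshold
    here is a guess, not Apple's actual algorithm."""
    days_idle = (time.time() - last_access) / 86400
    if accesses_per_week > 10 or days_idle < 1:
        return "SSD"                      # hot data stays fast
    if file_type in ("movie", "music"):
        return "HDD"                      # big sequential media can live slow
    return "HDD" if days_idle > 30 else "SSD"

# An app launched daily vs. a movie watched once a month:
print(pick_tier(time.time(), 50, "app"))                 # SSD
print(pick_tier(time.time() - 40 * 86400, 1, "movie"))   # HDD
```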
What I've heard is that it's a dumbfire type of setup that uses the SSD as the front of the drive and the HDD as the back. There's no real sorting to it other than "the first X GB go here, the rest goes there".
Gawww, I'm gonna have to look crap up now, I bet. I was hoping one of you here would tell me and save me the effort.
Well, that is how a regular disk cache works, and the only way it can work. Normal HDDs have a small amount of on-board cache RAM between the actual platters and system RAM. The hybrid drives that have been mentioned here add an SSD section on top of that (like an L2 cache on a CPU), still working at the block level, since a disk knows nothing about files.
A tiered storage system like Fusion Drive can pick storage with more information at hand, since it has a higher-level view.
Hybrid drives have been around for a while. How is this new?
On the other hand, enterprise-level tiered storage systems (looking at $500K and up for entry level) work like the hybrid drives: they track sector-level accesses and automatically move "hot" sectors to fast storage, without any knowledge of "files".
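In sketch form (the chunk size and promotion threshold are arbitrary; as noted later in the thread, real systems use anything from 512 KB to 1 GB chunks):

```python
from collections import Counter

CHUNK = 512 * 1024     # arbitrary chunk size; real systems vary widely
HOT_THRESHOLD = 100    # made-up promotion threshold

access_counts = Counter()   # chunk index -> access count
fast_tier = set()           # chunks currently living on fast storage

def access(byte_offset):
    chunk = byte_offset // CHUNK
    access_counts[chunk] += 1
    # Promote hot chunks -- note there is no notion of a "file" anywhere here
    if access_counts[chunk] >= HOT_THRESHOLD and chunk not in fast_tier:
        fast_tier.add(chunk)   # a real array would migrate the data too

for _ in range(150):
    access(4096)               # hammer one offset in chunk 0
print(0 in fast_tier)          # True: chunk 0 got promoted, file-agnostic
```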
It's really unfortunate if the OS-level filesystem code is talking to the volume manager. That's a gross violation of the normal storage abstractions.
But, Apple has many times ignored best practices - only to have it bite their customers in the butts.
So are you suggesting the 27" iMac isn't targeted at designers and photographers?
Edit: seems like a lot of enterprise tiering uses chunks larger than a sector, so it would be something in between.
Actually, although the implementation is quite different, the end result is much the same.
It should be blatantly obvious that "sector" and "clusters of sectors" are one and the same at the level I was describing.
And it should also be obvious that if Apple's filesystem is telling the volume manager what to do, that the normal hierarchy of storage abstraction is inverted.
It makes me feel dirty just to think about it.
Why? Dell's tiering solution uses a 512 KB chunk size, or 2-4 MB; IBM Easy Tier uses 1 GB chunks. It still provides something in between.
You already said this, and as I already said, you don't actually know whether that's the case. But as an aside, breaking abstractions isn't necessarily bad if a sensible new alternative is used.
So you agree - "sectors" and "clusters of sectors" are the same thing.
I'll post a link in the morning, but either Tom or Anand had a description of Fusion™ which said that the OS level software could pin a file to the SSD or the HDD. That's an inversion of abstraction.
Of course it isn't. If you say "sectors" why would it be blatantly obvious that you really mean "clusters of sectors"?
How big is a "sector"?
(Trick question, of course - which helps to prove my point.)
512kb typically, but most importantly it's the smallest unit of data written or read from a disk.
What disks have half mebibyte sectors?
It used to be 512 B sectors, but 4096 B sectors are the norm for current AF disks.
Just give up!
512 bytes, yes.
Do your own research and stop being lazy.
In the way that has been explained over and over in the two hundred posts preceding yours in this thread.