H2Hummer said:
This will end up being about 125 to 145 watts/sq. ft. with redundancy. I know.
They must be confident in Intel's power roadmap then, or the infrastructure will be set up to add power/cooling easily.

That's not a lot of spare capacity.

A rack of dual-core Xeons or Opterons is about 3000 watts/sq.ft. - so they'll need very wide aisles between the racks to get by with 140 w/ft^2.
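The gap between rack-level and room-level density is just arithmetic; a minimal sketch (the rack footprint and power figures below are illustrative assumptions, not Apple's specs):

```python
# Rough sanity check: how much floor area a dense rack needs so the
# room-average density stays within the facility's design limit.
# All numbers here are illustrative assumptions from the thread.

RACK_FOOTPRINT_SQFT = 6.0      # typical 19" rack, ~2 ft x 3 ft
RACK_DENSITY_W_SQFT = 3000.0   # dual-core Xeon/Opteron rack, per the post
FACILITY_LIMIT_W_SQFT = 140.0  # midpoint of the 125-145 W/sq.ft. figure

rack_watts = RACK_DENSITY_W_SQFT * RACK_FOOTPRINT_SQFT   # ~18 kW per rack
# Floor area (rack plus aisle/support space) needed to dilute that load:
area_per_rack = rack_watts / FACILITY_LIMIT_W_SQFT

print(f"an {rack_watts/1000:.0f} kW rack needs ~{area_per_rack:.0f} sq.ft. of floor")
# i.e. roughly 21x the rack's own footprint - hence the "very wide aisles"
```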
 
AidenShaw said:
They must be confident in Intel's power roadmap then, or the infrastructure will be set up to add power/cooling easily.

That's not a lot of spare capacity.

A rack of dual-core Xeons or Opterons is about 3000 watts/sq.ft. - so they'll need very wide aisles between the racks to get by with 140 w/ft^2.

120-150 Watts/sq.ft. is actually quite good in the industry today. I'd bet most datacenters operating today are below 100 W/sq.ft. We have a DC that can support 170 W/sq.ft. and it took a fair bit of engineering. When datacenter facilities guys quote Watts/sq. ft., they're talking about the overall density of the raised floor, not just the space a given rack occupies. You hit 200 Watts/sq. ft. in a Tier IV facility, you're practically a magician.
 
beaster said:
120-150 Watts/sq.ft. is actually quite good in the industry today. I'd bet most datacenters operating today are below 100 W/sq.ft. We have a DC that can support 170 W/sq.ft. and it took a fair bit of engineering. When datacenter facilities guys quote Watts/sq. ft., they're talking about the overall density of the raised floor, not just the space a given rack occupies. You hit 200 Watts/sq. ft. in a Tier IV facility, you're practically a magician.
i.i.com.com/cnwk.1d/html/itp/34146A_PC_WP.pdf

"However, in today’s data center environments, the highestdensity racks can exceed 200 watts per square foot, so designers are specifying new data centers to handle heat loads of 350 and even 500 watts per square foot."
 
AidenShaw said:
i.i.com.com/cnwk.1d/html/itp/34146A_PC_WP.pdf

"However, in today’s data center environments, the highestdensity racks can exceed 200 watts per square foot, so designers are specifying new data centers to handle heat loads of 350 and even 500 watts per square foot."

I'm not sure if you're trying to rebut my statement with that article, or support me, but in the very next paragraph, it says:

While typical racks installed in data centers just two years ago might have consumed two kilowatts and emitted 40 watts of heat per square foot, new, high-density racks will consume 10, 15, or even 25 kW per rack and may dissipate as much as 500 watts per square foot by the end of the decade.

I have no doubt that datacenter engineers are attempting to design for 350+ w/sq.ft. for the future. But I stand by my statement that 200 w/sq.ft. is basically the state of the art today in a Tier IV facility. Maybe in very small facilities - say, 2,000 sq. ft. - you'll find higher densities today.

Now some engineers may cheat and not count wall-to-wall square footage to make their densities look better. For example, these guys claim 900 watts/sq.ft., but I guarantee you they're not going wall-to-wall to get that number (just look at their cooling and generator numbers - obviously they're not filling 82,000 sq.ft. at 900 watts/sq.ft.). Here's a more honest assessment of where most datacenters are today and where they're going in the future. Anyway, with an honest calculation - 200+ w/sq.ft. in a Tier IV of any reasonable size today - that's fantastic.
 
build out as needed

beaster said:
I'm not sure if you're trying to rebut my statement with that article, or support me...
Both ;)

Note that my original comment said that designing for expansion is also a reasonable alternative.

My building was built with roof supports (and mounting pads), electrical and chilling water pipes pre-installed for additional AC units to be added if needed. (At this point, one additional 850 ton unit has been added when the capacity was needed - a quick and simple job that needed a helicopter to drop the unit, but no other work inside the building.)
 
AidenShaw said:
Both ;)

Note that my original comment said that designing for expansion is also a reasonable alternative.
OK, I may have misunderstood your point then - it looked to me like you were saying that Apple was under-engineering by a factor of 20. If you were saying that 120-150 watts/sq. ft. might not be enough density for a datacenter full of 15-20 kW racks a few years down the road, OK, I'd agree. But realistically, you don't fill 100,000 sq. ft. with purely blade racks. In a facility this size, there's always a hodgepodge of big iron, networking gear, storage racks, tape silos, and random legacy crap that is much lower density and drags down the overall number for the datacenter.
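That "hodgepodge drags down the average" effect is easy to see with a weighted average; a quick sketch with made-up zone sizes and densities (none of these figures come from the thread):

```python
# Sketch: how a mix of equipment dilutes the room-average power density.
# Zone areas and per-zone densities are made-up illustrative numbers.

zones = [
    # (floor sq.ft., watts per sq.ft.)
    (10_000, 400.0),   # blade/high-density rack rows
    (30_000, 150.0),   # general servers and big iron
    (20_000,  60.0),   # storage, tape silos, networking gear
    (40_000,  20.0),   # aisles, staging, low-density legacy equipment
]

total_area = sum(area for area, _ in zones)
total_watts = sum(area * density for area, density in zones)
print(f"blended density: {total_watts / total_area:.0f} W/sq.ft.")
# Even with a 400 W/sq.ft. blade zone, the 100,000 sq.ft. average
# lands near the 120-150 W/sq.ft. range discussed above.
```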
My building was built with roof supports (and mounting pads), electrical and chilling water pipes pre-installed for additional AC units to be added if needed. (At this point, one additional 850 ton unit has been added when the capacity was needed - a quick and simple job that needed a helicopter to drop the unit, but no other work inside the building.)

Sure, having the ability to add on chillers, generators, etc. without major retrofitting is ideal. (Our design went from three 2 megawatt generators to seven over a period of 3 years as we grew to our full capacity.) It's one thing to design a theoretical capacity for greater than 200 watts/sq. ft., but it's an entirely different matter to actually achieve it with real systems on the floor in a big Tier IV facility. I hear datacenter guys at AFCOM meetings talking about how they could push 300 watts/sq. ft., but I've never seen any of them do it (who knows, maybe someone is - I'd put my money on Google). Inevitably bottlenecks appear - if not power, then cooling; if not cooling, then network; if not network, then tape backup or storage capacity; if not any of that, then an executive management team that won't write the check.

Anyway, my point is that Apple designing for 120-150 watts/sq.ft. may not be cutting edge, but it's reasonable for a facility like that.
 
aswitcher said:
So, no leaks then? Mmm. I guess so, but I still wonder why the changes are so large at the last minute.
It's pretty well documented that Jobs can be somewhat capricious about things like this. That's where his reputation as a micromanager comes in. I wouldn't be too surprised if the change only means that he saw the older version and wasn't happy about the missing clarity.
 
I'm very pleased to hear Apple has acquired more capacity. I really hope they will upgrade .Mac with more online space, and given what happened to Google Page Creator, Apple should definitely make sure they can still support everything.
I hope that movie downloads will be released on April 1, and that will definitely require some extra space, too.
 
iMeowbot said:
It's pretty well documented that Jobs can be somewhat capricious about things like this. That's where his reputation as a micromanager comes in. I wouldn't be too surprised if the change only means that he saw the older version and wasn't happy about the missing clarity.

Okay, this is weird. I think Apple is just messing with us. Now the original Intel Mac Mini banner is back... :confused:
 
beaster said:
OK, I may have misunderstood your point then - it looked to me like you were saying that Apple was under-engineering by a factor of 20.
I mentioned the need for "wide aisles" - so I was aware that 3000 watts/ft^2 for the six square feet under a rack doesn't mean 3KW/ft^2 for a room that's nearly a hectare.

beaster said:
Anyway, my point is that Apple designing for 120-150 watts/sq.ft. may not be cutting edge, but it's reasonable for a facility like that.
It's reasonable to start with that capacity, but it would be foolish not to have a plan to get to 200 to 300 watts/ft^2 without major disruption and expense. Spend a little extra now for big savings later.

That's my point, not that starting with 125 is short-sighted.
 
"The more “mission critical” the application is, the more redundancy, robustness, and security required. Data centers can be classified by Tiers, with Tier 1 being the most basic and inexpensive, and Tier 4 being the most robust and costly. According to definitions from the Uptime Institute and the latest draft of TIA/EIA-942 (Telecommunications Infrastructure Standard for Data Centers), a Tier 1 data center is not required to have redundant power and cooling infrastructures. It needs only a lock for security and can tolerate up to 28.8 hours of downtime per year. In contrast, a Tier 4 data center must have redundant systems for power and cooling, with multiple distribution paths that are active and fault tolerant. Furthermore, access should be controlled with biometric readers and single-person entryways, gaseous fire suppression is required, the cabling infrastructure should have a redundant backbone, and the facility can permit no more than 0.4 hours of downtime per year.

Tier 1 or 2 is usually sufficient for enterprise data centers that primarily serve users within a corporation. Financial data centers are typically Tier 3 or 4 because they are critical to our economic stability and, therefore, must meet higher standards set by our government. Public data centers that provide disaster recovery / backup services are also built to higher standards."

Link to Article
WTF does that mean?
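For what it's worth, the downtime allowances in that quote translate directly into availability percentages (the usual "number of nines"); a minimal sketch:

```python
# Convert the quoted per-tier downtime allowances into availability
# percentages. 8760 hours in a non-leap year.

HOURS_PER_YEAR = 8760.0

for tier, downtime_hours in [("Tier 1", 28.8), ("Tier 4", 0.4)]:
    availability = 100.0 * (1 - downtime_hours / HOURS_PER_YEAR)
    print(f"{tier}: {downtime_hours} h/yr down -> {availability:.3f}% available")
# Tier 1 works out to roughly 99.671%, Tier 4 to roughly 99.995% -
# i.e. the jump from Tier 1 to Tier 4 buys about two extra "nines".
```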
 