ksz said:
Huh? A new process? Where are you getting this?

I'm getting this from here...

-from the original post
IBM and Toppan Printing Co Ltd announced a $200 million deal to jointly develop a 45-nanometer chip making process aimed at production by mid-2007.

Specifically, the companies hope to develop a photomask process, which would be used to etch patterns of integrated circuits onto silicon wafers, to enable early production of advanced 45nm semiconductors.


ksz said:
Are you suggesting that IBM developed a process for 65nm, tried to make it work, decided to give up, and are now focusing that process on 45nm feature sizes? If something could not work on 65nm, why would it work on 45nm?

I should have been more clear.

I'm seeing a lot of posts along the lines of 'If IBM can't get 90 nm to work then why are they going to a 45nm process.'

This is what I was addressing when I said that the jump from 90nm to 45nm is irrelevant.

I don't think that IBM would skip 65nm development, but hey, if they've got a better process, I'm not going to complain if they go directly to 45nm.


ksz said:
Well of course, so would everyone else. That's my point. IBM could just as easily announce that they are working on the 30nm process, which actually they probably are, but so what? Once the new process is stable and ready for daylight, they will move whatever designs onto that process that are justified by the economics of the process and the design. This is simply common sense.

If 45nm requires entirely new (or substantially new and different) design rules, it will require an expensive re-synthesis of the chip design. The general trend is towards fewer new design starts at smaller technology nodes because reusable cell libraries are liberally employed in today's designs. Designers do not create new devices entirely from scratch. Instead, like reusable software components, they assemble new designs by reusing preexisting cell libraries and developing any necessary new logic. If the reused cell libraries are too far removed from the design-rule requirements of the current process technology, the old design may need to be discarded in favor of a ground-up redesign or re-synthesis. This becomes very expensive.

Hence, the economics of the device (its lifetime, expected production volume, and selling price) may justify a redesign, or it may not. If the expense of a redesign cannot be justified, the part will continue to be produced on N or N-1 technology nodes. Eventually and gradually, cell libraries will migrate to new design rules, and the vendors who make and sell these libraries (and the internal design departments that do the same) will update their libraries to fit the DFM (design for manufacturing) requirements of the latest process.

Your original post stated that "It (the article) does not say anything about a new PowerPC chip on 45nm, only that IBM -- like everyone else -- is working actively on 45nm process development." I figured that you didn't think that IBM was developing this process for PPC (I shouldn't have assumed). Of course it seems logical that they would make a PPC on 45nm.

Not everyone in here is as well versed in chip manufacturing processes as you are. I was just trying to make it clear to those who are still complaining that IBM can't get 90nm straight that this is not the same as making a 90nm chip. IBM, along with everyone else, is trying to find new ways to make smaller and faster processors and they've found a new way. This is a good thing for everyone, hopefully even Mac users.
 
840quadra said:
Yes, there are many ways to cool electronic items, including R-134a compressed refrigerant, Peltier devices, and other ways to bring temps below room temperature. But how cost effective are these, and how much impact will they have on keeping computers small, light and quiet?
Do you have any idea why the industry has not embraced Peltier coolers? I see them sold all the time as parts, and they're commonly used by overclockers. But I've never seen a brand-name system use them.

My gut feeling tells me that these would work better than a water-cooled system. They would certainly be a lot smaller.
 
ksz said:
Gate leakage current is another significant factor (probably the most significant) in the roughly 4x increase in dissipation power per generation. As transistors scale (i.e. de-magnify) from 180 to 90 to 65 to 45 to 30, both the gate length (Lg) and the oxide thickness (Tox) drop. A shorter gate length allows a transistor to switch more quickly, but at 45nm, gate oxides are only about five atomic widths thick. This oxide is designed to insulate the poly gate from the active region, but electrons tunnel through it and induce a leakage current due to both (1) the relatively poor dielectric properties of the gate oxide and (2) its sheer thinness. If the oxide thickness is increased, more drive current is needed to switch on the gate, but the oxide delivers better insulative properties. The ideal solution is to keep the gate oxide as thin as possible, but use or develop an oxide with a higher dielectric constant (high-k).
Are we also having to fight quantum tunneling effects yet? Or can we scale down even smaller before they become significant?
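To give a feel for why "five atomic widths" matters so much, here is a toy quantum-tunneling sketch. It is not a calibrated device model; the barrier height and effective-mass fraction are assumed, illustrative values, and the point is only the exponential dependence on oxide thickness:

```python
import math

# Toy model (NOT calibrated): direct-tunneling leakage through the gate
# oxide falls off roughly as exp(-2*k*t_ox), where
# k = sqrt(2 * m_eff * q * phi_b) / hbar for a barrier of height phi_b.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
Q = 1.602176634e-19     # elementary charge, C

def leakage_factor(t_ox_nm, barrier_eV=3.1, m_frac=0.5):
    """Relative tunneling probability through an oxide t_ox_nm thick.

    barrier_eV (~the Si/SiO2 band offset) and m_frac (an effective-mass
    fraction) are assumed, illustrative values.
    """
    k = math.sqrt(2 * m_frac * M_E * barrier_eV * Q) / HBAR  # 1/m
    return math.exp(-2 * k * t_ox_nm * 1e-9)

# Thinning the oxide from 2 nm to 1 nm multiplies leakage enormously:
ratio = leakage_factor(1.0) / leakage_factor(2.0)
print(f"1 nm vs 2 nm oxide: ~{ratio:.0e}x more tunneling leakage")
```

The exponential is the whole story: shaving even a few angstroms off the oxide multiplies gate leakage by orders of magnitude, which is exactly why the high-k materials mentioned above are attractive.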
 
GFLPraxis said:
Hopefully Apple never even began thinking about a G5 PB. Forget it. Everybody.
Apple has done quite a bit more than just think about them.

They've even built prototypes. Unfortunately, they require very large external cooling systems, which is why you're not likely to see one of these as a product in the near future.

Your use of "hopefully" is a bit strange, however. Do you think a G5 PowerBook (when they're finally able to make one) would be a bad thing?
GFLPraxis said:
I'd much rather see e600 based PowerBooks. Dual core 1.5 GHz G4 at 25w or single core at 10w and 1.5 GHz, scales to 2 GHz. Incredible battery life and great speed.
Making assumptions about how an entire system will perform based on a chip-maker's press release is not exactly what I'd consider reliable.
 
shamino said:
Apple has done quite a bit more than just think about them.

They've even built prototypes. Unfortunately, they require very large external cooling systems, which is why you're not likely to see one of these as a product in the near future.

Your use of "hopefully" is a bit strange, however. Do you think a G5 PowerBook (when they're finally able to make one) would be a bad thing?
Making assumptions about how an entire system will perform based on a chip-maker's press release is not exactly what I'd consider reliable.

How do you know they've made prototypes? Got insider information or something?
 
I wonder what happened to the next generation chip that the VP at IBM mentioned at the time of the release of the Power Mac G5. He said that they were already working on said chip. Or is this the chip that he was talking about?

Two years is certainly a long time, at least it seems that way to me. I don't know if I would have that kind of patience.
 
AidenShaw said:
Hardly - IBM uses POWER5, Xeon and Opteron chips in their fastest servers.

I should rephrase: I was referring to the Blue Gene/L supercomputer, which uses the PowerPC 440GX :eek:

You're right that the processor benchmarks slower, but their fastest supercomputer is definitely the Blue Gene/L, which just recently surpassed 135 teraflops -- up from the 70TF reported 6 months ago.
 
shamino said:
Are we also having to fight quantum tunneling effects yet? Or can we scale down even smaller before they become significant?
Good question. Tunneling effects are an issue at 45 and 30, but I do not (yet) have quantitative information about the severity of these effects. There is quite a bit of research into such exotics as carbon nanotubes, quantum dots, and qubits, all heavily based on quantum mechanics. We are rapidly entering the quantum era.
 
shooterlv said:
I should rephrase: I was referring to the Blue Gene/L supercomputer, which uses the PowerPC 440GX :eek:

You're right that the processor benchmarks slower, but their fastest supercomputer is definitely the Blue Gene/L, which just recently surpassed 135 teraflops -- up from the 70TF reported 6 months ago.

That's hardly relevant. Blue Gene/L is NOT a server, nor is it available in any serious plural forms (in other words, it's not a production unit). The fastest servers available from IBM are indeed as AidenShaw noted. Super computers are an entirely different class of machines that have little bearing on end-user personal computers because of massively-parallel interconnects, extremely high bandwidth communication, sheer numbers, software optimization for specific tasks, and the like.

I could build a specialized version of the Intel 80486 and strap 1000 together and outperform your computer, but it wouldn't make it a superior architecture or an inherently faster processor family.
 
Lacero said:
I hardly think 45nm processors will be used in computers.
Are you telling me you quoted that entire page-long post just to say one sentence that seems to be pulled from a random oblivion far away from the basis of any comments so far?
 
shooterlv said:
IBM, along with everyone else, is trying to find new ways to make smaller and faster processors and they've found a new way. This is a good thing for everyone, hopefully even Mac users.
Sorry if I was a bit rough.

Anyway, I wouldn't quite say that IBM has found a new way. Instead, they are investing $200M to develop a new way and make it commercially viable.
 
ksz said:
Sorry if I was a bit rough.

Anyway, I wouldn't quite say that IBM has found a new way. Instead, they are investing $200M to develop a new way and make it commercially viable.
That's OK... my original post wasn't as clear and simple as I wanted it to be. I seem to have confused the ones who actually know what they're talking about instead of making it simple for those who don't. :)

Speaking of...

matticus008 said:
That's hardly relevant. Blue Gene/L is NOT a server, nor is it available in any serious plural forms (in other words, it's not a production unit). The fastest servers available from IBM are indeed as AidenShaw noted. Super computers are an entirely different class of machines that have little bearing on end-user personal computers because of massively-parallel interconnects, extremely high bandwidth communication, sheer numbers, software optimization for specific tasks, and the like.

I could build a specialized version of the Intel 80486 and strap 1000 together and outperform your computer, but it wouldn't make it a superior architecture or an inherently faster processor family.

You're right for the most part; I don't want to get into a flame war over semantics, though. My point was that obviously the PPC is a more than viable architecture for any process that IBM develops, and I'm sure they would not discount producing PPC at 45nm. Nor would they dismiss any of their other high-end processors as candidates.

BTW, if you ever get the 486 thing working, let me know. That'd be a sight to see. :D
 
matticus008 said:
Okay :). Now all I need are some massive 486 donations and lots and lots of old motherboards to bastardize.

I'll help - but if we're going to take on Blue Gene with '486 processors we'll probably need to rent most of Hangar One to house them!
 
Careless mistakes on my part:

1. The industry considered a 157nm stepper wavelength, not 153nm. 157nm may be used beyond 45nm at the 32/30nm node. Also in contention is EUVL (extreme ultraviolet lithography). EUV light is absorbed almost entirely by conventional lenses -- i.e., none of the light passes through the lens to expose the wafer -- so new optical materials or technologies (such as all-reflective optics) have to be developed.

2. Numerical aperture is increasing, not decreasing, with immersion lithography. NA is defined as [n sin(theta)], where n is the refractive index of the medium between the lens and the wafer and theta is half the angular aperture of the lens; by Snell's Law, replacing air (n = 1.0) with water (n ~ 1.44) raises the achievable NA. The higher the NA, the more sharply the emerging light converges, and the smaller the gaps it can resolve.
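To put rough numbers on what a higher NA buys, the standard Rayleigh criterion says the minimum printable feature is R = k1 * lambda / NA. The k1 factor and the NA values below are illustrative assumptions, not specs for any particular scanner:

```python
# Rayleigh criterion: minimum printable feature R = k1 * wavelength / NA.
# k1 and the NA values below are illustrative assumptions, not vendor specs.
def min_feature(wavelength_nm, na, k1=0.3):
    """Smallest feature (nm) a scanner can resolve, per the Rayleigh criterion."""
    return k1 * wavelength_nm / na

dry = min_feature(193, 0.93)   # dry ArF tool: NA must stay below 1.0
wet = min_feature(193, 1.35)   # water immersion (n ~ 1.44) pushes NA past 1.0
print(f"dry: {dry:.0f} nm, immersion: {wet:.0f} nm")
```

With these assumed numbers, the same 193nm light that resolves roughly 65nm-class features dry gets into 45nm-class territory under water, which is the whole appeal of immersion.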
 
ksz said:
This thread is misleading people into believing in a chain of events that is not necessarily newsworthy for impatient Macintosh fans (myself included)... <snip>

Someone who actually knows what they're talking about. Wow.

Thanks for the informative post. You said a number of things that I wanted to, but didn't have all the technical details at hand to put down myself. It's really remarkable how excitable people around here can get, no?
 
psycho bob said:
Sooner or later the 45nm-and-below barrier will be broken. If it isn't, all future technologies will come to a standstill. Computers have been a very accurate marker of technological and human advancement in the 20th/21st centuries. It is fair to say that in one form or another the world would be entirely different without the technologies that computers provide or the ones used to make them.

If you had said 10 years ago that today we would be making chips using connections many many times smaller than a human hair and reaching the speeds we have you'd have been laughed at. Talking about 90nm technology 3 years ago would have thrown up many of the issues being discussed here with people saying it isn't going to happen.

The fact is, technology like this goes far beyond simply benefiting the computer industry; the barriers will be broken. The thought of us as a race encountering a problem that couldn't practically be overcome would be a significant and devastating first for mankind. While there are a number of things we can't do, it is very rare that we come across an evolutionary problem in the technology sector that, given time and money, we cannot work around.

No chip maker is really leaping ahead anymore. Sure one might announce and ship something first but all are in the same ball park. Real advancement will only come if one maker can take a significant stride ahead of the rest for example a successful jump to 30nm by the end of the year. These things rarely happen if ever.

I'm afraid that your pronouncement is a little naive. As feature sizes get smaller and smaller, we're getting into territory where a whole new set of physical laws applies. There is no guarantee that those physical laws will allow us to continue to make processors that function the way we want them to. I don't think this is, by any means, a significant or devastating first for the human race. Indeed, it's fully anticipated. The evolutionary changes (i.e. shrinking die processes) for computer chips cannot continue indefinitely. At some point it will end.

However, this won't mean that "all future technologies will come to a standstill," as you so dramatically put it. It will simply mean that in order to obtain further advancement, we'll have to come up with a revolutionary change. And such changes are already being researched. At some point in the not-too-distant future, we'll be working on quantum computers rather than electronic computers. And these will likely scale up astoundingly faster than our machines today, though they may well start out slower than what we already have.

So, simply put, we may get 45nm to work, and then 30nm, but at some point we won't be able to go any smaller. But it won't be the end of the world, either.
 
psycho bob said:
If you had said 10 years ago that today we would be making chips using connections many many times smaller than a human hair and reaching the speeds we have you'd have been laughed at.

Also, you might want to check your facts a little. A human hair is somewhere around 0.1 millimeters thick. Ten years ago we were already making chips on processes many times smaller than that (in the range of 0.35-0.5 micrometers). And as for the speeds? Anyone looking at the history of speed increases might have laughed at you, but not because they thought you were quoting a ridiculously high number; rather, one that was too low!
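The scale comparison is easy to check; the figures below are round, illustrative numbers (a ~0.1 mm hair against the mid-90s 0.35 micron node and the then-current 90nm node):

```python
# Rough scale comparison; all numbers are round, illustrative figures.
hair_m = 100e-6           # a human hair is on the order of 0.1 mm thick
feature_1995_m = 0.35e-6  # mid-1990s process node (0.35 micron)
feature_2005_m = 90e-9    # 2005-era process node (90 nm)

print(f"1995 features: ~{hair_m / feature_1995_m:.0f}x thinner than a hair")
print(f"2005 features: ~{hair_m / feature_2005_m:.0f}x thinner than a hair")
```

So "many many times smaller than a human hair" was already true by hundreds of times a decade ago, which is the point being made above.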
 
in the amount of time we have had to get top speeds out of processors it is rather sad that we are only at the stage we are in
 
poundsmack said:
in the amount of time we have had to get top speeds out of processors it is rather sad that we are only at the stage we are in

Huh? :confused: I think I might know what you mean, but could you please state your point a bit more clearly?
 
VanMac said:
So, taking the error margin to the high end, we are still only at 2.97GHz
No, we're at 3.0 GHz; you did the math in the wrong direction.

3.0 - 10% = 2.7. The margin of error is taken around the nominal value, not around the deviation.
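Spelling the arithmetic out with the figures from these posts (percentages are not symmetric, which is the whole disagreement):

```python
# Two ways to read "a 10% margin", using the figures from the posts above:
nominal = 3.0                       # claimed clock, GHz
low_from_nominal = nominal * 0.90   # 10% below nominal -> 2.7 GHz
back_from_low = 2.7 * 1.10          # 10% above the low end -> only ~2.97 GHz
print(f"{low_from_nominal:.2f} GHz vs {back_from_low:.2f} GHz")
```

Adding 10% back onto the reduced number recovers only 2.97, not 3.0, because the 10% is meant as a fraction of the nominal value.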
 
Abstract said:
I always thought that while less heat is produced when you scale down (ie: 13nm to 9 nm to 6.5nm), everything else is smaller as well, so the density of the heat on the chip actually becomes greater and the chip actually runs hotter than before, which is why the new G5s require liquid cooling. So yes, less heat is being produced on today's chips, but the heat density (I guess it would be Joules/cm^3 or something) is higher, so a better cooling method is needed.
There are several interacting variables that determine power consumption and dissipation. When we scale transistor and interconnect geometries, we do it for several reasons:

1. Reduce die size, produce more chips per wafer, and reduce the number of defects per die (which is proportional to die area) to improve yield.

2. Use less power by lowering operating voltages. The historical curve for operating voltage (along with Vt, the transistor's threshold voltage -- the voltage needed to bias the transistor into the on state) has been steadily declining. From 5.0-volt parts we are now down to 1.8V, and chips like Transmeta's Crusoe/Efficeon or Intel's ULV (ultra low voltage) Pentium operate at about 1.0V or less.

3. Produce faster transistors. This is achieved by a combination of (1) shortened signal paths, (2) smaller gate lengths, (3) interconnect materials with better electrical conductivity (copper is a better conductor than aluminum, hence the switch to copper interconnects several years ago), (4) thinner gate oxides.

Wouldn't it be disappointing if the industry spent exorbitant sums of money to produce smaller die sizes and less power consumption, but the resultant processor was not even a blink faster than before? The processor would certainly be cheaper to make and more energy efficient, but your good old 500MHz PC would still be running at 500MHz. You wouldn't be getting any additional work done in the same amount of time, and ultimately, time is money...possibly MORE money than the savings in purchase price and electric bill. If processors could not get faster or more capable, well, you get the idea...there would really be no worthwhile progress.

So yes, the industry can scale processors simply for the sake of lowering power consumption and building them more cheaply, but I for one am thankful that the industry is also passionate about performance and features.

These three goals, however, usually conspire to increase power consumption and dissipation as clock speeds rise and more features are added. Larger on-chip caches, AltiVec, MMX, SSE, HyperThreading, Speculative Execution and Branch Prediction, dual core, multi-core, multiple floating point units, multiple ALUs, virtual partitioning, etc. all add up to millions and millions of new transistors.

(btw, it's 130nm, 90nm, and 65nm, not 13, 9, and 6.5...but you probably guessed that already.)
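The voltage/frequency tradeoff ksz describes can be sketched with the classic dynamic-power relation P = a * C * V^2 * f (switching power only; leakage, discussed earlier in the thread, comes on top). Every number here is made up purely for illustration:

```python
# Back-of-envelope dynamic (switching) power: P = a * C * V^2 * f.
# All parameter values are hypothetical, chosen only to show the scaling.
def dynamic_power(activity, cap_f, volts, hertz):
    """Switching power (W) from activity factor, switched capacitance, Vdd, clock."""
    return activity * cap_f * volts**2 * hertz

base = dynamic_power(0.2, 1e-9, 1.8, 1.0e9)    # hypothetical 1.0 GHz part
fast = dynamic_power(0.2, 1e-9, 1.8, 2.5e9)    # same design pushed to 2.5 GHz
scaled = dynamic_power(0.2, 1e-9, 1.3, 2.5e9)  # 2.5 GHz after dropping Vdd to 1.3 V

print(f"{base:.2f} W -> {fast:.2f} W -> {scaled:.2f} W")
```

Power grows linearly with clock but quadratically with voltage, which is why a process shrink that permits a lower Vdd can claw back much of the power that a higher clock (and millions of extra transistors) spends.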
 
~Shard~ said:
Huh? :confused: I think I might know what you mean, but could you please state your point a bit more clearly?

Simply put, it has taken too long to get to where we are now. Future roadmaps show chips at 10GHz in only a few years' time (Sun Microsystems); that's a 300%+ increase. It took us much longer to get to a simple 3 or 4GHz, in retrospect.
 
psycho bob said:
Sooner or later the 45nm-and-below barrier will be broken. If it isn't, all future technologies will come to a standstill. Computers have been a very accurate marker of technological and human advancement in the 20th/21st centuries. It is fair to say that in one form or another the world would be entirely different without the technologies that computers provide or the ones used to make them.
I would agree with this, however I would not say that 45nm is a "barrier" because the notion of a technological barrier usually implies that current evolutionary technology is hitting a brick wall and a bona fide paradigm shift is necessary. With 45nm and even with 32nm and possibly even 22nm, that is not *expected* to be the case. The 45nm node does not present a barrier to the continued evolution of standard semiconductor process development. At 22nm and below, quantum effects may well predominate, requiring entirely new quantum-ready designs. However, the industry is well aware of *this* barrier and has R&D programs already in place for the N+4 or N+5 technology node. Intel, for example, is doing a lot of research in nanotechnology.

If you had said 10 years ago that today we would be making chips using connections many many times smaller than a human hair and reaching the speeds we have you'd have been laughed at. Talking about 90nm technology 3 years ago would have thrown up many of the issues being discussed here with people saying it isn't going to happen.

The fact is, technology like this goes far beyond simply benefiting the computer industry; the barriers will be broken. The thought of us as a race encountering a problem that couldn't practically be overcome would be a significant and devastating first for mankind. While there are a number of things we can't do, it is very rare that we come across an evolutionary problem in the technology sector that, given time and money, we cannot work around.
Qualitatively speaking, I would again agree. We have always found a way. The mean-time-between-technological-revolutions (MTBTR :confused: ) may increase because the problems of the future may get harder and harder, but I suspect humanity will continue to rise to the challenge. For as long as there are problems to be solved, there will be people searching for solutions.

No chip maker is really leaping ahead anymore. Sure one might announce and ship something first but all are in the same ball park. Real advancement will only come if one maker can take a significant stride ahead of the rest for example a successful jump to 30nm by the end of the year. These things rarely happen if ever.
These things rarely happen because the problems to be solved are getting harder and harder, and increasingly expensive. Additionally, the corporate world has learned that revolutionary products, those far ahead of their time, do not necessarily do well commercially unless they benefit from economies of scale (which haven't been established). Do you need five 25GHz processors with 1 terabyte of physical memory right now? (OK, loaded question, but other than Doom 3, what software that you use lists that as either a minimum or a recommended configuration?) People who need this level of computational power ALREADY have it in the form of massively parallel supercomputers.

As existing products begin to feel outdated and are unable to keep up with emerging computational demands, the market will *demand* faster and bigger. And the market will buy. And the companies that make them will make lots of money. The tech industry tends to move forward collectively, and so do we, the consumers.
 