New? Where have you been the last 25 years?

Using Macs, and you? In my opinion, they lost focus around 2012. And I do know that they replace widely used tech in favor of forward-looking solutions, so please keep your sarcasm to yourself. What I meant, in case you didn't get it from reading the whole sentence in my post, is that something they have been doing as of late is backtracking from their mistakes, something they were famously known for NOT doing. Specific cases: the "modern" butterfly keyboard that was touted as the last coke in the desert, returning to the iPhone 4/5 design and slowly ditching that dull Chinese template they've been using, getting rid of Saint Ives (more harm than wellness to the brand, in my opinion)... heck, even the startup chime is returning.
 
Not in the least. If I post a link to a high court judge with the same name as me, it does not mean I am a high court judge.
I guess your theory is that in 2007, at the same time that the engineer who authored all those papers retired from CPU design, I decided to join this forum and use that same name, so that 13 years later I could scam you?

Here are photos of my actual bound Ph.D. dissertation and the actual physical JSSC journal in which my most famous paper, about the Exponential x704 PowerPC, was published.

Or am I just *really* good at this scam, and I happened to obtain a copy of the PhD dissertation from the one library in the world that has a copy, stole it in 1996, and held on to it for 25 years just to fool you?
 

Attachments

  • 5B84098C-A243-4186-96FE-171749BA725E.jpeg
  • 070CA828-3389-446B-914C-2C84809AD72F.jpeg
I'm also very interested to see what they will do. It seems the core designs are the same for the A and AX variants of a chip (e.g. Lightning (high performance) and Thunder (high efficiency) cores in the A13), just used in differing amounts - always two high-performance cores on the iPhone chip with varying numbers of high-efficiency cores, and four of each on the A12X/Z - and clocked at different speeds.

Assuming the A14 is as big an improvement as has been rumoured, these are already going to be very powerful chips, and considering the leak was for a 12-core (8 high-performance, 4 high-efficiency) Mac processor, that potentially sounds like it would fit the bill for the 16" MBP. Looks like more cores, probably somewhat higher clock speeds, and of course better graphics is the recipe Apple are going for.

I wonder if the MacBook Air will share the basic iPad A14X though? The A12Z seemed to already be more than adequate for the sorts of things an MBA will be used for...
Definitely better CPU and internal GPU (my MBP 16's iGPU is really horrible, as it was actually a downgrade compared to the Iris Pro found in the 2015 iGPU-only MBP 15). But I would definitely be surprised if Apple makes a GPU in the SoC that is able to beat AMD's 7nm mobile GPU found in the current MBP 16, which is sized at 158mm2 (already bigger than the whole A12Z chip). Or are they going with a separate dGPU of Apple's design (which would be a first for them), or sticking with AMD? Are there even ARM computers using PCIe-bus dGPUs?
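On the CPU side of the post quoted above: for anyone who wants to check the performance/efficiency core split on their own machine, here is a rough Swift sketch. The hw.perflevel0/hw.perflevel1 sysctl key names are an assumption about how per-cluster counts get reported on Apple silicon Macs; a machine that doesn't expose them just prints zeros.

import Foundation
import Darwin

// Rough sketch: read one integer sysctl value by name; returns nil if the
// key does not exist on this machine.
func sysctlInt32(_ name: String) -> Int32? {
    var value: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname(name, &value, &size, nil, 0) == 0 else { return nil }
    return value
}

let pCores = sysctlInt32("hw.perflevel0.physicalcpu")   // performance cluster
let eCores = sysctlInt32("hw.perflevel1.physicalcpu")   // efficiency cluster
print("P cores:", pCores ?? 0, "E cores:", eCores ?? 0)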
 
Definitely better CPU and internal GPU (my MBP 16's iGPU is really horrible, as it was actually a downgrade compared to the Iris Pro found in the 2015 iGPU-only MBP 15). But I would definitely be surprised if Apple makes a GPU in the SoC that is able to beat AMD's 7nm mobile GPU found in the current MBP 16, which is sized at 158mm2 (already bigger than the whole A12Z chip). Or are they going with a separate dGPU of Apple's design (which would be a first for them), or sticking with AMD? Are there even ARM computers using PCIe-bus dGPUs?
Apple made it very clear in the SOTU talk that they have big plans for their own GPUs.

The thing that shocked me is that while listing off the custom chips they’ve made, including the upcoming “family” of SoCs for their Macs, their head chip designer explicitly said the thing he’s MOST excited about (read that as “even more exciting than the unreleased CPUs”) is their own GPUs.

That confidence says to me that Apple has some GPUs that are going to shock the industry. Obviously they’ll have a range of them, but I think we’re going to see some really amazing things from a graphics “newcomer” in the coming years.
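In the meantime, the way software discovers what it is running on shouldn't change much: you ask Metal. A rough Swift sketch; the GPU family constants checked here are just ones that exist today, nothing specific to unreleased parts.

import Metal

// Rough sketch: identify the system default GPU and a couple of the Apple
// GPU family tiers it claims to support.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}
print("GPU:", device.name)
print("Unified memory:", device.hasUnifiedMemory)

let families: [(String, MTLGPUFamily)] = [("apple5", .apple5),
                                          ("apple6", .apple6)]
for (label, family) in families {
    print("supports \(label):", device.supportsFamily(family))
}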
 
Definitely better CPU and internal GPU (my MBP 16's iGPU is really horrible, as it was actually a downgrade compared to the Iris Pro found in the 2015 iGPU-only MBP 15). But I would definitely be surprised if Apple makes a GPU in the SoC that is able to beat AMD's 7nm mobile GPU found in the current MBP 16, which is sized at 158mm2 (already bigger than the whole A12Z chip). Or are they going with a separate dGPU of Apple's design (which would be a first for them), or sticking with AMD? Are there even ARM computers using PCIe-bus dGPUs?
The GPU doesn’t have to be in the SoC.
 
I guess your theory is that in 2007, at the same time that the engineer who authored all those papers retired from CPU design, I decided to join this forum and use that same name, so that 13 years later I could scam you?

Here are photos of my actual bound Ph.D. dissertation and the actual physical JSSC journal in which my most famous paper, about the Exponential x704 PowerPC, was published.

Or am I just *really* good at this scam, and I happened to obtain a copy of the PhD dissertation from the one library in the world that has a copy, stole it in 1996, and held on to it for 25 years just to fool you?
😂

He don’t miss!

I’ve been on this forum since 2008 (I think). Personally, it was very easy to figure out who were the people who knew what they were talking about, and who only knew how to parrot tech “journalism” old wives’ tales.

Cmaier is the real deal, and I get a kick out of it every time some know-nothing here tries to “well, actually” him without knowing who they’re talking to.
 
I get the feeling that their next big hobby after turning their attention away from the massive pile of work they have for the next year or so is to focus on gaming big time. We’ll see.
I completely agree with you; in my opinion, it's pretty obvious that Apple is focusing more on games recently. I just watched this https://developer.apple.com/videos/play/wwdc2020/102/ and they mention how awesome Apple silicon GPUs will be for gaming.
 
The low power cores are very useful. Look at activity monitor on mac and how many processes are running at “0%” of cpu.
...snip...
I think of the small cores mostly as improving overall system performance rather than really making much difference in power consumption on a mac.

They would also be useful for those tasks done while the machine is nominally asleep (“Power Nap”): keeping mail up to date, checking for updates, etc.

In the same way, some of the other integrated components from iPhones/iPads would be nice for other Mac products. For example, the M-series co-processor could be used to let a laptop know when it has been moved and allow it to periodically wake up and notify surrounding devices of its location (“Find My”) and/or update things like clocks, alarms, etc.
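A rough Swift sketch of how that housekeeping-type work is expressed today: you tag it with a low quality-of-service class and leave the core choice to the scheduler. That low-QoS work lands on the efficiency cores is an assumption about scheduler behaviour, not something the API promises, and the queue label is just a placeholder.

import Foundation

let group = DispatchGroup()
// Background housekeeping at a low QoS (label is a placeholder).
let housekeeping = DispatchQueue(label: "example.housekeeping", qos: .utility)

housekeeping.async(group: group) {
    // e.g. checking for updates, indexing, periodic mail fetches, ...
    print("low-priority housekeeping running")
}

DispatchQueue.global(qos: .userInteractive).async(group: group) {
    // work the user is waiting on stays at a high QoS
    print("latency-sensitive work running")
}

group.wait()   // keep this command-line sketch alive until both blocks finish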
 
The GPU doesn’t have to be in the SoC.
Then a new viable GPU player in the market (of course I don't expect Apple to sell GPU cards) would be something to be excited about, as GPU advancement has also hit serious stagnation, with rising prices and a smaller gap between old and new generations.
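Worth noting that on the Mac side the plumbing for GPUs living outside the SoC already exists: Metal simply enumerates whatever is there, whether it is integrated, in a PCIe slot, or hanging off Thunderbolt. A rough, macOS-only Swift sketch:

import Metal

// Rough sketch: list every GPU macOS exposes and whether it is a low-power
// (integrated) or removable (eGPU) device.
for device in MTLCopyAllDevices() {
    print(device.name,
          "| low power:", device.isLowPower,
          "| removable (eGPU):", device.isRemovable,
          "| headless:", device.isHeadless)
}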
 
😂

He don’t miss!

I’ve been on this forum since 2008 (I think). Personally, it was very easy to figure out who were the people who knew what they were talking about, and who only knew how to parrot tech “journalism” old wives’ tales.

Cmaier is the real deal, and I get a kick out of it every time some know-nothing here tries to “well, actually” him without knowing who they’re talking to.
I came very close to posting a selfie of me holding the book, and inviting a comparison to the picture of me in the JSSC article - but then I realized how badly I've aged :)
I completely agree with you; in my opinion, it's pretty obvious that Apple is focusing more on games recently. I just watched this https://developer.apple.com/videos/play/wwdc2020/102/ and they mention how awesome Apple silicon GPUs will be for gaming.
Pretty sure they will have ray tracing, though it's not clear if it will be in the first chips off the block.
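Whether or not that lands in the first chips, apps can already feature-test for it at runtime rather than assuming. A rough Swift sketch:

import Metal

// Rough sketch: check whether the current GPU exposes Metal's ray tracing
// support before trying to build acceleration structures.
if let device = MTLCreateSystemDefaultDevice() {
    if #available(macOS 11.0, iOS 14.0, *) {
        print("\(device.name) ray tracing supported:", device.supportsRaytracing)
    } else {
        print("Ray tracing query needs a newer OS")
    }
}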
They would also be useful for those tasks done while the machine is nominally asleep (“Power Nap”): keeping mail up to date, checking for updates, etc.

In the same way, some of the other integrated components from iPhones/iPads would be nice for other Mac products. For example, the M-series co-processor could be used to let a laptop know when it has been moved and allow it to periodically wake up and notify surrounding devices of its location (“Find My”) and/or update things like clocks, alarms, etc.

Indeed. The device can be “always on” in a more aggressive sense than current Macs while still not draining the battery.
 
Then a new viable GPU player in the market (of course I don't expect Apple to sell GPU cards) would be something to be excited about, as GPU advancement has also hit serious stagnation, with rising prices and a smaller gap between old and new generations.
Turing and RDNA2 seem to mostly be about real-time ray tracing. Well, that and 4K gaming with high details.

If they were serious about gaming, they could start with getting games like Cyberpunk 2077 on their platform.
 
Then a new viable GPU player in the market (of course I don't expect Apple to sell GPU cards) would be something to be excited about, as GPU advancement has also hit serious stagnation, with rising prices and a smaller gap between old and new generations.

More interestingly, GPUs purpose-built for the specific application, rather than trying to serve a broad range of system applications with one design. Think of the equivalent of the A-series CPUs’ high-performance / high-efficiency cores, but for GPUs. Previously, Apple did that by switching between integrated graphics and a dGPU, but in a future design this can be done more granularly. For example, why spin up the ray-tracing cores when one is only reading mail?
 
The design used latches instead of flip flops, and was clock-borrowing all over the place.

🤮

I think the only latches we ever “used” were when someone would forget to call out a signal in the activation list of an always block :) I guess Verilog-2001 took care of that problem.
 
🤮

I think the only latches we ever “used” were when someone would forget to call out a signal in the activation list of an always block :) I guess Verilog-2001 took care of that problem.

We only used verilog to manually instantiate gates.

nand2x1 HappyNand (n0, n1, n2);

That sort of thing. We also manually placed cells and pre-routed. So we didn't have the problem where synthesis would do insane things, at least. :)
 
Definitely better CPU and internal GPU (my MBP 16's iGPU is really horrible, as it was actually a downgrade compared to the Iris Pro found in the 2015 iGPU-only MBP 15). But I would definitely be surprised if Apple makes a GPU in the SoC that is able to beat AMD's 7nm mobile GPU found in the current MBP 16, which is sized at 158mm2 (already bigger than the whole A12Z chip). Or are they going with a separate dGPU of Apple's design (which would be a first for them), or sticking with AMD? Are there even ARM computers using PCIe-bus dGPUs?
Yes, it will be very interesting to see what they do with graphics. Their own solutions are already quite potent for iGPUs, and interestingly they've had a bit of a rapprochement with Imagination Technologies, which makes PowerVR GPUs, so it sounds like they might be cooking up something in-house?
 
We only used verilog to manually instantiate gates.

nand2x1 HappyNand (n0, n1, n2);

That sort of thing. We also manually placed cells and pre-routed. So we didn't have the problem where synthesis would do insane things, at least. :)
Cmaier, you're definitely plugged in to the CPU industry, so I'm asking you a question that's a bit OT but slightly relevant to this thread. There have been a ton of rumors that Intel's stagnation is mainly coming from an "IBM"-like hubris culture (the one that created the utter failure of their PS/2 PCs in the '90s and culminated in IBM giving up on the consumer PC space). They've really stunk up the place and seem to be hemorrhaging engineering talent. It seems like a once-excellent, engineering-dependent company is now being run by a bunch of middle-management bean counters. Kind of like Apple's Sculley/Spindler/Amelio days?
 
It isn’t that easy. Just ask Intel. There’s a reason their desktop chips have lagged for a while. They started focusing on power efficiency instead of raw performance for their 14nm process. They wanted to concentrate on mobile, and this in turn made their higher-performing CPUs more “leaky” and less able to handle current, generating much more heat at higher performance levels. The positive trade-off is better efficiency at lower power. Broadwell and then Skylake were the beginning of this philosophy, and notice how their desktop performance gains after this came in much smaller increments after the success stories of Sandy and Ivy Bridge.

Even now, Intel can’t make their 10nm process run at higher power levels. Notice that the true 10th-gen 10nm chips are only being released in the under-45-watt class? Everything else, from the MacBook Pro 16’s 45-watt CPU to their desktop chips, is still 14nm!

If it was so easy to just add more power and cooling to their low-power designs like Ice Lake, don’t you think Intel would be selling 10nm desktop-class chips that would be much more competitive with AMD in performance per watt?

Intel attempted a node shrink and a process shrink at the same time - that is what got them to this point.

Their new process won't launch until 2021 - Golden Cove appears to be on track.
 
Yes, it will be very interesting to see what they do with graphics. Their own solutions are already quite potent for iGPUs, and interestingly they've had a bit of a rapprochement with Imagination Technologies, which makes PowerVR GPUs, so it sounds like they might be cooking up something in-house?
I thought that the only thing PowerVR had going for it was TBDR. Was there something else they were good at that IMRs couldn't match?
 
Not in the least. If I post a link to a high court judge with the same name as me, it does not mean I am a high court judge.

Funny coming from the guy who calls himself Chippy99 in reference to the person who uses his real name (and has for over 13 years).

More fanboy spin. Apple are "saddled" with zero experience of making high core count enterprise desktops.

Apple has lots of experience making high core count enterprise desktops. They have lots of experience making multi-processor systems as well. As to whether they have experience making their own chips for those uses, I have no idea (nor do you). Just because Apple has not released a system with this chip architecture does not mean they have not been building these kinds of chips. Despite what you and others seem to believe, this architecture change was not a last-minute, spur-of-the-moment idea. Apple has been working towards this for several years, and I would completely expect that they have already built several generations of high performance chips for desktop use to ensure they could do it before they approved this decision.

The fact remains that for desktops thermal envelopes are much less critical than in mobile and whereas the Apple CPU makes sense in a phone or perhaps even in a laptop, it is highly questionable whether there is any benefit at all for desktop owners, i.e. iMac or Mac Pro.

Not less critical at all, just different. What makes the Mac Pro great for people like my BF who work in sound and video production is that it is nearly silent. That is a big deal in comparison to the previous generation and the G5 (whose fan controller was more powerful than a NeXTCube).

As for benefits for iMac Pro and Mac Pro users (I have the former, a hand-me-down, and my BF has the latter), here are some:
  1. Apple’s Neural Engine provides acceleration for ML tasks that are now a critical part of audio/still/video processing workflows (see the sketch after this list).
  2. Lower-power systems matter in general, but they are certainly a godsend to those of us in California, where electricity costs about twice the national average.
  3. Better thermals mean we are way less likely to see throttling on iMac Pro systems, and fans become much less important.
  4. Simpler access to hardware acceleration for compression, etc. that is now on chip, rather than in a separate package like the T2.
  5. (added after watching the WWDC architecture talk) Security features like Pointer Authentication, Kernel Integrity Protection, etc.
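To make point 1 concrete, here is a rough Core ML sketch: the code opts in to every available compute unit and the framework decides whether the Neural Engine, GPU, or CPU runs the model. "SomeModel.mlmodelc" is just a placeholder path, not a real model.

import CoreML

let config = MLModelConfiguration()
config.computeUnits = .all      // CPU, GPU, and Neural Engine where present

do {
    let url = URL(fileURLWithPath: "SomeModel.mlmodelc")
    let model = try MLModel(contentsOf: url, configuration: config)
    print(model.modelDescription)
} catch {
    print("Could not load model:", error)
}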

How long do you think it will be before Apple CPUs can outpace a 64 core Threadripper? Or the 128 core processor which AMD will probably be offering in a couple of years. It will take absolutely ages before Apple can catch up, if ever.

I would expect that Apple will have a machine that is faster than one built on AMD’s and/or Intel’s fastest chips in about two years, when they finish their transition.

And yet they could have put a Threadripper in a Mac Pro tomorrow and retained full x86 compatibility for all the various apps and plugins, which is lost with the move to Apple silicon.

While being dependent on a company whose track record is even more mixed than Intel’s, and giving up all the benefits of a unified system architecture with purpose-built chips.

So it is clear this move is of no benefit to desktop users. It's all about profits and what's best for Apple.

Nope, you may not like it, and you may not think that the benefits matter to you, but you are clearly wrong that there are none.
 
Wow, I can't believe I've read through all 67 pages. Interesting... but irrelevant to most Apple users, i.e. ordinary Joe in the street. Nobody has given that perspective as yet, so that's where I come in :)

I don't care about ARM vs Intel or anything processor/chipset - and I've been through the PPC to Intel transition.

I care about the applications I use and some characteristics of the hardware.

I am in the market for a new iMac and I'm unhappy with the current options - if Apple doesn't fix those, it won't matter to me if they have ARM inside. I suspect that's true for most ordinary Apple buyers. I think Apple is well aware of this, so while they obviously addressed developers in technical terms when making the transition, they'll keep ordinary users in mind.

What this means in practical terms is that to make this all successful, in addition to assuring that the most frequently used apps make a seamless transition, they'll want to address the most common user complaints, regardless of whether those are processor related or not. That way they can claim - come in, the water's fine, fear not, these are fantastic machines and great products! It might even be an opportunity to enlarge market share, rather than shrink it as folks here fear due to losing Windows users in this transition.

To start with - I want it instant ON, and when re-booting, I want it to take no more than a FEW SECONDS. Similar to my iPad.

They absolutely need to speed up responsiveness - banish the spinning beachball. I have an iPad - not even a Pro, just the Air 3 I bought new in 2019 - and I want that responsiveness and speed from my iMac. Currently my iMac constantly has spinning-ball issues - in my case additionally due to having a ton of external hard drives which go to sleep and, as a result, cause problems for Finder everywhere else on the system. That is unacceptable... if that's going to be an issue, it should be confined only to the specific external hard drive and not spread to the rest of the system.

I hate how often it hangs - everything freezes. What I want is the ability to truly KILL a process. Instead, it's a farce - there's a specific option of FORCE QUIT - except half the time it doesn't force anything and I have to yank the cord. This is unacceptable. I want the KILL command to truly kill a process.

And I don't want one application to be able to hang the whole system. Why is it that, for example, my BitTorrent client is able to hang up my whole computer?? If an application is non-responsive, it should be confined to just that application, and not affect my whole computer.

Heat and Noise. Yeah, I don't want to heat up the room, and I don't want a jet engine - 'nuff said. It's impossible to work on a computer that has screaming fans.

For laptops - obviously battery life, longer is better and I'd be very tempted by something that lasts a couple of days... as is, battery life is one reason why I let my laptop just sit in a drawer in favor of using the iPad.

Lighter and thinner - folks on these boards can never understand the Apple obsession with "thin"... well, clearly Apple has done their marketing research and they've realized that there are a LOT of people like me, who actually appreciate LIGHT AND THIN. I don't want to lug around a hulking monster - nor have one sitting on my desk, including the iMac. I want as light as possible - get over it. I avoided buying the early iPads because they were too heavy - I want to be able to comfortably hold my iPad like a book in my hand when I read, and there every ounce counts. I want my iMac to be light, so I can move it around very easily when I sit down to watch a movie, or move it to another position when I want to work on office stuff, or whatnot. If a 27" iMac weighed 10lbs instead of 20lbs, I'd be happier - I realize that's ambitious, but that's my preference.

The chipset inside doesn't matter to me one bit.
 
Cmaier, you're definitely plugged in to the CPU industry, so I'm asking you a question that's a bit OT but slightly relevant to this thread. There have been a ton of rumors that Intel's stagnation is mainly coming from an "IBM"-like hubris culture (the one that created the utter failure of their PS/2 PCs in the '90s and culminated in IBM giving up on the consumer PC space). They've really stunk up the place and seem to be hemorrhaging engineering talent. It seems like a once-excellent, engineering-dependent company is now being run by a bunch of middle-management bean counters. Kind of like Apple's Sculley/Spindler/Amelio days?

I don't have any answers, so I will tell you all I know.

I once interviewed at Intel, back in 1992. Everyone was dressed very nicely. I was walked in, and along the way I was introduced to a guy passing by. After I got to the room where I was to be interviewed, my escort said, "You know who that guy was? He was the one who ****ed up and caused the big floating point FDIV bug." (For those with short memories, this was a huge scandal back in the day.)

I thought it kind of rude to throw someone under the bus like that.

Then I noticed that I was stuck in a conference room as different interviewers came in one at a time, instead of being escorted from office to office like everywhere else I interviewed.

Then I was told that the CEO checks whose cars are in the parking lot each morning at 8am so he knows who is at work in the morning.

Then I was asked to pee in a cup. I *think* IBM in Vermont may have asked me to do that too, but I can't remember anywhere else asking me to do so.

This was in Santa Clara - the *good* designers were reputedly in Oregon.

Anyway... I was offered a System Architect position and declined. Instead I decided to get my Ph.D.

.... years pass ....

I am at AMD, and I am hiring folks. Candidates from lots of other companies and a couple folks from Intel.

I am interviewing the Intel folks and I recall two things.

1) They used terminology I couldn't understand. Their job titles made no sense to us, the technical terms they used made no sense to us, etc.

2) They had very narrow skill sets. One guy was the "adder" guy and did nothing for 10 years other than redesign the same adder over and over again. He knew only how to use Intel's tools and no others. He only knew how to do logic design and had no insight into physical design.

... years pass ...

I am in Monterey for a conference, and I overhear some Intel guys talking about Opteron. "There's no way they could have done that - they only had like 15 designers. We have 400 and can't do it - no way that chip runs at the speed they say it does."

So, that's what I know about their designers.

I *think* their designers didn't get worse or anything. I think their problem was that something horrendously terrible happened to their fab, and I don't know what went wrong or why. It used to be their strong suit. Their fabs were great and their designs were mediocre, and that was good enough.

Now the fabs aren't working for them, so here we are.
 
🤮

I think the only latches we ever “used” were when someone would forget to call out a signal in the activation list of an always block :) I guess Verilog-2001 took care of that problem.

At least you didn’t have to design in NMOS using precharge-evaluate latching for registering, with only SPICE for timing estimates.
 
At least you didn’t have to design in NMOS using precharge-evaluate latching for registering, with only SPICE for timing estimates.

I can one-up that. I designed a bipolar-logic CPU using current-mode logic.

Also with nothing but SPICE (and we had to make our own SPICE models for the heterojunction bipolar transistors).

(When I was a boy, we designed chips uphill both ways!)
 
I don't have any answers, so I will tell you all I know.

I once interviewed at Intel, back in 1992. Everyone was dressed very nicely. I was walked in, and along the way I was introduced to a guy passing by. After I got to the room where I was to be interviewed, my escort said, "You know who that guy was? He was the one who ****ed up and caused the big floating point FDIV bug." (For those with short memories, this was a huge scandal back in the day.)

I thought it kind of rude to throw someone under the bus like that.

Then I noticed that I was stuck in a conference room as different interviewers came in one at a time, instead of being escorted from office to office like everywhere else I interviewed.

Then I was told that the CEO checks whose cars are in the parking lot each morning at 8am so he knows who is at work in the morning.

Then I was asked to pee in a cup. I *think* IBM in Vermont may have asked me to do that too, but I can't remember anywhere else asking me to do so.

This was in Santa Clara - the *good* designers were reputedly in Oregon.

Anyway... I was offered a System Architect position and declined. Instead I decided to get my Ph.D.

.... years pass ....

I am at AMD, and I am hiring folks. Candidates from lots of other companies and a couple folks from Intel.

I am interviewing the Intel folks and I recall two things.

1) They used terminology I couldn't understand. Their job titles made no sense to us, the technical terms they used made no sense to us, etc.

2) They had very narrow skill sets. One guy was the "adder" guy and did nothing for 10 years other than redesign the same adder over and over again. He knew only how to use Intel's tools and no others. He only knew how to do logic design and had no insight into physical design.

... years pass ...

I am in Monterey for a conference, and I overhear some Intel guys talking about Opteron. "There's no way they could have done that - they only had like 15 designers. We have 400 and can't do it - no way that chip runs at the speed they say it does."

So, that's what I know about their designers.

I *think* their designers didn't get worse or anything. I think their problem was that something horrendously terrible happened to their fab, and I don't know what went wrong or why. It used to be their strong suit. Their fabs were great and their designs were mediocre, and that was good enough.

Now the fabs aren't working for them, so here we are.
Thanks for the insider insight. In other words, mediocre chip designers who were able to lean on fabs that were ahead of everyone else. They also sound like they've gotten too big and compartmentalized too much, which really hampers innovation.
Now that TSMC has caught up, they can no longer lean on their fabs to remain ahead of the competition... Even worse, they've got a no-BS, efficient company like Apple, which doesn't tolerate mediocrity, teamed with a top-notch fab like TSMC. I can now see why Apple was willing to cut them off when they saw through their marketing...
 
Thanks for the insider insight. In other words, mediocre chip designers who were able to lean on fabs that were ahead of everyone else. They also sound like they've gotten too big and compartmentalized too much.
Now that TSMC has caught up, they can no longer lean on their fabs to remain ahead of the competition... Even worse, they've got a no-BS, efficient company like Apple, which doesn't tolerate mediocrity, teamed with a top-notch fab like TSMC. I can now see why Apple was willing to cut them off when they saw through their marketing...

I think that's probably right.

I guess one other point to make is that people don't get excited about going to work for Intel. At least not people who have the characteristics that *I* personally admire.

I mean, as an engineer, I was always most excited about doing a design from a clean sheet of paper and owning as much of the design as possible. The two best jobs I had -

1) At Exponential, I was hired in as their youngest engineer by far, and the only one with no work experience in the field, because my Ph.D. dissertation randomly happened to be on the exact same exotic circuit technology they were using, which meant I actually had more experience with it than most of the people already there. I was handed literally half the chip to own. The downside is the design was already "done," so my job was to make it run fast enough and at low enough power, fix bugs, and start designing the equivalent part of the chip for the follow-on design. (It was the floating point unit, as it turns out, which was a HUGE portion of the chip, sadly.)

2) At AMD, after almost everyone quit, a handful of us were left to design amd64/x86-64 from scratch, with nobody telling us what it should look like or how it should work. I got to own, at various times: the instruction set design for 64-bit integer math; the logic and physical design of the floating point unit, the integer ALUs, and the register file and rename unit; the "globals" (power grids, clock grids, standard cell architecture, what cells are actually allowed in the library); design methodology (I wrote MANY of the CAD tools myself); power analysis; clock gater insertion; buffer insertion; and probably a dozen more things I am forgetting.

To work at Intel, you (at least back when I kept up with such things) ended up working on a very narrow thing for a long time. That would drive me nuts.
 