But what about the Apple research paper published on the eve of WWDC that says that generative AI isn't very smart at all?
Not particularly applicable. Apple's paper was about whether these tools were actually 'reasoning' (creating novel solutions by inference or deduction, or something other than regurgitating what they have already seen).
That isn't likely what an AI assistant for circuit design would be. First of all, it doesn't need to be a general human-language chatbot. The AI tool could be fed tested design specs for the general logic of a circuit. The AI tool's job is just to help lay it out using the design tools so as to optimize for density, power, or some other criterion. It would not be for some joe-random in accounting to type in "give me a new M6 chip" and ta-da, a magical answer comes out. It is more of a very narrow expert task that the tool has been explicitly trained on with hundreds of examples.
(Synopsys, Cadence, and other chip design tool vendors will be training these.)
This has very little to do with random chit-chat chatbot work. Piling in all the fictional literature on the planet, every Reddit/MacRumors/internet forum chatter, and random romance novels doesn't really contribute a whole lot to better circuit design. It is really a 'language' problem in a narrow sense: expert specifications go in and even more expert specifications come out. There is no 'chit chat' necessary there at all.
If a statistical "reasoning" system had been trained on hundreds of Towers of Hanoi examples, it would have spit out the right answer. But it really would have been far more of a "monkey see, monkey do" solution: more regurgitation (or at best incremental adaptation of an incrementally different solution) than deduction of something 'novel'. If the systems had been trained on Towers, then Apple wouldn't have used it for their paper.
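Part of why Towers of Hanoi makes a good test case here is that the classic recursive solution is entirely mechanical; once the pattern is known, no novel deduction is required, which is exactly the "monkey see, monkey do" regime. A minimal Python sketch (function and variable names are illustrative, not from any paper):

```python
def hanoi(n, src, dst, aux, moves):
    """Move n disks from peg src to peg dst, using aux as scratch.

    The recursion is completely formulaic: solve the (n-1)-disk
    subproblem, move the largest disk, solve the subproblem again.
    """
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)
    moves.append((src, dst))          # move the largest remaining disk
    hanoi(n - 1, aux, dst, src, moves)

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))   # a 3-disk puzzle always takes 2**3 - 1 = 7 moves
print(moves[0])     # ('A', 'C')
```

The move count grows as 2^n - 1, so larger instances are longer but no harder conceptually, which is why a system trained on many examples can pattern-match its way through without anything resembling deduction.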
In all seriousness, it sounds great if we can all get more powerful, more capable chips in our hardware, sooner rather than later.
This is more about productivity on a given design that is mainly driven by human experts. So, for example, maybe a team of 50 does what a team of 100 did in the same amount of time. Right now Apple tends to go idle in some areas: yearly iterations of the A-series mean you don't get yearly iterations of the Watch SoC. They're unlikely to go from 3 M-series die variations to 6 or 7 M-series variations.
The complexity of the dies being built is also going up. At one point 2B transistors was a big budget. Then 8B, then 20B, 40B, 80B, etc. A 120B-transistor die really shouldn't require a bigger team. The pace and breadth of dies coming out of Apple wouldn't change, but they would contain more complexity with less of an increase in development cost.