OK, I'm done with this joker. Some people want to have a productive conversation and some are just trolls. This shareholder is one of them. Have a nice weekend, everyone.
 
"Anyone can write a book, it's the intricacies of the printing industry which makes the real difference." OK.
Bad analogy. Writing the book is irrelevant in this case. The physical properties of the book, such as laminated vs. regular pages, are the more accurate analogy.
The writer can pick fancier pages and covers at a higher cost, but the manufacturer is the one with the know-how. For the writer, this is a cost-benefit game, not a technical game.
 
You sound like you know what you're talking about, so I will only reply to you and ignore the Apple fanatics.

The fabbing process is, by far, the most important part, and it is also the hardest.
Can you explain why designing is as hard or even as important?
It's not hard to determine the transistor density and the various energy consumptions of a given SoC design on a particular fabbing process. The problem then boils down to a cost-benefit analysis: how big do you want to make your CPU/GPU cores before your yield rates plummet? A good fabbing process lets the non-fabbing 'designer' have more options to play with, but at the end of the day, the non-fabbing customer is a cost-benefit decider, not someone who is pushing tech like the fabbing company.
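For what it's worth, the core-size vs. yield trade-off being argued about here is usually first-order modeled with the classic Poisson defect model, Y = exp(-A*D). A minimal sketch; the defect density and die areas below are made-up illustrative numbers, not figures from any real process:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-A * D)."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Illustrative only: 0.002 defects/mm^2 on a hypothetical mature process.
for area in (100, 200, 400, 800):
    print(f"{area:4d} mm^2 die -> {poisson_yield(area, 0.002):.1%} yield")
```

The point both sides are circling: yield falls exponentially with die area, which is exactly why "how big do you make the cores" is a real decision rather than a free choice.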

This is absolute insanity.

In 2006 Intel’s fabs crushed AMD’s. Yet Opteron blew away Intel’s products. Why?

The trick in processor design is not “determining the transistor density and the various energy consumptions.” It’s figuring out what to do with more than a billion transistors, figuring out what size and shape each of them should have, where each of them should go, and how each of them should be connected to each other. It’s figuring out the path each wire should take, the dimensions of each wire, and which layers to use for each wire.

It takes two and a half to three years to design a high-end microprocessor. For Opteron we had to design the entire 64-bit instruction set, which involved looking closely at operating systems and applications to try to predict where bottlenecks - for software that didn’t exist yet - would occur.

Then we had to figure out a top-level architecture - how will we support multiple cores down the line, how will the pipelining work, how big will the reservation stations be, what will the branch prediction algorithm be, what will the load store units look like, how wide will the instruction issue be, how big should the caches be, etc. We had to floorplan the chip, figuring out how to position and size the top-level blocks, without yet knowing exactly what circuits would be in them.
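To make one of those choices concrete: the textbook baseline for a branch prediction algorithm is a two-bit saturating counter per branch. This is a generic sketch of that scheme, not the actual Opteron predictor:

```python
class TwoBitPredictor:
    """Textbook 2-bit saturating-counter predictor, one counter per branch PC.

    States 0-1 predict not-taken, 2-3 predict taken; each real outcome
    nudges the counter, so one anomalous outcome can't flip a strong state.
    """

    def __init__(self):
        self.counters = {}  # branch PC -> counter state 0..3 (default 1)

    def predict(self, pc):
        return self.counters.get(pc, 1) >= 2

    def update(self, pc, taken):
        c = self.counters.get(pc, 1)
        self.counters[pc] = min(3, c + 1) if taken else max(0, c - 1)

predictor = TwoBitPredictor()
# A loop branch: taken 8 times, not taken once at the loop exit, then 8 more.
outcomes = [True] * 8 + [False] + [True] * 8
hits = 0
for taken in outcomes:
    hits += predictor.predict(0x400) == taken
    predictor.update(0x400, taken)
print(f"{hits}/{len(outcomes)} predictions correct")  # 15/17
```

Even this toy version shows the design tension: the hysteresis that absorbs a single loop exit costs an extra mispredict when behavior genuinely changes, and real predictors layer far more machinery on top.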

We had to work out how to avoid cross-coupling by using power and ground planes, while at the same time designing our interconnect structure in a way that we would have enough routing planes. We had to design the standard cell architecture, determining how tall each cell would be, where the power/ground taps would be, etc. in an effort to optimize for density while, at the same time, allowing for sufficient bypass capacitance and thermal spreading, while obeying the design rules.

We had to work with the fab to engineer the transistor and interconnect performance we needed, and to develop SPICE models and parasitic models for the transistors and wires. We had to develop an architectural model for the design, and verify that it successfully ran thousands upon thousands of instruction traces - which we first had to develop, because the new ISA had no collection of pre-captured traces. We had to work with operating system vendors and internally to get OS support. For each top-level block, we had to break it down into circuits, determining the location of each transistor, its size, and the interconnections between them.

We had to determine which metal layers each wire uses, and the route each wire takes. We had to determine how to route the clock wires, how many clock gates to use, and where to put them. We had to determine where to put repeaters, how many to use, and what size they should be. We had to design tools to determine the speed of each critical path, and to figure out what happens to that speed as we make design changes. We had to verify that the circuits we designed were mathematically equivalent to the architectural model. We had to test to see if we had caused any race conditions that would cause failure.
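Those critical-path tools are, at their core, longest-path searches over a timing graph. A toy sketch over a hypothetical gate-delay netlist (real static timing analysis also models wire RC, slew, and setup/hold margins, which this omits):

```python
from functools import lru_cache

# Hypothetical netlist: gate -> (delay in ps, fan-in gates). Acyclic by construction.
netlist = {
    "in_a": (0, []), "in_b": (0, []),
    "xor1": (90, ["in_a", "in_b"]),
    "and1": (70, ["in_a", "in_b"]),
    "or1":  (60, ["xor1", "and1"]),
    "out":  (20, ["or1"]),
}

@lru_cache(maxsize=None)
def arrival(gate: str) -> int:
    """Latest signal arrival time at a gate's output, in picoseconds."""
    delay, fanin = netlist[gate]
    return delay + max((arrival(g) for g in fanin), default=0)

print(arrival("out"))  # critical path in_a -> xor1 -> or1 -> out: 170
```

The "what happens when we make design changes" part is why this matters: moving one gate changes arrival times on every downstream path, so the whole analysis has to be rerun constantly.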

We had to develop a system to determine if cross-coupling between wires would cause any functional or performance problems, and, if so, we had to figure out how to re-route the wires to compensate. We had to repeat the “analyze-move transistors-move wires” process hundreds of times. We had to determine where the abutment pins between blocks go. If someone had to move a pin on one block, then the neighboring block had to go and adjust a bunch of wires, analyze, potentially move transistors, etc. We had to design each standard cell, both schematically and physically. We had to design custom macro blocks like PLLs and memory structures.

We had to develop FIFO structures for handling communications between clock domains. We had to design a clock deskew scheme. We had to analyze for clock skew and feed that back into our timing simulations, and, if necessary, adjust the circuits again. We had to analyze for electromigration issues, and adjust the circuits again. We had to calculate IR drop on the power rails, and move circuitry around accordingly.

We had to analyze for sufficient bypass capacitance, insert bypass capacitors, and move circuits around accordingly. And every time you move something, it causes a ripple that affects thousands of other wires, all of which has to be re-analyzed, and which usually means you have to repeat the cycle a dozen more times. When determining the circuits, you start with the architect saying “A=B+C,” and you have to design a circuit that takes two 64-bit 2’s complement numbers, adds them to produce a 2’s complement result, and does so within one clock cycle, within a certain power budget, and within an allotted number of square microns on the chip. And you are doing that exercise thousands of times, once for each simple line of architectural code.
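The "A=B+C" example has a precise functional target the circuit must match: 64-bit two's-complement addition wrapping modulo 2^64. A small sketch of that reference arithmetic - the specification the hardware adder must meet in one cycle, not the carry-lookahead circuit itself:

```python
MASK = (1 << 64) - 1  # 64 one-bits

def to_signed(x: int) -> int:
    """Interpret a 64-bit pattern as a two's-complement integer."""
    return x - (1 << 64) if x & (1 << 63) else x

def add64(b: int, c: int) -> int:
    """64-bit two's-complement add: wrap modulo 2^64, like the hardware."""
    return (b + c) & MASK

# -1 + 1 wraps to 0; INT64_MAX + 1 wraps to INT64_MIN.
assert to_signed(add64(MASK, 1)) == 0
assert to_signed(add64((1 << 63) - 1, 1)) == -(1 << 63)
```

One Python line of specification; the designer's job is producing a circuit that matches it exactly, within the cycle time, power budget, and area budget.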

And, in the end, this process which doesn’t “push technology” results, every time, in dozens of patents, several academic or conference papers, etc. I was, myself, published in IEEE’s Journal of Solid-State Circuits, and it was not because what we were doing “wasn’t hard” and “wasn’t pushing the technology.”

It would be a good idea to try to design a real product before commenting on what is hard and what is not.
 
This is absolute insanity.

In 2006 Intel’s fabs crushed AMD’s. Yet Opteron blew away Intel’s products. Why?

I'm going to ignore your wall of text and focus only on the important pieces.

The highest-end server chips were always Intel's. You cannot look only at the low- and mid-range chips, because Intel gimped those intentionally to sell their higher-end ones. In fact, you can also argue that they gimped their higher-end ones to sell next year's versions.

The trick in processor design is not “determining the transistor density and the various energy consumptions.” It’s figuring out what to do with more than a billion transistors

The billions of transistors are thanks to the fabbing process, not the fab-less designer. Determining what to do with those transistors is a trivial exercise next to actually building a chip with a high transistor density.

figuring out what size and shape each of them should have, where each of them should go, and how each of them should be connected to each other. It’s figuring out the path each wire should take, the dimensions of each wire, and which layers to use for each wire.

These are mostly trivial exercises. I'm not sure how any of this proves that the fabless designer is important, nor that the designer brings anything unique to the table.
It's like crediting McDonald's success to the person who worked out how the servers bring food to your table. That may even be true, but it isn't a rare skill set, and it's something Burger King can copy easily.
 
Thank you for that. I feel that this shareholder troll won't get it and will continue with his nonsense.

Anyway, let's not give him more attention. He clearly has no idea what he is talking about and is just spouting nonsense to stir up heat here. So let's not give him that; let's get excited for the weekend ahead and the new week, shall we? ;-)

Anyway, thank you for laying out all the complexity; it was great to see more and more of it.



This is absolute insanity.

In 2006 Intel’s fabs crushed AMD’s. Yet Opteron blew away Intel’s products. Why?

[…]

It would be a good idea to try and design a real product before commenting on what is hard and what is not.
 
"Sold all my AAPL stocks and bought TSLA
Current portfolio: $8 million $5 million TSLA"

lol... Probably true. Haven't checked my portfolio in the past few weeks.
I'm definitely not panicking, though. I've handled worse dips back in March, and I just shrugged. TSLA is a better investment than AAPL right now.
 
lol... Probably true. Haven't checked my portfolio in the past few weeks.
I'm definitely not panicking, though. I've handled worse dips back in March, and I just shrugged. TSLA is a better investment than AAPL right now.

It's okay. Invest what you believe in. Buy on the dips

But according to your argument, VW, GM, Toyota, etc... should roll over Tesla once their manufacturing might is focused on EV. The "EV Platform", so to speak, is quickly becoming a commodity. Look what is happening to Tesla's EV market share in the EU the last few months. It is literally collapsing.

In a couple of years, Tesla's energy credit revenue is going to dry up. That is the only source of profits. Tesla does not make money on cars.

From my seat, Tesla is a high stakes poker bet with a big potential downside. Good luck and invest safe.
 
Why would they shrink the 11" iPad Pro to 10.9"? Or is the implication here that the iPad Air is replacing the 11" Pro? Silly if that's the case. I love my iPad Pro but I don't want a big bulky 12.9" one.
 
It's okay. Invest what you believe in. Buy on the dips

But according to your argument, VW, GM, Toyota, etc... should roll over Tesla once their manufacturing might is focused on EV. The "EV Platform", so to speak, is quickly becoming a commodity. Look what is happening to Tesla's EV market share in the EU the last few months. It is literally collapsing.

In a couple of years, Tesla's energy credit revenue is going to dry up. That is the only source of profits. Tesla does not make money on cars.

From my seat, Tesla is a high stakes poker bet with a big potential downside. Good luck and invest safe.

Yes, I fully expect legacy auto companies to roll over while Tesla steamrolls all of them. Tesla is the return of the American auto industry, and we should all be rallying behind them. I expect America to, once again, be the dominant automaker in the coming decades.

VW's ID3, ID4 and the Chevy Bolt received some very unflattering comments from consumers.
The Volkswagen ID.4 Is A Disappointing Electric Car (For Now) - YouTube

They're definitely not up to Tesla's standards, and there's no reason for me to believe they'll ever have the brand and product quality that Tesla has. I do not expect other legacy automakers to make better cars than Tesla.
Yes, I know. Tesla had some panel gap, paint job and poor seat issues, but those are fixed in the 2021 models, and they have been the safest cars rated by the NHTSA for a few years now.

Tesla has no factories in Europe right now; sales there are 100% exports from the Fremont and Shanghai gigafactories. They still sell every unit they export there. It's the same criticism people made before Tesla had Giga Shanghai, and now Tesla has 40% of the BEV revenue in China.
Once GigaBerlin goes online, it's game over in Europe.

Tesla's net income should be negative or close to $0 as they focus on expansion. They are currently in the growth phase and any high profit would actually make me more bearish, as it would make me think that Tesla's management doesn't believe they can grow much.
 


Earlier today, DigiTimes shared a preview of an upcoming report claiming that Apple is working on both iPad and Mac notebook models with OLED displays that could launch starting in 2022. The full report from DigiTimes is now available, and it includes several new alleged details about Apple's plans.


According to the report, the first of these devices to adopt an OLED display is likely to be a 10.9-inch iPad, presumably an updated version of the iPad Air. The updated iPad is said to be planned to go into production in the fourth quarter of this year with a launch coming in early 2022. In addition to the 10.9-inch iPad, Apple is also said to be considering using OLED displays for the 12.9-inch iPad Pro and the 16-inch MacBook Pro.

While rumors of OLED displays for Apple's larger portables have only recently started to surface, the company has been rumored for some time to be transitioning to mini-LED displays on its iPads and Macs. DigiTimes says that the two display technologies will exist side-by-side, "each targeting different customer groups."

A number of sources including DigiTimes have indicated that a 12.9-inch iPad Pro with a mini-LED display is coming in the first half of this year, and DigiTimes says 14-inch and 16-inch MacBook Pro models coming in the second half of the year will also adopt mini-LED.

Article Link: OLED 10.9-Inch iPad Rumored for Early 2022, 12.9-Inch iPad Pro and 16-Inch MacBook Pro Could Follow
I would rather Apple use a flicker- and PWM-free backlight with mini-LED than the current PWM backlights behind an OLED screen (looking at you, OLED iPhones).
 
OLED MacBook Pro 16 and I don't need the real world anymore
 

So which is cheaper, OLED or mini-LED? What about OLED burn-in? These screens are on 12 hours a day at least. Any issue with the menu bar being permanently "on" after a while?
Simple: Samsung undercut the mini-LED supplier and offered to replace any OLED displays for Apple for free. The mini-LED supplier couldn't match that.
 
They have at least a dozen patents on microLED technology, so whatever they eventually reveal, they likely WILL have invented it.

Manufacturing technique is more important.
Apple's patents deal more with the utilization of microLED and not the actual production of the display. Display quality will depend on the manufacturer (Probably Samsung).

Have a look at one of their patents:

Apple Granted New Display Patent Based on Micro LED - LEDinside

Apple’s plan to adopt advanced display technologies, including Micro LED and Mini LED, for its new products is no news to the market. The company has been dedicated to driving the progress of these cutting-edge display fabrication methods and has won several related patents. The latest one published describes displays that incorporate a timing controller to drive different display areas over time.

This patent doesn't sound like a technology that improves anything about microLED image quality. It sounds more like a trivial application of someone else's microLED technology.
 
Manufacturing technique is more important.
Apple's patents deal more with the utilization of microLED and not the actual production of the display. Display quality will depend on the manufacturer (Probably Samsung).

Oh, again with this nonsense.

Not to mention, you’re wrong. Please explain to me how a patent entitled “Method of forming a micro led structure and array of micro led structures with an electrically insulating layer” doesn’t “deal...with the...actual production of the display.”

Or how about “Method of forming a micro LED device with self-aligned metallization stack”? Also not about “the actual production of the display?”

Maybe “method for integrating a light emitting device,” where claim 1 is directed to the following manufacturing method:

1. A method for integrating a light emitting device comprising:
picking up a micro LED device from a carrier substrate with a transfer head;
placing the micro LED device on a receiving substrate;
releasing the micro LED device from the transfer head;
applying a passivation layer over the receiving substrate and laterally around the micro LED device;
hardening the passivation layer; and
etching the passivation layer, such that a top surface of the micro LED device and a top surface of a conductive line on the receiving substrate are not covered by the passivation layer, and a portion of the micro LED device and the conductive line protrude above a top surface of the passivation layer after etching the passivation layer.

In short, you are just making things up. Apple has a TON of patents on methods of manufacturing microLED devices.
 
Manufacturing technique is more important.
Apple's patents deal more with the utilization of microLED and not the actual production of the display. Display quality will depend on the manufacturer (Probably Samsung).

Have a look at one of their patents:

Apple Granted New Display Patent Based on Micro LED - LEDinside



This patent doesn't sound like a technology that improves anything about microLED image quality. It sounds more like a trivial application of someone else's microLED technology.

They also bought LuxVue quite a while ago and have been working on microLEDs, and on manufacturing them, for a long time. And picking one random patent is meaningless.
 