That's only 2.6x larger, not 10x.

For a notional wafer that produces ~200 1.6x sensors, it would yield ~77 FF sensors... if we assume all yield issues are the same, etc., then if a crop sensor is $500, the FF version is roughly $1300. Of course, elements such as yield aren't the same ...

That's the problem; FF sensors are also much more likely to have a major flaw because of the surface area. And you're talking a circular wafer...square pegs in a round hole, not a matter of neat division. Look at how ridiculously expensive medium format CCDs are (granted, Kodak's yields are low and CCDs are expensive)--the cost of full frame is substantially higher. Of course Red is planning to come out with a 617-sized sensor--one sensor per wafer--but...I'm not holding my breath for it.


I know nothing about video, so I have no idea how well this camera will suit the needs of today's videographers.

The missing features (raw video, clean HDMI out, 1080/60p) are super cool but mostly useless for the average professional videographer (if your needs really justify them, you can probably afford to rent a RED, an F3, or a Phantom, and very few people really need them anyway), but a lot of people seem upset about their absence for whatever reason. The improvements (low-light performance, a reduction in skew, an improved codec, no more aliasing, a headphone jack, and adjustable levels) are all extremely useful.

That said, it all boils down to real world image quality and the sample videos are very low resolution, inexplicably softer than the very soft (for 1080p) 5DII. The just-published stills are really soft, too. Kind of troubling...
 
That's the problem; FF sensors are also much more likely to have a major flaw because of the surface area.

True, but that's already partially normalized out when we use the post-yield cash-equivalent value for the baseline 1.6x crop chip and then adjust by area. In addition, we don't necessarily know what feature sizes they're using, which we'd need in order to estimate flaw-size thresholds. However, what's IMO probably the most significant factor here is that we're talking about silicon wafers, which are a far more mature and less fussy material than, say, Gallium Arsenide (GaAs), and inherently have fewer flaws per unit area.
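To make that "partially normalized out" point concrete, here's a minimal sketch assuming a simple Poisson defect-yield model (the defect density is an illustrative guess, not a real fab number): the larger die's yield falls faster than its area grows, which adds a multiplier on top of the plain area-ratio cost.

```python
import math

# Minimal sketch, assuming the simple Poisson yield model Y = exp(-D0 * A).
# The defect density D0 below is purely an illustrative guess, not a fab figure.
D0 = 0.05                  # assumed defects per cm^2
crop_cm2 = 2.23 * 1.49     # Canon 1.6x crop sensor area, ~3.32 cm^2
ff_cm2   = 3.6 * 2.4       # Canon full-frame sensor area, 8.64 cm^2

y_crop = math.exp(-D0 * crop_cm2)
y_ff   = math.exp(-D0 * ff_cm2)

print(f"crop yield ~{y_crop:.0%}, FF yield ~{y_ff:.0%}")
print(f"extra cost multiplier beyond pure area scaling: {y_crop / y_ff:.2f}x")
```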


And you're talking a circular wafer...square pegs in a round hole, not a matter of neat division.

Yes, and already accounted for:

... personally, if I were designing that wafer mask, I'd put a cluster of FF's in the middle (where yield is usually better) and then populate and build out to the edges with the smaller 1.6x sizes .. less edge waste, if nothing else...

In addition, the effect of edge loss diminishes as the wafer size increases.
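To put a rough number on that, here's a back-of-the-envelope sketch using the common gross-die-per-wafer approximation (scribe lanes and edge exclusion are ignored, so the absolute counts are only indicative):

```python
import math

# Back-of-the-envelope gross-die-per-wafer estimate using the common
# approximation  N = pi*(d/2)^2/S - pi*d/sqrt(2*S),  where d is the wafer
# diameter and S the die area.  Scribe lanes and edge exclusion are ignored.
def gross_die(wafer_mm, die_w_mm, die_h_mm):
    S = die_w_mm * die_h_mm
    area_term = math.pi * (wafer_mm / 2) ** 2 / S
    edge_term = math.pi * wafer_mm / math.sqrt(2 * S)
    return area_term - edge_term, edge_term / area_term

for wafer in (200, 300):                        # 8" vs 12" wafers
    n, edge_frac = gross_die(wafer, 36, 24)     # full-frame die
    print(f"{wafer} mm wafer: ~{n:.0f} FF die, edge loss ~{edge_frac:.0%}")
```

With those assumptions, the fraction of wafer area lost to the edge drops noticeably when going from an 8" to a 12" wafer.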

Thinking out loud, one could also consider developing a pre-fab screening test that scans a wafer and maps which zones appear to be free of flaws, and then adjusts the placement of one's mask to suit, i.e., don't use the known bad parts of the wafer. This strategy would reduce the per-wafer number of parts, but would improve net yields.
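A purely hypothetical sketch of that screen-first-then-place idea in its simplest form (a coarse grid of pass/fail cells; everything here is invented for illustration):

```python
# Purely hypothetical sketch of the "screen first, then place" idea: divide
# the wafer into a coarse grid, flag the cells a pre-fab screening pass found
# bad, and only count die sites whose cells are all clean.  A real flow would
# work on the actual reticle/stepper grid, not this toy version.
def usable_sites(grid, die_rows, die_cols):
    """grid[r][c] is True when that cell passed screening."""
    rows, cols = len(grid), len(grid[0])
    sites = 0
    for r in range(rows - die_rows + 1):
        for c in range(cols - die_cols + 1):
            if all(grid[r + i][c + j]
                   for i in range(die_rows)
                   for j in range(die_cols)):
                sites += 1
    return sites

# Toy example: a 6x6 grid with two known-bad cells.
wafer_map = [[True] * 6 for _ in range(6)]
wafer_map[1][4] = wafer_map[3][2] = False
print(usable_sites(wafer_map, 2, 2))   # candidate 2x2-cell placements that avoid bad cells
```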


Look at how ridiculously expensive medium format CCDs are (granted, Kodak's yields are low and CCDs are expensive)--the cost of full frame is substantially higher.

Plus buyer demand is low, so they aren't as able to leverage economies of scale.

Of course Red is planning to come out with a 617-sized sensor--one sensor per wafer--but...I'm not holding my breath for it.

617 format? That would be 186 mm x 56 mm .. that might be only one per 8" wafer, but it would make so much more sense to use a 12" wafer ... unless of course they're working with a hybrid mask that fills up the rest of the wafer's real estate with other (smaller) sensors. Then it depends on their production demands to determine the target ratios (before and after all yield losses).
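A quick back-of-the-envelope fit check (pure geometry, no edge-exclusion ring or scribe lanes, so reality would be tighter):

```python
import math

# Quick geometric sanity check on a 617-sized (186 mm x 56 mm) die: does a
# centered stack of them fit inside a given wafer diameter?  Edge exclusion
# and scribe lanes are ignored, so this is an upper bound at best.
def stack_fits(wafer_mm, count, die_w=186.0, die_h=56.0):
    return math.hypot(die_w, die_h * count) <= wafer_mm

for wafer in (200, 300):                    # 8" vs 12" wafers
    n = 0
    while stack_fits(wafer, n + 1):
        n += 1
    print(f"{wafer} mm wafer: up to {n} of the 186x56 mm die (no edge margin)")
```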


-hh
 
That said, it all boils down to real world image quality and the sample videos are very low resolution, inexplicably softer than the very soft (for 1080p) 5DII. The just-published stills are really soft, too. Kind of troubling...

I presume you mean the ones put out by Canon themselves. Yes, they're downright perplexing. The still shots presented by dpreview tell a completely different story, however. Those are very exciting and encouraging: http://www.dpreview.com/news/2012/03/02/canoneos5dmarkiii-isoseries
 
I presume you mean the ones put out by Canon themselves. Yes, they're downright perplexing. The still shots presented by dpreview tell a completely different story, however. Those are very exciting and encouraging: http://www.dpreview.com/news/2012/03/02/canoneos5dmarkiii-isoseries

Low-light/long-exposure performance are the things that would actually make it interesting to me. I'm not that surprised they merged the two 1D lines. 18 vs. 22 megapixels can mean very little in final output quality with equivalent sensor sizes; if you look at the final available pixels, it's really not that bad. I personally like being able to output at a lower resolution to minimize noise yet stitch a large panorama together for detail. It's quite annoying and takes up a lot of space, and it's something I've been trying to perfect (bleh at finding a good panorama head that will support my rig without any creeping; so many of them were disappointments with a 1Ds and a 70-200 f/4, which I partly chose to stay within the supported weight).

Any idea what the comparison will look like between the 1Dx and 5D MKIII? Given the long refresh cycles, I'm tempted to look at one of these.
 
Thanks to this camera I just secured a one-month-old 5D MkII for $1650 with all included accessories, a new focusing screen, and only ~1000 shutter clicks. Only problem is I need to wait for shipping confirmation of the MkIII before he will let me have it. :D

I almost paid that much for my 7D.
 
An old document but a good read on sensors nonetheless.

That math is wrong, since:

Canon 1.6x crop sensors: 22.3 x 14.9 mm (3.32 cm²)
Canon FF sensors: 36 x 24 mm (8.64 cm² )

That's only 2.6x larger, not 10x.

For a notional wafer that produces ~200 1.6x sensors, it would yield ~77 FF sensors.
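For reference, the quoted arithmetic is the straight area ratio, ignoring layout and edge effects:

```python
# Quick check of the quoted arithmetic (nominal sensor dimensions, yield and
# layout/edge effects ignored).
crop_mm2 = 22.3 * 14.9     # Canon 1.6x crop, ~332 mm^2 (3.32 cm^2)
ff_mm2   = 36.0 * 24.0     # Canon full frame, 864 mm^2 (8.64 cm^2)

ratio = ff_mm2 / crop_mm2
print(f"area ratio: {ratio:.1f}x")                               # ~2.6x, not 10x
print(f"FF sensors from a ~200-crop wafer: ~{200 / ratio:.0f}")  # ~77
```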

That PDF that Rebby linked to is a good read on full-frame sensors; I read it a few years ago, and that's where I got my information - straight from Canon. Here is an excerpt from page 11:

"Thin disks of silicon called “wafers” are used as the raw material of semiconductor manufacturing. Depending upon its composition, (for example, high-resistivity silicon wafers have much greater electrical field depth -- and broader spectral response -- than low-resistivity wafers) an 8" diameter wafer could cost as much as $450 to $500, $1,000 or even $5,000. After several hundred process steps, perhaps between 400 and 600 (including, for example, thin film deposition, lithography, photoresist coating and alignment, exposure, developing, etching and cleaning), one has a wafer covered with sensors. If the sensors are APS-C size, there are about 200 of them on the wafer, depending on layout and the design of the periphery of each sensor. For APS-H, there are about 46 or so. Full-frame sensors? Just 20.

Consider, too, that an 8" silicon wafer usually yields 1000 to 2000 LSI (Large-Scale Integrated) circuits. If, say, 20 areas have defects, such as dust or scratches, up to 1980 usable chips remain. With 20 large sensors on a wafer, each sensor is an easy “target.” Damage anywhere ruins the whole sensor. 20 randomly distributed dust and scratch marks could ruin the whole batch. This means that the handling of full-frame sensors during manufacture needs to be obsessively precise, and therefore they are more expensive."
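A toy way to see the excerpt's point about those 20 random flaws (assuming idealized point defects scattered uniformly over the wafer, which real defects are not):

```python
import math

# Toy illustration of the excerpt's point: scatter 20 point defects uniformly
# at random over an 8" wafer and ask how likely one die is to dodge all of
# them.  Real defects cluster and have finite size, so this only shows how
# quickly the odds worsen with die area.
wafer_cm2 = math.pi * (20.32 / 2) ** 2      # 8" = 20.32 cm diameter
defects = 20

for name, die_cm2, per_wafer in (("APS-C", 3.32, 200), ("full frame", 8.64, 20)):
    p_clean = (1 - die_cm2 / wafer_cm2) ** defects
    print(f"{name}: ~{p_clean:.0%} of die clean, ~{p_clean * per_wafer:.0f} good per wafer")
```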


Also, based solely on my memory, I think the $3299 price point was with the 24-105 lens; body only (as I bought it) was $2700. This was about a year after it came out, though; I do not recall the launch prices.
 
Thinking out loud, one could also consider developing a pre-fab screening test that scans a wafer and maps which zones appear to be free of flaws, and then adjusts the placement of one's mask to suit, i.e., don't use the known bad parts of the wafer. This strategy would reduce the per-wafer number of parts, but would improve net yields.

I'm pretty sure that 95% or so of the flaws are process-based for CMOS; that is, imaging-sensor wafers all start out mostly defect-free, so I don't think you can pre-map them and gain much at all. The gains will come in process, which is probably currently offset by the fact that sensors are now so dense that defects down to 2 microns or so are significant, where only a few years ago you could stop at about 10 microns.

Paul
 
I'm pretty sure that 95% or so of the flaws are process-based for CMOS; that is, imaging-sensor wafers all start out mostly defect-free, so I don't think you can pre-map them and gain much at all.

Mostly, although it also depends on the wafer quality grade that you want to start with, since if you're willing to accept a low-quality wafer, it can be dirt cheap (as in under $200). But at least silicon is a material for which you have the opportunity to order "near perfect" wafers ... good luck if you're working in GaAs, for example.


The gains will come in process, which is probably currently offset by the fact that sensors are now so dense that defects down to 2 microns or so are significant, where only a few years ago you could stop at about 10 microns.

Agreed, although the ever-shrinking size of traces is also indicative of what these materials currently allow to be built ... while retaining suitably high part yields to make the parts a cost-effective money-maker in a business line. For example, Intel's "3D Gate" is a 22 nm (nanometer) process, which is equal to 0.022 microns.


-hh
 