So... One needs to shoot with a mirrorless camera or dSLR in order to be considered a serious photographer?

An iPhone doesn't cut it? Why is that?

From my perspective, most photographs are opportunistic. It limits the opportunities considerably.

I've burned a lot of opportunities by carrying the iPhone only. I gained a huge number of decent shots with the Z50 that I couldn't have pulled off with the iPhone.

Also for staged photos it's a pain in the ass without decent aperture and DOF control.
 
the directional choice to lean into computational photography is correct: it has delivered more photographic value to a mass audience than anything in the 'pure' photography space, where changes have been incremental.

the improvement with the p14pro vs the p13/p14 baseline is that the sensor is already much larger.

for a given physical surface area, you can have:
- 1 photosite, where the data value is winner-takes-all at time of capture, or you can have
- 4 photosites, where the final data value can be determined later based on different requirements

the 2nd option is way more useful, even if it collects marginally fewer photons. it opens up more potential for optimizing on definition vs contrast vs color, etc.

Maybe, maybe not. It's very difficult to determine. Dynamic range is the most useful contributor to contrast and colour, and due to quantization you limit the dynamic range and raise the noise floor with 4 photosites vs 1. Definition is positional, so you have an advantage there.

Make the sensor 25% bigger, bump to 18MP, improve the optics and add 1mm to the thickness of the device and things get more interesting from all perspectives.
 
Oh, so for those of us who don't use ProRAW, as we're not professional photographers who edit photos, it will only take 12-megapixel shots? So they may not be massively different from the standard iPhone 14 range?

Thanks for that, so it'll automatically use pixel binning or the full 48 megapixels. Sounds good.
 
when and why would someone be transferring those files over USB 2.0?

The files would live natively on the device, or have been backed up to the cloud through cellular/wifi in the background.

If you want the ProRAW (and God forbid, 4K ProRES video) on your Mac for editing in Affinity Photo or something, you'd transfer them over to your Mac via cable.

It could take even longer over cellular/wifi - not to mention clog up your iCloud storage allotment. Not everyone has Gigabit class internet.
 
I just got the 256.
Only an extra $50 over the year because I upgrade every year.
I might still optimize eventually, though.
Any issues with optimizing?
The extra breathing room is always nice to have. No issues with optimizing. If you want to look more closely at or edit a photo that has been offloaded, you'll see the thumbnail initially and it will download when you tap on it. Really, it's just optimizing your storage.
 
That means during the capture window there is less information being captured per pixel, resulting in very coarse discretisation of data which can only be resolved through software afterwards. That means the higher the pixel count, the more we are at the mercy of computational photography and its perception of what was seen, rather than the actual data captured.
Nah, that's not how it works. More pixels at the same sensor area result in *finer* discretisation of the captured data. If you have a 48MP shot, you can reproduce the image you'd have gotten with a 12MP sensor exactly, just by adding the intensity values of the subpixels in groups of 4 (which is what Apple has called 'Quad Pixels').

The photodiodes of the CMOS sensor in cameras work by 'translating' the number of received photons during a given time frame into an electrical signal. If you have a single 'big' pixel in a given area of the sensor, let's say a 2.44x2.44µm region, all the info you'd get would be an electrical signal equivalent to saying '136 photons were detected in this pixel'. If you had 4 pixels in the same 2.44x2.44µm region instead, you'd get four electrical signals equivalent to '30, 45, 27 and 34 photons were detected'. You can recover the 'big pixel' number if you wanted to simply by adding those numbers -> 30 + 45 + 27 + 34 = 136 photons in the 2.44x2.44µm region. But instead of having a single data point, you have four, and you can infer things that you couldn't before (like the standard deviation of the distribution, for example).
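A minimal sketch of that summing in Python, using the hypothetical photon counts from above (a tiny 4x4 array stands in for the full 48MP sensor):

    import numpy as np

    # Hypothetical photon counts read from a tiny 4x4 patch of 'small' pixels.
    quad = np.array([
        [30, 45, 10, 12],
        [27, 34,  8, 11],
        [50, 52, 49, 47],
        [48, 51, 46, 50],
    ])

    # Sum each 2x2 group of subpixels to recover the 'big pixel' values a
    # quarter-resolution sensor would have produced (quad-pixel binning).
    h, w = quad.shape
    binned = quad.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

    print(binned)
    # [[136  41]
    #  [201 192]]  <- top-left is 30 + 45 + 27 + 34 = 136, as in the example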

That's SUPER useful for a lot of things that are done (and have always been done) in the built-in post-processing of digital images. It doesn't mean that the post-processing will be heavier; it means that the post-processing will be better informed. In fact, the effect can be the opposite: a better informed noise reduction algorithm can preserve more of the fine detail of the image because it'll be better equipped to tell it apart from background noise.

Outlier values in pixels that are caused by undesired effects (cosmic rays, ionising particles, thermal noise…) can be detected more accurately the smaller the pixel size is (because many of those effects affect single pixels) rather than when they're averaged with other, valid image data.

A very simple example of how you'd do that: you can median-average pixel values instead of mean-averaging them (which is what 'bigger pixels' essentially do). The median is a more robust estimator for the center value of a distribution than the mean when outliers are present, so you'd likely get less noise just by doing that.
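A quick sketch of why the median helps (made-up values; the 900 simulates a subpixel hit by a cosmic ray or a hot-pixel event):

    import numpy as np

    # Four subpixels covering the same region; one holds an outlier value.
    subpixels = np.array([30.0, 45.0, 27.0, 900.0])

    print(subpixels.mean())      # 250.5 -- the outlier drags the mean far off
    print(np.median(subpixels))  # 37.5  -- the median barely notices it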

The actual reason the pixels in CMOS sensors are not made as small as possible, and bigger pixels are sometimes preferred, is that there are parts of the photodiode surface (the edges) that can't capture photons (this is mitigated by using microlensing to redirect photons to the center of each photodiode). So if you cram too many pixels into the same area, more photons are lost to the edges of the photodiodes (because there are more edges), which can't detect them. It's a balance between how much data you want and how sensitive you want your sensor to be. Technology has now advanced to the point that you can recover more data while having minimal losses in light sensitivity.
 
I’ve got the 128GB PM on order. Will probably only use ProRaw once in a blue moon! Anyone know when the next blue moon is? I googled it… The next monthly Blue Moon is on August 30/31, 2023. The next seasonal Blue Moon takes place on August 19/20, 2024. Since both of them happen in August, they are traditionally called Sturgeon Moons. There is no Blue Moon in 2022.
 
If you want the ProRAW (and God forbid, 4K ProRES video) on your Mac for editing in Affinity Photo or something, you'd transfer them over to your Mac via cable.

It could take even longer over cellular/wifi - not to mention clog up your iCloud storage allotment. Not everyone has Gigabit class internet.

The advantage of wireless transfer is the upload's already happened, silently, in the background, before you even get back to your Mac.

When you're working on one photo on the Mac (that's why you used ProRAW, right?), the other ones are silently downloading in the background. On a typical residential 200Mbit connection, that's 4 seconds per 80MB photo. Surely, you aren't processing that fast.

The cable never comes out of the drawer.

Is there a *real world* scenario where a person is taking 100s of RAW photos and requires instant turnaround on the computer, such that USB 2.0 (60MB/s, i.e. almost 1 photo/second) is insufficient? It's a fictitious problem.
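For reference, the rough per-photo math (assuming the 80MB ProRAW size mentioned above, and treating USB 2.0's 60MB/s as a theoretical ceiling):

    # Rough per-photo transfer times (assumptions: 80 MB per ProRAW file,
    # 60 MB/s theoretical USB 2.0, 200 Mbit/s residential broadband).
    file_mb = 80

    print(file_mb / 60)       # ~1.3 s per photo over USB 2.0 (best case)
    print(file_mb * 8 / 200)  # ~3.2 s per photo over a 200 Mbit/s link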
 
From the department of the obvious: recording more pixels (by dimension) equals more disk space! For most it's a non-issue. Besides, I'd always use a binned version anyway to increase SNR and improve color. In any case, the vast majority of people consume their images on what is basically a 2-4 megapixel display.
Sorry to be pedantic, but SNR always goes down (less "per pixel" dynamic range) when pixel size is reduced (given the same detector sensitivity level). Binning is a computational means to overcome this. E.g.:

"After deciding on the photodetector type and pixel architecture, a fundamental tradeoff must be made to select pixel size. Reducing pixel size improves the sensor by increasing spatial resolution for fixed sensor die size. Increasing pixel size improves the sensor by increasing dynamic range and signal-to-noise ratio. Because changing pixel size has opposing effects on key imaging variables, for a given a set of process and imaging constraints, an optimal pixel size may exist."

Link:
 
Sorry to be pedantic, but SNR always goes down (less "per pixel" dynamic range) when pixel size is reduced (given the same detector sensitivity level). Binning is a computational means to overcome this. E.g.:

"After deciding on the photodetector type and pixel architecture, a fundamental tradeoff must be made to select pixel size. Reducing pixel size improves the sensor by increasing spatial resolution for fixed sensor die size. Increasing pixel size improves the sensor by increasing dynamic range and signal-to-noise ratio. Because changing pixel size has opposing effects on key imaging variables, for a given a set of process and imaging constraints, an optimal pixel size may exist."

Link:

the comparison is not large vs small pixel for a given physical sensor size, at the capture level

the comparison is large vs small pixel at the final output level, i.e. a decision between preserving original capture resolution vs binning.

nwcs says he prefers an enlarged final pixel, derived from binning of smaller capture pixels (representing the same image frame), because it improves SNR. which is true, because the binning process averages the smaller pixels together, which effectively averages out the noise, hence boosting SNR.
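a quick simulation of that effect (made-up numbers; assuming independent gaussian noise per capture pixel, averaging 4 of them cuts the noise by roughly sqrt(4) = 2):

    import numpy as np

    rng = np.random.default_rng(0)

    true_signal = 100.0  # 'clean' value per small capture pixel
    noise_sigma = 10.0   # per-pixel noise
    small = true_signal + rng.normal(0, noise_sigma, size=(1000, 4))

    # bin 4 small pixels into one output pixel by averaging
    binned = small.mean(axis=1)

    print(small.std())   # ~10 -- per-pixel noise before binning
    print(binned.std())  # ~5  -- noise roughly halved, so SNR roughly doubled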
 
With all that Apple is doing, not expanding the 5GB level of iCloud storage is ridiculous. It's one of the few things that Apple has never increased as all else has grown. The free level should be at least 20GB now, although I'd accept, for now, an increase to 10GB. Come on Apple, do the right thing here.
 
Keep in mind that taxes, warranties, currency fluctuations, return periods, and the legal environment are all different, so prices will reflect those facts. It is not as simple a comparison as it seems on the surface.
And the fact that the EU wants to force Apple to adopt USB-C, allow multiple app stores, etc. would cost Apple, so they are getting it back in the cost of the phone.
 
Why exactly would anyone outside of pro photography require 48MP?

If you are a pro I doubt you'd buy an iPhone in the first place, maybe for test shots I guess (like a Polaroid was in the old days)
 
iPhone 14 sensor: 8,064 x 6,048
8k TV Resolution: 7,680 x 4,320

So, why no 8k video recording, hmm? Even if it's rough and un-digitally-stabilized?
 
That's why I've chosen the 256GB version
That’s why I will probably choose the 512GB version. TBH, I’m even considering the 1TB version because when you’re spending this much on an iPhone, an extra $200 to double your capacity actually doesn’t sound that bad, especially when you recall some of the other things you’ve spent, or more to the point wasted, $200 on in the past. This makes the 128GB version look even worse than it did before.
 
iPhone 14 sensor: 8,064 x 6,048
8k TV Resolution: 7,680 x 4,320

So, why no 8k video recording, hmm? Even if it's rough and un-digitally-stabilized?

the technical requirements for 8k video are MASSIVE.

8k resolution is a 33MP image... and now, you need to record 24/30/60+ of those images PER SECOND.

8K/24 video in raw form occupies like 70 gigabytes PER MINUTE. so you need to encode/compress it on the fly instead... imagine the processing power and heat needed to compress that volume of data.

also, reading the sensor itself at that rate is a big thermal demand.
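rough math behind that 70GB figure (assuming 12-bit raw, which is a guess):

    # back-of-envelope: raw 8k data rate, assuming 12 bits per pixel
    pixels_per_frame = 7680 * 4320                # ~33.2 million pixels
    bytes_per_frame = pixels_per_frame * 12 / 8   # ~49.8 MB per raw frame
    bytes_per_minute = bytes_per_frame * 24 * 60  # 24 fps for one minute

    print(bytes_per_minute / 1e9)  # ~71.7 GB/min, in line with 'like 70 GB'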
 
256GB + 2TB iCloud must be enough for it, so I have time to get connected and upload them to the cloud. And I hope for an upgrade of the iCloud storage tiers (at the same price) anytime soon.

However, at the moment, with all my photos (thousands since 2012) in my iCloud storage, I am only using 50% of it.

Also, I hope the camera app will make it easy to select when you want RAW photos and when you don't.
 
I would be more concerned about iCloud storage since the phone will offload photos to the cloud. Right now we have 5 iPhones sharing 2TB of iCloud storage and we are using 1.6TB of it. So I am getting kinda anxious for Apple to provide a 3TB or 4TB tier -- preferably making it the new top tier for the same price.
 
The advantage of wireless transfer is the upload's already happened, silently, in the background, before you even get back to your Mac.

When you're working on one photo on the Mac (that's why you used ProRAW, right?), the other ones are silently downloading in the background. On a typical residential 200Mbit connection, that's 4 seconds per 80MB photo. Surely, you aren't processing that fast.

The cable never comes out of the drawer.

Is there a *real world* scenario where a person is taking 100s of RAW photos and requires instant turnaround on the computer, such that USB 2.0 (60MB/s, i.e. almost 1 photo/second) is insufficient? It's a fictitious problem.

USB 2 is nowhere near 60MB/s in the real world. Actual speeds are closer to half that, I'd say. USB 4 is faster than any wireless connection. And on a "Pro" device, it's really non-negotiable in my mind. The iPad Pro has Thunderbolt because it's faster than Lightning or wireless when transferring large files. Why can't the iPhone Pro?
 
From my perspective, most photographs are opportunistic. It limits the opportunities considerably.

I've burned a lot of opportunities by carrying the iPhone only. I gained a huge number of decent shots with the Z50 that I couldn't have pulled off with the iPhone.

Also for staged photos it's a pain in the ass without decent aperture and DOF control.

"From my perspective, most photographs are opportunistic."

Yep.


"It [an iPhone] limits the opportunities considerably."

Nope. Certainly not for me. A phone camera enhances opportunities. I have made many photographs that I otherwise would have missed, as I don't carry my so-called "real" cameras (dSLRs, mirrorless, 4x5, etc) over my shoulder 24/7, everywhere I go in life.


"I've burned a lot of opportunities by carrying the iPhone only."

And I've taken advantage of a ton of opportunities by having an iPhone in my pocket, making photographs of people/landscapes/events/unusual occurrences/and on and on, when out and about.


Making compelling photographs has very little to do with gear, the kind/brand of camera you own, etc. Rather, it's about one's ability to see, to conceptualize a photo, and to judge light; about imagination, education, and life experiences; about the ability to compose, to determine what should or should not be in the frame or left in the shadows, to understand how an image releases its narrative, etc.

When I meet another photographer on the street I might ask the question: "What do you shoot?"

If the answer is something like: "I shoot a Canon 7D with a 24-70 zoom," I'd likely say "that's nice" and move on. No use having another boring conversation about gear; who makes the best glass, Canon vs Nikon, and on and on.

If the answer is instead something like: "I make photographs of people and situations in underserved San Francisco neighborhoods dealing with the consequences of gentrification," I'd probably respond with "Care to have a beer and talk about photography?"

Compelling photographs/photography is not about gear.
 
Will these new sensors translate to dramatically improved video quality? If so and someone has seen a post/video discussing it, I would love to see it. Trying to decide if it’s worth the dip for me :)
 
From what I understood watching the event today, it takes 12MP when taking normal photos but uses the full 48MP when shooting ProRaw. That's what I understand from 120:10 in the event video. https://youtu.be/ux6zXguiqxM
From what I heard, they said they are using compression to maintain a size equivalent to a 12MP image, but offering a ProRaw option that will be uncompressed and obviously much larger.
 
I guess we all have our opinions on this. Here's mine :)

I used to have a dSLR and a good compact (with full manual controls). They took way better photos than my phone, but ... I'm not a photographer. My photos are really just to record my life - photos of my friends, where we travel to etc.

I went on one holiday with just my iPhone 6S as it was a cycling holiday and I was worried what would happen if someone stole my camera or the SD card corrupted. The iPhone backs up to the cloud.

The photos were ok and it meant I only had to charge one device while travelling. I sold my cameras and now just have my iPhone 11. It does the job. Sure, I'd love better low-light photography, but I really wish Apple went with a standard 12MP sensor with bigger light-capturing pixels instead of using 48MP and binning. However, my 11 still works well and does what I need, so I'm sitting this year out too. Let's see what iOS 17 and the iPhone 15 bring this time next year.
 