I've seen a few references here to using RAW so as to avoid the smartHDR. I'm afraid this may be a myth.
I took some photos in both 'normal' and 'RAW' format, in an artificial cave that was lit by a studio light. That light was not washed out in RAW, as it should have been. Instead, it had the same smartHDR effect as the 'normal' image, which turned the light source blue rather than the white it would have been without HDR - not to mention the 'ghosting' / diffusion that occurs (most visible on the rocks) because of the HDR effect.
Edit: I converted the RAW .dng to .jpg and added the file below. See what I mean about the massive amount of processing in that untouched RAW image? It looks identical to the 'normal' smartHDR pics I took in that same cave.
In other words, using RAW did *not* solve the HDR/processing problem.
That is just sensor data in a digital negative (DNG); you need to process (develop) the negative into a photo before anyone can judge it. What I see isn't heavy processing but a total lack of it: the white balance is off, some of the colors fall outside the range of displayable color spaces, it's flat from unadjusted dynamic range, and all of that is making it look noisy, etc etc.
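To make the "develop" step concrete, here's a toy sketch (pure Python, nothing like Apple's or any real raw converter's pipeline) of the two most basic moves a developer makes on linear sensor values: white balance, then gamma encoding for display. The gain and gamma numbers are invented for illustration; a real developer reads the neutral point from the DNG's metadata.

```python
def develop_pixel(r, g, b, wb=(2.0, 1.0, 1.5), gamma=2.2):
    """Toy 'develop' of one linear pixel (channel values in 0..1).

    The wb gains here are made up; a real raw developer derives them from
    the DNG's as-shot white balance metadata.
    """
    # Step 1: white balance -- scale channels so a neutral subject reads neutral.
    balanced = [min(c * k, 1.0) for c, k in zip((r, g, b), wb)]
    # Step 2: gamma-encode for display (sRGB is roughly a 1/2.2 curve).
    return [c ** (1.0 / gamma) for c in balanced]

# Undeveloped, this pixel looks dark and blue-tinted; developed, it doesn't.
print(develop_pixel(0.10, 0.20, 0.40))
```

Skip either step and you get exactly the symptoms above: wrong tint, flat and dark midtones.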
Smart HDR can be turned off in ProRAW; it's just tags in the DNG that map to sliders in your editor. The problem with that photo, though, is that it's not a photo yet, it's a negative.
ProRAW is still a RAW, just with a built-in demosaic step that records scene-referred range values (NOT the display's color-space range). Embedded in the file is data for the original dynamic range and the original color values, both of which still need to be mapped to a display color space and a proper dynamic range.
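For anyone wondering what "demosaic" actually means: each sensor pixel records only one of R/G/B behind a Bayer filter, and the missing channels are interpolated from neighbours. A minimal bilinear sketch (pure Python, interior pixels only, nothing like Apple's actual algorithm):

```python
def green_at(mosaic, y, x):
    """Estimate the green channel at a red or blue Bayer site by averaging
    its four green neighbours (bilinear demosaic, simplest possible case)."""
    return (mosaic[y - 1][x] + mosaic[y + 1][x] +
            mosaic[y][x - 1] + mosaic[y][x + 1]) / 4.0

# 3x3 patch of an RGGB mosaic; the centre (1, 1) is a blue site whose four
# direct neighbours are all green samples.
patch = [[1, 2, 1],
         [2, 9, 2],
         [1, 2, 1]]
print(green_at(patch, 1, 1))  # averages the four green 2s -> 2.0
```

ProRAW ships with this step already done, which is what lets Apple attach its scene data on top.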
Apple's computational photography (namely Smart HDR and Deep Fusion) exists for several reasons. First, it makes up for the hardware that doesn't fit in a phone. Second, it offers a path for consumers who want RAW capture but find the jump into RAW editing unrealistic. Third, it takes over some of the heavy lifting normally done in post, so that work can still be accomplished on a mobile device.
Smart HDR, contrary to what the hysteria would lead everyone to believe, is in charge of defining the values for the tags in the DNG: local bright and shadow regions, local tone mapping, local linearization, local sharpening, and local exposure. Since the demosaic step is already done, this Smart HDR data is non-destructive, it's stored separately from the sensor's captured image. Smart HDR is just data connected to sliders in your editing suite of choice. If you don't want to use it, you don't have to: set the sliders to zero. Although setting that data to zero doesn't leave you with much.
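A rough sketch of that "sliders" idea: the file carries the demosaiced image plus adjustment metadata, and the editor decides how strongly to apply it. The tag names and the math below are invented for illustration, they are not real DNG or ProRAW tags.

```python
def render(base_luma, adjustments, strength=1.0):
    """Apply metadata-driven adjustments to one pixel's luminance (0..1).

    'adjustments' stands in for the Smart HDR data stored alongside the image;
    the keys are hypothetical, not actual DNG tags. strength=0.0 ignores the
    metadata entirely and leaves the base image untouched -- non-destructive.
    """
    exposure = adjustments.get("exposure", 0.0) * strength   # in stops
    shadows = adjustments.get("shadow_lift", 0.0) * strength
    out = base_luma * (2 ** exposure)
    if base_luma < 0.5:              # crude stand-in for a local shadow mask
        out += shadows
    return min(max(out, 0.0), 1.0)

meta = {"exposure": 1.0, "shadow_lift": 0.1}
print(render(0.25, meta, strength=1.0))  # sliders applied -> 0.6
print(render(0.25, meta, strength=0.0))  # sliders zeroed -> 0.25, untouched
```

The point: the captured image underneath never changes, only how the metadata is rendered on top of it.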
With Smart HDR and local tone mapping you have not only the RAW data to work with but the computational photography data as well. Before on the left, after on the right.
Deep Fusion is reported to handle denoising in low and medium light; in normal photo mode it prevents noise from landing in the first place. More computational photography to make up for using the wrong camera hardware on the wrong shot, but in my opinion it does an amazing job.
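The core idea behind multi-frame denoising (which Deep Fusion builds on with machine learning) is simple: average several frames of the same scene and the random noise cancels while the signal stays. A deterministic toy example with hand-picked "noise":

```python
# Five 'frames' of the same pixel, each off from the true value of 0.5 by
# symmetric hand-picked noise (real frames would carry random sensor noise).
frames = [0.42, 0.58, 0.47, 0.53, 0.50]
fused = sum(frames) / len(frames)

worst_single = max(abs(f - 0.5) for f in frames)  # worst frame is off by 0.08
fused_error = abs(fused - 0.5)                    # the average is nearly exact
```

Deep Fusion does the merge per-region with ML-guided weighting rather than a naive mean, but the noise-cancellation principle is the same.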
Bayer RAW via Halide
Apple ProRAW
It's a bit tough to tell, but look at all the noise on the stems near the dirt in the standard Bayer RAW versus the ProRAW. I got those images from this video by Josh Stunell; he's a pretty good photographer and I recommend watching it.
Anyway, Apple does a lot for us to make this easy, but it's still a RAW in a digital negative format that needs to be processed, graded, and exported as a photo.
My Instagram feed is full of amazing iPhone 13 Pro pics captured in ProRAW by photographers around the world, and I skimmed tons of good photographer reviews to find the pics above. We must all have just gotten a bad batch of iPhones; nothing to do with skill or talent or the lack thereof, no sir...