Metal is the most predictable of all; intensity is a matter of light placement, modification, and power. I mean that reflective objects are pretty much product photography 101, and knocking out an iPhone is pen-tooling 101 regardless of the resolution.
The image at the start of this thread is nice, but it's most likely the product of 5 or so captures/layers, not including the screens. The phones appear to be identical, and based on the angle of the image, the apple and logo on the back of the first phone would reflect the back of the second phone if they were shot in place.
I know I've had to knock out product shots and illustrate simple backgrounds myself at times. I don't know anyone who would shoot those all in the same place; it would just create a lot of extra reflections. Regarding metal, note that it was tied to what I mentioned about creating shaders. I put up a link regarding complex index of refraction. Raytracers often take awkward approaches to it, which I mention because people brought up renders. Whatever shop produces these images is most likely using something like mental ray. A common workaround is to set the shader's index of refraction excessively high, but I think it looks weird; the reflections tend to look flat compared to real metal.
It's obviously different photographing it, but do you think they didn't adjust those corner gradients in post? Those seem like the most likely touch-up point to me. I figured they shot the phones separately, did all the outlines, brightened up certain buttons, cleaned up any internal dust, and handled the screen drop-in and surface reflection that way (because of the screen change).
Thanks for your reply. Most of that sounded like "sdads adoahdaofoag asodisahjdopasg aposfdihjaofsdffhofhaosihfohja." But all the same, a great read!
Reading it back, it looks like I wrote it in a caffeine frenzy. I'll explain better. It's typical to render things as polygons, but CAD data doesn't start off that way; it's often made of patches. It has to be converted into three- or four-sided polygons at a density that won't make the renders look jagged. They can't use the typical render-time methods that smooth out closeups of polygonal objects, because those tend to flatten things out. CAD models are different: the apparent curvature between anchor points keeps increasing as you sample at a finer level. I could probably mock up an example.
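Actually, here's a quick toy version of the sampling-density point. A quarter circle stands in for a patch boundary, and the error figure is the worst-case gap between the true curve and its polygonal approximation (the names and setup are mine, just for illustration):

```python
import math

def tessellate_arc(radius, segments):
    """Approximate a quarter circle with a polyline of `segments` chords."""
    pts = []
    for i in range(segments + 1):
        t = (math.pi / 2) * i / segments
        pts.append((radius * math.cos(t), radius * math.sin(t)))
    return pts

def max_chord_error(radius, segments):
    """Worst-case distance between the true arc and a chord of the polyline.
    For a circular arc this is the sagitta: r * (1 - cos(theta / 2))."""
    theta = (math.pi / 2) / segments
    return radius * (1 - math.cos(theta / 2))

for n in (4, 16, 64):
    print(n, round(max_chord_error(1.0, n), 6))
```

The point is that the "jaggedness" only goes away by throwing more polygons at it up front; no amount of render-time normal smoothing changes where those chords actually sit.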
When I said "UV" I just meant laying out three-dimensional faces in 2D so that 2D textures can be applied without excessive stretching. This would be important when applying an image replacement to the glass. It shouldn't be terribly difficult or important on a phone, but it varies; if there are any problems applying an even texture, you end up dealing with UVs in most situations, even if it's just resolving overlap.
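For a mostly flat face like the glass, the simplest possible unwrap is a planar projection: drop one axis and normalize what's left to the 0-1 texture square. This is just a sketch of the idea, not how any particular package does it:

```python
def planar_uvs(vertices, axis=2):
    """Project 3D vertices to 2D by dropping one axis, then normalize to [0, 1].
    Fine for a nearly flat face like a phone's glass; a curved surface would
    stretch under this and need a proper unwrap."""
    coords = [[v[i] for i in range(3) if i != axis] for v in vertices]
    us = [c[0] for c in coords]
    vs = [c[1] for c in coords]
    u_min, u_rng = min(us), (max(us) - min(us)) or 1.0
    v_min, v_rng = min(vs), (max(vs) - min(vs)) or 1.0
    return [((u - u_min) / u_rng, (v - v_min) / v_rng) for u, v in coords]

# A phone-screen-ish quad sitting slightly above z = 0:
quad = [(0.0, 0.0, 0.1), (6.0, 0.0, 0.1), (6.0, 12.0, 0.1), (0.0, 12.0, 0.1)]
print(planar_uvs(quad))  # corners land on (0,0), (1,0), (1,1), (0,1)
```

The "resolving overlap" part is exactly what this ignores: two faces can project onto the same spot, and then you're back to manually splitting and laying out shells.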
Light linking means you can tell a "fake" CG light to affect only certain objects in the scene, cast shadows only, or break any other physical rule. In photography, things that are difficult to light might be shot under different lighting setups and comped together; this is common with cars when it comes to flanges and wheel flares. Light linking just alleviates the need to do separate renders for that, but it should be used sparingly.
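If it helps, the mechanism is trivially simple at its core. A hypothetical mini-shader, where a light either affects everything (normal behavior) or carries a list of the only objects that see it:

```python
class Light:
    def __init__(self, name, intensity, linked_to=None):
        self.name = name
        self.intensity = intensity
        # None = affects everything (the physically "honest" default);
        # a set of object names = light linking.
        self.linked_to = linked_to

def shade(obj_name, lights):
    """Sum the intensity of only the lights linked to this object."""
    return sum(l.intensity for l in lights
               if l.linked_to is None or obj_name in l.linked_to)

lights = [
    Light("key", 1.0),                    # lights the whole scene
    Light("wheel_kick", 0.6, {"wheel"}),  # only the wheel sees this one
]
print(shade("body", lights))   # 1.0
print(shade("wheel", lights))  # 1.6
```

The "use sparingly" caveat is because the wheel kick never shows up in the body's reflections or bounce light, which is exactly the kind of inconsistency the eye eventually picks up on.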
Wiki has a good enough explanation of the issue of creating artificial metallic reflections. Of course, whatever studio deals with this stuff would already have starting values worked out for generic metals in whatever rendering package they use, which can then be tweaked to get the right look.
Anyway, it still takes time to set that stuff up. There are plenty of things that can be rendered but where photographers are still employed. Renders are most common when a physical model isn't available for whatever reason, or for things that would be ridiculously expensive to construct within the available timeline. It hasn't put all of these photographers out of business, but I suspect it has reduced some of their workload.