> Is the A14 Bionic not powerful enough to run this?

Nope, sorry. I'm in the same A14 Bionic boat as you.
> The CPU is making the background blur; it is not "true", just a special effect, so it could be done on the iPhone 12 if Apple wanted to enable it. The new CPU is not 100% faster than the previous one, not even 30% faster! What is the excuse?

Per the interview, the real improvements that enabled this were in the GPU and the Neural Engine. The GPU is much faster than last year's.
> Ok, but WHY did they create this?

I think the interview says it well: they looked at traditional storytelling techniques, saw that using focus to direct attention has been a constant in the filmmaker's toolbox, and wanted to make that ability available to us phone users for our own videos.
My best guess is that it's there to make the 13 sound less like a negligible update from the 12? Sometimes it'd be great if Apple took a year out and came back with more impressive tech instead of all these gimmicky increments.
> Yes it is, and maybe the A13 and the A12.

The A15's biggest year-over-year improvements over the A14 were in the GPU and the Neural Engine. This feature, according to the interview, relies heavily on both. I'm sure the A14 couldn't handle it.
But Apple needs money, money, money.
New features: buy a new iPhone!
Here's a movie 15-year-old Steven Spielberg made with his friends and an old 8mm movie camera from his dad.
> This doesn't explain how they created it.

No it doesn't, does it? Title's a bit misleading, yes?
> More or less the truth... There is but one advantage with a smartphone, and that is you've always got it with you, period.

I never heard of a serious filmmaker who forgot his equipment at home. 🤣
Cinematic mode is a gimmick, maybe fine for the self-portrayal clowns of crap social platforms.
> Yes, it is artificial, but anything but simple. The footage I've seen is impressive for its ability to mimic the focus effect, but it doesn't convince me yet. I'm sure future phones will ramp up to 4K with even more convincing depth of field. One thing's for certain: computational video and photography is here to stay, and will allow for creative uses we cannot even imagine yet.

Absolutely. What worries me is that we have to learn to live with something that is simply not there... for the time being, before they do it with a true lens (they have a patent ready for a periscope lens). But yeah, generally we just need to move a bit away from reality.
> I never heard of a serious filmmaker who forgot his equipment at home. 🤣

Neither did Apple ever think of Spielberg filming with an iPhone. We are talking about the rest of us 🤣🤣😂
> Physics has a lot to do with that. There is a serious limitation on depth of field due to the sensor and lens size. A phone camera just can't capture the same DoF as a full-size camera.

Basically the only limitation is aperture (depth of field is inversely proportional to it), but as we know the iPhone 13 now has faster lenses (f/1.6?), and Apple also has a patent that will allow even faster lenses. So you never know what they are capable of doing. They can do wonders.
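For what it's worth, the physics claim is easy to sanity-check with the standard thin-lens depth-of-field approximation. Here's a minimal Swift sketch; the focal lengths and circle-of-confusion values are rough illustrative assumptions, not exact iPhone specs:

```swift
import Foundation

// Thin-lens depth-of-field approximation (valid well inside the hyperfocal
// distance):  DoF ≈ 2 * s^2 * N * c / f^2
// s = subject distance, N = f-number, c = circle of confusion, f = focal length.
func approximateDoF(subjectDistanceMM s: Double,
                    fNumber N: Double,
                    cocMM c: Double,
                    focalLengthMM f: Double) -> Double {
    return 2 * s * s * N * c / (f * f)
}

let subject = 2000.0 // focus at 2 m

// Phone-style camera: ~5.7 mm lens at f/1.6, tiny sensor (CoC ~0.006 mm, assumed)
let phoneDoF = approximateDoF(subjectDistanceMM: subject, fNumber: 1.6,
                              cocMM: 0.006, focalLengthMM: 5.7)

// Full-frame camera framed the same way: 26 mm lens at f/1.6, CoC ~0.029 mm
let fullFrameDoF = approximateDoF(subjectDistanceMM: subject, fNumber: 1.6,
                                  cocMM: 0.029, focalLengthMM: 26.0)

print(String(format: "Phone DoF:      ~%.1f m", phoneDoF / 1000))     // ~2.4 m
print(String(format: "Full-frame DoF: ~%.1f m", fullFrameDoF / 1000)) // ~0.5 m
```

Same framing and same f-stop, but the tiny sensor gives roughly four to five times more depth of field, which is why the shallow-focus look has to be simulated in software.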
> Just like every other camera. None of them exactly match the human eye. The idea that an effect is "artificial" if it's produced by software but "natural" if it's produced by (artificial) lenses is nonsense. Like saying food is "inorganic" if it isn't grown by "natural" farming methods.

Yes, but there is gradation. Let's just say that there are lenses that create a very pleasing, gradual bokeh effect. They cost a lot, and I don't really know how close to human vision that is, but they are accepted by professionals, so...
> This doesn't explain how they created it.

The consequences of putting up a marketing person or someone in a C-level suit to explain or tell us "how this magical feature was made".
> Watch the Julia Wolf music video. Cinematic mode is new software, but you can get controllable depth in practice. Once more talented people get a handle on it, you'll see it used very well. It'll never be perfect, it's a thin smartphone controlled by software, but Portrait mode also had a rocky start.

Can get behind this. In fact, Portrait mode looked quite decent on a photo of some plants I took a couple of days ago… what the on-screen viewfinder shows is a very low-quality preview of how the image will look (the usual blurred edges bleeding into the background look severalfold worse than in the final picture).
> So what.

Millions and millions of Instagram, TikTok, YouTube vloggers, etc. also agree with you.
Does anyone really believe that buying that sports car will make them a better driver, or that a kitchen gadget will make them a chef? Of course not, but professional drivers and chefs will take their money and help the marketing people give us what we want. And we want to be seduced...
We don't have our own TV studio, so the fact it's not the same as pro gear makes no difference. Most people have no want or need for pro gear anyway. What people want is a good enough image to share with their friends, so they can see it on the 6" screen of their phone. This particular feature will appeal to some and will allow them to justify the purchase to themselves. For others, it may be longer battery life or whatever. None of us need this stuff, but we do want it.
It's been going on since the dawn of advertising and this is no different.
Look what happened to Nike once they got Michael Jordan on board. It was another shoe, just like all the others they had made, but suddenly the target market was seduced and Nike went massive.
> No it doesn't, does it? Title's a bit misleading, yes?

If the Neural Engine is really being used, there are algorithms that reconstruct and extract depth from a video or from several images over a considerable range. With that, some fake DoF can be simulated. I wouldn't be surprised if something like this is being done… in which case, the fact that it's happening in real time, plus the awareness of all the faces/objects/elements in the scene, the direction of eye gaze, etc., makes me believe that maybe it really does need all of the A15, at least today.
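To make the fake-DoF idea concrete: Core Image already ships a filter that varies blur radius according to a grayscale mask, so once you have a per-pixel depth map, the blur itself is the easy part. A minimal sketch (`videoFrame` and `depthMap` are placeholder inputs you'd have to supply from a depth source yourself):

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Fake a shallow depth of field on one frame using a grayscale depth map.
// Brighter mask pixels get more blur; black pixels stay sharp.
func simulateShallowDoF(videoFrame: CIImage,
                        depthMap: CIImage,
                        maxBlurRadius: Float = 12) -> CIImage? {
    let filter = CIFilter.maskedVariableBlur()
    filter.inputImage = videoFrame
    filter.mask = depthMap          // grayscale: brighter = more out of focus
    filter.radius = maxBlurRadius   // radius applied at the most-defocused depth
    return filter.outputImage?.cropped(to: videoFrame.extent)
}
```

The hard part is presumably what the interview hints at: producing a clean depth map and a focus decision for every frame in real time, which is where the GPU and Neural Engine would earn their keep.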
Me? I was guessing they'd reused the face tracking routines along with the "Require Attention for Face ID" algorithms. Simply applying them to all faces in frame simultaneously, and then focusing on whichever face is showing the most Attention.
Or something like that.
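Just for fun, here's roughly what that guess could look like with the public Vision APIs. Apple doesn't expose the Face ID "Attention" signal, so this sketch (the names `pickFocusFace` and `attentionScore` are made up) crudely approximates attention with face size and head yaw; it is not Apple's algorithm:

```swift
import Vision
import CoreVideo

// Find every face in a frame, then pick a focus target by a crude
// "attention" heuristic: bigger and more frontal faces score higher.
func pickFocusFace(in pixelBuffer: CVPixelBuffer) throws -> VNFaceObservation? {
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
    return request.results?.max { attentionScore($0) < attentionScore($1) }
}

// Stand-in for the real attention signal, which isn't public.
private func attentionScore(_ face: VNFaceObservation) -> Double {
    let size = Double(face.boundingBox.width * face.boundingBox.height)
    let yaw = abs(face.yaw?.doubleValue ?? 0)   // radians; 0 = facing the camera
    return size * max(0, 1 - yaw / (.pi / 2))   // penalize turned-away faces
}
```

Whichever face wins would get the focus pull, with a rack to someone else whenever their score takes over.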