Animals rarely move in such a linear way; only mechanisms do. As an animator, I know that creatures move in specific ways: curves, specific motions, etc. It's just the brain picking up on the linear movement and finding it odd. I guess it's all to do with the number of photos. I, for one, welcome the photo-robot replacements of my friends and relatives - not creepy at all.
 
People are just scared of the unknown, even when it's NOT creepy. To me, that's more creepy than creating a mathematical tool that merely appears creepy to people who don't understand it (like applications of ML/NN).

Some people are natural cavemen (not willing to change) and are scared of things they don't understand.


Two people are standing in front of you holding a camera. One is wearing a badge that says "press". The other is wearing a trench coat but it's open at the bottom and he clearly isn't wearing any pants or underwear. He's touching himself with one hand.


That's the sort of comparison we're talking about here. Your argument is "cameras aren't creepy just because they capture the moment". Our argument is: intent matters too.
 
Instead of screaming "CREEPY!", I highly advise any of you to read the book "An Introduction to Statistical Learning: With Applications in R"
Huh? This is creepy, but not because:
- machine learning isn't cool, or
- the result doesn't look natural, as others have said.

Both are completely missing the point. This is creepy because Google fabricates new, realistic-looking moving pictures with you in them. It doesn't matter if it's ML. What matters is where it will stop. Next thing: based on 10 pictures in your camera roll, they will create movies of you in surroundings of your choice. And after that, activities of your choice, and so on. Completely faking reality. That's the creepy part.
 
Animals rarely move in such a linear way; only mechanisms do. As an animator, I know that creatures move in specific ways: curves, specific motions, etc. It's just the brain picking up on the linear movement and finding it odd. I guess it's all to do with the number of photos. I, for one, welcome the photo-robot replacements of my friends and relatives - not creepy at all.
Do you also animate people's private photos?
 
Live Photos is literally a recording of a few seconds before and after the actual still photo. This is an advertising company analysing all your photos and determining their content.
They will embrace any new technology that comes their way, and eventually get devoured by it.
 
Live Photos is literally a recording of a few seconds before and after the actual still photo. This is an advertising company analysing all your photos and determining their content.
I'm talking about the end product; do you really expect them to add subliminal messages to it? The little animation doesn't seem creepy at all.

I do agree that Google and their data mining are problematic, but for people already using the service (I'm not one of them), I guess this is similar to Live Photos.
 
Instead of screaming "CREEPY!", I highly advise any of you to read the book "An Introduction to Statistical Learning: With Applications in R"

A good book for people who are full of it and think ML is creepy or scary!

Good luck 😁

Yes, I stopped thinking nuclear bombs were terrible weapons of mass destruction once I read a book about nuclear physics.

Hyperbole aside, nobody cares what the code looks like; it's creepy because your brain knows what you're seeing didn't actually happen. As others have pointed out, Live Photos are a GIF of what actually happened, whereas this is a completely synthetic animation. I don't like it at all, and reading up on R isn't going to change my mind.
 
Yes, I stopped thinking nuclear bombs were terrible weapons of mass destruction once I read a book about nuclear physics.

Hyperbole aside, nobody cares what the code looks like; it's creepy because your brain knows what you're seeing didn't actually happen. As others have pointed out, Live Photos are a GIF of what actually happened, whereas this is a completely synthetic animation. I don't like it at all, and reading up on R isn't going to change my mind.
It's making you question your own memories by fabricating new ones.
 
It's more basic than that: it wasn't Apple who created it, therefore it's creepy by default. The Google hate mob is always on standby.

Had it been Apple, it would have been nothing but Magical.

I don't care if Apple made it or if Google made it; I don't like the idea of synthetic 'life' being added to a photograph. Live Photos are short videos of what actually happened. Google's feature is a weird-looking, lifeless animation.

And oh yeah, regardless of this feature's existence, Google having access to all of my precious life moments is significantly worse than Apple having it (even though Apple's no saint, we know). You're acting like that "Google hate mob" exists only because of people's adoration of Apple and not because of Google's malicious, disgusting, and vile dedication to tracking every detail of your life so they can sell you stuff.
 
I'm talking about the end product; do you really expect them to add subliminal messages to it? The little animation doesn't seem creepy at all.

I do agree that Google and their data mining are problematic, but for people already using the service (I'm not one of them), I guess this is similar to Live Photos.
Ok so you don't want to know how the sausage is made.

The difference, obviously, is that one is reality, with the sounds of the moment. The other is essentially a flip book, like the ones we all drew in the corner of a book in high school.


You agree their practices are problematic, but then see no problem when those practices are increased? Huh?
 
I'm talking about the end product; do you really expect them to add subliminal messages to it? The little animation doesn't seem creepy at all.

I do agree that Google and their data mining are problematic, but for people already using the service (I'm not one of them), I guess this is similar to Live Photos.
So Google data mining is problematic, but you defend Google data mining people's private photos, as long as they volunteered to use the service and turn it on? By that logic, it's OK that Facebook is data mining because people signed up to use Facebook.
And when you see people in this thread who are definitely not signing up to use it, you dismiss their concerns.

So really, your concern about data mining is moral vanity, at best?
 
I don't care if Apple made it or if Google made it; I don't like the idea of synthetic 'life' being added to a photograph. Live Photos are short videos of what actually happened. Google's feature is a weird-looking, lifeless animation.

And oh yeah, regardless of this feature's existence, Google having access to all of my precious life moments is significantly worse than Apple having it (even though Apple's no saint, we know). You're acting like that "Google hate mob" exists only because of people's adoration of Apple and not because of Google's malicious, disgusting, and vile dedication to tracking every detail of your life so they can sell you stuff.
They understand that. They're just calling you a partisan in an attempt to defend data mining people's private photos.
 
Huh? This is creepy, but not because:
- machine learning isn't cool, or
- the result doesn't look natural, as others have said.

Both are completely missing the point. This is creepy because Google fabricates new, realistic-looking moving pictures with you in them. It doesn't matter if it's ML. What matters is where it will stop. Next thing: based on 10 pictures in your camera roll, they will create movies of you in surroundings of your choice. And after that, activities of your choice, and so on. Completely faking reality. That's the creepy part.

"Next thing: based on 10 pictures in your camera roll they will create movies of you in a surrounding of your choice. And after that activities of your choice, and so on. Completely faking reality. That's the creepy part"

And that’s exactly why you should read the BOOK and do the exercises 🤣
 
"Next thing: based on 10 pictures in your camera roll they will create movies of you in a surrounding of your choice. And after that activities of your choice, and so on. Completely faking reality. That's the creepy part"

And that’s why you should read the BOOK and do the exercises 🤣

Just read a book on how digital advertising tracking works and now I have no problems with my personal life being tracked whatsoever. Thanks dude!
 
I am an Apple fan, but I don't mind Google services. I don't use Google Photos, but I use Gmail as my main email address, Google Docs, Google Drive, etc.
 
You don't understand that if you explain how a creep did something creepy, it's still creepy? Think about it one more time; hopefully you'll agree.

You are now switching from explaining the creepy act, in the hope of making it less creepy, to simply stating it's not creepy.
Then by all means, go to Google and share your photos with them. No one here is going to stop you, I'm sure.

By arguing that people are cavemen unless they accept every change coming their way, are you saying skeptics of any kind are cavemen?

Skepticism appears when there is uncertainty (not understanding ML). Once you comprehend it, it's a matter of YES or NO!

Also, you don't automatically hand your pictures over to Google, since the model is applied locally in your photo library. The algorithm just scans for two similar-looking pictures and creates frames in between. This is very similar to NVIDIA's DLSS techniques for upsampling unseen frames. Also, Apple uses very similar techniques for auto-HDR and for sorting/recognizing people and objects in your photo library.

The only information being shared with Google is probably the accuracy of the model that runs on your phone (how well the trained model fits the test data). And I am 99% sure that even Apple collects the same kind of data. That's the only way they can improve the models after every iOS/Android update.
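
To make "creates frames in between" concrete, here's a minimal sketch of the idea in Python. It's only a naive linear cross-fade between two photos; Google's actual feature presumably uses a learned motion model to warp pixels rather than a blend, and the file names here are made up for illustration.

import numpy as np
from PIL import Image

def interpolate_frames(path_a, path_b, num_frames=8):
    # Load both photos and force them to the same size so the pixel arrays line up.
    img_a = Image.open(path_a).convert("RGB")
    img_b = Image.open(path_b).convert("RGB").resize(img_a.size)
    a = np.asarray(img_a, dtype=np.float32)
    b = np.asarray(img_b, dtype=np.float32)

    frames = []
    for i in range(1, num_frames + 1):
        t = i / (num_frames + 1)         # blend weight from photo A toward photo B
        blended = (1.0 - t) * a + t * b  # naive cross-fade; a real model would warp pixels along estimated motion
        frames.append(Image.fromarray(blended.astype(np.uint8)))
    return frames

# Hypothetical usage: stitch the in-between frames into a short GIF.
frames = interpolate_frames("photo_1.jpg", "photo_2.jpg")
frames[0].save("in_between.gif", save_all=True,
               append_images=frames[1:], duration=80, loop=0)

Even this toy version never uploads anything; whether Google's real pipeline runs on-device is a separate question, as others in the thread have pointed out.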
 
Just read a book on how digital advertising tracking works and now I have no problems with my personal life being tracked whatsoever. Thanks dude!

This book is being used by academics in fields from astrophysics to bioinformatics and genetics.

It's not about how digital advertising tracking works. Tracking is NOT the same as ML.
 
Just read a book on how digital advertising tracking works and now I have no problems with my personal life being tracked whatsoever. Thanks dude!
Or read a book about how Marxists took control of Russia despite being a small minority, and we can stop worrying about what's happening in America in general, even outside of Google's assistance in the matter.
 
since the model is applied locally in your photo library
Care to elaborate on where this is specified? I've tried to find such information - for either this feature, or the earlier iteration known as "cinematic photos" from last year.

I can't find any information to clarify if it's on-device or not. There is zero reason to assume Google would do this type of thing on-device, and they've historically always done stuff like this on their servers, so unless you've got some evidence to show that it is in fact done locally, I'm gonna call ******** on that.
 
Skepticism appears when there is uncertainty (not understanding ML). Once you comprehend it, it's a matter of YES or NO!

Also, you don't automatically hand your pictures over to Google, since the model is applied locally in your photo library. The algorithm just scans for two similar-looking pictures and creates frames in between. This is very similar to NVIDIA's DLSS techniques for upsampling unseen frames. Also, Apple uses very similar techniques for auto-HDR and for sorting/recognizing people and objects in your photo library.

The only information being shared with Google is probably the accuracy of the model that runs on your phone (how well the trained model fits the test data). And I am 99% sure that even Apple collects the same kind of data. That's the only way they can improve the models after every iOS/Android update.
You keep arguing that as long as people understand the how, the what doesn't matter. You are simply seeking to understand how evil works, so you can stop worrying about it. As long as you understand the mechanism, any evil machine becomes benign to you.

I don't think anyone can help you.

People can choose yes or no without understanding how an evil machine works, if they can see the machine's output is evil.
Learning how the machine works actually does nothing to help you make a choice. Understanding how Google manipulates and data-mines your photos does nothing to help you decide whether or not you want them to.

Apple has clarified several times that they used random images from the internet to develop their machine learning.

No one here is trying to stop you handing over your private photos to Google's data mining. We're just refusing to go along with it.
 
Care to elaborate on where this is specified? I've tried to find such information - for either this feature, or the earlier iteration known as "cinematic photos" from last year.

I can't find any information to clarify if it's on-device or not. There is zero reason to assume Google would do this type of thing on-device, and they've historically always done stuff like this on their servers, so unless you've got some evidence to show that it is in fact done locally, I'm gonna call ******** on that.
Exactly. But we'll never make skeptics of the clinically naive.
 