This book is being used by academics in fields from astrophysics to bioinformatics and genetics.

It's not about how digital advertising tracking works. Tracking is NOT the same as ML.
He was just pointing to some book he found on Google Books, in an attempt to defend Google data mining their customers' private photos, luring them with dazzling animations. Some people are just too easy to exploit. They're really victims of the digital age.
 
You keep arguing that as long as people understand how, it doesn't matter what. You are simply seeking to understand how evil works, so you can stop worrying about it. As long as you understand the mechanism, any evil machine becomes benign to you.
It's basically the same argument gun-rights groups use.

"you're scared of guns because you don't understand how they work".

No no. We just don't want to get shot.



It's amazing to me how people can fail to see (or deliberately not 'see', and thus try to refocus the argument) that understanding something doesn't inherently protect you from it.


Dentists aren't immune to gum disease.

Nuclear physicists aren't immune to radiation.
 
Skepticism appears when there is uncertainty (not understanding ML). Once you comprehend it, it’s a matter of YES or NO!
Not at all. There's a whole lot of gray area in ethics. In fact, it is almost entirely gray.

I understand the concepts, but this is irrelevant. The application of ML to create fakes in a nonchalant way is the issue, and not how it works. And just because you could run someone else's classifier example in a textbook exercise and now consider yourself an expert doesn't change any of that.

If, however, that book indeed offers some unexpected insights, then start sharing those secrets here instead of just advertising a random book!
 
Not at all. There's a whole lot of gray area in ethics. In fact, it is almost entirely gray.

I understand the concepts, but this is irrelevant. The application of ML to create fakes in a nonchalant way is the issue, and not how it works. And just because you could run someone else's classifier example in a textbook exercise and now consider yourself an expert doesn't change any of that.

If, however, that book indeed offers some unexpected insights, then start sharing those secrets here instead of just advertising a random book!

Don't bother, it's clear he's just trolling at this point.
 
I don't think that's the case, but even if it were:


Scenario one is an advertising company that has a well-publicised history of building profiles on all of its livestock users, manipulating your images.


Scenario two is a personal computer company that has a well-publicised history of making users pay for the features they need, manipulating your images.





There are plenty of actual examples of how Apple's approach to ML 'features' provides benefits to the user without being creepy. A few years ago they added keyword tagging: you search your photos for 'dog' and it shows all the pictures of your dog, somehow. Or the facial recognition system, so you can search for "Joe" and see the pictures with Joe in them.

That's all done on-device. Apple never sees that. They don't see that you have four Great Danes and thus are more likely to click on ads for 100kg bags of dog food. Even if they did that stuff in an Apple data center, their business isn't 93% funded by selling advertising spots based on a personal profile of you.
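A loose sketch of that on-device flow: a local classifier assigns keyword tags, and "search" is just a local lookup. Everything here (the tags, the function names, the canned classifier output) is made up for illustration; this is not Apple's actual implementation, only the shape of the idea that nothing has to leave the device.

```python
# Toy sketch of on-device photo search: a local classifier assigns
# keyword tags, and searching is a plain dictionary lookup.
# No photo and no tag ever needs to leave the device.

def tag_photo(photo_id, pixels):
    """Stand-in for an on-device classifier; returns canned tags."""
    # A real system would run a local neural network over `pixels`.
    fake_model_output = {1: ["dog"], 2: ["dog", "Joe"], 3: ["beach"]}
    return fake_model_output.get(photo_id, [])

def build_index(photos):
    """Map each keyword to the photos that carry it, entirely locally."""
    index = {}
    for photo_id, pixels in photos.items():
        for tag in tag_photo(photo_id, pixels):
            index.setdefault(tag, []).append(photo_id)
    return index

library = {1: b"<pixels>", 2: b"<pixels>", 3: b"<pixels>"}
index = build_index(library)
dog_photos = index.get("dog", [])   # searching "dog" is a local lookup
```

The point of the design: the ad-funded version of this feature would need the tags server-side to build a profile; the on-device version never produces anything a server could profile.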


That is why ML/AI from Google is considered "creepy" by default by a lot of people. It's also why I expect their recent "user privacy" announcement about Android will come with some pretty hefty caveats.

Trusting Google to treat people's data as private (as in: not use it to further their own profit centres) is like trusting a goat to protect your roses against vandals.

Thank you for your lecture.

Anyone who thinks Apple is serious about privacy must be kidding themselves. Apple’s stance on privacy is just a marketing play.

And I will continue to use my Google Photos, happily.
 
A fun fact for the people arguing here: Google is not forcing you to use this feature.
To be fair, not one single person has even remotely implied that to be the case. The discussion is on whether or not the feature is creepy, not whether Google is mandating its use.
 
Anyone who thinks Apple is serious about privacy must be kidding themselves. Apple’s stance on privacy is just a marketing play.
Thank you for that well reasoned argument, with a summary and detailed talking points explaining your view. Very informative. /s
 
When this is rolled out you’ll be seeing some really weird results where wrong parts of photos are animated. o_O
 
There is going to be some cool stuff coming out of machine learning. Two Minute Papers on YouTube shows amazing stuff. We all laughed at the Blade Runner scene where they'd say "enhance" to magnify information in a photo, but machine learning can do that now. It's insane.
 
The end result is supposed to be indistinguishable from natural if a correct "ML model" is applied.
We'll get there at some point, and then people will get used to it.

Like many things that felt unnatural at first, but are part of our lives today.
I have read stories about people back in the day who were scared to watch television. It felt weird to them being stared at by the weatherman.
I don’t think that’s comparable. Live TV is still showing real humans; our brains know that. In this case, the ML is trying to create something (movement) of a human out of nothing (just still images). You can look at the samples themselves. Since it’s machine-created, our brains know it’s not natural. It’s different, despite the human appearance. There are a lot of studies in this field, and science has proven it. It’s our natural defense mechanism.

It’s one thing if it’s completely artificial, e.g. a complete animation; our brains know that. But since it uses images of a real-life person, it just doesn’t look right, and thus it can feel “creepy.”

Here’s an example explaining it.
It talks more about robots vs. androids, but the idea is similar.
 
This seems to be employing similar technology to "Deep Nostalgia". (I haven't noticed anyone mentioning it in this thread.)

In a few years, Apple will unveil similar functionality, and at their keynote they'll craft a video tugging at the heartstrings about how they are able to bring motion to a deceased loved one's photos... they'll insert the obligatory personal anecdotes... and it will be described as... "magical". ;)
 
Not at all. There's a whole lot of gray area in ethics. In fact, it is almost entirely gray.

I understand the concepts, but this is irrelevant. The application of ML to create fakes in a nonchalant way is the issue, and not how it works. And just because you could run someone else's classifier example in a textbook exercise and now consider yourself an expert doesn't change any of that.

If, however, that book indeed offers some unexpected insights, then start sharing those secrets here instead of just advertising a random book!

This is the same textbook used in almost all universities around the world to teach basic ML. It’s easy to read and teaches you the real-world mathematical problems in ML.

If you want me to share the unexpected insights, please enroll in academic courses and follow our lectures…
I’m not going to succeed in teaching anyone on MacRumors how Support Vector Machines work if they don’t comprehend the maths. That’s the reason why I recommend this book. You can take your time; it will probably take you two years to read and understand everything.
 
This seems to be employing similar technology to "Deep Nostalgia". (I haven't noticed anyone mentioning it in this thread.)

In a few years, Apple will unveil similar functionality, and at their keynote they'll craft a video tugging at the heartstrings about how they are able to bring motion to a deceased loved one's photos... they'll insert the obligatory personal anecdotes... and it will be described as... "magical". ;)
It probably will be magical.
--------------------------------------------------
Seems like it would be easier to implement "live photos".
 
This is the same textbook used in almost all universities around the world to teach basic ML. It’s easy to read and teaches you the real-world mathematical problems in ML.
Literally nobody is questioning the math behind machine learning.

Maybe they should give you a book about understanding what other people are actually saying.
 
Only Google can come up with something that creepy.
This was my immediate thought when I was reading the article at work today. ⬇️ Swap in the words "Hi! I'm Goggy. And I can now talk and watch you sleep. Sweet dreams." with an always-on screensaver whose night eyes shift and blink randomly.

 
requires you to take two or more photos. Not really that cool at all
The cool thing is that it works on pictures taken years ago, even from a regular camera.
Yes, but I very likely only took one picture years ago that was framed the same way. NOW that I’m aware of this feature, I may take multiple photos in the future, but I can’t go back, take at least one more photo, and then have this work for me.
 
It is creepy, because a Live Photo is an actual video of the moment, with one frame being the actual picture.

The result of this, by contrast, looks creepy because ML is filling in the voids of movement between the two pictures.
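To illustrate what "filling the voids" means, here is a toy sketch that just linearly cross-fades pixel values between two stills to invent in-between frames. Real cinematic-photo features use learned motion and depth models rather than a blend; every name and value below is made up for illustration.

```python
# Toy illustration of "filling the voids" between two still frames.
# Real systems synthesize motion with a learned model; this sketch
# just linearly blends pixel intensities to invent in-between frames.

def interpolate_frames(frame_a, frame_b, steps):
    """Return `steps` intermediate frames between two equal-sized frames.

    Frames are flat lists of pixel intensities (0-255). Each in-between
    frame is a weighted average of the two stills -- a crude stand-in
    for what an ML model would synthesize from estimated motion.
    """
    if len(frame_a) != len(frame_b):
        raise ValueError("frames must be the same size")
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # blend weight, 0 < t < 1
        frames.append([round((1 - t) * a + t * b)
                       for a, b in zip(frame_a, frame_b)])
    return frames

# Two tiny 4-pixel "photos" of the same scene:
still_1 = [0, 100, 200, 255]
still_2 = [255, 100, 0, 255]
in_between = interpolate_frames(still_1, still_2, 3)
```

A real model predicts motion instead of blending, and that synthesized movement (rather than captured movement, as in a Live Photo) is exactly where the "not quite natural" feeling comes from.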
 
I remember the first time I stumbled upon a Live Photo without knowing the feature existed at all.
I kept seeing the people in my photos closing their eyes, and I thought I was hallucinating.
I genuinely panicked for a minute.
 
"Figuring out 'how' someone did something creepy, does not make it any less, creepy." WUT…?

People are just scared of the unknown, even when it’s NOT creepy. To me, that’s more creepy than creating a mathematical tool that makes something appear creepy to the people who don’t understand it (like the application of ML/NN).

Some people are natural cavemen (not willing to change), and are scared of things they don’t understand.
100% CORRECT. It's infuriating to see people blindly trust and defend Google when they data mine and monetise every byte of your online existence with AdSense, YouTube, Photos, Search, Gmail, Google Drive, Android; the list is endless.



One example is Google Photos. The only reason Google is now removing ‘free’ unlimited photo uploads is that their big-data deep-learning convolutional neural network (CNN) supercomputers that scan your uploads have matured to a stage where they don’t need to offer free unlimited uploads any more.

In simple terms, Google needed as many images as they could get their hands on to fine-tune their AI and improve its object-recognition speed and accuracy.

Their AI no longer needs the extra exabytes of randomized uploaded Google Photos images, thus there’s no incentive to offer ‘free’ Google Photos, and any image uploaded to Google Photos will now count towards your 15 GB ‘free’ storage cap.



TL;DR Google is not your friend. Their business philosophy is that if you don’t pay for the product, YOU are the product. Apple doesn't do that, because they make billions selling expensive electronics; Alphabet makes billions selling YOU.
 