An article in Petapixel and it's not good news for FB and Instagram Fans:
I cannot wait for the tech industry to be (robustly) regulated, as they are clearly unable to regulate themselves.
Unfortunately, all too true.
This was never in doubt.
Facebook's agreement has stated in several different forms that, if you post on Facebook, they own what you post. ...
I'm still shocked that people in this area have such loose morals, and they're not politicians.
Unfortunately.
Nevertheless, the constantly evolving nature of the tech world means that the lack of regulation, already grossly inadequate for the conditions of a decade or so ago, is even more grotesquely inadequate for what is occurring now.
One need hardly refer to AI, or the veritable tsunami of "deepfake" imagery and, indeed, "fake news", to make this point.
The fact that, while they own the images, they are also not deemed liable (under Section 230) for harmful images (the 'deepfake' stuff), nor for what appears on their platforms, is deeply disturbing.
Long term, the clear inability (or even lack of desire) of tech companies to moderate or regulate what appears on their platforms, along with their chilling indifference to the consequences, means that I think regulation is inevitable.
Just like a child, AI will grow up at some point. Before that happens, you'll want to be ready.
The whole adaptive AI thing has me perplexed, as in it's more like a parrot than intelligence.
Repeating with slight variations vs creating.
Forgery, fakery, cheating…
The challenge we have is that the rate at which tech becomes viable, useable and available is accelerating, and the regulations cannot be amended quickly enough to accommodate this. Further, technology represents a truly cross-border problem. One nation state may indeed outlaw or seek to regulate usage of a technology, but that doesn't mean all nation states will do the same.
Like teaching someone to answer a maths exam question rather than teaching them how to solve the math problem.
I agree a lot with what you say. I try to keep a limited online footprint. No FB or Twitter. But I also keep my real name to myself for example.
To put people's minds at ease, the training of an AI model is not the same as taking copies of your images and someone looking at them. It is "teaching" the AI algorithm to identify objects and meaning in images by reducing them down to an indexable chain of digits. They aren't using them any more than they are already.
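To make that concrete, here is a rough toy sketch of what "reducing an image to a chain of digits" looks like. The model choice, the 512-number vector and the file name are my own assumptions for illustration, not anything Meta has published:

```python
# Toy sketch: turn an image into a fixed-length string of numbers that can be
# indexed and compared. My own illustration, not Meta's actual pipeline.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()  # drop the classifier, keep the 512-number feature vector
model.eval()

img = Image.open("holiday_photo.jpg").convert("RGB")
with torch.no_grad():
    embedding = model(preprocess(img).unsqueeze(0)).squeeze(0)

print(embedding.shape)  # torch.Size([512]) -- numbers you can index, not a viewable photo
```

The point being: what sits in the training index is that vector of numbers, not a browsable copy of the photograph.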
Don't get me wrong, I think Orson was optimistic; 1984 was 40 years ahead of its time. I am ever more concerned we are entering a 1984 Big Brother era. It's not that I'm worried because I engage in criminal activity; I just fear that the more our digital history is preserved, the harder it becomes to be truly in control of one's fate.
I continue to amuse myself by saying things within earshot of an Amazon Echo device or my phone, then seeing how long it takes to get targeted advertising in line with the test case comments. However, this is clearly a potential thought police threat.
Now, there is no defence for the big logos allowing access to content of questionable morality, but then no one holds Glock, Colt, Heckler & Koch, etc. accountable for firearm-related deaths, and is this not similar?
There is a line, of course. The example given against Meta, where a warning that a page may contain child abuse material is shown and the user is then given the option to proceed anyway, seems wholly unacceptable. We should of course try to prevent that through content moderation, but guess what? Because of the volume of posts per day, we need AI's help to do it effectively, and AI needs to be trained to spot the subjects or content portraying the topics that we need to filter and report. The training can only come from a large body of learning material - so we are back full circle. Now, if indexing my images means my daughters are that little bit safer online, then OK, I will agree.
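And on the filtering side, the unglamorous version is just comparing those chains of digits against a reference set of known material and holding anything close for a human reviewer. Again a toy sketch with made-up numbers and a made-up threshold, nothing to do with Meta's real systems:

```python
# Toy sketch: flag an upload whose embedding sits close to known reference
# embeddings and hold it for human review instead of publishing it.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def needs_review(embedding: np.ndarray,
                 references: np.ndarray,
                 threshold: float = 0.9) -> bool:
    # Any sufficiently close match to the reference set gets flagged.
    return any(cosine(embedding, ref) > threshold for ref in references)

# Random stand-ins for the 512-number vectors from the sketch above.
rng = np.random.default_rng(0)
reference_set = rng.normal(size=(5, 512))
new_upload = rng.normal(size=512)

print("hold for human review:", needs_review(new_upload, reference_set))
```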
I hear you on the obfuscating your identity topic but just like my photography, I lacked creativity in coming up with a naming convention.
Also an Alexa free house. But I do like the idea of Alexa delivering these adverts in the @kenoh household.
Porsche
Leica
Nikon
Canon
Sony
Cartier
Rolex
😛
Teabag shortage, Mrs AFB read this morning. I always keep 3-6 months of stock on hand.
I was laughing until you hurt my feelings by putting Cartier. I expected that list to contain more things like:
Canon pro 100
Benq
socks
underwear
teabags
😂
Shhh! She might see this, man! Don't plant ideas in her head on Valentine's Day!
The Cartier is for Mrs Kenoh!
Orwell as in George Orwell.
Yes.... I was watching The Birds and got Orson Welles stuck in my head, sorry.