This was never in doubt.

Facebook's agreement has stated in several different forms that, if you post on Facebook, they own what you post. I was on a committee back when I was trying to build a photography business, and we would discuss and discuss, and then the lawyers came out with a new way to say that they owned everything we posted.

I have 3434 posts on my sports photography Instagram account right now. It's become more of a doctor's office/hospital visit log lately and the 1100 followers are gone because I deleted them.
 
Last edited:
  • Like
Reactions: Clix Pix
This was never in doubt.
Unfortunately, all too true.

Facebook's agreement has stated in several different forms that, if you post on Facebook, they own what you post. ...


Nevertheless, the constantly evolving nature of the tech world means that regulation which was already grossly inadequate for the conditions of a decade or so ago is even more grotesquely inadequate for what is occurring now.

One need hardly refer to AI, or the veritable tsunamis of "deepfake" imagery, and, indeed, "fake news" to make this point.

The fact that, while they own the images, they are also not deemed liable (under Section 230) for harmful images (the 'deepfake' stuff), nor for what appears on their platforms, is deeply disturbing.

Long term, given tech companies' clear inability (or even lack of desire) to moderate or regulate what appears on their platforms, along with their chilling indifference to consequences, I think regulation is inevitable.
 
Last edited:
  • Like
Reactions: Clix Pix
Unfortunately.




Nevertheless, the constantly evolving nature of the tech world means that regulation which was already grossly inadequate for the conditions of a decade or so ago is even more grotesquely inadequate for what is occurring now.

One need hardly refer to AI, or the veritable tsunamis of "deepfake" imagery, and, indeed, "fake news" to make this point.

The fact that, while they own the images, they are also not deemed liable (under Section 230) for harmful images (the 'deepfake' stuff), nor for what appears on their platforms, is deeply disturbing.

Long term, given tech companies' clear inability (or even lack of desire) to moderate or regulate what appears on their platforms, along with their chilling indifference to consequences, I think regulation is inevitable.
I'm still shocked that people in this area have such loose morals, and they're not politicians.
 
The whole adaptive AI thing has me perplexed; it's more like a parrot than intelligence.
Repeating with slight variations vs. creating.
Forgery, fakery, cheating…
 
  • Like
Reactions: Clix Pix
The whole adaptive AI thing has me perplexed; it's more like a parrot than intelligence.
Repeating with slight variations vs. creating.
Forgery, fakery, cheating…
Just like a child, AI will grow up at some point. Before that happens, you'll want to be ready.
 
  • Like
Reactions: Clix Pix and rm5
I cannot wait for the tech industry to be (robustly) regulated, as they are clearly unable to regulate themselves.
The challenge we have is that the rate at which tech becomes viable, usable and available is accelerating, and regulations cannot be amended quickly enough to keep up. Further, technology represents a truly cross-border problem. One nation state may indeed outlaw or seek to regulate the use of a technology, but that doesn't mean all nation states will do the same.

To put people's minds at ease, training an AI model is not the same as taking copies of your images and having someone look at them. It is "teaching" the algorithm to identify objects and meaning in images by reducing them down to an indexable chain of digits. They aren't using them any more than they already are.
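To make that "chain of digits" idea concrete, here is a loose toy sketch, invented purely for illustration and nothing like a real model's pipeline: it reduces a small grid of pixel values to a short feature vector of region averages, so only the numbers, not the picture, get indexed.

```python
# Toy illustration: reduce an "image" (a 2D list of 0-255 brightness
# values) to a short, indexable chain of digits. Not a real model.

def to_feature_vector(pixels, grid=2):
    """Average brightness per region of a grid x grid split of the image."""
    h, w = len(pixels), len(pixels[0])
    features = []
    for gy in range(grid):
        for gx in range(grid):
            region = [
                pixels[y][x]
                for y in range(gy * h // grid, (gy + 1) * h // grid)
                for x in range(gx * w // grid, (gx + 1) * w // grid)
            ]
            features.append(sum(region) // len(region))
    return features

# A 4x4 "image": bright on the left half, dark on the right half.
image = [
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [200, 200, 10, 10],
]

print(to_feature_vector(image))  # [200, 10, 200, 10]
```

The original pixels are discarded; only the four numbers survive, which is the (grossly simplified) sense in which training indexes rather than copies.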

Don't get me wrong, I think Orwell was optimistic; 1984 was 40 years ahead of its time. I am ever more concerned we are entering a 1984 Big Brother era. I am not worried because I engage in criminal activity; I just fear that the more our digital history is preserved, the harder it becomes to be truly in control of one's fate.

I continue to amuse myself by saying things in earshot of an Amazon Echo device or my phone, then seeing how long it takes to get targeted advertising in line with the test-case comments. However, this is clearly a potential thought-police threat.

Now, there is no defence for the big logos allowing access to content of questionable morality, but then no one holds Glock, Colt, Heckler & Koch, etc. accountable for firearm-related deaths; is this not similar?

There is a line, of course. The example given against Meta, where a warning that a page may contain child abuse imagery is followed by the option to proceed anyway, seems wholly unacceptable. We should try to prevent that through content moderation, but guess what? Because of the volume of posts per day, we need AI's help to do it effectively, and AI needs to be trained to spot the subjects or content portraying the topics we need to filter and report. That training can only come from a large body of learning material, so we are back full circle. Now, if indexing my images means my daughters are that little bit safer online, then OK, I will agree.
 
Last edited:
The whole adaptive AI thing has me perplexed; it's more like a parrot than intelligence.
Repeating with slight variations vs. creating.
Forgery, fakery, cheating…
Like teaching someone to answer a maths exam question rather than teaching them how to solve the maths problem.

This is why AI in particular needs some guardrails. Not because the tech will kill us, but because of the velocity at which it can potentially do something stupid, the delay before we spot it, and the damage caused in the meantime. Like lighting a match in a fireworks factory; the clock's ticking before the whole place goes up.
 
Last edited:
The challenge we have is that the rate at which tech becomes viable, usable and available is accelerating, and regulations cannot be amended quickly enough to keep up. Further, technology represents a truly cross-border problem. One nation state may indeed outlaw or seek to regulate the use of a technology, but that doesn't mean all nation states will do the same.

To put people's minds at ease, training an AI model is not the same as taking copies of your images and having someone look at them. It is "teaching" the algorithm to identify objects and meaning in images by reducing them down to an indexable chain of digits. They aren't using them any more than they already are.

Don't get me wrong, I think Orwell was optimistic; 1984 was 40 years ahead of its time. I am ever more concerned we are entering a 1984 Big Brother era. I am not worried because I engage in criminal activity; I just fear that the more our digital history is preserved, the harder it becomes to be truly in control of one's fate.

I continue to amuse myself by saying things in earshot of an Amazon Echo device or my phone, then seeing how long it takes to get targeted advertising in line with the test-case comments. However, this is clearly a potential thought-police threat.

Now, there is no defence for the big logos allowing access to content of questionable morality, but then no one holds Glock, Colt, Heckler & Koch, etc. accountable for firearm-related deaths; is this not similar?

There is a line, of course. The example given against Meta, where a warning that a page may contain child abuse imagery is followed by the option to proceed anyway, seems wholly unacceptable. We should try to prevent that through content moderation, but guess what? Because of the volume of posts per day, we need AI's help to do it effectively, and AI needs to be trained to spot the subjects or content portraying the topics we need to filter and report. That training can only come from a large body of learning material, so we are back full circle. Now, if indexing my images means my daughters are that little bit safer online, then OK, I will agree.
I agree with a lot of what you say. I try to keep a limited online footprint: no FB or Twitter. I also keep my real name to myself, for example.
Also an Alexa-free house. But I do like the idea of Alexa delivering these adverts in the @kenoh household.
Porsche
Leica
Nikon
Canon
Sony
Cartier
Rolex
😛
 
  • Haha
Reactions: Clix Pix
I agree with a lot of what you say. I try to keep a limited online footprint: no FB or Twitter. I also keep my real name to myself, for example.
Also an Alexa-free house. But I do like the idea of Alexa delivering these adverts in the @kenoh household.
Porsche
Leica
Nikon
Canon
Sony
Cartier
Rolex
😛
I hear you on obfuscating your identity, but just like my photography, I lacked creativity in coming up with a naming convention.

I was laughing until you hurt my feelings by putting Cartier. I expected that list to contain more things like:

Canon pro 100
BenQ
socks
underwear
teabags

😂
 
  • Haha
Reactions: Clix Pix
I hear you on obfuscating your identity, but just like my photography, I lacked creativity in coming up with a naming convention.

I was laughing until you hurt my feelings by putting Cartier. I expected that list to contain more things like:

Canon pro 100
BenQ
socks
underwear
teabags

😂
Mrs AFB read about a teabag shortage this morning. I always keep 3-6 months of stock on hand.

The Cartier is for Mrs Kenoh!
 
Mrs AFB read about a teabag shortage this morning. I always keep 3-6 months of stock on hand.

The Cartier is for Mrs Kenoh!
Shhh! She might see this, man! Don't plant ideas in her head on Valentine's Day!

Teabag shortage? What? That qualifies as a British terror attack, right? National emergency. I need to go get stocked up.... brb...
 