We should all realize that AI is going to be (and already is being) weaponized against society for all sorts of nefarious activities. Perception management (which we already see in mass media), censorship (Grok is just one example)... pretty much any form of control and bias you can think of is probably going to happen, or get worse.
In two years, we will likely be in a very different environment if something doesn't happen to stop this -- and I don't even know what could. Venues like social media will be completely controlled. Parts of our society will be so heavily monitored that we'll likely be under a far more robust censorship (and reporting!) regime than we are now.
Sound unrealistic or paranoid? Look at how quickly AI has evolved so far.
Having said that, I don't think AI itself is bad. It's a tool that could be (and is) used to make life better -- discovering new medications, improving medical treatment, and enabling some really amazing advances in materials and engineering. But leave it to our governments and our favorite three-letter intelligence agencies to f*ck that up for everyone.
Sorry for the rant, it just irritates me. But the reason large companies like Microsoft, OpenAI, Musk, et al. push for "regulation" is obvious: it lets them control it all and leaves the rest of us out in the cold with lesser AI (blunt tools). This is why we've got to design a foolproof way to keep AI protected and always free.