Generic questions, where GPT is only asked chore tasks like "write a Swift data class for this example JSON" (with the key names / values suitably changed), are of little help to Microsoft, should they really want to spy on Apple.
I 100% agree with your points.
That's exactly how I use it. Instead of having to navigate Stack Overflow's unpleasant answers or pages of documentation (both of which still have a real and valid use), ChatGPT has been fantastic for reaching a quick first working state (and for learning how it's done). The iterative nature is great too: later, in the same chat where your Swift class lives, you can ask it to "add another field to the Swift data class" or "make it conform to x or y protocol" and see what comes out.
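To give a concrete idea of the kind of boilerplate I mean, here's a minimal sketch; the Product type, its fields, and the JSON are made-up illustrations, not anything real, and ChatGPT's actual output will vary:

    import Foundation

    // Example JSON you might paste into the prompt (key names and values
    // are hypothetical, suitably changed from anything sensitive).
    let json = """
    {"id": 42, "name": "Widget", "price": 9.99}
    """

    // The kind of Codable "data class" it typically hands back.
    struct Product: Codable {
        let id: Int
        let name: String
        let price: Double
    }

    // Decoding the example JSON verifies the generated type actually fits it.
    let product = try JSONDecoder().decode(Product.self, from: Data(json.utf8))
    print(product.name) // "Widget"

A follow-up like "add another field" or "make it conform to Equatable" then just regenerates that struct with the extra bit, which is tedious to type by hand but trivial to eyeball and verify.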
Also great for corporate spiel and HR blather… it's like having the army of assistants that the highest higher-ups have for every communication they want to send, but for the layman in the trenches.
What I can understandably see happening is that it only takes ONE person to ask the AI about the Apple Car or some other highly confidential endeavor, so they have to carpet-bomb the whole thing and everybody pays the price.
But I'm not sold on the "deducing secrets" powers of ChatGPT… I don't see how asking for random FooBar boilerplate in random languages can somehow lead to top-secret conclusions. Millions of people are or will be using it that way; what lets it say that one person's FooBar prompt is part of an Apple Car endeavor while another's is a hobby calculator app for learning? Unless I'm underestimating its powers.
EDIT: alright, some commenters make the point that "in aggregate, over thousands of data points" it could get far in figuring out what a corporation might be doing.
It does make sense, but I still find it far-fetched? In the same way that we could theoretically predict the future because we understand micro and macro physics: we would "just" have to aggregate what all the particles and objects are doing right now and extrapolate forward.