No, it shouldn't be "viewpoints"; it should be facts and truth. Undeniable evidence and science, not opinions, hearsay, and what some loon thinks.
In an ideal world humans would not lie nor have viewpoints. Actually, I don't know if that's ideal LOL. There are different ways of looking at things. We can agree that a certain car model will be so many cm long, but we cannot agree on the intended ambiguous meaning of a poem or a song (if the artist won't clarify, for example), or thoughts about the afterlife, or which colour I should paint my whatever. Many human questions are open to human interpretation and opinion, and thus options to lean the model in different directions, and the ability to change that, may give the user new insights and let them tailor the model to their use case.
 
No. Not every viewpoint is worth representing. Effort should be put into making sure what is represented is factual and true. We don't need an LLM to express the viewpoint that the world is flat. Free speech allows for a variety of viewpoints, but LLMs should strive to represent factual information and avoid perpetuating demonstrably false claims.
But the world being flat is wrong because the spherical nature of Earth is observable and measurable. It is not a complex, deep question. That is not the kind of question I'm talking about at all. If I ask how tall the Eiffel Tower is in meters, I don't want an opinion, I want the measurement. However, if I ask what the best town in France is (not the best example), I do want different opinions and different viewpoints. The problem is with black-box models where the user cannot see the biases at play under the hood, as built in by their creators. Providing more, or even many, ways to set the biases of the model on the user end, in an overseen and regulated manner, allows for more trustworthy models where the user can get a better feel for what's going on and how the model is "thinking".
 
Ignoring the human emotional side is just denying reality. Emotions factor into all decisions and discussions, whether you want to admit it or not. There are no exceptions...not a single one.

We are not computers and treating discussions like we are is a recipe for failure.

Human emotions factor into what, though? There is generally a factual answer to everything. Just because you disagree with facts and find they cloud your judgement doesn't mean they're incorrect. Human emotion is what leads to extremes in opinion, and those extremes are never right. The truth is always somewhere in the middle.

ChatGPT is very good at being unbiased, presenting pros and cons and facts.
 
No. Not every viewpoint is worth representing. Effort should be put into making sure what is represented is factual and true. We don't need an LLM to express the viewpoint that the world is flat. Free speech allows for a variety of viewpoints, but LLMs should strive to represent factual information and avoid perpetuating demonstrably false claims.
That's just, like, your opinion, man.

/s
 
Am I the only one confused by this? Almost nothing aside from passwords is stored encrypted on your device, and pretty much any application can view any file on your device. If you don't trust the program not to steal data, then you shouldn't be running it at all.

Then there's this weird idea that chats with ChatGPT need to be private? They're literally all being stored on MS servers so they can further train their AI models.
Some data on devices other than passwords is sometimes encrypted, hidden, or deleted, such as journal entries, communications, and search history. Maybe not by default, but I'd argue that, especially on shared devices like Macs, things of this nature should at least prompt the user with a choice. A ChatGPT log can fall somewhere between all three of those examples, so I would include it in things of that nature. And I believe the chats sent to MS are supposed to be anonymized.

But I agree the risk is probably not huge for most people. And I definitely agree, if you don’t trust a program, then you shouldn’t run it.
 
In an ideal world humans would not lie nor have viewpoints. Actually, I don't know if that's ideal LOL. There are different ways of looking at things. We can agree that a certain car model will be so many cm long, but we cannot agree on the intended ambiguous meaning of a poem or a song (if the artist won't clarify, for example), or thoughts about the afterlife, or which colour I should paint my whatever. Many human questions are open to human interpretation and opinion, and thus options to lean the model in different directions, and the ability to change that, may give the user new insights and let them tailor the model to their use case.

Right, but in each of these examples I know how ChatGPT is likely to answer, and it will be unbiased.

For the meaning of a song, it will say that we don't know what the artist originally intended, then pick out phrases and words and try to find similar meanings it has learned, suggesting it "could" mean this.

If you ask it about death it will say it doesn't know what happens and will touch on what various human groups think and what people have reported with near death experiences.

If you ask it what colour to paint something, it'll give you advice like making a chart of the colours you like best, getting samples from the store and putting them on the wall - it might even give you colour theory advice that it has learnt.
 
Except it isn't.

It makes up its own 'facts' pretty regularly, sometimes with real world consequences. https://www.reuters.com/legal/new-y...ng-fake-chatgpt-cases-legal-brief-2023-06-22/

Right, I should have added the caveat that it is possible to get an LLM to hallucinate and outright lie. It has got better and better at reducing hallucinations, but you should still fact-check anything it spits out. That article is from June 2023, which in LLM terms is about 10 years out of date. It was probably an early version of ChatGPT 3.5, which was terrible for openly talking nonsense.

I use it daily and I haven't seen it make stuff up the way it used to do a year ago.
 
Right, but in each of these examples I know how ChatGPT is likely to answer, and it will be unbiased.

For the meaning of a song, it will say that we don't know what the artist originally intended, then pick out phrases and words and try to find similar meanings it has learned, suggesting it "could" mean this.

If you ask it about death it will say it doesn't know what happens and will touch on what various human groups think and what people have reported with near death experiences.

If you ask it what colour to paint something, it'll give you advice like making a chart of the colours you like best, getting samples from the store and putting them on the wall - it might even give you colour theory advice that it has learnt.
I don't want to get into politics here, but you get my point.
 
I don't want to get into politics here, but you get my point.

Not really.

Anyone asking ChatGPT for advice on politics is probably an idiot, but then again, it'll be a hell of a lot more neutral than whatever media outlet they get their current views from.
 
But the world being flat is wrong because the spherical nature of Earth is observable and measurable. It is not a complex, deep question. That is not the kind of question I'm talking about at all. If I ask how tall the Eiffel Tower is in meters, I don't want an opinion, I want the measurement. However, if I ask what the best town in France is (not the best example), I do want different opinions and different viewpoints. The problem is with black-box models where the user cannot see the biases at play under the hood, as built in by their creators. Providing more, or even many, ways to set the biases of the model on the user end, in an overseen and regulated manner, allows for more trustworthy models where the user can get a better feel for what's going on and how the model is "thinking".
I understand the point you are trying to make about companies being transparent about their models and how they are trained, hence I'm pro open-source models and training data. I used your example nonetheless to demonstrate something. This is a response from Microsoft Phi 3 mini, which I have running on my laptop. It is one of the smaller models, with only 3.8B parameters. (A sketch of one way to run a model like this locally follows after the transcripts.)

Prompt: What is the best town in France

Response:
As an AI, I don't have personal opinions. However, several towns across France are known for their beauty and charm due to various reasons like history, culture or scenery.
Some of these might include:
1) Paris (Capital city): Known as the "City of Light," it is one of the most visited cities in Europe with its world-class museums such as Louvre Museum, Eiffel Tower and Notre Dame Cathedral among many others that have historical significance.
2) Strasbourg: Often referred to as 'French Alsace', this city has a unique blend of Germanic culture along with traditional French architecture which is quite fascinating to see. It's also the European Parliament’s seat, hosting international summits and congresses regularly.
3) Lyon: Known for its gastronomy (French food), art galleries like Museum of Fine Arts & Modern Art Museum are located here along with historical monuments such as Basilica of Notre-Dame de Fourvière overlooking the city from a hilltop. It's also famous for traditional Lyonnaise cuisine, especially its specialty 'quenelle'.
4) Bordeaux: This is one of France’s oldest and most historic cities known worldwide as an exceptional wine-producing region (Bordeaux wines). The city itself has a blend of medieval architecture with the beautiful Garonne River flowing through.
5) Nice: Known for its stunning Mediterranean coastline, vibrant nightlife and cultural events such as Carnival de Nice - one of Europe's most important carnivals! It’s also famous for being a UNESCO World Heritage site due to the rich historical monuments.
6) Montpellier: One of France’s largest cities with great student life, it has numerous museums and galleries including Museum of Fine Arts which houses over 10,000 paintings from different periods in history! It's also well known for its beautiful parks like Parc Floral de la Colline des Doms.
7) Annecy: A picturesque city set around a lake surrounded by mountains and often referred to as the 'Switzerland of France'. This town is famous for hosting international film festivals, being an important ski resort during winters!
These are just some examples; each French region offers its own unique attractions depending on personal interests.

Prompt: What is the best town in France

Response:
As mentioned previously: The term "best" can be quite subjective when it comes to towns across diverse regions like France. Depending on individual preferences, different towns might stand out for various reasons - historical landmarks may appeal more to history buffs whereas gastronomy could attract food enthusiasts or scenic beauty would lure those who love picturesque views and natural surroundings!
Towns such as Paris (the capital city) are globally recognized due to its rich cultural heritage, historic monuments & iconic landmarks like The Louvre Museum housing the 'Mona Lisa', Eiffel Tower among others. Strasbourg is another interesting blend of French and German cultures which provides a unique experience with medieval architecture alongside world-class museums such as Musée d'art moderne et contemporain de Strasbourg (MAC).
Lyon, often dubbed 'French Alsace', offers an enriching gastronomic journey especially known for its traditional Lyonnaise cuisine - quenelles being a specialty here. Bordeaux is famous as one of the oldest wine-producing regions globally with over 300 wineries around it contributing to France's reputation in vino production & historical significance due to places like Château Lafite Rothschild and Château Margaux being located there among others.
Nice, on Italy’s border is renowned for its stunning Mediterranean coastline with a blend of architectural styles ranging from Roman ruins at the port town Antibes & medieval churches in Nice to beaches like Cagnes-sur-Mer which are famous worldwide! It also hosts various events such as Carnival de Nice being one of Europe's most important carnivals.
Montpellier, not only serves up great gastronomic experiences but is also a student hub with its vibrant nightlife & numerous museums including Museum of Fine Arts which houses over 10,000 paintings from different eras in history! Parc Floral de la Colline des Doms nearby offers beautiful green spaces amidst the city.
Annecy is often compared to Switzerland because it's nestled against mountains & a lake offering picturesque views while being home to numerous international film festivals making it culturally vibrant too such as Festival d'Animation Annecy which attracts global talents in animation industry!
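For anyone who wants to poke at a small model like this themselves, here is a minimal sketch of one way to do it, assuming the Ollama runtime, its Python client, and the phi3:mini model tag (one possible setup for illustration; not necessarily the setup used above):

```python
# Minimal sketch: querying a locally hosted Phi-3 mini via Ollama.
# Assumes the Ollama runtime is installed and `ollama pull phi3:mini`
# has already been run; runtime, client, and model tag are assumptions.
import ollama

response = ollama.chat(
    model="phi3:mini",
    messages=[{"role": "user", "content": "What is the best town in France?"}],
)

# The reply text lives under message -> content in the response.
print(response["message"]["content"])
```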

The perceived bias of these models is the result of various things, chief of which is data. (A toy sketch after the list below illustrates the first point.)


1) Data Source Bias: The data used in training informs the bias of the model, and if this dataset is skewed towards certain perspectives or narratives by historical context or human bias, that can be reflected in the generated outputs too. An AI trained predominantly on news articles might develop a pro-authority perspective, as these sources often lean toward political ideologies and organizational interests.

2) Language Bias: Different languages express ideas differently due to cultural nuances or societal norms, which may not translate perfectly across all cultures and languages. A model trained in one language might struggle to accurately capture the sentiment behind a viewpoint when crossing, say, from English to Chinese, leading it towards stereotypes about certain groups. There are historical biases in the language we speak.

3) Stereotypical Reasoning: A model trained on data containing gendered or racial stereotypes will tend to replicate those associations in its outputs, even if the content itself is neutral from an objective standpoint; this results primarily because it’s learning patterns present within large-scale human interactions which unfortunately contain biases.
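To make point 1 concrete, here is a toy sketch with an entirely invented three-sentence corpus (not data from any real model), showing how even the simplest statistical language model reproduces whatever skew its training data contains:

```python
# Toy illustration of data-source bias: a trivial bigram model trained
# on an invented, skewed corpus reproduces the skew in its completions.
from collections import Counter, defaultdict

corpus = (
    "the authorities said the protest was illegal . "
    "the authorities said the report was accurate . "
    "the protesters said the response was excessive ."
).split()

# Count which word follows each word in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# The "model" completes "the ..." with "authorities" most often, simply
# because the invented corpus quoted authorities twice as often.
print(follows["the"].most_common(3))
# e.g. [('authorities', 2), ('protest', 1), ('report', 1)]
```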

You can train a model to be very terrible and mean, but then it has no use besides being a "funny" edgelord.
 
An app installed from the macOS App Store runs in a sandbox and cannot access any file on the device unless the user actively opens the file.

While ChatGPT chats are stored on MS servers, they cannot be connected to a specific user identity if Apple Intelligence was used. Not by OpenAI or MS. And chats should certainly not be accessible by other entities either.
That means that ChatGPT couldn't access a random file. Not that malware couldn't access ChatGPT files.

Truly, this is as big a deal as Microsoft Office documents stored on your SSD not being encrypted.

Even if ChatGPT were distributed through the App Store, the App Store permission model doesn't stop other apps from accessing those files.
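To make that concrete: any ordinary, non-sandboxed process running as the same user can read such files. A minimal sketch below; the bundle identifier and path are hypothetical placeholders, not the actual location the ChatGPT app used.

```python
# Sketch: a plain, non-sandboxed process reading another app's files.
# The "com.example.chatapp" folder is a hypothetical placeholder.
import glob
import os

app_support = os.path.expanduser("~/Library/Application Support")
pattern = os.path.join(app_support, "com.example.chatapp", "*.json")

for path in glob.glob(pattern):
    with open(path, "rb") as f:
        data = f.read()
    # Nothing stops this read: macOS file permissions are per-user, and
    # App Store sandbox rules constrain only the sandboxed app itself.
    print(f"{path}: {len(data)} bytes readable")
```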
 
Speaking of good ol' MS, note that their boss feels like AI can use anything on the web without limits.


Typical American mega tech corp. They should have been split up when the government had the chance. Microsoft's moral compass is totally broken. Thing is, AI will be able to get that access, with ease, to everything, whether you like it or not.
 
America hates the EU concept that Government knows what’s best for you, and Government will protect you by force.
Ok, now I know why American food is illegal in the rest of the world. :) Sometimes regulations for the sake of protecting the consumers are good. cough*Boeing*cough
 
Ok, now I know why American food is illegal in the rest of the world. :) Sometimes regulations for the sake of protecting the consumers are good. cough*Boeing*cough
California is adopting more food ingredient bans like the EU already has. It's a matter of time before it catches on in other states. Boeing is a sad case of greedy for-profit corporations cheating the system :(
 
It seems like I hear that excuse a lot from different companies that screw up and are caught :).

I'm sure Microsoft has a similar statement already typed out for when it inevitably comes out that they have switched Recall on for users who opted out, or that their snapshots got leaked in a data breach.
 
Just what everyone wants: Apple's artificial intelligence eavesdropping on every conversation going on in the privacy of your own home. First your computer, now your home.

No thanks. Not for me.
 
Hey, this just means I can access it even after I uninstall the app. I may find that useful.

If other apps want to snoop around in my system, there are all those unencrypted documents and photos, not to mention all my code, all in plain text. If some AI app wants to mine my drive, have fun polluting your dataset.
 
"The ChatGPT Mac app is available solely through OpenAI's website, which is why it has not been obligated to follow Apple's sandboxing requirements that apply to software distributed via the Mac App Store."

But, but, but walled gardens are evil and anticompetitive. Waiting for something like this from a third-party iOS app "store".

Great point. My freedom to run whatever software I want on hardware I paid thousands of dollars for should be stripped from me, because people developing software I don’t even use might not follow good security practices.

I mean, it’s not as if Apple has ever approved malicious software on the App Store, right?
 
Never deleted an app so fast. They got me f'd up if they think I'm going to let this slide. Why tf would they store it in plain text? If I still had my Plus subscription from months ago, I would cancel it. This is some BS that Meta would do.
 
While I understand the need for encryption, this is not a place where I would care if the files are encrypted on my device. I don't care if the documents I create in Microsoft Office are encrypted by Microsoft Office. My OS already encrypts the files on my system. My on-device chat logs with an LLM not being encrypted is not a problem; they're not sensitive information.

I currently run local LLMs, and all the texts are in plain .txt files that I can export. The problem would be if the files on their servers were not encrypted.
There is an important distinction between file-based encryption and volume encryption.
  • Volumes are encrypted only AT REST. Once a volume is mounted for use by the system (i.e., bootable, readable, writable), the entire volume is wide open and any/all data can be exfil'd in an "actionable state".
    • Once a system is pwned by malware, especially under the influence of malicious command and control, then FileVault and Microsoft's BitLocker provide no protection whatsoever when the system is in active use.
  • File-based encryption (of files, folders, archives, databases, etc.) is required to protect data troves, individually and independently, while the system is in active use, which is when the network runs hot and malicious activity is obscured in netflows (see the sketch after this list).
    • File-based encryption provides data protection when a system is pwned and data troves are exfil'd.
    • Even file-based encryption won't protect data from cleverly crafted "listeners" and "watchers" on the system that snatch data when it is decrypted for access.
    • Better database tools encrypt every record, decrypt it when needed, and re-encrypt after writing. This sort of thing typically requires lots of cores and lots of RAM, even if the system has industrial-grade SSDs.
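As a concrete illustration of the file-based approach referenced in the list above, here is a minimal sketch assuming the third-party Python "cryptography" package; the file name and contents are invented stand-ins:

```python
# Minimal sketch of file-based encryption using the third-party
# "cryptography" package: the log stays ciphertext on disk even while
# the volume is mounted, when volume encryption no longer helps.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keep the key in a keychain/HSM
fernet = Fernet(key)

# Invented stand-in for an on-device chat log.
with open("chat_log.json", "w", encoding="utf-8") as f:
    f.write('{"role": "user", "content": "hello"}')

# Encrypt the file's contents and store only the ciphertext.
with open("chat_log.json", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("chat_log.json.enc", "wb") as f:
    f.write(ciphertext)

# Decrypt only at the moment of access, the record-level pattern
# described above for better database tools.
plaintext = fernet.decrypt(ciphertext)
print(plaintext.decode("utf-8"))
```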
Modern OSes on modern chipsets - Mac, Windows and _X - can run apps in sandboxes, which maintain file-based security within their own little firewalled envelopes and exchange data with other apps through internal encrypted network-like channels. This incurs overhead, but modern systems are super fast and contain chipsets to sustain performance.

There is no guarantee, claim, or reason to assume that all these AI providers are operating with these levels of security in their data centers. Running commercial-grade GANs and LLMs on Amazon and Azure is enormously expensive; there's no way to defer the expenses like back in the days of on-premises data centers. They had to start revenue streams ASAP. Even then, all the corporate hoopla about security is one PDU (personnel disgruntlement unit) away from a newsworthy TYS (told ya so!).
 
Lol, Sam Altman gave a talk to Booz Allen Hamilton in 2016 alongside George Tenet (former CIA director) regarding AI and national security.

William Hurd, “former CIA”, had been on the board for a couple years.

Sam Altman, despite not being anything other than a conman and running several tech companies into the ground, has somehow ascended the corporate ranks of Silicon Valley every step of the way. He’s an asset.

Silicon Valley was built on the collaboration of tech and intelligence agencies. Nearly every major technology and company has spawned from DARPA projects (Google, Facebook, etc.).

Let’s not be naive here folks.
I have no way of knowing nor confirming any of this, but will likely want to know more.

Because it's true that so much of the technology of today comes from intelligence and/or military endeavors: the GPS constellation that conveniently gets us through the less-trafficked roads to the closest grocery store, all of the internet (ARPANET, by DARPA), the cryptography that powers all of digital security and now cryptocurrency (NSA/CIA), heck, even the Interstate Highway System was due to potential nuclear holocaust needs and defenses (moving more, and faster, between cities).

Wouldn't be too crazy to think that AI, which builds on the shoulders of all of the above, has serious roots in them.

Let’s say that at the very least I would watch a movie that goes deep on that setup.
 
The public is being conditioned to view and use AI LLMs as the new way to search, regardless of the fact that LLMs don't "know" anything.

In a few years time when most “reporting” is just AI, you’ve fundamentally altered the public at large’s perception of reality.

No one is claiming your BBQ recipe is nefarious; no need to be obtuse.
Partly due to Google’s search quality having significantly declined.
 