Let’s be clear - Microsoft bought themselves a seat at the table with their investment in OpenAI. They do not have any in-house expertise in AI or speech assistants. Does anyone remember Cortana?

They are simply using OpenAI APIs to integrate that service across various customer touch-points (e.g. Bing). Everyone is acting like Microsoft themselves did something dramatically innovative.

Still better than Apple because Apple bought the dumb dumb Siri 🙄.

You can’t always spend your way into greatness if you don’t have vision and a sharp BS detector.
 
Funny how we keep suggesting Apple should do whatever other companies consider competitive innovation, without any regard for risk.


AI has hardly been free of scandals in recent months, and it’s those worries that fuelled the backlash against Microsoft’s disbanding of its AI ethics team. If Microsoft lacked a dedicated team to help guide its AI products in responsible directions, the thinking went, it would struggle to curtail the kinds of abuses and questionable behavior its Bing chatbot has become notorious for.

The company’s latest blog post is surely aiming to alleviate those concerns among the public. Rather than abandoning its AI efforts entirely, it seems Microsoft is seeking to ensure teams across the company have regular contact with experts in responsible AI.

Still, there’s no doubt that shutting down its AI Ethics & Society team didn’t go over well, and chances are Microsoft still has some way to go to ease the public’s collective mind on this topic. Indeed, even Microsoft itself thinks ChatGPT — whose developer, OpenAI, Microsoft is a major investor in — should be regulated.


#1: The Information Isn’t Always Accurate

ChatGPT answers questions with authority, but that doesn’t mean the information is correct. The bot sounds confident in its responses even when they are wrong, which can lead to confusion for customers. Even its makers openly acknowledge the bot’s shortcomings, stating: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”

#2: It Doesn’t Have Any Special Training

ChatGPT can comb the internet for information, but it doesn’t have special training or access to internal resources to provide personalized responses. If answers to customers’ questions aren’t available online but require industry or company knowledge, ChatGPT won’t be able to answer correctly—but that won’t stop it from responding.

#3: It Provides Different Answers Every Time

One of the lauded benefits of ChatGPT is that it offers a new response every time, but that creates an inconsistent customer service experience. Companies can never count on the bot to provide a specific answer, which makes monitoring customer needs and requests challenging. One of chatbots’ most common use cases is repetitive questions, such as order status or account information. These questions require a set answer, which ChatGPT can’t provide consistently.

Doesn’t matter because humans make the same error.

Ask your mom what happened to your other sock, she will give you a new excuse every time.
 
I have GPT-4 through my institution, and while the difference in superficial writing quality is massive, it is not any more knowledgeable about my research subjects than 3.5 was, and it is still unable to produce actual academic text or research. The fake-sources problem is particularly bad!

Wait for AutoGPT. It can self-train and do live reinforcement learning. Basically, meta-aware.
 
I'd love to try ChatGPT but they want my phone number. Sorry, no.
If you’ve ever used WhatsApp or even if anyone who has your phone number in their Contacts uses it (regardless of whether you do) then Meta already has your number, as does anyone else who’s willing to pay for it.
 
Honestly, although Apple may have missed the boat on AI, they can afford to. They get their income more from hardware and services than from software, and the way things are integrated in Apple’s case means they don’t feel it as much. Google is under huge threat in the search space, but Apple hardly has a presence there. Microsoft is doing very well with Copilot in development tools and office software, and Apple will be behind there. But on the whole, Apple’s position is still very strong.
 
If you’ve ever used WhatsApp or even if anyone who has your phone number in their Contacts uses it (regardless of whether you do) then Meta already has your number, as does anyone else who’s willing to pay for it.
Right. On the other hand I'm not so interested in using it. I see what others are doing and, while it's cute, I have no use for it.
 
Thanks so much for responding. Sifting through the noise in this thread is difficult, but I really appreciate these concrete examples; the first one was actually pretty challenging. I eventually was able to get a correct list, but it took probably 10 minutes or so. I'm going to see if I can work out a single prompt that gets it right on the first shot. It's a really good use case because it's verifiable, and it's information the model probably does know but that isn't weighted very heavily in the training corpus.


The second one I got on the first shot, but I did continue the same conversation structure that I set up for the first one, so it already had some Canadian constraints in mind.
Hmm, that's actually a good question for a power user. I tend to run each new 'query' in a separate instance of GPT, so every single time I need to recontextualize the model. I wonder if running prompts in fewer instances might lead the model to better understand the kind of research I do?

Regardless, I do think there is a bigger problem for the so-called AI takeover of academia. I'm writing a book with a major publisher about a collection of artists (not the ones I listed) right now, and ChatGPT isn't able to find information on them. My publication on Google Books could easily just feed into GPT, leading to fewer sales of my text overall. So both I and others are in active conversations with publishers about how to keep data from being trawled by LLMs.
 
Apple is using machine learning to do things like recognizing handwriting, pulling text from images, and pulling objects out of images. Apple’s ML/AI efforts seem to have been focused on making their OSes more intuitive and adding new features. It’s obviously important to Apple, since they’ve been adding ML units to their chips for years now.
True. Still it would be nice if Siri wasn't so dumb. ;)
 
Apple is both a hardware and software company. What other companies have their own OSes and applications for their hardware, in the same comparison? MS is another example: not so much hardware, but ample software/OSes.
Last I checked, there is no official support for iOS or macOS on other hardware systems. Apple builds their software for their own devices; it is highly coupled with their hardware.
 
Last I checked, there is no official support for iOS or macOS on other hardware systems. Apple builds their software for their own devices; it is highly coupled with their hardware.
The definition of a software company doesn't mean they have to sell/license their OSes to other computer vendors. Apple also has a suite of applications they support. In Apple's case it's free; can MS say the same, even though MS offers hardware too? :D
 

#1: The Information Isn’t Always Accurate

ChatGPT answers questions with authority, but that doesn’t mean the information is correct. The bot sounds confident in its responses even when they are wrong, which can lead to confusion for customers. Even its makers openly acknowledge the bot’s shortcomings, stating: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”

#2: It Doesn’t Have Any Special Training

ChatGPT can comb the internet for information, but it doesn’t have special training or access to internal resources to provide personalized responses. If answers to customers’ questions aren’t available online but require industry or company knowledge, ChatGPT won’t be able to answer correctly—but that won’t stop it from responding.

#3: It Provides Different Answers Every Time

One of the lauded benefits of ChatGPT is that it offers a new response every time, but that creates an inconsistent customer service experience. Companies can never count on the bot to provide a specific answer, which makes monitoring customer needs and requests challenging. One of chatbots’ most common use cases is repetitive questions, such as order status or account information. These questions require a set answer, which ChatGPT can’t provide consistently.

Hi, Reality,

I find it fascinating that there are other people here that also like AI.
I would like to address some of the points you brought to the table.

Regarding point #1, you can greatly improve information accuracy in several ways. GPT's hallucinations tend to either diminish or stop altogether when you embed a database into the workflow and ask it to read from that database.

Here's how it works: suppose you would like to know more about the Official Microsoft Windows 11 API, and GPT is making up information. Bad bot!

What you do is either fine-tune it to give the required responses (i.e., you condition it to give the right answers by showing it examples of how it should respond, and then training it), and/or simply tell GPT, "here, read this documentation and tell me what it is about in your own words."
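For the fine-tuning route, here's a minimal sketch of preparing training data in Python, following OpenAI's legacy fine-tuning conventions (the Q&A pairs, the file name, and the base model are placeholders I made up for illustration):

```python
import json

# Hypothetical prompt/completion pairs for a Windows 11 API helper.
# The "\n\n###\n\n" separator and " END" stop token follow OpenAI's
# legacy fine-tuning guide; each pair shows the model how to respond.
examples = [
    {
        "prompt": "What does FlushFileBuffers do?\n\n###\n\n",
        "completion": " It flushes the buffers of a file handle, forcing buffered data to be written to disk. END",
    },
    {
        "prompt": "Which header declares CreateFileW?\n\n###\n\n",
        "completion": " fileapi.h, pulled in via windows.h. END",
    },
]

# Legacy fine-tunes consume one JSON object per line (JSONL).
with open("win11_api.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Then start the fine-tune from a shell with the legacy CLI:
#   openai api fine_tunes.create -t win11_api.jsonl -m davinci
```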

Fine-tuning is more reliable, but embedding documentation is more versatile. With ChatGPT, one way to make it read documentation is simply to copy and paste the documentation you want and ask it to learn it. Of course, that only lasts as long as the conversation's context window, so if you need a more permanent and reliable solution, you can use the API to feed GPT any documentation you want with every request.
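As a concrete sketch of that API route (this uses the pre-1.0 openai Python package; the model name and the documentation string are placeholders, not anything official):

```python
import openai

openai.api_key = "sk-..."  # your API key

# The documentation you want GPT grounded in. It must be re-sent with
# every request, because the model has no memory between API calls.
docs = """(paste the relevant Windows 11 API documentation here)"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "Answer ONLY from the documentation below. "
                       "If the answer isn't in it, say so.\n\n" + docs,
        },
        {"role": "user", "content": "What is this API about, in your own words?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```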

Of course, in this case, you will pay per token. But the cost tends to be small, especially with the dumber AIs (there are several AI variants, some smarter than others).

I mostly covered point #2 already, but I would like to add that ChatGPT does have special training. However, it is fine-tuned to interact with you in a conversation, which GPT (the default AI that you use with the API) is not. No, you CANNOT fine-tune ChatGPT, but you can definitely fine-tune GPT.

Regarding point #3, if you use the API, there's a setting called "temperature". The closer it is to zero, the more predictably the model responds, tending to give the same answer every time.
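For example (same pre-1.0 openai package as above; note that temperature 0 makes output near-deterministic, not perfectly guaranteed to be identical):

```python
import openai

# Ask the same question twice. With temperature=0 the model always picks
# its most likely next token, so the two answers should (nearly) match.
for _ in range(2):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{"role": "user", "content": "What is your return policy?"}],
    )
    print(response["choices"][0]["message"]["content"])
```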

If after setting the temperature to zero you STILL find that it's not complying, you can simply integrate it with the information you want, and either copy and paste the information and/or ask GPT to paraphrase it – whatever works for you.
 
Let’s be clear - Microsoft bought themselves a seat at the table with their investment in OpenAI. They do not have any in-house expertise in AI or speech assistants. Does anyone remember Cortana?

They are simply using OpenAI APIs to integrate that service across various customer touch-points (e.g. Bing). Everyone is acting like Microsoft themselves did something dramatically innovative.
It goes back to my earlier definitions of invention vs innovation.

Henry Ford did not invent the car, but he found a way to mass produce it and make it readily accessible to the people. This was his innovation. That's how you run a successful business, by having a successful go-to-market strategy, because that's ultimately what pays the bills.

The problem with OpenAI is that it's a product with no clear path to monetisation. The innovative company will be the one that can package it into a compelling user experience that makes me, the consumer, willing to pay for it. If Microsoft can somehow pull this off, then I consider them innovative. If they screw up the implementation, then sorry, I don't consider them to be innovative, no matter how impressive the underlying technology is.

That's how I look at it, at least.
 
Hi, Reality,

I find it fascinating that there are other people here that also like AI.
I would like to address some of the points you brought to the table.

Regarding point #1, you can greatly improve information accuracy in several ways. GPT's hallucinations tend to either diminish or stop altogether when you embed a database into the workflow and ask it to read from that database.

Here's how it works: suppose you would like to know more about the Official Microsoft Windows 11 API, and GPT is making up information. Bad bot!

What you do is either fine-tune it to give the required responses (i.e., you condition it to give the right answers by showing it examples of how it should respond, and then training it), and/or simply tell GPT, "here, read this documentation and tell me what it is about in your own words."

Fine-tuning is more reliable, but embedding documentation is more versatile. With ChatGPT, one way to make it read documentation is simply to copy and paste the documentation you want and ask it to learn it. Of course, that only lasts as long as the conversation's context window, so if you need a more permanent and reliable solution, you can use the API to feed GPT any documentation you want with every request.

Of course, in this case, you will pay per token. But the cost tends to be small, especially with the dumber AIs (there are several AI variants, some smarter than others).

I mostly covered point #2 already, but I would like to add that ChatGPT does have special training. However, it is fine-tuned to interact with you in a conversation, which GPT (the default AI that you use with the API) is not. No, you CANNOT fine-tune ChatGPT, but you can definitely fine-tune GPT.

Regarding point #3, if you use the API, there's a setting called "temperature". The closer it is to zero, the more predictably the model responds, tending to give the same answer every time.

If after setting the temperature to zero you STILL find that it's not complying, you can simply integrate it with the information you want, and either copy and paste the information and/or ask GPT to paraphrase it – whatever works for you.

It’s still spyware. It should be open source like it was supposed to be, it can be used for mass oppression in the wrong hands, it doesn’t silo your data, and it gives you no option to opt out of having your inputs used for training.

There are tons of new scams riding on the back of GPT, and they are not being transparent about how they treat your data.

Something super ugly is going to happen soon linked to these data abuses.
 
True. Still it would be nice if Siri wasn't so dumb. ;)
For sure. I’d also like it if using Siri wouldn’t break HomeKit. Works for months without issue. Ask Siri to turn on a light and boom!, “Updating…”. Every. Single. Time.
 
For the most part, Apple doesn’t show off half-baked products. I have to wonder if they’ve done their own testing, and the hallucinations, inaccuracy levels, and sometimes abusive responses of chat AIs have them keeping their work under wraps until it’s usable by the general public.
 
It’s still spyware. It should be open source like it was supposed to be, it can be used for mass oppression in the wrong hands, it doesn’t silo your data, and it gives you no option to opt out of having your inputs used for training.

There are tons of new scams riding on the back of GPT, and they are not being transparent about how they treat your data.

Something super ugly is going to happen soon linked to these data abuses.

Arguing that OpenAI is not transparent about how they handle data is one thing (and neither are Apple, Microsoft, or Google in some instances). It's another to claim that there are many scams riding on the back of GPT.

OpenAI is not directly responsible for people who take advantage of GPT to commit fraud. That would be like saying Apple is directly responsible if people use Siri to scam others, or that Microsoft is directly responsible because someone created an infected Docx file to infect others and steal their money.

Claims have to be reasonable, don't you think?
 
Last time they allowed Siri to communicate out, there was massive uproar over it and they immediately decided to triple down on their privacy motto. Apple knows that privacy is a huge selling point of these devices, so they’re not going to risk it at the moment. It’s not hard for them, but the customers got very heated about it. We’re the reason why Siri is dumb.

That's why I think the "enhanced" Siri should be optional, requiring user consent to loosen the privacy protections, at least while it's in development.
 
I'd love to try ChatGPT but they want my phone number. Sorry, no.
This is to stop people from creating multiple accounts to access the free trial. Every person (person, not account) is given a limited amount of free computational time... because AI computation costs electricity and money.

It's not to invade your privacy or sell your phone number. It's a cost-control mechanism, to make sure they don't bankrupt themselves with "free loopholes".
 
This is to stop people from creating multiple accounts to access the free trial. Every person (person, not account) is given a limited amount of free computational time... because AI computation costs electricity and money.

It's not to invade your privacy or sell your phone number. It's a cost-control mechanism, to make sure they don't bankrupt themselves with "free loopholes".
I get that. Truth is, I don't have much use for it other than as a novelty. I can't see myself using it on a regular basis.
 