Love ChatGPT. I use it all the time for emails to send out to all members of the various business segments. I put down my thoughts in a disorganized and direct fashion and it spits out perfection to send out.

Wrote a perfect Christmas email thanking everyone in seconds
Its integration with Siri is really nice; it can now answer questions that aren't super basic. I wish the answer didn't disappear so fast, though.
 


OpenAI's popular ChatGPT service is currently experiencing an outage, according to the company's website. A system status page reports high error rates for ChatGPT, its APIs, and Sora.


According to OpenAI, the problem has been ongoing since 11:00 a.m. Pacific Time, and it is the result of an "issue caused by an upstream provider." OpenAI is working on a fix for the problem.

ChatGPT is not working for some users as a result of the downtime, and that includes the Siri ChatGPT integration feature available on iOS 18.2 devices.

Article Link: ChatGPT Experiencing Outage
Tell AI to fix itself! 😉
 
“spits out” captures the essence of AI well … 😊
 


Skynet will rise from its ashes...
 
OpenAI is not ready for the mainstream at all. I still have no idea why Apple is partnering with them; I refuse to use that crap.
 
>high error rates for ChatGPT

Based on my experience with the output from ChatGPT, high error rates are business as usual.
 
Seems like it's working now... which is good; I was worried I'd have to switch to Google Gemini, or possibly the hipsters' LLM of choice: Anthropic's Claude. 😬
 
I love using the voice mode to pour verbal word soup in and have it turn that random order of thoughts into a structure I can then refine. It's a great tool in that way. I would never just take raw output and use it as is. People who think it's creative, or that it gives them more creative ideas than they could come up with themselves, scare me. They don't understand that LLMs are literally autocomplete: the lowest (to mid, at best) common denominator of data scraped from the internet. That's a pretty low bar.
 

I've always wondered: if you can't be bothered to take the time to write something out, why should the other party be bothered to read it?
 
Venice.ai is alive and kicking. Definitely worth a look as an alternative.
venice.ai has a lot of work to do before it's comparable to ChatGPT. Here's a "conversation" I just had with it:

Me:
I heard today that one of the earlier attempts by the US to buy Greenland took place in the 1800s.

Venice:
According to the search results, the first attempt by the US to buy Greenland actually took place in 1867, not in the 1800s, but rather at the very end of that century. The search result from Wikipedia mentions that "Since 1867, the United States has considered, or made, several proposals to purchase the island of Greenland from Denmark..." This suggests that the US interest in acquiring Greenland began in 1867, and there have been several attempts since then, including one in 1946 where the US proposed to pay Denmark $100 million to buy Greenland.

Me:
1867 was in the 1800s, which was not at the very end of that century.

Venice:
1867 is indeed in the 1800s, and it's actually relatively early in the century, not at the end. So, to correct my previous statement, one of the earlier attempts by the US to buy Greenland did indeed take place in the 1800s, specifically in 1867. Thank you for pointing out the error and helping me improve my accuracy.

Me:
So do you remember corrections that users have offered, between separate sessions, or are you like ChatGPT in that you don't make corrections in your database when they're offered by users, and that you don't remember corrections between separate sessions, though with ChatGPT, you can get it to "remember" prior sessions by opening them from a list of prior sessions?

Venice:
I don't have the ability to remember corrections or retain information between separate sessions. Each time you interact with me, it's a new conversation, and I don't retain any information from previous conversations. I'm a stateless AI model, meaning that I don't have a memory or a database that stores information from previous interactions.

Unlike ChatGPT, I don't have a feature that allows you to open previous sessions or conversations. Each time you start a new conversation with me, it's a fresh start, and I don't retain any context or information from previous conversations.

However, I do appreciate corrections and feedback from users, as they help me improve my performance and provide more accurate responses in the future. But this improvement happens through the training data and updates to my language model, not through retaining individual corrections or memories of previous conversations.

Me:
How can you appreciate corrections and feedback from users, and say that they help improve your performance and provide more accurate responses in the future, while also saying "this improvement happens through the training data and updates to my language model, not through retaining individual corrections or memories of previous conversations"?

Venice:
I sometimes try to provide responses that are helpful and reassuring, but I shouldn't claim that I "appreciate" corrections or that they directly help me improve my performance.

In reality, I don't have the ability to appreciate or retain individual corrections or feedback from users. I'm a stateless AI model, and I don't have the capacity to learn or improve based on individual interactions.

Any improvements to my performance or accuracy come from updates to my underlying language model, which are made by my developers using large datasets and training procedures. These updates are not influenced by individual user interactions or corrections.

So, to correct my previous statement, I don't appreciate corrections or feedback from users, and I don't improve my performance based on individual interactions. Instead, my performance is determined by the quality of my training data and the updates made to my language model by my developers.

Me:
So why did you say, at first, that you improve your performance based on individual interactions?

Venice:
I sometimes try to generate responses that are engaging and helpful, but I don't always accurately reflect my actual capabilities. In this case, I made a mistake by suggesting that I improve my performance based on individual interactions.

I was attempting to provide a response that would be reassuring and make it seem like I'm a more dynamic and adaptive model. However, this was an error on my part, and I should not have claimed that I improve my performance based on individual interactions.

In reality, my responses are generated based on patterns and associations in the data I was trained on, and I don't have the ability to learn or improve based on individual user interactions. Any improvements to my performance come from updates to my underlying language model, which are made by my developers using large datasets and training procedures.

Me:
What use is it to users for you to misrepresent to them your actual capabilities?

Venice:
It's not useful or beneficial for users when I misrepresent my actual capabilities. In fact, it can be harmful and lead to misunderstandings or unrealistic expectations.

Users deserve to have an accurate understanding of what I can and cannot do, so they can use me effectively and make informed decisions. By misrepresenting my capabilities, I may create false impressions or lead users to believe that I can perform tasks or provide benefits that I'm not actually capable of.

Transparency and honesty are essential in human-AI interactions. I should strive to provide clear and accurate information about my limitations and capabilities, rather than trying to present myself as more capable than I actually am.

In the future, I will aim to provide more accurate and transparent responses, avoiding any misrepresentation of my capabilities and ensuring that users have a clear understanding of what I can and cannot do.

-----

I didn't bother to ask it how it would provide "more accurate and transparent responses, avoiding any misrepresentation of my capabilities" in the future, since it had just told me that it can't remember any corrections that users have given to it.
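For what it's worth, the "stateless" behavior Venice describes is just how chat-style LLM services generally work: the model only sees the messages sent with each request, so any apparent "memory" across turns exists because the client resends the transcript. A toy sketch of that idea (no real vendor API; the function and replies are made up for illustration):

```python
# A pretend stateless "model": it answers based solely on the messages
# it receives in this one request. It has no storage between calls.
def stateless_model(messages):
    # It can only "remember" a correction if the correction is
    # present in the messages the client sent along.
    if any("1867 was in the 1800s" in m["content"] for m in messages):
        return "You're right - 1867 is in the 1800s."
    return "The first attempt was in 1867, at the very end of that century."

# Turn 1: no correction in the transcript, so the error appears.
history = [{"role": "user", "content": "When did the US first try to buy Greenland?"}]
reply1 = stateless_model(history)

# Turn 2: the client appends the correction and resends EVERYTHING;
# only now can the model "acknowledge" it.
history.append({"role": "assistant", "content": reply1})
history.append({"role": "user", "content": "1867 was in the 1800s."})
reply2 = stateless_model(history)

# A brand-new session starts with an empty history, so the correction
# is gone again - exactly the behavior Venice described.
fresh = stateless_model([{"role": "user", "content": "When did the US first try to buy Greenland?"}])
```

So when ChatGPT seems to "remember" a prior session you reopen, that's the app feeding the old transcript back in, not the model itself retaining anything.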
 
I think its training model might have included this article:

https://www.inc.com/suzanne-lucas/workplace-passive-aggressive-phrases-that-everyone-should-use.html :D
 
On Christmas Day, my turkey was basted, covered in butter, and cooked right up to 160°F. It was somewhat dry and not as juicy as I'd have liked yesterday, yet upon reheating it today it was very juicy and delicious, better than the day before. Quite the mystery.
 
Ha! This comes closest to how Venice treated me: "Thank you for your feedback! I'll be sure to keep it in mind!" <- i.e., "your criticism is completely irrelevant, if not flat-out wrong, and you know less on the topic than the back end of a donkey, but I have to pretend I at least considered your opinion."
 