> I believe Apple will be very late to the AI game, but once it fully commits, the product will be superb.

Just like Siri. Right?
The problem with ChatGPT is that privacy is a big issue. I use it, but I try not to put in private information.
> Just admit you’ve mismanaged your AI efforts and were caught flat-footed by generative AI. Don’t disguise it as some exercise in thoughtfulness.

I’m just wondering what AI looks like to you. Is it an advanced spell check to cover for inept typing or education, is it something that is going to benefit you in terms of health, or is it something else?
My iOS keyboard can’t even accurately autocorrect “we’re” versus “were” from context, and that’s a really basic form of AI.
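For what it’s worth, even a hand-written context rule can get the easy cases of “were” versus “we’re” right. A toy sketch — the subject list and the rule are invented for illustration; a real keyboard would use a statistical language model, not a rule like this:

```typescript
// Toy context-sensitive autocorrect for "were" vs "we're".
// Invented rule: "were" is a verb and usually follows a subject
// ("we were late"), while "we're" ("we are") usually opens its own
// clause ("we're late"). SUBJECTS is a made-up illustrative list.
const SUBJECTS = new Set(["we", "you", "they", "there", "who", "if"]);

function correctWere(words: string[], i: number): string {
  const prev = i > 0 ? words[i - 1].toLowerCase() : "";
  return SUBJECTS.has(prev) ? "were" : "we're";
}
```

So `correctWere(["we", "were", "late"], 1)` keeps “were”, while `correctWere(["were", "late"], 0)` suggests “we’re” — which is roughly the distinction the stock keyboard keeps missing.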
> So I asked Siri about ChatGPT a few times and kept getting this:

Why do people feel like Siri is the only form of AI Apple have?
> Siri is to AI what a phone is to a smartphone: old, obsolete, and annoyingly cute. But… ChatGPT and other AI tools have taken the world by storm with no regulations and no understanding of the consequences for our society, or of how they can really help our lives. Personally, I am in favor of a pause, like many.

I think he has a deeper understanding of AI than most, which is why it is so well integrated into our watches and phones. ChatGPT is best left to those who can’t think for themselves. Outsourcing creativity. Now that’s a great idea 🤦🏻♂️
I am not sure that Cook is the man to lead Apple to the next level, or that he has a deep understanding of what is happening.
"Very interesting." How insightful. The man heads a Fortune 10 company and can't say anything bold or visionary? Very interesting indeed....
> Why do people feel like Siri is the only form of AI Apple have?

I am wondering how AI is integrated into our watch and phone? I am obviously not familiar with it. Thanks
> I don’t think Siri will be Apple’s answer to generative AI. Apple should start looking at buying companies, or tie up with Elon Musk on the new generative AI. ChatGPT right now is a lot of hype; it’s just the beginning. Not many are talking about Bard, but Google can do a lot better than ChatGPT in code generation and research synthesis.

Why would Apple want to work with Musk? He’s at the hiring stage in his AI project; if Apple’s starting that basic, they can just start a new project. If they’re looking for something already developed, they have to look at companies that already have working, or close to working, models.
> I am wondering how AI is integrated into our watch and phone? I am obviously not familiar with it. Thanks

Cook mentions it in the article: fall detection in the Watch, crash detection in the Watch and phone, and ECG in the Watch. Fall detection and ECG obviously work incredibly well, saving hundreds, if not thousands, of lives. Crash detection works but is a little flaky (too many false positives), given the very complicated circumstances and metrics involved — but many lives have still been saved, or at least given a quicker emergency service response.
> I believe Apple will be very late to the AI game but once it fully commits, the product will be superb.

This.
> If you don’t mind an off topic question we have all been wondering about: it has to do with the total number of posts you do. Are you a GPT in human form? Many of us speculate that you are the Apple equivalent of a Terminator bot, only nice. Instead of going back in time to eliminate humanity’s only hope, you were sent back to talk about Apple products.

I can vouch for @TheYayAreaLiving 🎗️ as a real person and good friend over years on MacRumors. She is not a GPT in any way or form. She is quite an active Apple supporter on the forums. I met her first in some of the groups watching their iPhones travel their way for opening week each fall, where we all have a good time tracking, speculating, and waiting for our new iPhones. 📱🖥️⌚
If you are an Apple bot from the future, can you please tell us how to increase the context memory of an LLM without incurring a quadratic cost or drastically decreasing execution time? If you are, however, a human, please don’t be offended.
What if you were sent back from the human resistance against BARD to save us all?
> I can vouch for @TheYayAreaLiving 🎗️ as a real person and good friend over years on MacRumors. She is not a GPT in any way or form. She is quite an active Apple supporter on the forums. I met her first in some of the groups watching their iPhones travel their way for opening week each fall, where we all have a good time tracking, speculating, and waiting for our new iPhones. 📱🖥️⌚

@rjjacobson Thank you so much!!! I appreciate you. ♥️ 🤗
If I ask Siri “Which is the tallest building in the world?” in Swedish, I get the answer “Here is what I found on the Internet” and a list of suggestions is displayed. Google Assistant gives me the answer directly. Siri hasn’t become smarter in the past decade at all.
> So Siri is Apple half-arsing it as if it was just a side project?

It does feel like that sometimes. 🤨
Currently, AI needs human prompts and human ideas to get the ball rolling. ChatGPT has the information and can produce real answers, but it can’t think for itself. Yesterday, for example, it took about 10 tries for it to show the correct syntax to merge an interface in TypeScript (and this was 4.0, mind you).
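For reference (the post doesn’t show the prompt or the wrong answers it got), the TypeScript feature in question — interface declaration merging — is pretty small. A minimal example, with made-up names:

```typescript
// Declaration merging: two interface declarations with the same name
// in the same scope are merged into a single type by the compiler.
interface Box {
  width: number;
}
interface Box {
  height: number;
}

// The merged Box has both members, so a value must supply width AND height.
const b: Box = { width: 2, height: 3 };

function area(box: Box): number {
  return box.width * box.height;
}
```

The syntax really is just “declare the interface twice”; there is no keyword involved, which may be why a chatbot flails on it.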
Not quite. LLM AI works by trying to predict the next line of a conversation. You say something, and it uses some fairly complex math to figure out what the most likely reply is. Because it does not directly understand the facts, it might make up a perfect-looking reply that does not fit reality. Large language models don’t actually understand what fiction is. GPT-4.X occasionally makes you think it does. (I honestly think there is a spark of self-awareness in it.)
One funny thing you see when you are in the early stages of building a new LLM: you ask it a question, it responds, and then it guesses your response to its response. After that, it comes back with your side of the dialog again. You can ask it about the weather in Boston, walk away, come back 30 minutes later, and find you are talking to it about the best place to find car parts and whether car shops can bake a cake.
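The prediction loop described above can be sketched with a toy lookup table standing in for the neural network — everything here (the table, the numbers, the names) is invented for illustration. Note that the loop just picks the likeliest continuation; nothing in it checks whether the output is true, which is why made-up but plausible replies come out:

```typescript
// Toy sketch of autoregressive next-token prediction. A real LLM
// computes the next-token distribution with a neural network; here a
// hand-made probability table stands in for it.
type Distribution = Record<string, number>;

const toyModel: Record<string, Distribution> = {
  the: { cat: 0.6, dog: 0.4 },
  cat: { sat: 0.9, ran: 0.1 },
  sat: { "<end>": 1.0 },
};

// Greedy decoding: repeatedly append the most probable next token
// until the model emits an end marker or has no entry for the context.
function generate(start: string, maxTokens = 10): string[] {
  const tokens = [start];
  for (let i = 0; i < maxTokens; i++) {
    const dist = toyModel[tokens[tokens.length - 1]];
    if (!dist) break;
    const next = Object.entries(dist).sort((a, b) => b[1] - a[1])[0][0];
    if (next === "<end>") break;
    tokens.push(next);
  }
  return tokens;
}
```

With this table, `generate("the")` walks the: → cat → sat and stops — most likely at every step, with no notion of fact versus fiction.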