A dystopian hellscape run by hallucinating automation that gaslights people into believing their existence amounts to something more than feeding data into the model.

That's no better than middle management today! :)

I come from a software automation background, so it's probably not a huge surprise that I'm so excited by all this. :D My team has done a lot of really incredible stuff over the past 12 years to drive up value and drive down costs by automating literally everything we can. Apple Intelligence is just going to take things to an even higher level for us, using the devices and technologies we rely on every day to radically reduce friction in many different processes.
 
I'm on a 30 day trial which I think is open to anyone.

Would I pay for this? Hell no.

Will I pay for Apple Intelligence? Hell yeah. It's clear Apple is building a cohesive intelligence experience around its entire ecosystem that just makes sense for everyone.

So what you're saying is you'll pay for something which isn't even released yet, but you won't pay for something that is released because it has no value.

That's some rational thinking, that.
 
So what you're saying is you'll pay for something which isn't even released yet, but you won't pay for something that is released because it has no value.

That's some rational thinking, that.

If you're heavily invested in Google Workspace and use Android, there's probably a ton of value there for you. We use Apple hardware and software to manage our business. If you can tell me how we can use Gemini to improve workflows in Apple Mail, Apple Calendar, Notes, Reminders, etc., then please give me some of your rational thinking.
 
I come from a software automation background, so it's probably not a huge surprise that I'm so excited by all this. :D My team has done a lot of really incredible stuff over the past 12 years to drive up value and drive down costs by automating literally everything we can. Apple Intelligence is just going to take things to an even higher level for us, using the devices and technologies we rely on every day to radically reduce friction in many different processes.

I'm sure you do. I run a very large engineering team at a very large software company, as well as having built most of its principal product initially. That is on top of spending a decade as an EE / embedded engineer designing things that left the planet. While I respect automation as a noble goal and I support it, one has to consider determinism and idempotent behaviour, neither of which an LLM delivers.

Specifically in the context of software engineering, there are two sides to this argument: science versus faith. We must err on the side of science, which means that anything we do must be objective, repeatable, documented and reproducible. And that means it must reach a certain standard of evidence, which an LLM, or an engineer using one, cannot ever meet.

A fine example is the queue consumer problem: how does one reliably consume a message from a queue, i.e., ensure that a message is delivered at least once (exactly once is even more difficult!)? The question is extremely difficult to reason about even for seasoned professionals, because the answer depends on many things, including the consumer's durability. Of course the lazy developer will get ChatGPT or whatever to write a queue consumer and assume that it works. Eventually that becomes the de facto standard and the brain drain begins. We already have problems recruiting people who can solve this problem effectively, and we have to, or the SEC will ream us. While we haven't been reamed yet, the problem was discovered, and the root cause turned out to be a first monkey using a piece of generated code which guaranteed delivery exactly 0-3 times depending on the weather, approved by a second monkey who used an automated review tool to review it, and a third monkey using an automated test generator to fail to test it. Because these tools were specified, trusted purely on faith without verification, and implemented in faith, no one was ultimately responsible.
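The at-least-once pattern described above can be sketched in a few lines. This is a toy in-memory stand-in for a real broker (all names here are illustrative, not from any real library); the two load-bearing ideas are acknowledging a message only after processing succeeds, and deduplicating on a message ID so that redeliveries are handled idempotently:

```python
import queue

class ToyQueue:
    """Toy stand-in for a message broker. A message is redelivered if the
    consumer fails before acknowledging it, which is what makes delivery
    at-least-once rather than exactly-once."""

    def __init__(self):
        self._pending = queue.Queue()

    def publish(self, msg):
        self._pending.put(msg)

    def consume(self, handler, processed_ids):
        while not self._pending.empty():
            msg = self._pending.get()
            try:
                # Deduplicate on message ID: redelivered or duplicate
                # messages are skipped, making consumption idempotent.
                if msg["id"] not in processed_ids:
                    handler(msg)
                    processed_ids.add(msg["id"])
                # The "ack" is implicit: we only forget the message after
                # the handler has succeeded. Acking before processing would
                # silently drop messages on a crash (at-most-once instead).
            except Exception:
                # Simulate broker redelivery on failure. A real system
                # would cap retries / use a dead-letter queue.
                self._pending.put(msg)

results = []
q = ToyQueue()
q.publish({"id": 1, "body": "hello"})
q.publish({"id": 1, "body": "hello"})  # duplicate delivery from the broker
q.publish({"id": 2, "body": "world"})

seen = set()
q.consume(lambda m: results.append(m["body"]), seen)
print(results)  # each logical message is processed once despite duplicates
```

Swapping the order (ack first, process second) turns this into at-most-once delivery, which is exactly the "0-3 deliveries depending on the weather" failure mode the generated code exhibited.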

It only takes a dead body for that to turn into a $50,000,000 problem where EVERYONE is responsible.

There is too much faith and little to no verification. To draw parallels with the hard engineering disciplines, software is still in the 1800s.

[Image: train wreck at Montparnasse station, 1895]
 
Give us more comprehensive language support. Lots of things I can't do in my native language. More to come, I bet. For me, AI is only icing on that cake.
 
No. Not interested.
1) I’m already very intelligent so I don’t need help
2) It will make me lazy and my brain will not be exercised
3) I think there are better ways to spend electricity. Having AI answer 3-5 questions uses as much electricity as washing a whole machine full of laundry.
 
I'm sure you do. I run a very large engineering team at a very large software company, as well as having built most of its principal product initially. That is on top of spending a decade as an EE / embedded engineer designing things that left the planet. While I respect automation as a noble goal and I support it, one has to consider determinism and idempotent behaviour, neither of which an LLM delivers.

Specifically in the context of software engineering, there are two sides to this argument: science versus faith. We must err on the side of science, which means that anything we do must be objective, repeatable, documented and reproducible. And that means it must reach a certain standard of evidence, which an LLM, or an engineer using one, cannot ever meet.

A fine example is the queue consumer problem: how does one reliably consume a message from a queue, i.e., ensure that a message is delivered at least once (exactly once is even more difficult!)? The question is extremely difficult to reason about even for seasoned professionals, because the answer depends on many things, including the consumer's durability. Of course the lazy developer will get ChatGPT or whatever to write a queue consumer and assume that it works. Eventually that becomes the de facto standard and the brain drain begins. We already have problems recruiting people who can solve this problem effectively, and we have to, or the SEC will ream us. While we haven't been reamed yet, the problem was discovered, and the root cause turned out to be a first monkey using a piece of generated code which guaranteed delivery exactly 0-3 times depending on the weather, approved by a second monkey who used an automated review tool to review it, and a third monkey using an automated test generator to fail to test it. Because these tools were specified, trusted purely on faith without verification, and implemented in faith, no one was ultimately responsible.

It only takes a dead body for that to turn into a $50,000,000 problem where EVERYONE is responsible.

There is too much faith and little to no verification. To draw parallels with the hard engineering disciplines, software is still in the 1800s.

[Image: train wreck at Montparnasse station, 1895]

I absolutely understand that the extreme approach we take to automation is not suitable for every business. I can relate to some of the points you raise, especially in regard to documentation. We learned some very painful lessons as our business grew in the early phases. Far too much valuable knowledge was retained by single specialists within our business. Today, I'm very proud of the depth of searchable knowledge we have documented on our intranet. Onboarding a new hire hasn't been a problem for many, many years. Making sure they adhere to the same high standards when documenting their own work, though, can be bumpier. :D

We've certainly had our fair share of issues as we've scaled up our automated approach, including one or two near-catastrophes we had to work days and nights without sleep to sort out. I'm sure we've all been there, though!

For us, the benefits of automation far outweigh the risks. If huge mistakes carried risks of huge fines or financial losses, possibly even imprisonment, then yeah, I understand why you're always striving to err on the side of absolute caution.

We do a pretty excellent job in QA with our work overall. I suspect, based on what you wrote above, you would be horrified by some of the automated QA tools we use, haha. :D Humans are heavily involved, though, in using, analysing and understanding the results from those tools.
 
I absolutely understand that the extreme approach we take to automation is not suitable for every business. I can relate to some of the points you raise, especially in regard to documentation. We learned some very painful lessons as our business grew in the early phases. Far too much valuable knowledge was retained by single specialists within our business. Today, I'm very proud of the depth of searchable knowledge we have documented on our intranet. Onboarding a new hire hasn't been a problem for many, many years. Making sure they adhere to the same high standards when documenting their own work, though, can be bumpier. :D

We've certainly had our fair share of issues as we've scaled up our automated approach, including one or two near-catastrophes we had to work days and nights without sleep to sort out. I'm sure we've all been there, though!

For us, the benefits of automation far outweigh the risks. If huge mistakes carried risks of huge fines or financial losses, possibly even imprisonment, then yeah, I understand why you're always striving to err on the side of absolute caution.

We do a pretty excellent job in QA with our work overall. I suspect, based on what you wrote above, you would be horrified by some of the automated QA tools we use, haha. :D Humans are heavily involved, though, in using, analysing and understanding the results from those tools.

I'll add that I'm not saying your perspective is wrong, of course; just proceed with caution and verify what you think is true. It's not because I'm a cynic but because I've made a lot of mistakes. :)

I think risk appetite varies across industries, of course. But the problem is that the industry is poor at risk appraisal. People like to amble over to the finance side of things and discover that it's a lot harder, and we're not even strict compared to aviation, space or defence. It really puts things into perspective when there's a price on failure.
 
Bing Image Creator is free and it looks like it works better than Apple's Image Playground. Google's Imagen 3 / ImageFX is also free. Midjourney is usually a paid service, but it's also temporarily free.

If you want to find good coffee along your route, Google Gemini is accessible via the Google app on iOS, and you can ask it to find you good coffee (or whatever you're in the mood for) in a specific area or along a specific stretch of road, and it'll show you those results on a map, along with links to establishments, customer ratings, and a little description of what they're like. It's much easier to access and gives more information if you're using a Pixel, but it still works surprisingly well on iOS. Perplexity (also available via an app) will do the same thing, with even more information.

All of this is available today, on your current iPhone. No betas, no waitlists, no "maybe you'll get it by this date, but only if you buy a new device."
You misinterpreted my post 😁. I’m cynical as to the current state of ‘AI’, be it Google, Apple or Whatever. I’m waiting for the real deal, not some hodgepodge of apps and data that I have to manipulate myself to find what I want.

Apple is not really ‘behind’, because nobody is yet ‘far ahead’.
 
You guys saying you've never used Siri, really? Never? Never even asked it to start a timer, set an alarm, or see what the weather's going to be like?

Anyway, I honestly have zero need for something to rewrite my emails. I wrote something in a certain way, with all that information for a good reason. If I wanted someone to come along and remove a few words here and there then I would be okay with that, but rewriting my email in the way the examples show is really just losing the point and context of the email. As far as I can see, that aspect of Apple Intelligence is a no-no for me.

And, given that, the rest is likely going to be ignored, too.
 
The only AI features I would like are phone-based; I don't care much about off-phone AI. If they gave Siri the ability to actually do something useful, that would be nice.
 
I keep hearing about how the 15 is out of date because it will have no AI. Sure, it will be the selling point in a video, but I never use Siri. In a few years, when foldables and better cameras arrive, AI will just be there as a standard feature for those who wish to use it. I don't see it being a main feature; I read earlier in the year that even Apple doesn't think it will be a selling point.
I hear so many saying they don't care at all for voice assistants or Siri and don't want AI because they never use Siri.

Do you know why you don't use Siri? Because you have to learn what phrases and words it recognizes. And you also have to speak in a specific cadence and manner to have it even pick up your words.

And even when you do everything right, Siri still might not work, for many reasons. It's frustrating and too often unreliable.

I'm not saying Apple Intelligence is worth it just to get a solid and useful voice assistant.

But the reason you don't use Siri is not because the idea of a voice assistant is bad or you have no use for it. It's because Siri has always sucked.

There's also this notion in here that some or all versions of Siri have been "AI" which, if we consider what LLMs have given us, is incorrect.

Siri post versus pre Apple Intelligence will be a night and day difference.
 
I'm guessing a lot of people here don't understand what a 10uH thingy is, nor what a 22pF thingy is.

Without using more and more energy to generate another 'AI' response that barely anyone here understands, what are you guys talking about?!

The point I'm making is that you're saying this answer or that answer is wrong, but showing us screenshots of an 'AI's' idea of your question isn't helping, because we don't understand what the correct answer is.

If you're saying the question is bunkum, then surely the 'AI' should be telling us the question is unanswerable? If none of them are saying that, then they're all wrong, correct?

Please don't kill another puppy by asking another AI to answer the question.
 
I mostly don’t care about AI. I use the chat assistant Poe from Quora from time to time, just to check what I already know or to put its knowledge to the test, for the sake of fun. I believe that AI is still far from doing research on its own, and a human is able to get into delicate details the machine is unable to see.

A few times I have tried this tool to summarise some studies. While I understood most of it, the AI’s summary was rather bland, lacked crucial detail and was too short to actually deliver the main points. I am not saying the machine does not learn, but somehow doing it myself feels more natural at the moment. Maybe the future will change everything and people will all become “chatbot managers”, lol. At least that seems to be where it’s all going.

But I have some strong anti-AI sentiments too. Those are all about the creative industry’s use of AI. For many years I have been SICK AND TIRED of computational photography in smartphones. This was probably not AI in the classic sense but rather a set of stupid algorithms. I just mean, if a camera’s sensor and lens cannot deliver sharp details naturally, then camera makers SHOULD NEVER try to bring that clarity back in post, NEVER. Same with noise reduction, HDR (if the sensor is incapable of good dynamic range, why glue multiple shots together all the time??), and saturation. Anecdotally, it is believed most people like it.

Just look at this shot of mint my SE3 took today. Do you see anything? Because I don’t! Not a single focus point, and tons of automatic HDR that turned shadows and highlights into an equal mess. I could probably save it with some vignetting, spot edits and such, but what if I just don’t want to? It looks extremely unnatural.

[Attached photo: the mint shot described above]
 
Genuine question; what is an agnostic AI?
Wasn't my term, but I'd define it as one that doesn't have anything pushing it in a particular political or philosophical direction. Anecdotally, every time someone tries to create one, it trains itself to be far right. Some may say that's because the far right is correct; others, that they are simply the most vociferous and prolific group posting online.
 
What we have now is not AI.

It's a crappy tokenizer and statistical inference network that outputs something stupid that people haven't worked out is stupid yet. This is all powered by an investment mill and a hype engineering operation like no one has ever seen before.

Quick example.

It's impossible to have a 10uH capacitor (it should be 10pF), and a 22pF inductor also makes no sense (it should be 22uH). Also, 1THz is a frequency which is waaaaay outside any reasonable model where a lumped-element circuit even makes sense to reason about.
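For anyone checking the numbers: with the component values swapped back to something physical (a 22uH inductor and a 10pF capacitor), the standard resonant-frequency formula f = 1/(2π√(LC)) puts an ideal LC tank at roughly 10.7 MHz, nowhere near 1 THz. A quick sketch:

```python
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Physically sensible values: 22 uH inductor, 10 pF capacitor.
f = resonant_frequency_hz(22e-6, 10e-12)
print(f"{f / 1e6:.1f} MHz")  # ~10.7 MHz, about five orders of magnitude below 1 THz
```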

An EE tutor / engineer would roll up a newspaper, hit you round the back of the head and tell you to stop being a dumbass.

CoPilot answers you authoritatively while completely misinterpreting the question and giving you an answer which has no meaning at all.

It's hopeless.

View attachment 2409915
I have gotten some HORRIBLE programming responses as well. These tools are just so frustrating. And especially since every product and company is shoving it in our face.
 
I'm not interested in Apple Intelligence, just because it seems so behind all the competition.

A tool to rewrite e-mails? Image Playground? A better voice assistant? Most third-party AI services will give you all that and more, and you won't need to get a new phone either. Maybe someday, Apple Intelligence will be genuinely competitive in the AI space, but right now, unless you really like Siri, there's not much to look forward to with Apple's AI efforts.
Most third-party services require access to your personal data, and that's Apple's biggest selling point here.
If all that's advertised is achieved, you'd be using the AI without even realising it, as it's tailored to you. It might be 2-3 years out for all the devs to catch up, though, which is when you'd benefit the most.
 
I genuinely don't get how people take the use of AI to such hypothetical extremes. People will always want to talk to other people.

Well, it starts with getting AI summaries of the people talking to you.

Then it ends with you telling the AI to send your friends a message saying [your intent].

And now you talk to people through AI.
 