In my experience, Apple Intelligence has proven to be largely ineffective.
One specific instance occurred when I was driving to HED to collect medication for my child. I posed a specific question to Siri regarding the medication, but it failed to provide an accurate response. Subsequently, I requested that Siri inquire with ChatGPT, which yielded an answer. However, I was unable to read it while driving. I then attempted to have Siri read the reply, but it appeared to be unaware of the context of my request, resulting in significant frustration.
 
Efficient enough to mitigate the carbon footprint?
A recent study suggested that a typical query actually uses less energy than an LED light bulb does in a few minutes.


However, we believe that this figure of 3 watt-hours per query is likely an overestimate. In this issue, we revisit the question using a similar methodology, but with up-to-date figures and clearer assumptions. We find that a typical ChatGPT query using GPT-4o likely consumes roughly 0.3 watt-hours, about one-tenth of the older estimate. The difference comes from more efficient models and hardware than in early 2023, and from an overly pessimistic token-count assumption in the original analysis.
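A figure in the 0.3 Wh range can be reproduced with a rough back-of-envelope model. The parameter values below (output tokens per reply, generation speed, per-query share of server power, overhead factor) are illustrative assumptions, not numbers from the study:

```python
# Rough per-query energy model for an LLM chat service.
# All parameter values are illustrative assumptions, not measured figures.

def energy_per_query_wh(output_tokens: float,
                        tokens_per_second: float,
                        power_watts: float,
                        overhead_factor: float) -> float:
    """Energy (watt-hours) to generate one response.

    power_watts is the per-query share of server power draw;
    overhead_factor covers non-GPU draw (CPU, networking, cooling/PUE).
    """
    seconds = output_tokens / tokens_per_second
    joules = power_watts * overhead_factor * seconds
    return joules / 3600.0  # 1 Wh = 3600 J

# Assumed values: ~500 output tokens at ~100 tokens/s, a ~150 W
# per-query share of node power, and 1.5x datacenter overhead.
estimate = energy_per_query_wh(output_tokens=500,
                               tokens_per_second=100,
                               power_watts=150,
                               overhead_factor=1.5)
print(f"{estimate:.2f} Wh per query")  # -> 0.31 Wh per query
```

The point of the exercise is that the answer is dominated by the token count and throughput assumptions, which is exactly where the older 3 Wh estimate is said to have been pessimistic.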
 

Then why is AI already such a big drain on the electric grid? Yes, I could maybe get a robot to tie my shoes or vacuum my floors for an energy expenditure similar to a light bulb, but should I? If millions or billions of people start using AI for trivial things, the effect is not a couple of watts. It becomes MW and GW pretty easily, hence the need for a revitalization of the nuclear power industry. We're no longer talking about reducing energy usage or swapping technologies to reduce carbon emissions; we're talking about ways to feed increased demand. All so that we can make ever-more elaborate emojis :-(
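The watts-to-megawatts point checks out arithmetically. A quick sketch, where both inputs (0.3 Wh per query, one billion queries per day) are assumptions for illustration:

```python
# Back-of-envelope: how per-query watt-hours scale to grid-level demand.
# Both inputs are assumptions chosen for illustration.
WH_PER_QUERY = 0.3      # assumed per-query energy (watt-hours)
QUERIES_PER_DAY = 1e9   # e.g. one billion users making one query a day

daily_wh = WH_PER_QUERY * QUERIES_PER_DAY
avg_power_w = daily_wh / 24  # average continuous draw over a day

print(f"{avg_power_w / 1e6:.1f} MW average continuous demand")
# -> 12.5 MW average continuous demand
```

At tens of queries per user per day the same arithmetic lands in the hundreds of megawatts, so small per-query numbers and large aggregate demand are not in conflict.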
 
Training costs are not inference. Using the models is relatively low power, at least for the current (bad) ones.

Complex chain-of-thought prompting combined with the newer reasoning capabilities is a different story: it can cost hundreds to thousands of dollars per question depending on complexity, and it does use a ton of power. This may or may not be reducible, but the AI firms aren't sharing their research advances publicly, so it's difficult to know.

My understanding is that even the $200/mo ChatGPT subscription is a net loss for OpenAI, though I assume that includes training costs. I expect reasoning capability to be very limited on cheaper tiers, and I doubt that $200/mo will remain the cap for long. I expect either a la carte options in the public-facing UI, or they'll simply push everyone with complex tasks to the API, which already charges based on usage.

...

Most people getting help with research are doing surface-level tasks, using the model to scan across its training corpus; they cannot get novel research out of current public LLMs. It can still accelerate things like competitive analysis, as someone mentioned, in the sense that it can look through its training data, but only statistically, not by directly sourcing anything, which is critical to know and most people don't. Ask it for its sources and you will find that many of them are inferred or simply do not exist. It is not a panacea, and true reasoning power requires enormous compute with current model development, even at the cutting edge. True "agentic AI" with current technology is going to be incredibly expensive for a while.

Within ~5 years the entire paradigm will likely shift and nobody will be using pure LLMs for anything except people holding on to legacy implementations. We need persistent memory and world models for true advances, and they are technologically impossible with the approach taken with LLMs where even the reasoning capability is more like a modification and brute-force tactic to force a square peg through a round hole.

Research into those is ongoing and will be for a while. We basically have to hope that whoever develops them is benevolent to a point and decides to open-source things, because if not, we're all going to be paying ungodly sums of money for "compute." And it will matter with these 'world' models, because they'll very likely be required to perform knowledge work in the future. You can see why this is a problem not only for technologists but for society as a whole.

Aside: Ignore anyone who ever mentions AGI, at least until the 2030s when some research may start arriving (my personal view is that biological computing will probably be a prerequisite and those processors are only now starting to barely operate in labs). It's all smoke and mirrors marketing now and will be for a while, even world models won’t qualify but I believe they will still be enormously impactful to society, for better or worse.

‘Fun’ times ahead.
 
It's so weird reading things like this. I haven't found any use for it in my personal or work life, and my wife tried it in her law practice but pretty quickly gave it up as well.

Ah well!
I use it for work as well as for personal things. At first I didn't know what to do with it besides jokes and playing around. You need to learn what it can do and what it can't do reliably but then it can get very useful very quickly.

Just imagine having a buddy you can ask anything. You want to know the meaning of a song? How much sun your new lemon tree needs, and why it isn't growing the way you want? What that meme means? How to calculate something, whether it's simple or very complex with a lot of variables?

Want to practice a new language? Just ask ChatGPT to quiz you on ten different irregular Italian verbs every day. Want to replace a part in your espresso machine but have no idea what that part is called? Ask it what to google for. Want new practice routines for your guitar? Want to teach your kids and their friends how to paint in watercolor, but have no teaching experience? Ask it to create a structured plan for five lessons. Want to learn to cook something new? Brainstorm with ChatGPT like with a friend who is also a professional cook. It can translate texts, even some strange local dialect or youth slang. Want to visit London with a personal travel guide tailored to your interests rather than the touristy trash?

Want to know why Napoleon started his Russian campaign, how the Ottomans thought about it, or even how the everyday person in Paris felt about it? Want to know why pulsars exist, why they are the way they are and not something else, what about that theory, how it fits with this one, and so on? It's like talking to a physics teacher, except you can ask things you might not ask your teacher.
 
Makes total sense. I do work with clients and we would never use it with client data (or honestly, even about a client without their data); but a good part of my job involves winning new business, researching competitors, etc. and that's where it has proven invaluable.
My company has a business deal with ChatGPT and we got a walled garden version just for us. So no data is leaked out and everything stays in-house.
 
Yep - we're investigating that for our internal stuff, but our clients generally have extremely strict data-protection rules and regulations - often we're using their locked-down hardware to connect to their network.

I actually saw language in one proposal that whoever won the work would not be allowed to use AI for anything more than brainstorming; and using it for brainstorming required prior approval and then a written report of what the AI tool produced and how it was used.
 
Walled garden but still in the cloud right?

Only Goldman-level corporations with considerable investment have gotten actually hostable versions, as far as I know.

I’m sure they do have data protections BUT my point in mentioning this is you have to wonder what the “we won’t train on your information” toggles actually mean for the higher level consumer plans.

Parallels to Amazon nearly rolling out a medical consultant pipeline using Alexa and during that process slipping up by mentioning current implementations didn’t meet the privacy and security requirements. Amazon killed that entire offering before it launched to shut that story up.
 
Yeah we are in the Fortune Global 500. Before the deal we were not allowed to use any AI or open clouds at work.
 
For those who say AI hasn’t done anything for us, the Nobel Prize in Chemistry just went to two research groups that used AI to revolutionize their work.

One group shaved decades off the process of mapping protein structures. If you ever donated computing power to Folding@Home, you contributed to the same line of research. Before 2020, scientists were determining about 10,000 protein structures per year. Then, in just a few months, this group predicted the structures of 200 million proteins, a feat that would have taken tens of thousands of years at the previous pace. Understanding what proteins look like and how they work is a game-changer for medicine, biology, and chemistry.

The second group worked on designing proteins from scratch. Take snakebite antidotes, for example. Traditionally, they’re made by injecting a small amount of snake venom into a large animal like a horse, then extracting antibodies from the horse’s blood to produce the treatment. But some people are allergic, or their bodies reject those antibodies. Thanks to AI, scientists can now create proteins with any desired structure and function. This means synthetic antibodies can be mass-produced in hours. Researchers are also developing proteins that break down plastic, neutralize explosives, absorb CO₂, or even fight cancer.

And AI is just getting started. In physics, it could help discover new materials for more efficient solar panels, faster and more powerful chips, smaller batteries, lighter and stronger materials, or fireproof compounds. It could revolutionize construction with better techniques, taller buildings, and safer bridges. In chemistry alone, AI has accelerated discoveries by a factor of 100,000. Imagine what it will do in other fields of science.
 
Isn't that Google's AlphaFold, which predicted over 200 million protein structures?

 
Amazing stuff. We recently got access to Gemini at work included in our Google Workplace suite. I've mostly used it to help write clearer emails.
 