It's nice to hear a report about this

One trend I've noticed in all the AI stuff is that it has some situations where it's very useful, right now

Where it's falling down, I think, is this seemingly somewhat rushed attempt to make it "be everywhere" and insistence that it will "do everything"

(not by users like yourself, but by the owners and developers of the tech)

They are busy trying to create a gold rush for themselves when it's not at all clear how ready or close to ready it really is for a lot of the things they claim.

In the context of this thread, for instance, I'm pretty darned sure we aren't anywhere close to "Elon has created a maximal truth machine" ... especially when he says it's using X as one of its two main data sources


Yes, it's saved me a ton of time over the last few months, but I also wasted 3-4 hours yesterday trying to get something to work with AI and it would not. They're not all wins, but the wins are out there.
 
Yes, it's saved me a ton of time over the last few months, but I also wasted 3-4 hours yesterday trying to get something to work with AI and it would not. They're not all wins, but the wins are out there.

I really appreciate your candor on this topic
It behooves all of us to be honest about the current state of things in this space
👍
 
I would really like everyone who posts such things to say what job/position they have. It seems to be almost universally IT people and coders.
I am a Management Consultant; no IT use here. I use it for research, poking holes in my recommendations, first drafts of reports, ideation/brainstorming, data analysis, competitor analysis, and other things.

The key, in my experience, is figuring out how to write effective prompts. I'm stealing this prompt from a professional group that I am a member of, but asking it to "give me examples of risks that I should be considering in a building renovation project" will give me basic, blah answers that aren't helpful. Instead, asking something like:

"You are an experienced project manager who specializes in construction management. Identify potential risks in a project to transform a designated hospital wing into a functional outpatient clinic. I've attached tables (other projects' completed risk register templates) that identify risks related to construction delays, budget overruns, patient safety, hospital operations during construction, and other potential risks. Ensure compliance with healthcare regulations and safety standards. Consider the impact on existing hospital operations and patient care. Ensure all relevant risks are identified, including construction delays, budget overruns, patient safety, hospital operations during construction, and other potential risks. Check if compliance with healthcare regulations and safety standards is adequately considered. Verify the risks are accurately categorized and described. Ensure that risk mitigation strategies are realistic and actionable. Format the risk register as a table."

Do I need to review that the risks are complete and error-free? Absolutely. Am I still going to run it by my team and our healthcare industry SMEs on staff to make sure it didn't miss anything? Of course. But it will knock that initial draft out in a minute or two, compared to the three or four hours it would take me or one of my team members; and even if a human drafted it, I would still need to run it by others on my team and the healthcare industry SMEs.
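The pattern in that prompt (role, task, attached context, explicit verification checks, output format) can be sketched as a tiny helper function. This is purely illustrative; the function and field names here are my own invention, not any vendor's API:

```python
# Hypothetical helper showing the structured-prompt pattern:
# role + task + context + explicit checks + output format.

def build_prompt(role, task, context_notes, checks, output_format):
    """Assemble a structured prompt from its components."""
    lines = [
        f"You are {role}.",
        task,
        f"Context: {context_notes}",
        "Before answering, verify the following:",
    ]
    lines += [f"- {check}" for check in checks]
    lines.append(f"Format the answer as {output_format}.")
    return "\n".join(lines)

prompt = build_prompt(
    role="an experienced project manager specializing in construction management",
    task="Identify potential risks in converting a hospital wing into an outpatient clinic.",
    context_notes="completed risk register templates from similar projects (attached)",
    checks=[
        "all relevant risks are identified (delays, budget overruns, patient safety)",
        "compliance with healthcare regulations is considered",
        "mitigation strategies are realistic and actionable",
    ],
    output_format="a risk register table",
)
print(prompt)
```

The point isn't the code itself; it's that each component (role, context, checks, format) measurably changes the answer you get back, which is why the "blah" one-line prompt underperforms.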
 
I would really like everyone who posts such things to say what job/position they have. It seems to be almost universally IT people and coders, which is fine and great, but it doesn't mean it's useful in other areas.


Well yeah, if you're laying bricks then AI is not going to be a huge help right now I don't think.

Business analysis for me. Not IT or coding.
 
I guess it all comes down to my concerns about objective truth

I'm not a huge fan of the idea of "objective truth" so far as we are discussing things of a subjective nature. "Objective truth," in my mind, covers those things addressed by the STEM topics; but even then, once you get into the theories of relativity and quantum mechanics, those too become suspect.

Let's take, as an example, the oft-fought idea of the Apple Vision Pro! Some on here ;) think that it is an objectively terrible product. Others think it is an incredible and innovative product. Can there ever be any sense of an objective truth around this? It depends on what your "weighting" of the data is, and that's an entirely subjective process.

I'm just unclear on how machine learning is going to parse that situation in a way where I'd ever be comfortable assuming the output is accurate to the point of not questioning any aspects of the chain that got it there

It's either math or it's not. If it's math, it can fall closer to objective verification on the spectrum between objective and subjective.

It's a little bit like building on an imperfect foundation -- the taller the building on top, the more magnified tiny issues at the bottom are.
This is an engineering problem, and math can be used to settle disputes. But outside of STEM? It will always be a debate.

It's unclear to me that systems are being built to help us understand how output was derived and why

Have you played around with "thinking models" like DeepSeek, and read through the thought process that DeepThink provides before giving its answer?
 
I guess it all comes down to my concerns about objective truth

I'm just unclear on how machine learning is going to parse that situation in a way where I'd ever be comfortable assuming the output is accurate to the point of not questioning any aspects of the chain that got it there
Models like o3(-mini) print out their reasoning steps (chain of thought), so to an extent you can see what they're doing.

The only danger I can see is that people may get lazy and stop verifying results if the models are getting "too good". But that's up to the individual.

To be sure, there are tons of ethical issues ("theft" of training data, energy usage, concentration of power in the hands of a few, manipulated training data etc.), but that is another story.
 
So ... Q

If the models are so good (or getting there) at achieving "maximal truth", as Elon purports

Why would he not want to train on other data beyond X and let the machine do its magic?

(yes, he pays lip service to "the Internet", but it seems he feels X is a "good/better data source" in the social space)

Wouldn't the best output come from the broadest range of inputs, even if they disagree with his personal views?

I don't see how he is getting to "maximal truth" without incorporating data points that he may not personally like or believe
 
Just to be a bit silly, but with a point. I just made the following request of ChatGPT, and it responded in about 10 seconds. Nothing objective, but if I had no ideas, this would be incredibly helpful. :)

Today is my wedding anniversary. I live in Antigua, Guatemala. Can you suggest some ways in which my wife and I could best celebrate our anniversary?

ChatGPT said:

Happy anniversary! Since you're in Antigua, Guatemala, you have some incredible options to make the day special. Here are a few ideas tailored for romance, adventure, and relaxation:

Romantic Experiences

  1. Sunrise or Sunset at Cerro de la Cruz – Start or end your day with a breathtaking view over Antigua. Bring a bottle of wine or coffee and enjoy the moment together.
  2. Private Rooftop Dinner – Book a table at a rooftop restaurant like Tenedor del Cerro, Casa Santo Domingo, or Hector's Bistro for an intimate meal with a beautiful ambiance.
  3. Couples' Spa Day – Treat yourselves to a relaxing couples' massage at a luxurious spa like San Rafael Hotel Spa or Casa Santo Domingo Spa with a volcanic stone therapy session.

Adventure & Unique Activities

  1. Hot Air Balloon Ride – If you’re up for something unforgettable, look into a hot air balloon flight over the valleys surrounding Antigua.
  2. Private Cooking Class – Take a romantic Guatemalan cooking class together at a place like La Tortilla Cooking School, where you can make (and enjoy) a special meal.
  3. Hike to Acatenango Base Camp (or Pacaya for a lighter option) – If you both enjoy adventure, take a guided hike and toast marshmallows over lava flows.

Cultural & Intimate Experiences

  1. Wine Tasting & Vineyard Visit – Visit a vineyard like Casa del Ron or go for a tasting experience at Rincon Tipico, pairing local wines with great food.
  2. Horseback Ride through Coffee Plantations – Book a scenic horseback ride together through the lush coffee farms around Antigua.
  3. Staycation in a Romantic Boutique Hotel – If you want to make the night extra special, book a stay at a boutique hotel like Mesón Panza Verde or El Convento Boutique Hotel and enjoy a candlelit dinner in your suite.

Personal & Thoughtful

  1. Memory Walk & Love Letter Exchange – Stroll through the cobblestone streets of Antigua, visit meaningful spots from your relationship, and exchange handwritten love letters over a quiet dinner.
  2. Custom Jewelry or Art Gift – Visit a local artisan to commission a unique piece of jewelry or art to commemorate your anniversary.
  3. Anniversary Picnic in a Secret Garden – Pack a gourmet picnic and head to the beautiful gardens of Filadelfia Coffee Resort or Earth Lodge, where you can enjoy peace and nature together.
Would you like any specific recommendations or help booking something? Whatever you choose, I hope your anniversary is magical! 🎉💖
 
I don't see how he is getting to "maximal truth" without incorporating data points that he may not personally like or believe


I think by removing the guardrails.

Other AIs have, apparently, stronger guardrails in place that might steer them away from the truth. Woke agendas.
 
I think by removing the guardrails.

Other AIs have, apparently, stronger guardrails in place that might steer them away from the truth. Woke agendas.
To be honest, I think Grok will actually be the one with higher guardrails; Musk already thinks he knows what "truth" is and will be sure that his model follows his subjective view of truth.
 
@Timo_Existencia

Ok, so probably a dumb Q -- but trying to mentally map this out here

In the future, how will a new restaurant that competes with "Tenedor del Cerro" get surfaced in these models?

What's the input going to be that gets that in the scope of the models and what will make the model elevate a new data point (such as a new restaurant)?
 
@Timo_Existencia

Ok, so probably a dumb Q -- but trying to mentally map this out here

In the future, how will a new restaurant that competes with "Tenedor del Cerro" get surfaced in these models?

What's the input going to be that gets that in the scope of the models and what will make the model elevate a new data point (such as a new restaurant)?
The models are beginning to integrate with the Internet.
 
The models are beginning to integrate with the Internet.

Right, but what's the input from the internet?

Is there some baked-in assumption that humans are going to keep posting reviews?

And even so, what about all the systemic flaws already cooked into the way online reviews work: how they're curated, and how they're flat-out censored or sponsored?

(I'm a refugee of dealing with Yelp extortion from the very beginning of that horrendous company)


It seems like all of this is somewhat predicated on assumptions about how the data sources have behaved in the past
 
I think by removing the guardrails.

Other AIs have, apparently, stronger guardrails in place that might steer them away from the truth. Woke agendas.
There is strong evidence that Musk has tweaked Twitter's feed algorithms to amplify his own agenda. There is little reason to assume that he'd be above doing the same with his AI models.
 
I've probably told this story before on here, but I just couldn't believe this person I sold my e-bike to last year.

He couldn't figure out how to get the app to sync with the bike and had pulled up a ChatGPT reply about it that was wrong at basically every single possible step of the process.

Ok - fair game, it's a niche thing and it's still developing

What was alarming to me though, was how totally inept in real world reasoning and interaction with tech he was. His social skills were pretty bad too, actually.

I'm really concerned about the iterative process of humans learning information and skills if they start adapting to just "getting answers" all the time.

It's pretty well established what a wild difference there is in comprehension when consuming a complete book or a range of sources on a topic vs. just getting the Cliff's Notes / answers (as one silly example that comes to mind)

This is to say nothing of what it does for your ability to reason, troubleshoot and discover new questions to even be asking ... in the process of learning.

These are just some of my broader concerns
 
To be honest, I think Grok will actually be the one with higher guardrails; Musk already thinks he knows what "truth" is and will be sure that his model follows his subjective view of truth.

Sure. But from what I've seen Musk and "people" are saying "Grok has fewer guardrails" and there is no pushback from others (Meta, Gemini, Copilot etc) or the general AI using public saying "No, we at [Other AI] have fewer guardrails".

That doesn't prove anything, but I'm just saying that the noise seems to go in one direction. If you've seen otherwise, I'd love to see it too.

I know we're not allowed to post AI content, but even DeepSeek agreed that Grok has fewer guardrails. Of course, it's just analyzing what it's read and/or been told to say.
 
There is strong evidence that Musk has tweaked Twitter's feed algorithms to amplify his own agenda. There is little reason to assume that he'd be above doing the same with his AI models.


Thanks, where is this evidence please?
 
What was alarming to me though, was how totally inept in real world reasoning and interaction with tech he was. His social skills were pretty bad too, actually.

I own 13 Airbnbs. In my Airbnbs I provide large-screen theaters with projectors. My guests frequently need to pair the Amazon Fire Stick to the Bluetooth soundbar. I even provide step-by-step instructions with photos.

The number one problem my guests have is how to connect the speaker. I frequently have to send maintenance to do it for them. Bluetooth. In 2025.

All the criticisms about the weaknesses of AI don't seem to realize how much of the human population is basically tech-illiterate (and, to be fair, tech is often implemented in stupid ways that make simple things much too difficult for non-techie types).
 
Right, but what's the input from the internet?

Is there some baked-in assumption that humans are going to keep posting reviews?

And even so, what about all the systemic flaws already cooked into the way online reviews work: how they're curated, and how they're flat-out censored or sponsored?

(I'm a refugee of dealing with Yelp extortion from the very beginning of that horrendous company)


It seems like all of this is somewhat predicated on assumptions about how the data sources have behaved in the past

There is currently manual grooming of information based on feedback, I believe. When you thumbs-down a response, it gets flagged. Then (I'm speculating) those thumbed-down responses get aggregated, and where a large number of poorly rated responses is detected, they can be reviewed.
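That speculated pipeline (flag thumbs-downs, aggregate them, and surface prompts that cross a threshold for human review) could look roughly like this. To be clear, this is a sketch of the speculation above, not how any vendor actually does it, and every name is made up for illustration:

```python
# Illustrative sketch: aggregate thumbs-down feedback and flag
# prompts whose responses are rated poorly often enough to
# warrant human review. All names are hypothetical.
from collections import Counter

def flag_for_review(feedback_events, threshold=3):
    """feedback_events: list of (prompt_id, rating) pairs, where
    rating is 'up' or 'down'. Returns prompt_ids whose thumbs-down
    count meets the threshold, sorted for stable output."""
    downs = Counter(p for p, rating in feedback_events if rating == "down")
    return sorted(p for p, n in downs.items() if n >= threshold)

events = [
    ("restaurant-rec", "down"), ("restaurant-rec", "down"),
    ("restaurant-rec", "down"), ("bike-sync", "down"),
    ("anniversary-ideas", "up"),
]
print(flag_for_review(events))  # only heavily downvoted prompts surface
```

The open question in the thread still stands: this loop only works for as long as humans keep supplying the ratings.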
 
I've probably told this story before on here, but I just couldn't believe this person I sold my e-bike to last year.

He couldn't figure out how to get the app to sync with the bike and had pulled up a ChatGPT reply about it that was wrong at basically every single possible step of the process.
You'd likely get much better results with reasoning models now (some of which can also reach out to the Internet for up-to-date information). Gen AI is still improving in leaps and bounds.

I'm really concerned about the iterative process of humans learning information and skills if they start adapting to just "getting answers" all the time.

It's pretty well established what a wild difference there is in comprehension when consuming a complete book or a range of sources on a topic vs. just getting the Cliff's Notes / answers (as one silly example that comes to mind)

This is to say nothing of what it does for your ability to reason, troubleshoot and discover new questions to even be asking ... in the process of learning.

These are just some of my broader concerns
That is a real concern. See e.g. this recent study:

 
There is currently manual grooming of information based on feedback, I believe. When you thumbs-down a response, it gets flagged. Then (I'm speculating) those thumbed-down responses get aggregated, and where a large number of poorly rated responses is detected, they can be reviewed.

So the assumption of human data input in perpetuity is pretty key then?
(at least for many types of data, it would seem)
 