The AI bubble has already begun popping, and you're seeing the results in real time. Tech stocks are dropping like crazy, and people are getting fed up with AI-generated videos passed off as fact. People are getting quicker at identifying AI fakes, though there's still a sizeable number of gullible people out there who will likely never gain this skill.

The inherent bias in most LLMs long ago taught me not to trust outputs without double- or triple-checking the results. I've said it before, and I'll say it again: AI is a glorified search engine, and its results are about as reliable as Google's.
 
Exactly that.

I note that you say they're good at summarising stuff, then immediately discredit it, so I'd argue they're no use for that either.

Anyway, it gets worse, as this post describes well...


[attached image: screenshot of a Reddit post]

People are tuned to the science fiction definition of AI, when in fact what we have are LLMs, which are statistical ******** generators.

The only reason they are as successful as they are is the relatively high level of incompetence of the people who use them, plus the marketing job.

Assuming this example is actually real (it's Reddit, so the likelihood is 50/50 at best), that's just a really stupid way to use AI as it currently exists. The metaphor I've heard that resonates the most with me is that you should approach AI tools as an over-eager intern in your company. Give them the grunt work to do, but in no way do you just fully trust the results and blindly pass them off as accurate without verification. The number of people I've heard about using AI to write a contract without having an attorney review it is astounding to me. At some point we're going to see a dispute over one of these things where the AI bot hallucinated some clause that changes the nature of the agreement in a way not initially intended, and one (or both) of the parties involved will be in a pickle.

This stuff is absolutely the future. But far too many people have taken the promise of what it could become and assumed we're there now, which is how you end up with stories like the above where people use it in really stupid ways with far too much trust for the output.
 
What use is it if you can't trust it?
 
In a world of 'Targeted Advertising' vs 'Untargeted Advertising', I would pick the former.

In a world of 'Advertising' vs 'No Advertising', I would pick the latter.
 
It can get you started on a project, but we're living in a world where people want the bot to do the project for them. And it's just not ready for that (yet).

It's not even any good for that, to be fair. I've seen huge failures in process modelling and software architecture so far.
 
The issue I find is that the people using it are either not willing, or not intelligent enough, to learn the correct answer on their own, and would rather ask a faceless AI to lead them by the nose. And then they blindly accept whatever the AI outputs as fact.

Again, the I in AI is a lie. There's absolutely nothing intelligent about it.

I really need to make that my signature.
 
Surprised by the number of people here on a tech forum who actually think AI is going away. There has never been a technology that has gone from essentially unused to essential in so short a period for so many users. That happened in a single year, even among the technologically uninclined. Hoping it away is simply silly, head-in-the-sand behavior.

Are the security concerns real? Absolutely. Are the market bubble concerns real? Likely, and to what degree we'll find out. Is AI going away? Absolutely not.

Companies like Anthropic see less than 20% of their revenue from individual users, who would potentially be the ones looking at advertising; most is enterprise. Google/DeepMind's use of AI is interwoven into all of their products and is not directly reliant on consumer spend for the lion's share of what they're doing. OpenAI is an outlier there, as they are largely consumer-based right now. Focusing on monetizing that one segment is an interesting side note, but far from the big picture.
I don't think AI will go away. I think LLM chatbots will continue to get worse and worse, unless someone figures out a completely different way to make them work. They don't "think"; they just combine words in a probabilistic way to simulate human writing using scraps of the same. As AI output makes up more and more of the written word, the GIGO problem will increase. Models digesting each other's output will have access to fewer facts and less actual human writing, and more and more regurgitated AI slop.
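The "combine words in a probabilistic way" point can be made concrete with a toy sketch. Everything below (the corpus, the `generate` function, the seed) is invented for illustration, and real LLMs use neural networks over tokens rather than a lookup table, but the core move is the same in miniature: sample a plausible next word from observed frequencies, with no model of truth anywhere in the loop.

```python
import random
from collections import defaultdict

# Tiny hypothetical training corpus; any text works the same way.
corpus = (
    "the model predicts the next word the model has seen "
    "the word the most often wins the draw"
).split()

# Build a table: word -> list of words observed to follow it.
# Duplicates in the list are what make frequent successors more likely.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a likely successor."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        choices = follows.get(word)
        if not choices:  # dead end: no observed successor
            break
        word = rng.choice(choices)  # probability proportional to frequency
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

The output is always fluent-looking (every bigram really occurred in the corpus) yet carries no guarantee of meaning or factuality, which is the forum's "statistical generator" complaint in its smallest form. It also shows the GIGO loop: feed the generator its own output as a new corpus and the table only gets narrower.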

There are other applications of (so-called) AI that show some promise. It's already gotten really good at reading radiographs. It can sort through crazy amounts of data very quickly. It can drive cars for us and control other kinds of robots. These and similar are the actually useful applications of AI that will become the focus once people realize the shortcomings of having an LLM do your coding or email job for you.
 
Remember when Wall Street analyst and sartorial nightmare Dan Ives said Apple would fix its AI problems by buying Perplexity? This despite Perplexity not actually making the foundation models that Apple desperately needs to compete in the AI race. Most of the people investing in the AI arms race are gambling that this will be the biggest paradigm shift in the last 30 years, if not longer, which is causing a lot of skepticism and critical thinking to go straight out the window.
"Sartorial nightmare" deserves highlighting. Bravo.


[Much to the chagrin of the C-suite] I'm curious if Covid - not AI - might prove the bigger paradigm shift of the last 30 years - at least in Corporate America. I'd also like to credit Covid for the rabid and [dare I suggest] irresponsible adoption rate of AI in Corporate America. The time was 2019...

At that time, the executive playbook for success looked something like this: business casual employees, sitting in cubicles for 8-10 hours a day, Monday through Friday, miserable, sick, stuck. That approach made them billions. Afforded them travel, golf, wives. Life was good.

Insert Covid. Completely upended that structure, illuminating its fallacies. Turns out, business was not adversely affected by pajama-clad employees, sitting on the couch, working 6-8 hours a day, Monday through Thursday, and 1/2 day Fridays. In many cases, it actually increased margins, as expenses like office space and the random pizza party were no longer necessary. But the real rub? Employees were happier. Healthier.

Wait - were the employees right all along? This was not good. If there is one thing execs detest, it is being wrong. Worse, being proven wrong. The egos that got them into the suite tend to be fairly wrong-averse. But mid-pandemic, they had no option but to comply, adopt. But they didn't forget.

Fast-forward to AI. The promise of eliminating staff, and [more specifically] the expense associated with staff (salaries, PTO, healthcare benefits, the aforementioned pizza parties, etc.), proved (and is proving) too tempting, and execs gleefully went all-in on AI. The strategy of having employees develop agents and processes that will eliminate their own jobs, all in the name of "career development," was too perfect a retribution. This pleased executives. They got the last laugh.

Maybe.

The unapologetic, all-in corporate adoption of AI may have been short-sighted (which may prove an understatement). Partly because they asked AI how to develop and implement this strategy and partly because they completely mistook human empowerment for human replacement. AI is a powerful tool that can enhance the speed and potential depth of the work humans can produce. It is not [yet] a replacement of them. And here we are. The next 2 years in Corporate America should be radical, popcorn-worthy shenanigans. Buckle up.

It goes without saying: I reserve the right to be completely wrong on all counts. I mean, that's kind of the fun of this site, right? 😜
 
Great to hear this. Ad free experience is the best. Hopefully they will not change their stance in the future.
 
People cheering for the AI bubble to burst think it will humble the hype and level the playing field.

It won’t.

When bubbles pop, the weak disappear and the strong consolidate. Fewer competitors means more pricing power. Subscriptions get more expensive. Limits get tighter. The best models move further behind paywalls.

That doesn’t kill AI. It concentrates it.

And when advanced tools concentrate, advantage concentrates with them. The institutions and individuals who can afford premium access get better leverage. Everyone else gets a thinner version.

We’re already seeing tiering between public-facing AI and private deployments. A crash won’t flatten that gap. It will likely widen it.

The bubble bursting won’t democratize AI. It will sort it, and unless you have a lot of money you won’t get it.
 
I don't see anything saying it's going away, just that it's a bubble. Dot-com was a bubble, and the web didn't go away... and I don't see any industry, outside of tech, where AI is "essential". Not everyone works in tech; I don't. Just a guess, obviously, but I think around 10% of the current crop of AI companies will still be around in 5 years, and the survivors likely won't be pure AI providers - companies like Google, for example.

Now if someone can come up with an equivalent usable system that doesn't require the power of a thousand suns and enough hardware to build a Death Star, they will be the winner. The current direction of LLMs is unsustainable for a multitude of reasons, and the quicker it dies, the better.
The issue isn’t the number of suns that will power the system. The issue is how much AI and the infrastructure will cost you, and how little you will get. It’s not going away, so you can pretty well accept that your electric prices are going to go through the ceiling and your access to AI is going to diminish. You’re gonna pay the freight, but you’re not gonna get the product. That’s the truth.
 
We have a lot of land, so a lot of space, and have started the process of putting as much of our usage as possible on solar, even though our electricity rates haven't budged in several years and aren't in the process of doing so. We live in an unincorporated area, so we can more or less do as we wish, and our power company is a non-profit co-op. Unfortunately our winters are snowy and dark/cloudy, so power generation will be seasonal, but still...

That said, people are really, really starting to push back on data centers, and several in the area have been cancelled or just not approved in the first place. That helps... and I do think companies are going to find some way to make all this more energy efficient, if for nothing else, for their bottom line.
 
The A.I. bubble is going to pop at any moment now. 😆 Most companies are figuring out they have created a monster without any regulation or safety switch.
And it can't happen soon enough.
I agree… but I have the impression that too many resources and interests are sunk into that machine to stop it now. I mean, even many jobs are being automated with AI.

I feel like I lost some value after the AI boom. I was (and still am) someone proud of writing quite well in my native language, but nowadays if I try to write elaborately, people accuse me of using ChatGPT. It’s a bit frustrating…

But I feel like the whole industry is way too invested in AI to now backtrack.
 