Your knowledge on this is rather naive. The $50bn doesn't exist. It's a promise to try to drive the stock value up by demonstrating a need for more capacity, which lets them live a few more months and keeps NVDA up.

The thing is the investors want to cash in their return now. And there isn't one. And they're starting to get rather vocal about it. The big investment companies have already dumped their own stock onto bagholders and are now cutting their losses.

Ignoring the technical and social issues, the basic finances don't work. The technology has no tangible return on the spend, only loss. It's so, so, so bad that pissants like Altman went begging to the US government for a bailout.

It's not a bubble, it's market fraud.
I disagree strongly, and unfortunately your knowledge regarding Anthropic isn't exactly correct on this: they are working with Fluidstack and are invested in Google TPUs, so it's got nothing to do with NVDA at all. Amazon is their financial backer, much like Microsoft is for OpenAI, and neither is going to give up within the next few years.

People really want this to be true and are hoping for some crash soon, for a lot of reasons that I understand very well. I also understand the technology pretty well, and have spent a lot of time working with the research precursors to this technology. There are limitations, but the utility is undeniable, usage is upwards of 1 billion people, and the US DoD has large contracts with all of the foundation model companies.

GenAI isn't going anywhere, whether we like it or not. And there are a lot of reasons to not like it, but it's the reality.

Ads are coming and the infrastructure is already there. For coding etc., prices will go up (Anthropic's prices are already high, and profitable, for that) and companies will pay them because the utility is there.

Free users will be milked for data and ad sales. Paid users at lower tiers will probably be ad-targeted too. It's obvious what's coming near-term.

Long term we've got World Models and other things to look forward to. Yann just left Meta to start his own company for a reason :).


As far as "dumping onto bag holders" nvidia is somewhat risky but the ROI is enormous, they have $600B in commitments through 2026 which is ridiculous. Of course that won't continue indefinitely but their technology and particularly their ability to build out fast with clusters and their specialized networking hardware / software stack is unmatched.

The risk comes in now that China is looking to ban imports of the chips, because eventually a competitor or some novel technology may emerge from there and take the lead away. That is a risk for investors, for sure.

All of this is a risk too. As I said, there will absolutely be a crash once the small companies fail and people misread the long term... but the smart money will hold on (or dump high and reinvest low, which is some of what's happening now).

Feel free to @ me in a year when NVDA is in the toilet, I'll take my naive self and eat some crow then!
 
  • Like
Reactions: Luke MacWalker
Ah yes, nice to see them spending time and resources on *squints* making it "warmer and more playful" ...instead of helping it remember anything of consequence for more than a single message at a time or providing code that works; all that nonsense.
 
  • Like
Reactions: MakaniKai
I am blown away by the sheer volume of negativity in this thread, so obviously I haven’t been paying attention 🤷‍♂️

So many speculators here!

I use ChatGPT for loads of things, including coding in C++, and it’s very helpful. Not always 100% accurate but helpful. I suspect many of the folks here ask ChatGPT to create miracles of code out of 10,000 lines of garbage and are frustrated when it doesn’t work. In my experience, giving it simple, atomic chunks to work with is great. It understands, it helps optimize, and it often gives multiple solutions. But if you, the user, don’t know how to code, really, not even ChatGPT can help.

Or it could just be that you’re using python.
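
Joking aside, to make "simple, atomic chunks" concrete: here is the kind of small, self-contained function I mean (a made-up example, not from any real project) that you can paste in with a one-line ask like "can this be made cleaner or faster?" and actually get useful answers back:

```cpp
// A made-up example of an "atomic chunk": small, self-contained,
// and needing no project context, so the model can reason about all of it.
#include <iostream>
#include <string>
#include <vector>

// Splits a string on a single delimiter character.
std::vector<std::string> split(const std::string& input, char delim) {
    std::vector<std::string> parts;
    std::string current;
    for (char c : input) {
        if (c == delim) {
            parts.push_back(current);
            current.clear();
        } else {
            current += c;
        }
    }
    parts.push_back(current);  // keep the trailing segment (may be empty)
    return parts;
}

int main() {
    for (const auto& p : split("a,b,,c", ','))
        std::cout << '[' << p << "] ";   // prints: [a] [b] [] [c]
    std::cout << '\n';
}
```

Small enough to read in one glance, no project context needed; that's the sweet spot in my experience.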
 
If Sam Altman is involved, no thanks. OpenAI is as shadowy as Alphabet, Google, whatever they’re calling themselves these days. Grok is the superior platform, anyhow.
 
If Sam Altman is involved, no thanks. OpenAI is as shadowy as Alphabet, Google, whatever they’re calling themselves these days. Grok is the superior platform, anyhow.
I was with you until “Grok is the superior platform, anyhow.” It’s hypocritical to shame AI built by Altman and then praise AI built by Musk. Have you seen Twitter these days? Or how Musk is trying to “fix” the opinions of Grok? Talk about shadowy... I believe they both deserve the shame.
 
I was with you until “Grok is the superior platform, anyhow.” It’s hypocritical to shame AI built by Altman and then praise AI built by Musk. Have you seen Twitter these days? Or how Musk is trying to “fix” the opinions of Grok? Talk about shadowy... I believe they both deserve the shame.

I don’t use Twitter, so I'm not sure. Elon isn’t much better, but he doesn’t hide his intentions and is pretty straightforward about what he has in mind. Sam Altman, on the other hand... I wouldn’t dare turn my back on him if we were alone.
 
I don’t use Twitter, so I'm not sure. Elon isn’t much better, but he doesn’t hide his intentions and is pretty straightforward about what he has in mind. Sam Altman, on the other hand... I wouldn’t dare turn my back on him if we were alone.
You have a point there. Elon is definitely very transparent about his goals lol. I didn’t see it that way. Thanks for pointing it out.
 
  • Like
Reactions: delsoul
Will definitely give this a try as soon as it is available for free tier users. Want to see how the new models perform.
 
  • Like
Reactions: mganu
Grok and Gemini have surpassed ChatGPT; it's not even close.
I haven't used Grok much, but I like Gemini, as it is hard to tell it's a machine (which is on one side scary). But GPT sometimes provides more precise and useful information, though in very rough form. That is my experience, so I use them in tandem.

AI is a huge mistake.
It is a natural evolution of growing computational power, and one of the areas that drives sales of new hardware; otherwise, for the majority, current performance is overkill.
But since many think technological advancement equals progress for society, here we are.

Just waiting for the AI Bubble to burst wide open.
If they do not find proper ways to monetize all these investments other than state surveillance and military purposes, the blast will be audible across the Earth lol
 
Some thinking points for your faith-based argument, from an analytical perspective (this is my job):

1. Your company will be in trouble when LLM token pricing goes through the roof (rough numbers in the sketch after this list).
2. Your company will be in trouble when the LLM company changes the model and your prompts no longer function correctly.
3. Your company will be in trouble when the LLM company goes down the toilet and the other LLM company gets an influx of traffic they can't handle with their hardware provision. This also triggers points 1 and 2 as a damage multiplier.
4. It's a tangible business risk to build on technology which has absolutely no working revenue model. It may disappear tomorrow.
5. You do not have the cash, hardware or resources to train your own model, make an ROI on it and run it yourself, even on a cloud platform.
6. You are likely to hit regulatory and legal problems when it comes to making employment decisions based on automation of this class (chain of proof).
7. Robots and manufacturing have near-zero use for LLMs. There are some specific AI use cases in inspection. That is it. Humanoid robots working in a factory setting are science fiction. Production is required to be 100% deterministic and LLMs are not.
8. You can't replace people with AI. But you can replace people with AI spending and watch your stock price rise while burying the layoffs.

This whole thing is faith without empiricism.
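
To put point 1 into rough numbers (a back-of-envelope sketch; the rates and volumes below are invented for illustration, not any vendor's actual price list):

```cpp
// Back-of-envelope sketch of point 1: how an API bill moves when the
// per-token price does. All rates and volumes below are invented.
#include <cstdio>

int main() {
    const double requests_per_day   = 50'000;  // assumed traffic
    const double tokens_per_request = 2'000;   // prompt + completion, assumed
    const double days_per_month     = 30;

    const double price_now   = 5.0;   // hypothetical $ per million tokens
    const double price_hiked = 25.0;  // hypothetical 5x repricing

    const double million_tokens_per_month =
        requests_per_day * tokens_per_request * days_per_month / 1e6;

    std::printf("Monthly bill now:   $%.0f\n", million_tokens_per_month * price_now);
    std::printf("Monthly bill hiked: $%.0f\n", million_tokens_per_month * price_hiked);
    return 0;
}
```

Same product, same traffic; the bill goes from roughly $15,000 to $75,000 a month on a repricing decision the customer has no say in.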
All good points, but not just for LLMs. This is true for any technology we use. Adobe Creative Cloud becoming crapware. Affinity going into the Canva lockware ecosphere. Office the same. Cloud spaces like Dropbox being central to your work yet obviously rudderless. Operating systems ****ing up SMB. Apple becoming worse every year, Google already being much worse. Your internet provider has problems and mail, downloading and anything online won't work for hours. Working with tech makes work more fragile and dependent. Which is why security-driven IT seems to bring the companies it serves back to a stone-age level of tech, in order to reduce risks that cannot be avoided because they are inherent to the tech. The only ones safe from all of the above, and from the other problems of technology, are people working without it: craftsmen and artisans, small shops, artists, writers and so on. The less tech you use, the safer. If you work in or with a lot of tech or software, welcome to a world where every update can mess up your business.

Still, AI is neither a bubble nor only an assembly-line-style method of replacing the human labour force. It is neither a glorified spell-checker nor a doomsday machine. The fact that it allows all of the above and much more, the speed of implementation, the global competitiveness and the fact that we live in times of anti-democratic uncertainty generally make the technology very unsafe ground to operate on, and en********ation is absolutely sure to happen sooner rather than later.

But that doesn’t change the fact that this could be the most game-changing technological innovation of our generation, and we should not be Luddites but embrace it, shape it, and, as consumers, designers, users, devs and managers, do our best to make it as good and as beneficial as we can.
 
  • Like
Reactions: novagamer
It did twice before. Look up AI Winter.

The previous AI winters were almost entirely B2B/institutional, which is an incredibly different environment from the AI market of today.

The second fundamental difference from then to today is that current AI has crossed utility and mass-market adoption thresholds that make it sticky in ways previous generations didn't.

A "winter" will undoubtedly happen in the investment/hype cycle of AI, but the consumer adoption creates a floor that prevents complete collapse. It's more like social media - even if investor enthusiasm waned, the products were too embedded in daily life to disappear.

AI is a foundational technology. It will be around much longer than you or I, regardless of what the markets think about it.
 
  • Like
Reactions: novagamer
The previous AI winters were almost entirely B2B/institutional, which is an incredibly different environment from the AI market of today.

The second fundamental difference from then to today is that current AI has crossed utility and mass-market adoption thresholds that make it sticky in ways previous generations didn't.

A "winter" will undoubtedly happen in the investment/hype cycle of AI, but the consumer adoption creates a floor that prevents complete collapse. It's more like social media - even if investor enthusiasm waned, the products were too embedded in daily life to disappear.

AI is a foundational technology. It will be around much longer than you or I, regardless of what the markets think about it.

There is virtually no consumer adoption.

The moment you ask money for it, the customers go away.

It doesn't solve any useful problems for the average person.

It's dead.

The mathematics and the idea will live on, but the commercials mean it is dead.
 
  • Love
Reactions: turbineseaplane
There is virtually no consumer adoption.
A BILLION people are using GenAI.
The moment you ask money for it, the customers go away.
Facebook doesn’t ask for money and LLMs specifically are going to be monetized similarly for product references and advertisement.

OpenAI introduced the product carousel for exactly this reason and is already providing tracking links ahead of the affiliate programs.

It doesn't solve any useful problems for the average person.
This is just patently untrue. If you have data, share it, instead of asserting what you might understandably want to be true.

It's dead.

The mathematics and the idea will live on, but the commercials mean it is dead.
Reading doomsaying takes without a critical lens, or without understanding the technology or how it can be useful, is judging something from a place of ignorance, not from a strong foundation of understanding.

Consider how many people even knew what “AI Winter” was in the late 90s, or expert systems before that, etc.

The scale, market penetration, and technology are wholly different this time.

“AI” is still the wrong name for the nerds who understand they really don’t mean “ASI”, but for average people, all of this is AI now, right or wrong.
 
  • Like
Reactions: wanha
There is virtually no consumer adoption.

The moment you ask money for it, the customers go away.

It doesn't solve any useful problems for the average person.

It's dead.

The mathematics and the idea will live on, but the commercials mean it is dead.

I admire your blind certainty in an ever-changing world
 
  • Love
Reactions: novagamer
Some thinking points for your faith-based argument, from an analytical perspective (this is my job):

1. Your company will be in trouble when LLM token pricing goes through the roof.
2. Your company will be in trouble when the LLM company changes the model and your prompts no longer function correctly.
3. Your company will be in trouble when the LLM company goes down the toilet and the other LLM company gets an influx of traffic they can't handle with their hardware provision. This also triggers points 1 and 2 as a damage multiplier.
4. It's a tangible business risk to build on technology which has absolutely no working revenue model. It may disappear tomorrow.
5. You do not have the cash, hardware or resources to train your own model, make an ROI on it and run it yourself, even on a cloud platform.
6. You are likely to hit regulatory and legal problems when it comes to making employment decisions based on automation of this class (chain of proof).
7. Robots and manufacturing have near-zero use for LLMs. There are some specific AI use cases in inspection. That is it. Humanoid robots working in a factory setting are science fiction. Production is required to be 100% deterministic and LLMs are not.
8. You can't replace people with AI. But you can replace people with AI spending and watch your stock price rise while burying the layoffs.

This whole thing is faith without empiricism.
LLMs are a tiny part of the potential for AI. In addition to Small Language Models and Private Small Language Models, many forms of AI being developed today are not language models at all. If you don't believe me, ask your LLM.

LLMs today feel like the time when the EU was consumed by the fear of Microsoft bundling Internet Explorer with Windows. They thought the browser leader would dominate the internet.

What they didn't see was the coming revolution: for example, bank payments processing (check and wire transfer), as well as policy sales and claims processing at insurance companies. It transformed our customer experience and greatly reduced costs, while absolutely gutting hundreds of thousands of back-office jobs in the US financial industry, plus jobs at every company that billed by mail and an estimated 300,000 at the USPS. Who won the browser war had no impact on that.
 
The conversational features need to be full duplex. It's annoying that the slightest background noise causes the response to pause.

If I’m in a noisy environment I will mute the microphone as soon as I’ve finished speaking, so the background noise doesn’t cause ChatGPT to stop and wait for more input.
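
For the curious, that pause-on-noise behaviour is what half-duplex turn-taking looks like: if the app decides the incoming audio might be you talking, it stops the reply and listens. Here is a toy sketch of that logic (an illustration of the general mechanism only, not OpenAI's actual implementation; the energy threshold and state machine are invented for the example):

```cpp
// Toy half-duplex turn-taking: if mic energy crosses a threshold while the
// assistant is speaking, playback pauses and the app waits for more input.
// Invented for illustration; real apps use proper voice-activity detection.
#include <cmath>
#include <iostream>
#include <vector>

enum class State { Speaking, Listening };

// Crude energy check: background noise trips this just as easily as speech,
// which is exactly why the reply keeps pausing in a noisy room.
bool micLooksActive(const std::vector<float>& samples, float rmsThreshold) {
    if (samples.empty()) return false;
    double sumSquares = 0.0;
    for (float s : samples) sumSquares += static_cast<double>(s) * s;
    return std::sqrt(sumSquares / samples.size()) > rmsThreshold;
}

State nextState(State current, const std::vector<float>& micFrame) {
    // Half duplex: any detected input interrupts the reply.
    if (current == State::Speaking && micLooksActive(micFrame, 0.05f))
        return State::Listening;
    return current;
}

int main() {
    std::vector<float> noisyFrame(480, 0.1f);  // constant "background hum"
    State s = nextState(State::Speaking, noisyFrame);
    std::cout << (s == State::Listening ? "reply paused\n" : "still talking\n");
}
```

A genuinely full-duplex system would keep talking and decide afterwards whether the input was real speech worth yielding to, which is what the complaint above is asking for.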
 