At some point, it will begin to improve itself, killing everything in its path.
> This isn't a cpu, power or model issue, it's an approach issue. We (royal) aren't building machines that learn.

That’s what I meant by “model complexity”… These AI systems aren’t a single monolithic model; they have multiple parts that make up a network. And that hasn’t hit our limits yet. We still have plenty of data, CPU, and software complexity to work with. What “diminishing” means here is economic: it won’t be achievable on $20/month plans. Maybe $70?
No idea what cap means.
> Good. I genuinely hope the AI bubble bursts.

You aren’t the only one. I hate the hype. Read Weapons of Math Destruction and Coded Bias and then get back to me on AI.
> Really? When calculators came along, mathematicians didn't suddenly disappear.

But human computers did.
> Some of the animosity toward AI is a little overblown. At times, it’s great at handling complicated questions in plain English and quickly giving a detailed answer. Sure, it will need to be regulated, but I like the technology.

The problem is that it doesn’t replace people, yet people rely on it as the authoritative source, as they do Wikipedia now. I think it’s good in the long term as long as people realize it’s just a tool. I constantly joke with my daughter, who I say should write “Thanks Grammarly” on her master’s graduation cap.
> Considering the wildly unrealistic expectations (AGI) many people have for LLMs, it's inevitable that the bubble will have to burst at some point.

Yup. It’s the freakin’ self-driving/fully autonomous car utopia that will never come to fruition in our lifetime, all over again!! It’s insane that the tech industry didn’t learn from it…
> This isn't a cpu, power or model issue, it's an approach issue. We (royal) aren't building machines that learn.

There are alternative approaches being worked on, with some showing promise. One example is building dendritic properties into deep neural networks: https://www.sciencedirect.com/science/article/pii/S0959438824000151
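For anyone curious what "building in dendritic properties" can look like in code, here is a toy sketch of one recurring idea in that literature: each unit integrates its inputs through several independently weighted, nonlinear dendritic branches before the soma combines them. This is a minimal illustration under my own assumptions, not the architecture from the linked review; the class name, shapes, and nonlinearities are all invented for the example.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class DendriticLayer:
    """Toy layer where each unit sums over nonlinear dendritic branches.

    A standard dense unit computes f(w . x + b). Here each unit instead
    owns several dendritic branches, each with its own weights and local
    nonlinearity, and the soma sums the branch outputs before applying
    its own nonlinearity. Purely illustrative -- not the linked paper's model.
    """

    def __init__(self, n_in, n_units, n_branches, seed=0):
        rng = np.random.default_rng(seed)
        # One weight vector per (unit, branch): shape (units, branches, inputs)
        self.w = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_units, n_branches, n_in))
        self.branch_bias = np.zeros((n_units, n_branches))
        self.soma_bias = np.zeros(n_units)

    def forward(self, x):
        # Local dendritic computation: every branch applies its own nonlinearity
        branch_out = relu(np.einsum("ubi,i->ub", self.w, x) + self.branch_bias)
        # Somatic integration: sum the branches, then the unit's nonlinearity
        return relu(branch_out.sum(axis=1) + self.soma_bias)

layer = DendriticLayer(n_in=8, n_units=4, n_branches=3)
print(layer.forward(np.ones(8)))  # activations of the 4 units
```

The extra branch stage makes each unit a small two-layer network of its own, which is one way researchers try to get more expressive learning per neuron instead of just scaling up parameter counts.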
> Good. I genuinely hope the AI bubble bursts.

It won’t. Generative AI is here to stay, regardless of what this article states. Progress will be slow at times, but the path is clear; there is no turning back now.
Dang, I'm so shocked by this...not.
Leading artificial intelligence companies including OpenAI, Google, and Anthropic are facing "diminishing returns" from their costly efforts to build newer AI models, according to a new Bloomberg report. The stumbling blocks appear to be mounting as Apple continues a phased rollout of its own AI features through Apple Intelligence.
OpenAI's latest model, known internally as Orion, has reportedly fallen short of the company's performance expectations, particularly in handling coding tasks. The model's improvements over existing systems are said to be smaller than the gains GPT-4 made over its predecessor.
Google is also reportedly facing similar obstacles with its upcoming Gemini software, while Anthropic has delayed the release of its anticipated Claude 3.5 Opus model. Industry experts who spoke to Bloomberg attributed the challenges to the increasing difficulty in finding "new, untapped sources of high-quality, human-made training data" and the enormous costs associated with developing and operating new models concurrently with existing ones.
Silicon Valley's belief that more computing power, data, and larger models will inevitably lead to better performance, and ultimately the holy grail – artificial general intelligence (AGI) – could be based on false assumptions, suggests the report. Consequently, companies are now exploring alternative approaches, including further post-training (incorporating human feedback to improve responses and refining the tone) and developing AI tools called agents that can perform targeted tasks, such as booking flights or sending emails on a user's behalf.
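To make the "agents" part concrete: such tools are generally built around a loop in which the model proposes a tool call, the host code executes it, and the observation is fed back until the model declares it is done. Below is a minimal Python sketch of that loop; the tool names and the `propose_action` stub are hypothetical stand-ins, not any company's actual API.

```python
# Hypothetical tools an agent could be permitted to call.
def book_flight(origin: str, destination: str, date: str) -> str:
    return f"Booked {origin} -> {destination} on {date} (confirmation ABC123)"

def send_email(to: str, subject: str, body: str) -> str:
    return f"Email sent to {to}: {subject!r}"

TOOLS = {"book_flight": book_flight, "send_email": send_email}

def propose_action(goal: str, history: list) -> dict:
    """Stand-in for the model call. A real agent would ask an LLM to pick
    the next tool and its arguments; this stub hard-codes one step."""
    if not history:
        return {"tool": "book_flight",
                "args": {"origin": "SFO", "destination": "JFK", "date": "2025-01-10"}}
    return {"tool": None, "final": f"Done: {history[-1]}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        action = propose_action(goal, history)
        if action["tool"] is None:                        # model says it's finished
            return action["final"]
        result = TOOLS[action["tool"]](**action["args"])  # execute the tool
        history.append(result)                            # feed the observation back
    return "Stopped after max_steps without finishing"

print(run_agent("Book me a flight to New York"))
```

The appeal of this design is that the hard, open-ended reasoning stays in the model while the side effects (booking, emailing) run through ordinary, auditable code.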
"The AGI bubble is bursting a little bit," said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face. She told Bloomberg that "different training approaches" may be needed to make AI models work really well on a variety of tasks. Other experts who spoke to the outlet echoed Mitchell's sentiment.
How much impact these challenges will have on Apple's approach is unclear, though Apple Intelligence is more focused in comparison, and the company uses internal large language models (LLMs) grounded in privacy. Apple's AI services mainly operate on-device, while the company's Private Cloud Compute encrypted servers are only pinged for tasks requiring more advanced processing power.
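As a rough illustration of that split, the routing decision amounts to "run locally if the task fits, escalate to the encrypted servers only if it doesn't." The sketch below is a guess at the shape of such logic; the function names, threshold, and complexity test are all invented here, since Private Cloud Compute's actual interface is not public.

```python
# Invented sketch of on-device-first routing; none of these names are Apple APIs.
COMPLEXITY_LIMIT = 1_000  # made-up threshold for "fits on device"

def estimate_complexity(prompt: str) -> int:
    return len(prompt.split())  # crude stand-in for a real cost estimate

def run_on_device(prompt: str) -> str:
    return f"[on-device model] {prompt[:40]}"

def run_private_cloud(prompt: str) -> str:
    # A real implementation would send an end-to-end-encrypted request to
    # attested server hardware; this stub only marks which path was taken.
    return f"[encrypted server] {prompt[:40]}"

def handle_request(prompt: str) -> str:
    if estimate_complexity(prompt) <= COMPLEXITY_LIMIT:
        return run_on_device(prompt)   # common case: never leaves the device
    return run_private_cloud(prompt)   # heavy task: escalate to the server

print(handle_request("Summarize this note for me"))
```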
Apple is integrating AI capabilities into existing products and services, including writing tools, Siri improvements, and image generation features, so it can't be said to be competing directly in the LLM space. However, Apple has agreed to a partnership with OpenAI that allows Siri to optionally hand off more open-ended queries to ChatGPT. Apple has also reportedly held discussions with other LLM companies about similar outsourcing partnerships.
It's possible that the challenges faced by major AI companies pursuing breakthrough general-purpose AI models could ultimately validate Apple's more conservative strategy of developing specific AI features that enhance the user experience. In that sense, its privacy-first policy may not be the straitjacket it first seemed. Apple plans to expand Apple Intelligence features next month with the release of iOS 18.2 and then via further updates through 2025.
Article Link: AI Companies Reportedly Struggling to Improve Latest Models
> Yup. I did some work with pattern recognition in the '70s. Fascinating stuff and not new; the only thing new is the availability of, and the ability to take in, large data sets.

That’s an oversimplification. Models like o1 are not just ‘pattern recognition’ and can genuinely produce something akin to reasoning. Sure, it may be limited (for now) and it doesn’t always work as expected, but it’s definitely not just ‘pattern recognition’. Unless, of course, one defines ‘pattern recognition’ so broadly that humans and their brains would also fall under it.
> The problem is that it doesn’t replace people, yet people rely on it as the authoritative source, as they do Wikipedia now. I think it’s good in the long term as long as people realize it’s just a tool. I constantly joke with my daughter, who I say should write “Thanks Grammarly” on her master’s graduation cap.

I always verify, but it’s quicker to ask ChatGPT and verify than to manually look up the information in the first place.
"Glorified"?Not quite sure what they’re expecting. It’s just glorified pattern-recognition software, and there are only so many ways to dress it up as ‘the next big thing’.
> Considering the wildly unrealistic expectations (AGI) many people have for LLMs, it's inevitable that the bubble will have to burst at some point.

Yeah, there’s a lot of hype around, and Apple’s own research showed how fragile LLMs are at calculation, as MacRumors reported a week or so ago. Plus, other research has shown diminishing benefits from throwing more power/parameters/whatever into the playing field.
There have been huge breakthroughs in the past 5 years. Additional breakthroughs will happen; they simply will take more time and, as you implied, different approaches.
> They've scraped up all the real, human-created content and are hitting a wall?

I love this damn idea. We expect Data but we wind up with an expendable red shirt 😂