I have bad news for you then.

Countries ain’t doing this for the goodness or oneness of mankind. They’re throwing so much money and manpower into AI with one goal in mind: supremacy.

The country with the best AI will be the country with the most power in the world. This is about power.

This ain’t love for humanity.

And there are plenty of other goals the world should rally around before AI, like staving off the climate catastrophe (which energy-hungry AI will only accelerate) or making sure all human beings on the planet have free healthcare.

Oh yes, I fully agree, and it's what makes this "worldwide project" even more fun.
The "well, if we don't do it, they will, and if they do, they could end up ruling over us" mindset
is a MASSIVE driver in all of this, and I love it.

So many times, certain people with certain opinions get in the way of the creation of something.
We won't do something because it's bad for X, or it damages Y, or "I don't like it morally", blah blah blah.

When it comes down to "if we don't do it, or hold ourselves back, then they will do it, and that's the end of us",
that is an amazing driver, and one reason why I'm so excited/interested in just how far the world can go on a single project with almost nothing being artificially :) put in the way.

As I've said before, whilst there have been major changes over time around the globe (machinery, the internet, nuclear power, etc.),
I've never seen such focus on one thing by so many, from so many places, with so much money, ever before.

Please note: I'm stepping outside the "should we do it" or "is it good in the long term" mindset.
I'm just excited, as I want to see what humanity can actually do when we all focus on a single task.

And as a follow-up, I'd suggest AI and artificial life "IS" the long-term future for humanity, to some extent on this planet, but drastically more so when it comes to branching out from the Earth to elsewhere.

Unless something comes along to stop us on this path (an asteroid-hit type event), it feels inevitable.
Not a matter of "if", only a matter of "when".
 
I don't care who it's from... there is NO SUCH THING AS "AI".

It's Eliza 2.0 at best, and it's an insane waste of energy and resources to reinvent a square wheel that no one asked for.

And after using Apple computers since 1987: no, it would NOT matter if this were Apple's "invention".
It's a fair viewpoint if that's what you feel.
Just to clarify, are you saying there is no AI, or that there is no AGI?

I think we'd all agree there is no AGI, but I'd suggest what we have now is AI in that it's artificial, and as far as the user is concerned it's giving back intelligent answers.
At least as intelligently as most humans could respond.

We could line up, say, 10 current AI models and, let's say, 10 teenage American students.

Ask them all the same set of questions on all manner of subjects, and vote on which are the most intelligent.

Personally, I don't care what's going on in the "Black Box" or what's going on in "The Brain".
It's what comes out of the mouth/computer speaker that matters.

If I may... I wish this was used more often as an example:
Commander Data in Star Trek: The Next Generation.

Would you consider him/it to be artificially intelligent (AI)?
He/it seems to be, in any real way we can test.
Is that all that matters?

I can understand there are people who will always regard anything that's not organic in nature as just a clever box of tricks.
 
...

As for AI Mode, it is better than the previous little "AI" blurbs at the top of regular Google searches. I think people really need to try it and see if it helps when asking those questions on Google instead of doing specific searches.

I hope so. In a different forum we were having a discussion about a Charlie Sheen movie (don't ask 🤣) and for some clarity on a role he played, I googled him. The AI summary started accurately and then it told me how many Emmys he had been nominated for, winning one, for his role as the president in the West Wing TV show. The AI had given information about Martin Sheen, not Charlie Sheen. An AI should know the difference, but instead it just throws things into the results for unknown reasons.
 
So many times, certain people with certain opinions get in the way of the creation of something.
We won't do something because it's bad for X, or it damages Y, or "I don't like it morally", blah blah blah.

Yes, exactly. People should think critically about what they are doing, and what possible help or harm their actions do to others. Life isn't a game, with other human beings as plastic figures. Sure Ford or GM can build a car that has razor sharp bumpers and spikes on the side, but someone somewhere is gonna say "but this will injure and kill a lot of people."

If Apple's next SOC is the fastest and most efficient chip ever made, totally blows away everything, but due to its radical new manufacturing process it emits a colorless and odorless gas that kills kittens, I'd hope someone says that morally they can't release this chip.

Sure, the Silicon Valley cliché is "move fast and break things." At some point people have to tell tech companies that we're tired of them breaking things.
 
To all those who go on and on about current AI not being totally accurate at all times.

Before criticising it too much, just consider how many people think:

The Earth is flat.
9/11 was an inside job and a controlled demolition.
The moon landings were faked.
Aliens control the world in secret.

There are millions of people who will state all of the above with full confidence.

So don't be too harsh on AI, as generally I'd trust it a lot more than your random human in the street ;)
 
But it's very exciting to see technology today make things worse and devolve human intellect, just so they can have a disruptive technology with the aim of screwing the peasants over.
I believe we are already on that track. They've just hit the pedal a bit harder than before, and the new tracks support a much higher top speed.
 
To all those who go on and on about current AI not being totally accurate at all times.

Before criticising it too much, just consider how many people think:

The Earth is flat.
9/11 was an inside job and a controlled demolition.
The moon landings were faked.
Aliens control the world in secret.

There are millions of people who will state all of the above with full confidence.

So don't be too harsh on AI, as generally I'd trust it a lot more than your random human in the street ;)
Well, I guess compared to the run-of-the-mill human, AI can technically be a bit more impartial. Maybe that's a good thing.

Education is very much overrated.
 
It's a fair viewpoint if that's what you feel.
Just to clarify, are you saying there is no AI, or that there is no AGI?

I think we'd all agree there is no AGI, but I'd suggest what we have now is AI in that it's artificial, and as far as the user is concerned it's giving back intelligent answers.
At least as intelligently as most humans could respond.

We could line up, say, 10 current AI models and, let's say, 10 teenage American students.

Ask them all the same set of questions on all manner of subjects, and vote on which are the most intelligent.

Personally, I don't care what's going on in the "Black Box" or what's going on in "The Brain".
It's what comes out of the mouth/computer speaker that matters.

If I may... I wish this was used more often as an example:
Commander Data in Star Trek: The Next Generation.

Would you consider him/it to be artificially intelligent (AI)?
He/it seems to be, in any real way we can test.
Is that all that matters?

I can understand there are people who will always regard anything that's not organic in nature as just a clever box of tricks.
It's important to distinguish between intelligence and language models. The language model cannot be said to be intelligent any more than a lookup table can: it literally just models language, and it has no intrinsic understanding of the sentences it is producing. There is also - crucially - no feedback loop, meaning it will never learn from its interactions, and that in itself makes it not an intelligence.
Models will not be comparable to intelligence until they continually learn from their interactions with the real world; we need not just textual but multimodal, unidirectional interfaces in order to achieve this - and we simply do not have the processing power to achieve it in real time.
Silicon may not even be the right medium for such a processor - if only there existed some prior work to draw inspiration from 🤔
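To make that distinction concrete, here is a deliberately toy Python sketch (my own illustration, not how any real language model is implemented): a frozen model that only maps inputs to outputs, versus one that folds feedback from each interaction back into its own state.

```python
# Toy sketch only: a dict stands in for fixed parameters, not a real LLM.

class FrozenModel:
    """Maps prompts to responses; nothing about it changes after 'training'."""

    def __init__(self, lookup):
        self.lookup = lookup  # stands in for fixed, pre-trained parameters

    def respond(self, prompt):
        # No state is updated here: every call leaves the model unchanged,
        # which is the "no feedback loop" point above.
        return self.lookup.get(prompt, "I don't know.")


class ContinuallyLearningModel(FrozenModel):
    """Same interface, but each interaction can change future behaviour."""

    def respond_and_learn(self, prompt, feedback=None):
        answer = self.respond(prompt)
        if feedback is not None:
            # The feedback loop: what it hears back alters what it says next time.
            self.lookup[prompt] = feedback
        return answer


model = ContinuallyLearningModel({"capital of Belgium?": "Brussels"})
print(model.respond_and_learn("capital of Peru?"))          # "I don't know."
print(model.respond_and_learn("capital of Peru?", "Lima"))  # still "I don't know.", but it learns
print(model.respond_and_learn("capital of Peru?"))          # now "Lima"
```

A real system would of course be updating learned parameters rather than a lookup table; the point is only where the feedback loop sits.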
 
It's important to distinguish between intelligence and language models. The language model cannot be said to be intelligent any more than a lookup table can: it literally just models language, and it has no intrinsic understanding of the sentences it is producing. There is also - crucially - no feedback loop, meaning it will never learn from its interactions, and that in itself makes it not an intelligence.
Models will not be comparable to intelligence until they continually learn from their interactions with the real world; we need not just textual but multimodal, unidirectional interfaces in order to achieve this - and we simply do not have the processing power to achieve it in real time.
Silicon may not even be the right medium for such a processor - if only there existed some prior work to draw inspiration from 🤔
I don't disagree that we need AI put into robots which can then learn from the real world.

However, I'm not fully on board with it having to be this way or nothing.

If you took a human brain that had no real-world experience and wired it up for communicating and learning,
I'd be happy to accept it was intelligent, even if it was in a jar.
 
To all those who go on and on about current AI not being totally accurate at all times.

Before criticising it too much, just consider how many people think:

The Earth is flat.
9/11 was an inside job and a controlled demolition.
The moon landings were faked.
Aliens control the world in secret.

There are millions of people who will state all of the above with full confidence.

So don't be too harsh on AI, as generally I'd trust it a lot more than your random human in the street ;)
What's your point? I don't ask flat-earthers (and others in your example) for factual information about all kinds of things, for that exact reason.

Search engines are supposed to deliver accurate information to a request, period, and not make things up. Maybe we should have high standards for it?
 
So, did you actually have something to contribute to this debate, or were you just going to play patronizing gaslighter for the AI LLM fanboys? 🤨

(Can the current LLMs pretending to be intelligent produce useful results? Sure. Just as a large number of monkeys banging on keyboards randomly might produce something interesting... but you still have to find, train and feed them until they do, and you're now up to your neck in monkey poop as well. What "problem" were you solving... other than trying to find a way to remove pesky morals and ethics from the mix?)
I have no idea how your comment relates to mine. I was just pointing out the lie and exaggeration in the initial comment.
 
What's google? Is that some new version of ChatGPT or Grok? Does it search as well as they do? I am not sure I need to try out some newfangled website when I am already getting all the answers I need.
 
What's your point? I don't ask flat-earthers (and others in your example) for factual information about all kinds of things, for that exact reason.

Search engines are supposed to deliver accurate information to a request, period, and not make things up. Maybe we should have high standards for it?
But if you don't know something and you ask another human, how do you know if they are correct, just making it up, or have views that contradict others?

I'm just saying we have people who rate humans vastly above AI constantly, despite humans being wrong on so many things all the time.
But an AI gets something wrong and we hate on it.

Some have vastly different standards.

It would be interesting to see what people would think of a full AGI system that was as intelligent as the average human.
Would we call it intelligent or stupid?

Expecting computers to always be right about everything is an extremely high standard, and vastly beyond even a few hundred of the greatest human minds in a room.
 
Expecting computers to always be right about everything is an extremely high standard, and vastly beyond even a few hundred of the greatest human minds in a room.

Yes, it is a high standard. And if an AI is as likely to report falsehoods as a flat-earther then maybe that's a sign that the AI isn't ready, rather than a sign that we should lower our standards for what facts are.
 
I’d just like to go back to the days when Google results were accurate more often than not.

Feels like that shouldn’t be an impossibly high bar.
 
Yes, it is a high standard. And if an AI is as likely to report falsehoods as a flat-earther then maybe that's a sign that the AI isn't ready, rather than a sign that we should lower our standards for what facts are.

I've not thought about this too long and hard, but I'd probably trust most answers from an AI right now more than from a typical random stranger in the street.

I do have a feeling we are being too critical of AI, like we WILL be too critical of self-driving cars and robots in the future.
Do I want AI to be perfect and correct on everything? Yes.
Is that even possible? No...

You may ask why it is not possible to be correct on everything.
Well, I'd suggest that there are many things in the world that are not black-and-white facts, but which are matters of viewpoint and opinion.

Many humans listen to others, gain details from them, and perhaps from the media as well, and then, using our own built-in perspective on the world, we come up with our own conclusions.
It's simply not possible to know for sure unless we had direct involvement in whatever is being asked, so we form our "best guess" based upon all the data.

Rather than expect perfection, I'd be happier for an AI to say it does not know, or is not sure,
like an honest and sensible human hopefully does.

Then again I know people personally who will simply assume stuff and make things up rather than admit they are wrong or don't know.

As an aside, personally I'd like there to be many, many AIs with their own viewpoints and opinions, exactly like humans have.
I'm not going to agree with every human, as each human has their own viewpoints, and that makes humans interesting.
I'd like to see that from AI also.

But sadly I feel we are too harsh on AI, and many will only focus on mistakes.
As I said, the same will happen with self-driving cars in the future...

Let's make something up...

Say 1,000 people die on the roads each year with human drivers.
AI driving takes over, and those deaths drop from 1,000 to 500.
So "AI drivers have killed 500 people."
The media will go crazy about "killer cars", and they will be banned due to all those deaths.

We will only focus on the 500 people the killer cars killed and not the 500 lives we saved.
Sadly, that's how the media works these days.

:(
 
You may ask why it is not possible to be correct on everything.
Well, I'd suggest that there are many things in the world that are not black-and-white facts, but which are matters of viewpoint and opinion.

2+2 = 4
Brussels is the capital of Belgium
The Earth is a globe, not a disc.

These are facts. I'm not interested in hearing a person or AI's "viewpoints and opinions" if they differ from those. Nah bro, hear me out, 2+2 = 22!

Keep your opinionated, not-too-factual AIs, they have no value to me. If it just makes things up, it is useless as a tool. And if that is me being "too harsh" on AI, I'm perfectly fine with that. Do you think I might hurt its feelings? 🤣
 
2+2 = 4
Brussels is the capital of Belgium
The Earth is a globe, not a disc.

These are facts. I'm not interested in hearing a person or AI's "viewpoints and opinions" if they differ from those. Nah bro, hear me out, 2+2 = 22!

Keep your opinionated, not-too-factual AIs, they have no value to me. If it just makes things up, it is useless as a tool. And if that is me being "too harsh" on AI, I'm perfectly fine with that. Do you think I might hurt its feelings? 🤣
I suppose it depends whether your long-term view is to regard AI as more than just a "tool".
There are millions today who view it as vastly more than just a tool for factual yes/no answers, and this number will increase vastly over future years.
Some may not like the thought of this, but it's going to happen.

With regard to facts, however, could there be any truth in the idea that "some aspects" of "some" of the tragic 9/11 building collapses were more than the official statements of the time would have you believe?
Whilst I'd like to believe everything is exactly how it was officially stated, and I'm probably 90% happy to accept that,
I must retain perhaps just 10% of doubt that some elements of the story may not be as stated.
And as I was not personally involved, and was not in top positions across all aspects of government and legal agencies at the time, I cannot be 100% sure.

So what do I want from an AI?
An objective opinion or just to state the official line on every story?
 
I felt I needed to post an update.
I asked ChatGPT about the total honesty regarding aspects of 9/11, and particularly the third building collapse, and I must give it credit that it did not just spout out the official line on it.

It did not say anything crazy, but it was fair and honest about people having concerns about what happened and the specifics.
E.g., "it's a bit iffy".
 
So what do I want from an AI?
An objective opinion or just to state the official line on every story?

How about not an opinion at all? You seem to say that those (either opinion or stating "the official line") are our options. And earlier you suggested that AI is more than "just a tool for yes/no answers," though I never said that it was.

It's easy to dismiss other people's arguments when you minimize and mischaracterize them.

An entire newspaper story filled with accurate, factual information and quotes from real people is much more than opinion, "the official line" and yes/no answers. It's the kind of thing we see dozens of times a day, and so far it seems to be the kind of thing that AI still struggles to get through (witness the Chicago Sun-Times AI-generated summer reading list debacle) without screwing up. That was a list, something an AI should be able to deliver. Ask yourself why that AI felt compelled to just make things up, when there are real books it could have named in the list. Why?

Yes, AI is a tool, for all kinds of things. Right now it is a flawed tool.

LLMs are generally trained by scraping the Internet, and that means those LLMs are going to be filled with wild speculation, misinformation, disinformation and lies, as well as accurate information and data. So I'm sure when you ask about 9/11, you're asking an AI that relies on an LLM that uses factual information as well as two decades of comments from edge lords ranting jet fuel can't melt steel beams, and other low-value information. So until an AI is able to process your question (I have no idea what specifically you asked) without relying on uninformed opinion, it is going to deliver results of dubious usefulness. Add to that its tendency to hallucinate.

And all of that is without the moral and legal issues related to LLMs that are trained on copyrighted material that has been used without licensing or permission. I don't like that using an AI is being complicit in theft of IP.

Those are my concerns. I'm not interested in an AI that simply surpasses the low bar you mentioned in previous posts, about whether it is better than a random person met on the street. We should have standards higher than that for all of these supposedly soon-to-be-life-changing technologies.
 
Apologies if you felt I mischaracterised your viewpoint.
All viewpoints are welcome, and it's what makes humans and humanity great.

Though I guess me saying that then does not allow me to also say I welcome all AI viewpoints, and that AIs having different viewpoints could be a possible interesting future.

Just for a moment: dear god, what a terrible existence it would be if all humans had the same opinions and "truths" as every other human :)

I suppose I am suggesting there are only so many actual genuine facts that 99.9% of humans all agree are factual truths, and then we start running into human (or perhaps AI) opinions.

Does it hurt a typical human when I hit them on the head, using reasonable force, with a hammer made from steel and wood?
Yes.

Can a typical tree found on planet Earth drive a car designed for a human, which has not been modified in a manner where the movements of a tree over time would operate its controls?
No.

That's all great stuff, but what about the VAST array of things humans talk about and discuss/argue that don't have yes/no answers, or where a human's answer is tainted by their life experience, interactions with others, and position in society?

Perhaps I could ask an AI if wealth should be distributed more fairly across the globe.
A top-end wage limit of $500,000 a year, and anything more than that gets filtered down to others?
Is the age of consent the right age, and why?
Should alcohol be banned due to the harm it causes society?
Should abortion be legal?
And a billion other questions.

No right or wrong answer for an AI to offer.

You pick 20 people in a room (with real intelligence) and you're going to get a whole host of answers, some of which they may passionately believe are correct, or not.

So I don't know what people expect from AI here?

IMHO, just like news/media outlets, magazines, and the people you choose to hang out with because they to a degree match many of your views,
I can easily see a future where AIs are the same, and share the values you do.
 
I don't disagree that we need AI put into robots which can then learn from the real world.

However, I'm not fully on board with it having to be this way or nothing.

If you took a human brain that had no real-world experience and wired it up for communicating and learning,
I'd be happy to accept it was intelligent, even if it was in a jar.
I'd always be happy with having a brain in a jar hooked up to electronics 😃
And my main distinction regarding intelligence is whether there's a learning process or not, so I think we agree on that 😄
 