I don't recall Steve running a company that profits off mass stealing of others' content without permission, compensation, or attribution.
Like all of the apps they've sherlocked? Or the money they've collected from users for scam apps and never refunded? No, Apple are completely innocent and moral.
 
I miss Phil. Bring back live keynotes and bring him back on stage. Guy has been with Apple for like 30+ years now.
 
Like all of the apps they've sherlocked? Or the money they've collected from users for scam apps and never refunded? No, Apple are completely innocent and moral.
I think it applies more to Apple under Tim. He was the one who promoted the subscription model for everything. And isn't it a scam when you download an app, tap "next", and can accidentally charge your card for that app? Isn't that a scam?

Back when Steve was CEO the App Store was a brilliant place. There weren't thousands of useless apps, many were free or had a reasonable one-time price, and moreover we had more games. Nowadays we still have the same old games but few new ones, because so many developers use the subscription/ads-based model, which automatically makes for the poorest gaming experience possible (Clash of Clans type stuff).

If I were Tim I would have removed FilmicPro and Halide from the App Store without question. Those developers are total swindlers: both had one-time-purchase products that a few updates later got converted into subscription-based crap. It's as if you bought whiskey at a store and the next day it turned into beer because "the manufacturer has explicit rights blah blah blah…". Those developers do not understand what the concept of "ownership" means.
 
I think it applies more to Apple under Tim. He was the one who promoted the subscription model for everything. And isn't it a scam when you download an app, tap "next", and can accidentally charge your card for that app? Isn't that a scam?

Back when Steve was CEO the App Store was a brilliant place. There weren't thousands of useless apps, many were free or had a reasonable one-time price, and moreover we had more games. Nowadays we still have the same old games but few new ones, because so many developers use the subscription/ads-based model, which automatically makes for the poorest gaming experience possible (Clash of Clans type stuff).

If I were Tim I would have removed FilmicPro and Halide from the App Store without question. Those developers are total swindlers: both had one-time-purchase products that a few updates later got converted into subscription-based crap. It's as if you bought whiskey at a store and the next day it turned into beer because "the manufacturer has explicit rights blah blah blah…". Those developers do not understand what the concept of "ownership" means.

Hopefully, with the gradual interest in emulation on iOS, Apple will (finally) see the writing on the wall.

It's no surprise that people are loving playing the old Zelda and Pokémon games: full-featured games that aren't designed to feel like coin-operated arcade machines demanding more money.

Apple Arcade goes halfway there, but what's really needed is the (hinted at in the past but never delivered) curation of the App Store.

Some kind of official integration with a handful of review websites; some way of filtering out the dross and only showing good-quality apps.

It doesn't matter if they're subscription only, they just have to be good.
 
I don't recall Steve running a company that profits off mass stealing of others' content without permission, compensation, or attribution.
A lot of people would have said exactly that about the MP3 era and the iPod. Or even Rip Mix Burn.
They eventually turned it around with the iTunes Music Store, but they could negotiate it from a position of "otherwise you just get nothing for your content". There are some parallels there, for sure.
 
Good thing there are not “average” people out there.

I am just glad my fingers still work enough, that I can still type what I am looking for in a search bar.

I was a huge skeptic for a long time. I still don't believe 99% of the hype, but it is very useful for many things in my field of work. Mundane things.

Recently one of our users reported a phishing email. It included an attachment with an HTML file containing obfuscated JavaScript. I asked ChatGPT what the function did, and within a few seconds it provided a detailed breakdown, even decoding the base64-encoded part into human-readable text.

That's what I like about "AI". I imagine cases like this being far more useful to the general masses than all this image-generation stuff.
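As a rough sketch of the kind of decoding step described above: phishing pages often hide their real target URL as a base64 string inside obfuscated JavaScript, and unpacking it is a one-liner. The sample string here is hypothetical, not from the reported email.

```python
import base64

# Hypothetical payload of the sort attackers embed in phishing HTML:
# the real destination URL stored as base64 text.
encoded = base64.b64encode(b"https://example.com/fake-login").decode("ascii")

# Decoding reveals the human-readable string -- the same step the
# poster describes ChatGPT performing in its breakdown.
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # https://example.com/fake-login
```

The decoding itself is trivial; the value the poster describes is ChatGPT locating which part of the obfuscated script to decode and explaining what the surrounding code does with it.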
 
I was a huge skeptic for a long time. I still don't believe 99% of the hype, but it is very useful for many things in my field of work. Mundane things.

Recently one of our users reported a phishing email. It included an attachment with an HTML file containing obfuscated JavaScript. I asked ChatGPT what the function did, and within a few seconds it provided a detailed breakdown, even decoding the base64-encoded part into human-readable text.

That's what I like about "AI". I imagine cases like this being far more useful to the general masses than all this image-generation stuff.

I work in clinical healthcare and my company has all the AI sites and services domain/IP blocked on the entire network, for employees and visitors.

Tried it (chatGPT and copilot) anyway on my own device and it was wrong enough that I wouldn't trust it.
 
I work in clinical healthcare and my company has all the AI sites and services domain/IP blocked on the entire network, for employees and visitors.

Tried it (chatGPT and copilot) anyway on my own device and it was wrong enough that I wouldn't trust it.
Blocking it from visitors is pretty shady. Do you block Google also? Doctors are not infallible and patients should always be thinking 'trust, but verify'.
 
Blocking it from visitors is pretty shady. Do you block Google also? Doctors are not infallible and patients should always be thinking 'trust, but verify'.

You can't log in to Google accounts on the employee side with company-issued devices, and mail.google.com is blocked, but regular Google is available.

I don't work with IT in any fashion other than putting in service tickets so I have no idea their reasoning, other than potential HIPAA violations.
 
Hope he'll manage to convince OpenAI to adopt some 30% extortion scheme that goes into Apple's pocket.
 
Tried it (chatGPT and copilot) anyway on my own device and it was wrong enough that I wouldn't trust it.

I think it's an issue of expectations. AI doesn't know anything, it just strings words together and happens to look knowledgeable.

ChatGPT is great for sanitizing data to paste into Excel. That's 90% of my AI usage; the remaining 10% is reminding me of the difference between a JOIN and a LEFT/RIGHT JOIN in an SQL query.
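For anyone who shares that particular memory gap, the distinction fits in a few lines. This toy example (hypothetical tables, run against Python's built-in sqlite3) shows that an inner JOIN keeps only matched rows, while a LEFT JOIN keeps every left-side row and fills the unmatched side with NULL; a RIGHT JOIN is simply the mirror image.

```python
import sqlite3

# Hypothetical toy data: one order references a customer that doesn't exist.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (id INTEGER, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Ana'), (2, 'Bo');
    INSERT INTO orders VALUES (10, 1), (11, 3);  -- customer 3 is missing
""")

# Inner JOIN: only rows with a match on both sides survive.
inner = con.execute(
    "SELECT o.id, c.name FROM orders o "
    "JOIN customers c ON o.customer_id = c.id"
).fetchall()
print(inner)  # [(10, 'Ana')]

# LEFT JOIN: every order appears; the unmatched customer becomes NULL (None).
left = con.execute(
    "SELECT o.id, c.name FROM orders o "
    "LEFT JOIN customers c ON o.customer_id = c.id"
).fetchall()
print(left)  # [(10, 'Ana'), (11, None)]
```

(Note that SQLite only added RIGHT JOIN support in version 3.39; swapping the table order in a LEFT JOIN gets you the same result on older versions.)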
 
Gemini on iOS is going to be so much more useful for search than OpenAI.

Hopefully they get the google deal done before launch.
 
I am sure OpenAI is much more privacy-aware than the abomination of a company called Google and the disgrace of an operating system called Android! Right?
LMFAO, you're drinking too much apple juice bud.

Gemini is coming to iOS, I expect you'll change your tune when you realise OpenAI is useless on search.
 
Apple Intelligence does most of its work on-device, only going out to ChatGPT for stuff it can't handle. Think of it like asking your coworker about something you're not super familiar with at work.
Continuing on how Siri is becoming more restricted by Apple Intelligence hardware requirements. (Siri is able to hand complicated user requests over to ChatGPT with explicit user permission.)


We’re about to enter the Apple Intelligence era, and it promises to dramatically change how we use our Apple devices. Most importantly, adding Apple Intelligence to Siri promises to resolve many frustrating problems with Apple’s “intelligent” assistant. A smarter, more conversational Siri is probably worth the price of admission all on its own.
But there’s a problem.
The new, intelligent Siri will only work (at least for a while) on a select number of Apple devices: iPhone 15 Pro and later, Apple silicon Macs, and M1 or better iPads. Your older devices will not be able to provide you with a smarter Siri. Some of Apple’s products that rely on Siri the most–the Apple TV, HomePods, and Apple Watch–are unlikely to have the hardware to support Apple Intelligence for a long, long time. They’ll all be stuck using the older, dumber Siri.
This means that we’re about to enter an age of Siri fragmentation, where saying that magic activation word may yield dramatically different results depending on what device answers the call. Fortunately, there are some ways that Apple might mitigate things so that it’s not so bad.
This article looks at multiple ways to work around this fiasco.
 
There's nothing to estimate or to be surprised about. I merely pointed out the business deal between OpenAI and MS, and then questioned what type of Apple customer information will be processed across MS resources. Maybe it's no big deal to some and a concern to others. You seem to be trying to make 'what if' points, none of which I understand or see the relevance of.

It is relevant! None of the data processed by Apple's AI goes through OpenAI's ChatGPT. Apple has its own AI and Large Language Models (LLMs). Users have the option of using ChatGPT if they want to, but they will be asked and have to opt in to do that; that is separate from using Apple's own AI models. Apple also has its own Private Cloud Compute.
 
I work in clinical healthcare and my company has all the AI sites and services domain/IP blocked on the entire network, for employees and visitors.

Tried it (chatGPT and copilot) anyway on my own device and it was wrong enough that I wouldn't trust it.

Yeah, it is not a fit for everyone, which is why I mentioned my specific use case. Healthcare should block the use of it: unless you have a BAA (business associate agreement) with OpenAI or Microsoft, you set your organization up for HIPAA violations.
 
You can't log in to Google accounts on the employee side with company-issued devices, and mail.google.com is blocked, but regular Google is available.

I don't work with IT in any fashion other than putting in service tickets so I have no idea their reasoning, other than potential HIPAA violations.
Employees are different from visitors. That's why I specifically called it out. If I'm a patient at a healthcare location and am being prevented from checking my email, that's also an issue. Though I can just use cell data (along with any employees, tbh).
 
I think it is a tad shortsighted to equate «AI» (which isn't what the name says anyway, in almost all cases) with ChatGPT. Machine recognition, learning, and transformation will be a boon when combined with regular day-to-day applications in many small ways. It already is, and has been for some years. It is a more or less inevitable next step in the evolution of science, from Taylorism as a model for rationalizing work, to the Spinning Jenny, the fully integrated robotic factory, the computer, and ever newer and faster means of communication. It is only a first step, which will sooner or later combine with leaps in neuroscience, nanotechnology, robotics, and so on.

It will kill jobs and it will create jobs; it will be a weapon and a tool of creative endeavor. As with all things we humans come up with, we use it to make art, to make life better, and to f%&/( kill each other (AI is used in Ukraine and Gaza alike). Like any technology, what matters is how we use it and how society reacts to progress. Do we have models for redistributing wealth so that machines doing repetitive work enrich not only 1% of society but all of us? Do we have measures for trust and truth in a time when anything can be a hyper-detailed and realistic simulacrum? Look at the pivot smartphones and social media have brought to society and extrapolate that to VR/AR/«AI» and maybe neurohybrid man/machine interfaces.

It's almost boring to talk about AI, as so many people do in 2024, when next year it will be just another feature on your phone. But on the other hand, we are moving, so slowly we almost do not notice it, into the Asimovian world of robots and computers that one day will be smarter than us, maybe not only in reproducing and transforming information but also in creating new ideas. And most importantly, journalism, media, companies, and above all politics should have a model of where we want to go.

At the moment there is not only a sore lack of ideas about the kind of society we want to shape with technology (if you exclude the Randian Silicon Valley models) but a depressing rollback to the '30s of the last century... and these emerging cryptofascist kleptocracies will be terrible to bear (or get rid of) once they employ the new technologies, from face recognition to AI-based societal «nudges» to Orwellian social media used to steer public opinion. At this moment in time we do not just need better tech; we need an idea of society that harmonizes with the technology. We're living in ****ing interesting times.
 