You guys are thinking short term with current technology.

Long term, there will be enough cellular bandwidth that everything will run in the cloud; your mobile phone or AR glasses will simply function as an access point to a supercomputer in the cloud (this includes the whole OS, apps & video games).

And the two major ways to improve the end-point experience are: 1) improve the network speed (which is out of Apple's hands), and 2) compression/AI-powered methods that produce greater fidelity using less network bandwidth (where Apple is investing).
AI upscaling is not improving fidelity; it is improving resolution.

Fidelity is capturing something as it is in real life, to the greatest extent possible. AI is simply guessing what something should look like, like those Samsung moon photos.

And what you have described is a nightmare.
 
I’m out of my depth here. I’m curious, if skeptical, about what the real use is here, since most of the markets Apple participates in and targets wouldn't seem to need this kind of compression.

I see the need for compression on low-bandwidth networks, but most of the target markets already have high-speed internet, if not fibre or LTE/5G (non-mmWave), so … what is the real use and advantage of Apple having this?!

Simply lower costs for streaming hosted WWDC events?
I would wager it’s all about providing high-quality 4K and 8K content remotely and/or wirelessly. The full-bitrate Blu-ray version of a 4K movie, which keeps essentially all the detail, streams at around 100 Mbps (maybe more, maybe less, depending on the source). That’s what it takes if you want full Blu-ray-quality detail. If you have a Blu-ray player, try streaming a version of any movie in 4K, then watch the exact same movie on the Blu-ray player. The difference is immediately obvious, but I think most people don’t realize it yet (or possibly don’t care).
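To put rough numbers on that gap (these are typical published bitrates, not exact figures for any one service or disc), a quick Python back-of-the-envelope:

```python
# Typical bitrates only; actual numbers vary by service, disc, and title.
def gigabytes(mbps, hours):
    """Data moved by a stream of `mbps` megabits per second over `hours` hours."""
    return mbps * 1e6 * hours * 3600 / 8 / 1e9

for label, mbps in [("4K streaming service (~15-25 Mbps)", 20),
                    ("4K UHD Blu-ray video (~80-100 Mbps)", 90)]:
    print(f"{label}: roughly {gigabytes(mbps, 2):.0f} GB for a two-hour movie")
```

That works out to roughly 18 GB versus roughly 80 GB for the same two-hour movie, which is the quality gap you can see between a streamed 4K title and the disc.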

That said, Apple has always pushed the quality limits of their screens, and now possibly a VR headset. In that headset, you’ll need 4K per eye at a minimum. There are already rumors that 8K per eye is coming, and yes, the human eye can perceive the difference at such close range.

So… if you want the best possible color reproduction, best detail, most pixels, etc… you’re looking at 1-10 Gbps streaming, uncompressed. Obviously that’s going to present a challenge, especially when going wireless. So you have to compress, which means a loss of data (and quality).
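For context on where that 1-10 Gbps figure comes from, here’s a back-of-the-envelope calculation; the frame rates and pixel formats are my own assumptions, not anything from Apple:

```python
# Back-of-the-envelope check on the "1-10 Gbps uncompressed" figure; the frame
# rates and pixel formats below are assumptions, not numbers from any spec.
def uncompressed_gbps(width, height, fps, bits_per_pixel):
    """Raw bandwidth, in Gbit/s, for one uncompressed video stream."""
    return width * height * fps * bits_per_pixel / 1e9

# 4K at 24 fps, 8-bit 4:2:0 (12 bits/pixel) -- lower end of the range
print(f"{uncompressed_gbps(3840, 2160, 24, 12):.1f} Gbps")
# 4K at 60 fps, 10-bit 4:2:2 (20 bits/pixel) -- upper end of the range
print(f"{uncompressed_gbps(3840, 2160, 60, 20):.1f} Gbps")
# 8K would multiply either figure by 4, and a headset needs a stream per eye.
```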

If you could find some way to compress and decompress quickly, and restore close to 100% of the quality, you would have found a way to overcome this. AI engines are really becoming excellent at “upscaling” videos by adding detail, not just multiplying the pixels. They are fast enough (with the right hardware) to do this in real time, even with some buffer, so it’s a truly viable option.
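For a sense of what “real time” means here, this is the per-frame time budget that decoding, upscaling, and display all have to share (the refresh rates are just common examples, not Apple’s targets):

```python
# Per-frame time budget that decode + upscale + display must fit inside;
# refresh rates here are common examples, not anything Apple has announced.
for fps in (24, 60, 90, 120):
    print(f"{fps:>3} fps -> {1000 / fps:.1f} ms per frame")
```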

The problem with these upscalers is that they make a best guess as to the data that was lost, and fill in the blanks. Sometimes things come out too smooth or too polished. Programs allow you to add grain artificially to hide that, but it’s still not perfect.

However, if the AI is trained on a known compression algorithm, it will “guess” close to perfectly what data is missing every time. I think it’s a great idea and could really help move things forward with higher-quality streaming all around.
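Here’s a minimal toy sketch of that idea in Python/NumPy. The “codec” is just 2x downsampling and the “AI” is a plain least-squares model, so none of this reflects what Apple actually ships; it’s only meant to show that a model trained against a known degradation can reconstruct far more than a naive upscaler that just multiplies the pixels:

```python
import numpy as np

rng = np.random.default_rng(0)
N, TRAIN, TEST = 32, 4000, 1000   # signal length, training pairs, test pairs

def make_signal():
    # Toy stand-in for natural content: a smooth mix of a few low frequencies.
    t = np.arange(N) / N
    s = np.zeros(N)
    for f in range(1, 7):
        s += rng.normal(scale=1.0 / f) * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    return s

def compress(s):
    # The "known compression": keep only every other sample (2x downsample).
    return s[::2]

# Build (degraded, original) pairs using the known codec.
originals = np.stack([make_signal() for _ in range(TRAIN)])
degraded  = np.stack([compress(s) for s in originals])

# "Train" a restoration model: least-squares linear map from degraded to original.
W, *_ = np.linalg.lstsq(degraded, originals, rcond=None)

# Fresh content the model has never seen.
test          = np.stack([make_signal() for _ in range(TEST)])
test_degraded = np.stack([compress(s) for s in test])

naive    = np.repeat(test_degraded, 2, axis=1)   # "just multiplying the pixels"
restored = test_degraded @ W                     # trained on the known compression

print("MSE of naive upscale:  ", np.mean((naive - test) ** 2))
print("MSE of learned restore:", np.mean((restored - test) ** 2))
```

The learned restoration comes out near-perfect on this toy data precisely because the model knows exactly how the signal was degraded, while the naive upscale leaves obvious error, which is the whole argument for pairing the upscaler with a known compression scheme.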
 