
Would you use AR glasses with the features in the OP?

  • Yes

  • No



Lemon Olive

Suspended
Nov 30, 2020
1,208
1,324
I believe we will come to find that all of Apple's efforts in AR up to this point have merely been proofs of concept for their true purpose, which is a product like Apple Glasses. AR will always be limited in its use cases as long as it requires someone to hold up a viewfinder. It's good for about 20 seconds.

Apple Glasses opens up whole new possibilities, not just for new AR ideas but for the application of even the most basic AR uses, which are still largely unrealized.
 
  • Like
Reactions: Macative

DeepIn2U

macrumors G5
May 30, 2002
12,825
6,880
Toronto, Ontario, Canada
I'll start:

- non-obtrusive notifications (think Mac banners flying in at the side)
- always-on Shazam (like Now Playing notification banners)
- subtitles for persons speaking (bonus: highlight them with colors and color the subtitles accordingly)
- navigation purposes, like Apple Maps extension
- ViewTime; like FaceTime, but the called person sees what you're seeing (i.e. for support purposes)
- and of course games like Pokémon Go

What are your ideas? :)

Nice & concise choices.

- calls, with caller name + Memoji/Animoji (though Apple has never quite gotten those right).
- user-selectable Shazam (always-on listening would drain an already limited battery in under 2 hrs with existing Li-polymer battery technology; maybe with graphene battery tech we could see 2 days of fully powered use?).
- subtitles, again user-selectable because of battery concerns. The key question here is WHERE to place the subtitles. Will the font and font size be user-choosable within a limited range? Will a user-selected size affect visual acuity or obstruct the user's vision?
- Apple Maps with Look Around will be implemented, no doubt about that; the question is how Look Around will be invoked, activated, and cancelled. Hmm. This is where I think hand gestures will be used immensely, including for silent and semi-private text entry in iMessage/email and other apps, and for some function navigation.
- a new game API will most definitely be needed, plus some form of ‘ViewTime‘ notification that the other user is seeing what you see. Apple's software team will be heavy on privacy here and will need to really lock this down. Maybe Siri could learn what a bill, credit card, tax return (USA first, of course), or medical record looks like and emit a blurred block over it for the remote viewer, unless the end user allows it in each instance? (A rough sketch of that idea follows this list.)
^ contacts tagged as family physician/doctor?
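
Purely as an illustration of that last point: you could approximate the blur-for-privacy idea on-device today with Vision and Core Image. Everything below is an assumption on my part (the keyword list, the blur strength); it's a sketch of the technique, not anything Apple has announced.

```swift
import Vision
import CoreImage

// Hypothetical keyword triggers for "sensitive document" regions.
let sensitiveKeywords = ["account number", "credit card", "tax return", "patient"]

/// Returns the input image with any recognized text region that contains
/// a sensitive keyword blurred out. Runs entirely on-device.
func redactSensitiveRegions(in image: CIImage) throws -> CIImage {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .fast

    try VNImageRequestHandler(ciImage: image).perform([request])

    var output = image
    for observation in request.results ?? [] {
        guard let text = observation.topCandidates(1).first?.string.lowercased(),
              sensitiveKeywords.contains(where: { text.contains($0) })
        else { continue }

        // Vision reports normalized coordinates; convert to pixel space.
        let box = VNImageRectForNormalizedRect(observation.boundingBox,
                                               Int(image.extent.width),
                                               Int(image.extent.height))
        let blurred = output.clampedToExtent()
            .applyingGaussianBlur(sigma: 20)   // assumed blur strength
            .cropped(to: box)
        output = blurred.composited(over: output)
    }
    return output
}
```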

My ideas:
Apple's Glasses will be a complement to the iPhone/Watch for the first few iterations.
Reason: the phone will power larger, more battery-draining features like always-on Shazam, Apple Music, VoIP communications, etc.

Micro-apps from third parties will be implemented first.

Communications:
Audio will be routed via AirPods; they already work, and the synergy and experience will be much better than using the limited space/cheap parts on tiny glasses frames. It also greatly saves on battery consumption.

Text input will be either via Siri dictation through AirPods or your BT headphones, yet for private text communication it'll be via ASL. Privacy, speed, and accuracy will be paramount; not to mention some people can speak one language, sentence, and topic aloud while signing something entirely different in ASL, in brief bouts. You can see this in the Children of Dune miniseries, where two Bene Gesserit (Paul's official wife Irulan and the jailed wife of the former Emperor Shaddam) converse this way. Really cool, actually, if you get the chance. That show features a very young James McAvoy.

Power:
auto standby on glasses removal, with IR detection. Deep standby after 30 mins SHOULD happen. Auto power-on with wear detection and rapid sync in 10 seconds or less. Charging will be like Apple Watch, yet with a VERY small, 1 cm wide wireless connector at the center of the nose-bridge area.
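
If you sketched that power logic as code, it might look something like the toy state machine below. The sensor callback and the timings are my assumptions taken from the description above, not anything Apple has specified.

```swift
import Foundation

// Toy power-state machine for the behavior described above.
enum GlassesPowerState {
    case active, standby, deepStandby
}

struct PowerController {
    private(set) var state: GlassesPowerState = .active
    private var removedAt: Date?

    // Hypothetical IR wear-sensor callback.
    mutating func wearSensorChanged(isWorn: Bool, now: Date = Date()) {
        if isWorn {
            state = .active          // wake + rapid sync (< 10 s target)
            removedAt = nil
        } else {
            state = .standby
            removedAt = now
        }
    }

    // Called periodically; demotes standby to deep standby after 30 min.
    mutating func tick(now: Date = Date()) {
        if state == .standby, let t = removedAt,
           now.timeIntervalSince(t) > 30 * 60 {
            state = .deepStandby
        }
    }
}
```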
 
  • Like
Reactions: chcurtis

chcurtis

macrumors newbie
Jan 3, 2021
4
2
... Charging will be like Apple Watch, yet with a VERY small, 1 cm wide wireless connector at the center of the nose-bridge area.
I suggest that a charging port would be better on one of the arms, or at least at the corner of the frame. A popular thing to do on the Oculus Quest is to wear it with a portable power pack behind your head, and I imagine that would be a popular option here, too. A power connection at the bridge of the nose would make that impossible.

Your idea of using AirPods as an adjunct is interesting. The proximity of the eyeglass arm and the AirPod makes me wonder if charging there might also be possible.

PS - I hope you meant 1 mm. A centimeter is about the width of a USB Type-A plug.
 
  • Like
Reactions: DeepIn2U

DeepIn2U

macrumors G5
May 30, 2002
12,825
6,880
Toronto, Ontario, Canada
I suggest that a charging port would be better on one of the arms, or at least at the corner of the frame. A popular thing to do on the Oculus Quest is to wear it with a portable power pack behind your head, and I imagine that would be a popular option here, too. A power connection at the bridge of the nose would make that impossible.

Your idea of using AirPods as an adjunct is interesting. The proximity of the eyeglass arm and the AirPod makes me wonder if charging there might also be possible.

PS - I hope you meant 1 mm. A centimeter is about the width of a USB Type-A plug.

Centimeter as in the width: about the same width as the bridge of someone's nose, which seems pretty constant from toddlerhood well into our later adult years. Depth: yes, 1 mm.
 

JuneJune

macrumors member
Oct 10, 2020
68
97
Scandinavia
I'm on my second set of Bose Frames with prescription lenses; I've been wearing them as my only glasses for a bit over a year and a half. I'm really hoping Apple will come out with Glasses that take prescription lenses before my current frames break (the first set broke after a bit over a year; one hinge stopped transferring signals).

Having immediate and unobtrusive access to audio for phone calls and background music is great, and I am looking forward to what Apple might bring to glasses, and hoping they will be available for prescription lenses with a few frame options.
 

rhyme2

macrumors newbie
Dec 14, 2020
20
16
I am borderline face-blind. I really, really want an AR product that will label people I see with their name (if I've seen and named them before in a linked app) and tell me when and where I last saw them.
Also handy would be a built-in function to keep track of objects like keys and books and bring up an image of where I last saw them if they get lost.
 
  • Like
Reactions: JuneJune

mattspace

macrumors 68040
Jun 5, 2013
3,146
2,861
Australia
I am borderline face-blind. I really, really want an AR product that will label people I see with their name (if I've seen and named them before in a linked app) and tell me when and where I last saw them.
Also handy would be a built-in function to keep track of objects like keys and books and bring up an image of where I last saw them if they get lost.

Given the worldwide backlash against facial recognition, it would be interesting to see how people with prosopagnosia would be able to use facial identification in AR:
  • Would it only work in the public sphere where privacy is not protected?
  • Would it fail to work near public children's playgrounds where photography is generally banned?
  • Would private venue owners be able to geofence their premises to force it to stop working, similar to the way drones refuse to work in no-fly zones?
  • Would there be medical exemptions from geofencing, and who would grant them / enforce their application?
    • How do you ensure the person wearing the medically exempt AR system is the person to whom the exemption was granted?
    • Does that give rise to a black market in unlocked glasses, or is it a capability built into every headset that is unlocked by the presence of another biometrically tagged device?
People are usually very happy to have wheelchair ramps in their environment, but less so when someone with a neurological impairment pulls out a recording device for its prosthetic memory capabilities.
 

rhyme2

macrumors newbie
Dec 14, 2020
20
16
Given the worldwide backlash against facial recognition, it would be interesting to see how people with prosopagnosia would be able to use facial identification in AR:
  • Would it only work in the public sphere where privacy is not protected?
  • Would it fail to work near public children's playgrounds where photography is generally banned?
  • Would private venue owners be able to geofence their premises to force it to stop working, similar to the way drones refuse to work in no-fly zones?
  • Would there be medical exemptions from geofencing, and who would grant them / enforce their application?
    • How do you ensure the person wearing the medically exempt AR system is the person to whom the exemption was granted?
    • Does that give rise to a black market in unlocked glasses, or is it a capability built into every headset that is unlocked by the presence of another biometrically tagged device?
People are usually very happy to have wheelchair ramps in their environment, but less so when someone with a neurological impairment pulls out a recording device for its prosthetic memory capabilities.
I think it might address a lot of the privacy concerns if it doesn't look people up on the internet but instead only labels people the user has personally labeled in a linked app on the phone.
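
That local-only approach is even sketchable with today's frameworks. The toy below stores user-assigned labels against on-device Vision feature prints and never touches the network. Big caveat: feature prints measure general image similarity and are not a face-recognition API, and the distance threshold is a number I made up, so treat it purely as an illustration of "match against my own labels, locally".

```swift
import Vision
import CoreImage

// Local-only label store: the user assigns names to face crops,
// and lookups match against those on-device, never the internet.
final class LocalFaceLabels {
    private var entries: [(label: String, print: VNFeaturePrintObservation)] = []

    private func featurePrint(for face: CIImage) throws -> VNFeaturePrintObservation? {
        let request = VNGenerateImageFeaturePrintRequest()
        try VNImageRequestHandler(ciImage: face).perform([request])
        return request.results?.first
    }

    /// The user labels a cropped face image in the linked app.
    func remember(_ label: String, face: CIImage) throws {
        if let fp = try featurePrint(for: face) { entries.append((label, fp)) }
    }

    /// Returns the closest stored label, if close enough (made-up threshold).
    func label(for face: CIImage, threshold: Float = 10) throws -> String? {
        guard let fp = try featurePrint(for: face) else { return nil }
        var best: (label: String, distance: Float)?
        for entry in entries {
            var distance: Float = 0
            try entry.print.computeDistance(&distance, to: fp)
            if distance < threshold && distance < (best?.distance ?? .infinity) {
                best = (entry.label, distance)
            }
        }
        return best?.label
    }
}
```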
 

RadioHedgeFund

Cancelled
Sep 11, 2018
422
869
I imagine the following situation at my workplace, a large lab space/engineering workshop/CAD lab:

“I get to work and I see that my AR goggles are flashing blue, indicating some jobs need to be done. I don my ski-mask-sized goggles. Upon starting them up I see my boss has sent me a few emails. I read them in floating text that appears before me in the air. He says he is going to be late and that some guys will be in later to service the laser extractors. I throw the emails to one side and go out into the workshop.

IoT sensors within the pieces of equipment alert me to a fault with a stepper motor on a CNC router. I open it up and find the motor needs replacing. I check the department inventory on a HUD and discover we have one left in supplies. It's a new model I'm not familiar with, so I pull up the manufacturer's YouTube guide for replacing the motor, which hovers in the air. The downloaded instructions then use a combination of LiDAR and FaceID-type sensors to show me exactly which way to fit it. I close up the machine and leave it on a diagnostic cycle to check it works.

I gaze around the workshop to check the other bits of machinery. As I look at each one, a small display pops up showing me when it was installed and all the relevant diagnostic data, like when PAT and LEV testing was last done.

The lab door goes off and a student group comes in. They have a CAD model they want analysed for 3D printing. Instead of waiting for it to load up on a desktop, one of them ‘throws’ it from their phone towards my headset and it appears in the air in front of me. I load it onto the projector system and they each take a pair of filter glasses allowing them to see it in the air. We walk around the model and it looks good. The slicing system loads the print, estimates the weight, and lets them know the final cost.

A notification dot appears and I tap the side of my goggles. One of my colleagues has recorded a video message and I play it in front of me. She has been out taking LiDAR scans of a site and wants my input on how we might improve drainage there as part of a research project we are both working on. I load up the scans and walk through the site. I send her some audio notes tagged to where I recorded them. On her end she is able to walk around the field with her headset on and see the notes I have made over specific points. She takes them and moves on.

We have a class coming in that afternoon looking at construction material samples, but it has been a couple of years since I last ran this one. I load up the lab guide as a floating document and it creates a green outline of the equipment for each lab bench and where it should go. Where I forget where certain things are, large arrows appear in front of me leading to their location. My goggles scan the objects to make sure each one is in the relevant wireframe and tick them off.

I take the goggles off and charge them over lunch whilst I go outside. I wander the city centre, happy to not have things floating and distracting me outside of a controlled environment.

Later that afternoon we have a project student wanting to build some small walls. Earlier that month I recorded a first-person tutorial that they can load onto the lab goggles lent to them. The actions from my tutorial appear as a blue hologram, and they follow my first few actions before removing the headset and taking control themselves, gaining independence.

I get another notification dot. The contractors have turned up, but on the top floor of the adjoining building. I send one of them a link from my goggles which gives me first-person access to their phone camera in a floating window. I direct them down the right corridors and stairs from my own lab space.

5 pm rolls around and it is time to go home. I have two last notifications. I open the first and discover the CNC diagnostic has failed the new stepper motor. I quickly overnight one from Germany so I can fit another tomorrow. I remove my headset and leave the lab. The 3D printer pings my phone telling me it will be another 16 hours. I set a reminder for my headset the next day and go home.”
 

robertmorris2

macrumors regular
Mar 15, 2006
125
122
Key West
I could see these being used for robotic surgery. Right now, as far as I know, there is just a monitor to view the surgery. A 3D view would really help out.
 

wonderings

macrumors 6502a
Nov 19, 2021
651
550
I used to love the idea of AR for everyday life, but now I'm generally not interested, as I don't want more tech interfering and being ever-present. When my Apple Watch dies I will go back to a traditional watch, and I have even contemplated switching to a dumb phone when I look at what I do with my iPhone; it really is nothing productive or anything that gives meaning to my life. Work pays for my phone, though, so I am guessing I won't ditch that. What I do love about AR is navigation. BMW had a tech demo, or maybe it was just a concept video, of an AR HUD in a motorcycle helmet. There have been a few attempts at HUDs in motorcycle helmets, but none have come out with anything that looked as slick as this. I would buy glasses for this feature, though with my desire to "de-tech" my life I would really prefer it in a standalone helmet; of course, that is more of a niche market.

 
  • Love
Reactions: turbineseaplane

subjonas

macrumors 603
Feb 10, 2014
5,541
5,869
I used to love the idea of AR for everyday life, but now I'm generally not interested, as I don't want more tech interfering and being ever-present. When my Apple Watch dies I will go back to a traditional watch, and I have even contemplated switching to a dumb phone when I look at what I do with my iPhone; it really is nothing productive or anything that gives meaning to my life. Work pays for my phone, though, so I am guessing I won't ditch that. What I do love about AR is navigation. BMW had a tech demo, or maybe it was just a concept video, of an AR HUD in a motorcycle helmet. There have been a few attempts at HUDs in motorcycle helmets, but none have come out with anything that looked as slick as this. I would buy glasses for this feature, though with my desire to "de-tech" my life I would really prefer it in a standalone helmet; of course, that is more of a niche market.

This is a super cool HUD helmet, but not AR from the looks of the video since it doesn’t anchor to the world. Or were you referring to a different helmet?
 

MikeELL

macrumors regular
Aug 18, 2006
127
1
Perth, Australia
So, today we’re gonna reinvent the phone.
...
Now, we’re gonna start… with a revolutionary user interface. It is the result of years of research and development, and of course, it’s an interplay of hardware and software.
...
We have been very lucky to have brought a few revolutionary user interfaces to the market in our time.
First was the mouse.
The second was the click wheel.
And now, we’re gonna bring multi-touch to the market.
And each of these revolutionary user interfaces has made possible a revolutionary product – the Mac, the iPod and now the iPhone.

I keep coming back to these words when I think about what Apple will do with VR/AR, because to me the whole space doesn't really make sense without a user interface that is just as obviously better than the status quo, as when you saw Steve Jobs unlock the iPhone and scroll through his music for the first time. The only reason it makes sense to me that they've held off for so long is that they're really going to swing for the fences with the UI. Something that makes current VR systems feel like you're an elderly person pecking at a keyboard with two pointer fingers.

No inside knowledge here, I should say. I just know what I would love to see, which is a full "Minority Report"-style gesture interface that can manipulate virtual objects in your field of view. I'm presuming there will be cameras/depth-map sensors facing towards your hands from the headset, but even knowing what they've achieved with the depth sensors in mobile devices, the U1 chips, and the FaceID-type image processing, I keep wondering whether they'll be able to achieve a sufficiently accurate level of gesture recognition with just one sensor setup.

I had an idea a while back about the headset being "bimodal" in some sense: it would have some simpler display capabilities when on its own, but when used as an accessory to a Mac, iPad, or iPhone with an additional FaceID sensor setup, it would offload the (presumably processor-intensive) work of sensing what your hands are doing in 3D space from two or more different perspectives, making possible much more complex gestures and a much more intuitive UI.
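
For what it's worth, the fusion step in that idea is conceptually simple; the hard parts are calibration and latency. Here's a toy sketch of combining two per-joint estimates, weighted by confidence. All the types here are made up for illustration, not a real Apple API.

```swift
import simd

// One device's estimate of a single hand joint, in a shared world space.
struct JointEstimate {
    var position: SIMD3<Float>
    var confidence: Float   // 0...1; low when the joint is occluded from that view
}

// Confidence-weighted fusion of the headset's estimate with a second
// device's (e.g. a Mac/iPhone FaceID-style sensor watching your hands).
func fuse(_ a: JointEstimate, _ b: JointEstimate) -> SIMD3<Float> {
    let total = a.confidence + b.confidence
    guard total > 0 else { return a.position }
    return (a.position * a.confidence + b.position * b.confidence) / total
}
```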

It's also going to take some time for developers to figure out what to do with it, so what I'm hoping for/expecting is that they'll have to demo this long before it comes onto the market, like they did with the iPhone. Here's hoping that happens this WWDC...
 

mattspace

macrumors 68040
Jun 5, 2013
3,146
2,861
Australia
I keep coming back to these words when I think about what Apple will do with VR/AR, because to me the whole space doesn't really make sense without a user interface that is just as obviously better than the status quo, as when you saw Steve Jobs unlock the iPhone and scroll through his music for the first time. The only reason it makes sense to me that they've held off for so long is that they're really going to swing for the fences with the UI. Something that makes current VR systems feel like you're an elderly person pecking at a keyboard with two pointer fingers.

You really need to look at Ultraleap's developer documentation: https://docs.ultraleap.com/xr-guide...tracking#provide-as-much-feedback-as-possible

...and understand that it's Unity & Unreal who are going to build the development environments for VR/AR, not Apple.

ARKit is a dumb pipe to Apple's sensor hardware, but developers don't work in ARKit - it's been over 1000 days since the "made in ARKit" twitter account, which publicised ARKit projects in the early days, posted anything. AR/VR devs work in Unity/Unreal, which then deploy to the lowest common denominator of ARCore/ARKit to talk to the hardware. But realistically, no one is developing AR for "Apple platforms"; they're developing for Unity/Unreal - those are the platforms.
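
To make the "dumb pipe" point concrete, the sketch below is roughly the level ARKit operates at: you configure a session and it hands you raw per-frame tracking data. Engines like Unity/Unreal sit on top of exactly this kind of feed; everything app-like lives above it.

```swift
import ARKit

// Minimal ARKit usage: start a session, receive raw sensor-derived frames.
final class SensorFeed: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(config)
    }

    // Called every frame with the camera pose and captured image.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pose = frame.camera.transform      // simd_float4x4 in world space
        _ = (pose, frame.capturedImage)        // an engine consumes these
    }
}
```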

Hand tracking and proprioceptive embodiment are the killer app for VR/AR, although Valve have a very good idea with their strap-on controllers that combine hand tracking with a button and control-surface based peripheral. There's a need for different physical peripheral tools for different tasks.


I think the fundamental disconnect Apple-centric folks have about what the technology is, is that there is no reason to expect a platform-unified UI paradigm in VR/AR. The UI is furnished within the application, and is almost always a custom thing - in the same way that games do not all use the exact same UI, aside from generalised conventions for what works well within genres. Indeed, I suspect most developers would be inherently hostile to the idea of a VR/AR platform vendor attempting to impose a standard UI upon them. Realistically, what a platform vendor can furnish in a standardised form are open/save filesystem dialogs, which of course would be styled by the application to fit its UI aesthetic.

In terms of gestures - pinch to zoom, grab to stretch, thumb-forefinger twist rotation, etc. - AR/VR is 3D and physical, and gestures only make true proprioceptive sense when they map to physical object manipulation.
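
(As an aside, the building block for exactly that kind of physically grounded gesture already ships in Apple's Vision framework. A minimal pinch check might look like the sketch below; the confidence cutoff and the 0.05 touch threshold are guesses on my part.)

```swift
import Vision

// Detect a thumb-index pinch from a camera frame using Vision's
// hand-pose request. Thresholds here are guesses, for illustration.
func isPinching(in pixelBuffer: CVPixelBuffer) throws -> Bool {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1
    try VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])

    guard let hand = request.results?.first else { return false }
    let thumb = try hand.recognizedPoint(.thumbTip)
    let index = try hand.recognizedPoint(.indexTip)
    guard thumb.confidence > 0.5, index.confidence > 0.5 else { return false }

    // Locations are normalized (0...1) image coordinates.
    let dx = Double(thumb.location.x - index.location.x)
    let dy = Double(thumb.location.y - index.location.y)
    return (dx * dx + dy * dy).squareRoot() < 0.05
}
```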

And this follows on from how gestures have largely become a lost cause on iOS - what started out as something that was only used for its skeuomorphic value of imitating the movement of a specific virtual physical object has become an entirely abstract command line of touch, where arbitrary gestures have to be rote-memorised as connected to arbitrary outcomes.

Again, if you look at what Ultraleap can do, they have full finger tracking and articulation - an AR keyboard can be mapped onto any flat surface and provide exactly the experience of typing on a touchscreen. Likewise a VR-tagged physical keyboard can function equally well (except the key caps can be anything).

In practical terms, I foresee text entry for things like open/save dialogs being far more likely to be implemented with cellphone-style thumb keyboards than virtual two-hand keyboards, from a sheer ergonomic perspective - holding one arm out to type with one thumb is MUCH easier than holding both hands out in the air to try to pose over a full keyboard - and VR is largely a stand-in-the-middle-of-the-room activity. There's no reason to use VR seated at a desk, when you could just as easily use a 3D-glasses based 3D monitor.

AR/VR is not about putting a monitor-based computing paradigm onto your face; it's about recreating a physical, pre-computer-abstraction workshop way of working, but with the physical work devices virtualised. That's the thing - physical devices are almost always better, more satisfying things to work with than flat onscreen versions. A real airbrush is in every way a better tool for doing airbrush art than a Wacom tablet and an airbrush tool. I would bet you any 3D animator would prefer to use a Dinosaur Input Device to do physical stop-frame than an entirely on-screen 3D package.

VR/AR is about creating a place, and manipulatable objects within that place. The entire history of computer UI has been about the destruction of place, and the destruction of objects, and their replacement with abstracted proxies of those places, and objects as flat images.

I'm not sure Apple - which, to be fair, has been on a long drawn-out collapse in its ability to field good UI since 2013-ish (most notably having to scrap the Apple Watch UI paradigm and start again from scratch after releasing the original version) - has the ability to recognise what AR/VR require, beyond a simplistic "translucent notification HUD with an app store attached", i.e. an iPhone you wear on your face.
 
Last edited:

MikeELL

macrumors regular
Aug 18, 2006
127
1
Perth, Australia
You really need to look at Ultraleap's developer documentation: https://docs.ultraleap.com/xr-guide...tracking#provide-as-much-feedback-as-possible

...and understand that it's Unity & Unreal who are going to build the development environments for VR/AR, not Apple.
<snip>
I think the fundamental disconnect Apple-centric folks have about what the technology is, is that there is no reason to expect a platform-unified UI paradigm in VR/AR. The UI is furnished within the application, and is almost always a custom thing - in the same way that games do not all use the exact same UI, aside from generalised conventions for what works well within genres. Indeed, I suspect most developers would be inherently hostile to the idea of a VR/AR platform vendor attempting to impose a standard UI upon them. Realistically, what a platform vendor can furnish in a standardised form are open/save filesystem dialogs, which of course would be styled by the application to fit its UI aesthetic.
Appreciate the long-form response, and was definitely NOT aware of Ultraleap's work. That's fantastic! Also agree that Unity and Unreal are the development environments of choice (at least for now).

I guess the reason I'd disagree, and say I think Apple will come into the VR/AR space in a big way - no doubt annoying a great many people in the process - is that they have to from a business perspective. I don't think anyone who's paying even casual attention (as I have been) can doubt that some form of VR/AR is what the future of personal computing looks like. Apple's entire business model is computing platforms that they make in a vertically integrated way, hardware and software. There's no way they're getting on the bandwagon without trying to vertically integrate in this space too, and if they don't get on the bandwagon, they'll go out of business eventually.

I expect that whenever Apple's next generation of OSes arrives, it will be for devices that are VR/AR-integrated in a deep way. Maybe that does mean getting away from 2D screens and the desktop paradigm. Maybe Apple TV and HomePod setups end up being the vehicle they use to bring VR/AR into people's living rooms - and to orders of magnitude more people than have ever tried it before. Apple's whole claim to fame is that they build good UI models for previously inaccessible technologies (mixed success) and bring them to the masses (undoubtedly true).

I noticed that the Ultraleap documentation specifically talks about occlusion as "something that happens rarely", but evidently not so rarely that they don't have to advise developers to design around the limitation. The obvious way around that problem is to have two or more sets of sensors in a space, as I supposed in my post above, and have them talk to each other with sufficiently low latency - I genuinely think this would be something Apple would be good at, even if ARKit remains a dumb pipe to Apple sensors for developers in a Unity/Unreal environment.

In practical terms, I foresee text entry for things like open/save dialogs being far more likely to be implemented with cellphone-style thumb keyboards than virtual two-hand keyboards, from a sheer ergonomic perspective - holding one arm out to type with one thumb is MUCH easier than holding both hands out in the air to try to pose over a full keyboard - and VR is largely a stand-in-the-middle-of-the-room activity. There's no reason to use VR seated at a desk, when you could just as easily use a 3D-glasses based 3D monitor.

AR/VR is not about putting a monitor-based computing paradigm onto your face; it's about recreating a physical, pre-computer-abstraction workshop way of working, but with the physical work devices virtualised. That's the thing - physical devices are almost always better, more satisfying things to work with than flat onscreen versions. A real airbrush is in every way a better tool for doing airbrush art than a Wacom tablet and an airbrush tool. I would bet you any 3D animator would prefer to use a Dinosaur Input Device to do physical stop-frame than an entirely on-screen 3D package.

VR/AR is about creating a place, and manipulatable objects within that place. The entire history of computer UI has been about the destruction of place, and the destruction of objects, and their replacement with abstracted proxies of those places, and objects as flat images.

I'm not sure Apple - which, to be fair, has been on a long drawn-out collapse in its ability to field good UI since 2013-ish (most notably having to scrap the Apple Watch UI paradigm and start again from scratch after releasing the original version) - has the ability to recognise what AR/VR require, beyond a simplistic "translucent notification HUD with an app store attached", i.e. an iPhone you wear on your face.

Totally agree about thumb-keyboard input, and about AR/VR being a completely different paradigm from strapping a 2D monitor to your face. I'm not sure why you leap from there to not wanting to use VR while at a desk (or at least while seated), nor why you think 3D animators would prefer a Dinosaur Input Device: a cool relic of history, but I found a YouTube video saying it literally only got used for about 6 movies before it became obsolete, as animators shifted over to a fully digital paradigm.

I've spent 10+ years CAD'ing in a virtual environment, and find it mostly intuitive (Sketchup, Shapr3D, and Fusion 360 each have their strengths). The most annoying thing is the way the UI changes from application to application, so I personally see a great need for standardisation of interfaces for dealing with 3D objects/spaces (I almost don't care which one it is, just pick one). I've also got a friend who runs a free-roam VR gaming centre, which is fun to play at once in a while, but it doesn't really hold my interest... really I just want an interface I can build stuff with. The Ultraleap interface, or something like it, is something I would love to see applied to CAD work in a virtual environment. It's simple enough that I could totally imagine a whole bunch of adults/kids wearing headsets and sitting in a circle, playing with objects/architecture/CAD in a virtual space.

That last is another killer app: the ability to make VR/AR a truly communal activity is something I could see Apple excelling at, but as you point out there's also good reason for skepticism that Apple can pull it off. Still, if anyone can do it...
 

MandiMac

macrumors 65816
Original poster
Feb 25, 2012
1,431
882
Still thrilled about the mere possibility of live subtitles for real-world conversations in different languages. It could be a massive game changer when everyone can suddenly communicate with everyone, thanks to Apple Glass.
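
The live-transcription half of that already exists in Apple's Speech framework; a rough sketch is below. It stops at transcription (authorization prompts omitted); translating into the viewer's language would be a separate layer on top, which I'm only assuming.

```swift
import Speech
import AVFoundation

// Live transcription: the raw ingredient for real-world subtitles.
// Permission prompts are omitted; translation would be a separate step.
final class LiveSubtitler {
    private let engine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer()
    private let request = SFSpeechAudioBufferRecognitionRequest()

    func start(onText: @escaping (String) -> Void) throws {
        let input = engine.inputNode
        input.installTap(onBus: 0, bufferSize: 1024,
                         format: input.outputFormat(forBus: 0)) { buffer, _ in
            self.request.append(buffer)
        }
        engine.prepare()
        try engine.start()

        _ = recognizer?.recognitionTask(with: request) { result, _ in
            if let text = result?.bestTranscription.formattedString {
                onText(text)   // render as a subtitle overlay in the glasses
            }
        }
    }
}
```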
 
  • Like
Reactions: Macative

Macative

Suspended
Mar 7, 2022
834
1,319
Still thrilled about the mere possibility of live subtitles for real-world conversations in different languages. It could be a massive game changer when everyone can suddenly communicate with everyone, thanks to Apple Glass.
Technology continuing to make knowledge itself redundant.
 

subjonas

macrumors 603
Feb 10, 2014
5,541
5,869
Technology continuing to make knowledge itself redundant.
Not always a good thing, but also not always a bad thing. Not everyone has time to learn every language, but it's still very beneficial, and even crucial, to be able to communicate in certain situations. We also depend on professionals to do things for us that we don't know how to do (like fix a car or plumbing), so in this case, whether it's tech or another human, both are delegations of knowledge/work for practical reasons.
However, if the translation technology is done with machine learning, I suppose the general question that poses then is whether humankind should be using machine learning and AI, and to what extent.
 