- A lot of people are falling for this "engineers" spiel. Of course the PS4 pushes gaming forward. It can do cloud AI, the "PlayStation Platform" to, as they said, play your PS games on any device (they showed a picture of a tablet, PS3 and Vita), stream sharing, stream control, and demo streaming. All those gesture commands are possible with the PS4 camera too (which I probably won't get unless something compelling comes out for it).
Both devices push gaming forward in a huge way; the Xbox removing DRM doesn't stop it from doing so. Just don't buy disc-based games!
Sony said they can do cloud offloading, but that's all they've said. Meanwhile they offer no cloud computation service to compete with Azure, so everyone's in the dark on how they'd implement it. If they can do cloud offloading, why are all the games actually using it (Titanfall, Sunset Overdrive, Forza 5, etc.) MS exclusives?
The PS4 Eye isn't the same as Kinect. All Sony did was increase the camera resolution of the previous Eye and add a second camera for stereoscopic depth perception. Facial recognition should be fine (basically the same as an FB auto-tag), and voice recognition should be fine too (unless you're doing language modeling like Siri, a voice recognition implementation is pretty simple). They're basically using pattern recognition technologies that have been around for a while. Low light still sucks because they use the visible spectrum, which is why you need a shiny light on the controller for it to track. Kinect is really a whole new use of AI: it uses IR pulses for imaging, and combines skeletal modeling, muscle rotation, and heartbeat detection with a physics model for movement. What I'm excited about is using this in conjunction with a conventional controller, something that could lead to newer gameplay experiences.
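The stereo part is just triangulation geometry, for what it's worth: two cameras a known distance apart see the same feature at slightly different horizontal positions, and that shift (disparity) gives you depth. A minimal sketch of the math; the focal length, baseline, and disparity numbers here are made up for illustration, not the PS4 Eye's actual specs:

```python
# Depth from stereo disparity: Z = f * B / d, where f is the focal length
# (in pixels), B is the baseline between the two cameras (in metres), and
# d is the disparity (horizontal pixel shift of a feature between views).
# All numbers are illustrative, not real hardware specs.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate the distance to a point seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature that shifts 40 px between two cameras ~8 cm apart:
z = depth_from_disparity(focal_px=800, baseline_m=0.08, disparity_px=40)
print(round(z, 2))  # -> 1.6 (metres)
```

Nearby objects shift more between the two views than distant ones, which is why depth precision from visible-light stereo falls off quickly with distance, unlike Kinect's active IR approach.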
- The 3DS and Vita support running a web browser whilst gaming too. Both the PS4 and Xbox are well above those systems, so of course both will have that same feature.
The point isn't so much the browser as it is split-screen multitasking with the hypervisor managing resources. Skype next to a game, a browser next to TV, and (what I'm hoping for but haven't seen yet) TV next to a game, so I can be playing on one side with an NBA game on the other. Mix and match.
- Is Kinect really that useful for media? I've used a lot of tech to play around with my media centres: iOS apps, BT controllers, Kinect gestures and commands, TV gestures. Nothing beats the speed of a controller. Voice is great for AIs (Siri) but not for issuing commands where speed and precision are king.
Nobody knows yet. The 360 didn't have a TV interface built in and didn't treat your cable box as an input. The only console that's tried so far is the Wii U, and its implementation wasn't all that.
- IllumiRoom looks great initially, until you look into it. The Xbox is slower than the PS4 and has less memory for games, so rendering a 180-degree-FOV, high-resolution projection would definitely have an effect on the image on the screen. It could possibly be fixed by networking multiple consoles (like GT5 on Ultra HD displays using networked PS3s). I think it's next gen where we'll see more tech like this, and standardised support for VR headsets like the Oculus Rift.
I don't know how the IllumiRoom projector would work (it's still an R&D project), but I'm all for anything that can lead to greater immersion.
I have reservations about the claim that a max of 5 GB is reserved for games (that hypervisor should be able to dynamically allocate the 8 GB of memory to whatever needs it), but I'm sure we'll find out later.
- PS4 Eye
It has actually been confirmed that it is much more accurate than the first Kinect (obviously, there are no comparisons to Kinect 2 yet) and is integrated into the OS (for example: hand gestures, facial recognition to log in).
It's not just a dual-camera to follow the PS4 controller, or PS Move controllers.
Oh, and it also does advanced voice recognition stuff. I don't know how it compares to Kinect 2's voice recognition, but Sony is pretty good at this. They create and build microphones for the professional market (as you may or may not know, Sony has a huge library of music and movies, for which it provides cameras, microphones and 3D technology).
I'll believe it when I see it. So far all I know is it uses the visible spectrum, so if I'm trying a gesture at night or in low light, it'll have a big chance of screwing up. Business-wise, the Eye isn't a major selling point for Sony, so I expect them to treat it as niche. Engineering-wise, all they seem to have done is stick two cameras on it and integrate whatever pattern recognition tech was readily available.
And yeah they all have voice recognition but Sony doesn't have the interface to use it as a hub. I can't say "Watch HBO" to my PS4 and expect it to react. I doubt gamers who are buying the PS4 strictly for gaming really care about stuff like this, but this is what I mean when I say the PS4 is status quo.
And then there are the things that Microsoft doesn't offer.
- Dedicated GamePlay sharing
Microsoft has a solution for this, but it eats up system resources. Sony has a dedicated OS chip that saves the last 15 minutes of gameplay.
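Continuous "last 15 minutes" capture like this is usually a ring buffer: frames are written into a fixed-size circular store and the oldest ones are silently overwritten, so a save always yields the most recent window. A toy sketch of the idea (frame rate and durations are hypothetical; this has nothing to do with Sony's actual encoder hardware):

```python
from collections import deque

class GameplayBuffer:
    """Keep only the most recent `seconds` of frames, overwriting the oldest."""

    def __init__(self, seconds, fps):
        # deque with maxlen drops the oldest item automatically when full
        self.frames = deque(maxlen=seconds * fps)

    def capture(self, frame):
        self.frames.append(frame)

    def save_clip(self):
        # Snapshot of the last `seconds` worth of gameplay
        return list(self.frames)

# 15 minutes at 30 fps = 27,000 frames; simulate 16 minutes of play:
buf = GameplayBuffer(seconds=15 * 60, fps=30)
for i in range(16 * 60 * 30):
    buf.capture(i)

clip = buf.save_clip()
print(len(clip), clip[0])  # -> 27000 1800 (clip starts 1 minute in)
```

The appeal of doing this in dedicated hardware is exactly what the post says: the encode-and-overwrite loop runs constantly without stealing CPU/GPU time from the game itself.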
- Gaikai streaming
The possibilities are endless: play before you've even downloaded a single KB. It also allows for special demos (imagine being able to play Grand Theft Auto V for one hour at a specific time, a month before it releases; away at that time, too bad).
These are true. I don't care about sharing, but it's been pointed out to me that a lot of people want this feature. Gaikai, if they can get the infrastructure in place and the devs to buy in, would be a nice feature.