AppleInsider › Forums › General › General Discussion › Ouya CEO on mobile gaming: 'You're not having an emotional experience'

Ouya CEO on mobile gaming: 'You're not having an emotional experience' - Page 3

post #81 of 96
Duh.
But yeah, he's totally right. The most powerful emotions I've felt in portable games were with Superbrothers and Dead Space.
post #82 of 96
Quote:
Originally Posted by bugsnw View Post

The only emotional experience I've had with our PS3 is "crap....there went $65..sucky game."

 

I'm with others anyhow. The emotional experience with gaming is not that high a bar to reach. As Solipsism said, he gets a tingle up his leg playing WWF. Can't argue with that. I absolutely fell in love with Letterpress. I've twisted and turned painfully while playing Real Racing as well.

 

The games that leave me feeling empty inside are almost all the first-person shooters on the PS3, of which there are thousands.

 

When (if) the Apple TV can play the games we download on our iOS devices and Bluetooth controllers abound, that will be very interesting. Though I sure don't mind playing on my iPad while comfy in bed. Both have their strengths.


You are playing the wrong games.

Try Journey, MGS, FFXIII, Valkyria Chronicles, Dead Space, GTA IV, Mass Effect 2-3, Bioshock, Enslaved, Silent Hill, Heavy Rain...

and soon, Bioshock Infinite, GTA V, Beyond: Two Souls, Metal Gear Solid: Ground Zeroes and MGS5...

And that's just on PS3.

post #83 of 96
Quote:
Originally Posted by Marvin View Post

Hardware will get cheaper and less relevant but everybody will always consume and pay for quality content. 

Why will hardware get cheaper and less relevant? Surely the goal is virtual reality, and until we reach that, prices will stay the same as the specs increase and increase?

post #84 of 96
Quote:
Originally Posted by ascii View Post

Why will hardware get cheaper and less relevant? Surely the goal is virtual reality, and until we reach that, prices will stay the same as the specs increase and increase?

There will be mainstream goals and technological goals. Things like 3D video and UHD are technological goals, but I don't see them as mainstream technologies. Given a long enough time, anything can become mainstream as the current mainstream technology is exhausted, but some things make certain tech impractical. Virtual reality already exists, and it still might not be the way most people play games:



Not everyone has enough space to move around for full positional tracking, and not everyone will want to wear the headset, or even glasses.

I'd say the Watch Dogs game has one of the most realistic (not just visually) environments I've seen in a game, and I just don't see how much further they can go once they almost exactly mimic real life. They can improve lighting and shading, but realistic animation and AI are more important, and all of this is available now. When you see the complexity they already have, what is left to improve?



The visuals can improve and the environments can get bigger, but there are workload limits for the teams that build the games. Every stray piece of garbage in that city has to be put there by the game developer. Some things can be controlled by algorithms, but it all still has to be tested.

Look at books: they haven't changed for thousands of years, and moving to digital just made them more convenient to access. Movies have had to improve in resolution and visual effects, but with Life of Pi you can see they have pretty much reached the ceiling of what visual effects require. Books and movies haven't dropped in price, but what you're paying for is the content, and the medium you access it on can be free. Sony's move to the cloud for gaming with their purchase of Gaikai is a strong indicator that they see how irrelevant the hardware is even now, especially when they don't make a profit on it. Why produce and sell a PS3 at break-even or a loss when they can sell you just the games, which they take a profit on with every sale?

The next-gen consoles are x86 PC hardware. In five years or so, the Mac Mini will match them in performance. The upcoming mobile GPUs will have graphics capabilities on par with last-gen consoles; that effectively makes them like the Wii U, but sold in inexpensive Android hardware.
post #85 of 96

AI has to improve A LOT.

Eventually, when processing power allows it, we'll have real-time 'atoms' (actually a lot bigger than atoms, but still invisible to the eye) for all objects.
There is a lot to do in every possible domain.

Just look how far ahead 3D animation in movies is compared to games.

post #86 of 96
Quote:
Originally Posted by cnocbui View Post


Gender identification fail.  Try again.

A thousand apologies.
post #87 of 96
Quote:
Originally Posted by ClemyNX View Post

AI has to improve A LOT.

It depends what you are using it for. Games typically have an end point, and once you reach it, you don't want to play the game any more. Developers can invest a lot in developing proper AI characters to interact with, but those characters will ultimately be there to direct a storyline, unless the aim is to give you a digital world you don't want to leave. They just need believable behaviour from NPCs that doesn't distract you from the setting. They have made steps with what they call biomechanical AI in engines like Euphoria:



That gives NPCs a certain degree of behavioural AI, which is enough for the game to be believable.
Quote:
Originally Posted by ClemyNX View Post

Eventually when processing power allows it we'll have realtime 'atoms' for objects (actually a lot bigger than atoms but still invisible to the eye), for all objects.

There is a lot to do in every possible domain.
Just look how far ahead 3D animation in movies is compared to games.

There are fluid simulations that take loads of computing power, so if you wanted a gamer to experience a flood or something, that's where you might want it, but even indie games can give you the same experience:



There are steps to improve simulations by using suitable approximations like Digital Molecular Matter:



and we have the tessellation features of the latest graphics APIs, which negate a lot of the need for the fully atomic objects of a voxel system because they dynamically tessellate objects based on the viewport. The technology in feature films can take orders of magnitude more processing power, but if it's not going to add enough to the real-time simulation, there's no point in pursuing it in any big way, because you're not going to convince anyone of the added value. I just don't see them pushing the power agenda any more, and I see it in the PC market too. It's all about more convenient distribution, form factors, battery life, etc. Everything else can move to the cloud. Cloud performance is only limited by the scale of the data centre versus the demand, and they don't have to convince people to invest in new hardware.
post #88 of 96
Short and sweet: developer support. That is what will make or break this. The Ouya already has a handful of big names on board, as well as some creative first-time indies who have never developed for Android at all. As far as piracy goes, we all know it's not ease of pirating that stops it; people will pirate if they want to, they'll find a way. It's about the quality and the price. If those are reasonable, people won't go out of their way to steal it. Software is not an issue, and the hardware, if games are optimized for it well enough, will provide plenty of horsepower to run some good-looking games. I think more than anything, the people who won't buy this because of the lack of what may be considered AAA games are people who are set in their ways, content to buy their long-established franchises from their well-established devs over and over again. Not that there is a problem with that, but the Ouya isn't as big a deal, nor as imminent a failure, as many people would like to believe.
post #89 of 96
Quote:
Originally Posted by Marvin View Post

Look at books: they haven't changed for thousands of years, and moving to digital just made them more convenient to access. Movies have had to improve in resolution and visual effects, but with Life of Pi you can see they have pretty much reached the ceiling of what visual effects require. Books and movies haven't dropped in price, but what you're paying for is the content, and the medium you access it on can be free. Sony's move to the cloud for gaming with their purchase of Gaikai is a strong indicator that they see how irrelevant the hardware is even now, especially when they don't make a profit on it. Why produce and sell a PS3 at break-even or a loss when they can sell you just the games, which they take a profit on with every sale?

The next-gen consoles are x86 PC hardware. In five years or so, the Mac Mini will match them in performance. The upcoming mobile GPUs will have graphics capabilities on par with last-gen consoles; that effectively makes them like the Wii U, but sold in inexpensive Android hardware.

 

I think we both agree that content rules and will eventually be 99% of the price; I just think we are further away from that time than you do.
 
As impressive and detailed as Watch Dogs is relative to what we currently have (thanks for the video, by the way), you can still tell straight away that it's rendered. What I would be after is games that are indistinguishable from video shot outdoors.
 
And I thought Sony were just using the streaming game tech to provide backwards compatibility with PS3 games, not as a serious go-forward thing? Could be wrong about that.
post #90 of 96
Quote:
Originally Posted by Marvin View Post


It depends what you are using it for. Games typically have an end point, and once you reach it, you don't want to play the game any more. Developers can invest a lot in developing proper AI characters to interact with, but those characters will ultimately be there to direct a storyline, unless the aim is to give you a digital world you don't want to leave. They just need believable behaviour from NPCs that doesn't distract you from the setting. They have made steps with what they call biomechanical AI in engines like Euphoria:



That gives NPCs a certain degree of behavioural AI, which is enough for the game to be believable.
There are fluid simulations that take loads of computing power, so if you wanted a gamer to experience a flood or something, that's where you might want it, but even indie games can give you the same experience:



There are steps to improve simulations by using suitable approximations like Digital Molecular Matter:



and we have the tessellation features of the latest graphics APIs, which negate a lot of the need for the fully atomic objects of a voxel system because they dynamically tessellate objects based on the viewport. The technology in feature films can take orders of magnitude more processing power, but if it's not going to add enough to the real-time simulation, there's no point in pursuing it in any big way, because you're not going to convince anyone of the added value. I just don't see them pushing the power agenda any more, and I see it in the PC market too. It's all about more convenient distribution, form factors, battery life, etc. Everything else can move to the cloud. Cloud performance is only limited by the scale of the data centre versus the demand, and they don't have to convince people to invest in new hardware.

 

I know all those examples, and they are indeed great examples of tech that has to be worked on. DMM was far from what it promised to be, and those water and breaking simulations still look unreal. I meant that games still have a long way to go to achieve movie-like photorealistic quality. And it's not only harder because it has to work in real time, but also because simulations and events can't be scripted for most games.

What you are describing works very well for some kinds of games, where people are used to having constraints. I was thinking of more open-world games, where the player can do virtually anything.

And yes, I'm counting on cloud computing exactly for AI.

post #91 of 96
Quote:
Originally Posted by ascii View Post

 

I think we both agree that content rules and will eventually be 99% of the price; I just think we are further away from that time than you do.
 
As impressive and detailed as Watch Dogs is relative to what we currently have (thanks for the video, by the way), you can still tell straight away that it's rendered. What I would be after is games that are indistinguishable from video shot outdoors.
 
And I thought Sony were just using the streaming game tech to provide backwards compatibility with PS3 games, not as a serious go-forward thing? Could be wrong about that.

 

No, the streaming will allow for a lot of other stuff in games. For example, when you buy a game on PSN you'll be able to play within seconds, because it loads the game engine and the initial levels first and keeps downloading in the background. Cinematics that take a lot of space will be streamed directly within the game while they download, so you'll play half locally, half in the cloud.

post #92 of 96
Quote:
Originally Posted by ascii View Post

What I would be after is games that are indistinguishable from video shot outdoors.

That isn't just the tech but the art direction. GTA IV, for example, has a cartoon style, but it can look more realistic with different textures:



I could see it being good in some cases but most of the time, I like a stylised environment. Paintings rarely mimic photographs and it's even the case that photography tends to be stylised to avoid the mundane appearance of real life.
Quote:
Originally Posted by ascii View Post

And I thought Sony were just using the streaming game tech to provide backwards compatibility with PS3 games, not as a serious go-forward thing? Could be wrong about that.

They confuse the issue a bit because they talk about streaming PS4 games to the Vita, but they are usually talking about using the Vita as a local display for the PS4 if someone else is using the TV. There's nothing to stop them from streaming PS4 games, though. If they don't use it for full PS4 games, I can at least see them using it for demos, so you can try a game before buying. Apple should have something like that for iOS software, e.g. use an app for up to the few minutes specified by the developer. That way the developer can also tell what interest there is in the app.
Quote:
Originally Posted by ClemyNX 
What you are describing works very well for some kinds of games, where people are used to having constraints. I was thinking of more open-world games, where the player can do virtually anything.

There always have to be constraints and rules (like in the real world), but I know what you mean: go into any building, interact with any character, and have it all be as realistic as the real world. I don't see that being commercially viable, because it's too complex to make, and without any direction or scripting it just becomes a sandbox.
post #93 of 96
Quote:
Originally Posted by Marvin View Post


That isn't just the tech but the art direction. GTA 4 for example has a cartoon style but it can be more realistic with different textures:



I could see it being good in some cases but most of the time, I like a stylised environment. Paintings rarely mimic photographs and it's even the case that photography tends to be stylised to avoid the mundane appearance of real life.

I also like stylised games, but I don't think they could do photorealism even if they wanted to. Despite the name of the GTA IV mod above, I don't think it's photorealistic. Things continue to progress, though. Here is NVidia's latest face simulation tech, but note that it needs fully half the performance of the latest Titan GPU just to do one face, which makes me think hardware is still a factor:

http://www.youtube.com/watch?v=5d1ZOYU4gpo

post #94 of 96
Quote:
Originally Posted by ascii View Post

Despite the name of the GTAIV mod above I don't think it's photorealistic.

That's why it doesn't make sense to pursue photographic imagery. No matter how close they get, someone will still say it looks CG. Some people say that about the tiger in Life of Pi:

http://movieline.com/2012/09/26/life-of-pi-trailer-ang-lee/

"The tiger looks fake, everything shown of them "at sea" doesn't look anything like being stranded in the ocean. If the imagery was actually shot out in an ocean and had some beauty, I would feel differently, I'm sure. If the tiger was a real tiger, I would feel differently."

I think very few people would agree with that ( http://online.wsj.com/article/SB10001424127887323549204578320172311129716.html ), but an important element in this is knowing something is fake. A lot of the time, the best visual effects in movies are the ones you don't know are there, because if you miss them, they preserve the suspension of disbelief. As soon as you are told that something is computer generated, you scrutinise it more than you would if you assumed it was real, and the very knowledge that it's fake stops you accepting it as equivalent to the real thing. It's much the same with breasts: no matter how good they look, if you know they are fake, you have a lower opinion of them.

It's worse with gaming because you always know that everything is CG, whereas with movies the blend of real and fake helps hide what's fake. I consider a lot of modern games rich enough visually that the CG no longer detracts from the immersion. The Tomb Raider game is one example:



There are areas where you can see a need for improvement, like the foliage, the hair simulation and some of the water effects, but if you had those, how much different would the immersion in the game be? With earlier gaming systems the deer wouldn't run like that, they wouldn't collide properly with things, the trees and grass would be flat textures, there would be no volumetric lighting or effects, no rain, loading screens every 10 minutes, and movements not much different from Frogger. I no longer see limitations that would make the experience significantly better if they were removed.
Quote:
Originally Posted by ascii View Post

Things continue to progress though. Here is NVidia's latest face simulation tech but note it needs fully half the performance of the latest Titan GPU just to do one face, which makes me think hardware is still a factor

When it comes to games and real-time interaction, they can pre-compute a lot of data and re-use it. If they can eventually use synthetic voices, some things will be more dynamic, but even then the dialogue will be scripted and the action scenes will be directed. They can already pre-compute data for high-quality skin that runs in real time on current hardware:



For the kind of full AI sandbox experience that mimics the real world, the amount of processing could easily be orders of magnitude higher than what we have now. I just don't think that's a mainstream goal. That doesn't mean everyone will stop pushing the higher-end systems - they need to have selling points - but I think buyers will stop caring about that metric. The sheer amount of data, textures and work required to pull that off in a $60 game isn't worthwhile. Will we start getting games that are 100GB+ in size and need multiple Blu-ray discs? It's possible, but just as there are limits to the practical resolution of movies, there are practical limits to the quality of interactive media.

There's often an allusion to the '640K of RAM is enough' remark, but if you said 300 PPI is enough, it would be accurate, because there is a human limit to what we can resolve. It's the same with audio: there are sounds outside what we can perceive, but there's no point in reproducing them. All that technology has to do is fill the need in the most efficient way.
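To put a number on the 300 PPI claim, here's the usual back-of-the-envelope check (a rough sketch on my part: it assumes the commonly cited ~1 arcminute acuity for normal vision and a 12-inch viewing distance):

```python
import math

def pixel_angle_arcmin(ppi, viewing_distance_in):
    """Angle subtended by a single pixel, in arcminutes."""
    pixel_pitch = 1.0 / ppi  # inches per pixel
    return math.degrees(math.atan(pixel_pitch / viewing_distance_in)) * 60

# At a 12-inch reading distance, one 300 PPI pixel subtends ~0.95 arcmin,
# just under the ~1 arcmin usually quoted for normal visual acuity.
print(round(pixel_angle_arcmin(300, 12), 2))  # 0.95
```

So at typical reading distance, 300 PPI pixels already sit right at the edge of what the eye can resolve; pushing much beyond it buys nothing you can see.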
post #95 of 96
Quote:
Originally Posted by Marvin View Post


That's why it doesn't make sense to pursue photographic imagery. No matter how close they get, someone will still say it looks CG. Some people say that about the tiger in Life of Pi:

http://movieline.com/2012/09/26/life-of-pi-trailer-ang-lee/

...

There are areas where you can see a need for improvement, like the foliage, the hair simulation and some of the water effects, but if you had those, how much different would the immersion in the game be? With earlier gaming systems the deer wouldn't run like that, they wouldn't collide properly with things, the trees and grass would be flat textures, there would be no volumetric lighting or effects, no rain, loading screens every 10 minutes, and movements not much different from Frogger. I no longer see limitations that would make the experience significantly better if they were removed.

 

I don't think it's a hopeless endeavour. As you say, there are limits to what a human being can perceive, so in theory it should be possible to fool us completely at some point. Though I once read a theory that each generation can always tell its own era's CG from reality: growing up, children learn to discern the slightest tells that their parents don't see, while their parents could discern the CG of their own generation.
 
I just finished Tomb Raider myself, actually, on a Radeon 7970 GHz Edition, Ultra settings + TressFX, so I got a good idea of the best it can do. While it was excellent, I disagree that photorealism couldn't improve it much. Remember, Lara found a cult on that island worshipping an ancient sun god, and there were lots of gruesome dead bodies and human sacrifice and such. I'm sure the artistic impact of such horror would be amplified were it photorealistic. Though, on the other hand, I'm not sure I would have bought the game then.
post #96 of 96
Quote:
Originally Posted by ascii View Post

I don't think it's a hopeless endeavour. As you say, there are limits to what a human being can perceive, so in theory it should be possible to fool us completely at some point. Though I once read a theory that each generation can always tell its own era's CG from reality: growing up, children learn to discern the slightest tells that their parents don't see, while their parents could discern the CG of their own generation.

The limit is mainly in the workload. The Life of Pi movie was visually impressive, but it took about 600 artists to do the scenes with the tiger, and the studio has gone bankrupt:

http://blogs.wsj.com/bankruptcy/2013/02/25/rhythm-hues-gets-oscars-shout-out/

The movie's budget was $120m. That's to get directed, non-interactive photorealism. NVidia showed off interactive photorealism on 112 GPUs remotely, but it's still not real-time, and it was a basic scene - it takes a few seconds to get a clean output. Say it takes 5 seconds: that means another 150x is needed to get 30 FPS, so relative to a single GPU we need a 112 x 150 = 16,800x speedup, and that's for the highest-end GPUs. It's not surprising when movie frames can take 10 hours each or more. Even targeting 1080p, if they aimed for 30-minutes-per-frame quality, that would require a 30x60x30 = 54,000x speedup.

We only have around 10 years left before they run out of process nodes to shrink the tech down to. If performance doubles every two years, that gives us about 32x speedup over that period. GPUs can do a little better than doubling, but not much; best case it's 100x. They might be able to put algorithms in hardware, and they might get better cooling or better materials to clock things higher, but it'll still fall short.

I'm sure they'll find a good enough compromise within even 32-100x over what we have now but they need artists to use it effectively.
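Those back-of-the-envelope numbers check out if you run them (this just re-derives the figures quoted above, nothing new assumed):

```python
# NVidia's remote demo: 112 GPUs, ~5 seconds per clean frame.
gpus, seconds_per_frame, target_fps = 112, 5, 30

# Hitting 30 FPS means rendering each frame 5 * 30 = 150x faster, so
# relative to a single GPU the speedup needed is:
realtime_speedup = gpus * seconds_per_frame * target_fps
print(realtime_speedup)  # 16800

# Offline quality of 30 minutes per frame, pushed to 30 FPS at 1080p:
offline_speedup = 30 * 60 * target_fps
print(offline_speedup)  # 54000

# ~10 years of process nodes left, performance doubling every ~2 years:
node_speedup = 2 ** (10 // 2)
print(node_speedup)  # 32
```

Even the optimistic 100x from hardware falls two orders of magnitude short of the 16,800x needed, which is the point: the gap closes through compromise and artistry, not raw silicon.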
 
Quote:
Originally Posted by ascii View Post

I just finished Tomb Raider myself, actually, on a Radeon 7970 GHz Edition, Ultra settings + TressFX, so I got a good idea of the best it can do. While it was excellent, I disagree that photorealism couldn't improve it much. Remember, Lara found a cult on that island worshipping an ancient sun god, and there were lots of gruesome dead bodies and human sacrifice and such. I'm sure the artistic impact of such horror would be amplified were it photorealistic. Though, on the other hand, I'm not sure I would have bought the game then.

Yes, there is that too; the death sequences are pretty gruesome. In Call of Duty, for example, if the human bodies reacted exactly like real ones when shot, that might be a little too much for some gamers. Some games seem to be progressing towards this, but a movie has the luxury of cutting the scene at just the right time. In a fully interactive scenario like No Russian, where you shoot up an airport, the only option they have is to prompt 'do you really want to play this? y/n'.

What I look forward to most in games these days is developers who take the time to make an immersive game. No matter how realistic they made Bulletstorm, it's just a shoot-'em-up. I prefer when it's treated more like a storytelling medium, where they start with a book, make the equivalent of a screenplay, get professional voice actors and storyboard it from start to end.