VR: The Developers’ Dilemma

 

The Promise of VR

Over the past 30 years, video games have evolved from interactive stories on early PCs, to sprite-based, arcade-worthy games on 8-bit and 16-bit cartridges, to full-3D games on powerful consoles and PCs using CDs and DVDs for storage, and more recently to cloud-connected MOBAs, e-sports and multi-core mobile devices that push game content directly to players. I’ve loved every transition along the way, from Zork and Pong to The Sims, Warcraft, Titanfall and Clash of Clans. But the current transition to VR and AR is the most revolutionary yet: systems with high-res stereoscopic displays that track your every head tilt, gesture, eye movement and step. The games we make for these new platforms will be more impressive technically than anything we’ve experienced before; players will literally be ‘in the game’.

Being ‘in the game’ means that anything you view in VR has to look and respond like it does in real life: you’re not merely looking at a screen image of an object, you are THERE, and you need to feel like you can literally reach out and touch it. We’re talking about dual 4K displays and textures, hundreds or even thousands of dynamic light sources, and a full 360-degree field of view. Characters, objects, scenery and gameplay will have to move and react like their real-world counterparts, especially when players have the freedom of movement and viewing that VR provides. And these games will have to do it all at a consistent 90 frames per second, which leaves roughly 11 milliseconds to simulate and render each frame.

VR Market by 2020 via Digi-Capital

 

The VR Challenge

When games don’t run consistently at 90 FPS, players experience what is colloquially referred to as “discomfort.” For me, “discomfort” means nausea and a general loss of equilibrium. To be sure, multiple factors contribute to “discomfort,” but an inconsistent framerate is the #1 culprit, closely followed by multiview coherency.

The promise of VR and AR is fantastic when you’re inside an experience, completely immersed in the world around you. But here’s what you don’t hear a lot about: being in the game is most satisfying when that game’s world is as rich and dynamic as our own. Right now, on current mobile VR devices, that means the quality of the world is about the same as what you get on the highest-resolution tablets and phones. There are great mobile games out there, but they’re meant to be held at arm’s length from your face; you’re not supposed to be in them. When converted for VR, users quickly realize how simplistic and “shallow” the worlds are. It’s not that they can’t be fun or interesting; it’s that the games lack the sense of realism and immersion players expect when strapped into a virtual world.

Is this because that’s how game developers envision the world? Of course not. It’s because developers have to scale back their simulations and graphics until they can hit that golden 90 FPS. In many cases this is not because the hardware isn’t powerful enough; it’s because current game runtime technology is inefficient. The game software can’t take full advantage of the hardware, so developers sacrifice their creative vision for framerate, and what the end user gets is a compromised experience. Many first-generation VR concepts sidestep this by creating novel, slower-paced virtual “experiences” that require less movement from the player and very little actual simulation or artificial intelligence (AI). That may work for the first generation of VR, but it won’t satisfy gaming audiences’ expectations in the long term.

So why can’t the game take full advantage of the hardware? In short, it’s because the game engines available today are not designed to produce game runtime executables that harness the full capabilities of the hardware. These engines were architected before multi-core computers and mobile gaming were mainstream. As one anonymous developer recently put it about the current VR development engine options: “I call them VR Dream Compression Algorithms. You start with an incredible dream and have to continuously scale it back until it can work with the tools.”


Functional Multi-Threading - the current state of engine technology

Current game engines are multi-threaded for multi-core hardware, but only functionally multi-threaded: they put most of the work on one or two cores and intermittently take advantage of the other cores for specific functions. At best, they get full efficiency out of two cores and much less efficiency out of the remaining cores.
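To make the distinction concrete, here’s a minimal C++ sketch of what functionally multi-threaded code looks like. The subsystem names are illustrative and belong to no real engine:

```cpp
// Functional multi-threading: each subsystem owns one dedicated thread,
// so the engine saturates one or two cores and leaves the rest idle.
#include <thread>

void GameLogicLoop() { /* simulation + AI: typically saturates its core  */ }
void RenderLoop()    { /* draw submission: often saturates a second core */ }
void AudioLoop()     { /* audio mixing: only intermittently busy         */ }

int main() {
    std::thread game(GameLogicLoop);
    std::thread render(RenderLoop);
    std::thread audio(AudioLoop);
    // On an 8-core device, the remaining cores sit mostly idle: the work
    // is partitioned by function, not by data, so it can't migrate to the
    // free cores when one subsystem becomes the bottleneck.
    game.join();
    render.join();
    audio.join();
}
```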

So why haven’t we noticed? Because developers have managed to build tremendous single-screen experiences viewed from one to eight feet away from the player. At those distances, 30 FPS is the magic framerate requirement, and developers have found creative ways to work within the restrictions of the existing engines. If the framerate drops in places it can be annoying, but it doesn’t make you hurl. And, until recently, most devices haven’t had that many cores. Some phones are sporting up to eight cores now, most of which go largely underutilized.

The MaxCore™ Advantage - a new approach

During GDC 2016 we introduced MaxCore, a scalable, data-parallel computing architecture and advanced rendering system built for multi-core devices, including VR hardware. The MaxCore architecture allows devs to harness the full capabilities of any multi-core hardware device. In short, our proprietary system stages and packets the data, then spreads it evenly over all available cores so that the CPUs are all working together effectively. Our games run at full data-parallel speed. This means if you have 4 cores, you’re getting full efficiency on all 4 cores. 8 cores? Full efficiency on all 8 cores.
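For illustration only, here’s a hedged C++ sketch of the general data-parallel pattern described above: the frame’s work is cut into fixed-size packets, and every core pulls packets from a shared queue until the work is gone. The entity type and packet size are made up for the example; MaxCore’s actual staging and packeting system is proprietary and not shown here:

```cpp
// Data-parallel update: N cores all pull packets of entities from one
// shared counter, so every core stays equally busy until the frame is done.
#include <algorithm>
#include <atomic>
#include <thread>
#include <vector>

struct Entity { float x, y, vx, vy; };

void UpdatePacket(std::vector<Entity>& all, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i) {   // pure data transform:
        all[i].x += all[i].vx * dt;          // no locks, no shared writes
        all[i].y += all[i].vy * dt;
    }
}

void ParallelUpdate(std::vector<Entity>& all, float dt) {
    const size_t kPacket = 256;              // entities per packet (made up)
    std::atomic<size_t> next{0};
    unsigned cores = std::thread::hardware_concurrency();
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&] {
            for (;;) {
                size_t b = next.fetch_add(kPacket);  // claim the next packet
                if (b >= all.size()) break;          // queue drained
                UpdatePacket(all, b, std::min(b + kPacket, all.size()), dt);
            }
        });
    }
    for (auto& w : workers) w.join();
}
```

With 4 cores, all 4 chew through packets in parallel; with 8, all 8 do. The speedup comes from the data layout, not from pinning work to one hot function.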

In complex CPU flocking-simulation benchmark tests across Unity, Unreal and MaxPlay, we are seeing framerates 12x faster than Unreal and 41x faster than Unity. How is that possible? Because when the CPU load is distributed across all cores, the entire system runs massively more efficiently. Go figure. ALL of the device’s CPU power is now used without waste. It’s pretty freakin’ amazing.

Imagine the amazing VR and game experiences developers will create with these gains on high-end machines and mobile devices. In addition, when mobile device CPUs are less taxed, devices suffer fewer heat-induced shutdowns and even gain battery life. So this is a game changer for every device that uses multi-core hardware.

 

MaxCore Rendering

Developers make a lot of different choices when creating their worlds, but the one guiding rule is that things should look and feel intentional. So if you want something to look photorealistic in VR, you need high-res textures, lots of polys, physically based rendering, and lots of lights. Otherwise your users end up in a cheap approximation of reality. And no one wants that. Make it real, or don’t, but don’t go halfway.

 

MaxPlay's Origami Sky Demo

 

MaxPlay’s MaxCore uses a Forward+ (Light Indexed Deferred, or LID) rendering model, which gives developers literally thousands of lights in a game scene in fewer rendering passes while using less graphics buffer memory than the rendering models in other engines. No need to pre-bake lighting. You can have multiple materials and multiple lights in the same scene at no extra cost. You could have particles made of actual lights. You want translucent objects? You got ’em. With Forward+ LID, you get photorealism at a fraction of the compute power, so you get higher quality at a better framerate. Crazy, but true.
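To see why thousands of lights become affordable, here’s a simplified, CPU-side C++ sketch of the tile binning step at the heart of Forward+/LID. Real engines run this as a GPU pass and also cull against tile depth bounds; this version only illustrates the data flow, and every name and dimension is illustrative:

```cpp
// Tile binning: sort lights into 16x16-pixel screen tiles once per frame,
// so each pixel later shades with only the short list of lights touching
// its tile instead of looping over every light in the scene.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Light { float cx, cy, radius; };      // screen-space bounds

constexpr int kTile = 16;                    // tile size in pixels

std::vector<std::vector<uint32_t>> BinLights(
        const std::vector<Light>& lights, int width, int height) {
    int tilesX = (width  + kTile - 1) / kTile;
    int tilesY = (height + kTile - 1) / kTile;
    std::vector<std::vector<uint32_t>> bins(tilesX * tilesY);
    for (uint32_t i = 0; i < lights.size(); ++i) {
        const Light& l = lights[i];
        int x0 = std::max(0, int(l.cx - l.radius) / kTile);
        int x1 = std::min(tilesX - 1, int(l.cx + l.radius) / kTile);
        int y0 = std::max(0, int(l.cy - l.radius) / kTile);
        int y1 = std::min(tilesY - 1, int(l.cy + l.radius) / kTile);
        for (int y = y0; y <= y1; ++y)       // append this light's index to
            for (int x = x0; x <= x1; ++x)   // every tile its bounds overlap
                bins[y * tilesX + x].push_back(i);
    }
    return bins;   // the shading pass reads only its own tile's list
}
```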


And, for the VR curious, we’ve got something here too: shared data rendering. Our rendering model natively handles efficient multi-screen, multi-resolution rendering. Traditional VR rendering requires calculating every view in a scene twice, once for each eye. In reality, though, much of the screen data overlaps between the two views and is merely offset for each eye. We keep track of this shared data and calculate it only once, as needed. This is another place where MaxCore renders more efficiently, and it’s especially valuable on mobile devices, where every cycle and render pass counts. Our rendering even works on lower-end phones by gracefully switching to a highly efficient Forward rendering model when necessary.
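As an illustration of the shared-data idea (not MaxCore’s actual implementation), the sketch below does the expensive, view-independent work once against a combined frustum that encloses both eyes, then replays the shared visible set with each eye’s view. Every type and function here is a made-up placeholder:

```cpp
// Stereo rendering with shared data: cull once, draw twice.
#include <vector>

struct Mat4   { float m[16]; };   // placeholder matrix type
struct Object { int meshId; };    // placeholder scene object

// Stub: a real version would test each object against the frustum.
std::vector<const Object*> CullOnce(const std::vector<Object>& scene,
                                    const Mat4& /*combinedFrustum*/) {
    std::vector<const Object*> visible;
    for (const Object& o : scene) visible.push_back(&o);
    return visible;
}

// Stub: a real version would submit a draw call with this eye's view.
void Draw(const Object& /*obj*/, const Mat4& /*eyeView*/) {}

void RenderStereo(const std::vector<Object>& scene, const Mat4& combined,
                  const Mat4& leftEye, const Mat4& rightEye) {
    // The shared, view-independent work happens exactly once per frame...
    std::vector<const Object*> visible = CullOnce(scene, combined);
    // ...and only the cheap per-eye offset work is repeated.
    for (const Object* obj : visible) {
        Draw(*obj, leftEye);
        Draw(*obj, rightEye);
    }
}
```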

At the end of the day, each developer will decide what kind of world they want to create, from the depth of the simulation to the level of photorealism. Our goal is to give developers the freedom to make the most creative decisions possible and produce the most immersive VR experiences: decisions made without the restrictions of outdated engine architectures, decisions that allow their creative visions to captivate the end user. If you have a VR dream you would like to create without compromise, look no further...

Contact us 

 

Sinjin