Friday, 25 July 2014

Arrival of a Train and Tiny Airplane Windows - Building VR Experiences with Motion

So I was flying home from Las Vegas. The plane was just taking off and I was getting a great view of the desert from the air.  I thought to myself, "I wish the top half of the plane were made of plexiglass so I could get a truly epic view of the area."  I quickly realized why this would be a terrible idea. People would be freaking out!

But isn't this what we do in VR all the time?  In fact, one of the primary reasons VR is taking off now is that we can finally, cheaply, build a device with a field of view wide enough to produce negative experiences like freaking people out or making them sick.

So my question is: what can we learn about comfortable moving VR experiences from tiny airplane windows?

Before I address that directly, I'm going to digress to the "Arrival of a Train at a Station".

This clip is one of the first moving pictures ever shown to a public audience.  And once again, people were freaking out and running for their lives!  They thought the train was going to crash right into the theatre.  If the Lumière brothers lived today, it would be natural for them to propose a list of best practices for filmmaking that included a note that "objects should not move towards the camera" in order to make the film-going experience comfortable for everyone.  Of course the audience quickly grew out of these freak-outs, and we now have a vibrant film industry that recently refit half the screens across North America so it could move objects towards the camera in stereo 3D.

Don't get me wrong.  I love Couch Knights and all the stationary experiences coming down the line. And I understand that at the start of a market such as this, Oculus should be concerned with the widest adoption of VR possible.  I'm just tired of hearing about the "best practices" as if they're some sort of fixed laws.  The fact is they're a set of hypotheses - really good guesses made by experts about the current state of the populace.  These hypotheses will be validated in an experiment that begins when the first commercial kit gets its big marketing push.

One year later though, the populace will have changed and the best practices will have to be adjusted to fit the expectations of the market as it begins to take shape with ages, genders, races, orientations and expectations on what the VR experience can deliver.

Just like we can't avoid trains in movies forever, we're not going to be able to avoid moving VR forever either.

So how can we develop a VR experience with motion in it today when the audience may still be freaking out at trains?  Easy - tiny airplane windows!

Build your experience with a foreground object (it could be a cockpit, but it could also be a helmet or visor) that has a variable-sized viewport.  You could then open or close the view based on the comfort level of each user.  This would allow you to build the motion experience you want to build without alienating the beginner audience.  A user such as myself could get the plexiglass airplane while my mother-in-law could get the little tiny window, and we could both enjoy the trip.
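As a sketch of the idea, the comfort setting could be a simple slider that maps to the angular size of the viewport opening. The numbers below are made-up defaults for illustration, not tested comfort values:

```python
def viewport_half_angle(comfort, min_deg=15.0, max_deg=110.0):
    """Map a 0..1 comfort slider to the viewport's half-angle in degrees.

    comfort=0 gives the tiny airplane window; comfort=1 gives the
    plexiglass canopy.  min_deg and max_deg are illustrative guesses.
    """
    comfort = max(0.0, min(1.0, comfort))  # clamp out-of-range input
    return min_deg + comfort * (max_deg - min_deg)
```

In-engine, this angle would drive something like the scale of the cockpit cut-out mesh or a mask shader parameter, and could be adjusted live until the user stops feeling uneasy.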

Tuesday, 22 April 2014

SnowDrift is now available on Oculus Share!

I received word this morning that SnowDrift has gone through certification at Oculus Share and has been approved for release on their site!

You can check out the page here...

While you're there, check out the new screenshots and the new YouTube trailer!  If you haven't played SnowDrift in a while, download it and check out the major changes it has gone through!

Also please leave your comments and reviews on the download page.


Friday, 4 April 2014

It Can Hurt to Lose Your Personal Indie Thing

I'm happy about the Facebook acquisition of Oculus, but I understand why someone would not be because I've been through this before.

I'm from Vancouver, and we have a decent indie music scene.  In the late nineties I found a band on the radio that had submitted a song to a city wide battle-of-the-bands.  I thought they were great. When they released their first couple studio albums to the local market I played their music to my friends on the east coast and told them how this band was going to be huge!  At the time they were playing humble tunes from their experiences growing up.  Their songs were deeply personal and you could really get a feel for who the band was.  You could even get a sense of kinship with the members of the band.

Then the band got big, and lost their human touch.  The band in question here was Nickelback.  I can already hear you groaning through the internet.  Most of you know how the rest of the story goes.

This is where some of the backlash against Facebook is coming from.  We used to all feel that we were part of something personal and home-grown and now we don't.

I'm psyched for Oculus.  I'm happy to be working in an industry that has so much more reach today than it did a month ago.

Hopefully Palmer won't start writing music about strippers.

Saturday, 15 March 2014

VRChat: Now with Custom Avatars


VRChat has moved from this custom avatar solution to using an SDK that has custom rooms, avatars, animations and even interactive elements.  I'll leave this article here for now, but the links are obsolete. You can download the full SDK from and the updated directions are included.



Howdy All

VRChat is a great VR application.  Many enthusiasts and developers meet up in there after Cymatic Bruce's Sunday afternoon stream.  This last week I've had the privilege of working with its creator, Graham Gaylor, on a new feature.  It's one of the cornerstones of virtual reality and any cyberpunk offshoot... customization.

I don't want any limit on who or what I can be in virtual reality.  I like skiing (check out my game SnowDrift), so I want to look like a downhill skier. Maybe I'd even put my game's logo on my back.  I've used a bunch of character creation schemes in the past, but none have the level of customization I want.  Sure, City of Heroes came close, but it still had limitations imposed by the developers on what I could look like.

You might say "that sounds good Jesse, but short of a full model importer that's as good as you're going to get!", to which I would reply "Challenge Accepted!".

You now have the ability to use any existing model you like as your avatar in VRChat.

For the first version of this tool you'll need Unity Pro, or a friend with Unity Pro. Hopefully there will be enough interest to expand this feature to other formats in the future.  You'll also need somewhere to host your file so it can be downloaded by the other people who connect to VRChat and see you.  Any normal web hosting environment should do.

Here's how you make it work...

Reminder, this document is obsolete, download the SDK from and follow the included directions.

Building your Custom Model Package
  1. Load your model into the latest version of Unity.  Make sure the import scale and settings are such that the model is the correct height in meters.  It should be about 1.5 to 2 game units tall; 1.5 m is pretty short and 2.0 m is pretty tall.  For most models your import scale will be 0.01.

  2. In your model import settings, set the rig type to "humanoid".
  3. Instantiate your model into your scene.
  4. Add the VRC_AvatarDescriptor object from the SDK by dragging it onto your player.
  5. Change the view position field until the little sphere is hovering about 15cm in front of your model's eyes.
  6. Now for the technical part, setting up the AvatarDescriptor object.
    1. Add three bones from your player's spine to Spine 1, 2 and 3, in order from bottom to top, in the Avatar Descriptor (see pic). 
    2. Also add the character's neck.
    3. And the left and right arms.  The arms should be the bones that have a pivot at the character's shoulder.  
    4. Fill in a name for your character and leave the URL blank, and you're done.  Here's what the Avatar Descriptor should look like.
  7. Create a blank prefab to hold your model, name it as you wish and drag your model (and the attached script) onto it, creating a prefab of the whole package.
  8. Select the prefab in your asset hierarchy, select the menu item "Assets/Build AssetBundle from Selection" and save the *.unity3d file onto your computer.

Using your Custom Model

  1. Upload your model to a web site that's visible from the internet. 
  2. Launch VRChat.
  3. Bring up the console (` key) and type "/setavatar http://<url>/<package name>.unity3d"
  4. After a little download time you should be wearing your new avatar.
  5. You may share the URL of your avatar if you'd like to make it available for other people to use.

  • Can you please host my model?
    • Probably not. That would likely open us up to legal responsibilities associated with distributing this material.
  • Why Unity Pro?
    • Unity pro is required to build the asset bundles that models are distributed in.
  • Can't you just import my model at run-time?
    • Hopefully soon, but not yet, no.
  • I can see my model on my body, but it's not animating.
    • You probably forgot to set the rig type to "humanoid", or perhaps you're not using the latest version of Unity.
  • I'm in the chat room but my body is invisible to other people
    • The other people in the chat may be still downloading your avatar file, or perhaps they are having problems with the link.
  • My model is live and the url is correct but no-one can download it in the app.
    • Make sure you have a valid crossdomain.xml file.
  • Can I make my avatar a naked man or woman?
    • Yes, but please don't. Virtual reality is in its early stages and we need to be accepting and tolerant of cultures, genders, sexes, races and orientations.  We're trying to grow a community, and everything we do that reduces the speed of that growth will hurt us all. 
  • All those directions look difficult. I only understood half of it. I really want to have a custom avatar though.
    • Find someone with Unity Pro, give them a link to these instructions, your model and $10.
  • Can I host my character on dropbox or google drive?
    • I haven't tested it, but I'm pretty sure the answer is no.
  • Where can I find a model to use if I'm not an artist?
    • Try or . There's a wide selection there, but let's keep it PG, people.  Also, some hobbyist models are just technically bad.  Try them out in the default world before you commit to using one in a chat.
  • Isn't the barrier to entry kinda high?
    • Yes, for now.  Hopefully we'll be able to relax some of these restrictions in the near future.
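On the crossdomain.xml point above: Unity's web security sandbox looks for a Flash-style policy file at the root of the host serving the asset. A maximally permissive example (assuming you don't mind any domain fetching your files) looks like this:

```xml
<?xml version="1.0"?>
<cross-domain-policy>
  <allow-access-from domain="*"/>
</cross-domain-policy>
```

Save it as crossdomain.xml at the top level of the web host that serves your *.unity3d file.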

Thursday, 27 February 2014

Large World Rendering on the Oculus Rift

When The Elder Scrolls: Oblivion launched, there had never been a game of that scope that let you see so far into the distance.  Seeing this huge world spread out before you went a long way towards building the immersion that game is known for.

Oblivion: I can see for miles

Of course it's impossible to render a background world so expansive at the same level of detail required for the foreground, so you basically need to render your world twice: a low-res distant version and a high-res foreground version.  Then the distant version can be rendered into the foreground on a plane.

One of the benefits of rendering the scene twice instead of using a continuous LOD system is the precision of the depth buffer.  If you render a single scene that's kilometres deep with a near plane close to the camera, the depth buffer runs out of precision and objects flicker (z-fighting).  But if you render a shallow scene near the player and a deep one in the distance, each pass gets a tight near/far range: good depth buffer precision near you and good draw distance as well.
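A rough back-of-the-envelope check shows why the split helps. Using one common 0-to-1 perspective depth mapping, an assumed 24-bit depth buffer, and illustrative near/far values: two surfaces a metre apart, two kilometres away, land on effectively the same depth value in a single deep pass, but separate cleanly when the distant pass starts at 500 m:

```python
def depth01(z, near, far):
    """Normalized depth written by a standard perspective projection
    (0 at the near plane, 1 at the far plane)."""
    return far * (z - near) / (z * (far - near))

EPS_24BIT = 1.0 / (1 << 24)  # smallest step a 24-bit depth buffer can resolve

# Single pass: near = 0.1 m, far = 5 km.  Surfaces at 2000 m and 2001 m.
single = depth01(2001, 0.1, 5000) - depth01(2000, 0.1, 5000)

# Split distant pass: near = 500 m, far = 5 km.  Same two surfaces.
split = depth01(2001, 500, 5000) - depth01(2000, 500, 5000)

print(single < EPS_24BIT)  # True: the two surfaces z-fight
print(split > EPS_24BIT)   # True: cleanly separated
```

The exact projection math varies by API, but the precision behaviour is the same shape everywhere: shrinking the far/near ratio of each pass is what buys back the precision.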

Red is all Foreground (plus some of the water)

Nothing I've said so far is in any way related to the Rift, but here's where it gets interesting.  We're already rendering the scene twice (once for each eye).  If we were to double that we'd get four scene renders.  That seems excessive.  If only there were a way to optimize this.

Well, we're in luck, because when the distant scene is hundreds of meters away and our eyes are only 6-7 cm apart, we really don't get any benefit from rendering that scene for each eye.  We can render the distant view once, draw the same render into each eye, and no one will notice.  This means we only need three scene renders.
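That claim is easy to sanity-check: the angular disparity between the two eyes for a point at distance d is roughly ipd/d radians (small-angle approximation, assuming a 6.5 cm interpupillary distance):

```python
import math

def disparity_deg(distance_m, ipd_m=0.065):
    """Approximate binocular disparity, in degrees, of a point at the
    given distance, for the given interpupillary distance."""
    return math.degrees(ipd_m / distance_m)

print(round(disparity_deg(2), 3))    # ~1.862 degrees at 2 m: a strong stereo cue
print(round(disparity_deg(500), 4))  # ~0.0074 degrees at 500 m: negligible
```

The disparity falls off linearly with distance, so past a few hundred metres the per-eye difference is far below anything the Rift's panel can resolve.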

In my scene I have three cameras attached to the player.  One is centered between the eyes and renders the skybox and the distant mountain range into a texture.  Then I have the two cameras for the player's eyes.  The first thing rendered by each eye camera is a plane with the distant-lands texture on it.  We have to make sure this plane is far from the player so we don't end up cross-eyed when we focus on it.
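The per-frame ordering can be sketched as three logical passes. This is a schematic of the approach described above, not the actual SnowDrift code; the pass names are illustrative:

```python
def render_frame(draw):
    """Render one frame using the three-camera scheme.
    `draw` stands in for the engine's draw/blit call."""
    # Pass 1: one camera centered between the eyes renders the skybox
    # and distant terrain into a shared texture -- done once per frame.
    draw("center: distant scene -> texture")

    # Passes 2 and 3: each eye first draws the shared distant texture on a
    # far-away plane, then draws its own view of the nearby foreground.
    for eye in ("left", "right"):
        draw(f"{eye}: distant texture on far plane")
        draw(f"{eye}: foreground")

calls = []
render_frame(calls.append)
print(len(calls))  # 5 draw steps: 1 shared distant pass + 2 steps per eye
```

The key property is that the expensive distant pass appears exactly once, while only the cheap plane draw and the shallow foreground are duplicated per eye.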

That's it.  A very simple optimization for rendering large complex scenes in the Rift.

Red foreground is rendered in left camera, blue foreground is rendered in right.  
The distant mountains and skybox are rendered only once onto a plane.

To see more of SnowDrift, check out

And as usual, all the comments are in the Reddit thread...

Tuesday, 18 February 2014

I was a guest on the RevVR Podcast

I had a blast this week on the RevVR podcast hosted by Reverend Kyle.  He's pretty easy to talk with and I could have kept going for hours!  He's a smart guy to stop me :)

Check out the write up article here...

Wednesday, 29 January 2014

What's the opposite of Augmented Reality and why should I care?

I'd like to talk about something I'm calling "Thin Reality." I've also been tempted to call it "Submented Reality" just because of how terrible it sounds.  There might be another name, but these will have to do for this article.

What I'm talking about is displaying a thin layer of reality in a VR environment. This is unlike Augmented Reality which displays a thin layer of the virtual in real-reality.

A company called Sixense makes magnetic positional sensors - the Hydra and the upcoming STEM system. These devices allow you to detect the position and orientation of an object.  Most people are planning to use these to determine the position and orientation of the player's hands or a prop such as a sword or gun.

Imagine attaching a STEM sensor to your keyboard instead of your hand.  Then in your game you could ask the game for help locating your keyboard.  A transparent glowing model of your keyboard could then appear inside your virtual world, but in its real-world location relative to your eyes.  You could then instinctively reach out and place your hands on your keyboard without removing your Oculus Rift.

This could be done for your mouse, keyboard, controllers, headphones, cell phone or even your coffee mug; anything you might have cause to reach for during a play session.  It might seem to break immersion to have this out-of-game object appear in your carefully crafted virtual world but it wouldn't be as bad as removing your Rift every time you want a caffeine fix.
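A minimal sketch of the math involved, assuming we only track position on the floor plane plus head yaw (a real implementation would use the tracker's full 6-DOF pose; all frame conventions here are made up for illustration):

```python
import math

def to_head_space(obj_xz, head_xz, head_yaw_deg):
    """Express a tracked object's world-space floor position relative
    to the wearer's head.

    obj_xz and head_xz are (x, z) world positions in meters;
    head_yaw_deg is the head's rotation about the vertical axis.
    """
    dx = obj_xz[0] - head_xz[0]
    dz = obj_xz[1] - head_xz[1]
    a = math.radians(-head_yaw_deg)  # rotate the offset into the head's frame
    return (dx * math.cos(a) - dz * math.sin(a),
            dx * math.sin(a) + dz * math.cos(a))

# A keyboard 1 m in front of a head looking straight ahead stays 1 m in front.
print(to_head_space((0.0, 1.0), (0.0, 0.0), 0.0))  # (0.0, 1.0)
```

The resulting head-relative position is exactly what you'd feed to the glowing ghost-keyboard model so it appears where your hands would actually find it.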

While magnetic tracking like the STEM may be the "perfect" solution, its cost might put it out of reach for some.  Fortunately most situations could be handled by a simpler and cheaper solution.  Much like the positional tracking for the Oculus unit itself, a stationary camera could be used to identify and locate many common items by QR code, as long as they weren't obstructed.  Thanks to dbhyslop on Reddit for pointing out that the Rift will already ship with a camera that could be used for this.

Wednesday, 15 January 2014

Using the keyboard with the Oculus Rift

Sometimes we just need to type text on the keyboard.

The Oculus Rift makes this challenging.  Showing the UI and letting the player see the text they're typing isn't the problem.  The actual problem is much simpler to describe, and much harder to deal with.

The player can't see the keys.

Even if you're a great touch-typist, just finding the keyboard on your desk and reaching the home position can be problematic.

It's not ideal, but the solution I've come up with is to ask the player to take off the Rift while they're typing.  I'm lucky enough that the only thing I need them to type is an online log-in, which I can ask for as soon as the game is launched.  This means they're not taking the Rift off and putting it on throughout the experience, and since they weren't using the Rift to operate Windows before launching the app, they'll get in the habit of not donning it until they're signed in.

The remaining challenge is communicating to the user whether the Rift should be worn or not.  At some point in the future it would be nice to detect whether the Rift is actually being worn, but for now that capability doesn't exist, so the app has to tell the user which mode it's in.

Here's a screenshot of how I'm doing this on SnowDrift...

This isn't final, but you can see the important bits.  It's a busy screen with typing that requires the user to take off their Rift.  If they were wearing the Rift when they got to this screen, almost everything you see here would appear in only one of their eyes, and they would be unable to comprehend it clearly.  The only thing clearly visible to them would be the light blue message to take off the Rift, since it's displayed in stereo at a comfortable focal distance.

As we grope our way through the first few months and years of real VR, I hope we come up with a better solution than this, but for now I hope someone finds this discussion handy.