XR Development Tools


Internal tools at Moving Labs
UX + interaction designer
Twelve weeks (part-time)


This project was part of an internal effort at Moving Labs to build our own tools for XR development. We hoped to address our own needs first and then release them for others to build on top of.

The Camera

Although the design was never produced, these high-fidelity mockups explore styling, typography, and overall proportions.

Requirements from the future

Since we were a new team and hadn't worked together on XR development projects before, we couldn't look to an existing workflow to help us define our requirements. Creating three rough proto-personas helped us group and prioritize features from our exploration according to audiences and their potential motivations.

  • Designers on our team who want to communicate with developers about UI implementation issues
  • Designers on our team who want to document an overall experience to present to the wider company or clients
  • Other XR creators using our tools to help build their own experiences

We focused on the first persona, using some rough user journeys of existing workflows like the one shown above to align team members around core tasks. With a hard look at schedule and resources, we came up with the following must-have features to support them:

  • Begins in first-person POV mode
  • POV and floating/tripod view modes
  • Review photos and videos on the device
  • Photo and video recording

Gathering references

Playing with existing cameras in VR applications helped us visualize possibilities and problems. One feature that stood out was the prevalence of a floating preview screen. I kept this idea: its live feedback about what the camera was doing proved very helpful both for learning how the devices worked and for operating them.


Other cameras used a mix of VR controller buttons and floating button affordances to record. When acting out rough scenarios with VRChat prototypes, I realized that many situations would require my hands to be engaged with whatever UI elements I was documenting. Because of this, I focused on a floating record button that would track player movement, which I expected to be especially useful for recording videos. To support both easy wand selection and hand-tracked button pushes, I made the record button physically separate from the rest of the interface.

The minimized size also allowed the button to be conveniently grabbed if users preferred recording with the controller trigger, which could be more convenient than wand selection for taking multiple stills when one hand was free to hold it.
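The "lazy follow" behavior described above, where the floating button trails the player rather than staying rigidly locked to the head, can be reduced to frame-rate-independent damping toward an anchor point. This is only an illustrative sketch of that idea (the function name, anchor point, and smoothing constant are hypothetical, not from our production code):

```python
import math

def damped_follow(current, target, smoothing, dt):
    """Move `current` toward `target` with exponential damping.

    `smoothing` is a rate constant (higher = snappier). Using
    exp(-smoothing * dt) keeps the motion consistent across frame rates.
    """
    t = 1.0 - math.exp(-smoothing * dt)
    return tuple(c + (g - c) * t for c, g in zip(current, target))

# The floating record button lazily trails a point in front of the head:
button_pos = (0.0, 1.5, 0.0)
anchor = (0.0, 1.4, 0.6)   # hypothetical: ~0.6 m in front of the player
for _ in range(90):        # simulate ~1 second at 90 fps
    button_pos = damped_follow(button_pos, anchor, smoothing=6.0, dt=1 / 90)
# after a second the button has settled near the anchor
```

The exponential form is why the button feels the same whether the headset runs at 72 or 90 fps; a naive per-frame lerp would follow faster at higher frame rates.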

We anticipated mostly needing a first-person view of the scene, but we also identified instances, like walking through trigger volumes or other larger-scale interactions, that might need to be documented from an external view. I sketched a virtual drone (in the style of Snap's IRL Pixy product) that indicated the camera's position in 3D space and served as an affordance for repositioning it.

Reviewing Recent Captures

I also anticipated that, much as with bug reporting or UI audits in 2D screen design, I would need to review recent images and possibly retake or re-record footage before ending the app session. To help with this, I designed a recent-image viewfinder and an image gallery that expanded as the camera was flipped over.

When testing a rough prototype in VRChat, I realized that the initial recent-image size was too small. I then changed my designs so that the last image was displayed more prominently as the 3D UI flipped over, rather than immediately showing the image gallery. Although we didn't see many examples of interactions like this, Gravity Sketch's controller menu does something similar, using the three-dimensionality of the interface to divide up groups of content and buttons.

I created this Unity prototype to test some interface controls, yielding valuable insights about the "ergonomics" of the camera.

Prototyping to find out

I ran our initial tests using VRChat as a quick, shareable prototyping platform. These helped us visualize design questions about where UI elements would live in the scene and which kinds of UI would be most comfortable to operate. By providing a canvas to improvise on, they also helped us get a sense of what we might design for people in VR in the future.

Having VRChat’s prefabs ready to go also helped us uncover new ideas through playful interactions, like the one below, where a follow camera tracks an avatar as it runs around drawing lines with a pen. We realized that, in lieu of creating more fully developed VR scenarios, we could instead stage some of these building blocks to help clarify what kinds of spatial interactions we might need to document.
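The follow camera we played with boils down to a couple of lines of vector math: keep a fixed offset from the avatar and aim the camera back at it. A minimal sketch of that idea (the function and offset values are hypothetical, chosen for illustration):

```python
import math

def follow_camera(avatar_pos, offset):
    """Place the camera at a fixed offset from the avatar and return
    its position plus a unit-length look direction back at the avatar."""
    cam = tuple(a + o for a, o in zip(avatar_pos, offset))
    look = tuple(a - c for a, c in zip(avatar_pos, cam))
    mag = math.sqrt(sum(d * d for d in look)) or 1.0
    return cam, tuple(d / mag for d in look)

# e.g. a camera 2 m above and 3 m behind the avatar, looking down at it
cam_pos, look_dir = follow_camera((2.0, 0.0, 5.0), offset=(0.0, 2.0, -3.0))
```

In practice a follow camera would also smooth its position over time so fast avatar movement doesn't jerk the recorded footage, but the offset-plus-look-at core is what makes external documentation shots like the one above possible.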

The project ended before we could block out a full list of these interactions and test a staging/improv space. VRChat seemed an ideal way to create such a space with interactable objects and quickly share a link so other team members could jump in and test things out, together or asynchronously.