Company: Hypergiant
Role: Creative Technologist
Tools: Python3+, Raspberry Pi, Electrical Engineering

HyperVSR

May 28, 2019

HyperVSR is a display system designed to help manage complex space missions and improve astronaut safety by putting more information under crew members' control, cutting down on how much they need to maneuver in a spacesuit. It quickly surfaces situational, health, and mission-critical information, all driven by computer vision.

The project is currently split into two phases. The first phase was a rough prototype fashioned from off-the-shelf components and 3D-printed parts. Its purpose was to establish whether the functionality was even possible, so the focus was on getting a first pass of the software up and running.

The software was built on top of the Qt framework, which drives the interface application. I ran the custom hand-detection model on a separate thread from the interface so the model would not block the main thread while processing each frame captured by the camera. The custom hand model, trained against the 11k Hands dataset, can classify hands as dorsal (back) or palmar (front), and as left or right from the user's point of view. To handle abnormalities and random false positives, the hand's state is stored and aggregated over a set number of frames. Caching predictions this way produces a solid overall prediction without worrying about a fluctuating frame rate from the source camera.
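A minimal sketch of this pattern is below, assuming PyQt5 (the project only says Qt) and placeholder camera and model objects; the class, signal, and window-size names are hypothetical rather than the project's actual code.

    # Hypothetical sketch: hand classification on a worker thread with
    # majority-vote smoothing over a sliding window of frames.
    from collections import Counter, deque

    from PyQt5.QtCore import QThread, pyqtSignal


    class HandWorker(QThread):
        prediction = pyqtSignal(str)  # emits a smoothed label, e.g. "left_palmar"

        def __init__(self, camera, model, window=15):
            super().__init__()
            self.camera = camera               # assumed: object with read() -> frame
            self.model = model                 # assumed: object with predict(frame) -> label
            self.recent = deque(maxlen=window)

        def run(self):
            # Runs off the GUI thread, so per-frame inference never blocks the interface.
            while not self.isInterruptionRequested():
                frame = self.camera.read()
                self.recent.append(self.model.predict(frame))
                # Majority vote over the last few frames filters out one-off
                # false positives and jitter from a fluctuating frame rate.
                label, _ = Counter(self.recent).most_common(1)[0]
                self.prediction.emit(label)

The interface connects to the prediction signal on the main thread, so UI updates stay on the GUI thread while inference runs in the worker.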

With the first phase a success, the project expanded into a more polished second phase. This revolved around fabricating experiences for both an active (full-face) and a passive (additive) helmet.

The active display takes full control of the user's view, augmenting natural human optics to support interactions like zooming in on a viewpoint, looking in reverse, and even adding optics such as thermal vision.

The application for the active display was built on the same Qt base framework, structured so that features could be added modularly. However, instead of driving the optics augmentations through hand identification, control was simplified to a non-latching push button, and the content shown in the helmet could also be passed through to a larger external screen. This allowed demos to be held for larger groups.
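A non-latching (momentary) push button on the Raspberry Pi can be read with only a few lines; the sketch below assumes the gpiozero library and GPIO pin 17, neither of which is specified by the project.

    # Hypothetical sketch: advance to the next augmentation on each button press.
    from signal import pause

    from gpiozero import Button

    button = Button(17)  # momentary push button wired to GPIO 17 (assumed pin)

    def next_augmentation():
        # Cycle zoom / reverse view / thermal, and mirror the content
        # to the larger external demo screen as well as the helmet.
        print("button pressed: switch augmentation")

    button.when_pressed = next_augmentation
    pause()  # keep the script alive so button callbacks keep firing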

The passive display was an extension of the first phase. Driven purely by hand identification, the interface was triggered and passed to the active display glasses through a locally hosted web server. With this in place, health information from an external device could be passed through to the interface, updating the user's values in realtime.
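The source does not name the web framework, so the sketch below assumes Flask; the route and field names are hypothetical stand-ins for how an external health device could push values that the interface reads in realtime.

    # Hypothetical sketch: a locally hosted endpoint for passing health values through.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    latest_vitals = {}  # most recent readings pushed by the external health device

    @app.route("/vitals", methods=["POST"])
    def update_vitals():
        # External device posts JSON such as {"heart_rate": 72, "spo2": 98}
        latest_vitals.update(request.get_json(force=True))
        return jsonify(status="ok")

    @app.route("/vitals", methods=["GET"])
    def read_vitals():
        # The display interface polls this endpoint to update the user's values.
        return jsonify(latest_vitals)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)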

Featured on Futurism.