Hyun S Kim



Role: Lead Interaction Designer, Storyboarder, Animator
Project Nature: Ideation / Prototyping / Visual Design
 
Date: Nov. 2015
Timeline: 4 weeks
Team: Janet Kim, Catherine Jou, & Xiao Yan
 

HopCam

Video cameras are ubiquitous, yet most development effort has gone into improving image quality and the technical capture experience; new interaction models remain comparatively unexplored. For this project we explored new interaction methods for body cameras through sketches, storyboards, and a video prototype.

 

Concept Ideation

We began ideation by attempting to separate our mental model of what a camera is from what it can do. This culminated in an exploration, through sketching, of how videos of shared experiences can be more seamlessly accessed through interaction during consumption. One idea that struck me as interesting is how poorly video is archived and accessed. Videos are typically categorized by author and keywords; however, many videos can be grouped by a shared experience of location and time.

My proposed interaction concept helps connect videos with shared viewpoints by allowing viewers to easily switch between cameras in their field of view. The concept can be applied to live streams or to recordings after the fact: in either case, a viewer may switch to any other video stream within view. This interaction lends itself to watching sporting events, tourism, live music events, and more. Instead of browsing visual lists of videos, the concept allows for a more immersive browsing of available footage.
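To make the switching mechanic concrete, here is a minimal sketch of how an interface might decide which cameras are valid switch targets. It assumes every HopCam periodically broadcasts its GPS position; the Camera type and cameras_in_view function are illustrative names, not part of a built prototype.

import math
from dataclasses import dataclass

@dataclass
class Camera:
    cam_id: str
    lat: float
    lon: float

def bearing_to(viewer: Camera, other: Camera) -> float:
    """Approximate compass bearing from viewer to other, in degrees.
    Flat-earth approximation; adequate over the short distances involved."""
    d_lat = other.lat - viewer.lat
    d_lon = (other.lon - viewer.lon) * math.cos(math.radians(viewer.lat))
    return math.degrees(math.atan2(d_lon, d_lat)) % 360

def cameras_in_view(viewer: Camera, heading: float, fov_deg: float,
                    others: list[Camera]) -> list[Camera]:
    """Return the cameras whose bearing falls inside the viewer's field of view."""
    visible = []
    for cam in others:
        # Signed angular difference in [-180, 180) between camera bearing and heading.
        diff = (bearing_to(viewer, cam) - heading + 180) % 360 - 180
        if abs(diff) <= fov_deg / 2:
            visible.append(cam)
    return visible

A real implementation would also need a distance cutoff and occlusion handling; I return to the latter under Outcomes.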

 
 

Storyboarding

The concept I chose to explore is relatively simple; however, its context of use was difficult to describe in words. I developed a storyboard to communicate an ideal but realistic scenario.

 

The story begins with the protagonist putting on the device, which at this point was a monocle. The monocle is transparent initially but can display feeds from other video cameras. The narrative takes the audience through the protagonist's chosen path of viewpoint switching.

 

Team Formation and Physical Product Design

At this point in the project I teamed up with Janet Kim, Catherine Jou, and Xiao Yan. Each of us had developed a body camera concept and storyboard, and my concept was chosen to carry forward into development through a video prototype.

Although the initial device concept in my storyboard was a monocle, Janet, a product designer, transformed it into a more sensible brooch or clip-on device.

 

Video Prototype

I sketched a simple user interface that gives users information on the number of camera jumps, distance travelled, locality, and the duration of their viewing experience. The animations I designed attempt to give the user a sense of forward motion when a camera jump is triggered.
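As a sketch of the data behind that overlay, the following assumes each jump reports the new camera's coordinates; ViewingSession, hop_to, and haversine_km are names I made up for illustration, and locality is omitted since it would require reverse geocoding.

import math
import time
from dataclasses import dataclass, field

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

@dataclass
class ViewingSession:
    """Running statistics for the overlay: jump count, distance, duration."""
    start: float = field(default_factory=time.time)
    jumps: int = 0
    distance_km: float = 0.0
    path: list = field(default_factory=list)  # (lat, lon) of each camera visited

    def hop_to(self, lat: float, lon: float) -> None:
        """Record a camera jump and accumulate the distance travelled."""
        if self.path:
            self.jumps += 1
            self.distance_km += haversine_km(*self.path[-1], lat, lon)
        self.path.append((lat, lon))

    def duration_s(self) -> float:
        return time.time() - self.start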

HopCam was chosen as the product name, and Xiao developed the logo. Janet, Catherine, and I filmed the video, and all four of us edited the final cut. Due to scheduling issues we filmed at night, though the footage might have been more successful shot during the day, and better storyboarding for the video could have helped flesh out a narrative. The music in the film is a song I had written earlier and finished recording for use with the video.

 

Concept Poster

The final deliverable was a concept poster. Alongside explaining the general concept of the product and its interface, I experimented with the idea of porting HopCam to a mobile application.

 

Outcomes

The concept was well received; however, further development and a round of usability testing would likely uncover issues with the interface implementation. I suspect the camera-switching method may need better affordances, since the initial concept asks the user to directly track and select other HopCam wearers, who may be moving targets. I also have questions about how the design would be implemented. Should visually occluded cameras be selectable for viewpoint switching? If not, how would occluded cameras be culled from the interaction? (One possible approach is sketched below.) And would additional video metadata be necessary to track the user's viewpoint and locate other cameras? Further development and research could resolve these open issues.
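As one speculative answer to the occlusion question, if a venue published a simple 2D map of obstacle edges (an assumption on my part), a straight line-of-sight test could cull blocked cameras from the switch targets. The function names below are illustrative only.

def segments_intersect(p1, p2, q1, q2) -> bool:
    """True if segment p1-p2 properly crosses segment q1-q2 (2D points as (x, y));
    collinear touching cases are ignored, which is fine for a sketch."""
    def ccw(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (ccw(p1, p2, q1) * ccw(p1, p2, q2) < 0 and
            ccw(q1, q2, p1) * ccw(q1, q2, p2) < 0)

def has_line_of_sight(viewer, target, obstacles) -> bool:
    """Cull a camera when any obstacle edge blocks the straight line to it."""
    return not any(segments_intersect(viewer, target, a, b) for a, b in obstacles)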