The BabyView camera is designed to collect high-resolution, at-home egocentric video data from children 9–30 months of age. It consists of a rotated GoPro Hero Bones camera mounted on a lightweight baby safety helmet with a custom 3D-printed mount and powered by a rechargeable 9V battery. The BabyView camera was designed in collaboration with Daylight, Inc., a product-design firm in San Francisco, CA, and is the result of more than a year of prototyping.
All design documentation, safety testing protocols, assembly instructions, pilot data, data management protocols, and sample participant instructions can be found at https://osf.io/kwvxu/, which also links to a preprint of a forthcoming paper with this same content.
If you are interested in being part of a batch of orders for BabyView cameras for production in summer 2022, please contact bria [at] stanford.edu.
Our goal was to capture a toddler’s field of view and their interactions as accurately as possible while also providing head-tracking data. We found that the GoPro Hero Bones meets these needs, offering a ~100°+ field of view, a gyroscope and accelerometer, high-resolution video, and a recording time of 45–60 minutes with the current battery choice. Data are recorded onto micro SD cards, which can hold up to 256 GB. During our experimentation with the GoPros, we determined that orienting the camera vertically, at an angle neutral to the face plane of the child, was preferable because it enables the camera to capture both adult faces and objects in a child’s hands in the same image; see the example images below.
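For planning recording sessions, it can help to relate card capacity to recording time. The sketch below estimates how many hours of video a card can hold; the 60 Mbit/s bitrate is an illustrative assumption about GoPro-class high-resolution recording, not a figure from the BabyView documentation.

```python
def hours_per_card(card_gb: float, bitrate_mbps: float) -> float:
    """Approximate hours of video a memory card can hold.

    card_gb: card capacity in gigabytes (decimal, 1 GB = 1e9 bytes)
    bitrate_mbps: video bitrate in megabits per second (assumed value)
    """
    card_bits = card_gb * 1e9 * 8              # capacity in bits
    seconds = card_bits / (bitrate_mbps * 1e6)  # recording time in seconds
    return seconds / 3600

# At an assumed 60 Mbit/s, a 256 GB card holds roughly 9.5 hours of video,
# i.e., many 45-60 minute battery-limited sessions per card.
print(round(hours_per_card(256, 60), 1))
```

Under these assumptions, storage is not the binding constraint; battery life limits each session well before the card fills.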
Bria Long1, Sarah Goodin2, George Kachergis1, Virginia A. Marchman1, Samaher Radwan1, Violet Xiang1, Chengxu Zhuang1, Oliver Hsu2, Brett Newman2, Daniel L. K. Yamins1,3, Michael C. Frank1
1 Department of Psychology, Stanford University 2 Daylight Design 3 Department of Computer Science, Stanford University
Research and development of the BabyView was supported by a generous gift from The Schmidt Futures Foundation (https://www.schmidtfutures.com/).
For more information, contact Michael Frank.
This work is licensed under a Creative Commons Attribution 4.0 International License. It can be shared and adapted, but you must acknowledge the original by citing the paper above.