Author Archives: Jeff Burke

OPT Pose Recognition

As part of the V2 Gnocchi update, OpenPTrack now uses machine learning for pose recognition alongside person tracking and its new object tracking capabilities. OPT pose recognition extends the OpenPose skeletal tracking library to multiple cameras, and includes the ability to train the system to detect unique poses. Early adopters include UCLA and Indiana University STEP researchers, who are integrating pose recognition into […]



OpenPTrack V2 “Gnocchi” Coming Soon!

The OpenPTrack team is preparing for the release of OPT V2 (Gnocchi), which will provide object tracking, pose recognition and ZED camera support, as well as enhanced real-time person detection.  Gnocchi will also update the underlying software stack to Ubuntu Linux 16.04 LTS and Robot Operating System (ROS) Kinetic Kame.  OPT V2 Gnocchi has been under development for the […]

New TouchDesigner Components

Two open source components for Derivative’s TouchDesigner have been released for receiving person tracks streamed from OpenPTrack. The components were developed by Phoenix-based developer/stage designer Ian Shelanskey, and can be found in our GitHub repository as well as in Ian’s. The first component is a TOX using Python that improves on the original examples in the […]
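The components above receive person tracks streamed over the network from OpenPTrack. The excerpt does not describe the wire format, but as a minimal sketch, assuming tracks arrive as JSON-encoded UDP datagrams (the port number and message fields below are illustrative assumptions, not OpenPTrack's documented protocol), a listener along the lines of what these TouchDesigner components do could be prototyped in Python:

```python
import json
import socket

def open_track_socket(port, host="0.0.0.0"):
    """Bind a UDP socket on which track datagrams will arrive."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    return sock

def receive_tracks(sock):
    """Block for one UDP datagram and parse it as a JSON track message."""
    data, _addr = sock.recvfrom(65535)
    return json.loads(data.decode("utf-8"))

# Hypothetical usage: each message might carry a list of tracks,
# each with an id and world-space coordinates.
#
#   sock = open_track_socket(21234)
#   msg = receive_tracks(sock)
#   for track in msg.get("tracks", []):
#       print(track.get("id"), track.get("x"), track.get("y"))
```

In TouchDesigner, the equivalent logic would live inside a Python DAT reading from the same socket; the sketch above only illustrates the receive-and-parse step.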

OpenPTrack at IEEE VR Los Angeles

UCLA researcher and PhD student Randy Illum presented Mixed-Reality Barriers: Person-Tracking in K-12 Schools at the IEEE VR conference in Los Angeles on March 19. The paper, co-written with GSE&IS PhD student Maggie Dahn, details the use of OpenPTrack in classrooms at two schools—one, a university laboratory elementary school, and the other, a public charter school. Illum’s presentation was given as part of […]

An iSTEP for Cyberlearning with OpenPTrack

Building on the ongoing Science Through Technology Enhanced Play (STEP) project, Interactive Science Through Technology Enhanced Play (iSTEP) will begin to incorporate new OpenPTrack capabilities currently in development.  STEP’s computer simulation, with OpenPTrack as the interface for body-based interaction, has been helping students understand scientific phenomena at UCLA Lab School and Indiana University. The projects’ embodied play and […]


OpenPTrack to be Presented at MW2016

UCLA REMAP researcher and UCLA GSE&IS doctoral student Randy Illum will be introducing OpenPTrack to MW2016 on April 9. In one of the conference’s Lightning Talks, Illum will present “OpenPTrack: Body-Based Group Tracking for Informal Learning Spaces,” sharing the OpenPTrack platform and OpenPTrack projects to date to generate conversation about the future of body-based interaction and visitor engagement […]

UCLA REMAP Crowdfunding for New OpenPTrack Collaborations

UCLA REMAP has launched a campaign via UCLA’s crowdfunding platform, UCLA Spark, to raise support for three new OpenPTrack deployments and deepen the open source process. By collaborating with artists and educators not initially involved in OpenPTrack’s development, the research team will be able to expand upon efforts to make the platform easier to use, more […]

OpenPTrack at TouchDesigner Workshops West

UCLA REMAP has set up a temporary installation of OpenPTrack at Derivative’s Los Angeles space. The team will be showcasing OpenPTrack and its easy integration with Derivative’s TouchDesigner at TouchDesigner Workshop Los Angeles on April 30 and May 3. The deployment consists of one Kinect v2, two Kinect v1s, and three CPUs. […]

Prototype Deployment – Indiana University

Dr. Joshua Danish and the Learning Sciences program at Indiana University are now using OpenPTrack to develop Cyberlearning tools with elementary school students at a Bloomington area school. The Indiana installation is supporting a thread of the NSF-funded Science Through Technology Enhanced Play (STEP) project, led by Dr. Noel Enyedy of the UCLA Graduate School of […]

Kinect2 Support

Support for the Microsoft Kinect v2, which will provide enhanced range and accuracy, is now in testing at UCLA! The detection code leverages GPU processing to handle the higher-resolution RGBD images.