Now, we plan to invite potential collaborators (creative coders in education, arts and culture, and computer vision developers across the world) to use and contribute to the open-source platform. Please follow us on Twitter for posts about developments and areas for collaboration.
Your contributions will help shape our plan for the platform's next steps. Feature requests can be submitted as issues on GitHub, or sent to our authors by email. Our current plans include:
- Continuing to improve configuration and calibration, to make deployment in real-world applications as simple and robust as possible.
- Adding better support for redeployment, such as for touring shows or other situations using portable systems, with the goal of providing applications a consistent coordinate system and consistent performance even as imager setups vary significantly.
- Continuing to improve ID stability, toward the goal of a single ID for each individual for as long as they remain in the tracking volume, reducing the lost and reacquired tracks that assign new IDs to the same person. An improved person detector is included in the “Gnocchi” release.
- Building additional processing functions for turning basic centroid data (for individuals and groups) into higher-level interaction signals (analogous to how finger touches are translated into gestures in multi-touch systems), along with corresponding high-level APIs.
- Considering ways to fuse other sensing systems (e.g., active RF positioning or inertial sensing) that may provide features currently difficult to obtain from RGBD imagers alone.
- Exploring support for more pose and gesture signals within the platform, to expand the interaction vocabulary it supports, while robust centroid tracking remains OpenPTrack's primary objective. Initial support for pose recognition is included in the “Gnocchi” release.
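To make the centroid-to-signal idea above concrete, here is a minimal sketch of one such higher-level function: a "dwell" detector that turns a stream of per-frame centroids into an event when a person lingers in one spot. This is purely illustrative, not OpenPTrack's actual API; the `Centroid` type, field names, and thresholds are all assumptions for the example.

```python
# Hypothetical sketch (not OpenPTrack's API): deriving a higher-level
# "dwell" interaction signal from raw per-frame centroid tracks.
from dataclasses import dataclass, field


@dataclass
class Centroid:
    track_id: int
    x: float  # meters, world frame (assumed)
    y: float
    t: float  # timestamp in seconds


@dataclass
class DwellDetector:
    """Emit a 'dwell' event when a track stays within `radius` meters
    of its anchor point for at least `min_duration` seconds."""
    radius: float = 0.5
    min_duration: float = 2.0
    # track_id -> (anchor_x, anchor_y, anchor_start_time)
    _anchors: dict = field(default_factory=dict)

    def update(self, c: Centroid):
        anchor = self._anchors.get(c.track_id)
        if anchor is None:
            # First sighting of this track: anchor it here.
            self._anchors[c.track_id] = (c.x, c.y, c.t)
            return None
        ax, ay, t0 = anchor
        if (c.x - ax) ** 2 + (c.y - ay) ** 2 > self.radius ** 2:
            # Moved away from the anchor: re-anchor at the new position.
            self._anchors[c.track_id] = (c.x, c.y, c.t)
            return None
        if c.t - t0 >= self.min_duration:
            # Still near the anchor after min_duration: report a dwell.
            return ("dwell", c.track_id, c.t - t0)
        return None
```

A real implementation would also handle track expiry and avoid re-firing the event every frame, but the shape is the point: centroid streams in, discrete interaction events out, much like touch points in, gestures out.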
This roadmap will remain a work in progress, so please check back regularly.