This blog post introduces the paper "Toward Among-Device AI from On-Device AI with Stream Pipelines" [1] and briefly explains new functions to be released with the Tizen 7.0 Machine Learning feature.
NNStreamer [2] is a Linux Foundation (LF AI & Data) open source project accepting contributions from the general public: https://github.com/nnstreamer/nnstreamer
Modern consumer electronic devices often provide intelligence services with deep neural networks. We have started migrating the computing locations of intelligence services from cloud servers (traditional AI systems) to the devices themselves (on-device AI systems). On-device AI systems generally have the advantages of preserving privacy, removing network latency, and saving cloud costs. With the emergence of on-device AI systems having relatively low computing power, inconsistent and varying hardware resources and capabilities pose issues to on-device AI developers. In the authors' affiliation, we have started applying a stream pipeline framework, NNStreamer, to on-device AI systems, which saves development costs and hardware resources and increases performance.
We want to expand the types of devices and applications with on-device AI services, not only for products of our affiliation but also for second- and third-party products, and we want to make each AI service atomic, re-deployable, and shareable among connected devices. As always, this introduces yet another requirement: "among-device AI", which includes connectivity between AI pipelines so that they may share computing resources and hardware capabilities across a wide range of devices, regardless of vendors and manufacturers. We propose extensions to the stream pipeline framework, NNStreamer, so that it may provide among-device AI capability. We have planned two major features for this: NNStreamer-Edge and Machine Learning Service.
NNStreamer-Edge is a lightweight and portable library that connects NNStreamer pipelines and edge devices with publish/subscribe/query functions. NNStreamer-Edge is an open source software package independent of the main project, NNStreamer, and its basis, GStreamer [3]. Because it depends on neither NNStreamer nor GStreamer, devices that cannot afford GStreamer or heavy operating systems may easily use NNStreamer-Edge.
Because NNStreamer-Edge has minimal extra dependencies, anyone may implement their own proprietary software with it. For example, third-party developers may implement a proprietary MediaPipe [4] plugin with NNStreamer-Edge so that arbitrary MediaPipe pipelines may communicate with NNStreamer pipelines. Note that DeepStream [5] pipelines can connect to NNStreamer pipelines trivially because both are based on GStreamer.
Figure 1. NNStreamer-Edge scenarios
Figure 1 shows various NNStreamer-Edge scenarios. The NNStreamer element tensor_query is a filter that provides remote inference (AI offloading): it sends a preprocessed tensor stream to, and receives inference results from, another pipeline. NNStreamer-Edge supports various connectivity methods (publish, subscribe, and query) between pipelines or edge devices.
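As a rough sketch of the query scenario, a device with enough computing power may run a server pipeline that accepts tensor streams and replies with inference results, while a lightweight client pipeline offloads its inference to that server. The pipeline descriptions below are illustrative only: the element properties, capabilities, addresses, and the model file name are assumptions based on public NNStreamer examples, not an exact recipe.

```
# Server pipeline: receive tensors, run inference, and send results back.
tensor_query_serversrc port=5001 !
    other/tensors,num_tensors=1,dimensions=3:224:224:1,types=uint8 !
    tensor_filter framework=tensorflow-lite model=mobilenet.tflite !
    tensor_query_serversink

# Client pipeline: preprocess camera frames and offload inference to the server.
v4l2src ! videoconvert ! videoscale !
    video/x-raw,width=224,height=224,format=RGB !
    tensor_converter !
    tensor_query_client host=192.168.0.10 port=5001 !
    tensor_sink
```

For publish/subscribe, NNStreamer similarly ships broker-based elements (e.g., the MQTT-based mqttsink and mqttsrc), so one pipeline may publish a tensor stream and any number of other pipelines may subscribe to it.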
After widely deploying the previous work to mobile phones, wearable devices, TVs, and home appliances, we have worked on prototypes with various internal clients and observed the following lessons and future work.
Many users appear to feel barriers against adopting the pipe-and-filter architecture. We have been implementing initial prototypes and practicing pair programming with the clients. However, we have observed unexpectedly steep learning curves in adopting pipeline concepts and in describing pipeline topology. For the former issue, we are preparing more diverse pipeline examples and documents for users. For the latter, we plan to implement a pipeline editor that helps users build their own pipelines easily.
Besides, we have tried to keep up with technical questions in various channels (Slack, GitHub issues, mailing lists, and direct contacts). Promoting communication with the NNStreamer community may address this issue; however, similar cases keep appearing, and a few inappropriate pipelines have been released before we could address them. This is especially troubling because we are still at an early stage of deploying the pipeline paradigm to AI developers in our affiliation.
Figure 2. Machine learning service overview
We would like to suggest additional methods that directly intervene in the pipeline development process: Machine Learning Service. First, we need to provide common parts of pipelines (sub-pipelines) as a library that can be invoked by or inserted into a user pipeline. There are many commonly used pipeline parts in AI applications, e.g., pre-processing video streams for object detection. This may also prevent some inappropriately designed pipelines. Second, we need to prepare pipelines and sub-pipelines in software platforms so that applications without special AI features may simply invoke such pipelines without actually writing them. This not only removes the need to write pipelines for common basic AI applications, but also allows sharing a pipeline instance between different application instances. Moreover, if a vendor wants to separate its application development division and AI development division, this approach cleanly separates the code repositories of the corresponding divisions, which is why a few clients have requested it.
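For instance, the pre-processing stage for a typical object detection model is nearly identical across applications, which makes it a good candidate for a shared sub-pipeline. Below is a sketch of such a fragment, assuming an SSD-style model that takes 300x300 RGB input normalized to [-1, 1]; the frame size and arithmetic options are illustrative assumptions, not a fixed specification.

```
# Common object-detection pre-processing sub-pipeline:
# scale/convert video, convert frames to tensors, then normalize.
videoconvert ! videoscale !
    video/x-raw,width=300,height=300,format=RGB !
    tensor_converter !
    tensor_transform mode=arithmetic option=typecast:float32,add:-127.5,div:127.5
```

If such a fragment were registered once as a named sub-pipeline in the platform, applications could reference it by name instead of copying (and possibly mistyping) the description into every pipeline.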
1. MyungJoo Ham, Sangjung Woo, Jaeyun Jung, Wook Song, Gichan Jang, Yongjoo Ahn, and Hyoung Joo Ahn. 2022. Toward Among-Device AI from On-Device AI with Stream Pipelines. In Proceedings of The 44th International Conference on Software Engineering (ICSE 2022).
2. NNStreamer. https://lfaidata.foundation/projects/nnstreamer
3. GStreamer. https://gstreamer.freedesktop.org
4. Camillo Lugaresi, Jiuqiang Tang, Hadon Nash, Chris McClanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Guang Yong, Juhyun Lee, et al. 2019. MediaPipe: A Framework for Building Perception Pipelines. arXiv preprint arXiv:1906.08172 (2019).
5. Nvidia. DeepStream SDK. https://developer.nvidia.com/deepstream-sdk