At the PyTorch Developer Day 2021 conference, Meta launched PyTorch Live, which uses a single programming language to design AI-based applications for both Android and iOS platforms. The mission of PyTorch is to bring cutting-edge research to developers, co-develop with many stakeholders, offer modularity so developers can use their desired tools, and be performant and production-oriented. With these four criteria as a base, PyTorch introduced PyTorch Live.
PyTorch Live works within the stringent resource restrictions of mobile devices and reduces the workload for mobile developers creating novel ML-based applications. PyTorch Live is a set of tools for building AI-powered mobile applications that run on both Android and iOS platforms.
Usually, building apps that work across different platforms requires expertise in multiple programming languages, which increases the cost of leveraging mobile models on different devices. In addition, developers would be required to separately configure the project and build a UI (User Interface) for each platform, slowing the app development process.
PyTorch Live is powered by two successful open-source projects, PyTorch Mobile and React Native, to build AI-powered mobile applications. PyTorch Mobile is a runtime for deploying models and performing on-device inference in mobile applications, while the React Native library is used to build interactive user interfaces for Android and iOS platforms.
To design and build AI-powered mobile applications, developers can use PyTorch Live’s CLI (Command Line Interface), Data Processing API, and cross-platform apps. The CLI quickly sets up the mobile app development environment and bootstraps mobile app projects; the Data Processing API is used to prepare and integrate custom models for building a new mobile application; and the cross-platform apps use PyTorch Live APIs to build AI-powered mobile apps for Android and iOS.
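As a sketch of that CLI workflow (the `torchlive-cli` package and its `setup-dev`, `init`, and `run-android` subcommands follow the project's published tooling; the project name and exact flags here are illustrative and may differ across releases):

```shell
# One-time setup of the Android/iOS development toolchains
npx torchlive-cli setup-dev

# Bootstrap a new cross-platform PyTorch Live project
# ("MyAIApp" is an illustrative project name)
npx torchlive-cli init MyAIApp

# Build and launch the app on a connected Android emulator/device
cd MyAIApp
npx torchlive-cli run-android
```

The same bootstrapped project can be run on iOS from macOS, so a single codebase covers both platforms.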
Developers can build their own user interfaces on top of models using PyTorch Live’s core APIs, such as the Camera API and the Canvas API. The Camera API is used to build a UI that identifies the objects in an image captured by the user, while the Canvas API is used to build a UI that lets a user draw a letter or digit and have the model predict it.
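A minimal sketch of the Camera API in a React Native component, assuming the `react-native-pytorch-core` package that backs PyTorch Live; the model file path and the shape of the inference result are illustrative and depend on the bundled model spec:

```typescript
import * as React from 'react';
import {Camera, Image, MobileModel} from 'react-native-pytorch-core';

// Hypothetical result shape for an image-classification model;
// the actual fields are defined by the model's spec file.
type ImageClassificationResult = {
  maxIdx: number;
};

export default function ObjectIdentifier() {
  // Called when the user captures a photo with the camera
  async function handleCapture(image: Image) {
    // Run on-device inference via PyTorch Mobile;
    // the model path is illustrative
    const {result} = await MobileModel.execute<ImageClassificationResult>(
      require('../models/image_classifier.ptl'),
      {image},
    );
    console.log('Predicted class index:', result.maxIdx);
    // Release the native image to free memory
    image.release();
  }

  return <Camera style={{flex: 1}} onCapture={handleCapture} />;
}
```

The Canvas API follows the same pattern: the UI component collects the user's drawing, which is handed to `MobileModel.execute` for on-device prediction.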
According to Meta, PyTorch Live (GitHub) will also let developers work with audio and video data in the near future.