IBM announced the open-sourcing of a new serverless framework called CodeFlare on Wednesday. It aims to simplify the integration and scaling of artificial intelligence workflows across multi-cloud environments and massive datasets.
The framework is built on Ray, an open-source distributed computing framework, and adds features for scaling and integration aimed at developers and data workers. Training and optimizing machine learning models to perform tasks is a highly labor-intensive process; IBM released CodeFlare to simplify that process and save time.
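Because CodeFlare builds on Ray, the kind of parallel fan-out it enables can be illustrated with plain Ray primitives. The sketch below is not CodeFlare's own pipeline API; the training function and hyperparameter values are hypothetical placeholders for a real training step.

```python
# Minimal sketch of the Ray primitives CodeFlare builds on: fan out many
# training runs in parallel, then gather the results. The training logic and
# hyperparameters here are hypothetical stand-ins.
import ray

ray.init()  # start a local Ray cluster (or connect to an existing one)

@ray.remote
def train_and_score(param):
    # Placeholder for a real training step; returns a (param, score) pair.
    score = 1.0 / (1.0 + abs(param - 3))
    return param, score

# Launch the runs concurrently across the cluster and collect the results.
futures = [train_and_score.remote(p) for p in range(10)]
results = ray.get(futures)
best = max(results, key=lambda r: r[1])
print("best hyperparameter:", best)
```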
The simplification comes from creating pipelines that integrate, parallelize, and share data through a Python-based interface. IBM had previously created pipelines for deploying applications on IBM Cloud, called Continuous Delivery Tekton Pipelines, but they weren't compatible with hybrid cloud platforms.
CodeFlare, however, can unify these pipelines on IBM Cloud and Red Hat OpenShift without requiring developers to learn a new workflow language for every infrastructure. IBM CodeFlare provides adapters for event triggers, such as the arrival of a new file, so pipelines can integrate and bridge with other cloud-native ecosystems. It also allows loading and partitioning data from data lakes, cloud storage, and distributed file systems.
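To make the file-arrival trigger concrete, here is a hedged sketch that polls a directory and kicks off a pipeline run whenever a new file lands. The directory path, polling interval, and run_pipeline helper are hypothetical and not part of CodeFlare's published adapter API.

```python
# Hedged illustration of an event trigger on new-file arrival; all names
# below are hypothetical, not CodeFlare's adapter interface.
import os
import time

WATCH_DIR = "/data/incoming"   # hypothetical landing directory for new files
POLL_SECONDS = 30              # hypothetical polling interval

def run_pipeline(path):
    # Placeholder for handing the new file to a CodeFlare/Ray pipeline run.
    print(f"triggering pipeline for {path}")

# Remember what is already present, then poll for anything new.
seen = set(os.listdir(WATCH_DIR))
while True:
    current = set(os.listdir(WATCH_DIR))
    for name in sorted(current - seen):
        run_pipeline(os.path.join(WATCH_DIR, name))
    seen = current
    time.sleep(POLL_SECONDS)
```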
With IBM CodeFlare, it's easier for colleagues in the same team or organization to see how pipelines were run in the past. With robust tools and advanced APIs, workloads become more consistent, leaving more room for actual research and development rather than untangling and deploying complex infrastructure.
CodeFlare's robustness shows most clearly in its speed. According to the IBM blog, when CodeFlare was applied to optimize and analyze 100,000 machine learning training pipelines, it cut the execution time of each pipeline from 4 hours to about 15 minutes. IBM is using the framework for its own artificial intelligence research and plans to develop far more complex pipelines that are more consistent, reliable, and fault-tolerant.