
Stanford Team uses AI to set World Record for Fastest Genome Sequencing

Image Credit: Analytics Drift

A team of Stanford scientists has set the first Guinness World Record for the fastest DNA sequencing technique, which took only 5 hours and 2 minutes to sequence a human genome. The research team, led by Stanford University, collaborated with NVIDIA, Oxford Nanopore Technologies, Google, Baylor College of Medicine, and the University of California, Santa Cruz, using AI to speed up the end-to-end process, from collecting a blood sample to sequencing the entire genome and identifying disease-linked variants. The record was certified by the Genome in a Bottle group of the National Institute of Standards and Technology (NIST), and it is documented by Guinness World Records.

Sequencing a genome entails extracting short fragments of DNA from the roughly 6 billion nucleobases inherited from our parents, namely adenine (A), thymine (T), guanine (G), and cytosine (C). Using a typical human genome as a reference, the fragments are then read and stitched back together. This method, however, does not always capture the full genome of a patient, and the data it yields can leave out gene variations that point to a diagnosis. This means that locating mutations that span a wide portion of DNA can be difficult, if not impossible. Hence, researchers use long-read sequencing, which preserves significantly longer segments of the patient’s genome, increasing the chances of finding mutations, minimizing errors, and correctly diagnosing the patient.

Genome sequencing is a vital tool for clinicians diagnosing uncommon genetic illnesses. It helps them determine whether their patients’ genes are mutated and, if so, what genetic disorders such mutations could cause. However, it is not an easy task, owing to complications such as variations in sequencing techniques and technologies, as well as in data storage formats and data exchange protocols. Machine learning and deep learning are two AI technologies already well known for their remarkable data processing and pattern recognition prowess. As a result, AI frameworks are used in healthcare research to enable the efficient interpretation of massive, complicated datasets such as genomes.

The researchers reached the record-breaking speed by refining each step of the sequencing process. They used a DNA sequencing platform from Oxford Nanopore Technologies called PromethION Flow Cells. This device reads genomes by pulling long strands of DNA through pores that are similar in size and composition to the openings in biological cell membranes. It detects the DNA sequence by reading the small electrical changes specific to each DNA letter as a strand travels through a pore. Thousands of these pores are dispersed over a flow cell device. The researchers sequenced a single patient’s genome simultaneously across 48 flow cells, allowing them to read the full genome in a record 5 hours and 2 minutes (7 hours and 18 minutes in total, including diagnosis). The device also supports long-read sequencing.

They generated more than 100 gigabases (100 billion nucleotides) of data every hour using nanopore sequencing on Oxford Nanopore’s PromethION Flow Cells, then accelerated base calling and variant calling using NVIDIA GPUs on Google Cloud. At this stage, the device’s raw signal data are converted into a string of A, T, G, and C nucleotides, which are then aligned in near real time. The scientists quickly realized that streaming the data directly to a cloud-based storage system gave them enough computational power to handle all of the data generated by the nanopore device; dispersing the data among cloud GPUs also reduced latency.

The next step was to look for small variations in the DNA sequence that could cause a hereditary disease. The researchers used the NVIDIA Clara Parabricks computational genomics application framework for both base calling and variant calling. Clara Parabricks used a GPU-accelerated version of PEPPER-Margin-DeepVariant, a pipeline developed by UC Santa Cruz’s Computational Genomics Laboratory in partnership with Google, to speed up this stage. DeepVariant employs convolutional neural networks for highly accurate variant calling, and Clara Parabricks’ GPU-accelerated DeepVariant Germline Pipeline produces results at roughly ten times the speed of native DeepVariant instances, reducing the time needed to find disease-causing variants.

Read More: Stanford’s ML Algorithm Accurately Predicts the Structure of Biological Macromolecules

Using this rapid genome sequencing approach, the scientists scanned a 3-month-old patient’s entire genome in just eight and a half hours. They discovered that the baby’s CSNK2B gene was altered. CSNK2B is a gene linked to Poirier-Bienvenu syndrome, a rare neurodevelopmental condition characterized by early-onset epilepsy. Doctors diagnosed the patient with Poirier-Bienvenu within a few days, prescribed the appropriate antiseizure medication, and provided disease-specific counseling and a prognosis to the patient’s family. In contrast, an epilepsy gene panel (which did not include CSNK2B) had been ordered at the time of presentation; its findings, which arrived two weeks later, revealed only a few nondiagnostic variants of questionable significance.

This marks a huge milestone in genome sequencing and health diagnostics. With the ability to sequence a person’s entire DNA in just hours, ultra-rapid genome testing could become a life-saving technology for detecting inheritable disorders in humans. It would also improve patient prognoses by catching certain disorders early. Rapid genome sequencing could likewise be the key to identifying and classifying undiagnosed adult patients with unknown genetic disorders.


Cnvrg.io announces AI Blueprints, an open-source suite of ML Pipelines


cnvrg.io, a company providing a full-stack data science platform, has announced AI Blueprints. The newly launched machine learning pipelines are an open-source suite that allows users to quickly deploy artificial intelligence applications.

It is an easy-to-use, developer-friendly platform that can run on any infrastructure. AI Blueprints are designed for developers who wish to create machine learning-powered apps and services for typical commercial applications such as recommender systems, sentiment analysis, object detection, and more.

According to the company, AI Blueprints are based on cnvrg.io’s experience working with some of the world’s top data and AI teams, studying and recognizing the recurring bottlenecks and technical difficulties that arise when deploying machine learning.

Read More: Aurora and U.S. Xpress collaborate to develop Driverless Truck Networks

Cnvrg.io has packed AI Blueprints with multiple features and resources, including a complete library of data connectors, pre-built ML pipelines, and ready-to-use cnvrg.io blueprints.

CEO and Co-founder of Cnvrg.io, Yochay Ettun, said, “The cnvrg.io AI Blueprints are developer-friendly, open-source, and fully customizable – enabling any developer to easily add ML to their applications.” 

He further added that AI Blueprints is a way of allowing data scientists to easily share their work while also assisting organizations in keeping up with demand and applying AI to a broader range of applications. 

Israel-based artificial intelligence and machine learning company Cnvrg.io was founded by Leah Kolben and Yochay Ettun in 2016. The firm specializes in providing end-to-end solutions that allow organizations to accelerate innovation and build high-impact machine learning models. 

Cnvrg.io has a vast customer base of companies of all sizes, including multiple Fortune 500 organizations. To date, the firm has raised $8 million, in a funding round held in 2019 with investors including Hanaco Venture Capital and Jerusalem Venture Partners.

In 2020, semiconductor manufacturing giant Intel acquired Cnvrg.io; however, no information was disclosed regarding the deal’s valuation.

VP Intel AI Strategy and Execution, Kavitha Prasad, said, “We’re excited to work closely with cnvrg.io to enable developers to get more value from their AI initiatives.” 

Interested users can check out AI Blueprints from the official website of Cnvrg.io. 


General Motors to invest $7 billion in Michigan facilities for EV Production


Global automobile manufacturing giant General Motors (GM) announces its plans to invest $7 billion in four Michigan facilities to accelerate the production of electric vehicles (EV). 

With the new investment, GM plans to start production of electric pickup trucks by 2024. General Motors says the investment will help create more than 4,000 new jobs while exponentially increasing its production capacity for electric trucks and battery cells.

It is a milestone development as this is the single largest investment that GM has ever made in its history. CEO and Chair of General Motors, Mary Barra, said, “Today we are taking the next step in our continuous work to establish GM’s EV leadership by making investments in our vertically integrated battery production in the US, and our North American EV production capacity.” 

Read More: Top 10 Python Data Science Libraries

She further added that the GMC HUMMER EV, Cadillac LYRIQ, Chevrolet Equinox EV, and Chevrolet Silverado EV have all received excellent customer feedback and reservations since their recent releases and debuts.

As part of the plan, GM will construct a new Ultium battery cell plant in Lansing and convert its Orion Township assembly factory to support the production of its highly rated electric vehicles.

Once the facilities start operating at full scale, GM will be able to manufacture 600,000 electric trucks. The company expects to have the biggest EV portfolio of any carmaker, solidifying its route to EV leadership in the United States by the middle of this decade. 

This investment, according to GM, is the next step in the company’s plan to become the EV market leader in the US by 2025. 

“These important investments would not have been possible without the strong support from the Governor, the Michigan Legislature, Orion Township, the City of Lansing, Delta Township, as well as our collaboration with the UAW and LG Energy Solution,” said Barra.


Aurora and U.S. Xpress collaborate to develop Driverless Truck Networks


Self-driving vehicle software developer Aurora has partnered with trucking company U.S. Xpress to develop driverless truck networks.

According to the agreement, the companies will determine the best deployment tactics for Aurora Driver-powered vehicles to meet the demand while improving operational efficiency and productivity. 

Aurora is developing Aurora Horizon, a service that comprises a fully automated driving system and a suite of fleet management tools and services. The newly announced partnership will allow Aurora to further refine its autonomous Driver-as-a-Service product, Aurora Horizon, enabling the companies to initiate the commercialization of the product. 

Read More: MeitY’s Data Policy unlocks Government Data for all

Aurora, founded in 2017, has previously formed partnerships with multiple vehicle manufacturers, including truck makers PACCAR and Volvo Trucks, as well as FedEx Corp and Uber Technologies’ Uber Freight.

The new strategic partnership will enable the companies to identify the sectors and areas where autonomous technology can make the most difference, leveraging Aurora and U.S. Xpress’ expertise in the subject. 

CEO and President of U.S. Xpress, Eric Fuller, said, “Professional truck drivers will always have a place with our company, while autonomous trucks will supplement and help provide much-needed capacity to the supply chain.” 

He further added that Aurora is creating new technology for the future of trucking, which is the reason they are cooperating early to ensure they are the first to market with self-driving trucks.

Additionally, Aurora and U.S. Xpress will also integrate application programming interfaces (APIs) into Variant’s platform to improve dispatching and dynamic routing following the launch of Aurora Horizon. 

United States-based autonomous driving software firm Aurora was founded by Chris Urmson, J. Andrew Bagnell, and Sterling Anderson in 2017. The company specializes in providing a platform that brings software and data services to vehicle operation.

“Aurora carefully designs its industry collaborations to enhance the value and maximize the impact our product can deliver for our partners’ businesses,” said Co-founder and Chief Product Officer of Aurora, Sterling Anderson. 

He also mentioned that they are pleased to work with U.S. Xpress to provide the benefits of this game-changing technology for their company and customers. 


Another Phishing attack on OpenSea: Are Phishing threats on the rise in NFT Marketplaces?

Image Credit: Analytics Drift

In yet another alarming development, OpenSea, the world’s largest NFT marketplace, disclosed that it had been hit by a phishing attack, with at least 32 customers losing NFTs valued at US$1.7 million. This comes after Devin Finzer, the co-founder and CEO of OpenSea, rebutted reports that the NFT marketplace itself had been breached.

The incident occurred while OpenSea was migrating to its new Wyvern smart contract system, a process that started on Friday and is expected to finish by February 25. Wyvern is an open-source smart contract standard commonly used in NFT smart contracts and Web3 applications, notably OpenSea. As part of the contract upgrade, OpenSea users were required to migrate their listed NFTs on the Ethereum blockchain to the new smart contract. Users who did not migrate risked losing their old, inactive listings; the migration itself did not require gas fees. The contract upgrade, intended to remove inactive NFTs from the platform, had a one-week deadline attached to it. To assist users, the platform sent out emails with advice on how to confirm the migration of their listings.

The news of attackers hijacking to-be-migrated NFTs broke just hours after OpenSea announced its update. The phishing actors took advantage of the process by imitating the message from OpenSea and sending it to users from their own email addresses, fooling recipients into thinking their original confirmation had failed.

According to an explanatory thread posted by Finzer, the victims were asked to sign half of a Wyvern order. Except for the call data and a target pointing to the attacker’s contract, the order was practically empty; the victim signed one half and the attacker signed the other.

Once both halves were signed, the attacker called their own contract listed in the double-signed order, initiating the transfer of the victim’s NFTs to the attacker.

According to Finzer, OpenSea determined that neither its website nor a previously unknown weakness in the platform’s NFT minting, purchasing, selling, or listing functions was used in the attack. Clicking on the site’s banner, signing the new Wyvern smart contract, and migrating listings to the new Wyvern contract system through OpenSea’s listing migration tool were all found to be secure.

As per Peckshield, a blockchain security firm, up to 254 tokens were taken, including NFTs from Decentraland, the Azuki collections, and the Bored Ape Yacht Club. Molly White, the creator of the Web3 is Going Just Great blog, estimated the loot to be worth 641 Ethereum. In addition, according to the security firm, the OpenSea hacker(s) allegedly used the privacy mixer application Tornado Cash to launder 1,100 ETH. Tornado Cash can mask the final destination of Ether tokens.

The phishing attack is currently being investigated by OpenSea. The examination so far indicates that the NFTs were stolen using phishing emails before being moved to OpenSea’s new smart contract. At the moment, OpenSea has denied that the attack was caused by the new contracts and says the phishing emails originated from outside the platform.

Finzer stated that they have yet to determine which websites were tricking users into maliciously signing orders. In the meantime, OpenSea is notifying affected users and offering assistance with the next steps.

NFTs are digital tokens that serve as proof of authenticity for, and in certain cases ownership of, assets ranging from high-end ape artwork to collectibles such as celebrity signatures and tangible commodities such as a case of rare whiskey.

With over one million active user wallets and a market capitalization of $13 billion, OpenSea is one of the largest NFT marketplaces. According to Dune Analytics, a Blockchain analytics business, its average daily trade volume is over $260 million, with a monthly volume of over $2 billion in January 2022. According to blockchain tracking service DappRadar, the platform has done $21.8 billion in lifetime trades, which is around $5 billion more than the second-largest platform — LooksRare.

Read More: Polygon and GameOn to Develop NFT based Games: How will it impact the market?

Check Point Research issued a security alert in October 2021 regarding a vulnerability in OpenSea that, if exploited, could have let attackers take over user accounts and empty their crypto wallets by sending malicious NFTs. In fact, according to a report released earlier this month by Chainalysis, illicit wallets amassed nearly $11 billion in cryptocurrency in 2021 alone.

Last year, a phishing scam led to the loss of 15 NFTs worth $2.2 million from Todd Kramer’s Ethereum wallet, among them four from the Bored Ape Yacht Club. British heavy metal legend Ozzy Osbourne debuted his CryptoBatz collection in January, consisting of 9,666 digital bats modeled after Osbourne’s persona. However, only two days after the tokens were issued, collectors reported being targeted by a phishing scam that drained cryptocurrency from their wallets via a faulty link posted by the project’s official Twitter account. Wormhole Portal, a crypto platform, was hacked in February of this year, losing $322 million in the second-largest hack the DeFi industry has seen.

The Chainalysis report also noted that the tendency of hackers to create enthusiasm around a project in order to inflate prices before abandoning it has become more common in the last year or so.


Top 10 Python Data Science Libraries


Python is undoubtedly the most popular and user-friendly programming language for data science and machine learning tasks. Because Python has a vast collection of advanced packages, developers can use it to implement high-end AI-based tasks seamlessly. Python has more than 137,000 libraries, each focusing on a particular function or purpose. When it comes to high-end processes like machine learning and deep learning, there are extensive collections of Python data science libraries that help in the various phases of the ML model development life cycle, including data loading, data visualization, and model building.

This article covers the top 10 Python libraries used in different phases of the model development lifecycle.

  1. TensorFlow

Developed by Google and released in 2015, TensorFlow is an open-source Python data science library, and among the most popular for deep learning and artificial intelligence applications. With TensorFlow, you can solve complex numerical computations and implement large-scale machine learning models for tasks such as handwritten digit classification, image recognition, text classification, and recommendation systems. TensorFlow provides a comprehensive ecosystem of tools and APIs for developers, enterprises, and researchers, helping them build and deploy scalable machine learning and deep learning applications.

Designed to be a highly compatible library, TensorFlow can run on various devices and platforms, incorporating the advantage of each. TensorFlow is being used by data science and machine learning teams of the world’s top companies to serve and understand customers better. For example, Airbnb uses TensorFlow to build a user-interaction model that automatically guides guests through the payment, cancellation, or refund process. With this, Airbnb provides an instant and smart response to their customers, thereby enhancing the booking experience.

TensorFlow offers standard documentation that guides you through the features, functionalities, and methodologies for implementing machine learning use cases.
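To make this concrete, here is a minimal, hedged sketch (our own illustration, not code from the article) of the handwritten digit classification use case mentioned above, using the MNIST dataset bundled with TensorFlow:

```python
import tensorflow as tf

# A small feed-forward classifier for 28x28 grayscale digit images.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# MNIST ships with TensorFlow; scale pixel values to [0, 1] before training.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=1)
print(model.evaluate(x_test / 255.0, y_test))
```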

  2. Keras

Keras is an open-source, easy-to-use Python data science library for machine learning and deep learning. It also serves as TensorFlow’s high-level API for building neural networks on deep learning models. Keras runs well on both CPUs and GPUs, enabling fast experimentation with neural network models.

Supporting multiple neural network architectures such as CNNs and RNNs in its backend, Keras helps you build high-end, complex deep learning models in less time.

Because Keras is beginner-friendly and fast during model deployment, users can develop high-end deep learning models with minimal code in less time. As of 2021, Keras is used by over one million individual users worldwide. Some of the world’s most popular companies, like Netflix, Uber, and Instacart, use Keras to analyze customer engagement and deliver a better user experience. For example, Netflix uses Keras to build recommendation systems based on past user preferences.

Keras also offers strong community support to its users by providing standard and easy-to-understand documentation, allowing any user to quickly learn and implement Keras on their own.
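As a short, hedged sketch of the CNN support mentioned above (an illustration, not code from the article), the following defines a small convolutional network with the Keras Sequential API:

```python
from tensorflow import keras
from tensorflow.keras import layers

# A compact convolutional network for 32x32 RGB images.
model = keras.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),
])
model.summary()  # prints each layer's output shape and parameter count
```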

  3. OpenCV

Originally developed by Intel, OpenCV is among the most popular Python artificial intelligence libraries for building real-time computer vision, machine learning, and image processing applications. OpenCV is written in C++ and comprises over 2,500 optimized algorithms. Being one of the most widely used libraries, OpenCV allows you to implement computer vision applications like video processing, image recognition, object detection, motion tracking, and much more. It is a cross-platform package that supports various programming languages, including Python, Java, and C++.

Since OpenCV is an open-source and cross-platform library, it can be used across many operating systems, including Windows, macOS, and Linux. OpenCV even runs on Android and iOS, enabling users to build computer vision-based mobile applications. Because of its high-end features and functionalities, prominent companies like Google, Microsoft, Honda, and Toyota use OpenCV to build models for real-time computer vision applications.

OpenCV is well documented, with example code for its various methods and functions that is easy to understand and beginner-friendly.
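For illustration, here is a minimal sketch of an image processing pipeline with OpenCV (the file paths are placeholders we chose, not part of the article):

```python
import cv2

# Load an image, convert it to grayscale, and detect edges with Canny.
image = cv2.imread("input.jpg")          # substitute your own image path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, threshold1=100, threshold2=200)
cv2.imwrite("edges.jpg", edges)          # save the edge map to disk
```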

  4. PyTorch

Developed and released by Facebook’s AI research group in 2016, PyTorch is one of the most popular open-source Python data science libraries. PyTorch mainly focuses on deep learning applications, including image classification, handwriting recognition, and time series forecasting. Being highly compatible with the Python programming style, PyTorch is one of the go-to Python data science libraries for implementing complex neural network use cases. Developers use PyTorch for designing complex and high-end deep learning models because of its fast and flexible experimentation feature. 

PyTorch has a distributed training feature that allows developers to distribute the computational tasks among multiple CPUs or GPUs, enabling parallel processing to boost productivity. PyTorch also has a large community of researchers and ML developers who regularly build new tools and libraries to extend the functionality of PyTorch. In addition, some of the most popular companies like Microsoft, Disney, and OpenAI use PyTorch for scaling and optimizing their AI systems. 

PyTorch offers clear, thorough documentation that helps users easily understand its features, functionalities, and use cases.
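As a hedged illustration of PyTorch’s style (our own minimal sketch with random data, not code from the article), here is a tiny network and a single training step:

```python
import torch
import torch.nn as nn

# A minimal fully connected regression network.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(64, 10)  # a batch of 64 random inputs (illustration only)
y = torch.randn(64, 1)   # random targets

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()          # autograd computes gradients for all parameters
optimizer.step()
print(loss.item())
```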

  5. Scikit-Learn

Scikit-Learn, also called sklearn, is one of the most popular Python data science libraries, comprising a variety of supervised and unsupervised algorithms for building machine learning models. With Scikit-Learn, you can access algorithms for machine learning use cases including regression, classification, and clustering.

Scikit-Learn has a rich set of functions and modules that let you seamlessly perform all machine learning-related tasks, from loading the dataset to building models to evaluating metrics. With this Python library, users can perform machine learning operations with minimal code instead of writing complex algorithms from scratch.

Scikit-Learn is thoroughly documented and has a vast research community to which individuals can contribute their newly developed algorithms.
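To show the minimal-code workflow described above, here is a hedged sketch (our own illustration) that trains and evaluates a classifier on the Iris dataset bundled with the library:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a toy dataset, split it, fit a model, and score it.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```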

  6. Seaborn

Seaborn is one of the most popular Python libraries for data visualization and exploratory data analysis. With Seaborn, you can create high-level, attractive statistical plots in different styles and colors. In other words, based on your data, Seaborn lets you create aesthetic and informative plots, including scatterplot, lineplot, and displot. Many other Python data science libraries handle data visualization and exploration, but Seaborn is widely used among data analysts and data scientists because of its unique features.

Seaborn eases the process of data visualization: you just pass your dataset into a Seaborn function to instantly get insight into it. With its high-level interfaces and customizable themes, Seaborn lets you easily tailor plots and charts to your liking and use case.

Seaborn is so well documented that even beginners can quickly learn it and start implementing data visualizations in Python.
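As a quick illustration of the pass-your-dataset workflow (a sketch using the sample “tips” dataset that Seaborn downloads on demand):

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Load a sample dataset and draw a scatterplot colored by day of week.
tips = sns.load_dataset("tips")
sns.scatterplot(data=tips, x="total_bill", y="tip", hue="day")
plt.show()
```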

  7. Pandas

Pandas is one of the most straightforward yet powerful Python data science libraries for performing data analysis. It is also one of the most popular open-source libraries for implementing data manipulation and wrangling operations. Pandas provides an easy syntax for performing all data-related analytics operations, making it easier to manipulate and understand data. In other words, you can not only manipulate data but also load, clean, prepare, merge, join, reshape data, and much more. 

With Pandas, you perform operations on data represented in a 2D tabular format, i.e., rows and columns. Such two-dimensional tables of rows and columns are called dataframes. Because the dataset is represented as a dataframe, it is easy to fetch and manipulate the data according to your use case. Pandas can read data from file formats like text, CSV, JSON, and xlsx and convert it into dataframe format for further analysis.

Pandas has thorough documentation covering all its features and functionalities, making it easy for beginners to quickly learn the library and start implementing data operations.
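For illustration, a minimal sketch of the filter-and-aggregate operations described above (the data and file name are placeholders we invented):

```python
import pandas as pd

# Build a small dataframe, then filter rows and aggregate by group.
df = pd.DataFrame({
    "city": ["Delhi", "Mumbai", "Delhi", "Chennai"],
    "sales": [120, 95, 143, 80],
})
print(df[df["sales"] > 100])               # rows with sales above 100
print(df.groupby("city")["sales"].sum())   # total sales per city

# Reading a file into a dataframe works the same way, e.g.:
# df = pd.read_csv("data.csv")  # "data.csv" is a placeholder path
```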

  8. NumPy

NumPy stands for Numerical Python; it allows you to perform logical and mathematical operations on arrays. In other words, NumPy is an open-source Python library that provides multi-dimensional array objects and a set of routines for mathematical, logical, and statistical operations, enabling fast computation on arrays.

Since NumPy comprises pre-defined, high-level mathematical functions, you can quickly solve complex math problems without implementing the routines from scratch. Additionally, when combined with SciPy, a scientific library, and Matplotlib, a visualization library, NumPy can effectively replace MATLAB, a technical computing package.

NumPy has excellent documentation covering all its functions and methodologies, making it easy for beginners to understand and implement complex math operations.
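A brief sketch of the array routines mentioned above (our own illustration):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.arange(4).reshape(2, 2)

print(a + b)        # element-wise addition
print(a @ b)        # matrix multiplication
print(np.mean(a))   # a statistical routine
print(np.sqrt(a))   # an element-wise mathematical function
```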

  9. NLTK

NLTK stands for Natural Language Toolkit, an open-source Python library for performing Natural Language Processing (NLP) operations on human language data. In other words, NLTK is a suite of libraries and programs for language processing operations on text, including tokenization, stemming, and lemmatization.

With NLTK, you can also produce visualizations and graphical demonstrations of text data, making it easier to understand and analyze the patterns behind texts and sentences. Furthermore, NLTK is a community-driven project that serves all AI enthusiasts, including linguists, ML engineers, and researchers.

NLTK offers you standard documentation that incorporates all the functions and methods for implementing all NLP-related operations.
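As a hedged sketch of the tokenization and stemming operations mentioned above (the sentence is our own example; the “punkt” tokenizer model is downloaded once):

```python
import nltk
from nltk.stem import PorterStemmer

nltk.download("punkt")  # one-time download of the word tokenizer model

tokens = nltk.word_tokenize("The cats are running quickly.")
stemmer = PorterStemmer()
print([stemmer.stem(token) for token in tokens])  # e.g., "running" -> "run"
```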

  10. Beautiful Soup

Beautiful Soup is an open-source Python data science library for web scraping, used to pull data out of HTML and XML files. Web scraping is the practice of collecting data from the internet using various frameworks and tools.

With Beautiful Soup, you can extract all the data from a website or filter only the specific elements of interest. After collecting data from a website, you can store it in formats like CSV or text; such files can then be loaded and converted into dataframes using Python for any data-related operations.

Beautiful Soup is thoroughly documented, covering all of its web scraping syntax, methodologies, and functions.
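To illustrate, here is a minimal scraping sketch (the URL is a placeholder; we pair Beautiful Soup with the requests library to fetch the page):

```python
import requests
from bs4 import BeautifulSoup

# Fetch a page and print the target of every link on it.
html = requests.get("https://example.com").text   # placeholder URL
soup = BeautifulSoup(html, "html.parser")

for link in soup.find_all("a"):
    print(link.get("href"))
```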


MeitY’s Data Policy unlocks Government Data for all


India’s Ministry of Electronics and Information Technology (MeitY) recently published a data policy draft for public consultation that makes government data available for all citizens. 

The newly published draft ‘India Data Accessibility and Use Policy 2022’ mentions that except for a few exclusions, all data collected, generated, and retained by government ministries and departments will be accessible and shareable, and all government agencies must adhere to these new standards. 

India Data Accessibility and Use Policy 2022 states, “This policy will be applicable to all data and information created/generated/collected/archived by the Government of India directly or through authorized agencies by various ministries/departments/organizations/agencies and autonomous bodies.”

Read More: China develops new Quantum Computing Programming Software isQ-Core 

A new regulatory body, the India Data Council (IDC), and a new agency, the India Data Office (IDO), will be formed to ensure that the new norms are followed in the country. The Indian government mentioned that the IDC would be made up of the IDO and data officers from five different government ministries.

Meanwhile, the IDO will be set up by MeitY to streamline and consolidate data access and sharing across the government and other stakeholders. With this new data policy, the government aims to drastically improve India’s ability to use public-sector data for large-scale social change.

Experts believe that it is crucial for the country to use gathered public data effectively and efficiently to make India a trillion-dollar economy in the coming years. 

This new policy will considerably help startups and other organizations harness the power of quality data to bring innovations and provide better services through data licensing, sharing, and valuation within the frameworks of data security and privacy. 

Additionally, the draft clearly states that government bodies must define the data retention period for transparency. “A broad set of guidelines would be standardized and provided to help ministries and departments define their data retention policy,” the draft mentioned. 

However, the policy has also drawn criticism on multiple counts, one of which is its monetization model. Salman Waris of TechLegis said, “This policy may also see a big pushback from big tech firms as their business models are based on monetizing this kind of large-scale data.”


China develops new Quantum Computing Programming Software isQ-Core


The Institute of Software of the Chinese Academy of Sciences has unveiled new quantum computing programming software named isQ-Core. Researchers announced that isQ-Core has been deployed on a CAS quantum computing cloud platform, currently China’s largest in terms of hardware scale.

According to its principal developer, the Chinese Academy of Sciences’ Institute of Software, isQ-Core represents a significant step forward in the merging of home-grown quantum computing hardware and software. As per the institute, isQ-Core offers simplicity, ease of use, great efficiency, solid scalability, and high reliability. In an official statement released on Thursday, the institute said the software will help scientists conduct quantum computing theory and applied research.

With the recent debuts of “Jiuzhang,” “Zuchongzhi,” and “Zuchongzhi 2,” China is on an accelerating course to dominate the quantum computing industry. Quantum computers, like conventional computers, require software to manage hardware devices, run applications, and provide a user interface. However, due to the fundamental difference between quantum software and classical software, the corresponding quantum software tools are more complex and difficult to design.

The Chinese Academy of Sciences’ Institute of Software previously developed the isQ platform, which contains a number of tools such as quantum programming, compilation, simulation, analysis, and verification. The platform’s primary purpose is to compile a high-level source language into a low-level intermediate representation (IR). Its functions are divided into four sections: the compiler, simulator, model verification tool, and theorem prover.

Read More: Microsoft’s Azure Quantum to receive Rigetti Superconducting Quantum Computers next Year

The isQ platform’s compiler can turn a quantum program written in a high-level language into an instruction set language, which can then be passed on to follow-up tools like simulators and model checking tools for processing. The simulator can replicate the execution of quantum programs on a conventional computer and display the results, which is useful in the early stages of quantum program design and testing. Model-checking tools can be used to investigate a variety of quantum system features. The theorem prover implements the quantum Hoare logic proposed by the team. It is currently the only platform in the world capable of validating the accuracy of a quantum program. 

The Institute of Software of the Chinese Academy of Sciences believes that both isQ-Core and the isQ compiler tools can lead to fruitful new developments in the quantum computing industry.


Fractal Analytics to go Public, targets $2.5 billion Valuation


India-based artificial intelligence company Fractal Analytics is all set to go public via an initial public offering by the second half of this year. 

According to company officials, Fractal Analytics is expecting to hit a valuation of $2.5 billion. People familiar with the situation mentioned that the firm aims to dilute 15-20% of its stock in an IPO that might be a combination of primary and secondary offerings. 

The ongoing COVID-19 pandemic has changed how many companies work, with organizations shifting to cloud-based solutions; this has given data service providers, including Fractal Analytics, a massive boost in growth.

Read More: Meta’s Social VR platform Horizon hits 300,000 Users

Economic Times reported that Fractal Analytics had hired JP Morgan, Morgan Stanley, and Kotak Mahindra Capital to manage the fundraising for the IPO. 

Earlier this year, Fractal Analytics became the second Indian company to achieve unicorn status in 2022 after receiving $360 million funding from TPG Capital Asia. According to the company, the transaction of the funds is planned to close by the first quarter of this year. 

Indian artificial intelligence company Fractal Analytics was founded by Pranay Agrawal and Srikanth Velamakanni in 2000. The enterprise specializes in developing multiple artificial intelligence-powered products like Qure.ai and Crux Intelligence that help businesses make better strategic decisions. 

Fractal now employs over 3,500 people worldwide and has operations in 16 countries, including the United States, Singapore, and Australia. Recently, Fractal also acquired data analytics services firm Neal Analytics to further expand its cloud AI offerings.

“They have built a great client-centric, people-oriented culture and have an impressive track record of solving and scaling AI engineering challenges, especially on the Microsoft platform, for marquee clients,” said Co-founder and group chief executive of Fractal, Srikanth Velamakanni, regarding the acquisition. 


Steel Authority of India Wins AI World Awards in Manufacturing category


The Steel Authority of India (SAIL) has won the AI World Award for its outstanding performance in the manufacturing category.

AI World Awards is an Indian platform that honors AI solutions, AI design, software, new product development, research, education, and service providers from a variety of industries and specialties. 

In the presence of top industry pioneers with AI expertise, the distinguished award was presented to Sanjeev Kumar, Chief General Manager-in-Charge, SAIL-IISCO Steel Plant. Amarendu Prakash, Director-in-Charge, congratulated Sanjeev Kumar and his entire team on achieving this milestone.

Read More: H2O.ai launches Deep Learning Training Engine H2O Hydrogen Torch

Apart from manufacturing, the AI World Awards also included multiple other sectors, including agriculture, healthcare, retail, media, finance, legal, gaming, aerospace, automobile, and several more. 

The AI World Awards is a platform that brings together AI service providers, software designers, software engineers, and end-users to celebrate their achievements. It is a purposeful attempt to bring attention to the work that individuals and businesses are doing with artificial intelligence technology. 

The Steel Authority of India was founded in 1954, and is currently the leading steel manufacturing company in the country. 

The company produces steel not only for domestic use but also for large-scale sectors like railways, power, automotive, and defense. Additionally, SAIL is one of the most prominent steel exporters in the country.

SAIL stated in a post that the company had one of its best physical performances in the quarter and nine months ending December 31, 2021. According to the company, its net profit grew by 12% in the third quarter of 2021, making its net profit more than Rs. 9,500 crore.
