
Euler Digital acquires Independent AI Research Platform, AI Forum


Cayman Islands-based artificial intelligence company Euler Digital has announced that it has acquired AI Forum, an independent artificial intelligence research platform, in a recent deal.

Euler Digital has acquired a 100% stake in AI Forum and plans to use the company's expertise to drive breakthroughs in the artificial intelligence industry. Euler Digital decentralizes artificial intelligence by deploying a worldwide, open ecosystem for data, models, apps, computing, and products built on blockchain technology.

Euler Digital Blockchain (EDB) peer-to-peer payment rails open up a world of developing technologies, many of which are meant to improve the quality of life for millions of people who suffer from debilitating health problems. 


The company was founded by Ian Gilmour in 2021. Its innovations include marketplace technologies and novel value transfer models, such as AI non-fungible tokens, intended to make artificial intelligence more accessible, equitable, and affordable. Euler Digital raised more than $270,000 over its pre-seed and seed funding rounds.

Founder of Euler Digital, Ian Gilmour, said, “As soon as we got a sniff of this deal, we knew we had to add this world-class platform to our growing portfolio. On behalf of the staff and Directors of Euler Digital, we want to thank Peyman for entrusting us with his baby.” 

He further mentioned that adding blockchain expertise opens up new possibilities in domains like EdgeAI, and they’re excited to announce the first new AI Forum relationship with Cudos. 

The firm also plans to launch its new decentralized cloud computing platform this year. Interested individuals can visit the official website to register for a free testing program and also submit their feedback.  

“AI Forum’s customers will benefit from the addition of blockchain AI knowledge, which will inevitably drive new insights and discoveries. It has been a terrific experience, and I want to thank all our customers and advisory board members for their support,” said the founder of AI Forum, Peyman Mestchian. 


Africa’s first AI research center Inaugurated in Republic of Congo 


The United Nations Economic Commission for Africa (UNECA) announces the launch of the region’s first artificial intelligence research center in the Republic of Congo. 

The newly established artificial intelligence research center will be housed at Denis Sassou Nguesso University in Kintélé, north of the capital city, Brazzaville.

The research center was jointly developed by UNECA and the Government of the Republic of Congo to advance artificial intelligence research in the areas of digital policy, infrastructure, finance, skills, digital platforms, and entrepreneurship, and thereby accelerate digital technology across Africa.


According to Congolese officials, the center plans to conduct multiple research projects in the 2022-2023 academic year in areas including neural networks, computer vision, machine learning, VR/AR, natural language processing, and robotics.

UNECA, in a statement, said, “The center will be the first of its kind in Africa, and it will provide a regional hub for the development of emerging technologies in the region.” 

The African Research Center on Artificial Intelligence, which the ECA and other partners are supporting, will provide the required technology education and skills to enhance Africa’s integration, as well as contribute to inclusive economic growth and job creation. 

In addition, the facility will work to alleviate poverty, support the continent's socio-economic development, and ensure that Africa has access to the latest digital management tools.

“This institution aims to undertake cutting-edge research on artificial intelligence by focusing on a human-centered approach to maximize benefits and counter development challenges,” said Congolese Minister of Posts, Telecommunications and Digital Economy, Leon Juste Ibombo. 

The AI research center is part of a larger strategy to make Africa the world’s future powerhouse by 2063. Apart from mainstream academics and research, the AI research center will provide certified online training to Africans and also offer an introductory training course on robotics and AI through the green classroom for primary and secondary school students.


Will cryptocurrency play an influential role in Ukraine's future amid the Russian invasion?


The geopolitical tensions over Russia’s invasion of Ukraine have been soaring in the past few weeks. Amid these strained situations, on Thursday, the National Bank of Ukraine released several decisions in connection with the country’s current martial law. Among these, the National Bank of Ukraine has instructed electronic money (e-money) issuers to halt e-money issuance and e-wallet refilling. The written ruling further said that e-money distribution was temporarily prohibited. 

Here, e-money refers to fiat currency maintained in digital form, such as that held in a PayPal account or a digital bank through a cash app. While the reason for the central bank’s e-money ban is unclear, it appears to be a wise move to protect Ukraine’s financial institutions from cyber-attacks and prevent cash outflows during the crisis. It’s unclear if this applies to cryptocurrency or other digital money, which are classed as ‘virtual assets’ under Ukrainian law. Last year, the central bank was granted authorization to launch a central bank digital currency (CBDC). The restrictions also halt the foreign exchange market, restrict withdrawals from customers’ bank accounts, and prohibit foreign currency withdrawals from customer accounts.

According to Chainalysis' most recent Global Crypto Adoption Index, Ukraine ranks fourth, trailing only Vietnam, India, and Pakistan, with roughly US$8 billion in bitcoin passing through the country each year. With a market capitalization of about $80 billion, Tether is the most popular stablecoin and does not experience the volatility of bitcoin, Ethereum, and other cryptocurrencies. As a result, for a nation whose citizens are accustomed to dealing with the dollar as a reserve currency, Tether is a favorite.

Almost a week earlier, in a tweet, Ukraine's Vice Prime Minister Mykhailo Fedorov said that the country's Parliament had passed a bill on virtual assets that will legalize cryptocurrencies. Fedorov further stated that the move would protect Ukrainians' assets from exploitation or fraud. According to a study published by blockchain researcher Elliptic, the move came as Bitcoin contributions to Ukrainian volunteer and hacker organizations supporting the government skyrocketed amid fears of a possible Russian strike. On February 25 alone, one of these organizations, 'Come Back Alive,' received almost US$400,000. As per the latest update, total donations are pegged at US$13 million.

Last September, Ukraine’s Parliament instituted legislation legalizing cryptocurrencies. However, President Volodymyr Zelenskyy vetoed it the following month. According to Zelenskyy, Ukraine could not afford to build a new regulatory structure to manage cryptocurrency.

This time, following Zelenskyy's recommendation, the revised version of the legislation on virtual assets places crypto regulation under the supervision of the National Commission on Securities and Stock Market.

Notably, the bill excludes Bitcoin and other cryptocurrencies from the definition of legal tender. Ukraine has not followed El Salvador’s lead, which made Bitcoin a legal tender in September of last year. However, the new law will provide enterprises operating in what was previously a ‘legal limbo’ some peace of mind.

Despite the absence of institutional regulation in the past, Ukraine has established itself as Europe’s main crypto hub. The New York Times reports that the Eastern European country handles more cryptocurrency transactions every day than it does in its fiat currency, the hryvnia. Officials in the government believe that the new law would encourage more foreign investment into Ukraine’s developing crypto economy, which is presently being overshadowed by Russia’s invasion.


The military invasion of Ukraine by Russia has had a detrimental influence on global markets, including the cryptocurrency industry, which lost more than US$200 billion in value on Thursday. In the meantime, the Russian ruble has fallen to its lowest levels against the US dollar since 2016. Bitcoin and other major cryptocurrencies have declined by almost 10%. This freefall suggests that Bitcoin, previously considered a safe-haven asset like gold, is now subject to the same external pressures that routinely move equity markets.

While the Ukrainian government may be striving to block cash outflows, the Russian government has suggested that it will take severe measures in response to growing economic penalties and reports of western nations freezing Russian nationals' overseas assets. In parallel, on Sunday, Fedorov took to Twitter to urge major crypto exchanges to block the addresses of Russian users.

Today, cryptocurrency fundraising has become increasingly prominent for various social causes. Earlier, the Ukrainian government had tweeted its Bitcoin and Ethereum wallet addresses, seeking donations for its resistance against Russia.

Unfortunately, at the same time, cybercriminals aren’t taking a break and have started taking advantage of such events, including the Ukraine invasion. Elliptic says one social media post was found to copy a legitimate tweet from an NGO, but with the author swapping the Bitcoin address, presumably with one of their own. 


UAE Students offered Internships in Cryptocurrency and AI


Students from the United Arab Emirates are receiving next-generation internship opportunities in fields like artificial intelligence and cryptocurrency. 

The opportunity will go a long way toward making students industry-ready for their future careers. Under the plan, digital internships at major global firms such as HSBC, Uber, Weiss Asset Management, KPMG, and Dentons will be offered to students aged thirteen to twenty in the United Arab Emirates.

Crimson Education, a higher education consultancy in the UAE, is in charge of executing this one-of-a-kind internship program. This significant step by the UAE aims to train a group of highly skilled workers in specialized areas for an ever-changing employment landscape.


Students will focus on three competencies: the first covers cryptocurrencies and gives an understanding of how the industry operates, the second focuses on blockchain, and the third is about entrepreneurship.

Regional Director of Crimson Education, Soraya Beheshti, said, “We believe that young people tend to get sidelined a lot because of age, and tend to be told that your age is really a limitation, you shouldn’t be working, you shouldn’t be doing this, you’re not experienced enough, you have to pay your due.” 

She further added that she wants young people to believe they belong and deserve a seat at the table. Crimson Education mentioned that the cost of the training varies depending on how much assistance the intern requires, but it can cost as much as $4,900 for the most comprehensive instruction. 

Additionally, the best interns will be hired as paid analysts by Weiss Asset Management, a worldwide investment company based in the United States.

Siddhant Tandon, an Indian student at Dubai International Academy, received an internship opportunity with PwC Australia, which he secured through a Crimson Education competition.

While talking about the internship opportunity, he said that before starting college, every student should do at least one internship to gain experience in the workplace. 


Intel expands its AI Developer Toolkit OpenVINO


Global semiconductor manufacturing giant Intel unveils new capabilities of its AI developer toolkit OpenVINO to bring more intelligence to the Edge. 

The recently released update to the OpenVINO toolkit introduces significant improvements in artificial intelligence inferencing performance and has recorded a greater than 40% increase in developer downloads compared with the previous generation.

OpenVINO, which debuted in 2018 with a focus on computer vision, now with updates, supports a more extensive range of deep learning models, including audio and natural language processing. 


According to Intel, the new generation of OpenVINO also includes a new and enhanced optimization process that automatically detects available compute devices and accelerators. The toolkit then dynamically load-balances and increases artificial intelligence parallelization based on available memory and compute capacity.

The upgrade also includes an enhanced and simplified API that makes it easier to import TensorFlow models and dramatically improves code portability. 
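To give a sense of what working with the toolkit looks like, here is a minimal, hypothetical Python sketch of loading and running a converted model with OpenVINO's runtime API; the model path, input shape, and device name are placeholders, and the exact API surface varies between OpenVINO releases.

```python
# Hypothetical OpenVINO inference sketch; "model.xml" is a placeholder for an IR model
# produced by the Model Optimizer (e.g., converted from a TensorFlow model).
import numpy as np
from openvino.runtime import Core

core = Core()                                  # discovers available devices (CPU, GPU, ...)
model = core.read_model("model.xml")           # read the converted network
compiled = core.compile_model(model, "AUTO")   # "AUTO" lets the runtime pick and balance devices

request = compiled.create_infer_request()
dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)  # shape depends on the model
results = request.infer({0: dummy_input})      # mapping of output nodes to result arrays
```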

Intel also discussed its role in powering the software-defined transformation of the network and edge with industry leaders, including communication service providers, telecommunications equipment manufacturers, and Internet of Things (IoT) companies such as American Tower, AT&T, BT, Ericsson, Verizon, Rakuten Mobile, and Zeblok.

Intel Senior Fellow and senior vice president of the Network and Edge Group, Nick McKeown, said, “As we usher in a new era of innovation in network and edge transformations, now, more than ever, this evolution is being driven by the need for more control, adaptability, and scalability to provide those who build and operate infrastructure the ability to quickly introduce new capabilities.” 

He further added that they aim to supply programmable hardware and open software through the broadest ecosystem possible, in a way that best serves their customers and partners in the future, and they will drive the next generation of cloud-to-network infrastructure together. 

To date, thousands of developers have used OpenVINO to deploy AI workloads at the Edge, including industry-leading companies like BMW, Audi, Samsung, John Deere, and many more. 


NASSCOM CoE – IoT and AI launches Healthcare Innovation Challenge 3


NASSCOM Center of Excellence (CoE) - IoT & AI announces the launch of the third edition of its Healthcare Innovation Challenge (HIC). Hospitals, diagnostic chains, insurance firms, pharma companies, government representatives, technology enterprises, and deep tech startups will participate in the newly launched NASSCOM CoE program.

The use cases in this third series of HIC are automated credit business settlement, inpatient volume prediction based on outpatient volume, prescription digitalization using voice recognition, early detection of microbes, comprehensive patient care, OPD automation, preventive health checkup tracking, artificial intelligence-based surgical video recording and reporting, and cashless OPD expense management.


Use Case sponsors Apollo Hospitals, HealthCare Global Enterprises, Max Healthcare, Hinduja Hospital, Cygnus Hospital, Aditya Birla Health Insurance, and others participated in panel discussions at the launch event. 

The previous editions of this program were highly appreciated and received participation from the healthcare industry. However, the newly announced third series of HIC has piqued the interest of insurance and technology companies as well. 

The Vice President of Imaging System Software at Wipro GE Healthcare India said, “Healthcare is going through a rapid transformation and providing comprehensive patient care connecting hospital systems, laboratory diagnostics, physicians, etc. is the need of the hour. To provide timely diagnostics and treatment, connecting these functions remotely is crucial.” 

He further added that using specialized technologies like artificial intelligence, deep learning, safe data management, telemedicine, telemonitoring, and digital solutions can deliver an all-around patient experience. 

Over the last few years, NASSCOM has taken many initiatives to promote research and deployment of artificial intelligence technologies in India that have helped not only students but also numerous tech startups to scale their businesses. 

HIC will accelerate broad-based participation and deployment, boosting the influence of digital technology in the healthcare sector and supporting the Ministry of Electronics and Information Technology's (MeitY) aim of a $1 trillion digital economy.


Microsoft to deliver Comprehensive Protection with Multi-cloud Capabilities


Catering to the increasing needs, Microsoft is extending the native capabilities of Microsoft Defender for Cloud to the Google Cloud Platform to provide better security to its customers.

Customers will benefit from the new capabilities in improving visibility and control across multiple cloud providers, workloads, devices, and digital identities from a centralized management perspective. With this new addition, Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP) will all have native multi-cloud protection. 

As businesses embrace multi-cloud strategies, it is vital that their security solutions reduce complexity and provide comprehensive protection.


Vasu Jakkal, Corporate Vice President, Security, Compliance, Identity, and Management at Microsoft, said, “Support for GCP comes with out-of-box recommendations that allow you to configure GCP environments in line with key security standards like the Center for Internet Security (CIS) benchmark, protection for critical workloads running on GCP, including servers, containers and more.” 

She further added that in this multi-cloud, multi-platform world, security operations must assess emerging cyber dangers and detect potential blind spots across a wide range of users, devices, and destinations. 

Microsoft also announced the release of a public preview of CloudKnox Permissions Management. It is a unique platform that provides total visibility into users’ identities and workloads across clouds. 

The platform includes automated features that enforce least privilege access consistently and use ML-powered continuous monitoring to identify and remediate suspicious activity. 

“IT teams lack visibility into identities and their permissions and struggle with ever-increasing permission creep. These challenges require a comprehensive, unified solution for full visibility and risk remediation,” said Alex Simons from Azure.


Top 8 Deep Learning Libraries


In today's digital era, artificial intelligence is advancing rapidly, with deep learning as its primary driver. Since deep learning is a subfield of artificial intelligence, most AI tasks and applications involve deep learning models. Deep learning works similarly to the human brain, which perceives and transmits information through countless neuron interactions. The applications of deep learning include image processing, text classification, object segmentation, natural language processing, and much more. To build such high-end applications and use cases, you have to employ appropriate deep learning libraries at different phases of an end-to-end deep learning model development lifecycle. There is a vast collection of libraries available for implementing deep learning tasks, from which you can select the most suitable and efficient library based on your use cases and business models.

This article mainly focuses on the top 8 deep learning libraries that are primarily used by developers at different phases of the deep learning lifecycle.

1. Keras 

Keras is one of the most prominent open-source libraries, used mainly for implementing deep learning tasks. It initially started its journey as a Google project named ONEIROS (Open-Ended Neuro-Electronic Intelligent Robot Operating System), aimed at enabling faster experimentation with neural networks. In 2017, Keras was added to Google's TensorFlow machine learning framework, making it a high-level API for building and training deep learning models.

Since Keras runs on top of the TensorFlow framework, its APIs can be used to run both machine learning and deep learning tasks effectively. Keras scales across GPUs and CPUs, enabling complex neural network models to be developed with less computation time. Because of these features, Keras empowers researchers and engineers to fully exploit its scalability and cross-platform capabilities and to achieve high accuracy and performance while building deep learning models. In addition, Keras is used by popular companies like YouTube, NASA, and Waymo because of its industry-strength performance and scalability.

Keras is compatible with Python 3.6 through 3.9 and runs on Windows, Ubuntu, and macOS. Since Keras is an open-source project, it offers strong community support through forums, Google groups, and Slack channels. Keras also provides straightforward and well-structured documentation, allowing beginners to easily learn and implement deep learning tasks.
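As a quick illustration of the high-level API described above, here is a minimal Keras sketch of defining, compiling, and fitting a small classifier; the layer sizes and the synthetic data are placeholders rather than a recommended architecture.

```python
# A minimal Keras sketch: a small fully connected classifier on synthetic data.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic data stands in for a real dataset.
x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 10, size=(256,))
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```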

2. TensorFlow

Developed by the Google Brain team, TensorFlow is an open-source Python library for high-level numerical computation and large-scale deep learning. Initially, TensorFlow was used only internally at Google; it was made open source under the Apache License 2.0 in 2015. Although the TensorFlow framework is primarily used to build deep learning models, it also offers flexible tools and libraries for building end-to-end machine learning models.

With TensorFlow, you can not only build machine learning and deep learning models but also perform probabilistic reasoning, predictive modeling, and statistical analytics. Since TensorFlow ships with high-level APIs such as Keras, it can be used effectively at any phase of the model development life cycle. In addition, since TensorFlow supports cross-platform deployment, you can easily build and deploy deep learning models on any production platform, such as cloud or on-premises systems.

TensorFlow is compatible with macOS, Windows, 64-bit Linux, and mobile platforms, including Android. Its official documentation clearly explains the library's features, functionalities, and implementation methodologies, helping developers and researchers work with TensorFlow effectively.
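For a flavor of the lower-level API alongside the high-level ones mentioned above, the short sketch below shows eager tensors and automatic differentiation; the function being differentiated is purely illustrative.

```python
# A small TensorFlow sketch: eager execution and automatic differentiation.
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x          # y = x^2 + 2x
grad = tape.gradient(y, x)         # dy/dx = 2x + 2 -> 8.0 at x = 3
print(float(grad))
```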

3. PyTorch

PyTorch is one of the most popular open-source deep learning libraries, developed by Facebook's AI research team in 2016. The library takes its name from Torch, a scientific computing and scripting framework written in the Lua programming language. However, Lua is a difficult language to learn and does not offer enough modularity to interface with other libraries. To remove these complications, Facebook's researchers reimplemented the Torch framework in Python, naming it PyTorch.

PyTorch not only allows you to implement deep learning tasks but also enables you to build computer vision and NLP (natural language processing) applications. In addition, its primary features include tensor computation, automatic differentiation, and GPU acceleration, which make it stand apart from other top deep learning libraries.

You can flexibly run PyTorch on Linux, Windows, macOS, and any of your preferred cloud computing platforms. PyTorch also offers you standard documentation that specifies its features, functionalities, and algorithms, allowing any user to learn and try implementing deep learning models on their own.
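The following minimal sketch shows the tensor computation and automatic differentiation mentioned above, applied to a tiny linear model; the dimensions and data are synthetic placeholders.

```python
# A minimal PyTorch sketch: one gradient step on a tiny linear model.
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                              # a single linear layer
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.randn(16, 4)                               # synthetic inputs
y = torch.randn(16, 1)                               # synthetic targets

loss = loss_fn(model(x), y)
loss.backward()                                      # automatic differentiation
optimizer.step()
print(loss.item())
```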

4. MXNet

Developed under the Apache Software Foundation, MXNet is an open-source deep learning library for Python that allows you to define, train, build, and deploy deep neural networks. With MXNet, you can develop and deploy deep learning models on any platform, including cloud infrastructure, on-premises systems, and mobile devices. Since MXNet is highly scalable and supports distributed training, it can be seamlessly scaled across multiple GPUs and machines to achieve fast model training and high performance.

MXNet supports a wide range of programming languages, including Python, C++, R, Julia, Scala, JavaScript, and MATLAB, eliminating the need to learn new languages to work with specific frameworks. Since MXNet is language independent, you can build portable and lightweight neural network representations that can run on low-powered devices with limited memory, such as the Raspberry Pi and other single-board computers. Because of these features, MXNet is used and supported by prominent organizations such as Amazon, Baidu, Intel, and Microsoft.

MXNet also has a large community, with discussion forums, opportunities to collaborate with other researchers, and tutorials and documentation for learning the library's features and functionalities.
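A compact sketch of MXNet's Gluon API is shown below: one gradient step on a single dense layer with synthetic data; the layer size and learning rate are illustrative only.

```python
# A small MXNet Gluon sketch: a dense layer trained for one step on synthetic data.
import mxnet as mx
from mxnet import nd, autograd, gluon

net = gluon.nn.Dense(1)
net.initialize()

x = nd.random.uniform(shape=(8, 4))
y = nd.random.uniform(shape=(8, 1))

loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})

with autograd.record():                    # record operations for autograd
    loss = loss_fn(net(x), y)
loss.backward()
trainer.step(batch_size=x.shape[0])
print(loss.mean().asscalar())
```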

5. Microsoft CNTK

Released by Microsoft in 2016, CNTK (Cognitive Toolkit), previously known as Computational Network ToolKit, is an open-source deep learning library used to implement distributed deep learning and machine learning tasks. With the CNTK framework, you can easily combine the most popular predictive models like CNN (Convolutional Neural Network), feed-forward DNN (Deep Neural Network), and RNN (Recurrent Neural Network) to effectively implement end-to-end deep learning tasks. 

Although CNTK is primarily used to build deep learning models, it can also be used for machine learning tasks and cognitive computing. Though CNTK's core is written in C++, it supports a range of programming languages, including Python, C#, and Java. Furthermore, you can use CNTK by importing it as a library into your preferred development framework, using it as a standalone deep learning tool, or launching it on cloud platforms. Due to its platform compatibility and performance, CNTK is used by prominent companies such as Cyient and Raytheon.

CNTK provides standard documentation and is also available as an open-source repository on GitHub, making it easier for developers and researchers to learn and implement high-level deep learning methodologies.
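Below is a brief, hypothetical CNTK sketch of training a single dense layer on synthetic data; the learning-rate schedule and shapes are placeholders, and the calls shown follow the CNTK 2.x Python interface.

```python
# A brief CNTK sketch: a single dense layer trained on synthetic data (CNTK 2.x API).
import numpy as np
import cntk as C

x = C.input_variable(4)
y = C.input_variable(1)

z = C.layers.Dense(1)(x)                       # linear model
loss = C.squared_error(z, y)

lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
trainer = C.Trainer(z, (loss, loss), [C.sgd(z.parameters, lr)])

features = np.random.rand(8, 4).astype(np.float32)
labels = np.random.rand(8, 1).astype(np.float32)
trainer.train_minibatch({x: features, y: labels})
```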

6. Fastai

Developed by Jeremy Howard and Rachel Thomas in 2016, Fastai is an open-source library primarily used for building deep learning and artificial intelligence models. Since Fastai is built on top of PyTorch, users can leverage the advanced features of both frameworks, achieving highly accurate models with remarkable speed and performance. Among its other prominent features, Fastai was one of the first deep learning libraries to offer a single consistent interface for building various end-to-end deep learning applications, including computer vision, text classification, neural network, and time series models.

As its name implies, Fastai helps developers build efficient, high-level models with minimal code and fast experimentation. Fastai speeds up experimentation because it can automatically pick suitable pre-processing steps and training parameters for a given dataset, which often yields strong accuracy out of the box.

Fastai offers basic to advanced practical courses for beginners and developers. It also provides users with clear documentation, incorporating features and algorithms of the Fastai library along with their use cases.
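The sketch below, modeled on the style of fastai's own quick-start examples, fine-tunes a small image classifier on a bundled sample dataset; the dataset and model choice are illustrative, and downloading the sample data requires an internet connection.

```python
# A compact fastai sketch: fine-tuning a small image classifier on a sample dataset.
from fastai.vision.all import *

path = untar_data(URLs.MNIST_SAMPLE)              # tiny demo dataset shipped with fastai
dls = ImageDataLoaders.from_folder(path)          # 'train' and 'valid' folders are detected automatically

learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.fine_tune(1)                                # one epoch of transfer learning
```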

7. Theano

Developed in 2010, Theano is an open-source deep learning library for implementing and evaluating complex mathematical and scientific computations involving multi-dimensional arrays. With Theano, computations can be transparently executed on a GPU by configuring the device and memory settings. You can also build a highly scalable and reliable training setup by utilizing multiple GPUs across a cluster, which accelerates the training of deep learning models.

With Theano, you can express and define your model in terms of mathematical expressions and computational graphs, making it easy to evaluate and assess the training capability of the respective model. Since Theano offers developers a general-purpose computing framework for implementing complex neural network models with remarkable speed and accuracy, it is extensively utilized in the Python community, especially for deep learning research. 

Theano provides comprehensive documentation covering its functions, methodologies, and algorithms, making it easy for beginners to understand and implement deep learning techniques.
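The short sketch below shows the programming style the section describes: a symbolic expression is defined as a computational graph and compiled into a callable function; the expression itself is arbitrary.

```python
# A tiny Theano sketch: a symbolic graph compiled into a callable function.
import numpy as np
import theano
import theano.tensor as T

x = T.dmatrix('x')
y = T.dmatrix('y')
z = T.dot(x, y) + 1.0                   # symbolic computational graph

f = theano.function([x, y], z)          # compiled for CPU or (if configured) GPU
print(f(np.ones((2, 3)), np.ones((3, 2))))
```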

8. Caffe

Developed by BAIR (Berkeley AI Research), Caffe is one of the most popular deep learning libraries with a Python interface, used to implement machine vision and forecasting applications. Caffe serves as a one-stop framework for training, building, evaluating, and deploying deep learning models. With Caffe, you can build and evaluate your deep neural networks with a sophisticated set of layer configuration options. You can also access pre-made neural networks from the Caffe community website based on your use cases and model preferences.

Since Caffe can be scaled across multiple GPUs and CPUs, you can achieve greater training and processing speed, allowing you to train deep learning models in less time. Because of its features, enhanced training speed, and performance, Caffe is being used by popular organizations like Adobe, Yahoo, and Intel. 

Caffe offers you a well-documented user guide, incorporating its philosophy, architecture, methodologies, and use cases. It’s also accessible as an open-source repository on GitHub, letting users experiment with Caffe’s functions and algorithms. 
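For illustration, here is a hypothetical pycaffe inference sketch; the prototxt and caffemodel paths, as well as the 'data' input blob name, are placeholders for a network you have already defined and trained.

```python
# Hypothetical pycaffe inference sketch; file paths and blob names are placeholders.
import numpy as np
import caffe

caffe.set_mode_cpu()                                        # or caffe.set_mode_gpu()
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# Fill the input blob ('data' by convention) with a dummy array and run a forward pass.
net.blobs['data'].data[...] = np.zeros(net.blobs['data'].data.shape, dtype=np.float32)
output = net.forward()                                       # dict of output blob names to arrays
```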


QuantrolOx Secures £1.4 million Seed Funding for Development of Scalable Quantum Computing


QuantrolOx has secured £1.4 million in seed investment led by Nielsen Ventures and Hoxton Ventures to expand quantum computing. Voima Ventures, Remus Capital, Dr. Hermann Hauser, and Laurent Caraffa also participated in the round. Founded by Oxford professor Andrew Briggs, tech entrepreneur Vishal Chatrath, chief scientist Natalia Ares, and head of quantum technologies Dominic Lennon, the company aims to manage qubits within quantum computers using machine learning.

Instead of the straightforward manipulation of ones and zeros in traditional binary computers, quantum computers employ quantum bits, or qubits. A qubit can also be placed in a state known as 'superposition,' in which it represents a combination of one and zero at the same time. Thanks to superposition, two qubits can represent all four combinations of ones and zeros simultaneously rather than just one of them. This characteristic could enable a computing revolution in which future computers are capable of far more than conventional mathematical computations and algorithms.
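In standard Dirac notation (textbook notation, not specific to QuantrolOx's work), the single-qubit and two-qubit superpositions described above can be written as:

```latex
% A single qubit is a weighted combination of the basis states |0> and |1>:
\[ |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 \]
% Two qubits can be in a superposition of all four basis states at once:
\[ |\Psi\rangle = a_{00}|00\rangle + a_{01}|01\rangle + a_{10}|10\rangle + a_{11}|11\rangle \]
```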

Quantum computers also use the entanglement principle, which Albert Einstein described as “spooky action at a distance.” The fact that the state of particles from the same quantum system cannot be represented independently of each other is known as entanglement. They are still part of the same system, even though they are separated by huge distances.

QuantrolOx is developing automated, machine learning-based control software for quantum technologies that allows them to tune, stabilize, and optimize qubits. Quantum computers need thousands of qubits, yet individual qubits vary somewhat owing to imperfections in control instruments, manufacture, and design, so each one needs its own set of control parameters to be useful. Creating a functional quantum computer therefore requires a complex tuning procedure, and the problems of tuning and characterizing qubits become increasingly difficult and significant as the number of qubits grows.


QuantrolOx's software is technology-neutral, meaning it may be used with any quantum technology. For the time being, however, the company is concentrating on solid-state qubits, primarily because these are the systems to which it has access, including through a close collaboration with a Finnish lab that the company wasn't ready to reveal. Like any other machine learning challenge, QuantrolOx's approach requires a large amount of data in order to create successful machine learning models.

QuantrolOx is now focusing on forming new agreements with quantum computer manufacturers. These are significant collaborations since the team not only requires physical access to the equipment but also the source code that controls them in order to interact with these systems.


OECD Framework to augment National AI Strategies


The Organization for Economic Co-operation and Development (OECD) has created a user-friendly framework to analyze AI systems from a policymaking viewpoint. The framework assists lawmakers, regulators, organizational policymakers, and others in characterizing AI systems deployed in specific sectors. It also helps professionals assess the opportunities and risks presented by different types of AI systems and informs national AI plans and strategies.

In addition, the OECD framework helps policymakers distinguish between various types of AI systems and understand all the possible influences that AI has on people's lives, whether positive or negative. This includes not only what AI systems are capable of but also where and how they are deployed. For example, image recognition can be highly useful for smartphone security, but used in other situations it might violate human rights.

The OECD policy classification framework differentiates AI systems along five dimensions: People & Planet, Economic Context, AI Model, Data & Input, and Task & Output. Each dimension has its own set of characteristics and traits that help in evaluating the policy implications of specific AI systems.


The framework refers to the AI system's lifecycle as a supplemental structure for understanding the primary technical properties of a given system. Furthermore, by defining the qualities and characteristics of AI systems that matter most, the OECD framework promotes a common understanding of AI and its effective usage across organizations.

According to the OECD's announcement, the current framework is meant to provide the basis for a future risk-assessment framework to help diminish and mitigate risks. It will also provide a baseline for the OECD, its members, and partner organizations to develop a common framework for reporting AI incidents.

After successful deployment of the OECD framework across different organizations, the classification tool is expected to help generate more data about the various types of AI systems currently in use around the world. This will give policymakers the information they need to map the most impactful AI domains and design interventions that make AI more beneficial across the globe.
