Officials described it on Twitter at its launch as “a central connection to many federally-supported resources for America’s AI research community.” The tools available on the page include funding and grant information, datasets, computing resources, a research program directory, and a selection of testbeds.
With access to these resources, researchers working on AI projects can draw on higher-quality data from federal agencies such as NASA, the National Oceanic and Atmospheric Administration, the National Institute of Standards and Technology, and the National Institutes of Health.
The portal complies with provisions in the overarching National Artificial Intelligence Initiative Act, which mandates NAIIO leadership to “promote access to and early adoption of technologies, innovations, lessons learned, and expertise derived from Initiative activities to agency missions and systems across the federal government.”
In a press release, current NAIIO Director Lynne Parker stated that the portal’s purpose is to assist U.S. academics in advancing their AI projects using available government funding. “We hope that the AI Researchers Portal will help U.S. researchers more easily navigate and connect with available resources that will make them more productive and successful in advancing the state of the art in AI and related fields,” Parker concluded.
Apart from public datasets and several testbed environment options, the portal also provides academics with a list of AI research projects underway with government organizations that may be open to collaboration. NIST’s Network Modeling for Public Safety Communications and the NIH’s Graduate Data Science Summer Program are two such research programs.
The NAIIO was established during former President Donald Trump’s administration and is now part of the Biden-Harris administration’s AI goals, including continuing efforts to make federal AI resources more accessible to the general public.
Image classification using machine learning has risen to prominence over the past few years. Processing images to understand visual data is nothing new: earlier researchers relied on raw pixel data, fragmenting an image into individual pixels for analysis. This was not an effective method, however, as computers were often confused when two images of the same subject appeared very different. Machines also struggled with images focusing on a single entity but varying in backdrop, perspective, and position, making it difficult for computers to see and categorize images appropriately.
As a result, scientists turned to deep learning, which is a subset of machine learning that uses neural networks for processing input data. In neural networks, information is filtered by hidden layers of nodes. Each of these nodes processes the data and relays the findings to the next layer of nodes. This continues until it reaches an output layer, at which point the machine produces the desired result.
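The layer-by-layer flow described here can be sketched in a few lines of plain Python. This is a toy illustration with made-up weights, not any particular framework's API:

```python
def relu(x):
    # A common activation: negative sums are clipped to zero.
    return [max(0.0, v) for v in x]

def dense(inputs, weights, biases):
    # Each node weighs every input, adds a bias, and relays the sum onward.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# A 2-input network: one hidden layer of 3 nodes, then 1 output node.
hidden_w = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
hidden_b = [0.0, 0.1, -0.1]
output_w = [[0.7, -0.5, 0.2]]
output_b = [0.05]

x = [1.0, 2.0]
hidden = relu(dense(x, hidden_w, hidden_b))    # hidden layer filters the input
output = dense(hidden, output_w, output_b)     # output layer produces the result
print(output)
```

Each `dense` call is one layer of nodes; stacking more of them, with an activation between, is all a feedforward network is.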
Using convolutional neural networks (CNNs), machine learning models became extremely good at classifying images and videos. A CNN is a form of neural network in which the nodes of the hidden layers, known as convolutional layers, do not pass their output to every node in the following layer.
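The local connectivity that distinguishes a convolutional layer can be seen in a toy one-dimensional convolution, where each output depends only on a small window of the input rather than on every input value (the kernel values here are illustrative):

```python
def conv1d(signal, kernel):
    # Slide the kernel over the signal; each output sees only a local
    # window of inputs, unlike a node in a fully connected layer.
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# An edge-detecting kernel responds only where neighboring values differ.
signal = [0, 0, 0, 1, 1, 1]
edge_kernel = [-1, 1]
print(conv1d(signal, edge_kernel))  # → [0, 0, 1, 0, 0]
```

Image CNNs do the same thing in two dimensions, sliding small kernels over pixel grids so that each feature detector reacts to local patterns like edges and textures.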
Although these models are proficient at detecting patterns in data, scientists are baffled by how they make their decisions, or how they arrive at a given decision. When there is a likelihood that a model makes judgments based on inadequate, error-prone, or one-sided (biased) information, the need to understand the mechanisms underlying its decisions becomes even more important.
Recently, researchers at the Massachusetts Institute of Technology (MIT) identified an intriguing problem in machine learning and image classification called overinterpretation. Depending on where deep learning algorithms are deployed, the problem may be innocuous or, if left unfixed, dangerous. Alongside adversarial attacks and data poisoning, overinterpretation is the newest irritant for AI researchers and developers.
The MIT researchers discovered that popular datasets like CIFAR-10 and ImageNet contain “nonsensical” signals that are troublesome: models trained on them suffer from overinterpretation, a phenomenon in which they label images with high confidence based on details that are meaningless to humans. For instance, models trained on CIFAR-10 made confident predictions even when 95% of an input image was missing and the remainder was senseless to humans.
A deep-image classifier can determine image classes with over 90 percent confidence using primarily image borders, rather than an object itself. Credit: Sunny Chowdhary
In the real world, these signals can lead to model fragility, but they are also valid within the datasets themselves, which means overinterpretation cannot be detected using traditional approaches.
“Not only are these high-confidence images unrecognizable, but they contain less than 10 percent of the original image in unimportant areas, such as borders. We found that these images were meaningless to humans, yet models can still classify them with high confidence,” says Brandon Carter, a Ph.D. student at the MIT Computer Science and Artificial Intelligence Laboratory and lead author of the research.
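The kind of masking behind this finding can be sketched as follows: zero out everything except a thin border, leaving only a small fraction of the pixels. The image here is a stand-in; the study used CIFAR-10 and ImageNet classifiers.

```python
def keep_border(image, width=1):
    # Zero every pixel except a thin border, mimicking the small,
    # "unimportant" pixel subsets the study fed to classifiers.
    h, w = len(image), len(image[0])
    return [[image[r][c] if (r < width or r >= h - width or
                             c < width or c >= w - width) else 0
             for c in range(w)]
            for r in range(h)]

# A 32x32 image (CIFAR-10 size) reduced to its 1-pixel border.
image = [[1] * 32 for _ in range(32)]
masked = keep_border(image)
kept = sum(v for row in masked for v in row)
print(kept / (32 * 32))  # only ~12% of the pixels survive the masking
```

The surprising result is that classifiers can still assign confident labels to inputs reduced this drastically.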
The team explains that machine-learning algorithms can latch onto these tiny, meaningless signals. Image classifiers trained on datasets like ImageNet can then make seemingly reliable predictions based on those signals.
The MIT team adds that the datasets themselves are equally to blame for overinterpretation. According to Carter, scientists can start by asking how to modify datasets so that models trained on them more closely mimic how a human would classify images. He hopes such algorithms could generalize better in real-world scenarios such as autonomous driving and medical diagnosis, where nonsensical behavior and flawed predictions can lead to fatal outcomes.
This might mean producing datasets in a more controlled setting; currently, the images being classified are retrieved only from public domains. The team notes that overinterpretation, unlike adversarial attacks, relies on unmodified image pixels. For now, the MIT team asserts that ensembling and input dropout can both help prevent overinterpretation. You can read more about the research findings here.
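The two mitigations named here can be sketched generically. The "models" below are stand-in functions, not the paper's actual classifiers; the point is only the mechanics of averaging an ensemble and randomly dropping inputs:

```python
import random

def ensemble_predict(models, x):
    # Average per-class scores across models; a spurious signal that
    # only one model latches onto gets diluted in the average.
    scores = [m(x) for m in models]
    return [sum(s[c] for s in scores) / len(models)
            for c in range(len(scores[0]))]

def input_dropout(x, p=0.2, seed=0):
    # Randomly zero a fraction of inputs during training so no tiny
    # pixel subset can dominate the prediction on its own.
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else v for v in x]

# Stand-in models, each returning scores for two classes.
m1 = lambda x: [0.9, 0.1]   # overconfident on a spurious cue
m2 = lambda x: [0.4, 0.6]
m3 = lambda x: [0.3, 0.7]
print(ensemble_predict([m1, m2, m3], [1.0] * 4))  # confidence is tempered
```

Neither trick removes the nonsensical signals from the data; they only make a single model's overconfident shortcut less decisive.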
This research was sponsored by Schmidt Futures and the National Institutes of Health.
Carter collaborated on the research with Amazon scientists Siddhartha Jain and Jonas Mueller, as well as MIT Professor David Gifford. They’ll share their findings at the Conference on Neural Information Processing Systems in 2021.
Since the beginning of the 2010s, artificial intelligence (AI) has been a torchbearer of the digital transformation we are witnessing and benefiting from today. The year 2021 brought many significant changes and research breakthroughs in AI: intelligent edge and cloud, decision intelligence, and creative AI were among the biggest trends. AI also opened avenues in new tech domains such as AI-based NFT art, augmented reality, and the metaverse. While these trends and applications will continue to scale in 2022, there are some areas where AI technologies will see wide-ranging adoption. Let’s dive into the top AI technology trends of 2022 that are going to revamp the tech world.
The following list focuses on the nascent yet promising technologies in the AI-backed industry.
iNFT
As the non-fungible token (NFT) craze takes a front seat in ushering in the new crypto age, a new category of NFTs is gaining traction: the intelligent (or interactive) NFT, or iNFT. The trend kicked off in June of this year, when the artist Robert Alice teamed with Alethea.AI to create the world’s first iNFT, dubbed ‘Alice.’
Alice appeared in a Twitter live video answering real-time queries from viewers; however, many of its responses were just “I don’t know” or “I’m sorry.”
An iNFT is a smart NFT with a GPT-3 prompt built-in as part of its immutable smart contract. The iNFT created is not only perceptibly intelligent but also interactive and animated, thanks to carefully prepared prompts maintained at the smart contract layer. The hard-coded prompts invoke a cutting-edge Transformer Language model to enable generative possibilities made feasible only by recent advances in few-shot and single-shot learning.
The inbuilt intelligence and self-learning abilities of iNFTs set them apart from general NFTs. The incorporation of GPT-X into the software architecture opens up a world of possibilities in an interactive discussion. Its self-learning capability allows it to provide new types of intelligence to both the creator and the owner of the iNFT.
Furthermore, because iNFTs are decentralized structures, they may be utilized by anybody and are unaffected by censorship.
Worldwide NFT sales volume surged 182-fold in the first half of 2021 compared to the same period in 2020, reaching a startling US$2.5 billion.
An illustration of their rising popularity: in August 2021, Visa acquired a “CryptoPunk” – one of thousands of NFT-based digital avatars – for approximately US$150,000 in the cryptocurrency Ethereum. Meanwhile, iNFTs are starting to see wide adoption in the NFT marketplace. For instance, The Revenants, another of the first iNFT collections, shattered OpenSea records for a collection drop, garnering nearly US$10 million in a week and rocketing into OpenSea’s Top 10 collections.
There is also a possibility of Instagram and Snapchat introducing iNFTs as AR filters, while brands may use them to boost product recommendations.
AI in Metaverse
The metaverse is a virtual immersive digital ecosystem made up of separate but interconnected networks that will communicate through yet-to-be-determined protocols. According to Gartner, it allows for the creation of decentralized, collaborative, and interoperable digital elements that interact with real-time, geographically oriented, and indexed content in the actual world, thereby creating immersive experiences for people.
The metaverse is the internet’s next evolutionary step, though it is still in its early stages of development. The move toward it is expected to accelerate after Mark Zuckerberg announced plans to combine virtual reality technology with Meta’s social network. Many firms, such as Roblox, are already vying to dominate certain aspects of the metaverse. Artificial intelligence will be used to enable, populate, and sustain it, which makes AI in the metaverse one of the leading AI trends of 2022.
For instance, Microsoft will incorporate Mesh, a virtual experience collaboration platform, into Microsoft Teams next year. Mesh builds on existing Teams features like Together mode and Presenter view to make remote and hybrid meetings more collaborative and immersive by letting participants know they’re in the same virtual space. Mesh will use Microsoft’s mixed reality technology and HoloLens headgear for virtual meetings, conferences, and video interactions in which Teams members may engage as avatars.
It is speculated that the need for huge advancements in processing power, network performance, and AI capabilities means the metaverse may not be fully developed until the late 2030s or early 2040s. But in 2022, AI can already help build virtual avatars, develop games, and enable human-computer interaction (HCI) in the metaverse.
No-Code and Low-Code
With data science and AI applications growing every year, jobs in this sector will surge. However, companies cannot always find employees proficient in coding; the scarcity of such professionals is one of the greatest roadblocks to effective AI deployment. No-code and low-code solutions seek to overcome this barrier with usable visual interfaces: software development tools built around visual construction, such as dragging and dropping components.
As a result, no-code and low-code interfaces will grow more popular, since a lack of programming experience or of a thorough understanding of statistics and data structures will no longer be a barrier in 2022. By combining multiple pre-made modules and feeding them appropriate data, these AI systems let us construct better products.
Recently, OpenAI, a research group co-founded by Elon Musk, released Codex, a programming model that can produce code from natural human language. At re:Invent 2021, AWS teased the launch of Amazon SageMaker Canvas, a no-code machine learning platform that it says enables business analysts to create machine learning models for predictions without knowing how to code or having prior machine learning knowledge. With its straightforward graphical user interface, SageMaker Canvas supports multiple problem types, such as binary classification, multi-class classification, numerical regression, and time series forecasting.
Though code-based software is unlikely to go away anytime soon, low-code and no-code solutions can significantly reduce the time it takes to develop an app or program. As this technology evolves in 2022 and converges with the capabilities of cloud infrastructure, developers’ creativity and imagination will not depend on having strong coding skills. No doubt, with rising reliance on no-code and low-code software, the coming year will usher in a new market for this AI technology trend.
Larger Language Models
Since the announcement of GPT-3 last year, tech behemoths have been working on new AI language models that surpass previous models in parameter count and efficiency. This year, Microsoft and NVIDIA announced Megatron-Turing Natural Language Generation (MT-NLG), the world’s largest and most powerful monolithic transformer language model. Other leading players took different tacks: OpenAI concentrated on enhancing the factual accuracy of GPT-3, while DeepMind built a 280-billion-parameter model dubbed Gopher to explore the limits of massive language models.
These billions of parameters allow data scientists to create models that grasp language well enough to generate human-level articles, reports, and translations. They can even write code, create recipes, and draft social media content.
We can anticipate bigger and better language generation models in 2022, and announcements of such models will make headlines just as GPT-3 did. Simultaneously, experts are voicing concerns about the ethical and social dangers of models trained on social media and other open datasets, which can accidentally generate harmful content. To minimize this, extensive research will also be carried out to ensure that larger language models produce high-quality results.
ESG-Risk-Free AI
Apart from the AI apocalypse scenarios portrayed in movies, there are genuine reasons why the expanding use of AI across corporate, institutional, and personal activity is attracting the attention of authorities, concerned groups, and watchful citizens throughout the world. With AI being employed in industries such as law enforcement, healthcare, and recruiting, biased data or biased output might jeopardize confidence in a better AI-powered future.
The goal is to keep AI biases to a minimum so that AI doesn’t replicate human bias. At the same time, there is a clarion call to limit AI systems’ carbon footprint and their pressure on depleting semiconductor materials. Supply networks and company operations are also strained by climate change and extreme weather events. As these constraints intensify in 2022, AI will play a critical role in helping organizations meet sustainability goals through enhanced productivity, data collection, and carbon accounting, as well as increased supply chain robustness.
There is also an urgent need for conversations around ethical AI, to develop reliable, transparent, and trustworthy AI systems that offer users improved explainability for the decisions they make.
This year, in a historic move, UNESCO released a worldwide standard on artificial intelligence (AI) ethics, adopted by its nearly 200 member countries. China has also developed its own AI ethical standards, emphasizing user rights and fitting its goal of becoming the world’s AI leader by 2030.
The European Parliament’s call to ban predictive policing and biometric mass surveillance is another step toward better use of AI. The United States is not far behind: in a first-of-its-kind project, the White House’s scientific advisors have proposed an AI “Bill of Rights” to limit the scope of AI harms.
These new laws and regulations may well deliver AI technologies free of environmental, social, and governance (ESG) risks in 2022.
Apart from the ones listed above, AI technologies like neuro-symbolic networks, quantum AI, and augmented intelligence will go mainstream in 2022.
Technology giant Amazon has announced the launch of its new free AWS Builder Online Series. Amazon has collaborated with Intel on the program, which will teach core concepts of Amazon Web Services (AWS).
The program has been meticulously designed so that both beginners and experienced individuals in cloud technologies can attend the AWS Builder Online Series. According to Amazon, the newly launched program will considerably help newer talents get started and accelerate their success on AWS.
The AWS Builder Online Series will primarily teach learners how to build new applications, the fundamentals of AWS, getting started with containers and AWS storage, running Windows & SAP workloads on AWS, and more.
Learners will get access to more than ten live sessions from industry experts on mobile applications, databases, remote work solutions, storage, DevOps, and more. Speakers including Naim Mucaj, Blair Layton, Akanksha Balani, Andrew Wangsanata, Derek Bingham, and many others from AWS and Intel will participate in the AWS Builder Online Series.
Apart from the live sessions, learners will have access to numerous technical demos and can have their queries resolved by AWS experts. A certificate of attendance will be provided to learners who complete the program.
The AWS Builder Online Series aims to equip learners with the critical skills global companies currently demand, making them industry-ready. The program is divided into two parts, Level 100 and Level 200: the former is for beginners and the latter for intermediate learners. The event is scheduled for 20th January 2022, and interested candidates can apply for free on the official AWS Builder Online Series website.
Rigetti Computing, a quantum computing company headquartered in the United States, has launched Aspen-M, an 80-qubit quantum computer built from two linked 40-qubit processors. The company has also revealed that it is testing novel hardware combinations to boost the performance of its quantum computers.
According to a recent blog post, the company has given its qubits a third energy state, transforming them into qutrits. This enables substantially greater information manipulation while reducing readout errors by up to 60%.
The smallest unit of quantum information is the quantum bit (or qubit), analogous to the binary bit in traditional computing. Unlike a typical bit, a qubit may take on the value of one, zero, or a blend of both at once, through a process known as superposition.
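Superposition can be made concrete with a tiny state-vector sketch: a qubit is a pair of complex amplitudes whose squared magnitudes give the measurement probabilities, and a qutrit simply carries three amplitudes. The states below are illustrative, not taken from Rigetti's hardware:

```python
import math

def probabilities(amplitudes):
    # Measurement probabilities are the squared magnitudes of the
    # state's complex amplitudes; for a valid state they sum to 1.
    return [abs(a) ** 2 for a in amplitudes]

# An equal superposition of |0> and |1>: either outcome with odds 1/2.
qubit = [1 / math.sqrt(2), 1 / math.sqrt(2)]
print(probabilities(qubit))

# A qutrit adds a third energy state, so it carries a third amplitude.
qutrit = [1 / math.sqrt(3)] * 3
print(probabilities(qutrit))
```

The extra amplitude is what lets a three-level device encode more information per physical element than a two-level qubit.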
“Accessing the third state in our processors is useful for researchers exploring the cutting edge of quantum computing, quantum physics and those interested in traditional qubit-based algorithms alike,” the company explained.
Traditionally, researchers have focused on increasing the number of qubits on a quantum processor to attain a quantum advantage; simply put, the more qubits a quantum computer has, the more powerful it is. IBM, for example, has introduced Eagle, a 127-qubit processor that set a new world record.
Until now, Rigetti’s flagship has been the 40-qubit Aspen-11; the new Aspen-M reaches 80 qubits by combining two 40-qubit chips.
Rigetti believes it may be viable to push qubits to even more states in the future. However, as the energy separating states beyond zero and one shrinks, noise and control difficulties become harder to overcome. Separately, Rigetti announced a partnership with Deloitte and Strangeworks to investigate quantum applications in material modeling, optimization, and machine learning using the company’s latest processors.
The American Association of Community Colleges (AACC) has announced a partnership with Intel and Dell Technologies, as part of Intel’s AI for Workforce program, to expand AI training across community and technical colleges in all 50 states by 2023.
The new initiative intends to make community colleges the primary stop for AI workforce training, reskilling, and upskilling. Last year, Intel introduced the AI for Workforce program at Maricopa Community College (MCC) in Arizona, with an AI-based curriculum that allows students to obtain a certificate or associate degree in AI. The initiative has since grown to 31 schools in 18 states.
AACC and Intel are cooperating with Dell Technologies, the exclusive technology partner, to provide resources to community colleges offering AI training and certificate programs and to assist the program’s expansion.
Intel’s executives believe AI should be shaped by individuals with diverse perspectives and experiences, and community colleges draw students from a wide range of backgrounds and skill levels in the United States’ higher education system. Dell Technologies, meanwhile, is advising schools on how to set up AI laboratories for teaching in-person, hybrid, and online students.
The new alliance will benefit under-resourced areas by focusing on Minority-Serving Institutions (MSIs) and tribal colleges, and it is dedicated to the program’s continued expansion. MSIs make up seventeen of the schools now taking part in the AI for Workforce program.
Through the initiative, which started in 2020 in conjunction with the Maricopa County Community College District (MCCCD) and with assistance from the State of Arizona, Intel is delivering over 200 hours of AI training material to college professors to guide curriculum creation and instruction. This material is being used to construct AI courses, certifications, and degrees, including the first-ever AI for Workforce degree and certificate programs.
McDonald’s Corp. has agreed to sell Dynamic Yield, an artificial intelligence business it bought for more than $300 million under its former CEO in 2019, to MasterCard Inc. for an undisclosed sum. McDonald’s had complained in the past that Dynamic Yield didn’t deliver the sales gains it expected and that the technology failed to meet expectations, so the split isn’t surprising.
The agreement gives MasterCard access to technology that customizes menus and browsing choices online and in-store. Chicago-based McDonald’s had incorporated Dynamic Yield into its drive-thru menus and ordering kiosks. According to a statement, MasterCard has additional clients who use Dynamic Yield.
The burger conglomerate purchased the service less than three years ago, and this is McDonald’s second such sale in a few months, suggesting the company doesn’t hold on to the technology businesses it buys for long. IBM recently acquired McDonald’s automated ordering lab.
According to the fast-food company, Dynamic Yield’s sales increased under McDonald’s ownership, and the sale to MasterCard will strengthen McDonald’s digital engagement services.
McDonald’s claims that both Dynamic Yield and MasterCard services, such as loyalty, analytics, and marketing, are used by various merchants and financial service businesses. As a result, selling Dynamic Yield to the payment firm makes sense.
“There are certain times when it may make sense for us to go acquire a technology so that we can accelerate the development of that, make sure that it is bespoke to McDonald’s needs,” McDonald’s CEO Chris Kempczinski said during the company’s Q3 2021 earnings call.
According to the firms, the deal is expected to be finalized in the first half of 2022. Liad Agmon, the CEO of Dynamic Yield, will step down and become an adviser, with Chief Technology Officer Ori Bauer taking his place.
Meta, formerly Facebook, has developed an AI system that can identify and animate human-like figures in children’s drawings without any human guidance. This first-of-its-kind method can automatically animate figures that vary widely in form, color, size, and scale, and that show little consistency in morphology, body symmetry, or point of view.
To use the feature, parents upload the artwork to the system and can then experience the excitement of watching their children’s drawings take shape as moving characters that can skip, jump, and dance. The animations are downloadable, and drawings can also be submitted to improve the AI model.
The first step in animating the drawings is distinguishing the human figures from the background and from other types of characters. Since off-the-shelf object detection does not segment drawn figures accurately enough for animation, Meta researchers used bounding boxes, image processing, and morphological operations to obtain masks.
“We’re excited to announce a first-of-its-kind method for automatically animating children’s hand-drawn figures of people and humanlike characters (a character with two arms, two legs, a head, etc.) that bring these drawings to life in a matter of minutes using AI,” Meta researchers said in a statement.
They also used Meta’s convolutional neural network-based object detection model, Mask R-CNN, for extracting human-like characters from drawings. This model is pre-trained on the most extensive publicly available segmentation data set, made of photos of real-world objects rather than drawings.
Meta invited colleagues to share and animate their kids’ artwork to help train the AI, obtaining approximately 1,000 drawings. The researchers look forward to an AI system that can instantly create a detailed animated cartoon from complex drawings.
On December 27, Baidu, Inc., a prominent AI company with a strong Internet foundation, will hold Baidu Create, China’s inaugural metaverse conference, on its platform XiRang.
The platform, whose name translates to “Land of Hope,” allows up to 100,000 online guests to interact with around 100 renowned speakers from across the world at the three-dimensional virtual reality conference. The three-day event, themed Creator City, honors the creators’ spirit by displaying the seamless connection between the virtual and real worlds, and features one main forum and 20 sub-forums.
Since its inception in 2000, Baidu’s purpose has been to use technology to simplify a complex world. In this event, Baidu will share its technology breakthroughs and applications in various cutting-edge fields, including AI, autonomous driving, intelligent transportation, quantum computing, and biocomputing, to encourage developers and inventors from all over the world to join.
After signing in from devices such as PCs, phones, and wearables, attendees will be able to enjoy the full functionality of XiRang, including seeing, listening, and interacting with others as their chosen avatar on a Möbius-strip-shaped planet.
In a matter of seconds, users will be able to immerse themselves in the city of the future, complete with renowned architectural elements such as China’s Shaolin Temple and the Sanxingdui Museum, an important archaeological site in Sichuan.
The event will be joined by prominent personalities including Robin Li, co-founder and CEO of Baidu; Dr. Haifeng Wang, Chief Technology Officer of Baidu; and Kip Thorne, 2017 Nobel laureate in Physics and scientific consultant and executive producer for the movie Interstellar. A live feed will be available on Baidu’s official YouTube channel from 1400 hrs Beijing Time on December 27, and Baidu’s official Twitter, Facebook, and LinkedIn accounts will post highlights in English.
Hevo Data, a SaaS startup that helps businesses gather and make better use of the troves of data they create and amass, has raised $30 million in a fresh funding round. Sequoia Capital India led the Series B financing for the San Francisco- and Bangalore-based startup, with participation from Qualgro, Lachy Groom, and existing investors Chiratae Ventures and Sequoia Capital Surge. The round brings the five-year-old startup’s total fundraising to $43 million.
Hevo Data is a bi-directional, no-code data pipeline platform designed specifically for ETL, ELT, and reverse ETL requirements. Hevo has clients in more than 40 countries spanning the United States, Europe, and Asia-Pacific. The firm claims to have increased its client base fivefold and seeks to expand its technological platform.
The global market for data integration was valued at $8.1 billion in 2020 and is expected to rise at a CAGR of 13% to $17.1 billion by 2026, according to a report.
According to Hevo, corporations have previously required huge engineering teams to handle the problem of data silos, but their no-code tool eliminates technological complications and can be used by even non-technical professionals to perform various data engineering tasks.
“We are very well-capitalised but given the large market opportunity and the high growth momentum — growing 500 percent in the past year, we received strong interest from the market and thus, decided to partner with Sequoia Capital India for our Series B”, Hevo Data co-founder and CEO Manish Jethani remarked, “Our no-code approach delivers an easy-to-use solution that reduces technological difficulties, removing data silos within enterprises.”
Hevo Data previously raised $8 million in a July 2020 Series A round led by Qualgro, a Singapore-based venture capital firm, and angel investor Lachy Groom, and before that $4 million in October 2019, led by Sequoia Capital Surge and Chiratae. By the end of 2020, the startup had raised around $13 million across those two rounds.