
PMC uses AI to curb open dumping


The Pune Municipal Corporation (PMC) intends to use artificial intelligence to resolve community issues such as garbage dumping. The city’s civic administration has conducted a pilot project in Yerawada to deal with problems like littering in public spaces.

Using AI, Pune’s civic body has identified at least 150 public spots where garbage dumping is prevalent. PMC intends to use the technology to find and fine citizens who deface public places by littering.

Officials said that the civic body also plans to use AI, data analytics, satellite imagery, machine learning (ML), and e-governance for tasks such as regularising illegal constructions and collecting property tax.

Read more: KLA Corporation partners with IIT Madras to launch New Artificial Intelligence-advanced Computing Lab

Kunal Khemnar, additional commissioner, PMC, said that the chronic spots for garbage disposal would be identified using technology, and the PMC would carry out mitigation programs after that. 

“The civic administration plans to use mechanised sweeping machines at chronic spots. The aim is to reduce human intervention in garbage collection and disposal,” said Khemnar. He added that software had been developed to monitor garbage disposal.

PMC is using AI to curb open dumping and ensure a clean, hygienic city. All the spots identified using satellite images will be cross-verified by staffers. “If garbage dumping does not stop even after repeated cleaning, we will take action against those throwing trash. Offenders will be identified with the help of CCTV footage and fined,” said a senior PMC official.


IBM and NeuReality Announce Partnership to Build Next-Generation AI Inference Platforms


NeuReality, an Israeli semiconductor company developing the next generation of AI-centric computing system architecture, has partnered with IBM to produce the next generation of high-performance AI inference platforms. According to both companies, these systems will yield significant cost and power savings for deep learning use cases. This follows NeuReality’s public debut in February, when the company raised $8 million in a seed round to accelerate AI workloads at scale.

According to the official announcement, the alliance will enable important industries, including banking, insurance, healthcare, manufacturing, and smart cities, to implement computer vision, recommendation systems, natural language processing, and other AI use cases. The companies also say the partnership will hasten the deployment of today’s rapidly expanding AI use cases in public and private cloud data centers.

AI inference is an emerging focus in the artificial intelligence domain: it is the stage at which trained neural networks are actually employed in real-world applications to generate results. High-performance AI inference systems are a rising area of attention for businesses because they promise to minimize the cost and power consumption of deep learning.

The agreement covers the NR1, the ASIC implementation of NeuReality’s first AI-centric Server-on-a-Chip architecture. It builds on NeuReality’s first-generation FPGA-based NR1-P prototype platform, which was unveiled in May this year. According to the company, with features like native AI-over-Fabric networking, full AI pipeline offload, and a hardware-based AI hypervisor, the NR1-P can eliminate system bottlenecks in today’s solutions while lowering the cost and power consumption of inference systems and services. Prior to the release of the NR1 production platform next year, the NR1-P platform will be used for software integration and system-level validation.

As part of the newly formed deal, IBM will become a NeuReality design partner, working on product requirements for the NR1 chip, system, and SDK, which will be incorporated in the architecture’s next edition. The two businesses will jointly examine NeuReality’s solutions for usage in IBM’s Hybrid Cloud, covering AI use cases, system flows, virtualization, networking, security, and other areas.

“Having the NR1-P FPGA platform available today allows us to develop IBM’s requirements and test them before the NR1 server-on-a-chip’s tapeout,” said NeuReality CEO Moshe Tanach. “Being able to develop, test and optimize complex data center distributed features, such as Kubernetes, networking and security before production is the only way to deliver high quality to our customers.”

The basic NR1-P architecture is based on a 4U server chassis with 16 Xilinx Versal Adaptive Compute Acceleration Platform (ACAP) cards. According to NeuReality, the NR1-P platform, intended for use in cloud data centers and edge nodes, is now being demonstrated to clients and partners, who can incorporate it in orchestrated data centers and other facilities ahead of further implementations.

Read More: IBM Announces Telum Microprocessor, Featuring AI Inference Accelerator

IBM opened the IBM Research AI Hardware Center in Albany, New York, in February 2019 to build a worldwide research hub for developing next-generation AI hardware with multiple technological partners and to expand the company’s cooperative nanotechnology research activities. Through this cooperation, NeuReality will become the Center’s first start-up semiconductor product member and a licensee of its low-precision, high-performance Digital AI Cores.

Dr. Mukesh Khare, Vice President of Hybrid Cloud research at IBM Research, said, “In light of IBM’s vision to deliver the most advanced Hybrid Cloud and AI systems and services to our clients, teaming up with NeuReality, which brings a disruptive AI-centric approach to the table, is the type of industry collaboration we are looking for.” Mukesh added that the partnership with NeuReality is expected to drive a more streamlined and accessible AI infrastructure, which has the potential to enhance people’s lives.


Alphabet Builds Isomorphic Labs to change the course of drug discovery using AI


DeepMind, a London-based artificial intelligence research company owned by Google’s parent Alphabet, has introduced a new drug development company. 


The tech juggernaut and Google parent firm made a big and unexpected splash in the realm of biology last year. Last November, DeepMind, the company’s AI research arm, astonished structural biologists by solving the long-standing challenge of protein structure prediction with its deep learning model, AlphaFold2.

This discovery was a crucial milestone in the drug discovery industry as the function of proteins in a cell is linked to almost every illness. The three-dimensional protein structure, in turn, may be used to identify the function. Anyone who understands this “folding” may better identify and treat illnesses, among other things.

AlphaFold2 solved a 50-year-old protein folding puzzle by accurately predicting a protein’s 3D structure directly from its amino acid sequence. The goal is to learn more about how a protein’s delicate structure interacts with cells and how its unique form might drive both life and sickness. Eight months later, DeepMind expanded on those results by making the model’s code and a database of over 350,000 predicted protein structures available to the public. That information has been utilized by independent researchers to speed up a wide spectrum of biological study, including efforts to comprehend the coronavirus.

Other companies under Alphabet’s wing are already researching various areas of human health. For instance, Verily monitors glucose levels in diabetics, and Calico explores ways to slow down aging.

DeepMind CEO Demis Hassabis will help Isomorphic Labs during its early operations to ease collaboration between the new company and DeepMind. He said this would also help define Isomorphic Labs’ strategy, vision, and culture. The company might use DeepMind’s protein structure research to find out how various proteins interact. Instead of developing its own medications, it may choose to market its models by forming alliances with pharmaceutical companies.

Hassabis noted that AI technologies would increasingly be employed not merely for evaluating data, but also for building effective predictive and generative models of complicated biological processes. According to him, AlphaFold2 is a significant first step in this direction, but there’s a lot more to come.

The new company was given the moniker “Isomorphic Laboratories” because of the idea that information systems and biological systems might share a common underlying structure; isomorphism refers to things that have the same shape despite differing in origin. The ultimate goal of the new company is to model and understand some of the fundamental mechanisms of life.

Read More: DeepMind open-sources MuJoCo for development in robotics research

According to the National Center for Biotechnology Information in the United States, developing a new medicine costs 1.3 billion dollars on average. The report also states that researchers currently create each compound physically before testing it in the lab under conditions that mimic the human body. Testing would be faster, safer, and less expensive with AI. Hassabis elaborates that artificial intelligence can speed up and improve the entire process by not only analyzing data but also building predictive and generative models of highly complicated biological occurrences.

Isomorphic Labs is neither the first nor the only company that plans to streamline drug development by leveraging artificial intelligence. Similar technology is being investigated by several notable research groups, including a team at the University of Washington, and several start-ups, such as Atomwise in San Francisco and Recursion Pharmaceuticals in Salt Lake City, are seeking to apply new artificial intelligence approaches to drug discovery.

Isomorphic Labs is now looking to employ a “world-class interdisciplinary team,” which would include professionals in artificial intelligence, biology, medicinal chemistry, biophysics, and engineering, all of whom would work in a highly collaborative and inventive setting. As the company progresses into later phases, Hassabis also says that it may appoint a new CEO.


Autonomous delivery startup Nuro raises $600M, partners with Google


Less than a year after closing a $500 million funding round led by T. Rowe Price Associates, autonomous delivery startup Nuro has raised $600 million. The money will be used to hire more employees and enhance its technology. The new round was led by Tiger Global, with participation from Google LLC, SoftBank Group Corp., and several others. Nuro is now reportedly valued at $8.6 billion.

Nuro’s growing fleet of autonomous delivery vehicles can deliver groceries and medicines from shops or retail locations to consumers’ homes. As part of a partnership with Kroger Co., Nuro’s vehicles have carried out thousands of deliveries. Nuro also teamed up with FedEx Corp. earlier this year to evaluate whether its vehicles are suitable for parcel logistics.

Nuro’s autonomous delivery vehicle, called the R2, uses an artificial intelligence driving system and several sensors to collect data about the environment for navigating the roads. Each vehicle can ferry 400 pounds of merchandise. 

The design of the R2 plays a significant role in pedestrian safety. The vehicle is smaller than a car and has a specialized panel to protect pedestrians in the event of a collision. It also has redundant braking and control systems that allow it to continue operating if a component malfunctions.

Nuro is expanding its manufacturing capabilities; the startup announced a plan to invest 40 million dollars in a test track and a new manufacturing facility. The startup aims to build vehicles faster, and the current funding round could help advance its expansion plans. The proposed test track will boost development by providing a safe environment to test the R2’s autonomous driving software. Testing is essential because machine learning systems improve through training, and practice runs on simulated roads will improve the R2’s navigation.

Additionally, Nuro is partnering with Google to enhance its autonomous driving software. The company announced a five-year strategic partnership with Google Cloud that will extend to data storage and running vehicle simulation workloads. Google’s collaboration with Nuro is notable given that its sister company Waymo is also developing fully autonomous vehicles.


DataRobot acquires decision.ai to create a more actionable AI


On 1st November, DataRobot announced the acquisition of decision.ai and the return of Dan Becker to DataRobot. Dan’s legendary career has taken him from Kaggle to DataRobot to Google to founding decision.ai, and now, with this acquisition, he is back at DataRobot. He has consulted on AI projects for six Fortune 100 companies and contributed to leading open-source AI tools, including Keras and TensorFlow.

DataRobot was founded in 2012, and today, it is the AI Cloud leader, delivering a unified platform for all data types, users, and environments to speed up the delivery of AI to production for every organization. 

Earlier this year, DataRobot launched a No-Code App Builder. Regardless of the user’s technical expertise, it allows any user to quickly turn a deployed model into a rich AI application without coding. Users can also get context on critical features, run what-if simulations, and determine how to optimize their models for precise outcomes. With DataRobot’s Decision Intelligence Flows capability launch, users can visually create complex rules to assess their predictions. It also helps to automate the decision-making process at scale. 

Read more: FDL and Intel AI Mentors Collaborate to Improve Astronaut Health

“This gives DataRobot a chance to integrate some key complementary technology and bring onboard some great minds such as Dan Becker & team. This was an excellent decision (pun intended) by DataRobot,” said Igor Veksler in a LinkedIn post.

The acquisition of Decision.ai will help DataRobot extend and improve its existing no-code app building and decision intelligence flow tools. These tools help DataRobot users optimize decisions in complex, dynamic business processes. With the augmented Decision Intelligence Suite in DataRobot AI Cloud, companies can make faster and smarter decisions when they can see the entire picture and how the decision will play out over time.


FDL and Intel AI Mentors Collaborate to Improve Astronaut Health


Frontier Development Lab (FDL) researchers, along with Intel AI mentors, conducted a landmark astronaut health study to better understand the physiological effects of radiation exposure on astronauts. The SETI Institute hosts the Frontier Development Lab in the U.S. in a public/private partnership with private-sector companies, NASA, and commercial AI partners. FDL used Intel artificial intelligence technology to create a first-of-its-kind algorithm that identifies cancer progression biomarkers using mouse and human radiation exposure data.

Cosmic radiation can lead to health problems and cause cancer complications, since it can penetrate several layers of aluminum and steel and affect human tissue during space travel. Because existing space missions provide little data on how cosmic radiation affects astronauts, researchers need access to siloed, heavily protected data held by various institutions. FDL’s causal machine learning models can operate on data across different locations without having to move it between them.

Shashi Jain, strategic innovation and FDL partner manager at Intel, said that “We believe that the FDL Astronaut Health challenge results will enable NASA to understand the mechanisms involved in protecting astronauts more effectively as we return to the moon and beyond, as well as provide a blueprint to accelerate the use of AI in healthcare applications on Earth.”

Read more: All About Waymo Driver: Google Autonomous Driving Technology

Intel and FDL’s causal machine learning allows a federation of collaborating institutions to grant access to their data without moving it to a central location. This is a testament to how public and private institutions can work together to unlock insights that would otherwise remain buried.

FDL’s CRISP 2.0, developed by extending CRISP 1.0, leverages Intel’s Open Federated Learning framework (OpenFL). It makes it possible to train and combine models on data from institutions such as the Mayo Clinic, NASA, and NASA GeneLab without moving the data to a central location.
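The federated pattern described above can be sketched in a few lines of Python. This is only a toy illustration of federated averaging on a one-parameter linear model, with hypothetical function names; OpenFL’s actual API and CRISP’s models are far more involved.

```python
# Toy federated averaging: each institution trains on its own private
# data and shares only model weights, never the raw records.

def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a site's private (x, y) samples."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def federated_average(global_w, site_datasets, rounds=50):
    """Average locally trained weights instead of pooling raw data."""
    for _ in range(rounds):
        local_ws = [local_update(global_w, data) for data in site_datasets]
        global_w = sum(local_ws) / len(local_ws)  # only weights leave each site
    return global_w

# Two "institutions" each hold private samples of the same process y = 3x
site_a = [(1.0, 3.0), (2.0, 6.0)]
site_b = [(3.0, 9.0), (4.0, 12.0)]
w = federated_average(0.0, [site_a, site_b])
print(round(w, 2))  # prints 3.0
```

The key property, as in OpenFL, is that `federated_average` never sees `site_a` or `site_b` directly once training is distributed; only the fitted weights cross institutional boundaries.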


Facebook shuts down Facial Recognition Feature


Meta, formerly known as Facebook, is shutting down its facial recognition system and will delete the “faceprints” of more than one billion Facebook users over the coming weeks. As a result, people will no longer be automatically recognized in videos, memories, and photos across Facebook. The move comes after a lawsuit accused Facebook’s tagging tech of violating Illinois’ biometric privacy law, which led to a $650 million settlement in February 2021. The case became a class-action lawsuit in 2018, and Facebook made facial recognition on the platform opt-in only in 2019.

This change will also impact image descriptions for blind and visually impaired people, since Automatic Alt Text (AAT) descriptions will no longer include people’s names from videos and photos on Facebook.

This change represents a more significant shift in facial recognition usage, removing over a billion people’s individual facial recognition templates from the Facebook database. Although facial recognition technology is highly valued in various cases, Meta stated that benefits have to be weighed against growing societal concerns. Additionally, the laws regarding this technology are unclear since regulators have yet to provide a set of rules for its use. 

Facebook limiting its facial recognition technology will also affect services where people verify their identity for financial products, regain access to a locked account, or unlock a personal device. The company, however, will continue working on limited but valuable use cases while ensuring that society and individuals have transparency and control over whether they are automatically recognized in memories, videos, and pictures.

“Making this change required us to weigh the instances where facial recognition can be helpful against the growing concerns about the use of this technology as a whole,” said Jason Grosse, a Meta spokesman. He also added that Meta has also not ruled out incorporating facial recognition technology into future products. 

In this company-wide move, Meta will delete the identification templates of all people who have opted for face recognition. Facebook users will no longer be able to turn on face recognition or see a suggested tag with their name in photos, memories, and videos that they appear in. Although Facebook will delete the digital scans of facial features by December, it will not eliminate DeepFace, the software that powers the facial recognition system.

Jerome Pesenti, Meta’s vice president of artificial intelligence, stated that facial recognition is valuable but requires strict transparency and privacy controls so people can choose how their faces are used and recognized. He also stated that, given the ongoing uncertainty, it is best to limit facial recognition technology to a narrow set of use cases. Pesenti said face recognition is most valuable when the data is not connected to a cloud server and operates only on personal devices, such as when unlocking smartphones or laptops.

By shutting down the Face Recognition tagging program that Facebook has used for years, Meta hopes to reinforce user confidence in its privacy protections as it prepares a rollout of potentially privacy-compromising augmented and virtual reality technology. Earlier this year, Facebook launched a pair of camera-equipped smart glasses with Ray-Ban and is gradually launching 3D virtual worlds on its Meta VR headset platform. Promoting these products and technology requires the company to garner a level of trust from regulators and users, and giving up Facebook auto-tagging after a lengthy legal battle seems a straightforward way to bolster it.


All About Waymo Driver: Google Autonomous Driving Technology


In the past decade, autonomous driving has progressed from ‘maybe possible’ to ‘now commercially available.’ Waymo, the company that emerged from Google’s self-driving car project, officially started its commercial self-driving car service in the suburbs of Phoenix in December 2018. At first, the program was available only to a few hundred vetted riders, with human safety operators always behind the wheel. In the years since, however, Waymo has slowly opened the program to members of the public and has begun to run robotaxis without drivers inside.

Google began its self-driving car project in 2009, and in 2016 the project was spun out under Alphabet as Waymo, an autonomous driving technology company. Waymo provides fully autonomous driving and bills its system as ‘the World’s Most Experienced Driver.’

In this article, we look at how Waymo leverages artificial intelligence to create its self-driving technology.

Waymo’s Tech

The Waymo self-driving system has two essential parts: a highly sophisticated custom suite of sensors developed explicitly for fully autonomous operations and state-of-the-art software to make sense of the information.

Lidar is the Waymo Driver’s most powerful sensor: it paints a 3D picture of the surroundings, allowing the system to measure the size and distance of objects around the vehicle, whether they are 300 meters away or up close. Lidar sensors allow Waymo’s technology to see the world in incredible detail and identify objects on the brightest days and on moonless nights.

Waymo uses Google’s data centers, TPUs, and the TensorFlow ecosystem to train its neural networks. Rigorous training cycles and simulation testing allow the company to enhance its ML models and autonomous system.

The platform also leverages AI to simulate sensor data gathered by its self-driving vehicles. In a recent paper, Waymo researchers introduced SurfelGAN, a technique that uses texture-mapped surface elements to reconstruct scenes and camera viewpoints, handling arbitrary positions and orientations.

Advanced Sensors

Waymo claims to have built the most advanced sensor systems that have been trained with over 20 million autonomously driven miles. They have improved the current system over five generations of development. Its 5th generation Driver consists of radar, Lidar, and cameras to see 360 degrees around the vehicle. 

In Waymo’s self-driving automobiles, a family of Lidar sensors uses light waves to paint rich 3D pictures, known as point clouds, allowing the Waymo Driver to see the world in incredible detail. Point clouds capture the distance and size of objects, allowing the software to spot a pedestrian an entire city block away on a moonless night.
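The claim that a point cloud directly encodes distance and size follows from simple geometry: each lidar return is an (x, y, z) point relative to the sensor. A toy Python illustration (not Waymo’s actual tooling):

```python
import math

def nearest_distance(points):
    """Range to the closest lidar return, in meters, from the sensor origin."""
    return min(math.dist((0.0, 0.0, 0.0), p) for p in points)

def object_extent(points):
    """Axis-aligned bounding-box size (dx, dy, dz) of a cluster of returns."""
    return tuple(max(p[i] for p in points) - min(p[i] for p in points)
                 for i in range(3))

# A small cluster of returns from a pedestrian roughly 10 m ahead
cluster = [(10.0, 0.2, 0.0), (10.1, -0.2, 0.0), (10.0, 0.0, 1.7)]
print(round(nearest_distance(cluster), 1))  # prints 10.0
print(object_extent(cluster))               # roughly 0.1 x 0.4 x 1.7 m
```

Real perception stacks cluster millions of such returns per second and fit oriented boxes, but the underlying information, range plus 3D shape, is the same.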

Second, Waymo vehicles are equipped with a range of cameras that give the Waymo Driver different road perspectives. These cameras can capture long-range objects and help the rest of the system by adding further sources of information, providing the Waymo Driver with a deeper understanding of its environment.

Radar System

Waymo uses one of the world’s first radar imaging systems for fully autonomous vehicles that complement its cameras and Lidars. This radar can instantly perceive a pedestrian’s speed and trajectory even in challenging weather conditions, such as fog, snow, and rain, providing the Waymo Driver with unprecedented resolution, range, and field of view for safe driving. 

Waymo sensors produce various types of data, including fine-grained Lidar point clouds, video footage, and radar imagery over different ranges and fields of view. The diversity of sensors allows a sensor fusion technique that improves detections and characterizations of objects. 

Sensor fusion technology allows Waymo to amplify the advantages of each sensor. For example, Lidar excels at providing depth information and detecting the 3D shape of objects, while cameras can pick out visual features such as a temporary road sign or the color of a traffic signal. Meanwhile, Waymo’s radar is highly effective in bad weather and can track moving objects, like an animal running out of a bush and onto the road.
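The complementary strengths described above can be sketched as a simple late-fusion step: each sensor contributes the attribute it measures best, and the fused track combines them. This is purely illustrative and not Waymo’s actual pipeline.

```python
# Minimal late sensor fusion: merge per-sensor detections of the same
# object into one track, taking each attribute from the sensor that
# measures it best.

def fuse(lidar_det, camera_det, radar_det):
    """Combine complementary detections into a single object view."""
    return {
        "distance_m": lidar_det["distance_m"],  # lidar excels at depth
        "label": camera_det["label"],           # camera supplies appearance
        "speed_mps": radar_det["speed_mps"],    # radar tracks motion, even in fog
    }

track = fuse(
    {"distance_m": 42.0},
    {"label": "pedestrian"},
    {"speed_mps": 1.4},
)
print(track)  # one coherent object description from three sensors
```

In practice, fusion also has to associate detections across sensors and weight them by uncertainty, but the payoff is the same: a single track that is more complete than any one sensor could produce.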


Key Announcements at Microsoft Ignite Fall Edition 2021: Day 1


This week, Microsoft is hosting its second Ignite conference of the year, an annual opportunity to showcase what the company is emphasizing and how it plans to revolutionize the way businesses work. At the event, customers can meet with experts to get answers to questions about implementing and managing Microsoft technologies. This year’s user and partner event is no exception.

Microsoft introduced a slew of new capabilities for Microsoft Teams, reiterating the company’s commitment to Teams as the hub for collaboration. It also recognizes the need to improve security and collaboration capabilities and to explore new ways to use the metaverse in the workplace. At Microsoft Ignite 2021, CEO Satya Nadella highlighted that 90 new services and updates would be unveiled at the Fall 2021 conference, along with other steps Microsoft is taking to better equip customers in an age of rapid digital change and broad hybrid work. Here are some of the key announcements from the event:

Microsoft and Metaverse

“As the digital and physical worlds come together, we are creating an entirely new platform layer, which is the metaverse,” Nadella said during the Ignite 2021 conference.

“We’re bringing people, places and things together with the digital world, in both the consumer space as well as in the enterprise,” he added.

Just days after Facebook was renamed as Meta in an effort to establish virtual places for both consumers and companies, Microsoft is joining the battle to build a metaverse inside Teams. Next year, Microsoft will integrate Mesh, a collaboration platform for virtual experiences, into Microsoft Teams. Mesh expands on prevailing Teams features like Together mode and Presenter view, and it’s intended to make remote and hybrid meetings more collaborative and immersive by communicating to participants that they’re in the same virtual area.

Mesh will employ HoloLens headsets and Microsoft’s mixed reality technology for virtual meetings, conferences, and video conversations that Teams members may participate in as avatars. Microsoft’s Mesh technology will also let users participate in normal web- or app-based Teams meetings as both their VR versions and themselves. For Teams video conversations, there will be a 3D avatar that you can customize for yourself, and it will be available even if you don’t have a headset. Customers who do not have a device that can show 3D images will be able to view the content and avatars in 2D.

Azure OpenAI Service

One of the expected announcements was a collaboration between Microsoft’s Azure cloud platform and OpenAI. The new Azure OpenAI Service will give Azure users access to OpenAI’s GPT-3 API. GPT-3 is an autoregressive language model capable of turning natural language into working software code, as well as summarizing large blocks of text and answering questions.

For the time being, Microsoft says Azure OpenAI Service will remain an invite-only platform. The Azure OpenAI Service will give Microsoft’s cloud clients additional options to get the best results for their businesses. It will supplement the conventional GPT-3 API in terms of security, management, and networking.
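To make the summarization use case concrete, here is a sketch of what a GPT-3 completion request could look like. The field names follow OpenAI’s public completions API; the exact Azure OpenAI Service contract may differ, and no request is actually sent here.

```python
import json

def build_summarize_request(text, max_tokens=64):
    """Build a completion payload asking the model to summarize a passage."""
    return {
        "prompt": (
            "Summarize the following text in one sentence:\n\n"
            f"{text}\n\nSummary:"
        ),
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature keeps the summary conservative
    }

payload = build_summarize_request(
    "GPT-3 is an autoregressive language model capable of turning "
    "natural language into working software code."
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the service endpoint with the customer’s Azure credentials; Microsoft’s added value is the security, management, and networking wrapped around that call.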

Microsoft Loop

Microsoft Loop, a new Microsoft 365 tool that pushes collaboration beyond the typical document, is based on the Fluid Framework.  

Loop components, Loop pages, and Loop workspaces are the three major features of Microsoft Loop. Loop components are information pieces that you may iterate on and that function across the Microsoft 365 ecosystem. A Loop component can be a table, a to-do list, or anything else, and it’s updated in real-time with the most up-to-date information wherever it’s accessible. As explained during the Microsoft Ignite announcements, if you share a Loop table on Microsoft Teams, you can make changes right there, and they’ll appear on any website or place where the table is referenced. Loop components are described by Microsoft as “atomic units of productivity that help you collaborate and complete work right within chats and meetings, emails, documents, and more.”

Loop pages allow you to pull together all of the material connected to a project in a way that is always up-to-date since all of the components are updated in real-time wherever you share them. Loop workspaces are more akin to notebooks, in which you may organize pages into groups and sections to make things easy to discover, which is very beneficial for huge projects. Both pages and workspaces can be collaborated on by multiple users at the same time.
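The "edit once, updated everywhere" behavior of Loop components comes from every surface holding a reference to one live object rather than a copy. A toy Python model of the idea (hypothetical classes, not Microsoft’s API):

```python
class LoopComponent:
    """A shared to-do list that every surface references, never copies."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class Surface:
    """A chat, email, or document that embeds a reference to the component."""
    def __init__(self, name, component):
        self.name = name
        self.component = component  # a reference, not a snapshot

    def render(self):
        return f"{self.name}: {self.component.items}"

todo = LoopComponent()
chat = Surface("Teams chat", todo)
doc = Surface("Word doc", todo)

todo.add("Review draft")  # edit once...
print(chat.render())      # ...and every surface shows the update
print(doc.render())
```

The real Fluid Framework adds the hard parts, concurrent edits, conflict resolution, and network sync, but the mental model is the same: shared state, many views.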

Azure Cosmos DB

Microsoft has also delivered on its commitment to push the boundaries of the Azure database’s data analytics capabilities. According to the company, one upgrade offers new functionality for Azure Cosmos DB aimed at making Apache Cassandra data transfer easier. Azure Managed Instance for Apache Cassandra is now broadly available, with an automated synchronization feature that lets hybrid data run in the cloud and on-premises.

Similar to Apache HBase and Google Cloud Bigtable, Cassandra is an open-source column family store NoSQL database. The Azure Cassandra service features an automated synchronization capability that allows users to sync data across their own Cassandra servers, both on-premises and in the cloud.

Microsoft adds that all Azure Advisor users can now configure alerts for throughput spending limits to help them keep track of their spending.

SQL Server 2022

During Executive Vice President Scott Guthrie’s keynote on Tuesday morning, Microsoft announced SQL Server 2022. Because the product is still in private preview, the complete details of the release aren’t available yet. Microsoft, however, revealed some fascinating insights about the forthcoming update.

For example, SQL Server 2022 will support migrations to Managed Instance via distributed availability groups, allowing for database migrations with near-zero downtime.

Managed Instance is a platform-as-a-service (PaaS) product designed to make moving from on-premises or cloud-based virtual machines to PaaS as simple as possible. However, there were a few major difficulties that users had to deal with, e.g., performing migrations with minimal downtime necessitated a complicated database migration services solution.

SQL Server 2022 will also provide write access to the Query Store from readable secondaries. This will give you more visibility into what's going on with those secondary replicas and more tuning options.

Microsoft Viva

Microsoft Viva is a Microsoft 365-based workforce platform that offers learning, resources, insights, knowledge, and communications. Until recently, the platform consisted of individual modules such as Connections, Insights, Learning, and Topics. Microsoft announced at Ignite 2021 that Microsoft Viva is now available as a full package.

Topics, Insights, Learning, and Connections are all part of the package. Microsoft Viva Insights is receiving additional capabilities to improve collaboration and productivity throughout the fully integrated platform.

Headspace’s guided meditations and mindfulness exercises in the Viva Insights app for Teams will be accessible in more languages later this month — French, German, Portuguese, and Spanish — to promote mindfulness and wellbeing throughout the workday.

Microsoft Excel API

At the Ignite conference, Microsoft unveiled an upgrade to Excel that adds a new JavaScript API to the classic spreadsheet software. Developers will be able to construct custom data types and functions using this new API.

Over the last several years, Excel has begun to introduce a variety of new data types, first allowing users to pull in stock and geographic data from the cloud, then Power BI and Power Query data types, which let users work with their own data. Customers will now be able to design their own add-ins, and enhance existing ones, to build on data types, resulting in a more integrated, next-generation experience within Excel, according to Microsoft. Developers can also define unique data types that are relevant to their businesses.
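As a rough illustration of what a custom data type adds over a plain cell value, here is a hypothetical Python sketch; the real feature is exposed through Excel's JavaScript API, and the `EntityCellValue` class and the Contoso data below are invented for illustration:

```python
# Conceptual sketch of a "custom data type" in a cell: instead of a bare
# scalar, the cell carries a typed entity with named properties, while
# still exposing a plain display string for the grid.
from dataclasses import dataclass, field

@dataclass
class EntityCellValue:
    display: str                                     # what the grid shows
    properties: dict = field(default_factory=dict)   # rich, typed payload

cell = EntityCellValue(
    display="Contoso Ltd.",
    properties={"ticker": "CNTS", "employees": 12000, "hq": "Redmond"},
)
print(cell.display)                  # the cell still renders as plain text
print(cell.properties["employees"])  # but structured fields travel with it
```

The idea is that formulas and add-ins can reach into `properties` instead of parsing a string out of the cell.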

Azure Container Apps

Azure Container Apps is a new fully managed serverless container service from Microsoft that complements the company's existing container offerings, such as Azure Kubernetes Service (AKS). Azure Container Apps, according to Microsoft, was designed primarily for microservices, with the ability to scale quickly based on HTTP traffic, events, or long-running background operations.

As per Microsoft, with Azure Container Apps, developers will be able to design their apps in the language and framework of their choice and then deploy them using this new service.
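As a rough sketch of the scale-on-HTTP-traffic idea, the toy Python function below applies a KEDA-style concurrency rule; the parameter names and numbers are invented, and this is not the service's actual scaling algorithm:

```python
import math

def desired_replicas(concurrent_requests, target_concurrency=10,
                     min_replicas=0, max_replicas=25):
    """Toy scaling rule: run enough replicas that each handles roughly
    `target_concurrency` concurrent HTTP requests, scaling to zero
    when the app is idle and capping at a configured maximum."""
    if concurrent_requests == 0:
        return min_replicas
    needed = math.ceil(concurrent_requests / target_concurrency)
    return max(min_replicas, min(needed, max_replicas))

print(desired_replicas(0))    # 0: idle apps can scale to zero
print(desired_replicas(95))   # 10: ceil(95 / 10)
print(desired_replicas(999))  # 25: capped at max_replicas
```

Scale-to-zero is what makes the model "serverless": when no traffic arrives, no replicas run and nothing is billed for compute.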

Azure Stack HCI

Microsoft’s Azure Stack HCI hyper-converged infrastructure solution has been enhanced with infrastructure-level features including GPU support for AI and machine learning, kernel soft reboot, and thin provisioning. Management-level changes include virtual machine creation and administration through the Azure portal. All Azure Stack HCI integrated systems now ship with Secured-core server capabilities.

A new Azure Stack HCI partner program provides users with additional certified solutions and services from independent software vendors.

Microsoft has released a preview of Azure Virtual Desktop for Azure Stack HCI, as well as new autoscale functionality for AVD that allows users to plan the start and stop of session hosts.

Context IQ

Context IQ is another new Microsoft Office experience unveiled at Ignite Fall Edition. It is an artificial intelligence service that surfaces relevant suggestions and information at the right moment. Microsoft Editor, the company’s spellchecking and proofreading product, will be the first platform to benefit from Context IQ. Context IQ allows Editor not only to fix what you’ve typed but also to predict what information you’ll want to write. It can, for instance, recommend files to share with a coworker based on projects you’re working on when you try to schedule a meeting through email, and it can do much more.

Read More: Introducing MT-NLG: The World’s Largest Language Model by NVIDIA and Microsoft

Context IQ can also surface related Dynamics 365 sales records as well as data from third-party plugins like Jira, Zoho, and SAP. Users will be able to enter information without switching between email and other apps. In Teams, pressing Tab will prompt Editor to complete a statement, such as filling in a frequent flier number when booking a trip online.

Azure Chaos Studio

Microsoft has released a preview of Azure Chaos Studio, a tool for experimenting with application resilience. The platform includes a library of faults, injected through agents or services, as well as continuous validation to ensure that product quality is maintained.

Users of Chaos Studio may deliberately interrupt programs with network delay, unexpected storage failures, secret expiration, a full data center outage, and other real-world events to uncover weaknesses and create mitigations before users are affected.
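The fault-injection idea can be sketched in a few lines of Python. The wrapper below is a self-contained toy, not the Azure Chaos Studio API: it injects latency and random failures into a callable so that a retry-based mitigation can be exercised before real users are affected.

```python
import random
import time

def with_chaos(func, failure_rate=0.3, max_delay=0.05, seed=None):
    """Wrap a callable so it sometimes fails or responds slowly,
    mimicking the kinds of faults a chaos experiment injects."""
    rng = random.Random(seed)
    def chaotic(*args, **kwargs):
        time.sleep(rng.uniform(0, max_delay))     # injected network-style latency
        if rng.random() < failure_rate:
            raise TimeoutError("injected fault")  # injected dependency failure
        return func(*args, **kwargs)
    return chaotic

def resilient_call(func, retries=5):
    """The mitigation under test: retry on injected failures."""
    for _ in range(retries):
        try:
            return func()
        except TimeoutError:
            continue
    raise RuntimeError("service unavailable after retries")

flaky = with_chaos(lambda: "ok", failure_rate=0.5, seed=42)
print(resilient_call(flaky))  # retries absorb the injected faults
```

A real chaos experiment works the same way at system scale: deliberately introduce the fault, then verify the mitigation holds.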

Microsoft Edge on Linux

Microsoft Edge has been out of beta for almost a year on Windows and macOS desktops, and the browser is finally ready for a new platform. Microsoft revealed during the Microsoft Ignite 2021 event that Edge is now fully available on Linux in the stable channel.

This means Edge has finally caught up to Google Chrome and Mozilla Firefox, both of which have long been available on Linux. You can get Microsoft Edge for Linux by visiting the Edge download website and downloading the .deb or .rpm package. Microsoft is commemorating the release of Edge Stable on Linux with a bonus: the Edge Surf game.

Dynamics 365

Microsoft unveiled three new Dynamics 365 capabilities, each with varying levels of availability, to help businesses better their connections with customers, workers, and suppliers. These new tools are Dynamics 365 Supply Chain Insights, Dynamics 365 Customer Service voice channel, and Dynamics 365 Connected Spaces.

Dynamics 365 Supply Chain Insights assists organizations in avoiding possible supply interruptions by providing data “in near real-time” from multiple partners, such as logistics partners, data providers, and suppliers. The technology, which is now in preview, allows businesses to establish a digital version of their physical supply chain and improve end-to-end visibility throughout their whole value chain.

It also integrates data on global events that might have an influence on supply chains, such as political upheaval, natural catastrophes, or pandemic resurgences.

Dynamics 365 Customer Service voice channel is a voice-enabled, SaaS-based customer service application that is now generally accessible. According to Microsoft, it’s built on Microsoft Teams technology and combines the capabilities of conventional Contact Center as a Service (CCaaS), Unified Communications as a Service (UCaaS), and Customer Engagement Center (CEC) into a single solution.

Dynamics 365 Connected Spaces, which will be available in preview in early December, lets businesses easily harness observational data, leverage AI-powered models to discover insights about their surroundings, and respond in real time to trends and patterns.

Microsoft Defender for Business

At the Microsoft Ignite fall edition, Microsoft unveiled a brand-new security suite tailored to the threats that small and medium-sized organizations face. Called Microsoft Defender for Business, the suite, launched at Microsoft Ignite 2021, aims to offer what Microsoft calls “enterprise-grade endpoint protection” from Microsoft Defender for Endpoint to businesses with 300 or fewer employees.

Defender for Business, built specifically to defend organizations from malware and ransomware on Windows, macOS, iOS, and Android devices, will let clients build a secure foundation by finding and correcting software vulnerabilities and misconfigurations. Its endpoint detection and response (EDR) feature will help SMBs identify persistent threats and remove them from their environments using behavioral-based detection and response alerts. It can also minimize alert volume and remediate threats.

You can watch Microsoft Ignite Fall Edition 2021: Day 1 here:


Synopsys Acquires Performance Optimization Leader Concertio


United States-based software development company Synopsys has acquired Concertio, a leader in artificial intelligence-enabled real-time performance optimization. Company officials have not disclosed the valuation of the acquisition deal. 

With this acquisition, Synopsys aims to further enhance its SiliconMAX Silicon Lifecycle Management platform. Synopsys plans to use Concertio's expertise to maximize the platform's performance across manufacturing, product ramp, product test, and in-field operations. 

General manager of Silicon Realization Group at Synopsys, Shankar Krishnamoorthy, said, “This acquisition of Concertio reflects our commitment to aggressively expand the capabilities and benefits of the SiliconMAX SLM platform to address our customers’ rapidly evolving silicon and system health needs.” 

Read More: Hugging Face’s paper got the best demonstration paper award at EMNLP 2021

He further added that lifecycle management systems have become critical to successfully deploying and operating advanced electronic systems. Concertio's unique artificial intelligence-enabled performance optimizer will enable Synopsys to boost the performance of devices in areas such as automotive and industrial IoT. 

Concertio’s software is installed directly on the target device and monitors its interactions. The software uses artificial intelligence techniques such as reinforcement learning to learn how the device behaves and then optimizes its settings to enhance performance. 

Concertio is a United States-based software firm founded by Tomer Morad, Tomer Paz, and Andrey Gelman in 2016. Since its establishment, the company has earned a strong reputation in its industry by providing best-in-class performance optimization solutions. 

Concertio had raised over $4 million in its seed funding round from investors including Differential Ventures, NextLeap Ventures, and Big Red Ventures before being acquired by Synopsys.
