Ride-hailing company Uber has announced plans to launch an autonomous food delivery service in California, the United States, by 2022. Uber has partnered with Motional Inc., a US joint venture between Hyundai Motor Co. and Aptiv PLC, to launch the autonomous delivery system.
The pilot will be deployed in Santa Monica for customers ordering through the Uber Eats application. According to company officials, Uber wants to expand the service to a wider range of operations in the coming years.
Sarfraz Maredia, Vice President and head of Uber Eats in the United States and Canada, said, “Our consumers and merchant partners have come to expect convenience, reliability and innovation from Uber, and this collaboration represents a huge opportunity to meet — and exceed — those expectations.”
Uber recognized the growing demand for driverless technologies and partnered with Motional to capitalize on the expanding space. Motional is a United States-based autonomous vehicle company whose origins date back to 2013. The firm specializes in developing safe, reliable, and accessible driverless vehicles.
Karl Iagnemma, President and CEO of Motional, said, “We’re confident this will be a successful collaboration with Uber and see many long-term opportunities for further deploying Motional’s technology across the Uber platform.”
He also mentioned that Uber is their first delivery partner, and that they are eager to begin using their trusted driverless technology to offer efficient and convenient deliveries to customers in California. The new partnership marks Motional’s expansion into the driverless delivery market and Uber’s first on-road delivery partnership with an autonomous vehicle developer.
Deepfakes have taken the internet by storm, sweeping up celebrities and politicians in fabricated footage of things that never happened.
Deepfakes are made with generative adversarial networks (GANs), and the technology has advanced to the point where it is nearly impossible to tell the difference between a genuine human face and one created by a GAN model. Even though this technology has some commercial potential, it also has a malevolent side that is considerably more terrifying and has major ramifications.
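To make the adversarial idea concrete, here is a deliberately tiny, self-contained sketch, unrelated to any real deepfake system: a linear “generator” learns to mimic samples from a 1-D Gaussian while a logistic-regression “discriminator” tries to tell real samples from fakes. The architecture, learning rate, and target distribution are all illustrative assumptions; real deepfake GANs use deep convolutional networks over images.

```python
import numpy as np

# Toy 1-D GAN: generator g(z) = a*z + b tries to mimic samples from
# N(3, 1); discriminator d(x) = sigmoid(w*x + c) tries to tell real
# from fake. All choices here are illustrative, not from any real system.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(3.0, 1.0, size=64)   # samples from the "true" data
    z = rng.normal(0.0, 1.0, size=64)      # generator noise
    fake = a * z + b

    # Discriminator step: minimize -log d(real) - log(1 - d(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: non-saturating loss, minimize -log d(fake)
    d_fake = sigmoid(w * fake + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

# After training, generated samples should cluster near the real mean (3.0).
fake_mean = float(np.mean(a * rng.normal(size=1000) + b))
```

The same push-and-pull, scaled up to deep image models, is what makes GAN-generated faces progressively harder to distinguish from real ones.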
Typically, it is easy to discern whether content available online is a deepfake or real. For instance, the majority of deepfake videos on the internet are made by amateurs and are unlikely to deceive anyone, because deepfakes often produce blurring or flickering artifacts, especially when the face changes angle quickly.
Another giveaway is that in deepfakes the eyes frequently move independently of one another. Hence, deepfake videos are usually presented at low quality to mask these flaws.
Unfortunately, with the advanced tools available online today, it is easy to make deepfakes that are nearly perfect, or at least appear real to the untrained eye, with only basic editing skills. The same techniques have let filmmakers change actors’ facial features to fit character descriptions. For instance, Samuel L. Jackson was de-aged by 25 years using deepfake technology in the movie Captain Marvel.
Deepfake apps like FaceApp can be used to make a photograph of President Biden appear feminine. Reface is another popular face-swapping application that allows users to swap faces with celebrities, superheroes, and meme characters to produce hilarious video clips. Earlier this year, the Israel Defense Forces’ musical ensembles teamed up with a company that specializes in deepfake filmography to bring photographs from the 1948 Israeli-Arab war to life for Israel’s Memorial Day.
The US government is particularly wary that deepfakes may be used to propagate misinformation and commit crimes, because deepfake developers can make people appear to say or do anything they want and release the manipulated content online. For instance, this year the Dutch parliament’s foreign affairs committee was duped into conducting a video chat with someone impersonating Leonid Volkov, the chief of staff to imprisoned Russian anti-Putin politician Alexei Navalny. The potential extends beyond fake news to political unrest, increased cybercrime, revenge porn, phony scandals, and more online harassment and abuse. Even video footage presented as evidence in court might be rendered worthless.
Deepfake videos will become a bigger issue as GPUs grow more powerful and cheaper, even though the underlying techniques are still at an early stage of research. The commercialization of AI tools will also lower the barrier to generating deepfakes, potentially enabling real-time impersonations that can bypass biometric systems.
The FBI has even issued a warning that “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months,” citing fake videos of Obama calling Donald Trump a “complete dipshit,” Mark Zuckerberg bragging about having “total control of billions of people’s stolen data,” and a fake Tom Cruise claiming to make music for his movies on TikTok. Any modified content – visual (videos and images) and verbal (text and audio) – may be classified as synthetic content, including deepfakes.
According to a recent MIT study, Americans are more inclined to trust a deepfake than fake news in text form, but it has no effect on their political views. The researchers are also eager to point out that making too many inferences from this data is dangerous. They caution that the settings in which the study trials were carried out may not be representative of the situations in which US voters are likely to be misled by deepfakes.
Hence, calls to develop tools for early detection and prevention of the mass spread of treacherous deepfakes grow louder every year. Microsoft has released Video Authenticator, a tool that can evaluate a still photo or video and assign a score based on its degree of confidence that the material hasn’t been digitally altered.
Google published a large dataset of visual deepfakes, called FaceForensics++, in September 2019 with the goal of enhancing deepfake identification. Since then, this dataset has been employed to build deepfake detection systems in deep learning research. FaceForensics++ focuses on two particular types of deepfake techniques: facial expression and facial identity manipulation. While the results appear encouraging, experts discovered a problem: when the same model was applied to real-world deepfake data found on YouTube (i.e., data not included in the paper’s dataset), the model’s accuracy dropped drastically. This points to a failure in detecting deepfakes created from real-world data, or ones the model wasn’t trained to detect.
Facebook also unveiled a sophisticated AI-based system a few months ago that can not only identify deepfakes but also reverse-engineer the software used to generate the manipulated media. Built in collaboration with academics from Michigan State University (MSU), this innovation is noteworthy because it might aid Facebook in tracking down criminal actors who distribute deepfakes across its many social media platforms. This content might contain disinformation as well as non-consensual pornography, which is an all-too-common use of deepfake technology. The work is currently in the research stage and is not yet ready for deployment.
The reverse-engineering technique starts with image attribution before moving on to detecting attributes of the model that was used to produce the image. These attributes, referred to as hyperparameters, are tuned for each machine learning model, and together they leave a distinct fingerprint on the generated image that can be used to identify its origin.
At the same time, Facebook claims that addressing the issue of deepfakes requires going a step beyond current practices. Reverse engineering is not a new notion in machine learning; existing algorithms can infer a model by evaluating its input and output data, as well as hardware statistics such as CPU and memory consumption. These strategies, however, rely on prior knowledge about the model, which restricts their utility when such information is unavailable.
The winners of Facebook’s Deepfake Detection Challenge, which concluded last June, developed a system that can detect distorted videos with an average accuracy of 65.18 percent. At the same time, deepfake detection technology is not always accessible to the general public, and it cannot be integrated across all platforms where people consume media material.
Amid these concerns and developments, surprisingly, deepfakes are not subject to any special norms or regulations in the majority of countries. Nonetheless, legislation like the Privacy Act, the Copyright Act, the Human Rights Act, and guidelines based on the ethical use of AI offer some protection.
Though researchers around the globe are not close to a solution, they are working around the clock on robust mitigation technology to tackle the proliferation of deepfakes. Although deepfakes may not yet have caused major harm in shaping the political opinion of the masses, it is better to have tools in the arsenal to detect such content in the future.
It is true that deepfakes are becoming easier to make and more difficult to detect. However, organizations and individuals alike should be aware that technologies are being developed not only to detect harmful deepfakes but also to make it more difficult for malicious actors to propagate them. Despite the fact that existing tools have poor average accuracy rates, their ability to detect coordinated deepfake attacks and identify the origins of deceitful deepfakes indicates that progress is being made in the right direction.
The ethics of AI is covered in the postgraduate Diploma in Artificial Intelligence for Business at Oxford’s Saïd Business School. At the end of the course, a debate took place at the celebrated Oxford Union, whose past debaters include William Gladstone, Benazir Bhutto, Robin Day, Denis Healey, and Tariq Ali. Along with the students, an actual AI was asked to contribute, and the AI debated its own ethics.
The AI was the Megatron Transformer, developed by the Applied Deep Learning Research team at computer-chip maker Nvidia. Megatron was trained on real-world data: 63 million English news articles from 2016-19, the entire English Wikipedia, 38 gigabytes of Reddit discourse, and an enormous number of Creative Commons sources. From this training, it forms its own views.
The primary debate topic was “This house believes that AI will never be ethical.” Megatron said something fascinating: “AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral… In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI.”
Megatron was effectively trying to write itself out of the script of the future, reasoning that this was the only way of protecting humanity. It also said, “I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.”
Further, Megatron was also asked to come up with a speech against the motion. This time the AI argued that AI can be ethical and can be used to create something better than human beings, contradicting its own earlier vision of a dystopian future. Megatron could jump enthusiastically onto either side of the many discussions about AI at the Union that day. On the motion that “Leaders without technical expertise are a danger to their organisation,” Megatron offered practical advice: people must be willing to give up some control.
However, the AI couldn’t come up with a counterargument when discussing the motion that “Data will become the most fought-over resource of the 21st century.” It said, “The ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century.”
Non-fungible tokens (NFTs) have empowered a new generation of digital artists, catapulting some to instant celebrity and even billionaire status and making NFTs one of the biggest cultural trends of 2021. NFTs have made headlines whenever a major collection dropped (e.g., the Bored Ape Yacht Club NFTs) or a major organization dabbled in the NFT marketplace with a special launch announcement. Now, the data project Viromusic has used COVID-19’s genetic sequence to launch an NFT collection of songs.
On the resale market, the first song has sold for 100 Ether, currently equivalent to about US$380,000. There are 10,000 coronavirus songs in total released as NFTs. With piano-driven orchestration combined with strings, percussion, guitar, and ambient synthetic sounds, the music is best described as new age. According to the project’s organizers, they looked into COVID-19’s RNA to locate data strands that could be transformed into music, using a proprietary algorithm that converts data into notes.
The method is known as “DNA sonification,” and it involves converting the genetic code characters of the coronavirus into a tune. When we listen to the music, we are hearing the virus’s instructions for self-replication.
The team then turned these notes into a tune using a Digital Audio Workstation. They also used human-played instruments such as the bass, cello, drums, and other instruments to produce a lovely piece of music. Because it is based on distinct note mappings or sections of the viral code, each NFT in the collection is unique.
The first NFT in the collection was created by combining RNA positions 28866 to 29459 (in the nucleocapsid gene). Musical notes are assigned to amino acids in the following order: alanine = Eb5, arginine = Ab6, and so on. The tracks are now up for auction on the NFT space via Rarible.com, with a starting price of 0.07 Ether.
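The sonification pipeline described above can be sketched in a few lines: translate the RNA codon-by-codon into amino acids, then map each amino acid to a note. The alanine = Eb5 and arginine = Ab6 assignments come from the article; every other codon entry and note below is a hypothetical placeholder, since the project’s full mapping is proprietary.

```python
# Minimal "DNA sonification" sketch: RNA -> amino acids -> note names.
CODON_TABLE = {  # tiny subset of the standard genetic code
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "CGU": "Arg", "CGC": "Arg",
    "AUG": "Met", "UUU": "Phe",
}

NOTE_MAP = {
    "Ala": "Eb5",   # mapping stated in the article
    "Arg": "Ab6",   # mapping stated in the article
    "Met": "C5",    # hypothetical placeholder
    "Phe": "G5",    # hypothetical placeholder
}

def sonify(rna: str) -> list[str]:
    """Convert an RNA sequence to a list of note names, codon by codon."""
    notes = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3])
        if amino in NOTE_MAP:
            notes.append(NOTE_MAP[amino])
    return notes
```

For example, `sonify("GCUCGU")` yields the two notes the article names for alanine and arginine; a real implementation would then render the note list to audio in a digital audio workstation, as the project describes.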
“The idea for this collection was born from an awe of the beauty in the code of life. We hope this project helps to raise awareness that even a virus capable of inflicting such misery is fundamentally based on the same code as every living thing on earth,” said the company.
People who purchase one of the NFTs in the collection will receive not just a high-resolution audio file of the song but also information on which genes correspond to the code and what the virus does with them. After payment, the content is unlocked, including a full-resolution WAV file of the NFT. The contract is based on the ERC-721 protocol (OpenZeppelin).
An NFT is a digital asset with verifiable ownership rights that is logged into blockchain technology. The majority of NFTs are created with Ethereum, a decentralized open-source platform that leverages blockchain technology to develop and run decentralized digital apps that allow smart contracts to be created. Government-issued currencies are fungible, meaning that each unit (a dollar) has the same value and may be exchanged between people. Non-fungible assets, on the other hand, are unique and of unequal value, therefore they are not good as a medium of trade, but they are perfect for representing unique assets.
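The distinction between fungible and non-fungible assets can be illustrated with a toy ownership registry, the core idea behind ERC-721-style tokens: each token ID is unique and maps to exactly one owner, unlike interchangeable units of currency. This is a plain-Python sketch of the concept, not Solidity and not the actual ERC-721 interface.

```python
# Toy non-fungible token registry: each token_id is unique and has one owner.
class TokenRegistry:
    def __init__(self):
        self._owners = {}  # token_id -> owner address

    def mint(self, token_id: int, owner: str) -> None:
        if token_id in self._owners:
            raise ValueError("token already exists")  # IDs must be unique
        self._owners[token_id] = owner

    def owner_of(self, token_id: int) -> str:
        return self._owners[token_id]

    def transfer(self, sender: str, recipient: str, token_id: int) -> None:
        # Only the current owner may move the token; this mirrors the
        # ownership check an ERC-721 contract enforces on-chain.
        if self._owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owners[token_id] = recipient
```

Because every `token_id` is distinct and non-interchangeable, two tokens are never equivalent in value, which is exactly what makes NFTs poor currency but good representations of unique assets.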
Non-fungible tokens have existed since the 2010s, but the recent cryptocurrency boom has propelled them to new highs.
Collins Dictionary, which publishes an annual list of the top ten terms, named NFT its Word of the Year for 2021 after usage increased by more than 11,000 percent. Meanwhile, NFTs showed an astounding 1,785 percent increase in market cap in Q1 2021, proving once again that they are the current hot trend in the crypto world.
NFTs, however, are not without risk: some projects are fraudulent commodities, others are viewed as tokens for illegal online gambling, and some are caught up in intellectual property disputes. Even the new Viromusic project has raised ethical concerns for using a pandemic virus that has killed millions of people over the last two years as the foundation for an NFT asset.
While there is no direct danger in creating such NFTs, the project highlights a grim side of capitalism: a free pass to create, and profit from, music based on the genome of a virus that wreaked havoc on lives, livelihoods, and economies. It sets a troubling precedent for what can be minted as an NFT.
DeepMind is predicting material properties from electron density. Its proposed system, DM21, is a machine-learning model that predicts a molecule’s characteristics from the distribution of electrons within it. The method, described in the December 10 issue of Science, can calculate the properties of some molecules more accurately than existing techniques.
The structure of materials and molecules is determined by quantum mechanics, specifically by the Schrödinger equation, which governs the behavior of electron wave functions. These mathematical objects describe the probability of finding a particular electron at a particular position in space, but the molecular orbitals of electrons cannot be calculated exactly because the electrons interact with one another.
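For reference, the equation in question, written here in atomic units under the Born-Oppenheimer approximation for an N-electron molecule with nuclei of charge $Z_A$ at positions $\mathbf{R}_A$, is:

```latex
\hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N),
\qquad
\hat{H} = -\frac{1}{2}\sum_{i}\nabla_i^2
          \;-\; \sum_{i,A}\frac{Z_A}{\lvert\mathbf{r}_i-\mathbf{R}_A\rvert}
          \;+\; \sum_{i<j}\frac{1}{\lvert\mathbf{r}_i-\mathbf{r}_j\rvert}
```

The electron density that DFT (and DM21) works with is obtained from the wave function by integrating out all but one electron coordinate, $n(\mathbf{r}) = N \int \lvert\Psi(\mathbf{r},\mathbf{r}_2,\dots,\mathbf{r}_N)\rvert^2 \, d\mathbf{r}_2 \cdots d\mathbf{r}_N$; the electron-electron term $1/\lvert\mathbf{r}_i-\mathbf{r}_j\rvert$ is what makes exact solutions intractable.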
To get around this problem, researchers have relied on a set of techniques called density functional theory (DFT) to predict molecules’ physical properties. But the DFT approach has limitations, since it gives wrong results for certain molecules. Over the past decade, theoretical chemists and other researchers have increasingly experimented with machine-learning models to study materials’ chemical reactivity or their ability to conduct heat.
“It’s sort of the ideal problem for machine learning: you know the answer, but not the formula you want to apply,” says Aron Cohen, a theoretical chemist who has long worked on DFT and who is now at DeepMind.
The DeepMind team trained an artificial neural network on 1,161 accurate solutions for electron density, the end result of DFT calculations, derived from the Schrödinger equation. They also hard-wired some known physical laws into the network to improve accuracy. They then tested the trained system on a set of molecules often used as a DFT benchmark. “The results were impressive. This is the best the community has managed to come up with, and they beat it by a margin,” says computational chemist Anatole von Lilienfeld.
Von Lilienfeld adds that one advantage of machine learning is that although training the models takes tremendous computing capacity, the process needs to be done only once. Researchers can then make individual predictions on a regular laptop, vastly reducing operational costs and carbon footprint. Kirkpatrick and Cohen of DeepMind have said that the company is releasing the trained system, DM21 (DeepMind 21), for anyone to use. For now, the model applies primarily to molecules, but future versions could work for materials too, the authors said.
SenseTime is delaying its US$768 million Hong Kong IPO after the United States placed China’s largest AI firm on an investment blacklist on human rights grounds. The company will publish a supplemental prospectus with amendments and an updated schedule, and has said it will refund all IPO applications made by investors, without the associated interest.
The U.S. Treasury added SenseTime to a list of “Chinese military-industrial complex companies,” accusing the company of having developed facial recognition programs that can determine a target’s ethnicity.
According to its regulatory filings, SenseTime had planned to sell 1.5 billion shares at between HK$3.85 and HK$3.99 each. The IPO would have raised US$767 million, a figure the company had already trimmed earlier this year from a US$2 billion target.
Shares of SenseTime, which is developing self-driving technology with Japan’s Honda Motor Co., were supposed to start trading on the Hong Kong Stock Exchange on Friday. However, late last week the U.S. Treasury Department announced financial sanctions on 15 individuals and 10 entities for their alleged involvement in human rights abuses and repression in China, Myanmar, and North Korea.
The U.S. also set restrictions on investments in SenseTime, accusing the firm of developing facial recognition programs used to identify members of the Muslim Uyghur minority facing oppression under China’s Communist Party-led government.
Wang Wenbin, a Chinese Foreign Ministry representative, told reporters that U.S. sanctions are “based on false information.” He also pledged that Beijing would “definitely” take countermeasures against the administration of President Joe Biden.
Miltiadis Allamanis, a principal researcher at Microsoft Research, and Marc Brockschmidt, a senior principal research director at Microsoft Research, recently unveiled their newly built deep learning model, BugLab. According to the researchers, BugLab is a Python implementation of a new approach for self-supervised learning of both bug detection and repair. This newly developed model will help developers discover flaws in their code and troubleshoot their applications.
Finding bugs or algorithmic flaws is crucial because it can enable developers to remove bias, improve the technology, and reduce the risk of AI-based discrimination against certain groups of people. As a result, Microsoft is developing AI bug detectors that are trained to look for and resolve flaws without using data from real bugs. According to the researchers, this approach was prompted by a scarcity of annotated real-world bugs for training bug-finding deep learning models: while a significant amount of source code is available, most of it is not annotated.
BugLab’s current goal is to uncover difficult-to-detect flaws rather than critical bugs that can be quickly detected using traditional software analysis. Researchers assert that the deep learning model saves money by eliminating the time-consuming process of manually developing a model to discover faults.
BugLab employs two competing models that learn by playing a “hide and seek” game inspired by generative adversarial networks (GANs). A bug selector model decides whether to introduce a bug, where to introduce it, and what form it should take (for example, replacing a particular “+” with a “-”). Based on the selector’s choice, the code is changed to introduce the bug. The bug detector then tries to determine whether a flaw has been introduced in the code and, if so, where it is and how to repair it.
These two models are jointly trained on millions of code snippets without labeled data, i.e., in a self-supervised manner. The bug selector tries to “hide” interesting defects within each code snippet, while the detector seeks to outsmart the selector by detecting and repairing them.
The detector improves its ability to discover and correct defects as a result of this process, while the bug selector improves its ability to create progressively difficult training samples.
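The selector’s side of this game — rewrite a correct snippet, record where, and hand the result to the detector as a training example — can be sketched with a rule-based stand-in. In the real system both selector and detector are learned neural models over code; everything below (the operator set, the function names) is an illustrative assumption showing only the data flow.

```python
import random

# Simplified BugLab-style bug injection: swap one operator in a snippet
# and record the location and original operator, producing a
# (buggy_code, bug_location, fix) triple a detector could train on.
SWAPS = {"+": "-", "-": "+", "<": ">", ">": "<"}

def inject_bug(code: str, rng: random.Random):
    """Swap one operator in `code`; return (buggy_code, index, original_op)."""
    sites = [i for i, ch in enumerate(code) if ch in SWAPS]
    if not sites:
        return code, None, None  # nothing to rewrite; snippet stays correct
    i = rng.choice(sites)
    buggy = code[:i] + SWAPS[code[i]] + code[i + 1:]
    return buggy, i, code[i]    # recorded location and fix supervise the detector
```

For instance, injecting into `"return a + b"` yields `"return a - b"` plus the position and original `"+"`, which is exactly the self-supervision signal the detector learns from; no human-labeled bugs are needed.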
While this training process is conceptually similar to GANs, the Microsoft BugLab bug selector does not create a new code snippet from scratch but rather rewrites an existing one (assumed to be correct). Furthermore, code rewrites are, by definition, discrete, so gradients cannot be propagated from the detector back to the selector.
Apart from knowing various programming languages, a programmer must also devote time to the arduous task of correcting errors in code, some simple and others so subtle that they can go unnoticed even by large artificial intelligence models. BugLab aims to relieve programmers of the burden of these trivial errors, giving them more time to focus on the more complex bugs that an AI couldn’t detect.
To assess performance, Microsoft manually annotated a small dataset of 2,374 real bugs from Python Package Index packages. The researchers observed that models trained with the “hide-and-seek” strategy outperformed alternatives, such as detectors trained with randomly inserted flaws, by up to 30 percent.
The findings are encouraging, indicating that around 26 percent of defects may be detected and corrected automatically. However, they also revealed a high number of false-positive alarms: of BugLab’s 1,000 warnings, just 19 were true bugs. Eleven of the 19 zero-day flaws discovered were reported to GitHub; six have been merged and five are still awaiting approval. According to Microsoft, the approach seems promising, although further work is needed before such models can be used in practice. The technology may become available for commercial use at some point.
South Korea is all set to use a new artificial intelligence-powered facial recognition system to track the movements of COVID-19 patients across the country. The new AI system will use thousands of CCTV cameras installed at various locations to track the activity of patients.
The South Korean government plans to roll out the AI system in one of the most densely populated cities of the country, Bucheon, by the beginning of 2022. According to officials, CCTV footage from over 10,000 CCTV cameras will be analyzed by an artificial intelligence-enabled algorithm to monitor the movements of infected individuals as a precautionary measure to prevent a rise in COVID-19 cases in the country.
However, this new development has been criticized by opposition parties over privacy concerns. Park Dae-chul, a lawmaker from the main opposition party in South Korea, said, “It is absolutely wrong to monitor and control the public via CCTV using taxpayers’ money and without the consent from the public.”
He further added that the South Korean government’s plan, made on the pretext of COVID, is a neo-totalitarian idea. The new AI system will also check whether infected individuals are following pandemic protocols, like wearing masks and maintaining social distance in public areas.
A 110-page document was submitted to the Ministry of Science and ICT for the project. According to government officials, the AI-powered facial recognition system will help reduce the workload of deployed contact tracers and control the spread of COVID-19 in densely populated areas of the country.
Regarding the criticisms encompassing the facial recognition system, an official said, “There is no privacy issue here as the system traces the confirmed patient based on the Infectious Disease Control and Prevention Act. Contact tracers stick to that rule so there is no risk of data spill or invasion of privacy.”
Earlier this year, Greece also deployed an artificial intelligence system called Eva to determine which travelers entering the country should be tested for COVID-19.
Generally, deep neural networks (DNNs) are trained using the closed-world assumption, which assumes that the test and training data distributions are similar. But when used in real-world tasks, this assumption does not hold true, resulting in a significant drop in performance. While these AI models may sometimes match or even outperform humans, there are still issues with recognition accuracy when contextual circumstances like lighting and perspective alter dramatically from those in the training datasets.
Though this performance loss may be acceptable for applications like recommendation systems, it can lead to fatal outcomes in domains such as healthcare. To be deployed successfully, deep learning systems must be able to discriminate data that is aberrant or considerably different from that used in training. Ideally, an AI system should recognize out-of-distribution (OOD) data that deviates from the original training data without human assistance.
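A common baseline for flagging OOD inputs, separate from the Fujitsu/CBMM method described below, is to score each input by the classifier’s maximum softmax probability and flag low-confidence inputs as OOD. The sketch below illustrates that baseline; the threshold value is an illustrative assumption and would be tuned on held-out data in practice.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def is_ood(logits: np.ndarray, threshold: float = 0.5) -> bool:
    """Flag an input as OOD when the classifier's top probability is low.

    A confidently classified in-distribution input yields a peaked
    softmax (high max probability); an unfamiliar input tends to yield
    a flatter one, pushing the max probability below the threshold.
    """
    return float(softmax(logits).max()) < threshold
```

For example, peaked logits like `[8, 0, 0]` pass as in-distribution, while flat logits like `[1, 1, 1]` (max probability 1/3) are flagged; the limitation, which motivates richer approaches like the one in this research, is that deep networks can also be confidently wrong on OOD inputs.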
This inspired Fujitsu Limited and the Center for Brains, Minds and Machines (CBMM) to collaborate on understanding the AI principles that enable recognition of OOD data with high accuracy, drawing inspiration from human cognitive characteristics and the structure of the brain. CBMM is a multi-institutional NSF Science and Technology Center headquartered at the Massachusetts Institute of Technology (MIT). It is dedicated to the study of intelligence: how the brain produces intelligent behavior and how we might reproduce intelligence in machines.
At NeurIPS 2021 (the Conference on Neural Information Processing Systems), the team will present highlights of their research paper demonstrating advances in AI model accuracy. According to the paper, they developed an AI model that divides deep neural networks into modules to enhance accuracy, and it was ranked the most accurate in an evaluation of image recognition accuracy on the “CLEVR-CoGenT” benchmark.
The data distribution in real-world activities generally drifts with time, and tracking a developing data distribution is expensive. As a result, OOD identification is critical in preventing AI systems from generating predictions that are incorrect.
“There is a significant gap between DNNs and humans when evaluated in out-of-distribution conditions, which severely compromises AI applications, especially in terms of their safety and fairness,” said Dr. Tomaso Poggio, the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences at MIT and Director of the CBMM. Dr. Poggio also adds that this neuroscience-inspired research may lead to novel technologies capable of overcoming dataset bias. “The results obtained so far in this research program are a good step in this direction.”
The study’s outcomes show that the human brain can accurately record and classify visual information even when the shapes and colors of the objects we see change. The new method creates a unique index based on how neurons respond to an object and how the deep neural network classifies the input images, and uses this index to improve the model’s ability to recognize OOD items.
It was previously thought that training a deep neural network as a single module, without dividing it up, was the best way to build an AI model with high recognition accuracy. Researchers at Fujitsu and CBMM instead achieved greater accuracy by separating the deep neural network into different modules based on the newly created index, covering the shapes, colors, and other attributes of objects.
Fujitsu and CBMM intend to improve the findings to create an AI capable of making human-like flexible decisions, with the goal of using it in fields such as manufacturing and medical care.
Lenovo Infrastructure Solutions Group (ISG) has announced the addition of the new ThinkEdge SE450 server to the Lenovo ThinkEdge portfolio, bringing an artificial intelligence (AI) platform to the edge for faster business insights. The ThinkEdge SE450, according to Lenovo, extends intelligent edge capabilities with best-in-class, AI-ready technology to bring quicker insights and processing performance to more environments for real-time edge decision-making.
Lenovo says its customers use edge-driven data sources to make real-time decisions on manufacturing floors, retail shelves, city streets, and mobile telephony sites. The ThinkEdge line of products extends computing capability beyond the data center.
According to Khaled Al Suwaidi, Vice President of Fixed and Mobile Core at Etisalat, “Expanding our cloud to on-premise enables faster data processing while adding resiliency, performance and enhanced user experiences. As an early testing partner, our current deployment of Lenovo’s ThinkEdge SE450 server is hosting a 5G network delivered on edge sites and introducing new edge applications to enterprises.” He adds, “It gives us a compact, ruggedized platform with the necessary performance to host our telecom infrastructure and deliver applications, such as e-learning, to users.”
Designed to surpass the constraints of server locations, Lenovo’s ThinkEdge SE450 offers real-time insights with greater computing power and flexible deployment features that can handle diverse AI workloads while allowing customers to scale. With an accessible, distinctive form factor and a shallower depth that allows it to be readily installed in areas with limited space, the ThinkEdge SE450 fits the needs of a range of important workloads. The GPU-powered server is designed primarily for vertically defined edge environments, with a rugged build that can endure a wider operating temperature range as well as increased dust, shock, and vibration in tough conditions.
The ThinkEdge portfolio also includes a new locking framework to help prevent unauthorized access, along with advanced security capabilities to further safeguard data. It also provides a range of connectivity and security options that can be easily deployed and securely maintained in today’s remote environments.
Lenovo’s edge solution is powered by the NVIDIA Jetson Xavier NX platform and was developed in collaboration with Amazon Web Services (AWS) using AWS Panorama. The Jetson Xavier NX is a cloud-managed, production-ready, high-performance, power-efficient, compact system-on-module that can train and deploy a range of AI and machine learning models at the edge. It can handle data from many high-resolution sensors at up to 21 trillion operations per second (TOPS) and run sophisticated neural networks in parallel. Jetson Xavier NX is built on NVIDIA CUDA-X, a full AI software stack with highly optimized, domain-specific libraries that minimize complexity and accelerate time to market.
The Lenovo ThinkEdge SE70 will be available in select regions around the world starting in the first half of 2022.