
Researchers Use GANs to Generate Brain Data to Help the Disabled


On November 18, 2021, researchers at the USC Viterbi School of Engineering published a paper on generating synthetic brain activity data. The paper, published in the journal Nature Biomedical Engineering, reported that the researchers had trained an AI model to produce neural data for building Brain-Computer Interface (BCI) systems.

Brain-Computer Interface technology is user-specific: a BCI must be trained on data that reflects each patient’s particular dysfunction, and training the BCI algorithm requires vast amounts of brain data. Collecting that data can be difficult, expensive, or even impossible when paralyzed individuals cannot produce sufficient brain signals. To overcome this bottleneck in collecting real-world data, the researchers turned to synthetic data generated artificially by computers.

The brain data is generated using Generative Adversarial Networks (GANs), computational frameworks that pit two neural networks against each other to produce new synthetic data instances. The computer-generated brain data is then used to train the algorithm for BCI systems.
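
To make the adversarial setup concrete, below is a minimal PyTorch sketch of a GAN for one-dimensional signals. The signal length, layer sizes, and training loop are illustrative assumptions, not the architecture used in the USC study.

```python
import torch
import torch.nn as nn

SIGNAL_LEN, NOISE_DIM = 256, 64  # illustrative sizes, not from the paper

# Generator: maps random noise to a synthetic 1-D "neural signal"
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, SIGNAL_LEN), nn.Tanh(),
)

# Discriminator: scores how "real" a signal looks
D = nn.Sequential(
    nn.Linear(SIGNAL_LEN, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One adversarial round: the discriminator learns to separate real
    signals from generated ones, then the generator learns to fool it."""
    b = real_batch.size(0)
    fake = G(torch.randn(b, NOISE_DIM)).detach()
    loss_d = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    fake = G(torch.randn(b, NOISE_DIM))
    loss_g = bce(D(fake), torch.ones(b, 1))  # generator wants "real" verdicts
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Once trained on a patient’s limited real recordings, the generator can be sampled indefinitely to augment the training set for the BCI algorithm.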

Read more: Synopsys to use AI for optimizing Samsung’s latest Smartphone designs

According to the researchers, training the BCI system on GAN-synthesized neural data made overall training up to 20 times faster than traditional methods.

The brain-computer interface works by feeding neural signals, called spike trains, into an algorithm. The BCI system analyzes these brain signals, or impulses, and converts them into digital instructions, allowing impaired patients to operate digital devices such as a computer cursor or joystick using mental commands alone.
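
For a rough sense of what such decoding involves, here is a minimal NumPy sketch that bins spike trains into per-channel counts and maps them to a 2-D cursor velocity with a linear readout. The channel count, bin width, and randomly initialized weights are placeholders for illustration, not the decoder described in the paper.

```python
import numpy as np

N_CHANNELS = 96   # assumed electrode count (placeholder)
BIN_S = 0.05      # assumed 50 ms bin width (placeholder)

def bin_spikes(spike_times_per_channel, t_start):
    """Count each channel's spikes in one time bin -> feature vector.
    spike_times_per_channel: list of np.ndarray, one array of spike
    timestamps (in seconds) per electrode channel."""
    return np.array([
        np.sum((t >= t_start) & (t < t_start + BIN_S))
        for t in spike_times_per_channel
    ], dtype=float)

# Linear decoder: velocity = W @ counts + b. In practice W and b are fit
# by regression on recorded data; random values stand in here.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(2, N_CHANNELS))
b = np.zeros(2)

def decode_velocity(counts):
    """Map one bin's spike-count vector to a (vx, vy) cursor velocity."""
    return W @ counts + b
```

A real system would refit such a decoder for each patient, which is exactly the step the synthetic GAN data is meant to accelerate.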

People with paralysis, motor dysfunction, or locked-in syndrome can benefit from BCI systems to enhance their quality of life. BCI devices come in various forms, such as caps, helmets, and headbands, and continuously monitor brain signals once the patient wears them.


UNESCO unveils First Global Agreement On Ethics Of Artificial Intelligence: What Next?


The United Nations Educational, Scientific, and Cultural Organization (UNESCO) has published a global standard on artificial intelligence (AI) ethics that almost 200 member nations are expected to embrace.

The standard establishes shared values and concepts that will guide the creation of the legal infrastructure required to support AI’s healthy growth. According to UNESCO, there are various advantages to using AI, but there are also drawbacks, such as gender and ethnic bias, substantial risks to privacy, dignity, and agency, dangers of mass surveillance, and greater use of inaccurate AI technologies in law enforcement.

This is not the first time concerns about accountability in the use of AI technologies have come into the limelight. While earlier debates ushered in discussions on the ethics of AI and regulatory checks against the possibility of AI’s misuse outstripping its benefits and promises, a gap remains.

While AI technologies offer enormous promise for social and economic growth, policymakers face complicated and divergent hurdles in implementing them. Bias, stereotyping, and prejudice are among the issues AI brings to the table. Amid these concerns, AI-generated analysis is increasingly being used to make decisions in both the public and private sectors. Hence, UNESCO has asked for AI to be created in a way that promotes fair outcomes.

Presenting the 28-page document, formally titled “Recommendation on the Ethics of Artificial Intelligence,” Audrey Azoulay, director-general of UNESCO, stated that the recommendation focuses on the following points:

  • Individuals will be better protected through openness, agency, and control over their personal data, going beyond what digital companies and governments currently do. The recommendation emphasizes that everyone should be able to access and even delete their personal data records.
  • The standard strictly prohibits the use of artificial intelligence (AI) systems for social scoring and mass monitoring. It insists that while building regulatory frameworks, member states should keep in mind that ultimate responsibility and accountability must always rest with people and that artificial intelligence technology should not be given legal authority in itself.
  • The ethical impact assessment is designed to assist nations and businesses in assessing the impact of AI systems on persons, society, and the environment as they develop and deploy AI systems.
  • The standard suggests that governments evaluate the AI system’s direct and indirect environmental impacts throughout its life cycle, including its carbon footprint, energy consumption, and the environmental effect of raw material extraction for manufacturing AI technologies. 

The Recommendation will include provisions to prevent real-world biases from being repeated online, as well as tangible policy actions based on universal values and principles. It will also instruct UNESCO to assess each country’s progress in the field of AI to assist them in the implementation phase.

In 2018, Audrey Azoulay, the Director-General of UNESCO, announced an ambitious initiative to establish an ethical framework for the use of artificial intelligence across the world. Three years later, owing to the mobilization of hundreds of experts worldwide and extensive international talks, the 193 UNESCO member nations have officially adopted this AI ethics framework.

According to Azoulay, “The world needs rules for artificial intelligence to benefit humanity. The recommendation on the ethics of AI is a major answer. It sets the first global normative framework while giving states the responsibility to apply it at their level.” Azoulay asserts that UNESCO will support its 193 member states in its implementation and ask them to report regularly on their progress and practices.

Meta (formerly known as Facebook) has been at the center of various controversies surrounding the ethics of AI in recent years. Cambridge Analytica, a now-defunct British political consulting firm, exploited Facebook’s data in attempts to sway the Brexit referendum in the United Kingdom and the US presidential election won by Donald Trump. In 2018, Timnit Gebru, who later co-led Google’s Ethical AI team, and researcher Joy Buolamwini demonstrated that facial recognition software was less accurate at identifying women and people of color than at identifying white men.

UNESCO’s proposals come less than a month after China released its own set of AI ethics principles, which focus on user rights and align with its ambition of becoming a global AI leader by 2030.

Read More: China releases Guidelines on AI ethics, focusing on User data control

Meanwhile, AI experts are concerned that few African voices have been included in the worldwide ethical regulations that offer guidelines for AI research. This is crucial as African countries are investing more in AI and machine learning research and development today. Data Science Africa, Data Science Nigeria, and the Deep Learning Indaba with its satellite IndabaX events, which have taken place in 27 African nations to date, demonstrate the interest and public investment in AI disciplines.

Surprisingly, China came out in favor of the 28-page document as one of UNESCO’s 193 member nations, even as surveillance cameras with biometric facial recognition dot its cityscapes. China is already in hot water for its role in the technologically aided persecution of the Muslim Uyghur minority in the autonomous region of Xinjiang, as well as its struggle against Hong Kong’s democracy movement, fueling fears of the use of AI to control public behavior.

Hundreds of cities, from Dubai to Nairobi and Moscow to Detroit, have installed cameras with facial recognition technology (FRT), with the assurance of feeding data to central command centers as part of smart-city crime solutions. It is worth noting that most of these cities’ governments did not obtain public approval for the unbridled use of facial recognition in the name of law enforcement.

On the brighter side, several countries, such as Belgium and Luxembourg, have stated their opposition to the use of facial recognition technology.

The success of this initiative rests with governments around the world and their willingness to adhere to the basic protocols and guidelines that will ensure the ethical use of AI in the long run. A comprehensive follow-up will be necessary to ensure that the UNESCO recommendation can successfully offer universal standards for policy and legislation. As per the document, governments should create a regulatory framework that lays out a mechanism for conducting ethical impact assessments on AI systems to foresee outcomes, minimize risks, avoid adverse repercussions, increase citizen engagement, and address societal concerns. Algorithms, data, and design processes should all be auditable, traceable, and explainable.

While this is not a one-day assignment, experts hope this will encourage more conversations. Other international bodies have been working on AI ethics as well. For instance, the Organization for Economic Cooperation and Development (OECD) issued “Principles on Artificial Intelligence” in 2019, which supports “respect [for] human rights and democratic ideals” while adopting the technology.


Kolkata Police to use AI technology to identify traffic norms violators


Kolkata Police has announced that it plans to use AI technology to identify traffic norm violators. The police handle thousands of traffic violation cases on a daily basis, and the new artificial intelligence-powered solution will help them carry out their day-to-day operations.

The AI system will use over 2,500 CCTV cameras installed across the city to recognize helmetless bike riders and parking violations. The system will process each complaint and send it to the violator, a task that would consume a lot of time if done manually.

The police department has requested a session with experts to understand the functionalities and capabilities of the newly developed artificial intelligence surveillance system. The final plan regarding the deployment of the AI system is yet to be discussed. 

Read More: Synopsys to use AI for optimizing Samsung’s latest Smartphone designs

“There are some plans in this regard. It is too early to comment till anything gets finalized,” said DCP Traffic, Ajit Sinha. The artificial intelligence-powered video analytics software will be installed at the LalBazar police control room and will scan the number plates of traffic norm violators in real time.

The system will also reduce the risk of injury to police officers from physically stopping violators. The AI system will be connected to the RTO server so that it can instantaneously send e-challans to traffic norm violators via SMS. The deployment of the new AI system is also expected to help curb corrupt practices.

Earlier this year, Bengaluru police introduced a similar AI-powered system on city streets to check traffic norm violations. The system, named the Integrated Traffic Enforcement Management System (ITeMS), can recognize various types of traffic rule violations, including red-light jumping, rash driving, helmetless riding, and mobile phone use while driving.


Synopsys to use AI for optimizing Samsung’s latest Smartphone designs


Electronic design automation and semiconductor company Synopsys has announced that it is using an AI design system to optimize Samsung’s latest smartphone designs. The technology has already been used successfully by Samsung to create a high-performance design on an advanced process technology.

Apart from smartphone designs, Synopsys has collaborated with Samsung to develop and design next-generation Exynos processors that will feature in Samsung’s upcoming smartphones. Synopsys used its DSO.ai platform to help Samsung develop performance-oriented chipsets for its smartphones.

Samsung will use the same award-winning DSO.ai platform, coupled with the Synopsys Fusion Compiler RTL-to-GDSII solution, to optimize the designs of its new smartphones. Synopsys’s DSO.ai uses reinforcement learning to achieve better results. The technology has been deployed at various stages of the chip design and manufacturing process, helping Samsung reduce research time and engineering effort.

Read More: Worldwide AI Software Market to Reach $62 Billion in 2022

Chairman and co-CEO of Synopsys, Aart De Geus, said, “This pivotal moment in semiconductor history will breathe new life into Moore’s law. We congratulate Samsung on this remarkable achievement, and we look forward to catalyzing its next 1000x.” 

He further added that autonomous chip design once existed only in science fiction, but Synopsys’ technology has made it possible in the real world. Experts suggest that this technology is a breakthrough in the microchip manufacturing and design industry, as it can help architects design better processors in less time with reduced manual effort.

EVP of infrastructure and design technology center at Samsung Electronics, Thomas Cho, said, “Not only have we demonstrated that AI can help us achieve PPA targets for even the most demanding process technologies, but through our partnership, we have established an ultra-high-productivity design system that is consistently delivering impressive results.”

Earlier this year, Synopsys acquired Concertio, an artificial intelligence-enabled real-time performance optimization firm, to further strengthen its capabilities as a Silicon-to-Software partner for electronics product companies.


Pony.ai Receives Clearance for Paid Autonomous Robotaxi Services in Beijing


Full-stack autonomous driving company Pony.ai has received clearance for paid autonomous robotaxi services in Beijing, China. The approval allows Pony.ai to provide its self-driving taxi service in the Beijing Intelligent Networked Vehicle Policy Pilot Zone, located in the southern part of the capital.

Pony.ai will now be able to charge passengers for rides in its autonomous taxis and can deploy up to 100 robotaxis in the designated locality. This is a step forward toward the company’s goal of commercializing robotaxis at a global level.

CEO and Co-founder of Pony.ai, James Peng, said, “Supportive policies, development of safe technology, and public acceptance are the keys to accelerating commercialization of the autonomous driving industry, and Pony.ai has conducted abundant testing of the application scenarios and product forms of autonomous driving over the past five years.” 

Read More: Standard AI to drive future of Autonomous Checkouts at Retail Stores

He further added that this approval marks the next step in self-driving car development, as the new policy will allow Pony.ai to test and validate its commercialization model. Prior to this policy, Pony.ai’s PonyPilot+ robotaxi service operated free of charge for passengers within a restricted area of 60 km².

PonyPilot+ uses a smartphone application and an in-car solution named PonyHi to provide a safe, comfortable, and hassle-free experience to its passengers. CTO and Co-founder of Pony.ai, Tiancheng Lou, said, “Today, our autonomous driving technology has advanced beyond conventional driving scenarios, demonstrating the technological capability to handle more complex driving scenarios and at a larger scale.” 

He also mentioned that emerging technologies are transforming Pony.ai’s mobility services at a much faster rate than expected. Earlier this year, Pony.ai also started testing its robotaxis in various regions of California, United States, and plans to commercialize the deployment by 2022. To fund the deployment and commercialization process, Pony.ai is also considering going public in the US.

Along with Pony.ai, Chinese technology group Baidu also received approval to launch paid autonomous robotaxis in the same locality. According to Baidu, this will be the company’s Apollo Go service’s first-ever commercial deployment.


Worldwide AI Software Market to Reach $62 Billion in 2022


A forecast from tech research and consultancy firm Gartner states that global artificial intelligence software revenue will reach $62.5 billion in 2022.

That represents a 21.3 percent increase over 2021, implying roughly $51.5 billion in AI software revenue this year (62.5 / 1.213 ≈ 51.5).

In 2022, businesses worldwide are set to spend more on artificial intelligence-powered software than ever before. The top five use cases for AI software spending in 2022 will be knowledge management, virtual assistants, autonomous vehicles, digital workplace, and crowdsourced data.

Read more: MIT researchers develop an AI model that understands object relationships

Gartner said that successful business outcomes from these five major use cases will depend on choosing use cases carefully: each should deliver significant value, be scalable to reduce risk, and help demonstrate the importance of AI investment to stakeholders.

The Gartner report states that enterprises worldwide continue to show strong interest in AI-based services because of their enormous revenue potential. Nearly 48% of Chief Information Officers (CIOs) who responded to previous Gartner surveys said their organizations plan to deploy AI and machine learning technologies within the next 12 months.

It is evident that advances in AI maturity will naturally increase AI software revenue through increased spending across enterprises globally. In the official press release, Alys Woodward, senior research director at Gartner, said, “The AI software market is picking up speed, but its long-term trajectory will depend on enterprises advancing their AI maturity.”


UC San Diego uses deep learning to map the human cell, finds new cell components


Researchers at the University of California San Diego School of Medicine have taken a significant leap forward in understanding human cells. Their pilot study combines biochemistry, microscopy, and artificial intelligence in a technique known as Multi-Scale Integrated Cell, or MuSIC. Using deep learning to map the human cell, the study found about 70 components in a human kidney cell line, half of which had never been seen before. Among these components, the researchers identified one unfamiliar structure as a new complex of proteins that binds RNA.

Scientists have long believed that there is more to human cells than the mitochondria, endoplasmic reticulum, and nucleus. The large portion of the cell that has remained unknown can now be studied: the MuSIC research, powered by artificial intelligence, finally provides a way to look deeper.

The complex found through MuSIC appears to play a role in translating genes into proteins and in determining which genes are activated at which times. A cell’s proteins are typically studied using one of two techniques: biophysical association and microscope imaging. Microscope imaging tags proteins of interest with various fluorescent colors, allowing researchers to track their movements and associations across the field of view. For biophysical association, researchers use an antibody specific to a protein to pull it out of the cell and see what is attached to it.

Read more: MIT researchers develop an AI model that understands object relationships

MuSIC uses deep learning to map the interior of the cell directly from cellular microscope images. Although microscopes let researchers see down to the scale of a single micron, smaller elements such as protein complexes or individual proteins are not visible through them. Biochemistry techniques let scientists study structures as small as one billionth of a meter, a nanometer, which is a thousand times smaller than a micron, but the gap between the micron and nanometer scales remained.

The researchers at the University of California San Diego School of Medicine were able to bridge the gap between the nanometer and micron scales using artificial intelligence. To do so, they collected data from multiple sources and asked the system to assemble it into a model of the cell.

Researchers trained MuSIC to look at all the data and construct a model of the cell. However, the system does not map the cell’s contents to specific locations, since those locations are not necessarily fixed. One of the most significant benefits of this research is that scientists will better understand the molecular basis of various diseases by comparing the innermost structures of healthy and diseased cells.


Standard AI to drive future of Autonomous Checkouts at Retail Stores


Automation solutions provider Standard AI is set to drive the future of autonomous checkout systems at retail stores across various regions. Recently, the company completed its acquisition of United Kingdom-based computer vision firm ThirdEye Labs.

Company officials did not disclose the value of the acquisition deal. With this acquisition, Standard AI will use ThirdEye Labs’ computer vision expertise to further improve its autonomous retail checkout system.

Additionally, Sameer Qureshi, formerly of Tesla and Lyft, will join Standard AI as its first Vice President of Machine Learning. Qureshi will help the artificial intelligence and machine learning teams work closely with the engineering team to fine-tune the company’s automation solution.

Read More: Microsoft launches Tutel, an AI open-source MoE library for model training

“Some of the most transformative work in machine learning and computer vision is happening in retail. I was drawn to Standard AI for its mission to use computer vision to transform the way we shop and better the way we live,” said Qureshi. 

He also mentioned that it is exhilarating for him to work with some of the best talent in the world to enhance the capabilities of Standard AI’s platform. A few months ago, Standard AI launched its new artificial intelligence-enabled autonomous checkout experience at an existing Circle K store in Arizona.

Standard AI’s high-end automation system helps retailers offer a better customer experience and improve in-store efficiency and economics. CEO of Standard AI, Jordan Fisher, said, “ThirdEye product and engineering team have been engaged in cutting-edge work, and they will be invaluable to our team as we expand the capabilities of our platform deeper into retail.”

He further added that computer vision technology has become critical for retailers trying to keep up with the data-driven flexibility of the global eCommerce industry. In addition, Standard AI is hiring extensively on a global scale; interested candidates can apply through the company’s official website.


Microsoft launches Tutel, an AI open-source MoE library for model training


Recently, Microsoft unveiled Tutel, an open-source library for building Mixture of Experts (MoE) models, a type of large-scale artificial intelligence model. According to the company, Tutel, which is integrated into Meta’s PyTorch toolkit, will make executing MoE models simpler and more efficient.

The Mixture of Experts (MoE) architecture is a deep learning model architecture in which computational cost grows sublinearly with the number of parameters, allowing for easier scalability. MoEs are made up of small clusters of “neurons” (experts) that are only active under specific circumstances: the lower layers of the model extract features, and experts are called upon to evaluate them. For example, MoEs may be used to develop translation systems, with each expert cluster learning different grammatical rules or parts of speech.

In other words, MoE entails breaking a predictive modeling task down into sub-tasks, training an expert model on each, creating a gating model that learns which expert to trust based on the input, and combining their predictions. The gating model is a neural network that weighs each expert’s prediction and determines which expert to trust for a particular input, typically by selecting the expert to which the gating network assigns the largest output or confidence.
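
As a concrete illustration of expert and gating models, here is a minimal PyTorch sketch of a top-1 gated MoE layer. The dimensions, expert count, and plain softmax gating are simplifying assumptions for illustration; this is not Tutel’s actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Top-1 gated mixture-of-experts layer (illustrative sketch)."""

    def __init__(self, dim=512, num_experts=4, hidden=1024):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)  # the gating network
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                         # x: (tokens, dim)
        scores = F.softmax(self.gate(x), dim=-1)  # per-expert confidence
        top_score, top_idx = scores.max(dim=-1)   # pick the most trusted expert
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                   # tokens routed to expert i
            if mask.any():
                # only the selected expert runs for these tokens
                out[mask] = top_score[mask].unsqueeze(1) * expert(x[mask])
        return out

# Usage: route a batch of 16 token vectors through the layer.
layer = MoELayer()
y = layer(torch.randn(16, 512))
```

Because each token activates only one expert’s feed-forward network, adding experts grows the parameter count without a matching growth in per-token computation, which is the scalability property described above.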

Although the approach was developed with neural network experts and gating models in mind, it can be applied to any type of model. As a result, it resembles stacked generalization and falls within the meta-learning category of ensemble learning approaches. By substituting a single global model with a weighted sum of local models, MoE improves the accuracy of function approximation.

According to Microsoft, MoE is currently the only approach demonstrated to scale deep learning models to trillions of parameters. This means it has the potential to pave the way for models that can learn even more information and power computer vision, speech recognition, natural language processing, and machine translation systems that can help individuals and institutions in new ways.

Microsoft is particularly interested in MoE because it makes effective use of hardware. Only the experts whose specialism an input requires are called upon, while the remainder of the model sits idle awaiting its turn, thereby increasing efficiency.

Furthermore, compared with other model architectures, MoEs offer a substantial number of advantages. They can respond to changes in a modular way, allowing the model to display a broader range of behaviors. The data may originate from many sources, and the model only activates a few experts per input, so even a large model requires only a modest amount of computing resources.

[Figure: End-to-end throughput of Meta’s MoE language model on Azure NDm A100 v4 nodes, with and without Tutel, from 8 to 512 A100 (80GB) GPUs; Tutel consistently achieves higher throughput than fairseq. Image source: Microsoft]

Microsoft’s Tutel primarily focuses on MoE-specific computation enhancements. The library is designed specifically for Microsoft’s new Azure NDm A100 v4 series instances, which offer a sliding scale of Nvidia A100 GPUs. Tutel’s MoE algorithmic support is broad and versatile, allowing developers across AI fields to implement MoE more quickly and efficiently. For a single MoE layer, Tutel delivers an 8.49x speedup on an NDm A100 v4 node with 8 GPUs and a 2.75x speedup on 64 NDm A100 v4 nodes with 512 A100 GPUs compared to state-of-the-art MoE implementations such as Meta’s Facebook AI Research Sequence-to-Sequence Toolkit (fairseq) in PyTorch. This is a significant benefit because existing machine learning frameworks such as TensorFlow and PyTorch lack a practical all-to-all communication library, resulting in performance loss in large-scale distributed training.
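
PyTorch does expose the underlying all-to-all collective that MoE training relies on; below is a minimal sketch of the token exchange between workers using that primitive. The buffer shapes are illustrative assumptions, and the snippet presumes a process group has already been initialized (for example, with the NCCL backend).

```python
import torch
import torch.distributed as dist

def moe_token_exchange(send_buf: torch.Tensor) -> torch.Tensor:
    """All-to-all exchange of routed tokens between workers.

    send_buf has shape (world_size, capacity, dim): slot i holds the
    tokens this rank routed to the experts on rank i. After the
    collective, slot i of the result holds the tokens rank i sent here.
    Assumes dist.init_process_group() has already been called.
    """
    assert send_buf.size(0) == dist.get_world_size()
    recv_buf = torch.empty_like(send_buf)
    dist.all_to_all_single(recv_buf, send_buf)
    return recv_buf
```

Tutel’s gains come in part from optimizing precisely this exchange, which, as noted above, is where existing frameworks lose performance at scale.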

Read More: Introducing MT-NLG: The World’s Largest Language Model by NVIDIA and Microsoft

Due to a lack of efficient implementations, present MoE-based DNN models depend on splicing together numerous ready-made DNN operators supplied by deep learning frameworks to perform MoE computation, a strategy that can incur substantial performance overhead because of redundant calculation. Here, Tutel proves handy by providing highly efficient GPU kernels that implement the operators needed for MoE computation.

Unlike other high-level MoE solutions such as fairseq and FastMoE, Tutel focuses on optimizing MoE-specific computation and all-to-all communication, along with diverse and flexible algorithmic MoE support. Tutel features a concise interface that makes it simple to combine with other MoE solutions. Developers may also use the Tutel interface to embed standalone MoE layers into their own DNN models from the ground up, gaining direct access to highly optimized, state-of-the-art MoE capabilities.

“MoE is a promising technology. It enables holistic training based on techniques from many areas, such as systematic routing and network balancing with massive nodes, and can even benefit from GPU-based acceleration,” explains Microsoft.

Tutel, which displayed a significant gain over the fairseq framework, has also been integrated into the DeepSpeed framework. Tutel and related integrations are expected to benefit additional Azure services, particularly for clients looking to scale their own large models easily. MoE is still in its early stages, and more work is needed to fully realize its potential, so researchers will continue to improve Tutel in the hope of delivering even more exciting research and application results in the future.

Tutel is now available for download on GitHub.


AI and Robotics Help Improve Healthcare Rehabilitation


Universidad Carlos III de Madrid (UC3M) and Inrobics Social Robotics, S.L.L. have jointly developed a robotic device that provides cognitive rehabilitation services and therapy assistance to patients. The social robot is officially certified as a medical device and aims to help transform the health sector through automation. These social robots can help patients with functional or neurological disorders improve their quality of life, providing treatment that enhances motivation, stimulates concentration, and changes behavioral patterns through practice and feedback.

The entire therapy process relies on four elements: a robot that interacts with the patient, an application that healthcare staff use to set up and track sessions, an artificial intelligence system that uses a 3D sensor to control the robot, and a cloud-based storage system containing information from all rehabilitation sessions conducted by the robots.

The AI-based robot acts as an intelligent co-therapist that interacts socially with patients and offers them a series of activities and exercises to follow. In addition, the robot gives patients feedback on how to improve their motor skills or cognitive abilities.

Read More: Landing AI raises $57 million in Series A Funding Round

The therapy staff or doctor customizes every session delivered by the robot to each patient’s needs.

Since Inrobics is a cloud-based service, it can be used both in rehabilitation centers and in patients’ homes. For external assistance, the company offers the Inrobics app, which enables patients to set up therapy sessions with the robot and provides real-time insights and reports on a patient’s behavioral development and recovery.
