
IDC and Baidu Whitepaper: Artificial Intelligence to Reduce Carbon Emissions by 70% by 2060


According to a whitepaper from IDC and Baidu, artificial intelligence will contribute 70% of the carbon emission reductions achieved by 2060. The two companies estimate that AI-related technologies will cut more than 35 billion tons of carbon emissions by then.

Providing insights into the Green Energy Transformation through innovative carbon emission reduction, the report incorporates research from IDC on the topic of ICT and AI and practices from Baidu and its industry partners.

As stated in the whitepaper, achieving carbon neutrality will take a technology-intensive approach, and numerous industries will benefit from breakthroughs in AI and ICT infrastructure combined with carbon reduction technologies. It estimates that by 2060, AI-related technologies will contribute 70% of carbon reductions, for a net reduction of at least 35 billion tons.

Read More: Baidu to showcase AI advancements at China’s Metaverse Conference

Founder, Chairman, and CEO of Baidu, Robin Li, stated that Baidu plans to dedicate even more resources to this endeavor using AI with ecosystem partners for carbon-neutral growth.

The Vice President and General Manager of Baidu’s Intelligent Transportation Division, Guobin Shang, points out that one of the significant advantages of intelligent transportation solutions is bringing “Vehicle-Road-Smart Mobility” to bear on tackling the issue of emissions from vehicles.

For cities with a population of over 10 million, the researchers state that intelligent transportation can reduce emissions by at least 41,600 tons per year, equivalent to taking 14,000 private cars off the road.

IDC applied an original carbon emission model, built on its long-term tracking of global IT markets and accumulated demand data, to cloud computing and calculated that the worldwide carbon emissions avoided in 2020 were equivalent to taking 26 million gas-powered cars off the road.

With advanced technology and innovative mechanisms, Baidu plans to achieve carbon neutrality by 2030.


Accenture collaborates with Udacity for the Accenture Scholarship Program


Accenture has recently collaborated with Udacity on the Accenture Scholarship Program to provide technical education to students with little or no technical background.

There are no prerequisites for the program, and interested applicants can apply here until Jan 17, 2022. Scholarship recipients will be notified by Feb 17, 2022, and the Nanodegree programs will commence on Feb 21, 2022.

Participants who complete the Challenge and Assessment will be eligible for a full scholarship to either Udacity’s Intro to Programming or Predictive Analytics programs.

Read More: Udacity is offering Scholarships for Cloud, Data, and AI Skills

Approximately 600 people will receive full scholarships to Udacity’s “Intro to Programming” or “Predictive Analytics Program.” To complete the program, participants need to commit ten hours each week. They will learn basic coding skills with the Intro to Programming path and predictive analytics with the Predictive Analytics path.

Following the Nanodegree program’s conclusion, all scholarship recipients will be eligible to apply to the Apprenticeship program.

Educating the next generation is a race against technology. People will need new skills as intelligent machines and systems change the nature of work. 

With the increasing need for technology talent, large corporations are experimenting with lifelong learning methods. Still, traditional education and learning systems are ineffective and ill-suited to the challenge of building these new skills at scale. Smaller organizations are especially at risk if they fail to adopt new learning techniques, and there could be significant economic repercussions.

The Chicago Apprenticeship Network was created in 2017 by Accenture and Aon. They aimed to provide underserved communities with more opportunities for high-paying tech jobs. Because of this commitment, Accenture is collaborating with Udacity to create a program that trains students to become employable.


NCKU and Quanta Computer set up AI research center


Earlier this week, Quanta Computer and National Cheng Kung University announced the creation of the Quanta-NCKU Joint Artificial Intelligence Research Center.

At a ceremony attended by Tainan Mayor Huang Wei-Che, NCKU President Su Huey-Jen and Quanta Computer Chairman Barry Lam signed an agreement to create an interdisciplinary team that will integrate AI cloud technology at the national level. Quanta Computer presented a high-performance AI server to NCKU at the event and has committed to sponsoring the AI research center for the next four years.

Huang said in his speech that the city has formed a 5G Tainan Team with Quanta Computer to implement 5G technology in the city government.

Read More: The US Govt has launched a portal for AI researchers

Su praised Lam for his decision to help promote AI-related innovation in Taiwan after the Legislative Yuan recently passed a bill to create the Ministry of Digital Development. With Quanta Computer’s renowned legacy in hardware giving the center access to the most advanced equipment, NCKU’s newly founded School of Smart Semiconductor and Sustainable Manufacturing has proven to be a perfect fit.

Through its AI Cloud Platform, QOCA Air, Quanta initiated a research collaboration with NCKU. The platform, which carries Quanta’s ICT innovations, is constantly updated, customized, and re-designed in light of the results achieved through the collaboration. 

The joint research center will be supported by Quanta, which is providing the resources needed to accelerate AI research. Additionally, NCKU will build a platform that enables researchers to conduct research across multiple fields.

Combining biomedical technology, the internet of things, big data, and optoelectronics to create critical applications in Taiwan’s medical, agricultural, and urban sectors is part of the school’s strategy to foster industry-academia collaboration and smart applications. With additional help from the Ministry of Digital Development, NCKU aims to build a smart city that demonstrates the Internet of Things in action.


Indian border guards to rely on startups with AI technology


Sashastra Seema Bal (SSB) wants to use artificial intelligence-based sensors and other electronic devices, together with data collected in a central database, to identify suspicious persons and vehicles at India’s borders.

SSB and the Ministry of Electronics and Information Technology have launched a hackathon to develop low-cost homegrown solutions for effective border management as part of the initiative DRISHTI, which aims to improve the overall border security scenario and check illegal intrusions and drone-based activities.

Participants will be required to develop innovative solutions for detecting suspicious/unauthorized vehicles, drone-based surveillance tools, and geospatial mapping of vulnerable areas along the International border. The SSB will use a database of collected information to identify suspicious persons and vehicles using Artificial Intelligence based sensors. These data include audio, video, text, and images.

Read More: Artificial Intelligence in Military Drones: How is the World Gearing up and What does it mean?

MeitY has invited startups, innovative MSME firms, and research companies to submit their ideas as part of this process. The selected participants will have an opportunity to create and develop future security technologies for homeland and other internal projects with the SSB.

In line with the Government’s Make in India Initiative, various future-ready applications are considered and launched amid the current geopolitical security environment. ASIGMA, the Army Secure IndiGeneous Messaging Application, is a contemporary messaging application recently launched by the Indian Army. 

In response to the COVID-19 pandemic, the Indian Army has pursued automation and progress towards paperless operations. In addition to ASIGMA, numerous applications will be deployed via the Indian Army’s captive pan network as ASIGMA is expected to amplify the Army’s efforts.

Secretary Ajay Sawhney and SSB Director General Kumar Rajesh Chandra launched the hackathon DRISHTI (Development of Research Indigenous Solutions Hackathon for Technological Innovations) in the national capital. There will be two winners in each challenge, receiving ₹10 Lakh and ₹5 Lakh, respectively. The SSB was established after the 1962 Sino-Indian war to guard India’s borders with Nepal and Bhutan.


Bengaluru ranked fifth for AI talent outside US


According to a study by The Fletcher School at Tufts University, Bengaluru, India’s technology capital, ranks among the top five global cities with the most robust AI talent pools. Bengaluru outranks Los Angeles, London, Beijing, Chicago, and Washington in this rating.

The study mentions that Bengaluru is home to the world’s second-largest pool of AI talent, ranking fifth for diversity among AI professionals. According to this study, San Francisco, New York, Boston, and Seattle were ranked higher than Bengaluru.

These rankings were made using the Fletcher School’s TIDE framework, which considers the talent pool, investments, diversity of talent, and changes in the state of a country’s digital foundation, along with gender and racial diversity, acceptance of migrants, and cost of living.

While AI is becoming more popular among organizations, a considerable talent skill gap prevents many from accelerating their development. Despite the growing number of AI opportunities, firms across all industries struggle to attract top talent from a labor pool that isn’t growing fast enough, according to Deloitte’s third edition of the State of AI in the Enterprise survey.

Read More: Top AI Technology Trends to Dominate 2022

The study found that nearly one-seventh of the AI talent pool is female compared with 27% in broad STEM fields. According to the TIDE framework, Delhi, Hyderabad, and Mumbai are the other Indian cities ranked among the top 50 AI cities.

It is good news for enterprises in India and for IT outsourcing firms that are seeking to increase their AI talent capabilities and differentiate themselves competitively. 

The Fletcher study is another testament to India’s efforts to nurture its AI ecosystem over the past few years. According to the study, enterprise and startup companies have a substantial opportunity to develop India’s diverse talent to transform with AI. 

The KPMG report “Technology Innovation Hubs 2021” listed Bengaluru as one of the top ten cities in the world for technology innovation outside Silicon Valley.


Chinese scientists develop AI Prosecutor


Chinese scientists claim to have developed the world’s first AI prosecutor, a machine that can identify and charge people who have committed crimes. This is a significant technological development for China.

According to a researcher, the AI prosecutor can charge people with over 97 percent accuracy based on a verbal description of the case. According to the South China Morning Post, the machine was tested by the Shanghai Pudong People’s Procuratorate, the country’s largest and busiest district prosecutor’s office.

An expert on the project told local media that it aims to reduce prosecutors’ workloads. In one of the publications, Professor Shi Yong, the project’s lead scientist, was quoted as saying, “The software could replace prosecutors in case-making processes to an extent.” He added that the system could also identify “dissent” against the state.

Read More: China releases Guidelines on AI ethics, focusing on User data control

Scientists explained how the software works, saying it can process 1,000 human-created case description texts to determine whether charges can be filed. South China Morning Post reported that the machine was trained for five years — from 2015 to 2020 — and was fed 17,000 cases to identify and prosecute common crimes.
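
In essence, the system described is a binary text classifier: it maps a case description to a charge/no-charge decision. The following is a deliberately simplified illustration, not the actual system (whose model and features are not public); the keyword weights and threshold below are invented for the sketch.

```python
from collections import Counter

# Hypothetical keyword weights such a classifier might learn from
# labelled case descriptions (illustration only).
WEIGHTS = {"fraud": 2.0, "theft": 1.5, "gambling": 1.5, "receipt": 0.5}
THRESHOLD = 2.0

def charge_score(case_description):
    """Score a case description by summing learned keyword weights."""
    words = Counter(case_description.lower().split())
    return sum(WEIGHTS.get(w, 0.0) * n for w, n in words.items())

def should_charge(case_description):
    """Decide whether the score clears the charging threshold."""
    return charge_score(case_description) >= THRESHOLD

decision = should_charge("suspect committed credit card fraud and theft")
```

A real system would replace the keyword table with a trained deep model, but the overall shape — text in, charge decision out — is the same.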

According to the South China Morning Post, the AI machine was trained on credit card fraud, gambling crimes, drunk driving, theft, and obstructing official duties. In time, the team hopes the machine will be able to substitute for prosecutors in parts of the decision-making process.

With more training, the device will become adept at recognizing more types of crimes and filing the appropriate charges. However, some prosecutors are concerned about the new system, asking who would be held responsible if an error were made. Others said that people don’t want machines interfering with their work.

This is the first time artificial intelligence has been used directly in prosecutorial decision-making. While law enforcement agencies worldwide use digital technology extensively, it is primarily limited to evidence evaluation and forensics.


The First Flying Humanoid Robot


A team of researchers at Istituto Italiano di Tecnologia has recently been exploring what they believe is a fascinating idea: creating flying humanoid robots. To control the movement of flying robots, objects, or vehicles, however, researchers need systems that can reliably estimate the thrust produced by the propellers that move them through the air.

Researchers are building the model from the data they have collected. In addition to centrifugal force, the team’s framework uses the robot’s “centroidal momentum” to estimate thrust, a quantity commonly used by roboticists developing humanoid systems to control and estimate movement.

Recently, the team developed a framework for estimating thrust intensities in flying multibody systems without thrust-measuring sensors. During the research at the Artificial and Mechanical Intelligence lab, Daniele Pucci, the head of AMI Lab, aimed to create a robot that can maneuver objects, walk on the ground, and fly. 

Read More: Humanoid Robot Sophia will be available for Auction as iNFT on Binance

As opposed to designing a new robotic structure, researchers decided to extend the capabilities of a humanoid robot to include flight since many of them can manipulate objects and move on the ground. 

Pucci’s team tested the framework on the iRonCub robot, an evolution of the iCub robot with integrated jet engines. The robot has been under development for some time, but it has only recently been demonstrated in its entirety.

The team is also planning to enhance iRonCub’s flight capabilities, with the hope of developing a reliable, performance-oriented humanoid capable of both terrestrial and aerial locomotion.

Affaf Momin and Hosameldin Awadalla plan to improve Pucci’s thrust estimation framework with AI and data-driven computational tools. Giuseppe L’Erario, a colleague of the researchers, will incorporate the algorithms into controllers that combine walking, maneuvering, running, and horizontal flight strategies.


Kneron announced a $25M funding by Lite-On Technology


AI chips, semiconductors designed to accelerate machine learning, have many applications. According to Albert Liu, CEO and Founder of Kneron, one of the most promising is in self-driving cars.

Liu’s AI chipmaker Kneron has been covertly raising funds to enter the intelligent transportation market. It secured a fresh round of $25 million fundraising with strategic partner Lite-On Technology, a Taiwanese optoelectronic pioneer. Among the other investors were Alltek, PalPilot, Sand Hill Angels, and Gaingels.

Since its inception in 2015, Kneron has raised approximately $125 million in total funding.

Based in San Diego and Taipei, the business has a long list of notable backers, including Hong Kong tycoon Li Ka-shing’s Horizons Ventures, Alibaba, Qualcomm, Sequoia, and Foxconn, the world’s largest electronics manufacturer and an Apple supplier.

Read More: Hevo Data raises $30 million in Series B Funding

According to Liu, the company will be profitable in 2023, which would be an “excellent moment” for it to go public. When asked more recently, the founder was more coy about the firm’s IPO prospects, though he said the listing would likely be in the U.S.

The CEO believes that an ambulance would not have to halt at junctions if roadside perception equipment could interact with neighboring cars. He went on to say that such infrastructure is essential in Asian countries, where traffic circumstances are more complex than in the United States.

The chips developed by the firm are “reconfigurable,” which means they combine software flexibility with hardware speed. According to Liu, the silicon may be used both for the large AI engine inside an autonomous vehicle and for the small sensors mounted on the car’s exterior.

Kneron presently generates $3-4 million in monthly income from its 30 business clients, with the United States accounting for 30-40% of its revenue. Kneron counts Foxconn as a strategic investor, and Foxconn uses the startup’s chips in its “MIH” manufacturing platform for electric vehicles. In May, Kneron agreed to buy image signal processor maker Vatics from Vivotek, a subsidiary of Delta Electronics.


The US Govt has launched a portal for AI researchers


The National Artificial Intelligence Initiative Office launched a new area of resources for AI researchers to offer convenient access to data sets and testbed settings for AI application training.

Officials classified it as “a central connection to many federally-supported resources for America’s AI research community” during its inauguration on Twitter. The available tools on the page include funding and grant information, datasets, computing resources, a research program directory, and a testbed selection.

Researchers working on AI projects can improve their studies by using higher-quality data from federal agencies such as NASA, the National Oceanic and Atmospheric Administration, the National Institute of Standards and Technology, and the National Institutes of Health once they have access to these resources.

Read More: Overinterpretation: MIT finds new bottleneck in Machine learning based Image Classification

The portal complies with provisions in the overarching National Artificial Intelligence Initiative Act, which mandates NAIIO leadership to “promote access to and early adoption of technologies, innovations, lessons learned, and expertise derived from Initiative activities to agency missions and systems across the federal government.”

In a press release, current NAIIO Director Lynne Parker stated that the portal’s purpose is to assist U.S. academics in advancing their AI projects using available government funding. “We hope that the AI Researchers Portal will help U.S. researchers more easily navigate and connect with available resources that will make them more productive and successful in advancing the state of the art in AI and related fields,” Parker concluded.

Apart from public datasets and several testbed environment options, the portal also provides academics with a list of AI research projects underway with government organizations that may be open to collaboration. NIST’s Network Modeling for Public Safety Communications and the NIH’s Graduate Data Science Summer Program are two such research programs.

The NAIIO was established during former President Donald Trump’s administration and is now part of the Biden-Harris administration’s AI goals, including continuing efforts to make federal AI resources more accessible to the general public.


Overinterpretation: MIT finds new bottleneck in Machine learning based Image Classification

Image Source: Security Magazine

Image classification using machine learning rose to prominence in recent years. Processing images to understand visual data is nothing new: earlier researchers relied on raw pixel data to classify images, fragmenting an image into individual pixels for analysis. However, this was not an effective method, as computers were often confused when two images of the same subject appeared very different. Machines also struggled with images focusing on a single entity but shot against a variety of backdrops, perspectives, and positions, making it difficult for computers to see and categorize images appropriately.

As a result, scientists turned to deep learning, which is a subset of machine learning that uses neural networks for processing input data. In neural networks, information is filtered by hidden layers of nodes. Each of these nodes processes the data and relays the findings to the next layer of nodes. This continues until it reaches an output layer, at which point the machine produces the desired result.
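
The layer-by-layer flow described above can be sketched as a toy fully connected network. This is a minimal numpy illustration with random, untrained weights, standing in for no particular production model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# 8 input features -> two hidden layers of 16 nodes -> 4 output classes.
# Each hidden layer processes its input and relays the result onward,
# exactly the node-by-node filtering described above.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(4, 16)), np.zeros(4)

def forward(x):
    h1 = relu(W1 @ x + b1)        # first hidden layer
    h2 = relu(W2 @ h1 + b2)       # second hidden layer
    return softmax(W3 @ h2 + b3)  # output layer: class probabilities

probs = forward(rng.normal(size=8))
```

Training would adjust the weight matrices; the forward pass itself is the "desired result" machinery the paragraph describes.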

Using convolutional neural networks (CNNs), machine learning models became extremely good at classifying images and videos. A CNN is a form of neural network in which the output of nodes in the hidden layers (known as convolutional layers) is not always shared with every node in the following layer.
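
The defining property mentioned above, that a convolutional node sees only a small patch of the previous layer rather than all of it, can be illustrated with a plain numpy convolution (a sketch of the operation, not a full CNN):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as used in CNNs):
    each output value depends only on the k x k patch under the kernel,
    not on the whole image -- the partial connectivity described above."""
    k = kernel.shape[0]
    h, w = image.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])    # responds to left-right intensity changes
feature_map = conv2d(image, kernel)  # shape (3, 3)
```

Stacking many such filters, with nonlinearities and pooling between them, gives the deep classifiers discussed in the rest of the article.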

Although these models are proficient at detecting patterns in data, scientists are often baffled by how they make decisions or how they arrived at a given decision. Furthermore, when there is a likelihood that a machine learning model makes judgments based on inadequate, error-prone, or one-sided (biased) information, the need to understand the mechanisms underlying those decisions becomes even more important.

Recently, researchers at the Massachusetts Institute of Technology (MIT) discovered an intriguing problem in machine learning-based image classification called overinterpretation. Depending on where deep learning algorithms are used, this problem might be innocuous or dangerous if left unfixed. Alongside adversarial attacks and data poisoning, overinterpretation is a new irritant for AI researchers and developers.

The MIT researchers discovered that neural networks trained on popular datasets like CIFAR-10 and ImageNet pick up “nonsensical” signals that are troublesome. Models trained on them suffer from overinterpretation, a phenomenon in which they label images with high confidence even when the images are useless to humans. For instance, models trained on CIFAR-10 made confident predictions even when 95% of the input image was missing and the remainder was senseless to humans.

Four photos in the top row show a giant panda, a golden retriever, a traffic light, and a street sign. The bottom row shows only the edges of the same images.
A deep-image classifier can determine image classes with over 90 percent confidence using primarily image borders, rather than an object itself. Credit: Sunny Chowdhary

In the real world, these signals can lead to model fragility, but because they are valid patterns within the datasets themselves, overinterpretation can’t be detected using traditional approaches.

“Not only are these high-confidence images unrecognizable, but they contain less than 10 percent of the original image in unimportant areas, such as borders. We found that these images were meaningless to humans, yet models can still classify them with high confidence,” says Brandon Carter, MIT Computer Science and Artificial Intelligence Laboratory Ph.D. student and lead author of this research.
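
The masking behind these findings can be sketched as follows: zero out the interior of an image, keep only a thin border of pixels, and feed the result back to the classifier to compare its confidence against the full image. The helper below is hypothetical, and a real experiment would run a trained CIFAR-10 model on the masked images:

```python
import numpy as np

def keep_border(image, width=1):
    """Zero out the interior, keeping only a thin border of pixels --
    the kind of 'unimportant' region the models still classify confidently."""
    masked = np.zeros_like(image)
    masked[:width, :] = image[:width, :]
    masked[-width:, :] = image[-width:, :]
    masked[:, :width] = image[:, :width]
    masked[:, -width:] = image[:, -width:]
    return masked

image = np.ones((32, 32))   # stand-in for a 32x32 CIFAR-10 image
masked = keep_border(image)

kept_fraction = np.count_nonzero(masked) / masked.size
# A 1-pixel border of a 32x32 image keeps 124 of 1024 pixels (~12%);
# the study reports confident predictions with 95% of the image missing.
```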

Read More: Fujitsu and MIT Center for Brains, Minds, and Machines Build AI model to Detect OOD data

The team explains that machine-learning algorithms have the potential to cling onto these meaningless tiny signals, making image classification difficult. Then, after being trained on datasets like ImageNet, image classifiers can make seemingly reliable predictions based on those signals.

The MIT team adds that datasets are equally to blame for causing overinterpretation. According to Carter, scientists can start by asking how datasets could be modified so that models trained on them more closely mimic how a human would think about classifying images. Carter hopes the algorithms could then generalize better in real-world scenarios such as autonomous driving and medical diagnosis, where nonsensical behavior and flawed predictions can lead to fatal outcomes.

This might imply producing datasets in a more controlled setting, as currently images retrieved from public domains are simply collected and classified. The team notes that overinterpretation, unlike adversarial attacks, relies on unmodified image pixels. At present, the MIT team asserts that ensembling and input dropout can both help prevent overinterpretation. You can read more about the research findings here.
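
The input-dropout mitigation mentioned above amounts to averaging a model's predictions over many randomly masked copies of the input, so that no single sparse pixel pattern can dominate the decision. A sketch, with a hypothetical `predict` function standing in for a trained classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(image):
    """Hypothetical stand-in for a trained classifier: returns class
    probabilities. Here it simply reacts to the mean pixel intensity."""
    score = image.mean()
    logits = np.array([score, 1.0 - score])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def predict_with_input_dropout(image, n_samples=100, keep_prob=0.8):
    """Average predictions over randomly masked copies of the input,
    diluting any reliance on a single sparse pixel pattern."""
    probs = np.zeros(2)
    for _ in range(n_samples):
        mask = rng.random(image.shape) < keep_prob  # drop ~20% of pixels
        probs += predict(image * mask)
    return probs / n_samples

image = rng.random((8, 8))
smoothed = predict_with_input_dropout(image)
```

Ensembling works analogously, except the averaging is done over several independently trained models rather than over masked inputs.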

This research was sponsored by Schmidt Futures and the National Institutes of Health. 

Carter collaborated on the research with Amazon scientists Siddhartha Jain and Jonas Mueller, as well as MIT Professor David Gifford. They’ll share their findings at the Conference on Neural Information Processing Systems in 2021.
