
Fake News Generated By Artificial Intelligence Can Confuse Experts


A recent study found that artificial intelligence systems can be used to generate fake news convincing enough to fool experts. The study used artificial intelligence models known as transformers to create fake cybersecurity news, which was then presented to experts for evaluation.

Surprisingly, the experts failed to recognize it as fake. Artificial intelligence is widely used to identify fake news, since it lets computer scientists check large amounts of information for falsehoods quickly. Ironically, the same technology has been used to spread misinformation in recent years. Transformer models such as BERT and GPT use natural language processing (NLP) to interpret text and produce translations and summaries.

However, transformers can also be used to generate fake news across social media platforms like Facebook and Twitter. According to the study, transformers also pose a misinformation threat in medicine and cybersecurity.
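The generative step is easy to illustrate. The toy Markov-chain generator below is a crude stand-in for the transformer models used in the study (a transformer predicts each next token from far richer context), but it shows how a model trained on a small corpus of security text can emit fluent-looking yet unfounded sentences; the corpus here is invented for illustration:

```python
import random

def train_markov(corpus, order=2):
    """Build a next-word table keyed on the previous `order` words."""
    words = corpus.split()
    table = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        table.setdefault(key, []).append(words[i + order])
    return table

def generate(table, seed, length=12):
    """Emit text by repeatedly sampling a continuation for the current context."""
    out = list(seed)
    for _ in range(length):
        options = table.get(tuple(out[-len(seed):]))
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = ("a critical flaw in the server allows remote attackers to "
          "execute code a critical flaw in the router allows remote "
          "attackers to bypass authentication")
table = train_markov(corpus)
print(generate(table, ("a", "critical")))
```

Scaled to billions of parameters and web-scale training corpora, the same predict-the-next-token principle produces the far more convincing output that fooled the study's experts.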

Read More: Facebook’s Artificial Intelligence Can Now Detect Deepfake

The researchers performed an experiment showing that cybersecurity misinformation generated by transformers was able to trick cyber-threat experts, who are familiar with all kinds of cybersecurity threats and vulnerabilities.

Similar techniques can be used to generate fake medical documents. During the COVID-19 pandemic, this method was used many times to create fake research papers, which were then cited in public-health decision-making.

Nowadays, both the healthcare and cybersecurity sectors are adopting artificial intelligence to extract data from cyber-threat intelligence, which is used to develop automated systems that help recognize potential cyberattacks.

If these automated systems process fake cybersecurity data, their effectiveness at detecting a real cyberattack drops drastically. People spreading misinformation are developing new and better ways to spread fake news faster than experts can create ways to recognize it. At the end of the day, it is the reader's responsibility to triangulate information against other trusted sources before accepting it as authentic news.


Cleerly Raised $43 Million To Improve Treatment Of Heart Diseases


Cleerly, a New York-based health tech startup, raised $43 million in its second funding round. The company also showcased its unique digital care pathway solution to prevent heart attacks. Vensana Capital led the funding along with New Leaf Venture Partners, the American College of Cardiology, LRVHealth, Cigna Ventures, and existing investors.

Cardiac disease is a major problem, accounting for roughly one in every four deaths. According to the Centers for Disease Control and Prevention (CDC), one person dies of cardiovascular disease every 36 seconds in the United States. Cleerly was founded by James K. Min in 2016 to tackle this issue, developing proprietary artificial intelligence and machine learning algorithms to identify the risk of heart attacks in individuals.

The underlying research was conducted in the cardiovascular imaging department of New York-Presbyterian Hospital and included a massive clinical trial of more than 50,000 heart patients. The study was designed to determine how imaging can be used to analyze heart disease and predict patient outcomes.

Read More: Facebook AI Open Sources A New Data Augmentation Library

Founder and CEO of Cleerly, James Min, said, “Advanced imaging has always been key to diagnosing and preventing the most common causes of cancer for years, but we’re not utilizing it yet to prevent the most common cause of death.” He further added that the company is now using artificial intelligence, which is continuously refined with huge volumes of clinical data to diagnose and prevent cardiac diseases in individuals. 

Justin Klein, Managing Director of Vensana Capital, said, “We see Cleerly as the future of how coronary disease will be evaluated, and we support the company’s mission to tailor a personalized approach to diagnosis and treatment.”

The company's expertise in coronary disease is reshaping the healthcare sector by introducing a multi-dimensional approach that caters to the needs of doctors as well as patients. Cleerly aims to use the newly raised funds to expand its operational capabilities and continue investing in industry-leading research and development.


Andrew Ng Announces A Data-Centric AI Competition


AI pioneer Andrew Ng announced a data-centric AI competition on Friday. The competition challenges participants to improve the performance of a machine learning model by optimizing the data rather than the model or algorithm.

The data-centric approach is not always the go-to method for improving a model's output, but recent research by Ng's team has shown significant improvements from data-centric methods over model-centric ones. This motivated the competition, whose goal is to surface more data-centric approaches.

Users can register for the competition and then download a dataset of handwritten Roman numerals from 1 to 10 from the website. The task is to optimize model performance by improving the training and validation sets. Various data-centric techniques can be applied, such as fixing incorrect labels, adding one's own data, or adjusting the training/validation split.
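A data-centric pass typically starts by auditing the labels rather than touching the model. The sketch below (illustrative only, not the competition's tooling; the byte-string "images" and label format are hypothetical) flags redundant copies and identical samples that carry conflicting labels:

```python
from collections import defaultdict

def audit_dataset(samples):
    """samples: list of (image_bytes, label) pairs.
    Flags exact duplicates and identical images carrying different labels."""
    seen = defaultdict(set)          # image -> set of labels observed so far
    duplicates, conflicts = [], []
    for i, (image, label) in enumerate(samples):
        if label in seen[image]:
            duplicates.append(i)     # same image, same label: redundant copy
        elif seen[image]:
            conflicts.append(i)      # same image, different label: likely mislabel
        seen[image].add(label)
    return duplicates, conflicts

data = [(b"img-a", "i"), (b"img-a", "i"), (b"img-b", "ii"), (b"img-b", "iii")]
dups, confs = audit_dataset(data)
print(dups, confs)   # index 1 is a duplicate, index 3 a label conflict
```

Fixing the entries such a pass surfaces, then retraining the fixed model, is exactly the data-over-algorithm loop the competition rewards.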

Read more: CSEM Develops Artificial Intelligence Powered Chips That Run On Solar Energy

Submissions are made on the Codalab website by uploading a zip file of fewer than 10,000 PNG images, which serve as the dataset; only five submissions are allowed per day. The uploaded folder and the label book are passed to a prediction script that trains a fixed model on the submitted data and generates a set of predictions on the label book. The competition organizers then mimic the contender's run, replacing the dev set (label book) with a test set to obtain an accurate score, and the accuracy is posted to the leaderboard. Results will be judged on the best overall performance and the most innovative approach.

The data-centric competition will be open till September 4. There will be two winners in each category, who will get an opportunity for a private discussion with Andrew Ng regarding data-centric optimization. Their work will also be published on the deeplearning.AI channel.

This will be the first-ever data-centric competition, but Ng mentioned that there will be many more in the coming years.


Will DeepMind Be Able To Develop Artificial General Intelligence?


Computer scientists are questioning whether DeepMind, tech giant Alphabet's AI firm, will ever be able to develop machines with the general intelligence seen in humans, better known as artificial general intelligence (AGI).

DeepMind, one of the largest artificial intelligence laboratories in the world, has assembled a dedicated team to work on the concept of reinforcement learning to develop human-level artificial intelligence.

The team aims to develop artificial intelligence algorithms that perform specific tasks so as to maximize the reward earned in a given situation. Many AI algorithms have been developed using this process to play games like chess and Go.

Read More: Google I/O Introduces LaMDA, A Breakthrough Conversational AI Technology

DeepMind firmly believes that this reinforcement learning technique can be scaled until it competes with human intelligence. Its scientists say that if they continue to reward the algorithm every time it does something it is asked to do, it would eventually start to show signs of general intelligence.
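The reward-driven loop at the heart of this idea is small enough to sketch. The toy below is an illustration of tabular Q-learning in general, not DeepMind's systems: an agent in a five-cell corridor is rewarded only for reaching the far end, and from that reward signal alone it learns to walk right:

```python
import random

random.seed(0)  # make the toy run deterministic

def q_learn(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.3):
    """Tabular Q-learning on a corridor: states 0..n_states-1,
    actions 0 (step left) and 1 (step right). Reaching the last
    state yields reward 1 and ends the episode."""
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy: mostly exploit, sometimes explore
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update toward reward plus discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learn()
policy = ["right" if q[s][1] >= q[s][0] else "left" for s in range(4)]
print(policy)
```

The 'reward is enough' hypothesis is, in essence, the claim that this same loop, with vastly richer environments and function approximators in place of the lookup table, suffices for general intelligence.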

However, AI researcher Samim Winiger said that the company's 'reward is enough' approach is not enough to develop a true general intelligence model. He also mentioned that the path to achieving artificial general intelligence is full of hurdles and hardships, which the entire scientific world is aware of.

An independent AI researcher, Stephen Merritt, said, “there is a difference between theory and practice,” as there is no concrete evidence to date that reinforcement learning will lead to the development of AGI. Many experts in the artificial intelligence community say the issue with DeepMind's approach is that the 'reward is enough' hypothesis is framed so that it cannot be proven wrong, leaving it effectively unfalsifiable.

Entrepreneur William Tunstall-Pedoe said that even if the researchers are correct, there is no guarantee that they would achieve the desired results anytime soon. He also mentioned the possibility of a faster and better way to reach the outcome. 

The company was acquired by Google in 2014 for a reported $600 million. Since then, the firm has expanded rapidly and now has more than 1,000 employees. The enterprise said that though its research on reinforcement learning is well known, it is only a fraction of the company's overall work, which spans other fields of artificial intelligence such as symbolic AI and population-based training.


Facebook AI Open Sources A New Data Augmentation Library


Facebook has open-sourced a new Python library named AugLy to help artificial intelligence researchers develop more robust machine learning models using data augmentation. AugLy pitches in by providing advanced data augmentation tools that can be used to train and test various models.

As most of the datasets in use today are multimodal, AugLy was built to combine audio, text, video, and images across different modalities. It offers over 100 data augmentations that emphasize the kinds of things real people on social media platforms like Facebook and Instagram do (such as overlaying text on images, adding emoji to text, taking screenshots, etc.). Facebook says many augmentations were informed by the ways people transform infringing content to evade automatic detection systems.

AugLy has four sub-libraries, one per modality, all sharing the same interface. It provides both function-based and class-based transforms, along with an intensity function to help users gauge how intense a transformation is. The augmentations were sourced from multiple existing libraries as well as some developed by Facebook itself.
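The design described here — a transform paired with an intensity score — can be pictured with a self-contained toy. This is a sketch of the idea only, not AugLy's actual API; the function names and the 0–100 intensity scale are assumptions for illustration:

```python
import random

def overlay_emoji(text, emoji="🙂", rate=0.3, seed=0):
    """Toy text augmentation: append an emoji after a fraction of the words.
    `rate` is the knob that the intensity function reports on."""
    rng = random.Random(seed)  # seeded so the augmentation is reproducible
    words = [w + emoji if rng.random() < rate else w for w in text.split()]
    return " ".join(words)

def intensity(rate):
    """Report how strong the transform is on a 0-100 scale."""
    return min(100.0, rate * 100)

augmented = overlay_emoji("never gonna give you up")
print(augmented)
print(intensity(0.3))   # 30.0
```

Training a model on both the original and augmented copies teaches it that the emoji noise is irrelevant to the label, which is the robustness effect the Facebook blog describes.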

Read more: Facebook’s Artificial Intelligence Can Now Detect Deepfake

If a model is robust to perturbations of the unimportant aspects of the data, it will learn to focus on the data attributes that are crucial for a specific use, says the Facebook AI blog. The blog also mentions that a model developed using AugLy can detect duplicate or near-identical copies of a particular piece of infringing content even when the image has been altered by a pixel, a filter, or added text or audio. This actively prevents users from uploading disturbing content.

AugLy can assist with object detection models, identification of hate speech, and voice recognition. It was used in the Deepfake Detection Challenge to check the robustness of deepfake detection models. AugLy is part of Facebook AI's broader efforts to advance multimodal machine learning, ranging from the Hateful Memes Challenge to the SIMMC datasets for training next-generation shopping assistants, Facebook mentioned in its AI blog.

Check Facebook AugLy library on GitHub.


Canon Developed Artificial Intelligence Powered Smile Detecting Cameras


The Japanese MNC Canon has developed an artificial intelligence-powered smile detecting camera system. The technology has been installed in Canon’s subsidiary office in China with the intention of tackling workplace morale issues. 

The camera system allows only ‘smiling’ employees to enter the office and organize meetings, to ensure that workers are happy throughout the workday. Last year, Canon Information Technology announced the launch of its smart smile recognition system as a part of its workplace management tools.

However, the technology didn't receive much attention until it was implemented in China. A Canon official said the technology was deployed to bring cheerfulness to the work floor during the COVID-19 pandemic, adding that the company wanted to encourage workers to build a positive environment.

Read More: Udacity Launches AWS Machine Learning Scholarship Program

However, a report from the Financial Times shows the degree to which big enterprises in China track the activity of their employees using artificial intelligence. An IT engineer from Shanghai said the technology makes no sense, as it is not possible for a person to stay in the same frame of mind throughout the day.

King’s College London academic Nick Srnicek said that employees are not being replaced by artificial intelligence; instead, companies are using it to monitor workers ever more closely. He added that the situation resembles the industrial revolution, when technology set the pace for the worker operating the machine, rather than the other way around.

Because of the ongoing pandemic, most companies have resorted to a work-from-home approach, accelerating the adoption of surveillance software. According to the Financial Times' report, the artificial intelligence-powered smile recognition camera system is the least dangerous type of surveillance tool.


Facebook’s Artificial Intelligence Can Now Detect Deepfake


On Wednesday, Facebook and Michigan State University (MSU) announced that their new artificial intelligence technique could detect and pinpoint the generative model used to create deepfake images or videos.

A deepfake is a fake image or video created from existing footage using deep learning techniques. The results have become so convincing that it is almost impossible to tell whether a picture or video is original or a deepfake.

Read more: ISRO Is Offering A Free Machine Learning Course On Remote Sensing With Certification

Although Facebook banned deepfakes in January 2020, they still pose a threat to the security of its users. As the most widely used social media platform, Facebook is a natural home for deepfakes.

Facebook's new deepfake detection method is built around a ‘fingerprint estimation network’. It relies on the generative fingerprints left behind when a deepfake is created. Like human fingerprints, they are unique and can be traced back to the generative model that made a particular deepfake. The technique picks them up from a deepfake image, or even a single video frame, and traces them back to the source model.

Once the generative fingerprints are detected, the technique uses a reverse-engineering method called ‘model parsing’ to identify the original generative model. Since most of the generative models used to fabricate deepfakes are known, this method works well; even when the generative model is unknown, the AI can still surface information about it, says the Facebook research team.
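The attribution step can be pictured as nearest-neighbour matching in fingerprint space. The sketch below is a deliberately simplified stand-in — the real fingerprint estimation network and model-parsing pipeline are trained neural networks, and the model names and fingerprint vectors here are invented:

```python
def closest_model(fingerprint, known):
    """Attribute an estimated fingerprint to the known generative model
    whose reference fingerprint is nearest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(known, key=lambda name: dist(fingerprint, known[name]))

# Hypothetical reference fingerprints for known generators
known = {
    "gan_a": [0.9, 0.1, 0.0],
    "gan_b": [0.1, 0.8, 0.2],
    "gan_c": [0.0, 0.2, 0.9],
}
estimated = [0.15, 0.75, 0.25]   # fingerprint pulled from a suspect image
print(closest_model(estimated, known))   # gan_b
```

When no known model is close, the same fingerprint still carries usable signal — which is what lets the Facebook system say something about previously unseen generators.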

To test the efficacy of the new AI technique, Facebook evaluated it on 100,000 deepfakes created using 100 generative models. Facebook says this methodology, developed with MSU, works significantly better than earlier techniques used to detect and remove deepfakes.

The Facebook and MSU research teams have also mentioned that they are considering open-sourcing the datasets, code, and trained models used to identify the generative model, in order to advance research on deepfake detection.


CSEM Develops Artificial Intelligence Powered Chips That Run On Solar Energy


Engineers at the Swiss Center for Electronics and Microtechnology (CSEM) announced they have developed an artificial intelligence chip that runs on solar energy and can perform complicated operations like voice, face, and gesture recognition and cardiac monitoring.

This technology eliminates artificial intelligence's usual requirement of staying continuously connected to the cloud: all AI operations are performed locally on the chip. It is a customizable system that can be modified to suit any application requiring real-time signal and image processing.

The technology will be unveiled at the 2021 VLSI Circuits Symposium in Kyoto this month. It works through an entirely new signal processing architecture that reduces the power consumption of the chip, combining an ASIC chip and a RISC-V processor, also developed by CSEM, with two coupled machine learning accelerators. The first accelerator is a binary decision tree (BDT) engine that can carry out simple tasks but cannot perform face, voice, or gesture recognition.

Read More: KeepTruckin Raised $190 Million To Invest In AI Products

The second accelerator is a convolutional neural network (CNN) engine capable of performing complicated recognition operations efficiently. This approach reduces power consumption drastically, as the first accelerator does most of the work. Stephane Emery, head researcher at CSEM, said, “When our system for example is used in facial recognition applications, the first accelerator will answer preliminary questions like are there people in the images? And if so, are their faces visible?”
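The two-stage design can be sketched as a cascade: a cheap gating model handles the easy negatives and only wakes the expensive model when needed. The names, features, and threshold below are illustrative assumptions, not CSEM's firmware:

```python
def make_cascade(cheap_gate, expensive_model):
    """Run the low-power gate first; invoke the costly model only on positives."""
    def classify(frame):
        if not cheap_gate(frame):       # e.g. BDT engine: "anyone in frame?"
            return "no-person"
        return expensive_model(frame)   # e.g. CNN engine: full recognition
    return classify

# Toy stand-ins: a frame is a dict of precomputed features
cheap_gate = lambda f: f["motion"] > 0.2              # trivially cheap check
expensive_model = lambda f: "alice" if f["face"] == 1 else "unknown"

classify = make_cascade(cheap_gate, expensive_model)
print(classify({"motion": 0.05, "face": 1}))   # gate rejects: no-person
print(classify({"motion": 0.9, "face": 1}))    # gate passes: alice
```

Because most frames contain nothing of interest, the expensive stage runs rarely, which is how the overall energy budget stays within what a small solar cell can supply.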

This invention could transform the field of artificial intelligence and machine learning, as the chips developed by CSEM can run independently for more than a year. It also considerably reduces the installation and maintenance cost of such devices and enables their use in locations where a power source is hard to find.


IIT Mandi Is Hosting Workshop On Deep Learning For Executives And Working Professionals


The Indian Institute of Technology Mandi, along with IIT Mandi iHub and the HCI Foundation, is organizing a 6-day weekend workshop, a Deep Learning Crash Course (ADLCC 2021), for executives and working professionals between 3rd July and 18th July 2021.

This crash course will cover both theory and practical sessions on artificial intelligence and machine learning. The theory section will be covered from 9 AM to 1 PM, and the practical sessions will be organized from 2 PM to 6 PM on weekends. An assessment will be conducted after the completion of the course, based on which the applicants will receive certificates. 

Emphasis will be given to topics like Basics of Machine Learning & Neural Networks, Object Localisation and Detection, Convolutional Neural Networks, Autoencoders and Variational Autoencoders, and Generative Adversarial Networks. Dr. Aditya Nigam, workshop coordinator and Assistant Professor at the School of Computing and Electrical Engineering, IIT Mandi, said, “This workshop will be the key to enter the mystic world of AI/ML. Extensive learning has been planned through comprehensive sessions organized by various experts.”

Read More: IIT Madras Offers Post Baccalaureate Fellowship In Artificial Intelligence & Data Science

He also added that the unique structure of the workshop would help mature learners understand the topics better. The course will include sessions by renowned speakers like Dr. Varun Dutt, Dr. Chetan Arora, Dr. Kamlesh Tiwari, and many more. “This workshop is the first of its kind sponsored by the iHub and IIT Mandi, and the entire workshop series includes six such workshop events in 2021 in total,” said Dr. Varun Dutt of IIT Mandi. He also mentioned that applicants would get hands-on experience with artificial intelligence and machine learning topics across various sectors, including the field of human-computer interaction.

The registration fee structure of the workshop is as follows – 

The workshop has a limited number of seats, and applications will be processed on a first-come, first-served basis. Interested learners can apply for the workshop through IIT Mandi's website.


AWS Is Now Ferrari’s Official Cloud Service Provider


Amazon Web Services (AWS) and Ferrari have announced an agreement under which AWS will provide artificial intelligence, machine learning, and cloud services to the sports car manufacturer. The services will be used for car testing, and a new fan engagement platform will be added to Ferrari's smartphone app.

The company is also planning to launch an augmented reality (AR) platform named Ferrari Garage, where the users will be able to experience the company’s cars in a virtual world. The company also plans to utilize these services for its vehicle information hub, where the company stores its car information and maintenance records. 

The AWS logo also appeared on Ferrari's Formula 1 cars at the French Grand Prix, which took place on 20th June. Amazon Elastic Compute Cloud (Amazon EC2) will enable the carmaker to run complex simulations of its cars in multiple driving, racing, and environmental conditions, helping the company generate test reports much faster than traditional on-premises testing.

Read More: Higher Cloud Demand Helped Boost Oracle’s Quarterly Revenue

Ferrari also plans to use Amazon Elastic Kubernetes Service (Amazon EKS) to improve the company's car configurator, which is widely used to build customized cars in 2D and 3D formats. In addition, Scuderia Ferrari will utilize AWS services to develop a new digital platform where users will be able to connect with their favorite racers.

AWS has been a leading cloud platform for over 15 years and has continuously worked to advance its technology. Ferrari officials said they chose AWS because of its constant drive for innovation and the wide range of machine learning and artificial intelligence solutions it provides.

“Ferrari and AWS are both exceptional in their respective spheres of activity, and I am pleased to welcome a partner known for the excellence of its innovation and creativity,” said the Managing Director of Ferrari, Mattia Binotto.
