
Frontier Development Lab To Use Artificial Intelligence In Space Science Explorations


Frontier Development Lab has built a team of researchers who will use artificial intelligence and machine learning to explore space science. FDL plans to use artificial intelligence to solve challenges that scientists face in lunar resources, astronaut health, and earth science. 

Frontier Development Lab is a public-private partnership with NASA in the USA and ESA in Europe. It brings together industry leaders in space science and artificial intelligence like Google Cloud, Luxembourg Space Agency, Microsoft, Intel, Lockheed Martin, and many more. 

Hosted by NASA Ames Research Center and the SETI Institute, the FDL aims to combine physics and machine learning to explore open problems in space science that matter to humanity. 

Bill Diamond, the president and CEO of the SETI Institute, said, “In an impressive pivot, our 2020 FDL participants demonstrated that interdisciplinary researchers could achieve extraordinary results in an intense sprint environment and do it virtually, across about nine time zones.” 

Read More: AWS Is Now Ferrari’s Official Cloud Service Provider

He also confirmed that the FDL artificial intelligence and machine learning accelerator would again be held virtually in 2021. Founded in 2016, FDL has since demonstrated the potential of interdisciplinary artificial intelligence approaches to overcome challenges in lunar prospecting, planetary defense, and space weather.

The lab addresses knowledge gaps in space science by pairing machine learning experts with researchers in astronomy, astrophysics, and planetary science. The teams work together for eight weeks during the summer break of the academic year. FDL researchers have already used artificial intelligence to predict solar activity, generate 3D models of potentially dangerous asteroids, and map lunar resources. 

FDL 6.0 will build upon the work, processes, and learning developed over the last five years, with the potential to deepen the impact of the work and advance science in new ways.


Deep Learning For AI: A Paper By The Experts

A Paper By Yoshua Bengio, Yann LeCun, & Geoffrey Hinton

The three pioneers of artificial intelligence and deep learning, Yoshua Bengio, Yann LeCun, and Geoffrey Hinton, announced that their paper Deep Learning For AI would be officially published in July 2021. The paper covers neural networks, artificial intelligence, and deep learning.

In 2018, the three researchers dug deep into how simple neural networks can learn the rich internal representations required to perform complex tasks such as recognizing objects or understanding language. They have distilled their explorations and learnings into the paper, Deep Learning For AI.

Deep learning systems currently perform well on System 1 tasks such as object recognition and language understanding, but poorly on System 2 tasks: learning with little or no external supervision, performing tasks that humans and animals accomplish through a deliberate sequence of steps, and coping with test examples that arrive from a different distribution than the training examples. The paper describes several ways to make deep learning systems perform well on System 2 tasks. It also reviews the origins of and recent advances in deep learning and AI. 
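The last of those System 2 gaps, sensitivity to distribution shift, is easy to reproduce in a toy example. The sketch below (illustrative only, unrelated to the paper's own experiments; the data and labels are invented) trains a nearest-centroid classifier that is perfect on in-distribution test points but degrades once the test features shift:

```python
import math

# Toy illustration of distribution shift: a nearest-centroid classifier
# trained on one data distribution degrades when test features shift.
train = {"cat": [(0, 0), (1, 0), (0, 1)], "dog": [(4, 4), (5, 4), (4, 5)]}
centroids = {label: (sum(x for x, _ in pts) / len(pts),
                     sum(y for _, y in pts) / len(pts))
             for label, pts in train.items()}

def predict(point):
    # Assign the label of the nearest class centroid.
    return min(centroids, key=lambda c: math.dist(point, centroids[c]))

def accuracy(data):
    return sum(predict(p) == label for p, label in data) / len(data)

in_dist = [((0.2, 0.3), "cat"), ((4.2, 4.1), "dog")]   # same distribution
shifted = [((3.0, 3.0), "cat"), ((7.0, 7.0), "dog")]   # every feature +3
print(accuracy(in_dist), accuracy(shifted))  # 1.0 0.5
```

A real model faces the same failure mode whenever deployment data drifts away from the training data, which is why the paper treats out-of-distribution generalization as an open problem.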

Read more: Canon Developed Artificial Intelligence Powered Smile Detecting Cameras

The paper has three major aims: pointing to the directions of genuine progress in AI, getting machines to learn more like humans and animals, and getting machines to reason and to perceive the world more robustly.

The paper also engages with the view that there are problems neural networks cannot solve, and that the field must therefore fall back on the classical symbolic AI approach. The authors suggest otherwise: those goals can be achieved by making neural networks more structured, by extending the networks themselves.

“Deep learning has a great future; it is only going to get bigger and better, but there is still a long way to go in terms of understanding how to make neural networks effective, and we expect to have many more ideas,” said Geoffrey Hinton in a video describing the paper. 


Yellow Messenger Renamed To Yellow.ai To Launch Artificial Intelligence Powered Voice Bots

Yellow Messenger renamed to Yellow.ai

Yellow Messenger recently renamed itself Yellow.ai after announcing the launch of a new product suite. The change reflects its focus on delivering ‘Total Customer Experience (CX) Automation.’ 

The company has added artificial intelligence-powered voice bots to its existing platform of automated chat solutions. Yellow.ai’s new technology combines artificial intelligence and human intelligence to deliver a precise, enhanced customer experience at a very competitive price.

The company has already partnered with 700+ brands globally to provide Conversational CX automation services. The technology lets brands elevate the customer experience across multiple platforms: Telephony, Google Assistant, Alexa, Instagram, Apple Business Chat, Web, WhatsApp, Facebook, Google Business Messages, Telegram, WeChat, LINE, and many more, in 100+ languages. 

Read More: CSEM Develops Artificial Intelligence Powered Chips That Runs On Solar Energy

Customers can now get personalized, unified assistance from brands when they reach out via WhatsApp, the web, or the company’s customer service number. Key features of Yellow.ai’s service include a multichannel voice experience across all the popular voice assistants and Text-to-Speech (TTS) capabilities that support a range of emotions, such as happiness and anger. 

The platform also has a built-in personalization engine that analyzes customer intent and sentiment, along with a continuously learning Speech-to-Text (STT) system that improves its accuracy over time. The platform offers an inclusive and unbiased approach to CX in every market segment. 

The co-founder and CEO of Yellow.ai, Raghu Ravinutala, said, “The post-pandemic world is moving towards touchless UI, and ‘voice’ is playing a key role in enabling smarter brand-to-consumer engagement.” He added that Yellow.ai is dedicated to enabling human-like, engaging conversations with their new conversational CX platform. 

Yellow.ai is hosting a global event, ‘Envision – the future of Voice AI’ on 22nd June to mark its product launch. The event will host industry leaders from Microsoft, Teleperformance, Concentrix, and many more.


Deutsche Bank Releases Paper On The Usage Of AI In Securities Services

AI in bank security

On Saturday, Deutsche Bank released a paper titled “Unleashing the potential of AI in securities services,” which offers insight into the potential uses of artificial intelligence and machine learning in securities services and post-trade custody by banks. 

This globally leading investment bank is no stranger to AI: it has previously used AI for advanced client segmentation, in its S-2 Predict tool to prevent settlement failures, and in self-executing bots for natural-language messaging as part of its client-facing customer assistance chatbot. It is now extending the technology to risk management and to a better understanding of client activity. 

AI comes into play in risk management by helping handle the risks that arise from time-zone differences, imperfect communication across chains of intermediaries, the involvement of multiple clients, and the time pressure of settling a trade. Tackling these risks with AI and ML gives both banks and their clients a notable competitive advantage.

Read more: US Partners With India For Research In Artificial Intelligence For Mutual Benefits

Deutsche Bank also mentioned in the paper that AI can be applied to existing data to identify current, real-time trends from historical patterns, or to anticipate future trends by combining past and present data. It can also be used for client segmentation, i.e., grouping clients with similar characteristics, which helps custodians develop products and services that better meet clients’ shared and individual needs. In addition, AI can assist and speed up decision-making by analysing all the relevant data and trends almost instantly.
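To make the segmentation idea concrete, a minimal k-means sketch could group clients by behavioural features. This is an illustration only, not Deutsche Bank's actual system, and the client features used here are invented for the example:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Group similar points into k segments (plain k-means)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
    return centers, clusters

# Hypothetical client features: (monthly trade volume, settlement-fail rate %).
clients = [(100, 0.5), (110, 0.4), (105, 0.6),   # active, reliable settlers
           (10, 5.0), (12, 4.5), (8, 5.5)]       # low-volume, fail-prone
centers, clusters = kmeans(clients, k=2)
```

On this toy data the two recovered segments separate the reliable high-volume clients from the fail-prone low-volume ones, the kind of grouping a custodian could then target with tailored products.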

The paper surveys various AI uses along with learning types, algorithm types, governance (to ensure model accuracy and consistency), and a list of key recommendations. It also outlines the benefits of AI and ML for banks, including improved speed and efficiency, a smoother settlement lifecycle, and new employment opportunities.

“The opportunities are endless, and we are not even yet scratching the surface. Going forward, we will continue pushing this emerging technology to the forefront – stay tuned!” mentioned Paul Maley, global head of securities services at Deutsche Bank, in the paper. 


US Partners With India For Research In Artificial Intelligence For Mutual Benefits

US partners with India for artificial intelligence

The United States and India have joined hands again by launching the U.S.-India Artificial Intelligence Initiative (USIAI). India’s massive potential in the field of artificial intelligence is the main driver of this partnership between the two democracies. 

Joe Biden recently established the National Artificial Intelligence Initiative Office, focused on partnering with the country’s allies on artificial intelligence research and development. The partnership will benefit both countries: India gains better opportunities to upgrade its infrastructure, while the US gains access to Indian tech talent. 

A report from Georgetown University’s Center for Security and Emerging Technology noted that India produces seven times as many engineering graduates each year as the United States, but a lack of opportunities in the field of artificial intelligence hampers India’s development. 

Read More: Andrew Ng Announces A Data-Centric AI Competition

11% of the top 50 artificial intelligence startups in the US were founded by Indian immigrants. Their contribution not only adds to the US economy but also benefits India through tech transfers, investment opportunities, and outsourcing. 

Policymakers in India and the United States can advance their artificial intelligence ambitions by strengthening ties with Indian talent and backing Indian ventures in both nations. India’s colossal population produces far more graduates each year than its economy can absorb, giving America an opportunity to attract talented individuals. 

This partnership could also help India encourage the US to set up university campuses in India, which would improve the standard of education. The growing Indian tech diaspora in the US is a crucial element of the India-US artificial intelligence partnership, enabling the two countries to strengthen one another and to implement emerging technologies in accordance with democratic values and principles.


Fake News Generated By Artificial Intelligence Can Confuse Experts

artificial intelligence generated fake news

A recent study found that artificial intelligence systems can generate fake news convincing enough to fool experts. The study used artificial intelligence models dubbed transformers to create fake cybersecurity news, which was then presented to experts for evaluation. 

Surprisingly, the experts failed to recognize it as fake. Artificial intelligence is widely used to identify fake news, as it enables computer scientists to screen large amounts of false information quickly. Ironically, the same technology has been used to spread misinformation in recent years. Transformers such as BERT and GPT use Natural Language Processing (NLP) to interpret text and produce translations and summaries. 

However, Transformers can also be used to generate fake news across social media platforms like Facebook and Twitter. According to the study, Transformers also pose a misinformation threat in medicine and cybersecurity. 

Read More: Facebook’s Artificial Intelligence Can Now Detect Deepfake

The researchers performed an experiment, which showed that cybersecurity misinformation examples generated by Transformers were able to trick cyber threat experts, who have knowledge about all kinds of cybersecurity threats and vulnerabilities. 

Similar techniques can be used to generate fake medical documents. During the COVID-19 pandemic, this method was used many times to create fake research papers, which were being used to make decisions regarding public health. 

Nowadays, both healthcare and cybersecurity sectors are adopting artificial intelligence to extract data from cyber-threat intelligence, which is used to develop automated systems to help recognize potential cyber-attacks.  

If these automated systems process fake cybersecurity data, their effectiveness at detecting real cyber-attacks drops drastically. People spreading misinformation are developing new ways to spread fake news faster than experts can develop ways to recognize it. Ultimately, the responsibility rests with the reader to triangulate information against other trusted sources before accepting it as authentic.


Cleerly Raised $43 Million To Improve Treatment Of Heart Diseases

Cleerly raised $43 million

Cleerly, a New York-based health tech startup, raised $43 million in its second funding round. The company also showcased its unique digital care pathway solution to prevent heart attacks. Vensana Capital led the funding along with New Leaf Venture Partners, the American College of Cardiology, LRVHealth, Cigna Ventures, and existing investors.

Cardiac disease is a major problem, accounting for roughly one in every four deaths in the United States, where, according to the Centers for Disease Control and Prevention (CDC), one person dies of cardiovascular disease every 36 seconds. Cleerly was founded by James K. Min in 2016 to tackle this issue, developing proprietary artificial intelligence and machine learning algorithms that identify an individual’s risk of heart attack. 

The underlying research was conducted in the cardiovascular imaging department of New York-Presbyterian Hospital and includes a massive clinical trial of more than 50,000 heart patients, designed to determine how imaging can be used to analyze heart disease and predict patient outcomes. 

Read More: Facebook AI Open Sources A New Data Augmentation Library

Founder and CEO of Cleerly, James Min, said, “Advanced imaging has always been key to diagnosing and preventing the most common causes of cancer for years, but we’re not utilizing it yet to prevent the most common cause of death.” He further added that the company is now using artificial intelligence, which is continuously refined with huge volumes of clinical data to diagnose and prevent cardiac diseases in individuals. 

Justin Klein, Managing Director of Vensana Capital, said, “We see Cleerly as the future of how coronary disease will be evaluated, and we support the company’s mission to tailor a personalized approach to diagnosis and treatment.”

The enterprise’s expertise in the understanding of coronary diseases is revolutionizing the healthcare sector by introducing a multi-dimensional approach to cater to the needs of doctors as well as patients. Cleerly aims to use the newly raised funds to expand its operational capabilities and continue investing in industry-leading research and development.


Andrew Ng Announces A Data-Centric AI Competition

Andrew Ng competition

The AI mastermind Andrew Ng announced a data-centric AI competition on Friday. The competition challenges participants to improve the performance of a machine learning model by optimizing the data rather than the model or algorithm. 

A data-centric approach is not always the go-to method for improving a model’s output, but recent research by Andrew Ng’s team has shown significantly better results from data-centric methods than from model-centric ones. This finding motivated the competition, which aims to surface more data-centric approaches. 

Users can register for the competition and download the dataset, which consists of handwritten Roman numerals from 1 to 10, from the website. The task is to optimize model performance by improving the training and validation data. Various data-centric techniques can be applied, such as fixing incorrect labels, adding one’s own data, or adjusting the training and validation splits.
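One simple data-centric technique of the kind the competition encourages is flagging likely label errors by checking each example against its nearest neighbours. The sketch below is an illustration only, not part of the competition's starter code; the 2-D points stand in for image embeddings, and the mislabeled example is planted deliberately:

```python
import math
from collections import Counter

def flag_suspicious_labels(X, y, k=3):
    """Flag examples whose label disagrees with the majority label
    of their k nearest neighbours -- a cheap label-noise detector."""
    flagged = []
    for i, xi in enumerate(X):
        neighbours = sorted((math.dist(xi, xj), j)
                            for j, xj in enumerate(X) if j != i)
        votes = Counter(y[j] for _, j in neighbours[:k])
        majority = votes.most_common(1)[0][0]
        if majority != y[i]:
            flagged.append(i)
    return flagged

# Toy 2-D features; the label at index 2 is deliberately wrong.
X = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1),
     (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1)]
y = ["I", "I", "V", "I", "V", "V", "V", "V"]
print(flag_suspicious_labels(X, y))  # [2]
```

Examples flagged this way can then be relabeled or dropped before resubmitting the dataset, which is exactly the kind of iteration the competition rewards.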

Read more: CSEM Develops Artificial Intelligence Powered Chips That Runs On Solar Energy

Submissions are made on the CodaLab website by uploading a zip file of fewer than 10,000 PNG images that serve as the dataset; only five submissions are allowed per day. The uploaded folder and label book are passed to a prediction script that trains a fixed model on the submitted data and generates predictions on the label book. The organizers then mimic the contender’s run, replacing the dev set (label book) with a hidden test set to obtain the final accuracy, which is posted to the leaderboard. Results are judged on best overall performance and most innovative approach. 

The data-centric competition will be open until September 4th. Two winners from each category will get the opportunity for a private discussion with Andrew Ng about data-centric optimization, and their work will be published on the deeplearning.AI channel. 

This will be the first-ever data-centric competition, but there will be many more in the coming years, Ng mentioned.


Will DeepMind Be Able To Develop Artificial General Intelligence?

Will DeepMind Be Able To Develop AGI

Computer scientists are questioning whether DeepMind, tech giant Alphabet’s AI firm, will ever be able to develop machines with the general intelligence seen in humans, better known as Artificial General Intelligence (AGI). 

DeepMind, one of the largest artificial intelligence laboratories in the world, has assembled a dedicated team to work on the concept of ‘reinforcement learning’ to develop human-level artificial intelligence. 

The team aims to develop artificial intelligence algorithms that choose actions to improve their chances of earning a reward in a given situation. Many AI algorithms built this way have learned to play games such as chess and Go. 
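The reward-driven loop described above is, at its simplest, tabular Q-learning. The toy sketch below is not DeepMind's code, and the five-state corridor environment is invented for illustration: the agent earns +1 only on reaching the rightmost state, and by repeatedly updating action values from that reward it learns to walk right:

```python
import random

# Minimal tabular Q-learning: a five-state corridor where the agent
# earns a reward of +1 only on reaching the rightmost state.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # step left, step right
alpha, gamma, eps = 0.5, 0.9, 0.2       # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for _ in range(500):                    # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: explore occasionally, otherwise act greedily.
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Move Q toward reward plus discounted best future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy steps right (+1) from every non-goal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
```

The "reward is enough" debate is about whether scaling this same reward-maximizing loop, with far richer environments and function approximators, could ever yield general intelligence.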

Read More: Google I/O Introduces LaMDA, A Breakthrough Conversational AI Technology

DeepMind firmly believes that this reinforcement learning technique can be scaled up until it competes with human intelligence. Its scientists argue that if an algorithm is rewarded every time it does what it is asked to do, it will eventually start to show signs of general intelligence. 

However, AI researcher Samim Winiger said that the company’s ‘reward is enough’ approach is not sufficient to develop a true general intelligence model. He also noted that the path to achieving general artificial intelligence is full of hurdles and hardships, as the entire scientific community is aware. 

Independent AI researcher Stephen Merritt said, “there is a difference between theory and practice,” as there is no concrete evidence to date that reinforcement learning will lead to the development of AGI. Many in the artificial intelligence community say the issue with DeepMind’s approach is that the ‘reward is enough’ claim is framed so that it can never be shown to be wrong, which undermines it as a scientific path to Artificial General Intelligence (AGI). 

Entrepreneur William Tunstall-Pedoe said that even if the researchers are correct, there is no guarantee that they would achieve the desired results anytime soon. He also mentioned the possibility of a faster and better way to reach the outcome. 

The company was acquired by Google in 2014 for a reported $600 million. Since then, the firm has expanded rapidly and now has more than 1,000 employees. The enterprise says that although its reinforcement learning research is well known, it is only a fraction of its overall work, which spans other areas of artificial intelligence such as symbolic AI and population-based training. 


Facebook AI Open Sources A New Data Augmentation Library

Facebook AugLy

Facebook has open-sourced a new Python library named AugLy to help artificial intelligence researchers build more robust machine learning models using data augmentation. AugLy pitches in by providing advanced data augmentation tools that can be used to train and test various models. 

Because most datasets in use are multimodal, AugLy was built to combine audio, text, video, and image modalities. It offers over 100 data augmentations that reflect what real people on social media platforms like Facebook and Instagram do, such as overlaying text or emoji on images and taking screenshots. Facebook says many of the augmentations were informed by the ways people transform infringing content to evade automatic moderation systems. 

AugLy has four sub-libraries, one per modality, all sharing the same interface. Each provides both function-based and class-based transforms, along with intensity functions that help users gauge how severe a transformation is. The augmentations were sourced from multiple existing libraries, with others developed by Facebook itself. 
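The function-based and class-based pattern described above can be sketched with stdlib stand-ins. The code below mimics the shape of that interface but is not AugLy's real API; the two text transforms are simplified inventions for illustration:

```python
import random

def insert_emoji(text, rng=None):
    """Append an emoji, as users often do when re-sharing content."""
    return f"{text} 🙂"

def simulate_typo(text, rng=None):
    """Swap two adjacent characters to mimic a typing mistake."""
    rng = rng or random.Random(0)
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

class Compose:
    """Chain several augmentations, mirroring a class-based interface."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, text, rng=None):
        for transform in self.transforms:
            text = transform(text, rng=rng)
        return text

augment = Compose([insert_emoji, simulate_typo])
print(augment("this post evades the filter"))
```

Running the same input through such a chain produces the kinds of slightly altered variants that a robust model should still match to the original content.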

Read more: Facebook’s Artificial Intelligence Can Now Detect Deepfake

If a model can be made robust to unimportant perturbations of the data, it will learn to focus on the data attributes that matter for a specific use, says the Facebook AI blog. The blog also mentions that models developed using AugLy can detect duplicate or near-duplicate copies of infringing content even when an image has been altered by a pixel, a filter, or overlaid text or audio. This actively helps prevent users from uploading disturbing content. 

AugLy can assist with object detection models, hate speech identification, and voice recognition. It was used in the Deepfake Detection Challenge to check the robustness of deepfake detection models. AugLy is part of Facebook AI’s broader efforts to advance multimodal machine learning, ranging from the Hateful Memes Challenge to the SIMMC datasets for training next-generation shopping assistants, as mentioned in Facebook’s AI blog. 

Check Facebook AugLy library on GitHub.
