
Landing AI raises $57 million in Series A Funding Round


Artificial intelligence startup Landing AI raises $57 million in its Series A funding round led by industrial IoT investor McRock Capital. Other investors like Insight Partners, Taiwania Capital, Canada Pension Plan Investment Board (CPP Investments), Intel Capital, Samsung Catalyst Fund, Far Eastern Group’s DRIVE Catalyst, and Walsin Lihwa also participated in the funding round. 

Landing AI was founded by Andrew Ng, co-founder of the Google Brain research lab and former chief scientist at Baidu. With the fresh funding, Landing AI plans to further improve its product LandingLens and to hire trained professionals to meet its goals. The company plans to grow its team from 75 to 150 while adding more customers. 

Andrew Ng said, “AI built for 50 million data points doesn’t work when you only have 50 data points. By bringing machine learning to everyone regardless of the size of their data set, the next era of AI will have a real-world impact on all industries.” 

Read More: Novarad and CureMetrix to develop new AI-driven Mammography solutions

He further added that it is important to have good quality data in order to win with artificial intelligence. United States-based Landing AI was founded in 2017 and specializes in developing artificial intelligence solutions for challenging visual inspection problems and generating business value. 

Additionally, Co-Founder and Managing Partner of McRock Capital, MacDonald, will join Landing AI’s board of directors. “Landing AI will unleash the power of the Industrial IoT one company, one factory, and one manufacturing line at a time,” said MacDonald. He also said he believes Landing AI can transform digital technologies across various markets. 

George Mathew, managing director at Insight Partners, said that the need for Landing AI is ever increasing, and he believes the company will be able to unlock untapped segments of machine vision projects.


Novarad and CureMetrix to develop new AI-driven Mammography solutions


Enterprise healthcare solutions company Novarad is partnering with CureMetrix, a developer of artificial intelligence-powered medical imaging products, to create new AI-driven mammography solutions. 

With this collaboration, both companies plan to integrate their products to launch more capable solutions. Novarad’s imaging tools will be integrated with CureMetrix’s artificial intelligence-powered women’s health suite of tools for mammography. 

The new product will be distributed throughout the United States, with Novarad serving as the exclusive distributor of the integrated systems for small to medium-sized imaging centers and hospitals. 

Read More: Fovia Ai to Showcase Artificial Intelligence Visualization Integrations

The solution is expected to reduce mammography reading time by 30%, enabling earlier diagnosis for breast cancer patients, and to bring down false positives by 60%. 

Novarad’s director of product, David Grandpre, said, “By integrating these highly trained, proven algorithms with our existing mammography offerings, radiologists will be able to streamline their workflow, reduce false positives and enhance their ability to diagnose breast cancer earlier.” 

He further added that the company’s vision is similar to CureMetrix’s goal of supporting women’s health. CureMetrix is a United States-based artificial intelligence company founded by Navid Alipour, Homa Karimabadi, Kevin Harris, and Blaise Barrelet in 2014. 

According to the company, it is the global leader in artificial intelligence for medical imaging. CureMetrix’s products help healthcare practitioners accurately identify and classify anomalies in mammography. 

CEO of CureMetrix, Navid Alipour, said, “CureMetrix solutions will enhance the performance of physicians using Novarad’s outstanding platforms, improving both clinical and financial outcomes both now and well into the future.” He also mentioned that their integrated solution would aid radiologists in diagnosing breast cancer at an early stage, reducing fatalities.


MIT researchers develop an AI model that understands object relationships


When humans look at a scene, they understand it by relating the objects in it to one another. Deep learning models, however, struggle to untangle the relationships between individual objects. Without knowledge of object relationships, a robot that’s supposed to help someone in a kitchen would have difficulty following complex commands like “pick up the spatula that is to the left of the stove and place it on top of the cutting board.” To solve this problem, MIT researchers have developed an AI model that understands object relationships: it represents individual relationships separately and then combines them to describe the overall scene, enabling the model to generate more accurate actions. 

The framework generates an image of a scene from a text description of objects and their relationships. The system first breaks the description down into smaller pieces, one per relationship, and then combines them through an optimization process that generates an image of the scene. Breaking the description apart also lets the system recombine the pieces in new ways, making it better able to adapt to scene descriptions it has not seen before.
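The compose-then-optimize step can be illustrated with a toy sketch: each relationship contributes one energy term, and an optimizer searches for a scene configuration that makes the summed energy low. Everything below is a hypothetical stand-in for the paper's learned energy-based models, using hand-written relations over 2-D object positions instead of images, with a numerical gradient.

```python
import numpy as np

# Toy energy functions: each returns a low value when the relation holds.
# (Hypothetical stand-ins for the learned energy-based models in the paper.)
def left_of(a, b):
    # Penalize object a sitting to the right of object b (margin of 0.5).
    return max(0.0, a[0] - b[0] + 0.5) ** 2

def on_top_of(a, b):
    # Penalize horizontal offset and a not sitting just above b.
    return (a[0] - b[0]) ** 2 + (a[1] - (b[1] + 1.0)) ** 2

def total_energy(positions, relations):
    # A scene satisfies a description when the sum of relation energies is low.
    return sum(rel(positions[i], positions[j]) for rel, i, j in relations)

def optimize_scene(relations, n_objects, steps=500, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.normal(size=(n_objects, 2))
    eps = 1e-4
    for _ in range(steps):
        grad = np.zeros_like(pos)
        # Numerical gradient of the summed energy over all coordinates.
        for k in range(n_objects):
            for d in range(2):
                bump = np.zeros_like(pos)
                bump[k, d] = eps
                grad[k, d] = (total_energy(pos + bump, relations)
                              - total_energy(pos - bump, relations)) / (2 * eps)
        pos -= lr * grad
    return pos

# A "spatula left of stove, spatula on top of cutting board" style description,
# broken into two independent relations and optimized jointly.
relations = [(left_of, 0, 1), (on_top_of, 0, 2)]  # 0=spatula, 1=stove, 2=board
scene = optimize_scene(relations, n_objects=3)
```

In the actual system each relation's energy is a learned neural network and the optimization runs over an image, but the composition-by-summation is the same idea: adding energy terms is what lets shorter pieces be recombined for unseen descriptions.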

“When I look at a table, I can’t say that there is an object at XYZ location. Our minds don’t work like that. In our minds, when we understand a scene, we really understand it based on the relationships between the objects. We think that by building a system that can understand the relationships between objects, we could use that system to more effectively manipulate and change our environments,” says Yilun Du, a Ph.D. student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.

Read more: OpenAI’s GPT-3 is now Open for All

Du’s co-lead authors on the paper are Shuang Li, a CSAIL Ph.D. student, and Nan Liu, a graduate student at the University of Illinois; they are joined by co-author Joshua B. Tenenbaum, Professor of Cognitive Science and Computation, and senior author Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science. In December, they will present the research in a paper titled “Learning to Compose Visual Relations” at the Conference on Neural Information Processing Systems.

The researchers used energy-based models, a machine learning technique, to represent the individual object relationships in a scene description. The system also works in reverse: given an image, it finds text descriptions that match the relationships between objects in the scene. In addition, the model can edit an image by rearranging its objects to fit a new description.

When the MIT researchers compared their model to other deep learning methods that were given text descriptions and tasked with generating images displaying the corresponding objects and relationships, it outperformed the baselines. They also asked humans to evaluate whether the generated images matched the original scene descriptions; in 91 percent of cases, participants concluded that the new model performed better.

This research could help in situations where industrial robots must perform multi-step manipulation tasks, such as assembling appliances or stacking items in a warehouse. Li added that their model can learn from less data yet generalize to more complex scenes. Next, the researchers would like to see how the model performs on complex real-world images with noisy backgrounds and objects blocking one another.


OpenAI’s GPT-3 is now Open for All


OpenAI, one of the most prominent AI research laboratories, has opened its most successful model, GPT-3, to all. The natural language model is now publicly available for developers and enterprises to apply to their most challenging language problems. Developers can use GPT-3 by accessing it through the API: any user can visit the OpenAI website, sign up for an account, and start working with GPT-3 immediately.
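Access is through OpenAI's HTTP API. The sketch below only assembles a request in the shape the completions endpoint used at the time; the endpoint path, field names, and the `YOUR_API_KEY` placeholder are assumptions, and an actual call requires a key from an OpenAI account.

```python
import json

# Completions endpoint for the "davinci" engine (path as documented at the
# time of writing; treat it as an assumption that may have changed since).
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_completion_request(prompt, max_tokens=32, temperature=0.7):
    """Assemble headers and a JSON body for a GPT-3 completion request."""
    headers = {
        # Placeholder only -- never hard-code a real key in source code.
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    })
    return headers, body

headers, body = build_completion_request("Summarize: GPT-3 is now open to all.")
# The (headers, body) pair would then be POSTed to API_URL with any HTTP client.
```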

Amid high expectations set by its predecessor, GPT-2, OpenAI released GPT-3 in June 2020 with roughly 175 billion parameters. The newly launched model captured the attention of researchers, AI businesses, and the mass media, as it was the largest NLP model at the time.

Although GPT-3 is not free to use, the API pricing was reasonable when OpenAI first released it to a limited set of users. 

Since the GPT-3 API’s release last year, AI enthusiasts have had to join a waitlist for access. Only a few got in, however, because OpenAI was cautious about safety concerns; the company believed the model could be used for malicious or illegal purposes. 

To address this and strengthen its safeguards, OpenAI has spent the past year working on both safety and reliability. The company is now confident enough in those safeguards to allow everyone to integrate GPT-3 into AI-based solutions. “Our progress with safeguards makes it possible to remove the waitlist for GPT-3,” it mentioned in its official announcement.

As part of the release, OpenAI has published usage and content guidelines to clarify for users and developers what kind of content the API may generate. It has also released API tools and best-practices documentation to help developers bring their applications to production quickly and safely, along with a list of supported countries that can instantly access the GPT-3 API.

“As our safeguards continue to improve, we will expand how the API can be used while further improving the experience for our users,” the organization said in its official announcement.


Fovia Ai to Showcase Artificial Intelligence Visualization Integrations


Cloud-based imaging products company Fovia Ai will showcase its new artificial intelligence-powered visualization integrations in the Imaging Artificial Intelligence in Practice (I.A.I.P.) demonstration. The event is scheduled to commence on 28th November 2021 at the 107th Scientific Assembly and Annual Meeting of the Radiological Society of North America. 

Fovia Ai plans to integrate its technology with other vendors’ products, including 3M, Ambra Health, Bayer AI, Lunit, and many others. The event attendees will get access to innovative artificial intelligence technologies and other related products that remove barriers to clinical adoption. 

Chief Technology Officer of Fovia Ai, Kevin Kreeger, said, “We are pleased that the existence of standards such as F.H.I.R., DICOMweb/W.A.D.O., RSNA/ACR CDE’s (including RadElements and RadLex), and SOLE allow our XStream® aiCockpit® A.I. viewer technology to communicate and interact with the various A.I. vendors’ algorithms, A.I. Orchestrator Systems, Reporting Systems, and PACS Archives/Viewers.” 

Read More: NASA Confirms 301 new Exoplanets using Machine Learning Technology

He further added that the demo would exhibit the future of artificial intelligence technologies in the radiology domain, and the company is delighted to work with others to connect various AI products in a real-world clinical scenario. Fovia Ai is a United States-based tech company founded by George Buyanovsky and Kenneth Fineman in 2003. 

The firm specializes in developing imaging SDKs for 2D and 3D products and is a global leader in advanced visualization and zero-footprint SDKs. The company has developed many high-end products, such as High Definition Volume Rendering, XStream H.D.V.R. and F.A.S.T., and RapidPrint. Fovia Ai has nearly 20 years of experience in radiology integrations across multiple platforms, partners, and operating systems.


NASA Confirms 301 new Exoplanets using Machine Learning Technology


The National Aeronautics and Space Administration (NASA) has confirmed 301 new exoplanets found with the help of machine learning. The discovery raises the total exoplanet count to 4,870 planets revolving around a multitude of distant stars. 

The data, collected by the Kepler space telescope, was fed to an artificial intelligence algorithm named ExoMiner. The algorithm analyzed the existing data to pick out the real exoplanets from among the various candidates. 

The algorithm was meticulously designed to mimic the process experts use to manually confirm exoplanets. It uses deep neural networks, which automatically learn a task when fed enough data. 
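As an illustration of that vetting idea (not ExoMiner itself, which is a far larger network working on Kepler light-curve and pixel data), the toy sketch below trains a logistic-regression vetter on made-up diagnostic features; every feature name and number here is invented for the example.

```python
import numpy as np

# Illustrative stand-in for a candidate-vetting classifier: given diagnostic
# features of a transit signal, output whether it is likely a real planet.
rng = np.random.default_rng(42)

def make_candidates(n):
    # Toy features: [transit depth consistency, odd/even depth difference,
    # centroid shift]. Real planets: consistent depth, small differences/shifts.
    planet = rng.normal([1.0, 0.0, 0.0], 0.1, size=(n, 3))
    false_pos = rng.normal([0.5, 0.6, 0.6], 0.2, size=(n, 3))
    X = np.vstack([planet, false_pos])
    y = np.array([1] * n + [0] * n)
    return X, y

def train_logreg(X, y, lr=0.1, epochs=300):
    # Plain gradient descent on the logistic loss.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted planet probability
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

X, y = make_candidates(200)
w, b = train_logreg(X, y)
preds = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(preds == y)
```

The appeal the researchers point to is that decisions made over explicit diagnostic features like these remain inspectable, rather than being a black box.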

Read More: South Korea to build a $330 million Artificial Intelligence Complex

Exoplanet scientist at NASA’s Ames Research Center in California’s Silicon Valley, Jon Jenkins, said, “Unlike other exoplanet-detecting machine learning programs, ExoMiner isn’t a black box – there is no mystery as to why it decides something is a planet or not.” 

He further added that the team can easily inspect the features in the data that led ExoMiner to reject or confirm a candidate as an exoplanet. The newly developed algorithm will considerably help scientists identify new exoplanets, as it speeds up the entire identification process. 

ExoMiner project leader and machine learning manager at the Universities Space Research Association, Hamed Valizadegan, said, “ExoMiner is highly accurate and in some ways more reliable than both existing machine classifiers and the human experts it’s meant to emulate because of the biases that come with human labeling.” 

He also mentioned that the algorithm is highly accurate and generates reliable results. All the newly discovered exoplanets had been identified by the Kepler Science Operations Center but received their validation only after ExoMiner generated its results. 

According to NASA officials, the agency now plans to use ExoMiner in other projects, such as TESS, by transferring what the algorithm has learned.


South Korea to build a $330 million Artificial Intelligence Complex


South Korea plans to build a $330 million national artificial intelligence complex named ‘The National AI Industrial Convergence Complex.’ According to officials, the AI complex will be built over the next three years.

The AI complex will provide office space and a data center to more than 70 startups, helping them develop new, highly scalable artificial intelligence solutions for enterprises. It will be built on a massive 46,200-square-meter site in the southwestern Korean city of Gwangju. 

Read More: Tech Mahindra Partners with Cogniac to develop AI-powered Visual Data solutions

This new development is a step toward accelerating South Korea beyond its status as an information and communications powerhouse and into an artificial intelligence powerhouse in the post-COVID-19 era. 

According to a South Korean minister, the AI complex will open the door to new job opportunities and foster new industries after the COVID-19 pandemic. The complex will mainly focus on developing solutions for the automobile and energy industries. 

Invest Korea mentioned regarding the AI complex that it would “transform the regional industrial structure centred on the manufacturing industry into a future-oriented industrial structure by linking with Gwangju’s flagship industries such as automobile, energy, and healthcare.” 

The national AI industrial convergence complex will help developers create new technologies for sustainable and stable future growth. Beyond this newly launched project, Gwangju houses various other technology companies, including Samsung, LG, and Kia. The AI complex will help the city become a hub of artificial intelligence technologies benefiting various industries across the world.


Laiye Partners with Huawei Cloud to Power Brazil’s Digital Transformation


Intelligent assistant solution developer Laiye is partnering with Huawei Cloud to power Brazil’s digital transformation goals. The strategic partnership will focus on deploying artificial intelligence, big data, and cloud computing solutions throughout Brazil to accelerate the country’s digitization. 

According to the companies, the initiative will soon roll out to other parts of Latin America as well. Both companies acknowledge the exponential growth of artificial intelligence technologies and startup ecosystems around the world. 

With their combined efforts, the companies believe that they can help Brazilian startups and organizations to bring innovations and develop cutting-edge artificial intelligence technologies. 

Read More: Honda to use Artificial Intelligence technology for making roads safer

General Manager of Laiye, Petter Dalen, said, “Latin America is a region rich in opportunity and growth, and working with HUAWEI CLOUD helps us serve this dynamic and fast-growing market and leverage AI, cognitive, and cloud capabilities.” 

He further added that they look forward to working with Huawei Cloud to help tech companies build high-end solutions that benefit organizations across various industries, including retail, education, logistics, healthcare, and many more. 

Many Brazilian startups have started adopting AI solutions to increase their productivity and ease of operations. According to an Accenture report, artificial intelligence offers Brazil its highest potential economic growth, with an estimated contribution of over $400 billion in gross value added by 2035. 

President of Huawei Cloud Brazil, Qin Dan, said, “Together with Laiye, our mission is to deliver value to the industry and society through innovation and help our customers go digital with innovative and reliable products and solutions.” 

He also mentioned that they are incredibly thrilled to join hands with Laiye to serve their customers in Latin America by helping them in their digital transformation. 


Tech Mahindra Partners with Cogniac to develop AI-powered Visual Data solutions


Information technology services and solutions provider Tech Mahindra is partnering with AI visual observation firm Cogniac to develop new artificial intelligence-powered visual data solutions for organizations at a global level. 

The partnership aims to build solutions that simplify data management for companies using machine vision, a technology similar to that used in robotics guidance and related fields. 

The developed product will help increase the productivity of companies operating in various industries, including manufacturing, railway, automotive, logistics, government, and many others. The companies will combine their data analysis, cloud computing, and big data management capabilities to develop solutions that maximize productivity through artificial intelligence and convolutional neural networks. 

Read More: NVIDIA announced a Strategic Investment and Partnership with Kore.ai

The Vice President and Head of Emerging Business at Tech Mahindra, Rahul Bhuman, said, “With an aim to deliver transformative enterprise machine solutions that are highly agile and scalable, we have re-aligned our strategy and delivery model to accelerate the customer’s transformation journey, in sync with NXT.NOW™ framework.” 

With the new solutions, companies across the globe will be able to accelerate the operational use of visual data and analytics that will drastically increase their business productivity. 

Cogniac is a San Francisco-based artificial intelligence startup founded by Amy Wang and Bill Kish in 2015. The company specializes in AI services that let companies extract more information from videos and images using deep learning. 

To date, Cogniac has raised more than $40 million across six funding rounds from investors including National Grid Partners, Autotech Ventures, London Technology Club, Wing Venture Capital, and many more. 

Chief Partnership Officer of Cogniac, Vahan Tchakerian, said, “With Tech Mahindra’s data synthesis capabilities, we will be able to provide necessary and relevant information to customers and enable them to make data-informed business decisions.” 

He further added that often the data exists but is not appropriately tapped. This new partnership will help them bring in innovations that would benefit their customers.


193 countries adopt the first global agreement on the Ethics of AI


The United Nations recently announced that 193 countries across the globe have agreed to adopt the world’s first recommendation for ethics of AI implementation. This development is believed to be a significant step towards regulating the deployment of artificial intelligence solutions for the betterment of humanity. 

In recent years, there have been many instances in which artificial intelligence solutions were accused of violating fundamental human rights, including privacy. A standardized guideline across countries was therefore much needed to keep the deployment of AI technologies in check. 

UNESCO mentioned in a statement, “We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable AI technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues.” 

Read More: Microsoft and Heathrow announces AI model to detect illegal wildlife trafficking

This new development is a step toward achieving a global consensus on the use of artificial intelligence technologies. Another aim of the agreement is to protect citizens’ data by improving transparency, control, and security for personal data. 

UNESCO’s Assistant Director-General for Social and Human Sciences, Gabriela Ramos, said, “Decisions impacting millions of people should be fair, transparent and contestable. These new technologies must help us address the major challenges in our world today, such as increased inequalities and the environmental crisis, and not deepening them.” 

The UN also mentioned that developers must consider the power consumption of AI technologies so that new innovations do not pose any threat to our environment and promote sustainable development goals. UNESCO has urged all the 193 Member countries to regularly report their progress and practices regarding the agreement to concerned authorities. 
