
Landing AI Adds New Edge Capabilities to LandingLens

Landing AI Edge Capabilities

Artificial intelligence and deep learning startup Landing AI has added new edge capabilities to its LandingLens platform with the launch of LandingEdge. 

With LandingEdge, manufacturers will be able to deploy deep learning visual inspection solutions to edge devices on the manufacturing floor, allowing them to detect product flaws more accurately and consistently. 

The company’s flagship product, the LandingLens platform, includes a variety of capabilities to assist teams in developing and deploying reliable and repeatable inspection systems for various jobs in a production setting. 

Read More: Sudalai Rajkumar Becomes Quadruple Grandmaster

LandingLens users will now be able to more easily integrate with industrial infrastructure to interact with cameras, apply models to pictures, and create predictions to support real-time decision-making on the manufacturing floor, thanks to the enhanced edge capabilities. 

Vice President of Outreach Technology and Vision Technology at Landing AI, David L. Dechow, said, “These products mark huge steps in bringing deep learning solutions to the factory floor that are easily integrated to perform an automated inspection for a broad range of applications.” 

He further added that they give manufacturers and systems integrators more powerful tools to swiftly adopt inspection solutions that cut costs, boost productivity, and improve consumer product satisfaction. 

Moreover, LandingLens has also been enhanced to allow deep learning model training up to seven times faster than before. 

Recently, Landing AI also joined the NVIDIA Metropolis Partner program to accelerate AI performance and edge deployment. This will help the company improve its quality control in manufacturing and industrial applications. 

United States-based technology startup Landing AI was founded by a former Chief Scientist at Baidu and founding lead of the Google Brain team, Andrew Ng, in 2017. To date, the startup has raised $57 million from several investors like McRock Capital, Intel Capital, Insight Partners, Samsung Catalyst Fund, and others over two funding rounds. 


Google cancels US-based Dalit activist’s talk on caste equity over employees’ pressure 

Google cancels caste talk

Succumbing to pressure from its employees, Google has canceled a talk by Thenmozhi Soundararajan, a US-based Dalit activist who was scheduled to speak on the topic of caste equity. 

The talk, an initiative of Google’s Diversity, Equity, and Inclusion (DEI) program meant for employee sensitization, was being coordinated by Tanuja Gupta, a senior manager at Google News, who has since quit the company in protest. 

Ahead of the talk, which was scheduled for April 18, the organizers, including Gupta, allegedly received inflammatory mass emails from a group of pro-Hindu employees who accused Soundararajan of being anti-Hindu and claimed that their lives would be at risk if the talk proceeded. 

A report by The Washington Post alleges that the cancellation of the discussion was a direct consequence of pressure from the company’s South Asian employees. 

Read More: Google Bans Deepfake training projects on Colab 

After hearing about the talk’s cancellation, Soundararajan wrote an email to Google CEO Sundar Pichai, expressing alarm at the situation and urging that the issue be addressed in order to achieve an equitable society. 

Thenmozhi Soundararajan, a world-renowned anti-caste campaigner and former president of the Ambedkarites Association of North America (AANA), has led many causes that have drawn global attention to the discriminatory caste system in South Asian culture. Her California-based non-profit, Equality Labs, advocates for the civil rights of Dalits, formerly referred to as “untouchables” in a millennia-old system of social hierarchy. 

Diversity, Equity, and Inclusion (DEI) programs, now standard at most multinational companies, primarily focus on race, gender, and sexual orientation. Dalit activists, however, have been lobbying for years to include caste sensitization in these programs.


Google Cloud releases an Architecture Diagramming tool to go from architecture to implementation quickly

google releases architecture diagramming tool

Google has released a new Google Cloud architecture diagramming tool to help organizations go from architecture to implementation in a few steps. The idea is to guide users with a cloud use case from ‘ideas’ to ‘implementation’ via architecture.

The architecture diagram forms the foundation of the implementation journey. It allows users to share ideas, collaborate with people, and integrate the designs. To design a full-fledged architecture diagram, users need assistance in the form of a ‘reference architecture.’ This reference diagram can be tweaked to fit the purpose. 

Sometimes users know their starting point, but often they do not have a clue. Furthermore, transitioning from the architecture to the actual implementation can be an intimidating process. The Google Cloud architecture diagramming tool was built to address these challenges: it helps translate an architecture into a deployed application with a few clicks.

Read More: Use Apple’s iPhone as a Webcam with the Continuity Camera Feature.

The tool’s interface provides a centralized listing of all Google Cloud products and services, structured by categories such as database and compute. Images and icons are incorporated into the interface so the user can build an architecture with a drag-and-drop feature. The tool is integrated with the Google Cloud Developer Cheat Sheet, allowing you to review the explanations and documentation associated with each component.

Beyond architecture-building assistance, the tool also includes 10+ prebuilt reference architectures to start from. These references cater to common use cases like microservices, websites, compute, ML, and data science.

Once the architecture is ready, its components can be easily deployed in Google Cloud with just one click! 


Singapore Teases Transparent, Explainable, Fair, and Ethical AI with the Announcement of A.I. Verify

Singapore A.I. Verify
Image Source: ST Telemedia Global Data Centre, Singapore

The artificial intelligence (AI) adoption rate has grown exponentially in the past few years. While developed countries like the USA, China, and Japan are at the forefront of adopting the technology, countries like Singapore are not far behind. Most recently, Minister for Communications and Information Josephine Teo announced the pilot of the world’s first AI Governance Testing Framework and Toolkit, A.I. Verify, at the World Economic Forum Annual Meeting in Davos in May this year. A.I. Verify provides a means for companies to measure and demonstrate how safe and reliable their AI products and services are.

The Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), which oversees the country’s Personal Data Protection Act, created the new toolkit to bolster the nation’s commitment to encouraging the ethical use of AI. The development builds on the Model AI Governance Framework of 2020 and the core themes of the 2019 National AI Strategy. Through self-conducted technical testing and process inspections, A.I. Verify aims to improve transparency in the use of AI between organizations and their stakeholders. As the first of its type, A.I. Verify is positioned to help organizations navigate the complex ethical issues that arise when AI technologies and solutions are deployed.

IMDA also notes that A.I. Verify abides by globally established AI ethical standards and norms, including those from Europe and the OECD, and covers critical aspects such as repeatability, robustness, fairness, and social and environmental well-being. Testing and certification regimes covering components like cybersecurity and data governance are also included in the framework.

The new toolkit is now available as a minimum viable product (MVP), which includes ‘basic’ capabilities for early users to test and provide feedback on for product development. It performs technical testing based on three principles, “fairness, explainability, and robustness,” combining widely used open-source libraries into a self-assessment toolbox: SHAP (SHapley Additive exPlanations) for explainability, the Adversarial Robustness Toolbox for adversarial robustness, and AIF360 and Fairlearn for fairness testing. 
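To illustrate the kind of metric such fairness libraries compute, here is a minimal sketch of a demographic parity check in plain Python. The data is invented for illustration, and this is not A.I. Verify's own code, which wraps libraries like Fairlearn and AIF360:

```python
# Hypothetical binary predictions from a classifier, split across
# two demographic groups (illustrative data only).
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Demographic parity difference: the gap between each group's
# positive-prediction rate. Zero means both groups receive
# positive predictions at the same rate.
rates = {}
for g in set(group):
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    rates[g] = sum(preds) / len(preds)

dp_diff = max(rates.values()) - min(rates.values())
print(dp_diff)  # 0.5: group A is flagged positive far more often than B
```

Fairlearn exposes the same idea directly as `fairlearn.metrics.demographic_parity_difference`; a toolkit like A.I. Verify reports such numbers alongside explainability and robustness results rather than judging what threshold counts as "fair."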

The MVP Testing Framework tackles five pillars of concern for AI systems, which encompass 11 widely recognized AI ethical principles: transparency; explainability; repeatability or reproducibility; safety; security; robustness; fairness; data governance; accountability; human agency and oversight; and inclusive growth with social and environmental well-being.

The five pillars are as follows:

  • transparency in the usage of AI and its systems; 
  • knowing how an AI model makes a decision; 
  • guaranteeing AI system safety and resilience; 
  • ensuring fairness and no inadvertent discrimination by AI; 
  • and providing adequate management and monitoring of AI systems.

Organizations can participate in the MVP piloting program, gaining early access to the MVP and using it to self-test their AI systems and models. The pilot will also inform the development of international standards and help shape an internationally applicable MVP that reflects industry needs.

Finally, A.I. Verify intends to analyze deployment transparency, support organizations in AI-related ventures, evaluate goods or services to be offered to the public, and help prospective AI investors understand AI’s advantages, risks, and limits.

The pilot toolkit also generates reports for developers, managers, and business partners, covering essential areas that influence AI performance and putting the AI model to the test. It is packaged as a Docker container, so it can be quickly installed in the user’s environment. The toolkit currently supports binary classification and regression algorithms from popular frameworks such as scikit-learn, TensorFlow, and XGBoost.

The test framework and tools, according to IMDA, will allow AI system developers to undertake self-testing not only to ensure that the product meets market criteria but also to provide a common platform for displaying the test results. Overall, A.I. Verify aims to validate claims made by AI system developers about their AI use and the performance of their AI products, rather than to define ethical norms.

Read More: UNESCO unveils First Global Agreement On Ethics Of Artificial Intelligence: What Next?

However, the innovation has its limits. According to IMDA, the toolkit cannot ensure that the AI system under examination is free of biases or security issues. Furthermore, the MVP is unable to specify ethical criteria and can only verify statements made by AI system creators or owners about the AI systems’ methodology, usage, and verified performance.

Because of such constraints, it is difficult to say how A.I. Verify will aid stakeholders and industry participants in the long term. For now, it is unclear how developers will ensure that the information fed into the toolkit before self-assessment is accurate and not based on speculation. This is a significant technological challenge that A.I. Verify will have to overcome.

Singapore intends to cooperate with AI system owners and developers worldwide to collect and produce industry benchmarks for the creation of global AI governance standards. The country participates in ISO/IEC JTC 1/SC 42 on AI to promote interoperability of AI governance frameworks and the creation of international standards on AI, and has, for instance, collaborated with the US Department of Commerce and other like-minded governments and partners to ensure interoperability between their AI governance frameworks.

According to IMDA, several organizations have already tested and provided comments on the new toolset, including Google, Meta, Microsoft, Singapore Airlines, and Standard Chartered Bank. With industry input and comments, more functionalities will be gradually introduced.

Singapore hopes to strengthen its position as a leading digital economy and AI-empowered nation by introducing the toolkit as it continues investing in and developing AI capabilities. While Singapore aspires to be a leader in creating and implementing scalable and impactful AI solutions by 2030, it is evident that the country places substantial value on encouraging ethical AI practices.


Sudalai Rajkumar Becomes Quadruple Grandmaster

Sudalai Rajkumar Quadruple Grandmaster

Sudalai Rajkumar, head of artificial intelligence and machine learning at Growfin, has become a Kaggle Quadruple Grandmaster. Rajkumar revealed the news in a post on his LinkedIn account. 

Kaggle, a Google subsidiary, is an online community of data scientists and machine learning experts. 

Users can use Kaggle to search and publish datasets, study, and construct models in a web-based data science environment, collaborate with other data scientists and ML experts, and compete in data science contests. 

Read More: Use Apple’s iPhone as a Webcam with the Continuity Camera Feature

With this new development, Sudalai Rajkumar becomes the third Indian to receive the title of Kaggle Quadruple Grandmaster. 

Apart from Rajkumar, Abhishek Thakur, Chris Deotte, Bojan Tunguz, and Rohan Rao had earlier achieved this title. Rajkumar has been working with Growfin.ai since 2021 and currently serves as the company’s head of AI & ML. 

In his LinkedIn post, he mentioned, “Elated to join the elite group of Quadruple Kaggle Grandmasters. Thank you all for the love and support.”

Rajkumar has a Bachelor of Engineering degree from PSG College of Technology and a degree in Business Analytics and Intelligence from the Indian Institute of Management Bangalore. Before joining Growfin.ai, Rajkumar had worked with companies like H2O.ai, Argoid, and GeoIQ.io.


Insilico Medicine Raises $60 Million in Series D Funding Round

Insilico Medicine Funding round

Insilico Medicine, an AI platform for drug development, has raised $60 million in its recently held Series D funding round. Several global investors, such as US West Coast, BHR Partners, Warburg Pincus, BOLD Capital Partners, B Capital Group, Qiming Venture Partners, and Pavilion Capital, participated in the round. 

According to Insilico Medicine, it intends to use the new capital to accelerate the development of its advanced pipeline, including its lead product, presently in Phase I testing, along with the continued development of its Pharma.AI platform. 

Founder and CEO of Insilico Medicine, Alex Zhavoronkov, said, “It is a testament to the strength of our end-to-end AI platform, which has been validated by many partners, and produced our first novel antifibrotic program discovered using AI and aging research and designed using our generative AI chemistry engine.” 

Read More: Adani Group buys 50% stake in agriculture drone startup General Aeronautics

He further added that this groundbreaking program has moved on to Phase I clinical trials after completing a first-in-human Phase 0 investigation in healthy volunteers. 

Moreover, Insilico Medicine has also nominated seven preclinical candidates across several other disease indications since 2021. 

The funds will also be used to support Insilico’s continued worldwide expansion and strategic projects. The projects include a fully automated, AI-driven robotic drug discovery laboratory and a completely robotic biological data factory to supplement the company’s significant, curated data assets. 

Hong Kong-based AI drug development platform Insilico Medicine was founded by Alex Zhavoronkov in 2014. The firm specializes in providing a platform for drug development to treat cancer and age-related diseases. To date, Insilico Medicine has raised more than $366 million from multiple investors over eleven funding rounds. 

“For Insilico, 2022 is a year of incredible growth and progress. They have demonstrated the value of combining deep scientific expertise with cutting-edge technology capabilities to significantly accelerate drug discovery,” said Head of China Healthcare at Warburg Pincus, Min Fang. 


Use Apple’s iPhone as a Webcam with the Continuity Camera Feature

iPhone as a Webcam

Apple WWDC 2022: Apple has announced an update that allows users to use an iPhone as a webcam in macOS. The new Continuity Camera feature will arrive as part of macOS Ventura. Apple anticipates MacBook customers mounting an iPhone on top of their Macs and using its camera to improve video calls in FaceTime, Webex, Microsoft Teams, and other similar applications.

It’s a clever innovation that leverages Apple’s ecosystem and allows Mac users to hold higher-quality video calls without purchasing a separate 4K webcam. Apple demonstrated an iPhone 13 Pro mounted on a 13″ MacBook Pro using a mount designed by Belkin.

Apple says it is collaborating with Belkin on mounts, available later this year, that make it convenient to position an iPhone over a MacBook screen. The feature will not require new hardware: existing phones will become compatible via software updates when Continuity Camera ships later this year.

Read More: Google to Release a Product Map to find offerings similar to Google Cloud Platform on AWS and Azure.

The Continuity Camera feature: Continuity Camera converts the signal from the iPhone’s rear camera into a webcam usable in macOS apps. Alongside it, you can use features like Center Stage, the new Studio Light, Portrait mode, and Desk View, which utilizes the ultra-wide-angle camera. 

macOS Ventura also introduces the Stage Manager feature, which automatically organizes the apps and windows in use, letting the user see everything in a single view. Other updates include FaceTime Handoff support for a seamless transition from an iOS device to a macOS device. Safari also gets Shared Tab Groups to synchronize sites with colleagues or family members, and Spotlight receives a UI makeover, among other basic upgrades.


Adani Group buys 50% stake in agriculture drone startup General Aeronautics

Adani invests in General Aeronautics

Adani Defence Systems and Technologies, a subsidiary of Adani Enterprises, signed an agreement on Friday to acquire a 50% stake in General Aeronautics, a Bangalore-based agriculture drone startup. The acquisition is expected to be completed by July 31. 

In a regulatory filing, Adani explained that Adani Defence Systems and Technologies plans to scale up its artificial intelligence/machine learning capabilities and expertise in military drones to provide end-to-end solutions to problems facing the domestic agriculture sector. 

General Aeronautics provides commercial robotic drone-based solutions for crop protection, crop health, precision farming, and yield monitoring using artificial intelligence and analytics. 

Read More: India Post successfully pilots first-ever drone mail delivery with TechEagle Innovations

According to General Aeronautics, drone-based technologies can offer potential cost-efficient and optimal solutions to multi-faceted problem areas, including challenges related to food scarcity and health crises. 

General Aeronautics’ advanced crop protection platform for sustainable precision agriculture provides a comprehensive crop protection solution that integrates optimum spray drone technology with a purpose-built execution platform. 

The drone-based market in India is expected to grow exponentially to Rs 30,000 crore by fiscal year 2026, mainly because of the evolving policy framework, PLI incentives, and the current ban on drone imports.


Mayflower, an AI-powered ship, crosses the Atlantic Ocean

Mayflower AI robot

The Mayflower, an artificial intelligence (AI)-powered self-sailing ship developed with IBM, has arrived in North America after crossing the Atlantic Ocean. 

The autonomous ship from England began its journey on April 27, 2022. The highlight of this mission was that the ship was able to cross the ocean without the support of any onboard human crew. 

According to officials, the one-of-a-kind autonomous ship has been integrated with IBM’s AI Captain, the operation’s digital brain, enabling the Mayflower to navigate its way across the ocean. 

Read More: Maharashtra MSRTC Buses to get AI-powered Driver Monitoring System

The Mayflower Autonomous Ship (MAS) traveled 4,400 kilometers from Plymouth, England, to Halifax, Nova Scotia, Canada. The Mayflower carries vision sensors, infrared cameras, and a navigation system that lets it fall back on dead reckoning if satellite navigation fails. 
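Dead reckoning advances the last known position using only heading, speed, and elapsed time. A minimal sketch of the idea, using a rough ~111 km-per-degree-of-latitude approximation with invented numbers, and not the Mayflower's actual navigation code:

```python
import math

def dead_reckon(lat, lon, heading_deg, speed_kmh, hours):
    """Estimate a new (lat, lon) from the last fix, given heading and speed."""
    dist = speed_kmh * hours  # distance travelled in km
    # ~111 km per degree of latitude; longitude degrees shrink with cos(latitude)
    dlat = dist * math.cos(math.radians(heading_deg)) / 111.0
    dlon = dist * math.sin(math.radians(heading_deg)) / (111.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Hypothetical example: 6 hours heading due east (090°) at 15 km/h
# from a last fix at 50°N, 30°W.
lat, lon = dead_reckon(50.0, -30.0, 90.0, 15.0, 6.0)
```

In practice, errors in heading and speed accumulate over time, which is why dead reckoning serves only as a fallback between satellite fixes rather than a primary navigation method.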

The project is led by ProMare, a non-profit committed to marine research, with IBM serving as the project’s primary technological and scientific partner. 

“The journey she made across was arduous and has taught us a great deal about designing, building, and operating a ship of this nature and the future of the maritime enterprise,” said project director Brett Phaneuf. 

Officials mentioned that the AI-powered autonomous ship was designed to demonstrate the advancement of technology throughout the centuries since our ancestors set sail for the New World. 

During the announcement of the commencement of this project, Vice President for Defense and Intelligence at IBM Federal, Ray Spicer, said, “We’ll just watch it with pride as it sails along and makes its own decisions based on how well we trained it. And then it appears in Plymouth, Massachusetts, at the end of the journey.” 


Tesla’s humanoid robot Optimus likely to be completed by September

Tesla's Humanoid Robot Optimus

In a tweet, Elon Musk, CEO of Tesla Inc., said that Tesla might have a prototype version of its humanoid robot, named ‘Optimus,’ up and ready in the coming months. The announcement appears to be Musk’s explanation for postponing the second Tesla AI Day from April 30 to September 30.

According to Musk, Optimus is a robot the same size as an average human, which can perform mundane yet essential everyday tasks such as cleaning, grocery shopping, etc., thus making physical work a choice. 

Also known as the Tesla Bot, the concept of this humanoid robot was first introduced by Musk at Tesla’s first AI Day back in August 2021. 

Read More: India Post successfully pilots first-ever drone mail delivery with TechEagle Innovations

In a presentation at the event, Tesla explained that the robot would be operated by artificial intelligence systems similar to those in Tesla’s electric vehicles, which are currently under development. Optimus will be about 173 cm (5 ft 8 in) tall, weigh about 57 kg, and be able to carry up to 20 kg. 

Musk said that the humanoid robot could one day be more significant than Tesla’s vehicle business. Replies to Musk’s announcement show that enthusiasts are eagerly awaiting the unveiling of Optimus at Tesla’s second AI Day. 
