
Insilico Medicine Raises $60 Million in Series D Funding Round


Drug discovery AI company Insilico Medicine has raised $60 million in its Series D funding round. Several global investors, including US West Coast funds as well as BHR Partners, Warburg Pincus, BOLD Capital Partners, B Capital Group, Qiming Venture Partners, and Pavilion Capital, participated in the round.

According to Insilico Medicine, the new capital will be used to accelerate the development of its pipeline, including its lead program, presently in Phase I trials, and to continue developing its Pharma.AI platform.

Founder and CEO of Insilico Medicine, Alex Zhavoronkov, said, “It is a testament to the strength of our end-to-end AI platform, which has been validated by many partners, and produced our first novel antifibrotic program discovered using AI and aging research and designed using our generative AI chemistry engine.” 

Read More: Adani Group buys 50% stake in agriculture drone startup General Aeronautics

He further added that this groundbreaking program has moved on to Phase I clinical trials after completing a first-in-human Phase 0 investigation in healthy volunteers. 

Insilico Medicine has also nominated seven preclinical candidates across several other disease indications since 2021.

The funds will also support Insilico’s continued worldwide expansion and strategic projects, including a fully automated, AI-driven robotic drug discovery laboratory and a completely robotic biological data factory to supplement the company’s extensive, curated data assets.

Hong Kong-based AI drug development platform Insilico Medicine was founded by Alex Zhavoronkov in 2014. The firm specializes in providing a platform for drug development to treat cancer and age-related diseases. To date, Insilico Medicine has raised more than $366 million from multiple investors over eleven funding rounds. 

“For Insilico, 2022 is a year of incredible growth and progress. They have demonstrated the value of combining deep scientific expertise with cutting-edge technology capabilities to significantly accelerate drug discovery,” said Head of China Healthcare at Warburg Pincus, Min Fang. 


Use Apple’s iPhone as a Webcam with the Continuity Camera Feature


Apple WWDC 2022: Apple has announced an update that lets users turn an iPhone into a webcam for a Mac. The new Continuity Camera feature will arrive as part of macOS Ventura. Apple expects MacBook users to mount an iPhone on top of their Macs and use its camera to improve video calls in FaceTime, WebEx, Microsoft Teams, and similar applications.

It’s a clever innovation that leverages Apple’s ecosystem, letting Mac users make higher-quality video calls without purchasing a separate 4K webcam. Apple demonstrated an iPhone 13 Pro mounted on a 13-inch MacBook Pro using a mount designed by Belkin.

Apple says it is collaborating with Belkin on mounts, available later this year, that make it convenient to position an iPhone above a MacBook screen. The feature will not require new hardware: existing iPhones will be made compatible through software updates when Continuity Camera ships later this year.

Read More: Google to Release a Product Map to find offerings similar to Google Cloud Platform on AWS and Azure.

Continuity Camera converts the signal from the iPhone’s rear camera into a webcam feed usable in macOS apps. Alongside it, you can use features such as Center Stage, the new Studio Light, Portrait mode, and Desk View, which make use of the iPhone’s ultra-wide camera.

macOS Ventura also introduces Stage Manager, which automatically organizes open apps and windows so the user can see everything in a single view. Other updates include FaceTime Handoff for seamlessly moving calls from an iOS device to a Mac, and enhanced Shared Tab Groups in Safari for synchronizing sites with colleagues or family members. Spotlight also gets a UI makeover, among other refinements.


Adani Group buys 50% stake in agriculture drone startup General Aeronautics


Adani Defence Systems and Technologies, a subsidiary of Adani Enterprises, signed an agreement on Friday to acquire a 50% stake in General Aeronautics, a Bangalore-based agriculture drone startup. The acquisition is expected to be completed by July 31st.

In a regulatory filing, Adani explained that Adani Defence Systems and Technologies plans to apply its artificial intelligence/machine learning capabilities and its expertise in military drones to provide end-to-end solutions to problems facing the domestic agriculture sector.

General Aeronautics provides commercial robotic drone-based solutions for crop protection, crop health, precision farming, and yield monitoring using artificial intelligence and analytics. 

Read More: India Post successfully pilots first-ever drone mail delivery with TechEagle Innovations

According to General Aeronautics, drone-based technologies can offer potential cost-efficient and optimal solutions to multi-faceted problem areas, including challenges related to food scarcity and health crises. 

General Aeronautics’ advanced crop protection platform for sustainable precision agriculture provides a comprehensive crop protection solution that integrates optimum spray drone technology with a purpose-built execution platform. 

The drone market in India is expected to grow exponentially to Rs. 30,000 crore by fiscal year 2026, mainly because of the evolving policy framework, PLI incentives, and the current ban on drone imports.


Mayflower, an AI-powered ship, crosses Atlantic Ocean


The Mayflower, an artificial intelligence (AI)-powered self-sailing ship built with IBM technology, has arrived in North America after crossing the Atlantic Ocean.

The autonomous ship began its journey from England on April 27, 2022. The highlight of the mission is that the ship crossed the ocean without any onboard human crew.

According to officials, the one-of-a-kind autonomous ship has been integrated with IBM’s AI Captain, the operation’s digital brain, enabling the Mayflower to navigate its way across the ocean. 

Read More: Maharashtra MSRTC Buses to get AI-powered Driver Monitoring System

The Mayflower Autonomous Ship (MAS) traveled 4,400 kilometers from Plymouth, England, to Halifax, Nova Scotia, Canada. It carries vision sensors, infrared cameras, and a navigation system that can fall back on dead reckoning if satellite positioning fails.
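
Dead reckoning estimates the ship’s current position from its last known fix, heading, and speed. The following is a minimal illustrative sketch, not the Mayflower’s actual navigation code, using a flat-earth approximation and ignoring currents and drift:

```python
import math

def dead_reckon(lat, lon, speed_knots, heading_deg, hours):
    # Advance the last known fix by speed x time along the heading.
    # 1 degree of latitude is ~60 nautical miles; longitude degrees
    # shrink by cos(latitude). Real systems also correct for current,
    # leeway, and the Earth's ellipsoid.
    distance_nm = speed_knots * hours
    heading = math.radians(heading_deg)
    dlat = distance_nm * math.cos(heading) / 60
    dlon = distance_nm * math.sin(heading) / (60 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Hypothetical example: 12 hours at 6 knots on a westerly heading
# from a fix off Plymouth, England.
print(dead_reckon(50.37, -4.14, speed_knots=6, heading_deg=270, hours=12))
```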

The project is led by ProMare, a non-profit committed to marine research, with IBM serving as the project’s primary technological and scientific partner. 

“The journey she made across was arduous and has taught us a great deal about designing, building, and operating a ship of this nature and the future of the maritime enterprise,” said project director Brett Phaneuf.

Officials mentioned that the AI-powered autonomous ship was designed to demonstrate the advancement of technology throughout the centuries since our ancestors set sail for the New World. 

During the announcement of the commencement of this project, Vice President for Defense and Intelligence at IBM Federal, Ray Spicer, said, “We’ll just watch it with pride as it sails along and makes its own decisions based on how well we trained it. And then it appears in Plymouth, Massachusetts, at the end of the journey.” 


Tesla’s humanoid robot Optimus likely to be completed by September


In a tweet, Elon Musk, CEO of Tesla Inc., said that Tesla might have a prototype version of its humanoid robot, named ‘Optimus’, up and ready in the coming months. The announcement appears to be Musk’s explanation for postponing Tesla’s second AI Day from April 30th to September 30th.

According to Musk, Optimus is a robot the size of an average human that can perform mundane yet essential everyday tasks such as cleaning and grocery shopping, ultimately making physical work a choice.

Also known as the Tesla Bot, the humanoid robot concept was first introduced by Musk at Tesla’s first AI Day in August 2021.

Read More: India Post successfully pilots first-ever drone mail delivery with TechEagle Innovations

In a presentation at that event, Tesla explained that the robot would be operated by artificial intelligence systems similar to those under development for Tesla’s electric vehicles. Optimus will be about 173 cm (5 ft 8 in) tall, weigh about 57 kg, and be able to carry up to 20 kg.

Musk said that the humanoid robot could one day be more significant than Tesla’s vehicle business. The engagement on Musk’s announcement shows that enthusiasts are awaiting the unveiling of Optimus at Tesla’s second AI Day with bated breath.


Maharashtra MSRTC Buses to get AI-powered Driver Monitoring System


Maharashtra State Road Transport Corporation (MSRTC) buses will get an artificial intelligence (AI)-powered driver monitoring system in the coming months.

The AI-powered system will be integrated with CCTV cameras that will not only monitor drivers but also issue voice prompts when the system detects that a driver is distracted.

On June 1st, the state transport corporation launched its first intercity e-bus service between Pune and Ahmednagar using an e-bus dubbed “Shivai.” 

Read More: India Post successfully pilots first-ever drone mail delivery with TechEagle Innovations

This new development is part of the government’s plan to encourage safe driving and minimize road accidents caused by human error.

The Times of India reported that Shekhar Channe, managing director of MSRTC, said the new technology would be installed in all new buses purchased by the state road transport corporation.

According to Channe, existing vehicles might also receive the upgrade, but this has not yet been confirmed.

Channe said, “The Shivai e-buses have the voice system. If a driver is inattentive, speaks to fellow passengers, or talks on cellphone while driving, a voice would immediately alert him to be careful and focus on driving.” He further added that passengers will hear the voice and can advise the driver in question to concentrate on driving.

Moreover, the CCTV system on board has a recording feature that allows authorities to check footage in the event of an accident. 

“We understand that monitoring all buses at all times is not possible. But the new systems will act as a deterrent, and drivers will be more careful,” said an MSRTC spokesperson. 


Google to Release a Product Map to find similar GCP offerings on AWS and Azure


Google has released a product map that helps you identify the names of equivalent cloud services across different cloud platforms. Given the diversity of cloud vendors, the product map can come in handy for finding similar GCP offerings on AWS and Azure.

Many cloud users compare offerings from various providers to find the “best” one and streamline their workflows. However, people often struggle to find the exact counterpart of a service on a different cloud platform.

Read More: A Novelist to Co-Write Your Next Cringe-Read with AI

Google’s stated mission is to organize the world’s information and make it broadly accessible and useful. In that spirit, it has just released a simple product map highlighting similar products across Google Cloud Platform, AWS, and Azure.

The cloud services can be filtered by product name or other standard attributes. Based on this information, one can compare a product across vendors and select the “best” fit.

The only suggestion is to dig a little deeper when something catches your eye, because some of Google’s services are listed without any equivalent from Azure or AWS. Features such as Anthos Clusters, Network Intelligence Center, and VPC Service Controls are not mapped to any AWS or Azure service.

This does not necessarily mean such services are unique to Google, but it does indicate differences between the Google, AWS, and Azure portfolios. Either way, the product map lets you navigate the offerings and their differences with ease.


How Google’s GraphWorld Solves Bottlenecks in Graph Neural Network Benchmarking


Graph neural networks (GNNs) are a family of deep learning models that operate on graphs: data structures made of vertices (also called nodes) connected by edges. Edges can be directed or undirected, depending on whether the relationships between vertices have a direction. GNNs can be seen as a generalization of convolutional neural networks (CNNs): CNNs cannot handle graph data directly, because graph nodes have no inherent ordering and the dependency information between two nodes is carried by the edges.

Neural networks are critical to machine learning because they can capture data patterns where conventional model training fails. However, most of this technology caters to Euclidean data, i.e., data that lives in one- or two-dimensional grid-like domains, such as audio, images, and text.

Graph neural networks differ from traditional machine learning models in that they analyze graph-structured data rather than Euclidean data with a grid-like structure. Graph-structured data is a prime example of non-Euclidean data and can also live in three-dimensional domains; examples range from complex molecular structures to traffic networks. If this kind of data is forced into a Euclidean framework, valuable information can be lost: a traditional neural network does not usually account for the features of each vertex or edge. Machine learning algorithms that expect input as grid-like or rectangular arrays are limited in the analysis they can perform on graphs, because nodes and edges in non-Euclidean space cannot be represented with fixed coordinates. Moreover, converting a graph into an adjacency matrix is not canonical: the same graph can yield many different adjacency matrices depending on how its nodes are ordered.
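
To make the contrast concrete, below is a minimal sketch of one graph-convolution (message-passing) step in the style of Kipf and Welling’s GCN, operating directly on an adjacency matrix; the 4-node graph, feature sizes, and weights are invented purely for illustration:

```python
import numpy as np

# One GCN-style message-passing step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W),
# following Kipf & Welling (2017).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency matrix, undirected
A_hat = A + np.eye(4)                       # self-loops keep each node's own features
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization

H = np.random.randn(4, 8)                   # node features: 4 nodes x 8 features
W = np.random.randn(8, 4)                   # learnable projection: 8 -> 4

H_next = np.maximum(A_norm @ H @ W, 0)      # aggregate neighbors, transform, ReLU
print(H_next.shape)                         # (4, 4): one updated embedding per node
```

Each node’s new embedding is a transformed average of its own and its neighbors’ features, which is exactly the edge-dependent information a grid-based CNN cannot express.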

Since most objects can be represented as graphs, graph neural networks offer a wide range of possible applications for non-Euclidean data.

Thousands of GNN variants have been created annually due to a spike in interest in GNNs over the past few years. Methods and datasets for testing GNNs, on the other hand, have received significantly less attention. Many GNN papers use the same 5-10 benchmark datasets, mostly made up of easily labeled academic citation networks and molecular datasets. This means that the empirical performance of new GNN variants can only be claimed for a small set of graphs. Complicating the situation, recently published studies with rigorous experimental designs cast doubt on the performance rankings of prominent GNN models reported in formative publications.

As noted above, GNN benchmark datasets are re-used across publications, as in many machine learning subfields, so that incremental advances of new designs can be quantified consistently. However, as seen in NLP and computer vision, this can easily lead to novel architectures overfitting the benchmark datasets over time. The effect is amplified if the primary collection of benchmark graphs shares similar structural and statistical properties.

To address these bottlenecks, Stanford unveiled the Open Graph Benchmark (OGB), an open-source suite for assessing GNNs on a handful of large-scale graph datasets across a range of tasks, enabling a more uniform GNN experimental design. However, like existing datasets, OGB is sourced from many of the same domains, so it does not solve the dataset-variety problem described above.

The Open Graph Benchmark raised the number of nodes in experiment-friendly benchmark citation graphs by more than 1,000 times. From one point of view, this is entirely natural as computational capabilities improve and graph-based learning problems become more data-rich. However, while the availability of enormous graphs is critical for evaluating GNN software, platforms, and model complexity, giant graphs are not required to verify GNN accuracy or scientific relevance. Standardized graph datasets for assessing GNN expressiveness become less accessible to the typical researcher as the field’s benchmark graphs grow in size.

Furthermore, without access to institution-scale compute resources, investigating GNN hyperparameter tuning approaches or training variance is almost impossible on big benchmark datasets.

In “GraphWorld: Fake Graphs Bring Real Insights for GNNs,” Google proposes a framework for measuring the performance of GNN architectures on millions of synthetic benchmark datasets, to match the volume and pace of GNN development. Google recommends GraphWorld as a complementary GNN benchmark that lets researchers investigate GNN performance in regions of graph space not covered by popular academic datasets. Google’s view is that while “GNN benchmark datasets featured in the academic literature are just individual locations on a fully-diverse world of potential graphs, GraphWorld directly generates this world using probability models, tests GNN models at every location on it, and extracts generalizable insights from the results.”

To motivate GraphWorld, the researchers compare Open Graph Benchmark graphs to a much larger collection (5,000+) of graphs from the Network Repository. While most Network Repository graphs are unlabeled and therefore cannot be used in standard GNN experiments, they are representative of many graphs that exist in the real world. For both the OGB and Network Repository graphs, the team computed the clustering coefficient (how interconnected nodes are with their neighbors) and the degree distribution’s Gini coefficient (the inequality among nodes’ connection counts). They found that the OGB datasets occupy a small and sparsely populated region of this metric space.
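
Both metrics are straightforward to compute. Here is a small sketch using networkx; the generated graph is illustrative, and any nx.Graph loaded from a benchmark could be measured the same way:

```python
import networkx as nx
import numpy as np

def degree_gini(graph):
    # Gini coefficient of the degree distribution: 0 when every node has the
    # same degree, approaching 1 when a few hubs hold most of the edges.
    degrees = np.sort(np.array([d for _, d in graph.degree()], dtype=float))
    n = len(degrees)
    ranks = np.arange(1, n + 1)
    return (2 * np.sum(ranks * degrees)) / (n * np.sum(degrees)) - (n + 1) / n

graph = nx.barabasi_albert_graph(n=1000, m=3, seed=0)  # stand-in benchmark graph
print("average clustering coefficient:", nx.average_clustering(graph))
print("degree Gini coefficient:", degree_gini(graph))
```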

When using GraphWorld to explore GNN performance on a given task, the researcher first selects a parameterized generator (for node classification, for example, a stochastic block model) that can produce graph datasets for stress-testing GNN models on that task. A generator parameter is an input that influences high-level properties of the output dataset. GraphWorld uses parameterized generators to build populations of graph datasets that are sufficiently varied to put state-of-the-art GNN models to the test. Using parallel computing (e.g., Google Cloud Platform Dataflow), GraphWorld samples generator parameter values to create a stream of GNN benchmark datasets. On each dataset, it simultaneously evaluates an arbitrary set of GNN models chosen by the user (e.g., GCN, GAT, GraphSAGE), and then produces a large tabular dataset joining graph properties with GNN performance results.
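
The paragraph above describes a sample-generate-evaluate loop. The sketch below is a hypothetical miniature of that loop, not GraphWorld’s actual API: it samples stochastic-block-model-style parameters, synthesizes a graph with planted communities via networkx, and records one row of graph properties plus placeholder model scores per dataset:

```python
import random
import networkx as nx

MODELS = ["GCN", "GAT", "GraphSAGE"]  # stand-ins; a real run trains actual GNNs

def sample_generator_params(rng):
    # Generator parameters control high-level properties of the output graph.
    return {
        "n_communities": rng.randint(2, 8),
        "community_size": rng.randint(50, 250),
        "p_in": rng.uniform(0.05, 0.5),     # within-community edge probability
        "p_out": rng.uniform(0.001, 0.05),  # between-community edge probability
    }

def make_dataset(p):
    # Stochastic-block-model-style graph with planted communities as labels.
    sizes = [p["community_size"]] * p["n_communities"]
    return nx.random_partition_graph(sizes, p["p_in"], p["p_out"])

def evaluate(model_name, graph, rng):
    return rng.random()  # placeholder for a real GNN's test accuracy

rng = random.Random(0)
results = []
for _ in range(100):  # GraphWorld samples millions of datasets, in parallel
    params = sample_generator_params(rng)
    graph = make_dataset(params)
    row = {**params, "avg_clustering": nx.average_clustering(graph)}
    for name in MODELS:
        row[f"{name}_score"] = evaluate(name, graph, rng)
    results.append(row)

print(len(results), "rows of (graph properties, model scores)")
```

GraphWorld runs this kind of loop over millions of parameter samples in parallel on Dataflow, training real GNNs where the placeholder evaluation sits.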

Read More: Google Introduces 540 billion parameters PaLM model to push Limits of Large language Models

In their paper, the Google researchers outline GraphWorld pipelines for node classification, link prediction, and graph classification, each with its own dataset generator. They observed that each pipeline required less time and compute than state-of-the-art experimentation on OGB graphs, implying that GraphWorld is affordable for researchers on a tight budget.

GraphWorld, according to Google, is cost-effective: it can execute hundreds of thousands of GNN experiments on synthetic data for less than the cost of a single experiment on a large OGB dataset.


New AI-powered Technology to help Bikers with a smooth ride


Students from the KL (Deemed to be University) Hyderabad campus have come up with a new artificial intelligence (AI)-powered technology that can help bikers ride more smoothly.

The new AI-based tech can detect obstacles in the rider’s path and alert the biker, helping provide a better riding experience.

Cherutanuri Sai Santosh Reddy, Cherukuri Shravan Sairam, Boddu Avinash, and Marri Akhil Reddy, currently second-year CSE students, developed the system.

Read More: Devo raises $100 million in Funding round led by Eurazeo

Santosh said, “I decided to build this device after seeing my cousin struggling to ride on a road full of potholes.” 

According to the developers, the system will be useful not only for beginners but also for experienced riders. The device includes a 4K-resolution night-vision sensor, an alerting system, Bluetooth speakers, and LED lighting.

The students spent roughly Rs 12,000 to build the device; they claim production can be scaled up if it is manufactured in large quantities.

The AI device is fitted near the bike’s speedometer and alerts the rider whenever it detects an obstacle within a ten-meter range.
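
The article does not publish the device’s software, but the alerting behavior amounts to a simple threshold loop over the sensor’s distance readings. A hypothetical sketch, with the sensor read simulated:

```python
import random
import time

ALERT_DISTANCE_M = 10  # the article says riders are alerted within ten meters

def read_obstacle_distance():
    # Stand-in for the night-vision sensor: distance (m) to the nearest
    # detected obstacle, or None when the road ahead is clear.
    return random.choice([None, round(random.uniform(1, 30), 1)])

for _ in range(50):  # the real device would run this loop continuously
    distance = read_obstacle_distance()
    if distance is not None and distance <= ALERT_DISTANCE_M:
        print(f"Obstacle {distance} m ahead")  # real device: Bluetooth speaker alert
    time.sleep(0.1)
```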

According to the developers, they intend to launch the device on campus before partnering with bike manufacturers. The device has been tested on campus by the student team and has been shown to work with both geared and non-geared two-wheelers.

Shravan said, “Right now, the accuracy is about 67%. While the device is accurately detecting rocks and sand, the accuracy while detecting pits is not 100%. Once we fix it, we will introduce it on the campus.”


India Post successfully pilots first-ever drone mail delivery with TechEagle Innovations


In what could be a radical shift in traditional postal delivery systems, India Post has successfully completed its first drone mail delivery pilot with Gurugram-based startup TechEagle Innovations. TechEagle is India’s first and largest manufacturer of high-speed, long-range, heavy-payload drone logistics systems.

As part of the pilot, a parcel was delivered from Bhuj to Bhachau taluka in Kutch, Gujarat, covering a distance of approximately 46 km in about 25 minutes, roughly 5x faster than surface transportation.
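
A quick sanity check of those figures (the distance and flight time are from the article; the surface-transport time is only implied by the 5x claim):

```python
distance_km = 46
flight_min = 25

avg_speed_kmh = distance_km / (flight_min / 60)
print(f"average flight speed: ~{avg_speed_kmh:.0f} km/h")  # ~110 km/h

surface_min = flight_min * 5  # implied by the "5x faster" claim
print(f"implied surface-transport time: ~{surface_min} minutes")
```

The ~110 km/h average is consistent with the drone’s 120 km/hr top speed mentioned below.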

TechEagle Innovations tweeted, “It was the longest drone delivery in a single flight, and that too in a harsh weather environment, with a wind speed of more than 31 km/hr.”

Read More: Vodafone Idea launches AI-powered AdTech platform Vi Ads

The delivery was carried out using TechEagle’s hybrid-electric vertical take-off and landing (VTOL) drone, christened ‘VertiplaneX3’, which can take off and land vertically, like a helicopter, from a compact 5 m × 5 m area. The VertiplaneX3 can carry a substantial 3 kg payload over a distance of more than 100 km at a maximum speed of 120 km/hr, making it the fastest eVTOL drone made in India.

Equipped with multiple fail-safe options and a rugged, water- and dust-proof design, the eVTOL can withstand winds of up to 45 km/hr and extreme temperatures from -15°C to 30°C. These features make it suitable for medical, military, maritime, hyperlocal, and e-commerce deliveries.

Co-founder and COO of TechEagle Innovations, Anshu Abhishek, explained how the initiative is aimed at facilitating speedy deliveries throughout the urban and rural areas of the country. He elaborated on how the project offers an opportunity to potential stakeholders to commercialize and upgrade mail delivery services in India. 
