Maharashtra State Road Transport Corporation (MSRTC) buses are set to get an artificial intelligence (AI)-powered driver monitoring system in the coming months.
The AI-powered system will be integrated with CCTV cameras and will not only monitor drivers but also issue voice prompts when it detects that a driver is distracted.
On June 1st, the state transport corporation launched its first intercity e-bus service between Pune and Ahmednagar using an e-bus dubbed “Shivai.”
This development is part of the government’s plan to encourage safe driving and minimize road accidents caused by human error.
The Times of India reported that Shekhar Channe, managing director of MSRTC, stated that the new technologies would be put in all new buses purchased by the state road transport firm.
According to Channe, existing vehicles might also receive the upgrade, but this has not yet been confirmed.
Channe said, “The Shivai e-buses have the voice system. If a driver is inattentive, speaks to fellow passengers, or talks on a cellphone while driving, a voice would immediately alert him to be careful and focus on driving.” He added that passengers will also hear the voice and can advise the driver in question to concentrate on driving.
Moreover, the CCTV system on board has a recording feature that allows authorities to check footage in the event of an accident.
“We understand that monitoring all buses at all times is not possible. But the new systems will act as a deterrent, and drivers will be more careful,” said an MSRTC spokesperson.
Google has released a product map that helps identify equivalent cloud services across different cloud platforms. Given the diversity of cloud vendors, the map can come in handy for finding the AWS and Azure counterparts of GCP offerings.
Many cloud users compare offerings from various providers to evaluate and streamline their workflows. However, people often struggle to find the exact equivalent on a different cloud platform.
Google’s stated mission is to organize the world’s information and make it broadly accessible and useful. In that spirit, it has released a simple product map highlighting comparable products from Google Cloud Platform, AWS, and Azure.
The cloud services can be filtered by product name or other standard attributes, making it easier to understand each product and its vendor before selecting the best fit.
One suggestion: dig a little deeper when something catches your eye, because some of Google’s services are listed without an equivalent from Azure or AWS. Features such as Anthos Clusters, Network Intelligence Center, and VPC Service Controls are not mapped to any AWS or Azure service.
This does not necessarily mean such services are unique to Google; rather, it indicates differences between the Google, AWS, and Azure portfolios. Nevertheless, the product map lets you navigate the offerings and their differences with ease.
Graph neural networks (GNNs) are a family of deep learning models that operate on graphs: data structures made up of vertices (also called nodes, the two terms being used interchangeably) connected by edges. Depending on whether the relationships between vertices are directional, edges can be either directed or undirected. GNNs are often considered an advancement over convolutional neural networks (CNNs), because CNNs cannot handle graph data directly: nodes in a graph have no inherent order, and the dependence between two nodes is encoded by the edge connecting them.
Neural networks are critical to the development of machine learning models, as they help capture data patterns where conventional model training can fail. However, this technology generally caters to Euclidean data, i.e., data living in grid-like one- or two-dimensional domains, such as audio, images, and text.
Graph neural networks are distinct from traditional machine learning models in that they analyze graph-structured data rather than Euclidean data with a grid-like structure. Graph-structured data, which can also live in a three-dimensional domain, is a prime example of non-Euclidean data, ranging from complex molecular structures to traffic networks. If this form of data is forced into a Euclidean framework, valuable information can be lost: a traditional neural network does not usually take into account the features of each vertex or edge. Machine learning algorithms that expect input in grid-like or rectangular arrays are limited when analyzing graphs in non-Euclidean space, where nodes and edges cannot be represented using fixed coordinates. Moreover, when a graph is converted into an adjacency matrix, the same graph can yield many different-looking matrices depending on how its nodes are ordered.
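The ordering problem can be made concrete with a small sketch. The toy NumPy code below (illustrative only, not taken from any paper discussed here) builds the adjacency matrix of the same four-node graph under two different node orderings and shows that the matrices differ, which is exactly the permutation ambiguity that grid-oriented models struggle with:

```python
import numpy as np

# Toy undirected graph: edges between node pairs (labels are arbitrary).
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]

def adjacency(edges, order):
    """Build an adjacency matrix under a given node ordering."""
    idx = {node: i for i, node in enumerate(order)}
    n = len(order)
    a = np.zeros((n, n), dtype=int)
    for u, v in edges:
        a[idx[u], idx[v]] = 1
        a[idx[v], idx[u]] = 1  # undirected graph: symmetric matrix
    return a

a1 = adjacency(edges, order=[0, 1, 2, 3])
a2 = adjacency(edges, order=[3, 2, 1, 0])  # same graph, nodes relabeled

# The two matrices differ element-wise even though they encode one graph.
print(np.array_equal(a1, a2))  # → False
```

A model that consumes the raw matrix therefore sees two "different" inputs for one object, which is why GNNs instead aggregate information along edges, independent of node order.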
Since a graph structure can represent most objects, Graph Neural Networks offer a wide range of possible applications for non-euclidean data.
Thousands of GNN variants have been created due to a spike in interest in GNNs over the past few years. Methods and datasets for testing GNNs, on the other hand, have received significantly less attention. Many GNN papers employ the same 5–10 benchmark datasets, mostly made up of easily labeled academic citation networks and molecular datasets. This means the empirical performance of new GNN variants can only be claimed for a small set of graphs. Recently published research with rigorous experimental designs raises doubts about the performance rankings of prominent GNN models reported in formative publications, further complicating the situation.
As mentioned above, GNN benchmark datasets are reused across publications, as in many machine learning subfields, to quantify the incremental advances of new designs. However, as seen in NLP and computer vision, this can easily lead to novel architectures overfitting the datasets over time. The effect is amplified if the primary collection of benchmark graphs shares similar structural and statistical qualities.
To address these bottlenecks, Stanford unveiled the Open Graph Benchmark (OGB), open-source software for assessing GNNs on a handful of massive-scale graph datasets across a range of tasks, allowing for a more uniform GNN experimental design. However, like existing datasets, OGB’s graphs were sourced from many of the same domains, so it does not fully address the dataset-variety issue.
The Open Graph Benchmark raised the number of nodes in experiment-friendly benchmark citation graphs by more than 1,000 times. From one point of view, this is entirely natural as computational capabilities improve and graph-based learning problems become more data-rich. However, while the availability of enormous graphs is critical for evaluating GNN software, platforms, and model complexity, giant graphs are not required to verify GNN accuracy or scientific relevance. Standardized graph datasets for assessing GNN expressiveness become less accessible to the typical researcher as the field’s benchmark graphs grow in size.
Furthermore, without access to institution-scale compute resources, investigating GNN hyperparameter tuning approaches or training variance on big benchmark datasets is almost impossible.
In “GraphWorld: Fake Graphs Bring Real Insights for GNNs,” Google proposes a framework for measuring the performance of GNN architectures on millions of synthetic benchmark datasets, matching the volume and pace of GNN development. Google positions GraphWorld as a complementary GNN benchmark that lets researchers investigate GNN performance in portions of graph space not covered by popular academic datasets. As Google puts it, while “GNN benchmark datasets featured in the academic literature are just individual locations on a fully-diverse world of potential graphs, GraphWorld directly generates this world using probability models, tests GNN models at every location on it, and extracts generalizable insights from the results.”
To motivate GraphWorld, the researchers compare Open Graph Benchmark graphs to a much larger collection (5,000+) of graphs from the Network Repository. While most Network Repository graphs are unlabeled and so cannot be used in standard GNN experiments, the authors found that they represent many graphs that exist in the real world. They computed the clustering coefficient (how coupled nodes are to adjacent neighbors) and the degree-distribution Gini coefficient (the inequality among nodes’ connection counts) for the OGB and Network Repository graphs, and discovered that the OGB datasets occupy a small and sparsely populated portion of this metric space.
When using GraphWorld to explore GNN performance on a given task, the researcher first selects a parameterized generator that can produce graph datasets for stress-testing GNN models on that task. A generator parameter is an input that influences the output dataset’s high-level properties. GraphWorld employs parameterized generators to build populations of graph datasets varied enough to challenge state-of-the-art GNN models. It samples generator parameter values using parallel computing (e.g., Google Cloud Platform Dataflow) to create a series of GNN benchmark datasets, evaluates an arbitrary number of user-selected GNN models (e.g., GCN, GAT, GraphSAGE) on each dataset, and then produces a large tabular dataset that combines graph attributes with GNN performance results.
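GraphWorld’s actual generators run as a distributed pipeline, but the parameter-sampling idea can be sketched in a few lines. The toy Python below (all function names hypothetical, not GraphWorld’s API) samples parameters for a simple stochastic block model, a standard generator for node-classification benchmarks, to produce a small population of labeled graphs:

```python
import random

def sample_sbm(n_nodes, n_blocks, p_in, p_out, seed=0):
    """Toy stochastic block model: nodes in the same block connect
    with probability p_in, nodes in different blocks with p_out."""
    rng = random.Random(seed)
    blocks = [i % n_blocks for i in range(n_nodes)]  # node labels
    edges = []
    for u in range(n_nodes):
        for v in range(u + 1, n_nodes):
            p = p_in if blocks[u] == blocks[v] else p_out
            if rng.random() < p:
                edges.append((u, v))
    return blocks, edges

def sample_benchmark_population(n_datasets, seed=0):
    """Sweep generator parameters to produce a diverse dataset population."""
    rng = random.Random(seed)
    datasets = []
    for _ in range(n_datasets):
        params = {
            "n_nodes": rng.randint(50, 200),
            "n_blocks": rng.choice([2, 3, 4]),
            "p_in": rng.uniform(0.2, 0.8),    # intra-block density
            "p_out": rng.uniform(0.01, 0.2),  # inter-block density
        }
        blocks, edges = sample_sbm(seed=rng.randint(0, 10**9), **params)
        datasets.append({"params": params, "labels": blocks, "edges": edges})
    return datasets

population = sample_benchmark_population(n_datasets=5)
# Each entry pairs generator parameters with a labeled graph, ready to be
# handed to any GNN model; GraphWorld does this at a scale of millions.
```

Each sampled parameter vector is one “location” in graph space; evaluating a GNN on every sampled dataset is what lets GraphWorld map performance across that space.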
In their paper, Google researchers outline GraphWorld pipelines for node classification, link prediction, and graph classification tasks, each with its own dataset generator. They note that each pipeline required less time and compute than state-of-the-art experimentation on OGB graphs, suggesting that GraphWorld is affordable for researchers on a tight budget.
GraphWorld, according to Google, is cost-effective, as it can execute hundreds of thousands of GNN tests on synthetic data for less than the cost of one experiment on an extensive OGB dataset.
Students from the Hyderabad campus of KL (Deemed to be University) have come up with a new artificial intelligence (AI)-powered device that can help bikers have a smoother ride.
The new AI-based tech detects obstacles in the rider’s path and alerts them, making for a better riding experience.
Cherutanuri Sai Santosh Reddy, Cherukuri Shravan Sairam, Boddu Avinash, and Marri Akhil Reddy, currently second-year CSE students, developed this unique AI system.
Santosh said, “I decided to build this device after seeing my cousin struggling to ride on a road full of potholes.”
According to the developers, the new AI system will be useful not only for beginners but also for experienced riders, as it provides a better riding experience. The technology includes a 4K-resolution night-vision sensor, an alerting system, Bluetooth speakers, and LED lighting.
The students spent roughly Rs 12,000 to make the device. However, they claim the cost can be brought down if it is built in large quantities.
The AI device is fitted near the bike’s speedometer and alerts riders whenever it detects an obstacle within a ten-meter range.
According to the developers, they intend to launch the device on campus initially before partnering with bike manufacturers. The device has been tested on campus by the team and has been shown to work with both geared and non-geared bikes.
Shravan said, “Right now, the accuracy is about 67%. While the device is accurately detecting rocks and sand, the accuracy while detecting pits is not 100%. Once we fix it, we will introduce it on the campus.”
In what could be a radical shift for traditional postal delivery, India Post has successfully completed its first drone mail-delivery pilot with Gurugram-based startup TechEagle Innovations. TechEagle is India’s first and largest manufacturer of high-speed, long-range, heavy-payload delivery drone logistics systems.
As part of the pilot, a parcel was delivered from Bhuj to Bhachau taluka in Kutch, Gujarat, covering an approximate distance of 46 km in almost 25 minutes, around 5x faster than surface transportation.
TechEagle Innovations tweeted, “It was the longest drone delivery in a single flight and that too in a harsh weather environment, with a wind speed of more than 31 km/hr.”
The delivery used TechEagle’s hybrid-electric vertical take-off and landing (VTOL) drone, christened ‘VertiplaneX3’, which can take off and land vertically, like a helicopter, from a compact 5m×5m area. The VertiplaneX3 can carry a 3 kg payload over a distance of more than 100 km at a top speed of 120 km/hr, making it the fastest eVTOL drone made in India.
Equipped with multiple fail-safes and a water- and dust-proof design, the eVTOL can withstand winds of up to 45 km/hr and extreme temperatures between -15°C and 30°C. These features make it suitable for medical, military, maritime, hyperlocal, and e-commerce deliveries.
Co-founder and COO of TechEagle Innovations, Anshu Abhishek, explained how the initiative is aimed at facilitating speedy deliveries throughout the urban and rural areas of the country. He elaborated on how the project offers an opportunity to potential stakeholders to commercialize and upgrade mail delivery services in India.
Allado-McDowell set out to write, with AI, the most cringe-worthy novel possible, but instead crafted a strange and humorous book. Amor Cringe is an ode to cringe maximalism: fast, lively, and a little sleazy, and reading it feels like frolicking around with glee while everyone else is having none.
When McDowell was asked about the implications of the word “cringe,” she answered (with GPT-3): “Cringe is a phrase that organically came from the internet. It refers to behavior that makes a witness squirm with embarrassment, online and in real life. My broad hypothesis of ‘cringe’ is that it reflects our social nature as humans.”
The novel is the result of a one-of-a-kind writing process in collaboration with GPT-3, a text-generating AI. Think of it as an autocomplete that finishes your sentences. It is built on a large language model (LLM), a class of models trained on massive datasets that is central to the next generation of AI.
This isn’t Allado-McDowell’s first collaboration with AI, but Amor Cringe marks a significant step forward: rather than presenting the collaboration as a duet, Allado-McDowell has blended her words with GPT-3’s writing, resulting in a single, distinct narrative voice.
McDowell established the Artists + Machine Intelligence program at Google AI in 2016 and has a deep knowledge and understanding of AI and related areas. Before Amor Cringe, McDowell published Pharmako-AI, which records a conversation between the author and the Generative Pre-trained Transformer 3 (GPT-3).
With more developments in line with AI-based text generation, many more novels will have an artificially intelligent co-author. You never know; someday in the future, you may end up binging on a book entirely written with the help of AI.
Devo, a company providing cloud-native logging and security analytics services, has raised $100 million in its recently held Series F funding round, led by Eurazeo.
Several other investors, such as Insight Partners, General Atlantic, Bessemer Venture Partners, TCV, and others, also participated in Devo’s latest funding round.
Devo plans to use the newly raised capital to drive development in new regions and verticals, to speed Devo’s delivery of the “autonomous SOC,” and to support possible future M&A expansion. This new funding has now increased Devo’s market valuation to over $2 billion.
CEO of Devo, Marc van Zadelhoff, said, “Security teams are facing more threats than ever—regardless of industry or geography—and that challenge is compounded by the difficulty of hiring and retaining talent, a lack of visibility into the full attack surface, and the speed and scale necessary to keep up with not just growing threats, but the growth of their organizations.”
He added that this round of investment enables the company to deliver on the autonomous SOC through continued technological innovation, to expand into new areas to serve more clients, and to examine more M&A opportunities.
Moreover, he said he is delighted by the trust the company’s investors have placed in it, as they continue to support its innovation and the value it provides to customers.
United States-based cybersecurity startup Devo was founded by Pedro Castillo in 2011. The company specializes in offering a cloud-native logging and security analytics platform that unlocks data’s full potential to enable bold, confident action. To date, Devo has raised nearly $500 million from multiple investors over seven funding rounds.
According to Devo’s plan, it will continue to expand into new verticals and locations, focusing on the public sector and the Asia-Pacific (APAC) region.
To enhance efficiency, Google has been incorporating machine learning into its cloud-based Google Workspace infrastructure. These latest AI developments in Google Workspace are intended to help employees stay on the right track, collaborate securely, and improve work relationships in various environments.
Users of the platform have repeatedly told Google that the post-Covid shift to hybrid work has increased the volume of emails, meetings, chats, and so on. Google’s most recent AI developments in Workspace address these issues, along with other scheduled enhancements.
Machine learning now enhances Google Meet, making the transfer of information more convenient and secure. Other AI-based enhancements include ‘live sharing,’ which makes meetings more interactive by synchronizing content among participants.
Owing to these developments, Google Docs now offers improved automated summaries that let users catch up on missed updates.
This year, Google also plans to enhance its image, music, and content-sharing services. Its AI will provide ‘portrait restoration’ to improve video quality in low-quality feeds, low-light settings, and areas without stable network connectivity, automatically enhancing video without slowing down the device.
As the future of hybrid working arrangements seems promising, Google’s AI aims to enhance its clientele’s work experience by incorporating intelligent capabilities across the platform.
Ranjani Mani, Senior Manager of Data Sciences and Advanced Analytics at VMware, has left the company to join global software firm Atlassian after serving for roughly seven years.
She announced this news through a post on her LinkedIn profile. Mani has joined Atlassian as the Analytics Leader of Atlassian’s EMEA CSS Data Sciences team.
Mani has over a decade of expertise in analytics and data science and played a crucial role at her previous company.
At VMware, she headed a team of global data scientists that collaborated with VMware Customer Experience (CXS) leadership, product, engineering, and IT to take ideas through to completed data science and business analytics initiatives aimed at providing a great customer experience.
She is passionate about delivering corporate value and providing an excellent customer experience via analytics, strategy, and leadership. Mani has been recognized several times in reputed events and reports, including India AI 21 Women ’21, the Women in AI Leadership Award 2021, Women in Big Data 2021, and many more.
Apart from VMware, she has worked with other industry-leading companies like Oracle and Dell. She has a Bachelor’s degree in Engineering, Electronics and Communication, and a Master’s in Business Administration in Brand Management and Analytics.
“I have worked across analytics & data sciences – specifically in Product, Pricing and Customer Experience Analytics, Product Management and Strategy across companies such as Oracle, Dell Global Analytics and Tata Docomo,” Mani mentioned in an interview.
The world is not new to the notion of being watched by facial recognition systems. While facial recognition has already revolutionized biometric verification and shot to widespread usage amid Covid restrictions, the biometrics field is on the verge of new trends, such as gait recognition using motion sensors. Like a fingerprint, an individual’s gait is unique. Gait recognition is expected to soon supplement existing biometric technologies such as facial, voice, iris, and fingerprint recognition for enhanced security.
The idea recalls the movie Mission: Impossible – Rogue Nation, in which the protagonist’s sidekick Benji must walk across a room where gait analysis is performed on him before retrieving a secret Syndicate file. In the movie, the gait-analysis system observes how an agent talks, walks, and even their facial tics; any mismatch results in getting tased. In the real world, gait recognition is touted to work similarly.
Machine learning technologies form the foundation of gait recognition tools. Even if a person’s face is obscured, turned away from the camera, or covered by a mask, ML-based computer vision systems can recognize them from a video.
The software looks at a person’s silhouette, height, pace, and walking patterns and matches them against a database. Some algorithms are designed to interpret visual information, while others employ sensor data. Irrespective of the nature of the data, the whole process involves capturing data, silhouette segmentation, contour detection, and feature extraction with classification. In public locations, this technology is less intrusive, and therefore more convenient, than retinal scanners or fingerprints. Further, as mentioned earlier, gait recognition is hard to spoof, since no two people’s gaits are the same.
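As a rough illustration of those stages, the sketch below (synthetic data and hypothetical helper names, not a production system) runs a toy capture → silhouette segmentation → feature extraction → classification pipeline in NumPy:

```python
import numpy as np

def segment_silhouette(frame, threshold=0.5):
    """Crude background subtraction: bright pixels count as 'person'."""
    return (frame > threshold).astype(np.uint8)

def extract_features(silhouettes):
    """One toy feature per frame: the silhouette's width/height ratio."""
    feats = []
    for sil in silhouettes:
        ys, xs = np.nonzero(sil)
        if xs.size == 0:
            continue  # empty frame, nothing to measure
        height = ys.max() - ys.min() + 1
        width = xs.max() - xs.min() + 1
        feats.append(width / height)
    return np.array(feats)

def classify(signature, database):
    """Nearest-neighbor match against enrolled gait signatures."""
    names = list(database)
    dists = [abs(signature.mean() - database[n].mean()) for n in names]
    return names[int(np.argmin(dists))]

# Synthetic 'video': 10 random 32x32 frames standing in for captured data.
rng = np.random.default_rng(0)
frames = rng.random((10, 32, 32))

sils = [segment_silhouette(f) for f in frames]   # segmentation
sig = extract_features(sils)                     # feature extraction
db = {"alice": np.array([0.9]), "bob": np.array([2.0])}  # enrolled templates
print(classify(sig, db))
```

A real system would replace each stage with far stronger components (learned segmentation, contour models, temporal gait features), but the data flow is the same.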
Scientists are now striving to enhance recognition systems using the data and models gathered. Because each footfall is distinct, identification algorithms are constantly confronted with new data; the more gait variations an algorithm sees, the better it will assess future data.
Because no algorithmic method is perfect and there is always room for development, new capture methods and algorithms are continually being tested to address the shortcomings of present gait recognition systems. While video camera data is already used for gait analysis, researchers are also working on gait recognition based on body sensors, sensors in mobile phones or smart wearable devices, radar input, and so on.
Gait-recognition biometrics is still in its infancy; however, it has already found an interesting application: detecting human emotions from how people walk. Gait recognition combines spatio-temporal characteristics such as step length, step width, walking speed, and cycle time with kinematic parameters such as ankle, knee, and hip joint rotation; mean joint angles of the ankle, knee, hip, and thigh; and trunk and foot angles. The relationship between step length and a person’s height is also taken into account. All these factors change with an individual’s mood: a person who just learned they won a lottery will have a different stride than someone bereft or pacing nervously.
Analyzing these cues can help identify nonverbal behavior and support social interaction. A few years ago, researchers at the University of Maryland created an algorithm called ProxEmo that allowed a small wheeled robot (Clearpath’s Jackal) to assess a person’s gait in real time and anticipate how they might be feeling; based on the detected emotion, the robot could plan its course to give the person more or less space. Though this may appear a minor milestone in robot-human relations, scientists believe future machines will be clever enough to interpret an unhappy person’s walk and attempt to comfort them.
The team presented a multi-view, 16-joint skeleton graph-convolution model for classifying emotions that works with a common camera installed on the Jackal robot. The emotion-identification system was integrated into a mapless navigation scheme, and deep learning techniques were used to train it to correlate various skeleton gaits with the feelings human volunteers associated with those walkers. On the Emotion-Gait benchmark dataset, it achieved a mean average emotion-prediction precision of 82.47%.
Last year, researchers from the University of Plymouth presented an experimental biometric smartphone security system that uses gait authentication. The solution uses the phone’s motion sensors to record data on the user’s stride patterns; preliminary results from 44 participants indicate the system is between 85% and 90% accurate in recognizing an individual’s gait.
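The Plymouth system’s internals are not described here, but cadence-based matching from motion-sensor data can be sketched as follows (synthetic signal and hypothetical thresholds; a real authenticator would use far richer features than step rate):

```python
import numpy as np

def cadence(accel_mag, sample_rate_hz):
    """Estimate steps per second by counting prominent local maxima."""
    x = np.asarray(accel_mag, dtype=float)
    prominent = x > x.mean() + x.std()           # only strong swings count
    is_peak = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])
    n_steps = int(np.sum(prominent[1:-1] & is_peak))
    return n_steps * sample_rate_hz / len(x)

def authenticate(sample_cadence, enrolled_cadence, tolerance=0.2):
    """Accept if the observed cadence is close to the enrolled template."""
    return abs(sample_cadence - enrolled_cadence) <= tolerance

# Synthetic "walk": 2 steps per second recorded at 50 Hz for 10 seconds.
t = np.arange(0, 10, 1 / 50)
signal = np.sin(2 * np.pi * 2 * t)

observed = cadence(signal, sample_rate_hz=50)        # ≈ 2.0 steps/s
print(authenticate(observed, enrolled_cadence=2.0))  # → True
```

The reported 85–90% accuracy suggests the real system matches much more than cadence, but the enrollment-then-compare structure is the same.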
So far, gait recognition promises new avenues for biometric identification; its success, however, will depend on addressing concerns that it violates public privacy.