Loram, the vehicle fleet company, plans to use Nauto’s AI technology to curb distracted driving across its fleet of trucks. The company will deploy Nauto’s devices, which use computer vision and deep learning, to alert drivers when their attention drifts.
According to a survey by Insight, a truck fleet software company, an average of 4.5 accidents a year is attributed to a single truck driver. With Loram truck drivers covering over 14,000 miles a day, the risk is significant. After exploring several technologies, Grahan Rose, vehicle fleet manager at Loram, decided to proceed with Nauto’s AI and deep learning technology.
Nauto was founded in 2015 with capabilities in facial recognition and managing distracted driving. Nauto installs a tiny camera device equipped with computer vision technology to capture what is happening on the road and what the driver is doing.
The camera does not record everything, only the roughly 30 seconds around a probable collision or driving violation. The device also computes the probability of a collision when drivers are distracted and alerts them when that probability exceeds 30 percent. If it detects an imminent collision, it sounds an alarm.
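This event-triggered recording and alerting behavior can be sketched as a toy Python snippet. The 30-second window and 30 percent threshold come from the description above; the frame rate, class name, and every other detail are hypothetical, not Nauto’s actual implementation.

```python
from collections import deque

FPS = 10                 # assumed frames per second (illustrative)
WINDOW_SECONDS = 30      # device keeps roughly 30 s around an event
ALERT_THRESHOLD = 0.30   # alert when collision probability exceeds 30%

class EventRecorder:
    """Toy sketch: ring buffer of recent frames plus a threshold alert."""

    def __init__(self):
        # Only the most recent WINDOW_SECONDS worth of frames is retained.
        self.buffer = deque(maxlen=FPS * WINDOW_SECONDS)
        self.clips = []

    def on_frame(self, frame, collision_prob):
        self.buffer.append(frame)
        if collision_prob > ALERT_THRESHOLD:
            # Save the surrounding window; a real device would also alarm.
            self.clips.append(list(self.buffer))
            return "ALERT"
        return None

rec = EventRecorder()
for t in range(400):
    # Simulated feed: low risk everywhere except one moment at t = 350.
    prob = 0.6 if t == 350 else 0.05
    rec.on_frame(frame=t, collision_prob=prob)

print(len(rec.clips))  # exactly one clip was saved around the risky moment
```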
Loram has been involved with Nauto since 2018 for overall road traffic management and monitoring drivers’ behavior. Rose said, “It’s significantly decreased our tailgating, our cellphone usage. Our guys are going hands-free and not picking up their cellphones and getting distracted when they’re driving.”
Australia-based AICRAFT and Bengaluru-based aerospace firm Valdel Advanced Technologies signed an MoU on machine learning and artificial intelligence. The memorandum was inked in the presence of the Indian Space Research Organisation (ISRO) on the concluding day of the Bengaluru Space Expo.
The memorandum aims to help Valdel in simulation and manufacturing using AICRAFT’s advanced AI capabilities. Valdel’s services will be used to test AICRAFT’s platforms and systems.
Sarah Kirlew, Consul General for South India, said, “We are pleased that Australian industry has continued to explore tangible space collaboration with India over the past few days.” The statement accompanied six other memorandums signed by several Australian and Indian companies, two of them based in Bengaluru.
Additionally, an Australian aerospace company, Space Machines, opened a new R&D center in Bengaluru. The company will collaborate with Ananth Technologies on product development, testing, integration, and joint-space missions. Another Australian company, HEX20, will collaborate with Skyroot Aerospace, a Hyderabad-based company, to launch spacecraft and avionics.
The BSE also saw many more collaborations among companies such as GalaxEye, QL Space, SatSure, Australia’s SABRN Health, and Altdata.
RunwayML introduced a new text-to-video feature on its AI-backed web video editor. The new update can edit videos with written commands called “prompts.” Runway posted a video demonstration reel to show how a simple text input can alter a video. You write commands like “remove the object,” “make it more cinematic,” etc., in an input text box, and it gets done.
The promotional video also shows text-to-image generation similar to Stable Diffusion, text overlay, character masking, and much more.
Text-to-video technologies are still in a primitive state because of high computational requirements and the scarcity of video datasets with metadata for training. Runway’s text-to-video is not the first of its kind; there have been other successful attempts, such as CogVideo, which can generate videos, though only at low resolution.
With each new attempt, it is reasonable to expect better quality from text-to-video technologies; Runway’s teaser is thus a step forward in synthetic video generation.
Until now, Runway has been accessible as a web-based commercial video editing product. It is not entirely free and charges a monthly fee, including charges for cloud storage. Currently, the new text-to-video feature is in a controlled phase, with “Early Access” only available to a few people. You can sign up for the waitlist on Runway.
MIT researchers and the MIT-born startup DynamoFL have created FedLTN, a federated learning-based system. FedLTN is based on the lottery ticket hypothesis, a machine learning concept. The hypothesis postulates that incredibly large neural network models contain considerably smaller subnetworks that can perform at a similar level. The researchers explain that finding one of these subnetworks is equivalent to finding a winning lottery ticket. Therefore, the ‘LTN’ in FedLTN stands for ‘lottery ticket network.’
The advent of powerful computer processors and the availability of abundant data for neural network training have led to the enormous advancement of machine learning-based technologies. Machine learning models typically perform better when trained on a wider variety of data, encouraging businesses and organizations to gather as much information as possible from their consumers. This includes data from sensors in user devices, GPS, cameras, CCTV surveillance, wearables, smartphones, and EHRs. From a privacy standpoint, however, user-generated data is typically sensitive, including location data, private medical records, and social interactions. Compiling such sensitive data on a centralized server poses a major privacy-infringement risk.
Beyond privacy concerns, relaying data to a central server for training introduces further problems: higher network expenses, management and business-compliance costs, and potential regulatory and legal complexities. Moreover, with increasing network congestion, requesting that all training data be sent to a remote server is likely to become impractical, inhibiting the adoption of centralized machine learning on user devices connected over wired and wireless networks.
The need for privacy-preserving machine learning is growing as the general public and lawmakers become more aware of the data revolution. In light of this, research on privacy-respecting methods like homomorphic encryption, secure multiparty computing, and federated learning is becoming more and more prominent. For the time being, we’ll concentrate in this post on how federated learning makes privacy feasible.
Federated learning, sometimes referred to as collaborative learning, enables the mass training of models on data that remains dispersed across the devices where it was created. Millions of users train models on their own devices using local datasets, then communicate only insights, such as parameter updates of the local model, to a central server. The server combines the weights from all participating clients into a new model version, and users receive a copy of the updated global model to start the next federated training round. This cycle repeats until the model converges. Because the centralized training orchestrator sees each user’s contribution only through model updates, the sensitive data stays with its owners, where the initial training is carried out.
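The training cycle just described can be sketched in a few lines of Python. This is a minimal federated-averaging illustration on a toy linear model with synthetic client data; the model, loss, learning rates, and client counts are all stand-ins, not any particular production system.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=20):
    """One client's local training: a few gradient steps on its own data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Three clients, each holding private data drawn around the same true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # federated training rounds
    # Each client trains locally and sends back only its updated weights;
    # the raw data never leaves the client.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # The server averages the clients' weights into a new global model.
    global_w = np.mean(local_ws, axis=0)

print(np.round(global_w, 1))  # converges near the true weights [2., -1.]
```

The key design point is that only `local_ws` (model parameters) crosses the network; the `(X, y)` pairs stay on each client.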
Despite its goal of improving user privacy and reducing communication costs by sharing only updated parameters, federated learning faces three significant bottlenecks. First, the quality of the data contributed by end-user participants can vary considerably: devices and individuals differ in their capacity to provide training data, and unforeseen random errors can occur during data collection and storage. Since each user collects their own data, the datasets do not necessarily follow the same statistical patterns, which degrades the performance of the combined global model. Data quality must therefore be considered alongside participants’ privacy concerns to ensure that the learning process is impartial and free from discrimination.
Additionally, the combined model is created by averaging the results, implying it is not customized for each individual. Further, transferring the local model parameters to the central server, and copies of the updated global model back to local devices, requires transporting a lot of data at high connection costs.
A solution devised by MIT researchers tackles all three issues with federated learning at once. It reduces the size of the combined machine-learning model while increasing accuracy and expediting communication between users and the central server. It also ensures that each user obtains a model better tailored to their environment, improving performance. Compared with alternative methods, the team reduced model size by nearly an order of magnitude, cutting communication costs for individual users by a factor of four to six, and boosted the model’s overall accuracy by approximately 10%.
To implement the lottery ticket hypothesis, the researchers used an iterative pruning technique: they repeatedly removed nodes and the connections between them, then checked whether the leaner network’s accuracy remained above a set threshold.
Pruning has been used in previous federated learning methods to shrink machine learning models so they can be shared more efficiently, but while those methods sped things up, model performance deteriorated. Hence, the researchers used a few cutting-edge techniques to accelerate the pruning procedure while improving the accuracy and personalization of the new, smaller models.
By skipping the step in which the remaining parts of the pruned neural network are “rewound” to their initial values, they were able to accelerate the pruning process. In addition, the model was trained before being pruned, which enhanced its accuracy and enabled faster pruning.
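A toy sketch of this pruning loop, under the assumptions above (train first, prune the smallest-magnitude weights, stop when accuracy falls below a threshold, no rewinding to initial values), might look like the following. The model, data, and tolerances are illustrative stand-ins, not the FedLTN paper’s actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary task that depends on only the first 2 of 20 features,
# so most weights can be pruned without hurting accuracy.
X = rng.normal(size=(500, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def train(w, mask, lr=0.5, steps=200):
    """Logistic-regression training restricted to unpruned weights."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ (w * mask))))
        w -= lr * mask * (X.T @ (p - y)) / len(y)  # masked gradient step
    return w

def accuracy(w, mask):
    return np.mean(((X @ (w * mask)) > 0) == (y > 0.5))

w = rng.normal(scale=0.1, size=20)
mask = np.ones(20)
w = train(w, mask)                    # train BEFORE pruning
threshold = accuracy(w, mask) - 0.02  # tolerate a small accuracy drop

while mask.sum() > 1:
    trial = mask.copy()
    # Prune the smallest-magnitude surviving weight.
    alive = np.where(trial == 1)[0]
    trial[alive[np.argmin(np.abs(w[alive]))]] = 0
    w_trial = train(w.copy(), trial)  # fine-tune further, no rewinding
    if accuracy(w_trial, trial) < threshold:
        break                         # pruning would hurt accuracy: stop
    w, mask = w_trial, trial

print(int(mask.sum()))  # only a handful of weights survive
```

Because fine-tuning continues from the already-trained weights instead of rewinding, each pruning round is cheap; the loop stops as soon as removing another weight would push accuracy below the threshold.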
To make each model more customized to its user’s environment, the researchers were careful not to prune layers of the network that capture crucial statistical information about that user’s particular data. In addition, whenever models were aggregated, information already available on the central server was reused, saving time and avoiding repeated communication rounds.
Once researchers tested FedLTN in simulations, they found that it improved performance and cut communication costs across the board. In one experiment, a model created using a conventional federated learning method was 45 megabytes in size; however, the model created using their technology was just 5 megabytes and had the same accuracy. Another test compared FedLTN’s performance to a state-of-the-art approach, which needed 12,000 megabytes of communication between users and the server to train a single model, compared to FedLTN’s 4,500 megabytes.
Even the worst-performing users with FedLTN had a performance improvement of more than 10%. And according to Vaikkunth Mugunthan, Ph.D. ’22, lead author of this research paper, the total model accuracy outperformed the state-of-the-art personalization algorithm by over 10%.
After creating and perfecting FedLTN, Mugunthan is currently attempting to incorporate the method into DynamoFL. He wants to improve this solution in the future, especially by utilizing the same techniques on unlabeled data. Mugunthan hopes this research encourages other academics to reconsider how they approach federated learning.
Mugunthan collaborated on the paper with his adviser, senior author Lalana Kagal, a principal research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
The Google team, along with Alex Wiltschko and Richard Gerkin, has accomplished a remarkable feat by mapping the scent of molecules and digitizing the sense of smell. By doing so, the team has opened up the possibility of discovering new odors and molecules.
The researchers validated their model against new molecules, linked their findings to biological odor mechanisms, and expanded their results to find new approaches to global health challenges. This transformational research’s real-world applications are beyond exciting.
The names of the smells that various molecules are said to evoke, such as meaty, floral, or mint, were combined with thousands of examples of those molecules in a graph neural network (GNN) model that Google AI created in 2019. This approach was necessary to study the correlation between a molecule’s structure and its likelihood of having a specific odor label.
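As a rough illustration of the kind of model described, the sketch below runs one round of message passing over a toy molecular graph and maps the pooled embedding to multi-label odor probabilities. The weights are random and every dimension, label count, and architectural choice here is hypothetical; it only shows the data flow of a GNN, not Google’s trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_predict(node_feats, adjacency, W_msg, W_out):
    """One round of message passing, then a graph-level readout."""
    # Each atom aggregates its neighbors' features (mean over neighbors).
    deg = np.maximum(adjacency.sum(axis=1, keepdims=True), 1)
    messages = (adjacency @ node_feats) / deg
    # Update node embeddings with a shared learned transform.
    h = np.tanh((node_feats + messages) @ W_msg)
    # Readout: sum node embeddings into a single molecule embedding.
    mol = h.sum(axis=0)
    # Multi-label odor scores: one sigmoid per label (e.g. floral, minty).
    return 1 / (1 + np.exp(-(mol @ W_out)))

n_atoms, feat_dim, hidden, n_labels = 5, 8, 16, 3
node_feats = rng.normal(size=(n_atoms, feat_dim))  # per-atom features

adjacency = np.zeros((n_atoms, n_atoms))
for i in range(n_atoms - 1):  # a simple chain "molecule": atoms bonded in a line
    adjacency[i, i + 1] = adjacency[i + 1, i] = 1

W_msg = rng.normal(scale=0.3, size=(feat_dim, hidden))
W_out = rng.normal(scale=0.3, size=(hidden, n_labels))

probs = gnn_predict(node_feats, adjacency, W_msg, W_out)
print(probs.shape)  # one probability per odor label
```

Training such a model on thousands of (molecule, odor-label) pairs is what lets the learned embeddings correlate structure with smell.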
The principal odor map (POM) displays pairs of odors perceived similarly as nearby points with a similar hue. The Google AI researchers show how the map can be used to understand basic olfactory biology, predict the odor properties of new molecules, and address urgent global health issues. Several tests have already been run on the map.
Test 1: Testing with molecules that did not correlate with odors
The researchers tested the fundamental model to see if it could accurately predict the smells of novel molecules they didn’t include in its development.
Test 2: Linking odor quality to the fundamental biology
The researchers tested the odor map to see if it could predict animal odor perception and the underlying brain activity. They discovered that it accurately predicted the behavior of most of the animals studied by neuroscientists, including mice and insects, and the activity of sensory receptors, neurons, and synapses.
Test 3: Addressing a global health problem
The odor map opens up new possibilities because it closely relates to animal perception and biology, and because the POM can be used to predict animal olfaction in general. The researchers chose to retrain the POM to tackle one of humanity’s most significant issues: the spread of diseases carried by ticks and mosquitoes.
Dataiku has released a new documentary series, “AI & Us.” The series focuses on how artificial intelligence has become highly accessible nowadays and is revolutionizing many industries. The company believes it will build a connection between enterprises and AI while enlightening entrepreneurs to create and provide insightful services.
The series was screened on 7 September, with the first episode, “Dressed by Machines,” discussing AI in the fashion industry. It depicted how AI helps deliver customized recommendations and helps create and forecast trends. Dataiku claimed that the lifetime of an average trend or style is only five weeks, given society’s appetite for “fast fashion.”
The premiere episode also presents a project with a “reactive reality rig,” a turntable connected to a camera that takes 360-degree images of a fashion ensemble. These images are instantly processed into a 3D version that users can fit to their scanned faces and body measurements. The idea is to create a 3D avatar that can virtually “try on” the outfit.
Nevertheless, the technology is not intended to replace designers or models in showcasing fashion ensembles. Costas Kazantzi, a creative technologist at Fashion Innovation Agency, said, “AI cannot replace the fashion designer from a fashion perspective. However, I believe the very element of artificial intelligence is about randomness.” He added that AI would help in diversifying the creative process.
Shaun McGirr, a data scientist and VP of AI strategy at Dataiku, hosts the series. McGirr will introduce more interesting use cases in subsequent episodes, along with ethical questions that reveal society’s perception of AI.
UserStudy, a SaaS startup that simplifies user research for product teams, raised eight crores in a pre-seed funding round led by Better Capital. Other participating investors were Microsoft, Meta, Sparrow VC, Maninder Gulati, Good Capital, Gojek, FlexiLoans, Oyo, and Upgrad.
UserStudy is building a video-first research solution that gathers insights from audio and video channels in an organization and consolidates all insights and feedback in one place. The company also helps product teams recruit research participants with robust segmentation by demographic and professional attributes. UserStudy is a full-stack research platform that encompasses the whole value chain of user research.
“We use what is easy and discard complex things. We believe user research is too complex and laborious; products and experiences are suffering at scale due to this challenge. UserStudy helps with this by making user research ten times easier for product teams globally,” said Vaibhav Domkundwar of Better Capital.
UserStudy was founded earlier this year by Nitin Matiyali and Anshul Divakar, both batchmates from IIT Bombay. They have experience in organizations like Gojek, McKinsey, and Oyo. Nitin has a business background as a consultant with McKinsey and Head of Revenue for Oyo’s UK Homes business. Anshul has handled product leadership roles at Gojek & Flexiloans.
A co-founder of UserStudy said the vision is to build a one-stop SaaS for all the research needs of product teams globally. While design leaders understand the significance of user research, most teams cannot perform it well enough because conducting research is time-consuming and challenging. With UserStudy, performing research becomes effortless, and insights are generated in a matter of hours, not weeks.
The Telecom Regulatory Authority of India (TRAI) has extended the deadline for stakeholder counter-comments and comments for its recent discussion paper on leveraging opportunities around artificial intelligence (AI) and big data for the telecom industry.
In its discussion paper, released in early August, the regulator sought views on the opportunities and risks involved in leveraging AI and big data. It also covered issues such as customer privacy and constraints in adopting these technologies, noting that the telecom sector could leverage AI and big data for quality of service, spectrum management, and network security.
TRAI is now accepting comments and counter-comments until October 14 and 28, respectively, following requests from stakeholders for more time. It had initially sought these submissions by September 16 and September 30.
The TRAI paper noted that investment in AI and machine learning (ML) companies had increased sharply. Indian AI and analytics start-ups received US$1,108 million in funding in 2021, the highest in seven years, with a year-over-year growth rate of 32.5%.
In its discussion paper, TRAI said that leveraging AI and big data would help telecom carriers to optimize network quality with more innovative detection and anomaly prediction, assisted capacity planning, and self-optimization to respond to changing conditions.
The paper also mentioned that with the rising adoption of smartphones and the growth of mobile internet, telcos now have access to exceptional amounts of data. These sources include customer profiles, device data, call detail records, network data, customer usage patterns, and location data, which, combined, constitute big data.
A Chinese metaverse firm has chosen an AI-powered virtual humanoid robot as its chief executive officer (CEO). The official announcement was made last week.
The humanoid robot, Tang Yu, will be leading the operations at China’s NetDragon Websoft, making it the first robot to hold an executive role in a business. NetDragon Websoft develops and operates multiplayer online games in addition to producing mobile applications.
Tang Yu will handle the organizational and operational aspects of the company, which is worth nearly $10 billion. According to the company, the new CEO will boost the speed of execution, improve the quality of job activities and optimize process flow.
It also said that the robot would act as a real-time data hub and analytical tool, enabling logical decision-making in everyday operations. The company further said that Tang Yu would contribute to making the risk management system more efficient.
It is interesting to note that Jack Ma, the founder and chairman of Alibaba Group, predicted in 2017 that a robot would probably be featured as Time Magazine’s best CEO in 30 years.
Though different-colored aliens and intergalactic travel are yet to be discovered, science fiction has always been a vehicle for futuristic creativity. As a result, several technologies that were once science fiction have become reality.
The Australian Federal Police (AFP) has established a new crypto unit to monitor cryptocurrency transactions. The team, part of the AFP’s Criminal Assets Confiscation Taskforce (CACT), will track down and confiscate cryptocurrencies linked to criminal activities.
Stefan Jerga, the national manager of the AFP’s criminal asset confiscation command, reported that the use of crypto in criminal activities has dramatically expanded since the AFP made its first crypto seizure in early 2018. In response, Jerga said that Australian Federal Police made the decision to form a separate crypto taskforce unit in August.
This decision comes as the Australian Federal Police has reached its five-year goal of seizing $600 million in fraudulent assets in just three years, two years ahead of schedule. The benchmark was initially set by the Criminal Assets Confiscation Taskforce and was scheduled to be reached by 2024. This achievement demonstrates not only the AFP’s proficiency but also the alarming surge in fraud. As a result, the emphasis on illegal crypto transactions coincides with the AFP collecting far more criminal assets than authorities had anticipated.
Jerga made his comments after the Australian Federal Police said that since 2020, the CACT had seized $380 million in residential and commercial property, $200 million in cash and bank accounts, and $35 million in cars, boats, aircraft, artworks, luxury items and cryptocurrencies.
The Australian Transaction Reports and Analysis Centre (AUSTRAC), a government financial intelligence organization, issued a warning about the increasing profitability of cryptocurrencies for criminals in April. According to the AUSTRAC deputy chief executive John Moss, crypto assets are appealing to groups like Neo-Nazis due to their anonymity and ease of cross-border transactions.
Australians reportedly lost almost $26 million in 2021 due to cryptocurrency-related fraud. According to the Australian Competition and Consumer Commission, Australians lost AU$205 million between January and May 1, 2022. However, the true amount is probably significantly higher. Nevertheless, the stated sum revealed a 166% rise in crypto fraud losses from the same time the previous year.
Last month, the Australian Treasury announced the establishment of a multi-step regulatory framework for cryptocurrencies. The framework aims to be more detailed and informed, and it includes what the Treasury refers to as “token mapping,” which enables public servants to monitor changes in the Australian cryptocurrency market.