Voice biometrics has grown considerably, particularly as Siri and Alexa have expanded what voice technology can do. Two leading companies in the space, Auraya and Validsoft, are investing to help large financial services and telecommunications firms minimize fraud. They promise a higher level of security than PIN codes and passwords, along with a better user experience, ultimately reducing risk and simplifying the verification process.
Similar to a fingerprint, iris, or face, voice biometrics is also unique to an individual. To create a voiceprint, a speaker provides a sample of their voice. “When you want to verify your identity, you use another sample of your voice to compare it to that initial sample,” Paul Magee, President of voice biometrics company Auraya explained.
Given the cost and effort involved, voice technology remains one of the more underappreciated applications. Despite many hurdles, many organizations see it as a cornerstone of their verification and authentication processes.
“Nobody can steal my voice, because you can’t steal what I’m going to say next,” Magee said. When users are prompted to say their phone or account numbers, or digits displayed on a screen, that is active biometrics. “Passive is more in the background,” Magee said. “So while I’m talking with the call center agent, my voice is being sampled and the agent is being provided with a confirmation that it really is me.”
Voice systems can be cheated with recorded audio or with deepfakes that produce a synthetic version of a person’s voice. Such fraud can be countered by building live elements into the interaction and by detecting anomalies in speech.
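As a rough illustration of the active, challenge-based flow Magee describes, the sketch below pairs a randomly generated digit prompt (so a recording of a past session cannot be replayed) with a voiceprint comparison. The `embed()` function is a stand-in for a trained speaker-embedding model and is purely an assumption for illustration, not Auraya’s or Validsoft’s actual technology:

```python
import secrets
import numpy as np

def embed(audio: np.ndarray) -> np.ndarray:
    # Stand-in for a trained speaker-embedding model (e.g. an x-vector
    # network); a coarse spectral fingerprint is used here only so the
    # sketch runs end to end.
    spectrum = np.abs(np.fft.rfft(audio, n=2048))[:256]
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def make_challenge(n_digits: int = 6) -> str:
    # Fresh random digits defeat simple replay attacks: an old recording
    # will not contain the newly generated sequence.
    return "".join(secrets.choice("0123456789") for _ in range(n_digits))

def verify(enrolled: np.ndarray, attempt_audio: np.ndarray,
           threshold: float = 0.75) -> bool:
    # Compare the new sample against the enrolled voiceprint with cosine similarity.
    attempt = embed(attempt_audio)
    score = float(np.dot(enrolled, attempt) /
                  (np.linalg.norm(enrolled) * np.linalg.norm(attempt)))
    return score >= threshold

# Example flow: prompt the caller, then verify the response audio.
print("Please say:", make_challenge())
enrolled_print = embed(np.random.default_rng(0).normal(size=16000))
print(verify(enrolled_print, np.random.default_rng(1).normal(size=16000)))
```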
The greatest barrier to voice biometrics is legislation in various countries that creates obligations around the collection and use of voiceprints. The technology’s future depends on simplifying implementation and achieving regulatory compliance while delivering solutions that return on investment and improve productivity.
A recent study published on Thursday by JAMA Ophthalmology found that a computer vision-powered wearable device may help reduce collisions and other accidents for the blind and visually impaired.
Artificial intelligence’s inexorable advance has brought many novel solutions to make the lives of physically challenged people easier. When embedded into wearables, and portable devices, artificial intelligence is improving life for the blind and visually impaired.
Computer vision is a branch of artificial intelligence (AI) that allows computers and systems to extract useful information from digital pictures, videos, and other visual inputs, and to take actions or make suggestions based on that data. To mine insights from data, it leverages a subtype of machine learning called deep learning, typically through convolutional neural networks (CNNs). These models translate real-world visual input into numeric, encoded patterns in vectors, then apply repeated convolutions to labeled examples, adjusting themselves until their predictions about what they are “seeing” match what humans see.
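As a loose illustration of that convolution-and-prediction loop (not the model used in the study), a minimal CNN in PyTorch might look like this:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal convolutional network: stacked convolutions extract visual
    patterns, and a linear head turns them into class predictions."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
        )
        self.head = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.head(x.flatten(1))

# Training on labeled images adjusts the convolutions until the
# predictions match the human-assigned labels.
model = TinyCNN()
logits = model(torch.randn(1, 3, 224, 224))   # one fake RGB image
print(logits.argmax(dim=1))
```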
In the study, the researchers developed a technological aid for the visually impaired. It consisted of an experimental device and a data-recording unit housed in a sling backpack, a wide-angle camera mounted on the chest strap, and two Bluetooth-connected vibrating wristbands worn by the user. The camera feeds a processing unit that records images and assesses collision risk from the relative motion of approaching and surrounding objects in the camera’s field of view.
If an impending collision is detected on the left or right side, the corresponding wristband vibrates, while a head-on collision causes both wristbands to vibrate. Unlike walking canes or earlier devices that warn of surrounding objects regardless of whether the user is walking toward them, this device evaluates relative motion and warns only of approaching obstructions that pose a collision threat, ignoring items unlikely to cause a collision. It will not replace the walking cane altogether, however. In fact, the research showed that when used in combination with a long cane, the wearable reduced the risk of collisions and falls by 37% compared with other mobility aids.
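The paper’s actual algorithm is not reproduced here, but a simplified sketch of the alert-routing idea, assuming hypothetical time-to-collision and bearing estimates for each tracked object, could look like this:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    time_to_collision: float   # seconds, estimated from relative motion
    bearing_deg: float         # angle from camera centerline; negative = left

def route_alert(objects: list[TrackedObject],
                ttc_threshold: float = 2.0,
                head_on_deg: float = 10.0) -> tuple[bool, bool]:
    """Return (vibrate_left, vibrate_right) for the wristbands.

    Only objects on an imminent collision course trigger an alert; other
    nearby objects are ignored, mirroring the idea of warning about
    approaching obstacles rather than everything in view."""
    left = right = False
    for obj in objects:
        if obj.time_to_collision > ttc_threshold:
            continue                      # not an imminent threat
        if abs(obj.bearing_deg) <= head_on_deg:
            left = right = True           # head-on: both wristbands vibrate
        elif obj.bearing_deg < 0:
            left = True
        else:
            right = True
    return left, right

# Example: one object approaching from the right in ~1.2 s
print(route_alert([TrackedObject(time_to_collision=1.2, bearing_deg=25.0)]))
```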
According to Dr. Gang Luo, an associate scientist at the Schepens Eye Research Institute of Mass Eye and Ear, many blind individuals use long canes to detect obstacles and collision risks, but the risk is not entirely eliminated. His team therefore sought to develop a device that augments these everyday mobility aids and further improves safety. Dr. Luo is also an associate professor of ophthalmology at Harvard Medical School.
Dr. Luo trialed the device with his colleagues in his vision rehabilitation lab, including the lead author Shrinivas Pundlik, Ph.D., who designed the computer vision algorithm. They tested the device on 31 blind and visually impaired adults who use a long cane or guide dog, or both, to aid their daily mobility.
After being instructed on how to operate the wearable device, participants used it for roughly a month during everyday activities while continuing to use their usual mobility aids. The wearable switched between active and silent modes at random. In active mode, users received vibrating alerts for imminent collisions; in silent mode, the device captured and analyzed images but did not vibrate to warn of impending collisions.
As per Dr. Luo, the silent mode is similar to the placebo condition in a drug trial: during testing and analysis, neither the wearers nor the researchers knew when the device switched modes. The researchers examined the recorded footage for collisions and assessed the device’s efficacy by comparing collisions that occurred during the active and silent modes. Collision frequency in active mode turned out to be 37% lower than in silent mode.
The team also notes that the wearable may not have identified every conceivable hazard. Before seeking U.S. Food and Drug Administration clearance, the researchers want to improve the computer vision processing and camera technology to make the device more efficient, smaller, and more visually appealing.
The clinical trial of the computer vision wearable was funded by a grant from the U.S. Army Medical Research and Materiel Command, and Mass Eye and Ear holds the patent on the device.
Omega’s relentless evolution in timekeeping technology has delivered precise timing to the world’s finest competitors. Omega is the official timekeeper of the Tokyo 2020 Olympics and now uses computer vision and motion sensors at events such as swimming, gymnastics, and beach volleyball to generate on-screen graphics.
Omega has carried the legacy of Olympic timekeeping since 1932. From the stopwatch to the futuristic starting pistol, Omega, together with the British Race Finish Record Company, has helped record results precisely and fairly across sporting events. Over time, timekeeping switched entirely to electronics, with photo-finish cameras capturing 10 new world records.
For the 2021 Games in Tokyo, the International Olympic Committee introduced four new sports: skateboarding, surfing, karate, and sport climbing. The most interesting work, though, is how Omega spent the last four years training its in-house artificial intelligence to understand beach volleyball.
Beach volleyball is a team sport played by two teams of two players on a sand court divided by a net. Training an AI for the sport requires positioning and motion technology that can recognize numerous shot types, from smashes, spikes, and passes to blocks, as well as the ball’s flight path. This data is then combined with information gathered from gyroscope sensors in the players’ clothing; the motion sensors tell the system the direction of the athletes’ movement, the height of their jumps, and their speed. All of this is processed and fed live to broadcasters for use in commentary or on-screen graphics.
The main challenge for the AI is dealing with missing data while the ball is outside camera coverage; it needs to fill in those gaps once the ball comes back into view. “When you can track the ball, you will know where it was located and when it changed direction. And with the combination of the sensors on the athletes, the algorithm will then recognize the shot,” Zobrist says.
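Omega has not published its pipeline, but one simple way to fill such gaps, shown here purely as an illustrative sketch, is to interpolate the missing ball positions once the ball reappears in view:

```python
import numpy as np

def fill_tracking_gaps(positions: np.ndarray) -> np.ndarray:
    """Fill NaN gaps in a tracked trajectory by interpolating between the
    last position before the ball left the frame and the first position
    after it reappears. `positions` has shape (frames, 2) for x, y."""
    filled = positions.copy()
    frames = np.arange(len(positions))
    for axis in range(positions.shape[1]):
        col = positions[:, axis]
        known = ~np.isnan(col)
        filled[:, axis] = np.interp(frames, frames[known], col[known])
    return filled

# Example: the ball is out of view for frames 2-3
track = np.array([[0.0, 1.0], [1.0, 1.5],
                  [np.nan, np.nan], [np.nan, np.nan],
                  [4.0, 3.0]])
print(fill_tracking_gaps(track))
```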
Omega uses sensors and multiple cameras running at 250 frames per second, and the company claims its beach volleyball system is 99 percent accurate. Toby Breckon, professor of computer vision and image processing at Durham University, is nonetheless interested to see how the system performs at the Games, specifically whether it is fooled by differences in race and gender.
Zobrist confirms a whole new set of innovations for the 2024 Paris Games, with further changes in timekeeping, scoring, motion sensors, and positioning systems planned for Los Angeles in 2028.
Ambi Robotics, a pioneer in simulation-to-reality AI, has introduced AmbiKit, a multi-robot kitting solution that can identify and pick items from bulk storage. AmbiKit is aimed at transforming the fastest-growing eCommerce supply chain, online subscription boxes, with AI-powered pick-and-place robots. The new kitting system uses a five-robot picking line to augment pick-and-place tasks.
The Emeryville, California-based company claims its new kitting robot system can work 10,000x faster than alternatives. AmbiKit is proficient at picking and packing millions of unique items into bags or boxes for online orders. Its AI allows the robots to adapt to any hardware configuration and learn new grasping methods, and because the systems are cloud-connected, new techniques are shared across the entire population of AmbiKits (no customer data is shared).
“Retailers and global eCommerce brands are operating in a year-round ‘peak’ season. In order for high-growth brands to fulfill online orders, companies are deploying AI-powered robotic systems to increase resilience in the supply chain. AmbiKit can immediately improve efficiency by deploying into existing workflows and successfully pick and place millions of unique items from day one,” says Jim Liefer, CEO of Ambi Robotics.
Since its inception, the subscription eCommerce market has seen tremendous growth: it was valued at $13.2 billion in 2018 and is expected to reach $478 billion by 2025. Consumers are demanding convenient, personalized product choices, and robotic kitting systems let retailers offer subscriptions, customized orders, and on-time deliveries at reduced operating costs.
AmbiKit can work efficiently in the subscription box market and order personalization. Currently, it can sort up to 60 products per minute. AmbiKit alleviates the manual sorting processes in warehouses that cost millions of dollars annually due to human error, employee injury, and high employee turnover.
Data management and analytics company SAS announced its plans for an initial public offering (IPO) in a recent press release. According to the release, the company plans to go public by 2024.
SAS is now focusing on refining its financial reporting structure and further enhancing its artificial intelligence platform to prepare for its IPO. The company plans to conduct research and develop new and better artificial intelligence software for data management and analytics.
SAS now has enough market leadership and funds to back its decision of going public. Co-founder and CEO of SAS, Jim Goodnight, said, “By moving toward IPO readiness, we can open up new opportunities for SAS employees, customers, partners, and our community to participate in our success, ensuring the brightest possible future for all of us.”
He further mentioned that the company is on a path of sustainable growth that has enabled SAS to earn clients’ trust in its uniquely developed platform. “We have built a strong operational and financial foundation, setting us up for an even better future. Now, it’s time to prepare for this next chapter,” Goodnight added.
Last year, the company continued its 45-year streak of profitability by generating $3 billion in revenue, and it posted 8.4% growth in the first half of 2021. The company also takes workplace ethics seriously and has maintained its position as a global pioneer of workplace culture.
The company has successfully understood the need for data analytics software in pandemic times and has developed industry-leading solutions to meet the requirement.
Goodnight mentioned, “As we chart our path toward an IPO, we will continue to invest in our brand and platform, prioritizing our core values and empowering customers to solve their most complex problems.”
Robin Li Yanhong, the co-founder and CEO of Baidu, predicts eight artificial intelligence technologies that will shape the future of our society in the next decade.
He spoke about next-generation artificial intelligence technologies at the ABC Summit 2021, held in Beijing on July 29. Li believes humankind will witness both a quantitative and a qualitative technological transformation in the coming years.
He mentioned technologies like machine translation, biological computing, autonomous driving vehicles, digital city operation, deep learning frameworks, artificial intelligence-powered chips, knowledge management systems, and intelligent personal assistants would play a vital role in this revolution.
“At present, the world is ushering in a new round of innovation. The intelligent economy with AI as the core driving force has become a new engine for economic development. With the industry and society being more and more aware of the true value of AI, AI technology has entered a period of rapid application after long-term investment and accumulation,” said Robin Li.
He further mentioned that cross-language, real-time communication, once portrayed only in movies, is now becoming a reality. Li’s company Baidu had earlier worked with the Chinese government to develop a 3D artificial intelligence training system that helps divers train more precisely using motion capture and data analysis.
Experts believe that new technologies will help people in every walk of life and will also boost economic development on a global scale. Baidu is also developing a cloud-based artificial intelligence chip named Kunlun 2 that will enter the mass production stage by the end of this year.
Chief Technology Officer of Baidu, Wang Haifeng, said, “We are in the best age of technological innovation and industrial development.” He added that advanced technologies would bring both digital transformation and intelligent upgrades that increase economic growth.
Fitness application startup Insane AI has raised $873,000 in a seed funding round led by pi Ventures. Investors including Anupam Mittal, Sameer Pithawalla, Soumil Majumdarm, Karan Tanna, Arjun Jain, and Lets Venture participated in the round.
Insane AI is a Bengaluru-based tech startup that specializes in developing artificial intelligence-powered fitness applications to help customers achieve their fitness goals. The firm was founded in early 2021 by Anurag Mundhada, Jayesh Hannurkar, and Sourabh Agarwal.
The company also provides a mobile app that gives users information about artificial intelligence, machine learning, and data science. The app is available as a free download on the Google Play Store.
Co-founder Anurag Mundhada said, “Mainstream fitness formats can become drab and don’t challenge or inspire people enough to really build a long-term habit of fitness. Our unique gamified workout format keeps users highly engaged and motivated to give their best during every session, allows them to monitor their progress, and keeps them committed to their fitness goals.”
The company noted a 50% increase in downloads during the COVID-19 pandemic, as gyms and fitness centers were closed. Its platform combines fitness and gaming, using machine learning, artificial intelligence, augmented reality, and computer vision to provide an immersive experience for users.
The application tracks the user’s sleep routine, nutrition intake, mental health, and physical fitness to give exercise suggestions.
Shubham Sandeep of pi Ventures said, “Digital health and fitness has seen a rising demand amidst the pandemic and is already a multi-billion dollar industry. However, the at-home fitness experience is lacking, which is waiting to be disrupted by technological advances such as artificial intelligence, augmented reality, and computer vision.”
He further added that they have complete faith in Insane AI for developing solutions to reimagine the fitness industry.
The Michael J. Fox Foundation (MJFF) and the research arm of IBM have developed an AI model that can group typical Parkinson’s disease (PD) symptom patterns. This AI model can accurately identify the progression of these symptoms in a patient, regardless of whether or not they are taking medications to mask those symptoms.
This means that in the future, doctors will be capable of utilizing AI to predict how their patients’ diseases will proceed, allowing them to manage their symptoms better. This was one of the primary goals the two organizations had set out to achieve from the start, according to the discovery results published in The Lancet Digital Health.
The human motor system uses a sequence of distinct movements to accomplish bodily activities, e.g., arm swinging when walking, running, or jogging. These movements and the transitions between them produce activity patterns that can be monitored and examined for indications of Parkinson’s disease. Researchers study physical measurements taken from Parkinson’s patients, which differ from those of non-patients, and how those differences develop over time indicates disease progression. But until the recent breakthrough by Big Blue, it was unknown why the disease turns more severe in some people with Parkinson’s than in others.
Since July 2018, IBM Research and MJFF have been collaborating to see how machine learning may be used to assist physicians in better understanding the underlying biology of Parkinson’s disease, especially given how it proceeds so differently from person to person. This machine-learning algorithm evaluated data from patients over seven years and identified trends in their symptoms that were connected to neurodegeneration. Based on this insight, the team created an AI computer model that might help doctors anticipate how a patient’s condition would evolve, aiding them in giving the correct treatments at the right time or deciding who would benefit the most from a clinical trial. The researchers identified eight distinct states in Parkinson’s disease, each containing both motor and non-motor symptoms, and discovered that the disease might shift between them in no particular sequence over time. One of these states included severe cognitive impairment.
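The study’s model is not reproduced here, but the general idea of learning hidden disease states and their transitions from longitudinal symptom data can be sketched with the open-source hmmlearn package (an assumption for illustration; the paper does not say it used this library, and the data below is toy data):

```python
import numpy as np
from hmmlearn import hmm

# Toy longitudinal data: each row is one clinic visit with a few
# motor/non-motor symptom scores; `lengths` gives visits per patient.
rng = np.random.default_rng(0)
visits = rng.normal(size=(200, 4))          # 200 visits, 4 symptom scores
lengths = [10] * 20                          # 20 patients, 10 visits each

# Fit an 8-state Gaussian hidden Markov model: each hidden state stands
# for a symptom profile, and the transition matrix captures how patients
# can move between states over time in no fixed order.
model = hmm.GaussianHMM(n_components=8, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(visits, lengths)

states = model.predict(visits[:10])          # inferred states for one patient
print(states)
print(model.transmat_.round(2))              # learned state-transition matrix
```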
IBM stated that its aim is to use AI to help with patient management and clinical trial design. “These goals are important because, despite Parkinson’s prevalence, patients experience a unique variety of motor and non-motor symptoms,” the company added.
The Michael J. Fox Foundation’s Parkinson’s Progression Markers Initiative provided the data for this research. It is a clinical study originally started in 2010 in cooperation with more than 30 biotech, pharmaceutical, non-profit, and private firms. The study, which included over 1,400 patients from throughout the world, has gathered years of patient data from health records, wearable devices, and cellphones, as well as from genome sequencing and analysis of specimens collected over the course of the disease. According to IBM, this is the largest and most robust volume of longitudinal Parkinson’s patient data to date.
The findings were compared to those of a control group of 610 Parkinson’s disease patients from the National Institute of Neurological Disorders and Stroke Parkinson’s Disease Biomarker Program (PDBP). This aided in the validation of the AI model that IBM researchers had been working on since mid-2018.
Messaging service provider Gupshup raised $240 million in an additional funding round on Wednesday. Investors including Tiger Global, Fidelity Management, Think Investments, Malabar Investments, Harbor Spring Capitals, and White Oak participated in Gupshup’s Series F round.
Earlier this year, the startup raised nearly $100 million. The fresh funding has increased Gupshup’s market valuation to $1.4 billion. The company plans to use the funding to expand its market reach and to develop its business messaging platform further.
Co-founder and CEO of Gupshup, Beerud Sheth, said, “Conversation is becoming a bigger part of doing business, and it has partly been driven by the pandemic. Second, we have always been the leader in this space, but the product innovation we have focused on in the last two to three years has worked in our favor.”
He further mentioned that new investors would provide crucial insights to the company for it to plan its future strategies. The startup is also planning to use a certain amount of funds for a share buyback for its loyal employees and investors.
Gupshup is a San Francisco-based company founded by Dr. Milind R Agarwal, Beerud Sheth, and Rakesh Mathur in 2004. The firm offers a conversational messaging platform to businesses to share short messages privately and publicly.
The platform delivers more than 4 billion messages every day and has sent a total of 150 billion messages to date. Sheth said, “There was still more investor interest, and the company wanted to build relationships with the public market investors. So, having a relationship with them now will help us in doing an IPO later.”
Company officials claim that they have witnessed a 60% increase in growth compared to the previous year. Investors firmly believe that the new funding will help the company’s team to scale up its operations and fill the product gaps in its portfolio.
Climate change is leading to intense and deadly wildfire seasons, stretching firefighting resources to the limit. However, analytical tools like the Suppression Difficulty Index (SDI) are helping firefighters improve their chances of containing wildfires by bringing machine learning and big data into the picture. Firefighters are turning to AI tools to plan control lines and strategies.
For decades, fire managers have been relying on weather patterns and analytical data on fire behavior. Now, they are utilizing predictive technologies and artificial intelligence to plan wildfire management schedules and manage analytics of their terrain in real-time.
Researchers like Mr. Dunn hope their ML models and tools can ensure that scarce firefighting resources are deployed as efficiently as possible. Firefighters currently use Potential Operational Delineations (PODs), a popular tool that Mr. Dunn helped develop. PODs use advanced spatial analytics that let teams plan where to take on wildfires before they even break out: the tool superimposes several statistical models, such as SDI, over a map of a region, helping fire managers lay out control lines and plans of attack in advance.
“You will never take the personal element out of fighting fires, but people make bad decisions under stress – they can’t crunch all this data on their own. This is about reducing the uncertainty, and helping firefighters make better decisions,” said Brad Pietruszka, who has been using analytical tools like PODs since 2017. He is a fire manager at the San Juan National Forest, which covers 1.8 million acres.
Another, more complex tool is the Potential Control Locations (PCL) algorithm, which suggests where to build control lines during a fire. It takes information about ridges, flat ground, fuel on the ground, geography, and distance from roads and public spaces, and samples those features across historical fire perimeters.
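The published PCL algorithm is more sophisticated, but the general approach of scoring candidate locations from terrain features sampled across historical fire perimeters can be sketched, purely as an illustration with made-up features and labels, as follows:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy terrain samples: [distance_to_road_km, slope_deg, fuel_load_index,
# is_ridge]; the label says whether a historical fire perimeter stopped
# at that location (1) or burned past it (0).
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.exponential(2.0, 500),     # distance to nearest road, km
    rng.uniform(0, 40, 500),       # slope, degrees
    rng.uniform(0, 1, 500),        # fuel load index
    rng.integers(0, 2, 500),       # on a ridge line?
])
y = rng.integers(0, 2, 500)        # placeholder labels from past perimeters

# A simple classifier scores each candidate location; mapped over a grid,
# high scores suggest promising places to build control lines in advance.
model = LogisticRegression(max_iter=1000).fit(X, y)
candidate = np.array([[0.3, 5.0, 0.2, 1]])   # near a road, gentle slope, sparse fuel, on a ridge
print(model.predict_proba(candidate)[0, 1])
```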
Tools like SDI, PODs, PCLs, and others provide crucial information to firefighters during out-of-control wildfire seasons. As firefighters turn to AI to fight wildfires, researchers have stressed that these tools are efficient only when coupled with insights from people living in wildfire-prone areas.