Social networking platform Twitter has announced a new bug bounty program to find bias in its artificial intelligence algorithms. The program has a prize pool of $7,000.
The program mainly focuses on detecting bugs and biases in Twitter's image-cropping algorithm. The move comes after persistent criticism of the algorithm, which users claim is biased toward people with lighter skin tones.
According to Twitter, identifying such bugs on its own is challenging, so it has reached out to the public to help improve its technology.
The blog written by Rumman Chowdhury and Jutta Williams mentioned, “We want to foster a similar community focused on ML ethics so that we can identify a wider range of issues than we can on our own. With this challenge, we aim to set a precedent at Twitter, and in the industry, for the proactive and collective identification of algorithmic harms.”
The winner of the competition will be awarded $3,500, and the runners-up will receive $1,000 and $500, respectively. Apart from the traditional award categories, Twitter has also introduced two new categories, Most Innovative and Most Generalizable, each carrying a prize of $1,000.
The blog also mentioned, “It sparks more people to be involved who maybe didn’t have resources and free time. We want to start cultivating and creating a community of ethical AI hackers.”
Interested candidates can register with HackerOne to compete in the program. The registration link is available on Twitter's official website.
The bounty program runs from 30th July 2021 to 6th August 2021, from 9:00 AM to 11:59 PM Pacific Time.
Tech giant Facebook has announced plans to develop new artificial intelligence solutions for accurate age recognition. The company's policy restricts users under the age of 13 from registering on its platform.
Users often bypass this restriction by entering an incorrect age when signing up for Facebook. The company plans to tackle this issue using artificial intelligence.
The company is now seeking help from web browser and operating system developers to provide the data required to develop artificial intelligence algorithms capable of determining users' actual ages.
Currently, Facebook depends on user reports against accounts to verify and remove them from the platform. Vice President of youth products at Facebook, Pavni Diwanji, said, “We’re developing AI to find and remove underaged accounts and new solutions to verify people’s ages. We’re also building new experiences designed specifically for those under 13.”
Facebook is also planning to build a new platform specifically for users below the age of 13. The new social media platform, currently in development, will give these users an age-appropriate online experience that can be managed by their guardians.
Earlier, the company planned to use government IDs to verify users' ages, but it has since changed its approach and is relying on artificial intelligence. The research team plans to use currently available data to train its artificial intelligence system. While explaining the approach in a recently published blog, Diwanji said, “We look at things like people wishing you a happy birthday and the age written in those messages. We also look at the age you shared with us on Facebook and apply it to our other apps where you have linked your accounts and vice versa.”
The Australian Federal Court has decided to recognize artificial intelligence systems as inventors under patent law. This groundbreaking ruling upends the widely held assumption that only humans can invent new technologies.
Earlier this year, the South African government also recognized an artificial intelligence neural network named DABUS as an inventor. Ryan Abbott, a law professor at the University of Surrey, had also filed patent applications in seventeen countries across the globe seeking to recognize DABUS as the inventor of two designs: an adjustable food container and an emergency beacon.
He mentioned that he wanted to support DABUS after realizing the double standard in how current law assesses humans and artificial intelligence. Stephen Thaler, the developer of DABUS, has been engaged in numerous legal cases worldwide seeking this recognition since 2019.
Thaler said, “It’s been more of a philosophical battle, convincing humanity that my creative neural architectures are compelling models of cognition, creativity, sentience, and consciousness.”
He further added that the finding that their artificial intelligence platform DABUS has produced patent-worthy inventions proves, in his view, that the system can function like a conscious human brain.
Dr. Mark Summerfield, an Australian patent attorney, said, “A recognition in Australian law that the term inventor can encompass a machine would not only be well ahead of the dictionaries, it would also be ahead of any significant usage of the word in this way in society at large, or even among qualified experts in the field.”
Countries like the United States and the United Kingdom have already refused to recognize artificial intelligence as an inventor. The developers are now looking forward to a positive response from India, Japan, and Israel.
Brain-computing startup Neuralink has announced in a blog post that it raised $205 million in its Series C funding round. The round was led by Vy Capital, with other participants including Sam Altman, Ken Howery, Fred Ehrsam, and Google Ventures.
Neuralink uses artificial intelligence technologies to help people with brain injuries and disorders. In the long run, it plans to develop brain-machine interfaces that cure various types of brain ailments and create a connected biological and artificial intelligence ecosystem.
The company has now received total funding of $363 million after the new investment round led by the Dubai-based investment firm Vy Capital. Neuralink is a California-based startup founded in 2016 by PayPal co-founder Elon Musk and Max Hodak.
The firm specializes in developing high-bandwidth brain-machine interfaces that enable individuals suffering from paralysis to interact with society. Earlier, in 2017, the company received a $107 million investment from Elon Musk.
Officials said that they aim to “develop brain-machine interfaces that treat various brain-related ailments, with the eventual goal of creating a whole-brain interface capable of more closely connecting biological and artificial intelligence.”
The company plans to use the fresh funding to bring its product to the public and to conduct research to develop and improve its products. According to Elon Musk, Neuralink's first product, the ‘N1 Link’, will be implanted into patients with paralysis to enable them to interact with others.
N1 Link will be completely invisible after the implantation process and will process and transmit data wirelessly. Musk describes the product as a “FitBit in your skull with tiny wires that go into your brain.” The official blog mentioned, “The first indication this device is intended for is to help quadriplegics regain their digital freedom by allowing users to interact with their computers or phones in a high bandwidth and naturalistic way.”
Michael Marciano and Jonathan Adelman, research assistant professors in the Forensic and National Security Sciences Institute (FNSSI), have invented a novel hybrid machine learning approach (MLA) to mixture analysis (U.S. patent number 10,957,421). Their method enables swift, automated deconvolution (separation) of DNA mixtures with increased accuracy, and the software delivers high-confidence conclusions with minimal computing and financial resources.
“I knew that nothing like this approach has been done before and, since this problem set was uniquely appropriate for the use of AI and machine learning, we involved the Office of Technology Transfer in order to pursue intellectual property protection,” says Marciano. “The most exciting aspect of this project was that we introduced the application of AI to forensic DNA analysis.”
Forensic evidence helps give jurors clarity about a criminal offense, and over time complex evidence has come to be processed with greater speed, precision, and sensitivity. For instance, in a bank robbery, the offender is likely to use a pen available to customers. Although the culprit deposited skin cells on the pen, it may also carry cells from several other people who visited the bank, resulting in a complex mixture of DNA.
With the advent of computer software, biological evidence containing complex mixtures or low-level DNA can be evaluated using probabilistic methods. The primary task is to process the DNA mixture and estimate the probabilities that given individuals contributed to it. However, due to the complexity of non-pristine DNA and a lack of resources, limitations still exist, leaving scope for future research.
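To illustrate the probabilistic framing, the sketch below computes the classic single-locus "probability of inclusion" statistic: the chance that a random person would not be excluded as a contributor to a mixture. This is a textbook statistic with hypothetical allele frequencies, not the patented machine learning approach described in the article.

```python
def inclusion_probability(mixture_alleles, freqs):
    """Chance that a random person is *not excluded* from the mixture:
    under Hardy-Weinberg equilibrium, both of their alleles must be
    among the alleles observed in the mixture."""
    p = sum(freqs[a] for a in mixture_alleles)
    return p ** 2

# Hypothetical allele frequencies at a single locus.
freqs = {"A": 0.10, "B": 0.25, "C": 0.05, "D": 0.60}
mixture = {"A", "B", "C"}   # alleles observed in the evidence sample
print(round(inclusion_probability(mixture, freqs), 3))  # 0.16
```

In practice this statistic is multiplied across loci, and modern probabilistic genotyping goes much further, modeling peak heights, dropout, and contributor numbers, which is where machine learning approaches like the one patented here come in.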
In 2014, the National Institute of Justice funded Marciano and Adelman's idea, which is expected to aid the law enforcement and criminal justice community. Marciano says FNSSI students can currently use components of the MLA and, once a commercial partner is secured and the product is fully developed, they will begin implementing it into the FNSSI curriculum. Other patent-pending technologies from Marciano and Adelman are already being used by students.
This technology is available for licensing. Companies interested in exploring commercial applications, or not-for-profit entities interested in developing the technology for public benefit, should contact the Syracuse University Office of Technology Transfer.
Voice biometrics has grown considerably, particularly as Siri and Alexa have expanded the capabilities of voice technology. Two leading companies, Auraya and ValidSoft, are investing in this space to help large companies in financial services and telecommunications minimize fraud. Using voice biometrics, they offer a stronger level of security than PIN codes and passwords, along with a better user experience, ultimately reducing risk and simplifying the verification process.
Like a fingerprint, iris, or face, a person's voice is unique to them. To create a voiceprint, a speaker provides a sample of their voice. “When you want to verify your identity, you use another sample of your voice to compare it to that initial sample,” explained Paul Magee, president of voice biometrics company Auraya.
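The verification step Magee describes can be sketched as comparing fixed-length speaker embeddings by cosine similarity. This is a common approach in voice biometrics generally, not necessarily Auraya's implementation, and the embeddings and threshold below are toy values.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two fixed-length speaker embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(enrolled, sample, threshold=0.8):
    """Accept the claimed identity only if the new voice sample is
    close enough to the enrolled voiceprint."""
    return cosine_similarity(enrolled, sample) >= threshold

# Toy 3-dimensional embeddings standing in for real voiceprints.
enrolled = [0.9, 0.1, 0.4]
same_speaker = [0.85, 0.15, 0.35]
impostor = [-0.2, 0.9, 0.1]

print(verify(enrolled, same_speaker))  # True
print(verify(enrolled, impostor))      # False
```

Real systems derive embeddings from a neural network trained on many speakers and tune the threshold to balance false accepts against false rejects.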
Considering its cost and effort, voice technology remains underappreciated. Despite many hurdles, it can serve as the cornerstone of verification and authentication for many organizations.
Magee said, “Nobody can steal my voice because you can’t steal what I’m going to say next.” When users are prompted to say their phone or account numbers or digits displayed on the screen, that’s active biometrics. “Passive is more in the background,” Magee said. “So while I’m talking with the call center agent, my voice is being sampled and the agent is being provided with a confirmation that it really is me.”
This voice technology can be fooled by recorded audio or by deepfakes, synthetic versions of a person's voice. However, such fraud can be countered with liveness checks or by identifying anomalies in speech.
The greatest barrier to voice biometrics is the legislation of various countries, which creates obligations around the use of voiceprints. The future of the technology depends on simplifying implementation and achieving regulatory compliance, delivering solutions that provide a return on investment and improve productivity.
A recent study published on Thursday in JAMA Ophthalmology found that a computer-vision-powered wearable device may help reduce collisions and other accidents for the blind and visually impaired.
Artificial intelligence’s inexorable advance has brought many novel solutions that make life easier for physically challenged people. When embedded into wearables and portable devices, artificial intelligence is improving life for the blind and visually impaired.
Computer vision is a branch of artificial intelligence (AI) that allows computers and systems to extract useful information from digital pictures, videos, and other visual inputs, and to perform actions or make suggestions based on that data. To mine insights from data, it leverages deep learning, a subtype of machine learning, and convolutional neural networks (CNNs). Trained on labeled examples, a CNN applies layers of convolutions to the numeric patterns that encode real-world visual input and makes predictions about what the model is “seeing,” refining those convolutions until its predictions approach the accuracy of human vision.
In the study, the researchers developed a technological aid for the visually impaired. This aid contained an experimental device and a data recording unit that was housed in a sling backpack with a chest-mounted wide-angle camera fixed on the strap and two Bluetooth-connected vibrating wristbands worn by the user. The camera is linked to a processing unit that records images and assesses the danger of a collision based on the relative movement of approaching and surrounding objects in the camera’s field of view.
If an impending collision is detected on the left or right side, the corresponding wristband vibrates, whereas a head-on collision makes both wristbands vibrate. Unlike walking canes or previous devices that warn of surrounding objects whether or not the user is walking toward them, this device evaluates relative motion and warns only of approaching obstructions that pose a collision threat, ignoring items unlikely to cause a collision. It will not replace the walking cane altogether, however. In fact, the research showed that when the wearable device was used in combination with a long cane, the risk of collisions fell by 37% compared with other mobility aids.
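A toy sketch of this relative-motion idea: an object on a collision course appears to expand in the camera frame, and its apparent size divided by its expansion rate gives a rough time-to-collision. The thresholds, frame width, and size-based estimate below are illustrative assumptions, not the study's actual algorithm.

```python
def collision_warning(size_prev, size_now, x_center,
                      dt=0.1, ttc_threshold=2.0, frame_width=640):
    """Toy looming check: warn only when an object is both expanding
    (approaching) and close enough in time to pose a collision risk."""
    expansion = (size_now - size_prev) / dt
    if expansion <= 0:                 # static or shrinking: moving away
        return None
    ttc = size_now / expansion         # rough seconds until contact
    if ttc > ttc_threshold:
        return None                    # approaching, but not imminent
    third = frame_width / 3
    if x_center < third:
        return "left"                  # vibrate the left wristband
    if x_center > 2 * third:
        return "right"                 # vibrate the right wristband
    return "both"                      # head-on: vibrate both wristbands

print(collision_warning(40, 50, x_center=100))  # left
print(collision_warning(50, 50, x_center=100))  # None
```

The key property matches the article's description: a stationary object the user is not approaching produces no expansion and therefore no warning.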
According to Dr. Gang Luo, an associate scientist at the Schepens Eye Research Institute of Mass Eye and Ear, many blind individuals use long canes to detect obstacles and collision risks, but the risk is not entirely eliminated. His team therefore sought to develop a device that can augment these everyday mobility aids and further improve their safety. Dr. Luo is also an associate professor of ophthalmology at Harvard Medical School in Boston.
Dr. Luo trialed the device with his colleagues in his vision rehabilitation lab, including the lead author Shrinivas Pundlik, Ph.D., who designed the computer vision algorithm. They tested the device on 31 blind and visually impaired adults who use a long cane or guide dog, or both, to aid their daily mobility.
After being instructed on how to operate the wearable device, participants used it for roughly a month during everyday activities while continuing to use their usual mobility equipment. The wearable was randomized to switch between active and silent modes. In active mode, users received vibrating alerts for imminent collisions; in silent mode, the device analyzed and captured images but did not vibrate to warn users of impending collisions.
According to Dr. Luo, the silent mode is similar to the placebo condition in many drug trials. During testing and analysis, neither the wearers nor the researchers knew when the device modes changed. The researchers examined the recorded footage for collisions and assessed the device's efficiency by comparing collisions that happened during the active and silent modes. This was how the team found that the collision frequency in active mode was 37% lower than in silent mode.
The team also notes that the wearable may not have identified every conceivable danger. Before seeking U.S. Food and Drug Administration clearance, the researchers want to improve the computer vision processing and camera technology to make the device more efficient, smaller, and more visually attractive.
The clinical trial of this computer vision wearable device was funded by a grant from the U.S. Army Medical Research and Materiel Command, and the device is patented by Mass Eye and Ear.
Omega’s relentless evolution in timekeeping technology has delivered flawless timing to the world’s finest competitors. Omega is the official timekeeper of the Tokyo 2020 Olympics, and it currently uses computer vision and motion sensors for events like swimming, gymnastics, and beach volleyball to drive on-screen graphics.
Omega has carried the legacy of Olympic timekeeping since 1932. From the stopwatch to the futuristic starting pistol, Omega, together with the British Race Finish Record Company, has helped deliver precise recording across various sporting events. Timekeeping has since switched entirely to electronics, with photo-finish cameras capturing 10 new world records.
For the 2021 Games in Tokyo, the International Olympic Committee has introduced four new sports: skateboarding, surfing, karate, and sport climbing. But the most interesting work is how Omega has spent the last four years training its in-house artificial intelligence to learn beach volleyball.
Beach volleyball is played by two teams of two players on a sand court divided by a net. Training an AI for the sport requires positioning and motion technology, which helps it recognize numerous shot types, from smashes, spikes, and passes to blocks, as well as the ball’s flight path. This data is then combined with information from gyroscope sensors in the players’ clothing; the motion sensors tell the system the direction of the athletes’ movement, along with the height and speed of their jumps. All of this data is processed and fed live to broadcasters for use in commentary or on-screen graphics.
The main challenge for the AI is dealing with missing data while the ball is outside camera coverage; it must recalculate the null values once the object comes back into view. “When you can track the ball, you will know where it was located and when it changed direction. And with the combination of the sensors on the athletes, the algorithm will then recognize the shot,” Zobrist says.
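The gap-filling Zobrist describes can be sketched as linear interpolation across frames where the ball was not detected. This is an illustrative stand-in, not Omega's actual method, which would account for the ball's curved flight path.

```python
def fill_gaps(track):
    """Linearly interpolate missing (None) ball positions between the
    last detection before a gap and the first detection after it - a
    simple stand-in for recalculating null values once the ball is
    back in the cameras' view."""
    filled = list(track)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            j = i
            while j < len(filled) and filled[j] is None:
                j += 1                 # find the end of the gap
            if i > 0 and j < len(filled):  # gap bounded on both sides
                (x0, y0), (x1, y1) = filled[i - 1], filled[j]
                n = j - i + 1
                for k in range(i, j):
                    step = k - i + 1
                    filled[k] = (x0 + (x1 - x0) * step / n,
                                 y0 + (y1 - y0) * step / n)
            i = j
        else:
            i += 1
    return filled

track = [(0.0, 0.0), None, None, (3.0, 6.0)]
print(fill_gaps(track))
# [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
```

Gaps at the start or end of the track, where only one side is bounded by a detection, are left as-is, since straight-line interpolation has nothing to anchor to.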
Omega uses sensors and multiple cameras running at 250 frames per second, and the company claims its beach volleyball system is 99 percent accurate. Toby Breckon, professor in computer vision and image processing at Durham University, however, is interested to see how the system performs at the Games, specifically whether it is fooled by differences in race and gender.
Zobrist confirms a whole new set of innovations for the Paris Games in 2024 and consistent changes in timekeeping, scoring, motion sensors, and positioning systems for Los Angeles in 2028.
Ambi Robotics, a pioneer in simulation-to-reality AI, has introduced AmbiKit, a multi-robot kitting solution that can identify and pick items from bulk storage. AmbiKit aims to transform the fastest-growing eCommerce supply chain, online subscription boxes, with AI-powered pick-and-place robots. The new kitting robot system features a five-robot picking line that augments pick-and-place tasks.
The Emeryville, California-based company claims its new kitting robot system can work 10,000x faster than alternatives. AmbiKit is proficient at picking and packing millions of unique items into bags or boxes for online orders. Its AI allows the robots to adapt to any hardware configuration and learn new grasping methods, and thanks to their cloud-based nature, they can share the latest techniques across the entire population of AmbiKits (no customer data is shared).
“Retailers and global eCommerce brands are operating in a year-round ‘peak’ season. In order for high-growth brands to fulfill online orders, companies are deploying AI-powered robotic systems to increase resilience in the supply chain. AmbiKit can immediately improve efficiency by deploying into existing workflows and successfully pick and place millions of unique items from day one,” says Jim Liefer, CEO of Ambi Robotics.
Since its inception, the subscription eCommerce market has seen tremendous growth: it was valued at $13.2 billion in 2018 and is expected to reach $478 billion by 2025. Consumers are demanding convenient, personalized product choices, and with robotic kitting systems, retailers can offer subscriptions, customized orders, and on-time deliveries at reduced operating costs.
AmbiKit can work efficiently in the subscription box market and order personalization. Currently, it can sort up to 60 products per minute. AmbiKit alleviates the manual sorting processes in warehouses that cost millions of dollars annually due to human error, employee injury, and high employee turnover.
Data management and analytics company SAS announced its plans for an initial public offering (IPO) in a recent press release. According to the release, the company plans to go public by 2024.
SAS is now focusing on refining its financial reporting structure and further enhancing its artificial intelligence platform to prepare for its IPO. The company plans to conduct research and develop new and better artificial intelligence software for data management and analytics.
SAS now has the market leadership and funds to back its decision to go public. Co-founder and CEO of SAS, Jim Goodnight, said, “By moving toward IPO readiness, we can open up new opportunities for SAS employees, customers, partners, and our community to participate in our success, ensuring the brightest possible future for all of us.”
He further mentioned that the company is on the right track of sustainable growth that has enabled SAS to gain the trust of its clients on its uniquely developed platform. “We have built a strong operational and financial foundation, setting us up for an even better future. Now, it’s time to prepare for this next chapter,” Goodnight added.
Last year the company continued its 45-year streak of profitability by generating revenue of $3 billion, and it has noted growth of 8.4% in the first half of 2021. Workplace ethics is something the company takes very seriously, and it has maintained its position as a pioneer of workplace culture worldwide.
The company has successfully understood the need for data analytics software in pandemic times and has developed industry-leading solutions to meet the requirement.
Goodnight mentioned, “As we chart our path toward an IPO, we will continue to invest in our brand and platform, prioritizing our core values and empowering customers to solve their most complex problems.”