
Copycats or Inspired Art: Is AI-generated art causing Copyright Infringement?

AI image generator copyright
Image: © z1b/Stock.adobe.com

In September, Kris Kashtanova, a former programmer and artist living in New York, announced on their Instagram profile that Zarya of the Dawn, an AI-generated graphic book, had been registered for U.S. copyright. It was recognized as the first work produced using AI art generators to earn such recognition from the U.S. Copyright Office; other creators had previously failed to achieve this milestone.

In the same month, Getty Images removed AI-generated artwork from its platform, including images created with OpenAI’s DALL-E, Midjourney, and Meta AI’s Make-A-Scene. Getty said the decision was driven by concerns that the application of copyright law to images produced with these tools remains unsettled.

Most recently, DeviantArt announced its own AI generator, DreamUp, while offering users an option to disallow art-generating AI systems from using their artwork without consent. However, it immediately faced backlash from users because opting out required submitting a form that could take days to process.

The copyright concern is not unwarranted: AI image and art generators are trained on publicly available visuals, and the new visuals they create frequently resemble works protected by copyright law. Apart from the ethical and legal ramifications of co-creating art with AI, recent AI advancements have raised further concerns. The increasing use of AI to create magazine covers, posters, and logos, for example, raises the worrisome question of whether AI will someday replace artists.

Despite the backlash from original artists and photographers, courts have yet to weigh in on possible violations of regional copyright laws. Meanwhile, some artists wonder whether AI will foster or stifle creativity, not least because copyright regulations in the US and the EU do not expressly cover AI-generated work.

A flourishing industry has grown up as artists experiment with the potential of AI-backed computer programming. In recent months, AI-generated artworks have won top prizes in digital art contests and fetched high prices at auction. This may seem unfair: artists take years to perfect their craft, while AI achieves comparable results from a single prompt.

Apart from digital theft, there is the need to define ‘digital creator,’ especially since future legal disputes will likely hinge on the vague question of whether a person who merely makes creative decisions can be deemed the creator. It is also challenging to pinpoint the original creator of an AI artwork because of how such artwork is made. The first step is writing an algorithm, which is done by a team of coders. Next, an individual (the AI artist) supplies input data or a prompt, after which the AI autonomously produces the result, the ‘original’ work. Most copyright laws in place reward authentic original work, since their primary motive was to encourage both the economy and creative endeavor.

For example, a person may not be considered the owner of AI artwork if they merely generate art from a generic textual prompt like “SpongeBob SquarePants in the medieval era.” However, a claim of authorship might be acceptable if someone crafts a very precise prompt or an original idea, produces a large number of artworks, and makes further adjustments. Even then the claim is uncertain, because the unpleasant reality of this market is that haste is sometimes prioritized over quality, and a passable AI-generated image can satisfy many briefs.

Interestingly, when an ‘original’ AI-generated work of art is not copyrighted and falls in the public domain, anybody can reproduce it, distribute it, use it commercially, or sell it to others. The residual effect might be to dissuade original artists from producing creative works of art.

Many media-based businesses view this as another avenue of profit, since it makes it possible to quickly create products based on seemingly copyright-free artwork instead of spending weeks designing an original piece. As a result, such companies may no longer feel the need to pay an artist for original artwork or visuals; they can enter a few prompts or reference images into the system and get similar results in a matter of seconds.

Although AI image generators have no qualms about ingesting content from visual artists without their knowledge, the key distinction is how ‘inclusion’ is defined. A model trained on a dataset does not redistribute or contain the dataset itself: if you cannot look inside the model and directly extract a picture, the picture is, in a legal sense, no longer there. Stability AI is a partially British corporation whose image dataset was compiled by the German non-profit LAION, and Germany has a Text and Data Mining exemption similar to the UK’s, making LAION’s activity legitimate. Would that suggest that copyrighted images and artwork can be scraped legally?

The above-mentioned instances highlight another ethical black hole: can using data to train such models itself violate copyright law? OpenAI, the research group behind DALL-E, says it developed the program by scraping and analyzing millions of annotated images from the web, although the details are private. Other generative AI tools, like those from Stability AI, depend on visuals already publicly available on Pinterest, Shutterstock, and elsewhere.

Web scraping of images has been a hot topic since Clearview AI scraped billions of social media photos. However, if AI technologies access artworks only to find patterns in the pictures and their descriptions, and the ‘new’ artworks cannot be traced back to the originals even via reverse image search, is it still illegal?
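Reverse image search typically rests on perceptual hashing: reducing an image to a small fingerprint that survives minor edits. As a rough illustration of why heavily transformed AI outputs evade it (a simplified sketch, not how any particular search engine works), a basic average hash maps an image to 64 bits and compares fingerprints by Hamming distance:

```python
# Illustrative average-hash ("aHash") sketch in pure Python.
# An image is modeled here as an 8x8 grid of grayscale values (0-255);
# real implementations first resize and grayscale an actual image file.

def average_hash(grid):
    """Return a 64-bit perceptual hash: 1 where a pixel is above the mean."""
    pixels = [p for row in grid for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests a near-duplicate."""
    return bin(h1 ^ h2).count("1")

# A toy "image", a lightly brightened copy, and an unrelated (inverted) image.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
near_copy = [[min(255, v + 10) for v in row] for row in original]
unrelated = [[255 - v for v in row] for row in original]

d_copy = hamming_distance(average_hash(original), average_hash(near_copy))
d_other = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_copy, d_other)  # the near-copy is far closer than the unrelated image
```

A small edit leaves the hash nearly unchanged, while a different image lands far away; an AI model that only reproduces statistical patterns, rather than pixels, produces images whose hashes share nothing with the training images.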

Earlier this year, the Ninth Circuit reiterated that scraping openly accessible data from online sources does not violate the Computer Fraud and Abuse Act (CFAA), which defines what qualifies as computer hacking under U.S. law. Nevertheless, no court has yet ruled on whether the data-ingestion stage of an AI training exercise counts as fair use under American copyright law.

The decision by the U.S. Court of Appeals for the Ninth Circuit came out of a prolonged legal dispute in which LinkedIn sought to block a competitor from scraping personal information from users’ public profiles. The case reached the United States Supreme Court last year, which remanded it to the Ninth Circuit to re-hear the appeal.

Regardless of whether you consider AI-generated artworks to be original works of art or stylistically inspired plagiarism, they reside in a perplexing legal gray area that no regulator or court has yet resolved.


Cristiano Ronaldo teams up with Binance to launch Special NFTs

Cristiano Ronaldo NFTs
Source: Binance

As part of an exclusive, multi-year agreement with Binance, a blockchain ecosystem and cryptocurrency infrastructure provider, Portuguese striker Cristiano Ronaldo’s first non-fungible token (NFT) collection debuts on November 18, 2022. According to reports, a worldwide marketing campaign featuring Ronaldo will promote the launch and introduce his followers to Web3 through NFTs.

The first Cristiano Ronaldo NFT collection, according to Binance, will include seven animated statues at four rarity levels: Super Super Rare (SSR), Super Rare (SR), Rare (R), and Normal (N). Each NFT statue depicts a moment from Ronaldo’s life, from his upbringing in Portugal to his iconic bicycle kicks. The Binance NFT marketplace will auction the 45 highest-value CR7 NFTs (5 SSR and 40 SR); the NFTs will be awarded to the highest bidders when the auction closes, which is expected to take 24 hours.

SSR and SR bid prices will start at 10,000 and 1,700 BUSD, respectively. The remaining 6,600 NFTs (600 R and 6,000 N) will be available on Binance beginning at 77 BUSD for the Normal rarity.

Sneak Peek of the CR7 NFT Collection. Source: Binance

Every rarity level will have a unique set of benefits, such as:

–  Personal message from Cristiano Ronaldo

–  Autographed CR7 & Binance merchandise

–  Guaranteed access to all future CR7 NFT drops

–  Complimentary CR7 Mystery Boxes

–  Entry into giveaways with signed merchandise and prizes

Additionally, new customers will get a Cristiano Ronaldo Mystery Box when they join Binance.com (and complete KYC). These boxes could include limited-edition Ronaldo NFTs. Only the first 1.5M new Binance users who sign up with referral ID ‘RONALDO’ are eligible for the CR7 Mystery Boxes.

Read More: Nike is Launching .Swoosh Web3 Platform to Offer its NFT Apparel on Polygon

The launch takes place the day after Piers Morgan’s YouTube channel published the complete Cristiano Ronaldo interview about how Manchester United abandoned him, which makes the timing seem like a smart move. Ahead of the two-part release, the first of which aired on Wednesday, Morgan and Ronaldo had been teasing the 90-minute interview.

On his channel on Tuesday, Morgan remarked that even if some people appear to be criticizing Ronaldo after viewing short clips of the interview, many would identify with him after watching the entire exposé. In the shocking tell-all interview with Morgan, Ronaldo said he has “no respect” for United manager Erik ten Hag and took a swipe at former teammates Wayne Rooney and Gary Neville.

Then, he focused on the Glazer family, who own Manchester United, announcing: “Manchester [United] is a marketing club, they will get its money from the marketing, the sports they don’t really care, in my opinion.”

The interview, followed by Ronaldo’s NFT announcement on Twitter, has made him the center of huge backlash and ridicule from fans. It will be interesting to see whether the interview helps the NFTs sell.


WhatsApp India Head and Meta India Public Policy lead resign

WhatsApp India Head resigns

Meta India’s public policy lead Rajiv Aggarwal quit today, two weeks after Meta India head Ajit Mohan left to take up a job at Snap. WhatsApp’s India head Abhijit Bose has also resigned, the company said.

According to Meta, Rajiv Aggarwal decided to step down from his role at Meta in order to pursue another opportunity.

Will Cathcart, Head of WhatsApp, said, “His entrepreneurial drive has helped our team deliver new services benefiting millions of people and businesses. There is so much more WhatsApp can do for India, and we are excited to help advance India’s digital transformation.”

Read More: Waymo Uses Its Robotaxis To Create Real-Time Weather Maps

The tech giant also announced Shivnath Thukral as Director of Public Policy for Meta India across all its platforms. Thukral previously served as the Director of WhatsApp Public Policy in India.

Meta recently laid off over 11,000 employees in the first significant round of layoffs in the social media giant’s history, making it the latest tech giant to announce a downsizing.


Meta Introduces ‘Galactica,’ an AI System that Generates Academic Papers from Simple Text Inputs

meta galactica

Meta AI has introduced a new language model called ‘Galactica,’ which generates original scientific and academic text from simple text inputs. Galactica can also answer direct questions, explain its answers, and provide citations for the sources it used.

Meta AI has been actively working on numerous language models like the OPT-175B and PEER and studying the human brain for language processing. With Galactica, researchers aim to summarize academic literature, solve math problems, generate Wiki articles, and accomplish much more. 

Galactica was trained on a massive corpus of scientific and academic papers, knowledge bases, and reference material. Galactica compresses this information into a 120-billion-parameter model capable of fitting on a single NVIDIA A100 GPU. The model is a competitive alternative to GPT-3, OpenAI’s language model, which can reportedly write an academic thesis by itself in about two hours.

Galactica explained in its own words, “Galactica models are trained on a large corpus comprising more than 360 millions in-context citations and over 50 millions of unique references normalized across a diverse set of sources.”

Simply type in a text input and click “Generate.” As seen in the screenshot above, given the input “breadth-first search algorithm,” Galactica outputs a brief paper. To expand the content, Galactica offers a “Generate More” option.

Read More: GitHub creates private vulnerability reports for public repositories

Galactica’s scientific knowledge derives from the effort that went into building its training dataset. To ensure the model learns from various modalities, including natural language, molecular sequences, and code, additional tokens were created to help it identify each of them.


Galactica is the result of a collaboration between researchers from Meta AI and Papers with Code, who developed the model fully open source and published the research.


Amazon introduces its new item-picking robot: Sparrow

On November 10, Amazon introduced its new item-picking robot, Sparrow, designed to detect, select and handle millions of individual warehouse inventory items. Sparrow should also minimize employees’ repetitive tasks and improve worker safety. 

As per Amazon, Sparrow is designed to pick out items on shelves so they can be packed into orders for shipping to customers, one of the most challenging tasks in warehouse robotics because items vary in shape, size, and texture. Sparrow uses machine-learning algorithms and cameras to detect items on shelves and plans how to grab them using a custom gripper with several suction tubes.

Read more: National Commission for Women launched the 4th phase of the Digital Shakti campaign

According to Amazon, Sparrow can handle 64 percent of the more than 100 billion items in its inventory. It can pick varied items such as DVDs, socks, and stuffed toys, but still struggles with items that are loose or have complex packaging.

Amazon first introduced robotics in 2012 and has since deployed over 520,000 robotic drive units globally, capable of handling a variety of warehouse tasks. Sparrow joins Amazon’s earlier robotic systems, such as Robin and Cardinal, to streamline and speed up warehouse work while freeing human workers from repetitive, tedious, and dangerous tasks.


National Commission for Women launched the 4th phase of the Digital Shakti campaign

The National Commission for Women (NCW) has recently launched the 4th phase of its Digital Shakti campaign. This phase focuses on digitally empowering and skilling women in cyberspace, in collaboration with CyberPeace Foundation and Meta.

Ms. Rekha Sharma, the Chairperson of NCW, said the 4th phase would be a milestone in ensuring safe cyberspace for women. Digital Shakti has been helping women by encouraging their digital participation through proper technical training, and the 4th phase will continue contributing to the larger goal of fighting cyber violence against women.

Read more: Google’s new approach to Reinforcement Learning: Reincarnating Reinforcement Learning (RRL)

The Digital Shakti campaign was started in 2018 to help women fight cybercrime in the most effective ways. Through the campaign, more than 3 lakh women across India have been made aware of cybersecurity practices, data privacy, and how to use technology to their benefit.

 In March 2021, the 3rd phase of the Digital Shakti campaign was launched at Leh with NCW Chairperson Mrs. Rekha Sharma, Lieutenant Governor Shri Radha Krishna Mathur, and Jamyang Tsering Namgyal, MP, Ladakh. In the 3rd phase, the campaign developed a resource center to offer information on all the avenues of reporting in case women face cybercrime.


GitHub creates private vulnerability reports for public repositories

GitHub creates private vulnerability reports for public repositories
Image Source: mediatalk

GitHub now allows developers to discreetly warn their peers about discovered vulnerabilities. According to the company, doing so avoids the “name and shame” game and prevents misuse triggered by premature public disclosure.

GitHub stated in a blog post earlier this week that, given the way the platform was previously set up, there was sometimes no alternative but to expose a vulnerability publicly, which risks alerting prospective threat actors before mitigations can be put in place. For researchers, who are frequently saddled with decisions that can cause further security issues, being able to report code vulnerabilities in confidence is crucial.

According to the blog, security researchers frequently feel responsible for warning users about a vulnerability that could be exploited: “If there are no clear instructions about contacting maintainers of the repository containing the vulnerability, it can potentially lead to public disclosure of the vulnerability details.”

GitHub has now added private vulnerability reporting, essentially a simple reporting form, to address the problem. A security researcher or developer can use the new feature to submit a vulnerability report privately to a public repository. The receiver can accept it, signaling a willingness to collaborate with the researcher on a fix, or reject it, ask more questions, or signal other options.

By visiting the main page of their repository, clicking Settings, and then selecting “Code security and analysis” under “Security,” code maintainers or developers can activate private reporting on GitHub.com. They can select to enable or disable the option by clicking the arrow to the right of “Private vulnerability reporting.”

Read More: Neural Acoustic Fields: MIT-IBM Watson team use Acoustic Information to build ML model

Since reports are handled in a single location, the Microsoft-owned platform also anticipates that the new reporting style will simplify triage and troubleshooting. Moreover, it gives code maintainers the chance to discuss vulnerability details privately with security researchers and developers before working together on a solution using patch-management software.

The initiative was one of several announcements made by GitHub during the GitHub Universe 2022 developer event this month.


Yale uses ML-based PRECISION model for personalized hypertension treatment

Yale ML based model for hypertension patients, precision yale
Image source: Cleveland Clinic

Hypertension, often known as high blood pressure, is one of the most prevalent diseases in the general population, especially among middle-aged and older people. Mild to moderate hypertension can initially be treated with lifestyle changes; if that doesn’t work, blood pressure medication is often considered. Yale University researchers have now created a machine learning-based clinical decision support tool to tailor recommendations for blood pressure treatment goals.

Blood pressure is the pressure that pushes blood through the arteries when the heart beats, supplying oxygen and nutrients to organs and tissues all over the body. Our organs need a normal blood pressure level to function properly and avoid internal injury. In the study, published earlier this week in The Lancet Digital Health, the researchers state that hypertension, defined as sustained blood pressure above 140/90 mm Hg, is one of the main causes of heart disease, disability, and early mortality worldwide. However, there has been disagreement over how far blood pressure should be lowered to reduce this risk, particularly for Type 2 diabetes patients, for whom clinical trials have shown mixed outcomes regarding the benefits of aggressive blood pressure control.

This inspired researchers from Yale to create a machine learning-based tool to help people with and without diabetes determine whether to pursue intensive vs conventional blood pressure treatment objectives. Through a data-driven methodology, the innovative clinical decision support tool encourages collaborative decision-making between patients and healthcare professionals.

To determine whether the superiority of intensive vs routine antihypertensive care can be explained by patient characteristics, lead author Dr. Evangelos K. Oikonomou and senior author Dr. Rohan Khera, assistant professor at Yale School of Medicine and director of the Cardiovascular Data Science (CarDS) Lab, gathered data from two randomized clinical trials: SPRINT (Systolic Blood Pressure Intervention Trial) and ACCORD BP (Action to Control Cardiovascular Risk in Diabetes Blood Pressure). While SPRINT did not include patients with diabetes, ACCORD BP included only patients with type 2 diabetes mellitus. 

Both studies randomly assigned patients to an intensive or standard systolic blood pressure target of 120 mm Hg or 140 mm Hg, respectively.

The researchers chose these studies because the SPRINT trial supported intensive blood pressure lowering, while the ACCORD BP trial found no corresponding benefit from aggressive treatment. Using SPRINT data, the researchers identified 59 factors, including kidney function, smoking, and statin or aspirin use, to build PREssure Control In Hypertension (PRECISION), an ML model aimed at discovering the characteristics of individuals who benefited most from actively reducing blood pressure. Through iterative Cox regression analyses that produced average hazard ratio (HR) estimates weighted by each participant’s phenotypic distance from the index patient of each iteration, they extracted personalized treatment-effect estimates for the primary outcome: time to first major adverse cardiovascular event (MACE; cardiovascular death, myocardial infarction or acute coronary syndrome, stroke, and acute decompensated heart failure). They then used the variables most associated with greater personalized benefit to train an extreme gradient boosting algorithm (XGBoost) to predict the customized effect of intensive systolic blood pressure control.
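The phenotypic-distance weighting at the heart of this approach can be illustrated with a toy sketch on synthetic data (a simplified, hypothetical illustration, not the published PRECISION pipeline, which uses weighted Cox regression on real trial data): participants closer to an index patient in feature space get more weight when estimating that patient’s treatment-vs-control difference.

```python
# Toy sketch of phenotype-distance-weighted treatment-effect estimation
# (synthetic data; illustrative only, not the published PRECISION pipeline).
import math
import random

random.seed(0)

def simulate(n=2000):
    """Each participant: feature vector x, treatment arm (1 = intensive), outcome y.
    In this toy world, intensive treatment only helps the high-x0 phenotype."""
    data = []
    for _ in range(n):
        x = [random.uniform(0, 1), random.uniform(0, 1)]
        arm = random.randint(0, 1)
        effect = 1.0 if x[0] > 0.5 else 0.0
        y = 2.0 + effect * arm + random.gauss(0, 0.3)
        data.append((x, arm, y))
    return data

def personalized_effect(index_x, data, bandwidth=0.2):
    """Weighted difference in mean outcome (treated - control), weighting each
    participant by phenotypic closeness to the index patient index_x."""
    sums = {0: 0.0, 1: 0.0}
    wsum = {0: 0.0, 1: 0.0}
    for x, arm, y in data:
        d2 = sum((a - b) ** 2 for a, b in zip(x, index_x))
        w = math.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel weight
        sums[arm] += w * y
        wsum[arm] += w
    return sums[1] / wsum[1] - sums[0] / wsum[0]

data = simulate()
high = personalized_effect([0.9, 0.5], data)  # phenotype expected to benefit
low = personalized_effect([0.1, 0.5], data)   # phenotype expected not to
print(round(high, 2), round(low, 2))  # high should come out well above low
```

The estimator recovers a near-unit benefit for the high-x0 phenotype and roughly none for the other, which is the individualized signal a population-average comparison would wash out.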

Read More: Researchers use Deep Learning to Hallucinate synthesis of new proteins

The team then evaluated PRECISION on the ACCORD BP trial of patients with type 2 diabetes randomly assigned to intensive versus standard systolic blood pressure control. Patients were divided into groups according to their predicted response to therapy and significant demographic factors (age, sex, cardiovascular disease, and smoking). The researchers found that the tool could identify diabetic patients who benefited from intensive blood pressure management compared with conventional treatment.

According to Khera, it can be difficult to determine the optimal blood pressure targets and treatment plan for individuals with both diabetes and hypertension. In the study, the team used machine learning to enhance inference from two major clinical trials and evaluate the cardiovascular benefit of aggressive blood pressure management. The key finding is that people with diabetes who benefit from such treatment appear to share the benefit profile found in those without diabetes. The paper reports that intensive systolic blood pressure treatment showed a significant cardiovascular benefit in SPRINT, while corresponding benefits were not seen in ACCORD BP. Rather than looking at the effects of the therapies on the population as a whole, this approach let the team assess treatment effects at an individual, personalized level.

According to the researchers, these results suggest that PRECISION can offer trustworthy, useful information to guide decisions about intensive vs. conventional systolic blood pressure treatment among patients with diabetes. Nevertheless, they added that further research in a variety of patient demographics is required to fully comprehend how different factors affect the dangers and advantages of an intensive blood pressure-lowering strategy.

Oikonomou emphasized that, at least until the team prospectively proves its clinical relevance, the proposed machine learning algorithm, PRECISION, is intended for research use only. The study’s authors suggested that a similar methodology could be employed to create more personalized interpretations of clinical trials of diagnostic and therapeutic treatments. Finally, Oikonomou noted that the team is presently investigating the potential of their technology for designing clinical trials that are smarter, more efficient, and safer.


DeviantArt releases DreamUp AI art generator, faces Backlash from its community

DeviantArt DreamUp
Artwork by: Dracu-Teufel666 on DeviantArt

Jumping on the bandwagon, the Wix-owned artist network DeviantArt unveiled DreamUp, its own AI art generator, promising “safe and fair” generation for creators. Images produced by DreamUp carry a visible watermark and are automatically tagged on DeviantArt with the hashtag #AIart.

Based on the Stable Diffusion model, the new generator explicitly labels its images as AI-generated and even credits the artists who inspired them when they are posted to DeviantArt. Additionally, the site offers creators the option to decide whether the tool can use their work as direct inspiration, in an effort to allay artists’ concerns that their work would be copied or used by the generator to create images in their style.

On top of that, the website gives creators the authority to decide whether or not to allow their work to be included in datasets used to train third-party AI models. If they select not to be included in such datasets, a “noimageai” metatag directive will be present in the HTML files of their content pages. Further, a “noai” directive protects their artwork when media files are directly downloaded from DeviantArt’s servers. 
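The “noimageai” directive described above is an HTML meta tag that crawlers are expected to honor. A minimal sketch of how a scraper could check for it before ingesting a page (hypothetical page content; directive names as DeviantArt describes them):

```python
# Sketch: check an HTML page for the "noai"/"noimageai" robots directives
# using only the standard library. The HTML below is illustrative.
from html.parser import HTMLParser

class RobotsDirectiveParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # Look for <meta name="robots" content="noai, noimageai">
        if tag == "meta" and a.get("name", "").lower() == "robots":
            for token in a.get("content", "").split(","):
                self.directives.add(token.strip().lower())

html = """<html><head>
<meta name="robots" content="noai, noimageai">
</head><body>art page</body></html>"""

parser = RobotsDirectiveParser()
parser.feed(html)
print("noimageai" in parser.directives)  # True: the page opts out
```

Note that, like robots.txt, such directives are purely advisory; nothing technically prevents a crawler that ignores them, which is exactly the enforcement gap artists are worried about.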

DreamUp from DeviantArt

According to CEO Moti Levy, these controls are necessary because tens of thousands of images on DeviantArt are already labeled as AI art, and the number of such images published to the site has grown by 1,000% in the past four months alone.

Levy did not clarify whether DreamUp, like DALL-E 2 and most other commercial AI art tools, will automatically filter out objectionable content such as graphic violence and gore. However, he pointed out that DreamUp’s artwork will be subject to DeviantArt’s terms of use and etiquette guidelines, which forbid deepfakes, hostile images, and explicit art.

DreamUp will be available this week as part of DeviantArt’s premium Core plans, which start at US$3.95 per month. Members of DeviantArt may try out the tool for free with up to five prompts.

Read More: Canva unveils text-to-image AI feature

Meanwhile, the move to introduce DreamUp backfired immediately, with DeviantArt facing immense backlash from users because opting out required submitting a form that could take days to process. Worse, the current edition of Stable Diffusion was trained before these exclusions existed, which means DeviantArt users’ past artwork may already have been included in training this iteration of the model; opting out cannot remove that influence, it only prevents their style from being used going forward.

Another glaring cause for backlash was that users had to go into their accounts and manually opt out every single image, since every work of art on the entire website had been marked available by default for AI systems to learn from. DeviantArt released an update that continues to focus mostly on responding to user complaints, which suggests there are fundamental problems with the exercise’s whole purpose rather than simply with specific details of its implementation.

In the absence of agreements or legally binding copyright protections for artists’ works, there isn’t much that can be done to stop companies from continuing to use art from DeviantArt for their own purposes. Levy claimed that DeviantArt has begun contacting other companies in an effort to forge such agreements. He also believes that other image-hosting services need to start negotiating similar agreements with their artists if they are serious about putting leverage on companies developing new AI art generators.


Laser Attacks: A looming threat to Autonomous Vehicles

laser attacks on autonomous vehicles
Source: MIT

Since automakers embarked on the mission to bring autonomous vehicles to the roads, a series of claims, incidents, and investigations has called the safety of these vehicles into question. Most recently, Tesla’s Full Self-Driving (FSD) Beta software reportedly failed to recognize a stationary, kid-sized mannequin at an average speed of 25 mph, according to test-track data from the Dawn Project. Now imagine the blow to the industry from the latest study, which found that autonomous vehicles can be fooled with expertly timed lasers.

Researchers from the United States and Japan have shown that a laser strike can impair autonomous vehicles and erase people from their field of vision, putting those in their path at risk. According to the research, perfectly timed lasers directed at an approaching LIDAR (Light Detection and Ranging) system can create a blind area in front of the vehicle large enough to fully obscure moving pedestrians and other obstructions. Because the erased data gives the car a false perception of a clear road, anything in the attack’s blind zone is in danger. The security flaw was discovered by researchers from the University of Florida, the University of Michigan, and the University of Electro-Communications in Japan.

A schematic of the attack, which can delete lidar data from a region in front of a vehicle, leading to unsafe vehicle movement; the deletion renders a pedestrian in front of the vehicle, visible at left, invisible at right. Credit: Sara Rampazzi/University of Florida

LIDAR works like a spinning radar that uses laser light: autonomous vehicles measure the distance to objects in their route from the reflections of emitted laser pulses, and the sensor thereby helps the vehicle detect its surroundings. The researchers used their own laser to mimic the reflections the sensor expects to receive. Sara Rampazzi, a UF professor of computer and information science and engineering who led the study, reported that in the presence of the spoofed laser pulses, the sensor discounted genuine reflections coming from real obstacles and so failed to detect them.

Using this approach, the researchers were able to delete data for both stationary objects and moving people. Under test conditions, the attack was deployed against an autonomous vehicle to stop it from decelerating as it was designed to do when it encountered a pedestrian. The attacker launched the laser strike from the side of the road, little more than 15 feet from the oncoming car. The study also warned of real-world scenarios in which the attack could follow a slow-moving vehicle using basic camera-tracking equipment; the researchers used very simple tracking software in their experiments, and more advanced equipment could mount the attack from a greater distance.

The technology needed is fairly simple, but the laser must be precisely synchronized to the LIDAR sensor and continuously aimed to keep tracking the moving vehicle. S. Hrushikesh Bhupathiraj, a UF doctoral student in Rampazzi’s lab and one of the lead authors of the study, revealed that although deceiving the sensor requires timing the laser beam to the LIDAR with some accuracy, the data needed for this synchronization is publicly available from LIDAR manufacturers.
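Conceptually, the attack blanks out an angular sector of each LIDAR revolution: returns timed to coincide with the spoofed pulses get discarded, erasing whatever sits in that sector. A toy simulation of the deletion effect (illustrative only; not the researchers’ code, and a real sensor returns a dense 3D point cloud rather than one range per degree):

```python
# Toy simulation of a LIDAR "data deletion" spoofing attack
# (illustrative only; not the published attack code).

def scan(obstacles, n_beams=360):
    """One LIDAR revolution: for each beam angle (degrees), the range to the
    nearest obstacle, or None if nothing is hit. Obstacles are
    (angle_deg, distance_m) points."""
    ranges = [None] * n_beams
    for angle, dist in obstacles:
        beam = int(angle) % n_beams
        if ranges[beam] is None or dist < ranges[beam]:
            ranges[beam] = dist
    return ranges

def spoof(ranges, attack_start_deg, attack_width_deg):
    """Attacker fires pulses synchronized to the sensor's rotation, so returns
    inside the attacked angular window are discarded: a blind sector."""
    out = list(ranges)
    for beam in range(attack_start_deg, attack_start_deg + attack_width_deg):
        out[beam % len(out)] = None
    return out

# A pedestrian 10 m ahead of the vehicle (beams 358-2, i.e. straight ahead).
pedestrian = [(358, 10.0), (359, 10.0), (0, 10.0), (1, 10.0), (2, 10.0)]
clean = scan(pedestrian)
attacked = spoof(clean, attack_start_deg=350, attack_width_deg=20)

print(clean[0], attacked[0])  # 10.0 None -> pedestrian erased from the scan
```

In the clean scan the pedestrian produces returns straight ahead; after the spoofed window is applied, those beams read as empty space, which is exactly the blind zone the researchers describe.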

Read More: The Rise of China in the Autonomous Vehicle Industry

The researchers carried out these experiments to aid the development of more robust sensor systems. Now that such attacks have been demonstrated, the manufacturers of these LIDAR systems will need to upgrade their software or switch to a different method of obstacle detection. The researchers are hopeful that future hardware design upgrades might also strengthen resistance against such attacks.

This is the first reported instance of a LIDAR system being spoofed to stop it from detecting obstacles. The research will be presented at the 2023 USENIX Security Symposium and is publicly available online.

Back in 2015, Dr. Jonathan Petit, a principal scientist specializing in connected vehicles at Security Innovation and a former research fellow in the University of Cork’s Computer Security Group, found while researching the cyber vulnerabilities of autonomous vehicles that a laser attack could paralyze driverless cars and trick them into taking evasive action. He explained to IEEE Spectrum that the attack is quick and easy to execute with readily available tools, such as a Raspberry Pi or Arduino, and can successfully spoof a car at a distance of up to 100 meters.

The long-term success of autonomous cars depends on their ability to accurately identify and understand nearby obstructions in real time, including people, traffic cones, and other vehicles. To achieve precise and reliable detection, most high-level autonomous vehicles fuse multiple perception sources, such as LIDAR and cameras. In theory, the obstacle-detection systems in such vehicles always perform at their best, unlike distracted or intoxicated drivers, and are therefore expected to reduce accidents and the high mortality toll on our roadways. However, if a basic laser attack can disable or confuse them, it is time to reconsider how to address these weaknesses before declaring autonomous driving technology fit for public roads. Especially when such hacks can stunt the growth of the autonomous vehicle industry!
