The National Commission for Women (NCW) has recently launched the 4th phase of its Digital Shakti campaign. This phase focuses on digitally empowering and skilling women in cyberspace, in collaboration with the CyberPeace Foundation and Meta.
Ms. Rekha Sharma, Chairperson of the NCW, said the 4th phase would be a milestone in ensuring safe cyberspace for women. Digital Shakti has been helping women by encouraging their digital participation through proper technical training, and the campaign’s 4th phase will continue contributing toward the larger goal of fighting cyber violence against women.
The Digital Shakti campaign was started in 2018 to help women fight cybercrime in the most effective ways. Through the campaign, more than 3 lakh women across India have been made aware of cybersecurity tips and practices, data privacy, and how to use technology for their benefit.
In March 2021, the 3rd phase of the Digital Shakti campaign was launched in Leh in the presence of NCW Chairperson Ms. Rekha Sharma, Lieutenant Governor Shri Radha Krishna Mathur, and Jamyang Tsering Namgyal, MP, Ladakh. In the 3rd phase, the campaign developed a resource center offering information on all the avenues for reporting cybercrime against women.
GitHub now allows developers to discreetly warn their peers of discovered vulnerabilities. According to the company, doing so will avoid the “name and shame” game and prevent the misuse that public disclosure can invite.
GitHub stated in a blog post earlier this week that, given the way the platform was previously set up, there was sometimes no alternative but to publicly expose a vulnerability, which risks tipping off prospective threat actors before a fix can be implemented. For researchers, who are frequently saddled with decisions that can create further security issues, being able to report code vulnerabilities in confidence is crucial.
According to the blog, security researchers frequently feel accountable for warning users about a vulnerability that could be exploited: “If there are no clear instructions about contacting maintainers of the repository containing the vulnerability, it can potentially lead to public disclosure of the vulnerability details.”
To address the problem, GitHub has added private vulnerability reporting, which is essentially a simple reporting form. A security researcher or developer can use the new feature to privately submit a vulnerability report to a public repository. The maintainers can accept it, signaling a willingness to collaborate with the researcher to fix the problem, or they can reject it, ask further questions, or point to other options.
Code maintainers or developers can activate private reporting on GitHub.com by visiting the main page of their repository, clicking Settings, and then selecting “Code security and analysis” under “Security.” They can then enable or disable the option by clicking the arrow to the right of “Private vulnerability reporting.”
Since reports are handled in a single location, the Microsoft-owned platform also anticipates that the new reporting flow will simplify troubleshooting. It also gives code maintainers the chance to privately discuss vulnerability details with security researchers and developers before working together on a solution using patch management software.
The initiative was one of several announcements made by GitHub during the GitHub Universe 2022 developer event this month.
Hypertension, often known as high blood pressure, is one of the most prevalent diseases in the general population, especially in middle-aged and older people. If a person has mild to moderate hypertension, it can initially be treated with lifestyle changes; if this doesn’t work, blood pressure medication is usually considered. Yale University researchers have created a machine learning-based clinical decision support tool to tailor recommendations for blood pressure control treatment goals.
The pressure that pushes blood through the arteries when the heart beats, supplying oxygen and nutrients to organs and tissues throughout the body, is known as blood pressure. Our organs must maintain a normal blood pressure level in order to function properly and avoid internal injury. In the study, published earlier this week in The Lancet Digital Health, the researchers state that hypertension, defined as a sustained blood pressure above 140/90 mm Hg, is one of the main causes of heart disease, disability, and early mortality worldwide. However, there has been disagreement over how far blood pressure should be lowered to reduce this risk, particularly for Type 2 diabetes patients, for whom clinical trials have shown mixed outcomes regarding the benefits of aggressive blood pressure control.
This inspired researchers from Yale to create a machine learning-based tool to help people with and without diabetes determine whether to pursue intensive vs conventional blood pressure treatment objectives. Through a data-driven methodology, the innovative clinical decision support tool encourages collaborative decision-making between patients and healthcare professionals.
To determine whether the superiority of intensive vs routine antihypertensive care can be explained by patient characteristics, lead author Dr. Evangelos K. Oikonomou and senior author Dr. Rohan Khera, assistant professor at Yale School of Medicine and director of the Cardiovascular Data Science (CarDS) Lab, gathered data from two randomized clinical trials: SPRINT (Systolic Blood Pressure Intervention Trial) and ACCORD BP (Action to Control Cardiovascular Risk in Diabetes Blood Pressure). While SPRINT did not include patients with diabetes, ACCORD BP included only patients with type 2 diabetes mellitus.
Both studies randomly assigned patients to an intensive systolic blood pressure target of less than 120 mm Hg or a standard target of less than 140 mm Hg.
The researchers chose these trials because the SPRINT trial demonstrated a benefit from intensively lowering blood pressure, while the ACCORD BP trial found no significant benefit from aggressive blood pressure treatment. The researchers used SPRINT data on 59 factors, including kidney function, smoking, and statin or aspirin use, to build PREssure Control In Hypertension (PRECISION), an ML model designed to identify the characteristics of individuals who benefited most from actively reducing blood pressure. Through iterative Cox regression analyses that produced average hazard ratio (HR) estimates weighted for each participant’s phenotypic distance from the index patient of each iteration, they extracted personalized treatment effect estimates for the primary outcome: time to first major adverse cardiovascular event (MACE; cardiovascular death, myocardial infarction or acute coronary syndrome, stroke, and acute decompensated heart failure). They then used the variables most frequently associated with greater personalized benefit to train an extreme gradient boosting algorithm (XGBoost) to predict the customized effect of intensive systolic blood pressure control.
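The distance-weighting idea behind those personalized estimates can be sketched in a few lines. This is a hypothetical illustration only: the Gaussian kernel, the feature names, and the toy numbers below are assumptions for the sake of the sketch, not the study’s actual specification.

```python
import math

# Sketch of the weighting idea: each trial participant contributes to an
# "index" patient's treatment-effect estimate in proportion to phenotypic
# similarity. Negative effects here mean fewer adverse events (a benefit).

def phenotypic_distance(a, b):
    """Euclidean distance between two standardized feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def personalized_effect(index_patient, participants, bandwidth=1.0):
    """Weighted average of per-participant effects, weights decaying
    with phenotypic distance from the index patient."""
    num = den = 0.0
    for features, effect in participants:
        w = math.exp(-(phenotypic_distance(index_patient, features) / bandwidth) ** 2)
        num += w * effect
        den += w
    return num / den

# Toy cohort: (standardized features e.g. [age, kidney function], observed effect)
cohort = [
    ([0.1, 0.2], -0.30),   # patients similar to the index benefit
    ([0.2, 0.1], -0.25),
    ([2.5, 2.8], +0.10),   # dissimilar patients show no benefit
]
print(round(personalized_effect([0.0, 0.0], cohort), 3))  # prints -0.275
```

The estimate for the index patient is dominated by phenotypically close participants, which is the intuition behind extracting individualized effects from pooled trial data.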
The team then evaluated the value of PRECISION when applied to the ACCORD BP trial of patients with type 2 diabetes randomly assigned to receive intensive versus standard systolic blood pressure control. Patients were divided into different groups according to their predicted response to therapy and significant demographic factors (age, sex, cardiovascular disease, and smoking). When compared to conventional treatment, researchers discovered that the tool could identify diabetic patients who benefited from intensive blood pressure management.
According to Khera, it can be difficult to determine the optimal blood pressure targets and treatment plan for individuals who have both diabetes and hypertension. In the study, the team used machine learning to enhance inference from two important clinical trials and evaluate a specific cardiovascular advantage of aggressive blood pressure management. The most important finding is that people with diabetes who benefit from such a treatment approach appear to share the benefit profile found in those without diabetes. The paper reports that intensive systolic blood pressure treatment showed a significant cardiovascular benefit in SPRINT, while corresponding benefits were not shown in ACCORD BP. Rather than looking at the effects of the therapies on a population as a whole, this method enabled the team to assess treatment effects at an individual, personalized level.
According to the researchers, these results suggest that PRECISION can offer trustworthy, useful information to guide decisions about intensive vs. conventional systolic blood pressure treatment among patients with diabetes. Nevertheless, they added that further research in a variety of patient demographics is required to fully comprehend how different factors affect the dangers and advantages of an intensive blood pressure-lowering strategy.
Oikonomou emphasized that, at least until the team prospectively demonstrates its clinical relevance, the proposed machine learning algorithm, PRECISION, is intended for research use only. The study’s authors suggested that a similar methodology could be employed to create more personalized interpretations of clinical trials of diagnostic and therapeutic treatments. Finally, Oikonomou noted that the team is presently investigating the potential of their technology for designing clinical trials that are smarter, more efficient, and safer.
Jumping on the bandwagon, the Wix-owned artist network DeviantArt has unveiled DreamUp, its own AI art generator, promising “safe and fair” image generation for creators. Images produced by DreamUp will carry a visible watermark and be automatically tagged on DeviantArt with the hashtag “#AIart.”
Based on the Stable Diffusion model, the new generator will explicitly label the images it produces as AI-generated and even credit the creators whose work inspired them when they are posted on the DeviantArt website. Additionally, the site offers creators the option to decide whether the tool can use their work as direct inspiration, in an effort to allay artists’ concerns that their work will be copied or used by the generator to create images in their style.
On top of that, the website gives creators the authority to decide whether or not to allow their work to be included in datasets used to train third-party AI models. If they select not to be included in such datasets, a “noimageai” metatag directive will be present in the HTML files of their content pages. Further, a “noai” directive protects their artwork when media files are directly downloaded from DeviantArt’s servers.
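As a sketch of how a compliant crawler might honor these opt-out directives, the snippet below scans a page’s HTML for them. The directive names “noai” and “noimageai” come from the article; their exact placement inside a robots meta tag, as shown here, is an assumed convention for illustration.

```python
from html.parser import HTMLParser

# Hypothetical check a dataset builder could run before ingesting a page.
# Assumes the directives appear in a <meta name="robots" content="..."> tag.

class RobotsDirectiveParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "robots":
            for token in attrs.get("content", "").split(","):
                self.directives.add(token.strip().lower())

def may_train_on(html):
    """Return False if the page opts out of AI / image-AI training."""
    parser = RobotsDirectiveParser()
    parser.feed(html)
    return not ({"noai", "noimageai"} & parser.directives)

page = '<html><head><meta name="robots" content="noimageai"></head></html>'
print(may_train_on(page))  # prints False: the page has opted out
```

Whether third-party scrapers actually respect such directives is, of course, the open question the article goes on to raise.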
DreamUp from DeviantArt
According to CEO Moti Levy, this is necessary because tens of thousands of images on DeviantArt’s website have already been labeled as “AI-art.” The number of such images published to the site has increased by 1,000% in the previous four months alone.
Levy did not clarify whether DreamUp, like DALL-E 2 and most other commercial AI art tools, will automatically filter out objectionable content such as graphic violence and gore. However, he pointed out that DreamUp’s artwork will be subject to DeviantArt’s terms of use and etiquette guidelines, which forbid deepfakes, hostile images, and explicit art.
DreamUp will be available this week as part of DeviantArt’s premium Core plans, which start at US$3.95 per month. Members of DeviantArt may try out the tool for free with up to five prompts.
Meanwhile, the move to introduce DreamUp backfired immediately, with DeviantArt facing immense backlash from users because opting out required submitting a form that could take days to process. Moreover, the opt-out does not apply to the present edition of Stable Diffusion: DeviantArt users’ past artwork may already have been included in training this iteration of the model, so opting out cannot remove their style’s influence from it.
Another glaring source of backlash was that users had to manually go into their accounts and opt out of every single image, since every work of art on the whole website had been marked as available for AI systems to learn from by default. DeviantArt has since released an update that focuses mostly on responding to user complaints, which suggests there are fundamental problems with the exercise’s whole purpose rather than simply with specific details of how it was implemented.
In the absence of agreements with AI companies or legally binding copyright protections for artists’ works, there isn’t much that can be done to prevent those companies from continuing to use art from DeviantArt for their own products. Levy claimed that DeviantArt has begun contacting other companies in an effort to forge some form of agreement. He also believes that other image hosting services need to start negotiating similar agreements with their artists if they are serious about gaining leverage over companies developing new AI art generators.
Since automakers embarked on the mission to bring autonomous vehicles to the roads, a series of claims, incidents, and investigations has emerged that outright state how unsafe these vehicles are. Most recently, Tesla’s Full Self-Driving (FSD) Beta software reportedly failed to recognize a stationary, kid-sized mannequin at an average speed of 25 mph, according to test track data from the Dawn Project. Now imagine the blow to this industry when the latest study discovered that autonomous vehicles can be fooled using expertly timed lasers.
Researchers from the United States and Japan have shown that a laser strike can be used to impair autonomous vehicles and erase people from their field of vision, putting those in the vehicle’s path at risk. According to the research, perfectly timed lasers directed at an approaching LIDAR (Light Detection and Ranging) system can generate a blind area in front of the vehicle large enough to fully obscure moving pedestrians and other obstructions. The erased data gives the car a false perception of safety on the road, endangering anything located in the attack’s blind zone. The security flaw was discovered by researchers from the University of Florida, the University of Michigan, and the University of Electro-Communications in Japan.
Figure: A schematic of the attack, which can delete LIDAR data from a region in front of a vehicle, leading to unsafe vehicle movement. The lower panels show the deletion of LIDAR data for a pedestrian in front of a vehicle, visible at lower left but invisible at lower right. Credit: Sara Rampazzi/University of Florida
LIDAR works like a spinning version of radar that uses laser light: autonomous or self-driving vehicles use it to measure the distances between themselves and the objects in their route from the reflected light, and it essentially helps the vehicle detect its surroundings. The researchers used a laser to simulate the LIDAR reflections received by the sensor. Sara Rampazzi, a UF professor of computer and information science and engineering who led the study, reported that in the presence of the spoofed laser pulses, the sensor discounted genuine reflections coming from real obstacles and so failed to detect them.
Using this approach, the researchers were able to remove data for both stationary objects and moving people. Under test conditions, the attack was deployed against an autonomous vehicle to prevent it from decelerating as it was meant to when it encountered a pedestrian. The laser strike was launched from the side of the road, with the attacker little more than 15 feet from the oncoming car. The study also warned of real-world scenarios in which the attack could follow a slow-moving vehicle using basic camera tracking equipment.
However, with more advanced equipment, the attack could be carried out at a greater distance. The technology needed is fairly simple, but the laser must be precisely synchronized to the LIDAR sensor and constantly adjusted to stay aimed at a moving car. S. Hrushikesh Bhupathiraj, a UF doctoral student in Rampazzi’s lab and one of the lead authors of the study, revealed that although deceiving the sensor requires timing the laser pulses toward the LIDAR with some degree of accuracy, the information needed to synchronize them is publicly available from LIDAR manufacturers.
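A highly simplified simulation can convey the deletion effect described above. The model below assumes the sensor keeps one return per scan angle and that a synchronized spoofed pulse displaces the genuine reflection at any angle it targets; the angles, ranges, and one-return-per-angle abstraction are all invented for illustration, not the paper’s actual attack mechanics.

```python
# Simplified model: a LIDAR scan is a map from angle (degrees) to range (m).
# Spoofed pulses override genuine returns in a targeted angular window,
# erasing a close obstacle from the perceived point cloud.

def perceived_scan(genuine, spoofed):
    """Combine genuine returns with attacker-injected ones; the spoofed
    return wins at any angle it targets."""
    scan = dict(genuine)
    scan.update(spoofed)
    return scan

def obstacles_within(scan, max_range):
    """Angles at which the vehicle perceives a nearby obstacle."""
    return sorted(angle for angle, rng in scan.items() if rng < max_range)

# Background returns at 50 m; a pedestrian produces 5 m returns at 10-12°.
genuine = {a: 50.0 for a in range(0, 30)}
genuine.update({10: 5.0, 11: 5.0, 12: 5.0})

# Attacker injects far (out-of-range) fake returns over that window.
spoofed = {a: 200.0 for a in range(8, 15)}

print(obstacles_within(perceived_scan(genuine, {}), 20.0))       # [10, 11, 12]
print(obstacles_within(perceived_scan(genuine, spoofed), 20.0))  # []
```

The second scan is the “blind zone”: the pedestrian’s returns are gone, so a planner consuming this data would see a clear road.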
These experiments were carried out by the researchers in order to aid in the development of a more robust sensor system. Now that these attacks could be recognized, the manufacturers of these LIDAR systems will need to upgrade their software and switch to a different method of obstacle detection. Researchers are hopeful that future hardware design upgrades might also strengthen resistance against such attacks.
This is the first instance in which a LIDAR system has been attacked in a way that stops it from detecting obstructions. The research will be presented at the 2023 USENIX Security Symposium and is publicly available online.
Earlier, in 2015, Dr. Jonathan Petit, a principal scientist specializing in connected vehicles and consultation services at Security Innovation and a former research fellow in University College Cork’s Computer Security Group, discovered while researching the cyber susceptibilities of autonomous vehicles that a laser attack could paralyze driverless cars and trick them into taking evasive action. He explained to IEEE Spectrum, the magazine of the Institute of Electrical and Electronics Engineers, that the attack is quick and easy to execute using readily available tools, such as a Raspberry Pi or Arduino, and can successfully impersonate a vehicle at a distance of up to 100 meters.
The ability of autonomous cars to accurately identify and understand nearby obstructions in real time is crucial to their long-term success. These in-road obstacles might include people, traffic cones, and other vehicles. To achieve precise and reliable detection, most high-level autonomous vehicles use a variety of perception sources, such as LIDAR, cameras, and multi-sensor fusion. In theory, the obstacle detection systems in such vehicles always perform at their highest level, unlike distracted or intoxicated drivers, and are therefore expected to minimize accidents and the high mortality toll on our roadways. However, if a basic laser attack can disable or confuse them, it is time to reconsider how to address these threats before declaring autonomous driving technology suitable for use on public roadways, especially when such hacks can stunt the growth of the autonomous vehicle industry.
According to a new report by Deloitte, the economic impact of the metaverse in India could potentially range from $79 to $148 billion per annum by 2035, which is about 1.3% to 2.4% of the country’s GDP.
With over half of its population under 30, India produces the highest number of STEM graduates globally and is demographically well-positioned to impart digital labor to the metaverse.
The report also estimates that the metaverse’s impact on GDP in Asia could range between $0.8-1.4 trillion per year by 2035, or about 1.3 to 2.4 percent of overall GDP.
The report identifies digital payments and gaming/entertainment as the key sectors where the metaverse would impact India. According to the report, digital payments will be an indispensable component of the metaverse for trading digital assets, and India could feature strongly here, as the country’s rate of real-time digital payments is the highest in the world.
According to the report, awareness of the metaverse in Asia is high. Millions of people in the region are already using early metaverse platforms for gaming, socializing, purchasing items, creating digital twins, and attending concerts. However, a fully immersive metaverse with real-time offerings of visually rich worlds is still years away.
Waymo has disclosed that its latest vehicle sensor arrays are producing real-time weather maps in hopes of improving ride-hailing services in Phoenix and San Francisco. The Alphabet subsidiary’s robotaxis detect the intensity of conditions such as fog or rain by measuring the droplets on the windows.
Compared with radar, satellites, and airport weather stations, Waymo’s technology provides a considerably more precise picture of the environment. It can detect inland-moving coastal fog or drizzle that radar would often miss. In San Francisco and other places where the weather can change drastically between neighborhoods, insights from this real-time weather data might be quite helpful.
Millions of data points were gathered by Waymo’s fleet of robotaxi autonomous vehicles as they traveled through the foggy streets of San Francisco to create the map. Waymo is able to develop a new meteorological metric in conjunction with advanced weather-detecting vehicles outfitted with visibility sensors, which it then feeds to its autonomous “Waymo Driver AI” to support its decision-making.
With the help of this map, Waymo’s fleet can monitor the buildup of coastal fogs coming off the Pacific Ocean as well as how quickly they dissipate in the morning. It can sense drizzle and light rainfall that cause slippery roadways in adverse conditions when the National Weather Service’s local Doppler weather radar is ineffective. With the use of these weather monitoring tools, Waymo can determine specific locations where the weather is starting to get worse or better.
Image Credit: Waymo
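The per-region aggregation described above can be sketched in a few lines. This is a hypothetical illustration only: the grid cell size, the “droplet intensity” reading, and the fog/drizzle thresholds are invented for the sketch; Waymo has not published its actual metric.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sketch: aggregate per-vehicle wetness readings into a coarse
# real-time weather grid, then label each cell by its average intensity.

CELL = 0.01  # grid cell size in degrees of latitude/longitude (assumed)

def cell_of(lat, lon):
    return (round(lat / CELL), round(lon / CELL))

def weather_map(readings):
    """readings: list of (lat, lon, droplet_intensity in [0, 1])."""
    cells = defaultdict(list)
    for lat, lon, intensity in readings:
        cells[cell_of(lat, lon)].append(intensity)

    def label(avg):
        return "clear" if avg < 0.2 else ("drizzle" if avg < 0.6 else "fog/rain")

    return {cell: label(mean(vals)) for cell, vals in cells.items()}

readings = [
    (37.77, -122.51, 0.8),  # near Ocean Beach: heavy coastal fog
    (37.77, -122.51, 0.7),
    (37.79, -122.40, 0.1),  # downtown: clear
]
print(weather_map(readings))
```

The point of the sketch is the block-by-block resolution: two locations a couple of kilometers apart can carry different labels, which is exactly what area-wide radar misses.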
The mapping technology also enables Waymo One to offer better ride-hailing services to consumers at a certain time and location and provides Waymo Via trucking customers with more precise delivery updates.
As Waymo moves closer to introducing completely autonomous vehicles as part of its for-profit robotaxi service in California, this degree of on-the-ground accuracy will become increasingly crucial. After receiving certification from the California DMV, the Alphabet company is almost ready to start delivering “rider-only” trips in San Francisco.
Daniel Rothenberg, a trained meteorologist and a member of the company’s weather team told The Verge, “Waymo will create similar weather maps for additional cities as we scale.”
The surge in the number of autonomous vehicles in recent years has brought more attention to the safety of autonomous driving. While manufacturing autonomous vehicles is comparatively straightforward, as it is fairly similar to making non-autonomous vehicles, the real challenge lies in enabling the vehicle to navigate adverse weather. Though autonomous vehicles are equipped with sensors like LIDAR, radar, and cameras, these fail during unexpectedly changing conditions like heavy snowfall, fog, or rain. Cameras can be obstructed by fog and heavy snow, rendering them unable to see road signs, lane dividers, bends, and so on. Even LIDAR lasers become less accurate when their beams must pass through snowflakes and rain showers.
For the advanced AI technologies powering these vehicles, insight into the precise prevailing road conditions is crucial. For this reason, manufacturers are working to develop systems that can gather all the data required to drive autonomous vehicles in extreme weather conditions.
Amazon Web Services (AWS) has introduced a novel method for evaluating facial recognition models and detecting biases. The proposed method does not use standard identity annotations; instead, it estimates the model’s performance from prior demographic data.
Artificial intelligence-based models often exhibit algorithmic bias, and consequently the area has become an emerging domain of study. The proposed method focuses on examining biases in facial recognition. A straightforward way to determine whether a facial recognition algorithm is biased is to evaluate the model on a massive dataset that includes faces from several demographics; however, this requires identity annotations.
The method proposed by Amazon evaluates biases without identity annotations. While annotations are not necessary, the model must still have some way of determining which samples belong to the same subject. Where standard models generate vector representations (embeddings) in a single space, this method assumes that two embeddings of the same subject lie at a distance smaller than a predetermined cutoff.
The researchers then hypothesized that these same-subject distances follow one distribution, while the remaining distances (between two non-identical subjects) follow another. The model learns both distributions, and the separation between them provides a measure of the model’s accuracy.
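A simplified stand-in for this idea can be sketched with synthetic distances: model the same-subject (genuine) and different-subject (impostor) distances as two distributions, and use their separation as a proxy for accuracy, computed per demographic group so groups can be compared. The Gaussian summary and d-prime-style score below are illustrative choices for the sketch, not Amazon’s published formulation.

```python
import random
from statistics import mean, stdev

# Separation between genuine and impostor distance distributions: the larger
# the gap relative to the spreads, the more accurate the recognizer.

def d_prime(genuine, impostor):
    return abs(mean(impostor) - mean(genuine)) / (
        (stdev(genuine) ** 2 + stdev(impostor) ** 2) / 2) ** 0.5

random.seed(0)
# Synthetic distances for two demographic groups. Group B's genuine and
# impostor distances overlap more, i.e. the model performs worse on it.
group_a = ([random.gauss(0.30, 0.05) for _ in range(500)],   # genuine
           [random.gauss(0.90, 0.05) for _ in range(500)])   # impostor
group_b = ([random.gauss(0.45, 0.10) for _ in range(500)],
           [random.gauss(0.75, 0.10) for _ in range(500)])

score_a, score_b = d_prime(*group_a), d_prime(*group_b)
print(score_a > score_b)  # prints True: the gap between groups signals bias
```

No identity labels enter the score itself; only the two pools of pairwise distances are needed, which is the appeal of the approach.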
The researchers are optimistic that the method will be useful for AI, as it shows appreciable results when compared with Bayesian calibration, as reported in the paper.
Google has recently announced a new approach to Reinforcement Learning algorithms called Reincarnating Reinforcement Learning (RRL). This article provides an overview of the RRL algorithms.
Reinforcement Learning is a kind of machine learning technique that focuses on training intelligent agents with related experiences in a way that they can learn to solve decision-making problems like playing video games, designing hardware chips, and flying stratospheric balloons.
Due to the generality of Reinforcement Learning, much RL research focuses on developing intelligent agents that can efficiently learn Tabula Rasa. Generally, the term Tabula Rasa describes the chance for a fresh start. For example, when a student’s family migrates to a different location, they must begin the year at a new school with a completely blank slate. In other words, Tabula Rasa is an opportunity to start again with no historical record.
Tabula Rasa RL systems are typically the exception rather than the norm for solving large-scale RL problems. Large-scale systems like OpenAI Five achieved human-level performance on Dota 2 only after multiple algorithmic changes during the development cycle, and incorporating each algorithmic change by retraining the RL system from scratch can be very challenging and expensive.
Therefore, the inefficiency of Tabula Rasa reinforcement learning research, where agents are trained from scratch, can put computationally demanding problems out of reach for many researchers. For example, the standard benchmark of training a deep RL agent on the 50+ Atari 2600 games in the Arcade Learning Environment (ALE) for 200M frames each requires 1,000+ GPU-days. As deep RL algorithms move toward more complex problems, the computational barrier to entering RL research will become even higher.
Therefore, to address these inefficiencies of Tabula Rasa training, Google has introduced a new approach called Reincarnating Reinforcement Learning (RRL) and will present the complete research on it at the NeurIPS 2022 conference. In this research, Google proposes an alternative approach to RL research in which prior work, such as learned models, logged data, and policies, can be reused or transferred between design iterations of an RL agent or from one agent to another. RL uses prior computation in some cases, but most RL agents are still trained from scratch, and there has been no concerted effort to make reuse of prior computational work a standard part of the RL training workflow.
Reincarnating Reinforcement Learning (RRL) is a more computationally efficient workflow based on reusing prior computational work when training new RL agents or improving existing ones, even in the same environment. RRL can make RL research more accessible by allowing researchers to tackle complex RL problems without requiring excessive computational resources. Moreover, RRL enables a benchmarking paradigm in which researchers continually improve and update existing trained RL agents, particularly on problems that impact the real world, like balloon navigation and chip design. Real-world use cases of RL are, in any case, likely to arise in scenarios where prior computation is available.
RRL is an alternative research workflow for RL that does not train agents from scratch but instead updates existing ones. Suppose a researcher has trained an agent A1 for a particular time but now wishes to experiment with a better algorithm. The Tabula Rasa workflow requires retraining another agent from scratch, whereas the RRL workflow offers the option of transferring the existing agent A1 to another agent and training that agent, or simply fine-tuning A1.
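The workflow contrast above can be illustrated with a toy Q-learning example: instead of starting a new agent from a blank Q-table (Tabula Rasa), the new agent is warm-started from A1’s Q-table and briefly fine-tuned. The 1-D chain environment and all hyperparameters are invented for this sketch and are not from Google’s paper.

```python
import random

# Toy reincarnation sketch: a 6-state chain where reaching state 5 yields
# reward 1. Agent A1 is trained for 200 episodes; the "reincarnated" agent
# is initialized from A1's Q-table and fine-tuned for only 5 episodes.

N = 6  # states 0..5; reaching state 5 ends the episode with reward 1

def train(q, episodes, alpha=0.5, gamma=0.9, eps=0.1):
    for _ in range(episodes):
        s = 0
        while s != N - 1:
            if random.random() < eps or q[(s, -1)] == q[(s, 1)]:
                a = random.choice((-1, 1))                 # explore / break ties
            else:
                a = max((-1, 1), key=lambda x: q[(s, x)])  # exploit
            s2 = min(max(s + a, 0), N - 1)
            r = 1.0 if s2 == N - 1 else 0.0
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q

random.seed(1)
blank = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}

agent_a1 = train(dict(blank), episodes=200)       # the prior computation
reincarnated = train(dict(agent_a1), episodes=5)  # fine-tune, don't retrain
tabula_rasa = dict(blank)                         # a fresh agent knows nothing

# The fine-tuned agent retains A1's knowledge of the goal-reaching action.
print(reincarnated[(4, 1)] > tabula_rasa[(4, 1)])  # prints True
```

The reincarnated agent begins its 5 episodes already knowing the value of moving toward the goal, which is precisely the prior computation a Tabula Rasa restart would throw away.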
Reinforcement Learning assumes that agents learn by interacting with an online environment, building on their own past experience. Such algorithms are very challenging to deploy in real-life applications like robotics or autonomous driving, because agents would need to be trained for every situation. Google expects RRL to be helpful wherever training an RL algorithm from scratch is costly and time-consuming and the prior computation can be put to use instead of retraining agents from scratch.
Oregon Attorney General Ellen Rosenblum announced that Google would pay about $391.5 million in settlements to 40 states over its location tracking practices.
While users thought they had turned off location tracking in their Google account settings, Google continued to collect information about their movements. Under the settlement, Google has agreed to significantly improve its location tracking disclosures and user controls over the next year.
The Google settlement was led by Ellen Rosenblum and Nebraska AG Doug Peterson, who declared that Google prioritized profit over its users’ privacy, being crafty and deceptive in secretly saving users’ information and using it for advertising purposes.
According to the press release from the Oregon Attorney General’s office, the Google settlement is the largest consumer privacy settlement in U.S. history. Owing to Oregon’s leadership role in the settlement and investigation, the state will receive $14,800,563.
As per the release, the Attorneys General opened the investigation following a 2018 Associated Press article that disclosed Google’s practice of recording users’ movements even when they had explicitly told it not to. The article highlighted two key Google account settings: Location History and Web & App Activity. Location History is ‘off’ until a user turns the setting on, but Web & App Activity, a separate account setting, is automatically ‘on’ when users set up a Google account.
According to the settlement, Google must be more transparent about its practices with users. It must show additional information to users whenever they turn a location-related setting on or off, make essential information about location tracking unavoidable for users, and detail the types of location data it collects and how that data is used on an enhanced ‘Location Technologies’ webpage. Besides Oregon and Nebraska, states involved in the settlement include Florida, Arkansas, New Jersey, and North Carolina, among others.