
Research shows how people would react to humanoids with cloned faces


Technological advancements in robotics have led scientists to produce humanoid robots such as Geminoid, Saya, and Sophia that are nearly indistinguishable from humans. A recent study published in PLOS ONE, titled ‘The clone devaluation effect: A new uncanny phenomenon concerning facial identity’, evaluates how humans respond to images of humanoids with cloned faces.

Developers are optimistic that they will design robots that surpass the uncanny valley, the well-known phenomenon whereby robots that look almost, but not quite, human elicit unpleasant and negative emotions in viewers. In a not-too-far-fetched future in which we mass-produce human-like androids indistinguishable from flesh-and-blood human beings, how would we react? A collaborative team of researchers from Ritsumeikan University, Kyushu University, and Kansai University conducted six experiments to find out.

In the first experiment, participants rated the emotional valence, realism, and subjective eeriness of photoshopped photographs of three kinds: six people sharing one cloned face (clone image), a single person (single image), and six people with different faces (non-clone image). In the second experiment, participants rated another set of clone and non-clone images, while the third experiment involved rating clone and non-clone pictures of dogs. Unlike the others, the fourth experiment had two parts: participants first rated clone images of two sets of twins and then clone faces of twins, triplets, quadruplets, and quintuplets. The fifth experiment involved clone images of Japanese animation and cartoon characters. In the last experiment, participants assessed the subjective eeriness and realism of different clone and non-clone images; they also completed the Disgust Scale-Revised so researchers could analyze disgust sensitivity.

Read more: Autonomous vehicles can read road signs made from reflective microscale concave interfaces (MCIs)

The results of all six experiments were striking. Participants in the first study rated individuals with clone faces as eerier and more improbable than those with distinct faces or a single person’s face. Researchers termed this negative emotional response the clone devaluation effect.

“The clone devaluation effect was stronger when the number of clone faces increased from two to four,” says lead author Dr. Fumiya Yonemitsu from the Graduate School of Human-Environment Studies at Kyushu University. “This effect did not occur when each clone face was indistinguishable, like animal faces in experiment three involving dogs. We also noticed that the duplication of identity, the personality and mind unique to a person, rather than their facial features, has an important role in this effect.”

The fourth experiment showed that clone faces involving duplicated identities were eerier. The clone devaluation effect, which was strong in the first and second experiments, weakened in the fifth experiment, where clone faces appeared in contexts of lower realism. In the sixth experiment, the eeriness of clone faces stemming from their improbability was positively predicted by disgust, particularly animal-reminder disgust. Taken together, these results suggest that clone faces induce eeriness and that the clone devaluation effect is related to realism and the disgust reaction.
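To make the shape of that final analysis concrete, here is a minimal sketch, using entirely simulated numbers rather than the study’s data, of a regression testing whether disgust sensitivity predicts eeriness ratings:

```python
# Hypothetical illustration of the sixth experiment's analysis: does
# disgust sensitivity predict eeriness ratings? All data is simulated;
# only the shape of the analysis mirrors the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated Disgust Scale-Revised scores (0-4) for 100 participants
disgust = rng.uniform(0, 4, size=100)
# Simulated eeriness ratings with a built-in positive dependence on disgust
eeriness = 2.0 + 0.8 * disgust + rng.normal(0, 0.5, size=100)

result = stats.linregress(disgust, eeriness)
print(f"slope={result.slope:.2f}, r={result.rvalue:.2f}, p={result.pvalue:.3g}")
# A significantly positive slope would mirror the reported finding that
# disgust sensitivity positively predicts the eeriness of clone faces.
```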

These results show that human faces provide vital information for identifying individuals, because we normally rely on a one-to-one correspondence between face and identity. Humanoids with cloned faces violate this principle, which might lead people to misjudge them as all sharing a single identity.


IISER Bhopal Researchers Sequence Genome of Giloy, a Medicinal Herb, for the First Time Globally


Scientists at the Indian Institute of Science Education and Research (IISER) Bhopal have sequenced the genome of Giloy (Tinospora cordifolia), a plant with medicinal properties, for the first time in the world.

The genome and transcriptome sequencing of Giloy is important because of the plant’s extensive use in pharmaceutical and ayurvedic formulations to treat various health conditions, including COVID-19; it can provide deep insights into the genomic basis of its medicinal properties.

The usage of Giloy is also recommended under Ayurveda practice by the Ministry of AYUSH (Ayurveda, Yoga and Naturopathy, Unani, Siddha, and Homeopathy) and the Ministry of Health and Family Welfare, Government of India, for prophylactic care as well as therapeutic applications in symptomatic and asymptomatic patients infected with COVID-19. It is also used to treat fever and diabetes.

The research team was led by Dr. Vineet K. Sharma, Associate Professor, Department of Biological Sciences, IISER Bhopal, and comprised Ms. Shruti Mahajan and Mr. Abhisek Chakraborty, PhD students, IISER Bhopal, and Ms. Titas Sil, BS-MS student, IISER Bhopal. The research has been published on bioRxiv, the international preprint server for biology.

“Giloy also has anti-microbial activity and is used in skin diseases, urinary tract infection, and dental plaque, among others. It is also found to reduce the clinical symptoms in HIV-positive patients and its antioxidant activity has anti-cancer and chemo-protective properties. Giloy extracts are found to be potential candidates in treating various cancers like brain tumour, breast cancer, and oral cancer, as well,” said Dr. Vineet K. Sharma, Associate Professor, Department of Biological Sciences, IISER Bhopal.

The availability of the Giloy genome bridges a missing link between the plant’s genomics and its medicinal properties, and this study will provide leads for exploring the genomic basis of those properties.

This research was undertaken by the MetaBioSys Group, which focuses on the Indian microbiome, including the gut, scalp, and skin microbiomes of healthy and diseased individuals. The group also does pioneering work in the sequencing and functional analysis of novel eukaryotic and prokaryotic genomes by developing and employing new machine-learning-based software for biological big-data analysis.

Elaborating further about this research, Ms. Shruti Mahajan said, “Giloy is considered as an important multipurpose medicinal plant in Ayurvedic science. This plant came into the limelight due to its immunomodulatory and antiviral activity after the emergence of the COVID-19 pandemic. It has been used in various health conditions due to its immune-modulatory, antipyretic, anti-inflammatory, anti-diabetic, anti-microbial, anti-viral, anti-cancer properties, among others.”

The key aspects of this genome sequencing include:

  • This is the first species ever sequenced from the Menispermaceae plant family, which comprises more than 400 species with therapeutic value.
  • A deep transcriptome sequencing of leaf tissue was also performed for this plant.
  • The draft genome assembly had a size of 1.01 Gbp and contained 19,474 coding gene sequences (a sketch of how such assembly statistics are computed follows this list).
  • The phylogenetic position of Giloy was resolved through a comprehensive genome-wide phylogenetic analysis with 36 other plant species.
  • It will also aid in various comparative genomic studies and will act as a reference for future species sequenced from its genus and family.
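For readers curious how headline numbers like “1.01 Gbp” and “19,474 coding genes” are derived, here is a minimal sketch, not the authors’ actual pipeline, that computes assembly size and gene counts with Biopython; both file names are hypothetical placeholders:

```python
# A minimal sketch (not the authors' pipeline) of computing basic assembly
# statistics like those reported for the Giloy draft genome.
# "giloy_assembly.fasta" and "giloy_annotation.gff3" are hypothetical files.
from Bio import SeqIO

total_bp = 0
contigs = 0
for record in SeqIO.parse("giloy_assembly.fasta", "fasta"):
    contigs += 1
    total_bp += len(record.seq)

print(f"Contigs: {contigs}")
print(f"Assembly size: {total_bp / 1e9:.2f} Gbp")

# The coding gene count comes from an annotation file (GFF3 column 3
# holds the feature type).
genes = 0
with open("giloy_annotation.gff3") as gff:
    for line in gff:
        if line.startswith("#") or not line.strip():
            continue
        if line.split("\t")[2] == "gene":
            genes += 1
print(f"Annotated genes: {genes}")
```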

Previous studies have reported that one compound from Giloy targets two SARS-CoV-2 proteins, the Mpro protease and the spike protein, while another compound is predicted to inhibit SARS-CoV-2 Mpro and to disrupt the interaction between the viral spike protein and host ACE-2. Treatment with Giloy extract modulates various pathways of the immune system for improved immunity.

These multiple medicinal properties arise from the plant’s secondary metabolites. Despite these properties, the unavailability of its genome sequence has been a constraint on studying their genomic basis. Thus, the genome sequence of Giloy could be a breakthrough for developing potential therapeutic agents for diseases like COVID-19 in the future.


Facebook Lands in Trouble After its AI Wrongfully Tags Black Men as Primates


Facebook has again found itself in hot water, this time because its AI put a ‘Primates’ label on a video of Black men.

The video, titled “White man calls cops on black men at marina” and dated June 27, 2020, was posted by The Daily Mail and featured clips of Black men in altercations with white civilians and police officers. It showed how a white man was able to get a Black man arrested after claiming he had been harassed.

The problem arose when some users who viewed the video received an automated prompt from Facebook asking if they wanted to “keep seeing videos about Primates.” The video had nothing to do with monkeys, chimpanzees, or gorillas, even though humans are, strictly speaking, among the many species in the primate family.

After receiving a screenshot of the recommendation, Darci Groves, a former Facebook content design manager, took to Twitter to post about the incident. “This ‘keep seeing’ prompt is unacceptable,” Ms. Groves tweeted, aiming the message at current and former colleagues at Facebook. “This is egregious.”

A product manager for Facebook Watch, the company’s video program, responded by calling it “unacceptable” and promising that the company was looking into the underlying cause. 

Facebook immediately took down the recommendation software that was involved in this fiasco.

“As we have said, while we have made improvements to our AI we know it’s not perfect and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations,” Facebook said in response to an AFP inquiry. After realizing the repercussions of the incident, the social networking giant disabled the entire topic recommendation feature and began investigating the cause to prevent such instances in the future.

This is not the first case of artificial intelligence gone wrong and turned ‘racist.’ A few years ago, a Google algorithm mistakenly classified Black people as “gorillas” in its Photos app, forcing the company to apologize and promise that the issue would be resolved. However, more than two years later, Wired found that Google’s solution was to censor the word “gorilla” from searches, while also blocking “chimp,” “chimpanzee,” and “monkey.”

Marsha de Cordova, a Black MP, was mistakenly identified as Dawn Butler by the BBC in February last year. A couple of years earlier, a facial recognition program incorrectly identified a Brown University student as a suspect in the Sri Lanka bombings, prompting death threats against him. According to Reuters, Chinese President Xi Jinping’s name appeared on Facebook last year as ‘Mr Shithole’ when translated from Burmese, a Facebook-specific problem not reported elsewhere.

Read More: How Does Facebook AI’s Self-Supervised Learning Framework For Model Selection & Hyperparameter Tuning Work?

As per a recent Mozilla study, 71% of the YouTube videos that viewers reported as harmful had been recommended by YouTube’s own algorithm. Most recently, last month, it was discovered that Twitter’s image-cropping algorithm favors people who are younger, thinner, have feminine features, and have lighter skin.

These incidents highlight that misidentification and abuse of minorities as a result of AI are becoming increasingly prevalent. According to a 2018 MIT study of three commercial gender-recognition algorithms, dark-skinned women faced error rates of up to 34%, approximately 49 times higher than those for white males.
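Audits like the MIT study boil down to comparing a model’s error rate across demographic groups. A toy sketch of that comparison, on made-up labels and predictions, looks like this:

```python
# A simplified sketch of a per-group error-rate audit, the kind of
# comparison behind findings like the 2018 MIT study. All arrays here
# are hypothetical stand-ins for real evaluation data.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])           # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])           # model predictions
group = np.array(["a", "a", "b", "b", "a", "b", "b", "a"])  # group per sample

for g in np.unique(group):
    mask = group == g
    error_rate = np.mean(y_true[mask] != y_pred[mask])
    print(f"group {g}: error rate {error_rate:.1%}")
# Large gaps between groups (e.g., 34% for one group versus under 1% for
# another) are exactly the red flag such audits look for.
```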

While the tragic death of George Floyd brought to light the widespread biases perpetuated by machines and technology, much remains to be done about detecting and removing bias in data. AI can help minimize human bias, but its models are only as good as their training data; it is still a chicken-and-egg problem. It is therefore important to understand how we define bias, and to hold companies accountable for the misuse of AI algorithms.


Autonomous vehicles can read road signs made from reflective microscale concave interfaces (MCIs)


“It is vital to be able to explain how a technology works to someone before you attempt to adopt it. Our new paper defines how light interacts with microscale concave interfaces,” says University at Buffalo engineering researcher Qiaoqiang Gan. His research, published online on August 15 in Applied Materials Today, shows that reflective MCIs can aid AVs in recognizing traffic signs.

Gan, Ph.D., is a professor of electrical engineering at the UB School of Engineering and Applied Sciences. He led this collaborative study conducted by a team comprising researchers from UB, Texas Tech University, Fudan University, the University of Shanghai for Science and Technology (USST), and Hubei University. The paper’s first authors are Haifeng Hu, Ph.D., professor of optical-electrical and computer engineering at the USST, and Jacob Rada, UB Ph.D. student in electrical engineering.

The study, titled ‘Multiple concentric rainbows induced by microscale concave interfaces for reflective displays,’ focuses on a thin film of retro-reflective material comprising polymer microspheres laid down on the sticky side of a transparent tape. 

Read more: Stanford’s ML Algorithm Accurately Predicts the Structure of Biological Macromolecules

The study reports that shining white light on this film causes the reflected light to form concentric rainbow rings. Hitting the material with a single-colored laser (green, red, or blue) instead generates a pattern of bright and dark rings, and infrared lasers reflecting off the film likewise produced distinctive signals of concentric rings.

The researchers applied the thin film to a stop sign and found that the patterns were clearly visible to both a visual camera and LIDAR (laser imaging, detection, and ranging); the visual camera detects visible light, while a LIDAR camera detects infrared signals. Traffic signs made with this film could therefore help autonomous vehicles read road signs reliably.
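The paper does not specify a detection algorithm, but a perception stack could plausibly recognize the concentric-ring signature by profiling image intensity radially from the pattern’s center. Here is a hedged sketch under that assumption, with a hypothetical input image:

```python
# A hedged sketch (not from the paper) of spotting a concentric-ring
# signature in a camera frame: average pixel intensity over radius from
# the pattern center, then look for regularly spaced bright peaks.
# "sign_frame.png" is a hypothetical image file.
import numpy as np
from scipy.signal import find_peaks
from PIL import Image

img = np.asarray(Image.open("sign_frame.png").convert("L"), dtype=float)
h, w = img.shape
cy, cx = h / 2, w / 2  # assume the ring pattern is roughly centered

# Integer radius of every pixel from the assumed center
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - cy, xx - cx).astype(int)

# Mean intensity at each radius; bright/dark rings appear as oscillations
counts = np.bincount(r.ravel())
sums = np.bincount(r.ravel(), weights=img.ravel())
profile = sums / np.maximum(counts, 1)

# Concentric bright rings show up as prominent peaks in the radial profile
peaks, _ = find_peaks(profile, prominence=10)
print(f"Detected {len(peaks)} candidate rings at radii {peaks}")
```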

“Currently, autopilot systems face many challenges in recognizing traffic signs, especially in real-world conditions,” Gan says. “Smart traffic signs made from our material could provide more signals for future systems that use LIDAR and visible pattern recognition together to identify important traffic signs. This may be helpful to improve the traffic safety for autonomous cars.”

The researchers demonstrated a new combined strategy that enhances both visible pattern recognition and the LIDAR signal, tasks previously split between visible and infrared cameras. The study also showed that because MCIs consistently produce strong signals, they are well suited to LIDAR cameras.

The researchers have filed a patent application in the U.S., along with a counterpart in China, for the retro-reflective material, with Fudan University and UB as the patent holders. The rainbow-making technology is available for licensing.

Gan says plans include testing the film using different wavelengths of light and different materials for the microspheres to enhance performance for possible applications such as traffic signs designed for future self-driving cars.


Will data encryption protect user privacy when using edge AI for personalized ads?

Image Credit: Analytics Drift Team

Although the veil of encryption and privacy regulations shrouds personal data, brands may struggle to keep up with the growing demand for personalized ads. Working within the barriers around sensitive data gets harder with each passing day as IoT devices such as smart speakers, beacons, and edge technologies pervade homes, workplaces, and public areas, while organizations try to use AI-enabled technology to automate and simplify aspects of the customer experience.

Given these limitations, a question emerges: how can brands obtain the consumer data needed to build finely tailored behavioral and interest models for AI advertising engines? One of the best answers, while still aligning with privacy rules, is edge AI.

Typically, when data is gathered from a computer or device, it is uploaded to the cloud, where it is stored and analyzed. With edge computing, the same processing takes place at or near the source of the data rather than being transmitted over the internet. Here, the edge refers to the location where edge devices such as computers, mobile phones, smart robots, sensors, and actuators connect to the internet and communicate with one another. Unlike cloud-based IoT implementations, where everything is centralized, edge computing decentralizes the architecture.

Artificial intelligence has long been a catch-all phrase for a variety of technologies such as computer vision, machine learning, neural networks, deep learning, and many more. Edge AI is specially designed to execute AI algorithms locally, on data held in Internet of Things (IoT) endpoints, gateways, and other data-processing devices. In short, edge AI can be thought of as a merger of two technologies: AI and edge computing.
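As a concrete illustration of the pattern, here is a minimal sketch of on-device inference with TensorFlow Lite; the model file and feature vector are hypothetical, and the point is simply that raw data never leaves the device:

```python
# A minimal sketch of the edge AI idea: run a (hypothetical) pre-trained
# interest-classification model locally with TensorFlow Lite, so raw user
# data never leaves the device. "interest_model.tflite" and the feature
# vector below are illustrative assumptions.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="interest_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# On-device features (e.g., time of day, location bucket, recent taps),
# assembled locally and never transmitted
features = np.array([[0.7, 0.1, 0.9, 0.3]], dtype=np.float32)

interpreter.set_tensor(input_details[0]["index"], features)
interpreter.invoke()
interest_scores = interpreter.get_tensor(output_details[0]["index"])
print(interest_scores)  # only these derived scores would ever be shared
```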

Because data is processed at the source, marketers can create automated systems that react to consumer inputs quickly. Edge AI will empower brands to respond instantaneously to consumer interaction, resulting in a hyper-personalized experience that the end user has control over, all at a fraction of the cost. By utilizing real-time data processing, brands can quickly offer customized yet personal experiences to customers before they leave a store or close a tab, based on information like location, time of day, or past interactions with a web page.

Advertising has always been concerned with understanding and exploiting human behavior to boost market revenue and attract loyal customers. Today, the ability of AI algorithms to convert huge volumes of complicated, confusing data into insight is powering ever deeper analysis of customer behavior. Brands can now evaluate an individual’s complete social activity, including every phrase, image, like, review, and emoji.

Yes, this sounds a lot like stalking users to sell services and products through ad-driven marketing gimmicks. However, edge computing can tackle these intrinsic concerns.

Read More: Kneron’s New Edge AI Chip Is 2X More Power-Efficient Than Others

People are more interested in products that are relevant to them, so an increasing number of businesses are taking advantage of personalized advertisements. Personalization is crucial because it makes the buying experience more enjoyable and efficient and, most importantly, boosts sales. As a result, lifeless mannequins and static billboards will soon be a thing of the past.

Consumers enjoy a strong sense of agency and control when they interact with brand offerings. They also wish to have better control over their engagement data to protect their privacy and prevent data misuse. This is why most of the data sources are asked to adhere to data encryption policies.  

But how can brands mine user behavior data when everything is end-to-end encrypted? By processing data locally, sifting through it before it is encrypted and after it is decrypted, without ever storing it centrally. The information collected can be classified under various labels and used to identify users’ interests. By moving recommendation engines onto user devices, brands save tremendous amounts of processing power, increase user privacy, and avoid needing their own hardware resources to store the data. As the built-in edge AI capabilities of smartphones and other devices improve, advertising companies can construct even more nuanced classification models.
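A toy sketch of that idea, with entirely hypothetical labels and keywords, might classify interactions locally and share only aggregate counts:

```python
# A toy sketch of the pattern described above: classify behavior into
# coarse interest labels on the device, before anything is encrypted and
# sent. The labels and keyword lists are hypothetical.
INTEREST_LABELS = {
    "fitness": {"run", "gym", "protein", "yoga"},
    "travel": {"flight", "hotel", "itinerary", "visa"},
    "gaming": {"console", "fps", "steam", "controller"},
}

def label_interests(events: list[str]) -> dict[str, int]:
    """Count interest-label hits across local interaction events."""
    counts = {label: 0 for label in INTEREST_LABELS}
    for event in events:
        words = set(event.lower().split())
        for label, keywords in INTEREST_LABELS.items():
            if words & keywords:
                counts[label] += 1
    return counts

# Raw events stay on the device; only the aggregate label counts
# (optionally encrypted) would be shared with the ad engine.
local_events = ["booked a flight to Goa", "new gym protein stack"]
print(label_interests(local_events))
```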

Advertisers are already skilled at precisely targeting audiences by their preferences. Yet earlier advertising campaigns were based on average customer interactions and interests; they were essentially hit-and-trial stunts to attract the public to brand offerings and turn casual visitors into long-term customers. Now, equipped with edge AI and better data encryption, brands can offer specially curated ads that capture user interests with higher accuracy while building trust.

There are also challenges to attend to when deploying or switching to edge AI on IoT devices. To enforce edge AI for personalized ads, capabilities in the AI space (especially client-side behavior modeling), hardware, software, and networking, as well as the maintenance of all of these, must improve. The existing IoT architecture must have programmable gateways before upgrading to the edge.

In addition, with new variants of cyberattacks and spyware, encryption alone may not be sufficient to protect user data privacy. Brands must therefore be careful in prioritizing what data or metadata they need to train their edge AI models to produce relevant and engaging personalized ads.


Octopus Tentacle-Inspired Robot Arm Is Making a Splash in Robotics

An octopus-inspired, origami robotic arm. Credit: Shuai Wu.

Inspired by the animal kingdom’s mix of powerful muscles and sensitive neural receptors, a new generation of robotic tools built from smart polymeric materials is beginning to take shape.

Octopuses have the most flexible appendages of any creature on the planet. The cephalopod’s eight arms together contain roughly two-thirds of its neurons, allowing them to sense and respond to external conditions with little to no input from the brain. In addition to being supple and powerful, each arm can bend, twist, lengthen, and shorten in many ways to generate a variety of motions.

The octopus’s highly dexterous limbs have inspired the construction of soft robots. Moreover, their capacity to respond and adapt to local conditions without a central controller has fascinated robotics developers.

Recently, a team of researchers from The Ohio State University and the Georgia Institute of Technology developed a robot arm that moves like an octopus tentacle without the need for a motor. The new robotic arm’s flexibility comes from a few key characteristics: magnetic-field-driven motion rather than motors, origami-inspired panels, and a soft exoskeleton.

Read More: Pollination Robots: A New Frontier in Agriculture Robotics

The researchers employed a segmented method to create a limb that mimics an octopus arm. Individual segments were constructed from hexagonal, soft dual silicone plates with embedded magnetic particles, joined together by slanted plastic panels inspired by Kresling origami, a style of origami that twists to lengthen and contract. Made from buckling thin-shell cylinders, the Kresling unit is an excellent building block for the origami robotic arm because of its inherent multimodal deformation capacity, which allows deployment, folding, and bending.

The plates were then used to join the segments, and the arm was placed in a controllable magnetic field. Since each section had its own magnetic particles, the researchers could control each one separately by altering the field’s characteristics. This let the robot arm rotate 360 degrees, shorten by twisting segments together in a concertina-like fashion, or expand to become longer. According to the researchers, the octopus-tentacle design leaves much room for customization, including the number of segments, plate size, and degree of bendability.

This robotic arm can move and grasp like the arms of an octopus. Ruike Renee Zhao/PNAS

For the testing phase, the researchers created a three-dimensional magnetic field surrounding the arm using electromagnetic coils. Simply changing the direction of the magnetic field produced torque, driving the deformation of the individual origami parts. The researchers were also able to fine-tune the motions by controlling each section of the arm separately.
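The underlying physics is the torque on a magnetic moment, τ = m × B: rotating the applied field rotates the torque that folds or unfolds each segment. A back-of-the-envelope sketch, with illustrative values not taken from the paper:

```python
# Back-of-the-envelope sketch of the actuation principle: the torque on a
# magnetized segment is tau = m x B, so rotating the field direction
# rotates the torque acting on each Kresling unit. Values are illustrative.
import numpy as np

def magnetic_torque(m: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Torque (N*m) on a magnetic moment m (A*m^2) in field B (T)."""
    return np.cross(m, B)

m = np.array([0.0, 0.0, 1e-3])  # segment's magnetic moment along its axis

# Sweeping the applied field through the x-z plane sweeps the torque
for angle in np.linspace(0, np.pi / 2, 4):
    B = 0.02 * np.array([np.sin(angle), 0.0, np.cos(angle)])  # 20 mT field
    print(f"angle={np.degrees(angle):5.1f} deg, torque={magnetic_torque(m, B)}")
```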

The noncontact actuation of the Kresling robotic arm makes it a unique mechanism for applications that need synergistic robotic movements for navigating, sensing, and interacting with objects in restricted settings or places with confined access. The team affirms that small-scale Kresling robotic arms could yield miniaturized medical equipment, such as tubes and catheters, for endoscopy, intubation, and catheterization, with object manipulation and motion under remote control. Even in surgeries requiring treatment delivery, origami robots could ‘open’ on arrival, unfurling the treatment and applying it to the body part that requires it. In effect, the octopus robotic arm trades strength for flexibility.

While a conventional robotic arm needs a number of motors to achieve a high degree of freedom, this octopus robotic arm achieves it using only magnetic fields. These results on the control and function of the octopus robotic arm offer great potential for creating self-driving soft robots for healthcare and other industries.


Microsoft Announces Surface Event on September 22: Here’s what to expect


Microsoft has revealed that its fall hardware event, which will take place on September 22, will be entirely virtual. Microsoft is expected to introduce the Surface Pro 8, the Surface Book 4, and a new version of the Surface Duo at the event. The event will be broadcast live on the company’s website starting at 8 a.m. PT / 8:30 p.m. IST.

The Surface Duo 2 was initially leaked in June on a relatively unknown YouTube channel, with Windows Central subsequently stating that the photos were a real glimpse at what was to come. The Microsoft Surface Duo 2 features a similar overall appearance to the first-generation foldable tablet, right down to the characteristic hinge, two screens, and massive Windows logo imprinted on the back, as shown in the video. The most significant modification, according to the leaked pictures, will be a triple camera system with telephoto, ultrawide angle, and normal lenses. Though leaked previews show it to be rather large, we will have to wait and see how accurate they were.

As per leaked reports, the specifications for the Duo 2 include a Qualcomm Snapdragon 888 processor, which would put it among Android flagships, as well as an Adreno 660 GPU and 8GB of RAM. It might possibly feature additional RAM and support for 5G networks. It may also have NFC for contactless payments, according to reports.

Microsoft is also expected to release a successor to the Surface Book 3 laptop alongside the Surface Duo 2. According to sources, Microsoft may not call the forthcoming device the Surface Book 4; it may be dubbed the Microsoft Surface Laptop Pro instead, owing to rumors that Microsoft will revamp the high-end laptop to include a non-detachable 14-inch display. Microsoft could also announce the Surface Go 3 tablet.

Read More: Microsoft Announces The End Of Support For Windows 10 In 2025

Microsoft has also announced the release date for Windows 11, stating that the new operating system will be available on new devices from October 5, and that eligible Windows 10 PCs will be upgraded to Windows 11, the next version of the Windows operating system. However, at launch, the option to download Android apps through the Microsoft Store will be unavailable; Amazon and Intel are collaborating with Microsoft to bring Android apps to Windows 11.

Windows 11 has a modern look with rounded corners, pastel colors, and a centered Start menu and Taskbar. The new OS has Snap Groups and Snap Layouts, groupings of applications that sit in the Taskbar and can pop up or be minimized together for quicker task switching. They also make it easier to connect and disconnect from a monitor without losing track of where your open windows are.

Windows 11 also supports smart pens with haptic feedback, allowing a pen to mimic writing with various materials, such as a pencil or pen on various types of paper. No current Surface pens support this, so the announcement of a new pen with hardware support for the Surface Pro 8 appears to be a possibility at the Microsoft Surface event.

On June 24, 2021, Microsoft unveiled Windows 11 in a 45-minute online event titled “What is Next for Windows.” Four days later, the company gave members of the Windows Insider Preview Program the first look at the new operating system. Microsoft has released seven incremental upgrades since then, each with new features and bug fixes. Now, Microsoft is ready to release a final version of Windows 11 to a larger number of consumers and have device manufacturers begin selling new PCs that run the operating system.


Apple’s Anticipated AR/VR Headset May Require an iPhone Connection


It is not the first time tech pundits have anticipated Apple’s debut in the augmented reality segment with an AR headset. Now, ahead of the annual Apple event, new information suggests the AR/VR headset will require an active iPhone connection and may offload processor-heavy tasks to a connected iPhone or Mac.

If the rumors are to be believed, Apple’s AR Glasses will work similarly to an Apple Watch, syncing with a user’s iPhone and displaying texts, emails, maps, and games across the user’s field of view.

According to the source, the Cupertino giant completed the 5 nm SoC (System-on-Chip) for the AR or VR headset last year and is awaiting its integration with the gadget. Furthermore, Apple finished two additional chips that would be incorporated into the device.

These chips lack a neural engine for AI and machine learning capabilities, making them less powerful than those found in Apple’s Macs and iOS devices. The chip is optimized for wireless data transfer, video compression and decompression, and power efficiency to ensure long battery life. This makes sense if the headset’s primary function is to stream data from another device rather than do the heavy processing itself.

Read More: Apple Hires EX-BMW Employee To Boost Its Electric Car Project

According to The Information, the device will also include an “unusually huge” image sensor which will be the size of one of the headset’s lenses. Though the image sensor has not been seen in prior leaks, the report mentions, it will collect high-resolution picture data from a user’s surroundings for augmented reality. Because it is impossible to perform VR without totally concealing the user’s vision, and it is difficult to do AR without the user being able to see the outside world, the image sensor may be used to offer a view of the user’s surroundings from within the headset.

The chips for the headset are being made by TSMC, and mass production is expected to take at least a year. The first AR/VR headset might be delivered as early as 2022, although it could be delayed if the device’s development is not completed on time. Meanwhile, TSMC has struggled to make the chip defect-free and has had low yields during pilot production. If the image sensor on the headset is already becoming a huge pain point, delegating the backend of the UX to an iPhone is definitely the best option.


WhatsApp breaches GDPR rules, fined $267 million


Ireland’s Data Protection Commission (DPC) has fined Facebook-owned WhatsApp €225 million ($267 million) for breaking the European Union’s data privacy rules, announcing its decision in an 89-page summary (PDF). WhatsApp breached GDPR privacy rules by failing to correctly inform EU citizens how it handles personal data, including how it shares that information with Facebook, its parent company. WhatsApp became part of Facebook in 2014 through a $19.3 billion acquisition.

WhatsApp has been ordered to update and change how it notifies users about the sharing of their data. Tech companies must comply with Europe’s General Data Protection Regulation (GDPR), which governs how they gather and use data in the European Union. GDPR came into effect in May 2018, and WhatsApp is one of the first companies the DPC has hit with a major privacy penalty under the regulation.

On its website, WhatsApp states that it shares transaction data, phone numbers, mobile device information, IP addresses, business interactions, and other information with Facebook. However, the website states it does not share personal conversations, location data, and call logs with the parent company. 

Read more: Implantable AI Chip Developed for Classification of Biosignals in Real-time

“WhatsApp is committed to providing a secure and private service. We have worked to ensure the information we provide is transparent and comprehensive and will continue to do so,” a WhatsApp spokesperson said. “We disagree with the decision today regarding the transparency we provided to people in 2018, and the penalties are entirely disproportionate.”

In July 2021, Luxembourg’s National Commission for Data Protection fined Amazon 746 million euros for breaching GDPR rules by using consumer data in advertising; the regulator stated that Amazon’s processing of customers’ data did not comply with GDPR.


NASSCOM CoE-IoT teams up with Taiwan-India AI Technology Innovation Research Center


The Nasscom Centre of Excellence for Internet of Things (CoE-IoT) has announced that it signed a letter of intent for cooperation with the Taiwan-India Artificial Intelligence Technology Innovation Research Centre, an incubator of startups operating under Taiwan’s National Chung Cheng University.

This collaboration between Nasscom CoE-IoT and the Taiwan-India Artificial Intelligence Technology Innovation Research Center is focused on incubation support, mentorship, and market access. It will assist Indian tech startups in sourcing equipment and parts from Taiwan through their prototyping-to-production stages.

The MoU is anticipated to improve relations between Indian and Taiwanese businesses while also facilitating trade. This would also make it easier for Taiwanese businesses to set up business in India, as well as improve collaboration between the two countries in terms of sourcing hardware and electronic components, resulting in mutual success and increased market access.

Last year, in parallel with the rising death toll and quarantine measures amid the COVID-19 pandemic, the semiconductor industry, which has been instrumental in digital disruption, suffered a devastating economic setback. Simultaneously, the supply and demand shocks that followed the global economic downturn exacerbated supply-chain vulnerabilities almost everywhere.

Read More: Salesforce And NASSCOM Partners To Skill 1 Lakh Aspirants

According to Shivendra Singh, Nasscom’s vice president of global trade development, given the supply-chain challenges, the signing of this MoU will be critical in enabling both countries to institutionalize industrial collaboration, improving their ability to source hardware components, increase investments, and collaborate on technical issues on an equal and mutually beneficial basis.

“NASSCOM looks forward to accelerating this critical partnership by assisting Taiwanese companies in setting up shop in India and establishing business relationships with budding startups,” Mr. Singh added.
