
Digital Immortality or Zombie AI: Concerns of Using AI to Bring Back the Dead


The prospect of communicating digitally with someone from beyond the grave is no longer a figment of scientific fantasy. Digital duplicates of the deceased have turned heads this year since Microsoft was granted a patent for artificial intelligence technology that could bring dead people ‘back to life’ as chatbots. But while this seems like a significant and unexpected milestone in modern technology, one cannot overlook the ethical conundrums it raises.

Most recently, freelance writer Joshua Barbeau made headlines for virtually bringing his fiancée, Jessica Pereira, ‘back from the dead’ eight years after she passed away. He paid Project December (a service that uses artificial intelligence to create hyper-realistic chatbots) five dollars for an account on its website and created a new text bot named ‘Jessica Courtney Pereira.’ He then fed Pereira’s old Facebook and text messages into the software, along with some background information. The resulting model was able to imitate his fiancée in conversation with striking accuracy.

Project December is powered by GPT-3, an autoregressive language model that uses deep learning to produce human-like text. It was developed by OpenAI, the research organization co-founded and formerly backed by Elon Musk. By training on a massive corpus of human-created text (including Reddit threads), GPT-3 can replicate human writing and generate everything from academic papers to emails.
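The autoregressive mechanism behind GPT-3 can be illustrated with a toy model: at each step, the model assigns probabilities to candidate next tokens given everything generated so far, samples one, and appends it to the context. The sketch below uses a hand-written bigram table (hypothetical probabilities standing in for the learned transformer); it demonstrates the sampling loop, not GPT-3 itself.

```python
import random

# Toy next-token distributions: P(next | previous token).
# In GPT-3 these probabilities come from a 175B-parameter transformer
# conditioned on the whole context, not just the last token.
BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"</s>": 1.0},
}

def generate(max_len=10, seed=0):
    rng = random.Random(seed)
    tokens = ["<s>"]
    while tokens[-1] != "</s>" and len(tokens) < max_len:
        dist = BIGRAMS[tokens[-1]]            # model's view of the context
        nxt = rng.choices(list(dist), weights=dist.values())[0]
        tokens.append(nxt)                    # emitted token becomes context
    return " ".join(tokens[1:-1])             # strip the sentinel tokens

print(generate())
```

Because the chain always reaches the end-of-sequence token, the output is one of the four short sentences the table admits.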

The incident draws a parallel to the popular dystopian series Black Mirror and the movie Her. Without giving away spoilers: in the episode “Be Right Back,” a young woman named Martha grieves the death of her lover, Ash, who died in a vehicle accident. At Ash’s funeral, Martha learns of a digital service that lets her converse with a chatbot version of her late boyfriend, and she later signs up for it. In Her, the lonely protagonist dates Samantha, an intelligent operating system, with tormenting repercussions. Both works depict the psychological anguish that may befall individuals who rely too heavily on technology.

Read More: Facebook Launched Its Open-Source Chatbot Blender Bot 2.0

In December 2020, Microsoft was awarded a patent by the United States Patent and Trademark Office (USPTO) that outlines an algorithm technique for creating a talking chatbot of a specific individual using their social data. Instead of following the conventional method of training chatbots, this system would use images, voice data, social media posts, electronic messages, and handwritten letters to build a profile of a person. 

According to Microsoft, “The specific person [whom the chatbot represents] may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity, etc.” The technology may potentially create a 2D or 3D replica of the individual.

In early 2021, South Korean national broadcaster SBS introduced a new cover of the 2002 ballad I Miss You by Kim Bum Soo, performed in the voice of folk-rock singer Kim Kwang-seok. The twist: Kim Kwang-seok has been dead for nearly 25 years. The AI company Supertone reproduced the late singer’s voice using a voice AI system called Singing Voice Synthesis (SVS). The system learned 20 of Kim’s songs after being trained on over 700 Korean songs to enhance accuracy, enabling it to render a new song in Kim’s style.

While all these innovations sound exciting, and admittedly creepy, they raise important questions about privacy breaches and the possibility of misinformation. For instance, the Microsoft chatbot could be misused to put words in the mouth of the “dead person’s surrogate” that they never said in real life, by using crowd-sourced conversational social data to fill in the gaps.

It is also probable that such an artificial intelligence model may soon be able to “think” for itself. As a result, its subject’s “digital existence continues to grow after the physical being has died away.” In this approach, the digital avatar of the deceased would stay current with events, form new ideas, and evolve into an entity based on a genuine person rather than a replica of who they were when they died.

In addition, the act of replicating someone’s voice carries the risk of fraud, misinformation campaigns, and the spread of fake news. Moreover, artificial intelligence simulations of dead people could have a detrimental effect on real-world relationships, and could worsen the grieving process if users fall into denial through regular contact with a chatbot mimicking the dead.

The Kim Kwang-seok cover may have been well received among fans, but creating new works or resurrected voices using artificial intelligence poses copyright concerns. Who is the legal owner of the resulting work: the creator or team behind the AI software, or the AI itself?

The new fad of reviving the dead is only getting started. In the coming years, as the technology advances, we will witness ever more believable performances of beloved dead relatives, artists, and historical figures. Unfortunately, the deceased will have no control over how their simulated avatars are used. Such issues therefore need to be addressed legally, ethically, and psychologically if artificial intelligence is to continue being used in this direction.


Google’s DeepMind Open Sources 3D Structures of all Proteins


Google’s DeepMind partnered with the European Molecular Biology Laboratory’s European Bioinformatics Institute (EMBL-EBI) to develop an AI system called ‘AlphaFold’ that predicts the three-dimensional structure of a protein from its amino acid sequence.

Proteins are extremely complex substances that provide structure to cells and organisms. They differ from one another primarily in their amino acid sequences, which determine how each protein folds.

Protein folding is the physical process by which a protein chain arranges itself into a unique three-dimensional structure. In 1958, Sir John Kendrew and his co-workers produced a low-resolution 3D structure of a protein (myoglobin), research that spurred many scientists to demystify the hidden structures of proteins.

DeepMind topped the Critical Assessment of protein Structure Prediction (CASP14) rankings with high accuracy. The AlphaFold database and source code are freely available to the scientific community, a contribution that will aid a great deal of advanced biological research.

Deep learning models for structure prediction hold great promise in the medical field, for instance in understanding viral processes and emerging mutations. AlphaFold combines physical and biological knowledge of protein structure with multi-sequence alignments in the design of its deep learning model.

“This will be one of the most important datasets since the mapping of the Human Genome,” said EMBL Deputy Director-General and EMBL-EBI Director Ewan Birney. AlphaFold has provided structures for the roughly 20,000 proteins of the human proteome, along with the proteomes of 20 other organisms, summing to some 350,000 protein structures.

All the data is freely available for academic and commercial use under the Creative Commons Attribution 4.0 (CC-BY 4.0) license. In the coming months, DeepMind plans to expand the database to cover a large proportion (almost 100 million structures) of all cataloged proteins.
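AlphaFold’s predicted models are distributed as standard PDB files, with the per-residue confidence score (pLDDT) reported in the B-factor column. As a small illustration of how such a file can be consumed, the sketch below parses alpha-carbon ATOM records using the PDB format’s fixed column layout; a real pipeline would more likely use a library such as Biopython, and the sample line here is a constructed example, not taken from a specific AlphaFold entry.

```python
# Parse alpha-carbon (CA) ATOM records from PDB-format text using the
# format's fixed columns (atom name 13-16, resSeq 23-26, x/y/z 31-54,
# B-factor 61-66). For AlphaFold models the B-factor holds pLDDT.
def parse_ca_atoms(pdb_text):
    """Return (residue_number, x, y, z, plddt) for each alpha carbon."""
    atoms = []
    for line in pdb_text.splitlines():
        if line.startswith("ATOM") and line[12:16].strip() == "CA":
            atoms.append((
                int(line[22:26]),      # residue sequence number
                float(line[30:38]),    # x coordinate (Angstrom)
                float(line[38:46]),    # y coordinate
                float(line[46:54]),    # z coordinate
                float(line[60:66]),    # B-factor column = pLDDT here
            ))
    return atoms

sample = "ATOM      2  CA  MET A   1      11.320   2.506  -0.157  1.00 92.50"
print(parse_ca_atoms(sample))
```

Slicing by fixed columns rather than splitting on whitespace is deliberate: PDB fields can run together when values are wide, so only the column positions are reliable.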


Huawei Fires Head of Autonomous Driving Department for Criticizing Tesla


Huawei fires its head of autonomous driving department Su Qing for criticizing self-driving car manufacturing giant Tesla at the World Artificial Intelligence Conference. 

He made numerous controversial statements and claimed that Tesla’s self-driving vehicles are the reason for the death of many individuals. 

According to the translated statement, Qing said, “Tesla has shown a very high accident rate in the last few years, and the types of accidents are very similar, from the killing of the first person to the killing of a recent person. I use the word ‘killing’ here. That may sound serious to everyone. But think about it: when machines enter into human society and into symbiosis with humans, they will certainly cause accidents. It’s ugly.”

Read More: Qualcomm And Foxconn Announce High Performance Artificial Intelligence Edge Box

He further said, “The point is murder, and we want to reduce the possibility of an accident as much as possible. In terms of probability, this is possible.”

Chinese media house Global Times confirmed through a tweet that this escalation of the situation was because of Qing’s critical comments on Tesla.

Huawei recently clarified its position on this matter by saying the company respects every competitor and their efforts in the autonomous driving industry. It believes in the philosophy of developing new technologies together with the industry. 

It is a fact that a few people have died due to Tesla’s self-driving features. Still, the company has been outspoken about this and has clearly accepted and defined the limitations of its currently available Autopilot technology.

Su Qing has been removed from his position as head of the intelligent driving solution department of Huawei’s Intelligent Vehicle BU and transferred to the strategic reserve team, where he will receive additional training.


DataRobot Raises $300 Million And Plans To Acquire Algorithmia


DataRobot raises $300 million in its Series G funding round and announces plans to acquire MLOps software startup Algorithmia. With this fresh funding, DataRobot’s market valuation has increased to $6.3 billion.

The company plans to use the funds to conduct research and enhance its augmented intelligence platform. The firm also wants to integrate Algorithmia’s sharp focus on model serving with DataRobot’s model management platform to provide low-cost model management services, including deep learning operations for natural language processing, to its customers worldwide.

Algorithmia developed an artificial intelligence platform that enables IT professionals to effectively manage high-volume model production in a secure environment and enables businesses to integrate MLOps into their existing management platforms.

Read More: Toyota Unveils a Basketball-Playing Robot in Tokyo Olympics 2020

DataRobot will use Algorithmia’s expertise to develop industry-leading MLOps infrastructure for its customers.

The CEO of DataRobot, Dan Wright, said, “This new investment further validates our vision for combining the best that humans and machines have to offer to power predictive insights, enhanced data-driven decisions, and unprecedented business value for our customers.” 

He further added that the company’s demand is increasing on a global scale, and they are grateful to the investors for supporting them in developing better solutions. 

Algorithmia is a Washington-based startup founded by Diego Oppenheimer and Kenny Daniel in the year 2014. It has received a total funding of $38.1 million from investors like Gradient Ventures and Mandora Venture Group. 

Co-founder and CEO of Algorithmia, Diego Oppenheimer, said, “[I]t’s been clear to us for many years that DataRobot shares this philosophy, and we’re thrilled to combine our dedication of enabling customers to thrive in today’s market by delivering more models in production, faster, while protecting their business.”

He also mentioned that they understand the importance of increasing their customer reach to let organizations know the value of their machine learning model. Diego believes that this acquisition will be a step forward towards their goal.


NASSCOM Announces XperienceAI Virtual Summit 2021


India’s information technology industry body NASSCOM announces the dates of its XperienceAI Virtual Summit 2021. After the massive success of last year’s event, the second edition of the virtual summit will be held in August.

The online summit will run from 3rd to 6th August 2021, 10:00 AM to 7:00 PM IST. It will host veterans from the artificial intelligence industry to discuss innovations that use artificial intelligence technologies to revolutionize various sectors.

Last year the summit hosted more than 3,000 participants from 20 countries around the world. This year’s theme is ‘Artificial intelligence as a catalyst for better normal.’

Read More: Google’s Waymo Argues That UK Government Shouldn’t Cap Autonomous Vehicles On The Road

It aims to examine key areas that artificial intelligence can help improve, such as socio-economic growth, as well as India’s roadmap for artificial intelligence innovation and adoption.

The summit will also focus on the development of artificial intelligence solutions to tackle the challenges like climate change and the role of AI in building a circular economy. More than 60 reputed speakers like Don Tapscott, Dilip Asbe, Debjani Ghosh, Ashwini Rao, Arunima Sarkar, and Arundhati Bhattacharya will be present at the event. 

Speakers will also discuss the application of artificial intelligence to predict future pandemics. The four-day event will have live workshops, use case discussions, and exclusive keynote sessions. The summit has been sponsored by many industry-leading enterprises like Intel, TCS, Accenture, HCL, and Fractal. 

The XperienceAI Summit 2021 will be helpful for people working across sectors: startups, coders, policymakers, tech companies, government officials, public sector workers, research scholars, and academicians will find it useful and informative.

Interested candidates can register for free on the official website of NASSCOM. 


IMD to leverage Artificial Intelligence for Better Weather Forecasting


The India Meteorological Department (IMD) plans to employ artificial intelligence (AI) and machine learning (ML) to ensure more accurate weather forecasts. 

On Sunday, IMD Director General Mrutyunjay Mohapatra said in an official statement that artificial intelligence can enhance weather forecasting, especially nowcasts, which can improve 3-6-hour predictions of extreme weather events. He added that while artificial intelligence and machine learning are not yet as prevalent in weather forecasting as in other fields, this step will mark a new beginning in the area.

According to Mohapatra, the Ministry of Earth Sciences has asked research organizations to examine how artificial intelligence (AI) can be leveraged for fine-tuning weather forecasting processes. He also stated that the IMD intends to collaborate on this idea with the Indian Institutes of Information Technology (IIITs) at Prayagraj and Vadodara and IIT-Kharagpur for the technology upgrade. The IMD has also teamed up with Google to provide precise short-term and long-term weather forecasts.

The IMD issues forecasts for extreme weather, such as thunderstorms and dust storms. However, thunderstorms, which bring lightning and torrential rain, are more difficult to anticipate than cyclones because they are short-lived phenomena that form and dissipate in a relatively small window of time.

At present, IMD uses radars, satellite imagery, and other tools to issue nowcasts, which offer information on extreme weather events expected in the next 3-6 hours. The satellite imagery comes from the INSAT series of geosynchronous satellites and is accessed through the Real-Time Analysis of Products and Information Dissemination (RAPID) application, a weather data explorer that serves as a gateway and provides quick interactive visualization and 4-dimensional analysis capabilities.

The IMD also relies on ISRO for ground-based observations from Automatic Weather Stations (AWS) and the Global Telecommunication System (GTS), which measure temperature, sunshine, wind direction and speed, and humidity. Together with cloud motion, cloud top temperature, and water vapor content, these data aid rainfall estimation, weather forecasting, and the tracking of cyclone genesis and direction.

Read More: IBM Uses Blockchain And Artificial Intelligence To Help Farmers 

Despite the vast dataset, weather is a dynamic phenomenon: temperature, wind speed, tides, and so on are never constant. Further, rising global warming and the greenhouse effect have made changes in oceanic and wind activity, key parameters in weather forecasting, more unpredictable.

The pattern-recognition capabilities of artificial intelligence and machine learning make them well suited to weather prediction and forecasting. AI models are fed enormous quantities of weather and climate data and trained to identify storms that can bring lightning or tornadoes, enabling them to predict the likelihood of thunderstorms or other meteorological events. Meteorologists can thus make predictions with improved accuracy, saving lives and money.
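As a purely illustrative sketch of this pattern-recognition idea, the toy classifier below labels conditions as storm-prone or calm by comparing scaled weather features to class centroids. Everything here is invented for demonstration: the features (CAPE in J/kg, relative humidity in %, lifted index), the centroid values, and the method bear no relation to IMD’s actual systems.

```python
import math

# Hypothetical class centroids: mean feature vectors one might learn
# from historical labelled observations (values invented).
CENTROIDS = {
    "thunderstorm": (2500.0, 80.0, -5.0),   # high CAPE, moist, unstable
    "clear":        (300.0, 40.0, 4.0),     # low CAPE, dry, stable
}

# Per-feature scales so no single unit dominates the distance metric.
SCALES = (1000.0, 30.0, 5.0)

def classify(features):
    """Nearest-centroid label for a (CAPE, humidity, lifted_index) tuple."""
    def dist(centroid):
        return math.sqrt(sum(((f - m) / s) ** 2
                             for f, m, s in zip(features, centroid, SCALES)))
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

print(classify((2200.0, 85.0, -4.0)))  # storm-like conditions
print(classify((400.0, 35.0, 3.0)))    # calm conditions
```

Real forecasting models work on gridded fields and time series with deep networks, but the core step is the same: map measured features to the historical pattern they most resemble.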

These artificial intelligence models also depend on the computational power of supercomputers for large-scale data processing and pattern recognition. As per IMD, an increase in computing power from 1 teraflop to 8.6 teraflops has helped the nodal department better process observational data.


Qualcomm And Foxconn Announce High Performance Artificial Intelligence Edge Box


Microchip manufacturing giant Qualcomm and Foxconn announce their new high-performance artificial intelligence edge box. The companies announced the design and launch of the Gloria Edge Box through a press release.

The Edge Box is powered by the Qualcomm Cloud AI 100 accelerator. This high-end chip is capable of performing more than seventy trillion artificial intelligence operations per second.

The Gloria Edge Box uses the Snapdragon 865 mobile processor to support up to 24 Full HD cameras. The device can be used for applications such as traffic analysis, security, and intelligent retail solutions.

Read More: IBM, MIT, and Harvard release Common Sense AI Dataset at ICML 2021

Senior Vice President and General Manager of Computing and Edge Cloud at Qualcomm, Keith Kressin, said, “We anticipate broad areas of adoption of Foxconn Industrial Internet’s Gloria platform using our leading performance per watt AI solution, the Qualcomm Cloud AI 100. We expect customer adoption of Gloria to include a variety of environments such as retail centers, warehouses, data centers, and factories.” 

He also mentioned that the company is excited about working with Foxconn Industrial Internet to boost the adoption of artificial intelligence-powered edge applications. BKAV Corporation will be the first enterprise to adopt the new solution from Qualcomm and Foxconn.

The solution can also connect to 5G networks using sub-6 GHz and mmWave WAN connectivity powered by the Qualcomm Snapdragon X55 5G modem-RF system.

The Gloria Edge Box will deliver best-in-class performance with very low power consumption at a competitive price, said company officials. The solution will help businesses and governments adopt artificial intelligence services and aid them in building smarter cities.

The expected commercial launch of Gloria Edge Box is in the second quarter of 2022. Chief Technology Officer of Foxconn Industrial Internet, Dr. Tai Yu Chou, said, “We are pleased to collaborate with Qualcomm Technologies to develop and announce the availability of the outstanding Gloria AI Edge Box.”


Toyota Unveils a Basketball-Playing Robot in Tokyo Olympics 2020


Japanese car manufacturing company Toyota recently unveiled its new basketball-playing robot at the Tokyo Olympics 2020.

Toyota reveals its artificial intelligence-powered robot during a Group B game between France and the United States at the Saitama Super Arena, Japan. 

The humanoid robot was able to shoot difficult three-point shots accurately during the halftime of the official match. Viewers across the globe were astounded to see the capabilities of the basketball-playing robot and took no time to express themselves on Twitter. 

Read More: Artificial Intelligence Model of Microsoft can Help Detect Heat Waves in India

Toyota’s third-generation humanoid robot (CUE 3) is six feet ten inches tall and has sensors around its torso to gauge its distance from the hoop and the angle it needs for a perfect basket.

It also uses motorized arms and knees for movement. The robot has bagged a place in the Guinness Book of World Records for the most consecutive basketball free throws by a humanoid robot.

President of Association for Advancing Automation, Jeff Burnstein, said, “If you think about the future where robots work in our daily lives, they’re going to have to be extremely accurate, have great vision, great dexterity and great mobility.” 

He further added that this new sports-playing robot is a step towards the bright future of robotics. Toyota officials said the robot takes 15 seconds to shoot a ball, which, according to them, is a bit slow.

Toyota first launched the prototype of CUE back in 2017; it was capable of shooting balls but required human intervention for movement and positioning.

Toyota has deployed numerous artificial intelligence-powered robots at the Tokyo Olympics to aid humans in operations such as transportation, entertainment, and more.


IBM, MIT, and Harvard release Common Sense AI Dataset at ICML 2021


Creating an intelligent AI takes more than finding patterns in data. AI systems must also understand humans’ intuitive decision-making, since we make decisions based on the intentions, beliefs, and desires of others. At the 2021 International Conference on Machine Learning (ICML), researchers from IBM, MIT, and Harvard University released the Common Sense AI dataset, part of a multi-year project with the U.S. Department of Defense’s Defense Advanced Research Projects Agency (DARPA). The Machine Common Sense project aims to develop models of intuitive psychology and test whether AI can reason the way human infants do.

At ICML, researchers unveiled AGENT (Action, Goal, Efficiency, coNstraint, uTility), a benchmark for evaluating whether machines grasp the core concepts of intuitive psychology. The AGENT benchmark comprises a large dataset of 8,400 3D animations categorized under four scenarios: Goal Preferences, Action Efficiency, Unobserved Constraints, and Cost-Reward Trade-offs.

This dataset trains AI models in a way similar to how psychologists evaluate an infant’s intuitive abilities. Researchers also introduced two baseline machine learning models, BIPaCK and ToMnet-G, based on Bayesian inverse planning and the Theory of Mind neural network, respectively.

Commonsense reasoning has been a bottleneck for researchers in natural language processing and other artificial intelligence fields. Intuitive psychology gives us the ability to have meaningful social interactions by understanding and reasoning about other people’s states of mind. ML models, however, lack this power of intuition and require extensive datasets for training. AGENT aims to bridge this gap and build AI that manifests the same common sense as a young child.

Read more: Artificial Intelligence Model of Microsoft can Help Detect Heat Waves in India

“Today’s machine learning models can have superhuman performance. It is still unclear if they understand basic principles that drive human reasoning. For machines to successfully be able to have social interaction like humans do among themselves, they need to develop the ability to understand hidden mental states of humans,” said Abhishek Bhandwaldar, Research Engineer, MIT-IBM AI Lab.

Like other infant studies, this project has two phases in each trial: familiarization and test. The 8,400 3D animations last between 5.6 s and 25.2 s at a frame rate of 35 fps. “With these videos, we constructed 3,360 trials, divided into 1,920 training trials, 480 validation trials, and 960 testing trials. All training and validation trials only contain expected test videos,” the researchers said.
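The familiarization/test design mirrors the violation-of-expectation paradigm from infant studies: a model rates how surprising each test video is, and it is scored on whether surprising videos receive higher ratings than their matched expected videos. A minimal sketch of that pairwise scoring (the ratings below are invented, and this simplifies whatever exact metric the paper uses):

```python
# Violation-of-expectation scoring: each trial pairs an "expected" test
# video with a "surprising" one. A trial counts as correct when the
# model rates the surprising video as more surprising than the expected
# one, regardless of the absolute rating values.
def pairwise_accuracy(trials):
    """trials: list of (rating_expected, rating_surprising) pairs."""
    correct = sum(1 for expected, surprising in trials
                  if surprising > expected)
    return correct / len(trials)

# Made-up ratings from a hypothetical model on four trials.
ratings = [(0.1, 0.9), (0.2, 0.8), (0.5, 0.4), (0.3, 0.7)]
print(pairwise_accuracy(ratings))  # 3 of 4 pairs ordered correctly
```

Scoring by relative rather than absolute ratings keeps the metric fair across models whose surprise scores live on different scales.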

Researchers compared the two machine learning baselines, built on traditional methods from human psychology, against human performance on AGENT. “Overall, we find that BIPaCK achieves a better performance than ToMnet-G, especially in tests of strong generalization,” reads the paper.

This study shows that we can teach AI models how humans make intuitive decisions. It also shows that current ML models lack generalization and need pre-training or more advanced architectures. Researchers claim AGENT can be used as a diagnostic tool for developing better models of common-sense AI.


Intel Accelerates Process And Packaging Innovations


In a recent press release, Intel Corporation announced its new roadmap towards process acceleration and groundbreaking packaging innovations. The company revealed its technologies that would power the upcoming products beyond 2025. 

Two new technologies were unveiled in the press release: Intel’s new transistor architecture, RibbonFET, and the world’s first backside power delivery technology, PowerVia.

After a decade, Intel has come up with a new transistor architecture for its processors. The company is also preparing to bring High Numerical Aperture Extreme Ultraviolet (High-NA EUV) lithography into production.

Read More: Intel Beats Its Revenue Estimate For Q3 In 2021

Senior Vice President and General Manager of technology at Intel, Dr. Ann Kelleher, said, “We led the transition to strained silicon at 90nm, to high-k metal gates at 45nm, and to FinFET at 22nm. Intel 20A will be another watershed moment in process technology with two groundbreaking innovations: RibbonFET and PowerVia.” 

She further added that Intel has a long history of foundational process innovations that have helped the industry grow manifold. The chip manufacturing giant has also introduced a new naming structure for its nodes to let customers understand its products and their capabilities more easily.

The fresh node names are Intel 7, Intel 4, Intel 3, and Intel 20A. The company is also developing another node, Intel 18A, expected to launch in 2025. According to company officials, Amazon Web Services will be the first enterprise to use Intel’s IFS packaging solutions, and it will also provide necessary insights.

The firm also plans to collaborate with other players in the United States and Europe to continue research on further innovations, as deep collaborative efforts across the ecosystem are essential for enabling high-volume manufacturing.

Pat Gelsinger, the CEO of Intel, said, “Building on Intel’s unquestioned leadership in advanced packaging, we are accelerating our innovation roadmap to ensure we are on a clear path to process performance leadership by 2025.”
