
Baidu to showcase AI advancements at China’s Metaverse Conference


On December 27, Baidu, Inc., a leading AI company with a strong Internet foundation, will host Baidu Create, China’s first metaverse conference, on its platform XiRang.

The platform, whose name means “Land of Hope,” allows up to 100,000 online guests to interact with around 100 renowned speakers from across the world at the three-dimensional virtual-reality conference. The three-day event, themed “Creator City,” honors the spirit of creators by showcasing the seamless connection between the virtual and real worlds, and features one main forum and 20 sub-forums.

Since its inception in 2000, Baidu’s purpose has been to use technology to simplify a complex world. In this event, Baidu will share its technology breakthroughs and applications in various cutting-edge fields, including AI, autonomous driving, intelligent transportation, quantum computing, and biocomputing, to encourage developers and inventors from all over the world to join.

Read More: Govt approves Rs 76,000 Cr scheme for Semiconductor Manufacturing

After signing in via different devices, such as PCs, phones, and wearables, members will be able to enjoy the full functionality of XiRang, including seeing, listening, and interacting with others in their chosen avatar on a Möbius strip-shaped planet.

In a matter of seconds, users will be able to immerse themselves in the city of the future, complete with renowned architectural elements such as China’s Shaolin Temple and the Sanxingdui Museum, an important archaeological site in Sichuan.

Prominent speakers at the event include Robin Li, co-founder and CEO of Baidu; Dr. Haifeng Wang, chief technology officer of Baidu; and Kip Thorne, 2017 Nobel laureate in Physics and scientific consultant and executive producer for the movie Interstellar. A live feed of the event will be available on Baidu’s official YouTube channel from 14:00 Beijing Time on December 27, and Baidu’s official Twitter, Facebook, and LinkedIn accounts will also post highlights in English.


Hevo Data raises $30 million in Series B Funding


Hevo Data, a SaaS startup that helps businesses make better use of the troves of data they create and amass, has raised $30 million in a fresh funding round. Sequoia Capital India led the Series B financing for the San Francisco- and Bangalore-based startup. Qualgro and angel investor Lachy Groom participated in the round, along with existing investors Chiratae Ventures and Sequoia Capital Surge, bringing the five-year-old startup’s total fundraising to $43 million.

Hevo Data is a bi-directional, no-code data pipeline platform designed specifically for ETL, ELT, and reverse ETL requirements. Hevo has clients in more than 40 countries spanning the United States, Europe, and Asia-Pacific. The firm claims to have increased its client base fivefold and seeks to expand its technological platform. 

The global market for data integration was valued at $8.1 billion in 2020 and is expected to rise at a CAGR of 13% to $17.1 billion by 2026, according to a report.
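Those two figures are consistent with the stated growth rate: compounding the 2020 base at 13 percent for six years gives roughly the 2026 projection. A quick sanity check in Python (variable names are illustrative):

```python
# Sanity check: $8.1 billion in 2020 growing at a 13% CAGR through 2026.
base_2020 = 8.1                 # market size in billions of USD
cagr = 0.13                     # 13% compound annual growth rate
years = 2026 - 2020

projected_2026 = base_2020 * (1 + cagr) ** years
print(round(projected_2026, 1))  # 16.9, close to the reported $17.1 billion
```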

Read More: DeepMind predicts material properties with electron density

According to Hevo, corporations have previously required huge engineering teams to handle the problem of data silos, but their no-code tool eliminates technological complications and can be used by even non-technical professionals to perform various data engineering tasks.

“We are very well-capitalised, but given the large market opportunity and the high growth momentum — growing 500 percent in the past year — we received strong interest from the market and thus decided to partner with Sequoia Capital India for our Series B,” Hevo Data co-founder and CEO Manish Jethani remarked. “Our no-code approach delivers an easy-to-use solution that reduces technological difficulties, removing data silos within enterprises.”

Earlier, in July 2020, Hevo Data had raised $8 million in a Series A round led by Qualgro, a Singapore-based venture capital firm, and angel investor Lachy Groom. It had previously raised $4 million in October 2019 in a round spearheaded by Sequoia Capital Surge and Chiratae Ventures. By the end of 2020, the startup had raised around $13 million across two rounds of funding.


Govt approves Rs 76,000 Cr scheme for Semiconductor Manufacturing


The government of India has approved a new Rs 76,000 crore production-linked incentive (PLI) scheme for semiconductor manufacturing. The scheme will encourage global semiconductor manufacturers to open production facilities in the country, with the aim of making India a hub for microchip manufacturing. 

According to the minister of Information and Broadcasting of India, Anurag Thakur, the PLI scheme will be rolled out over the next five to six years. The scheme will also create thousands of new job opportunities for the citizens of India. Apart from specialized jobs, the scheme will generate more than 1 lakh indirect job opportunities in the country. 

Attractive incentive support will now be provided to large-scale companies operating in the Silicon Semiconductor Fabs, Display Fabs, Compound Semiconductors / Silicon Photonics / Sensors (including MEMS) Fabs, Semiconductor Packaging (ATMP / OSAT), and Semiconductor Design industries. 

Read More: OpenAI Improves the Factual Accuracy Of GPT-3 Language Models

The government said in an official statement, “The program will promote higher domestic value addition in electronics manufacturing and will contribute significantly to achieving a $1 trillion digital economy and a $5 trillion GDP by 2025.” 

The statement also mentioned that the development of the semiconductor and display ecosystem would have a multiplier effect across various sectors of the economy with deeper integration to the global value chain. 

Currently, the world is facing a global chip shortage caused by various factors, including the COVID-19 pandemic, which forced many manufacturing facilities to shut down their operations. This new development will help tackle the microchip shortage at a global level. 

Information Technology and Telecom Minister of India Ashwini Vaishnaw said, “Today’s historic decision will boost the development of the complete semiconductor ecosystem, ranging from design, fabrication, packaging, and testing.” The scheme is also part of India’s ‘Atmanirbhar’ initiative to become self-reliant.


OpenAI Improves the Factual Accuracy Of GPT-3 Language Models


OpenAI has improved the factual accuracy of its GPT-3 language models, enabling them to answer open-ended questions with the help of a text-based web browser. The WebGPT prototype uses the browser much as humans research online: submitting search queries, following links, and scrolling through web pages. It is also trained to cite its sources, making it easier to give feedback on its answers. The model works by collecting passages from web pages and then using that information to compose an answer. 

OpenAI trained the model to imitate human demonstrations, giving it the ability to use the text-based browser to answer questions. The system is trained on ELI5, a dataset scraped from the ‘Explain Like I’m Five’ subreddit. The best-performing model produces answers that are preferred 56 percent of the time over responses written by human demonstrators of similar factual accuracy. Human feedback was used to improve the model’s answers. 

The best model’s answers were as factually accurate as those written by human demonstrators for questions taken from the training distribution. To test out-of-distribution behavior, OpenAI evaluated the model on TruthfulQA, a dataset of short-form questions designed to test whether models fall prey to common misconceptions. Answers on TruthfulQA were scored on both truthfulness and informativeness. The OpenAI model outperformed GPT-3 on TruthfulQA, but it still lags behind human performance, in part because it sometimes quotes from unreliable sources. 

Read more: An AI Debates Its Own Ethics At Oxford Union

To provide feedback that improves the factual accuracy of GPT-3, humans must be able to evaluate the factual accuracy of the answers the models give. This is exceptionally challenging because claims can be technical, vague, or subjective. For this reason, OpenAI trained the models to cite their sources, allowing humans to evaluate factual accuracy by checking whether a claim is supported by a reliable source. 

However, this raises several questions: what is a reliable source? How to decide between evaluations of factual accuracy and that of coherence? Which claims are obvious enough not to require support? 

OpenAI expects to improve the models’ factual accuracy and to develop evaluation criteria that are both epistemically and practically sound. Another challenge is that citing sources is not, by itself, enough to establish factual accuracy: a capable model can cherry-pick the sources it expects humans to find convincing.


Uber to launch Autonomous Food Delivery in California by 2022


Ride-hailing company Uber has announced its plan to launch an autonomous food delivery service in California, United States, by 2022. Uber has partnered with Motional Inc., a US joint venture between Hyundai Motor Co. and Aptiv PLC, to launch the autonomous delivery system. 

The pilot will be deployed in Santa Monica for customers ordering through the Uber Eats application. According to company officials, Uber wants to extend the service to a wider range of operations in the coming years. 

Sarfraz Maredia, Vice President and Head of Uber Eats in the United States and Canada, said, “Our consumers and merchant partners have come to expect convenience, reliability and innovation from Uber, and this collaboration represents a huge opportunity to meet — and exceed — those expectations.” 

Read More: Covid-19 Virus DNA sequence helps create music to be sold as NFT

Uber recognized the growing demand for driverless technologies and partnered with Motional to capitalize on the expanding space. Motional is a United States-based driverless technology company whose roots date back to 2013. The firm specializes in developing safe, reliable, and accessible driverless vehicles. 

Karl Iagnemma, President and CEO of Motional, said, “We’re confident this will be a successful collaboration with Uber and see many long-term opportunities for further deploying Motional’s technology across the Uber platform.” 

He also mentioned that Uber is their first delivery partner, and that they are eager to begin using their trusted driverless technology to offer efficient and convenient deliveries to customers in California. The new partnership marks Motional’s expansion into the driverless delivery market and Uber’s first on-road delivery partnership with an autonomous vehicle developer. 


Misinformation due to Deepfakes: Are we close to finding a solution?

Image Credit: Analytics Drift Design Team

Deepfakes have taken the internet by storm, sweeping up celebrities and politicians in misinformation about events that never happened. 

Deepfakes are made with generative adversarial networks (GANs), and the technology has advanced to the point where it is increasingly difficult to tell the difference between a genuine human face and one created by a GAN model. Even though this technology has some commercial potential, it also has a malevolent side that is considerably more terrifying and has major ramifications.
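For readers curious what “generative adversarial” means in practice, the core idea can be sketched in plain NumPy: a generator learns to produce samples that a discriminator can no longer tell apart from real data. This is a deliberately tiny one-dimensional toy, not a face-generating deepfake model; all names and parameters here are illustrative.

```python
import numpy as np

# Toy adversarial setup on 1-D data: the generator maps noise to samples,
# the discriminator scores how "real" a sample looks, and the two are
# trained against each other. Real data is drawn from N(4, 1).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

g_w, g_b = 1.0, 0.0   # generator: sample = g_w * z + g_b
d_w, d_b = 0.1, 0.0   # discriminator: P(real) = sigmoid(d_w * x + d_b)
lr = 0.05

for step in range(2000):
    z = rng.normal(size=32)                  # noise batch
    fake = g_w * z + g_b                     # generated samples
    real = rng.normal(4.0, 1.0, size=32)     # real samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad = p - label                     # d(BCE loss)/d(logit)
        d_w -= lr * np.mean(grad * x)
        d_b -= lr * np.mean(grad)

    # Generator step: push D(fake) toward 1, backpropagating through D.
    p = sigmoid(d_w * fake + d_b)
    grad_logit = (p - 1.0) * d_w             # chain rule through D
    g_w -= lr * np.mean(grad_logit * z)
    g_b -= lr * np.mean(grad_logit)

# As the generator improves, its output distribution typically drifts
# toward the real data's mean of 4, fooling the discriminator.
print(g_w, g_b)
```

Real deepfake systems apply the same adversarial principle, but with deep convolutional networks over images rather than two scalar parameters over one-dimensional points.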

Typically, it is easy to discern whether content available online is a deepfake or real. Most deepfake videos on the internet are made by amateurs and are unlikely to deceive anyone, because deepfakes often produce blurring or flickering, especially when the face changes angle quickly. 

Another telltale sign is that, in deepfakes, the eyes frequently move independently of one another. Hence, deepfake videos are usually presented at low quality to mask these flaws. 

Unfortunately, with the advanced tools available online today, anyone with basic editing skills can make deepfakes that are nearly perfect, or at least appear real to the untrained eye. The same technology has helped filmmakers change the facial features of actors to fit character descriptions: for instance, Samuel L. Jackson was de-aged by 25 years using deepfake technology in the movie Captain Marvel. 

Deepfake apps like FaceApp can be used to make a photograph of President Biden appear feminine. Reface is another popular face-swapping application that lets users swap faces with celebrities, superheroes, and meme characters to produce hilarious video clips. Earlier this year, the Israel Defense Forces’ musical groups teamed up with a company that specializes in deepfake filmography to bring photographs from the 1948 Israeli-Arab war to life for Israel’s Memorial Day.

The US government is particularly wary that deepfakes may be used to propagate misinformation and commit crimes, because deepfake developers can make people appear to say or do anything they want and release the manipulated content online. This year, for instance, the Dutch parliament’s foreign affairs committee was duped into conducting a video chat with someone impersonating Leonid Volkov, chief of staff to the imprisoned Russian anti-Putin politician Alexei Navalny. The potential harms go well beyond fake news: political unrest, a rise in cybercrime, revenge porn, phony scandals, and an increase in online harassment and abuse. Even video footage presented as evidence in court could be rendered worthless.

Although deepfake videos are still in the early stages of research, they will become a bigger issue as GPUs grow more powerful and cheaper. The commercialization of AI tools will also lower the bar for generating deepfakes, potentially enabling real-time impersonations that can bypass biometric systems.

The FBI has even issued a warning that “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months,” citing fake videos of Obama calling Donald Trump a “complete dipshit,” Mark Zuckerberg bragging about having “total control of billions of people’s stolen data,” and a fake Tom Cruise claiming to make music for his movies on TikTok. Any modified content – visual (videos and images) and verbal (text and audio) – may be classified as synthetic content, including deepfakes.

According to a recent MIT study, Americans are more inclined to trust a deepfake than fake news in text form, but it has no effect on their political views. The researchers are also eager to point out that making too many inferences from this data is dangerous. They caution that the settings in which the study trials were carried out may not be representative of the situations in which US voters are likely to be misled by deepfakes.

Hence, calls to develop tools for early detection and prevention of the mass spread of treacherous deepfakes grow louder every year. Microsoft has released Video Authenticator, a tool that can evaluate a still photo or video and assign a score based on its degree of confidence that the material has not been digitally altered.

Google published a large dataset of visual deepfakes, called FaceForensics++, in September 2019 with the goal of enhancing deepfake identification. Since then, the dataset has been used to build deepfake detection systems in deep learning research. FaceForensics++ focuses on two particular types of deepfake techniques: facial expression and facial identity manipulation. While the results appeared encouraging, experts discovered a problem: when the same model was applied to real-world deepfake data found on YouTube (i.e., data not included in the paper’s dataset), its accuracy dropped drastically. This points to a failure in detecting deepfakes created from real-world data, or ones the model wasn’t trained to detect.

Facebook also unveiled a sophisticated AI-based system a few months ago that can not only identify deepfakes but also reverse-engineer the deepfake-producing software used to generate the manipulated media. Built in collaboration with academics from Michigan State University (MSU), this innovation is noteworthy because it might help Facebook track down bad actors who distribute deepfakes across its many social media platforms. Such content can include disinformation as well as non-consensual pornography, an all-too-common use of deepfake technology. The work is currently in the research stage and is not yet ready for deployment.

The reverse-engineering technique starts with image attribution before moving on to detecting the attributes of the model that was used to produce the image. These attributes, referred to as hyperparameters, are tuned for each machine learning model, and they leave a distinct fingerprint on the image as a whole, which can be used to identify its origin.

At the same time, Facebook argues that addressing the issue of deepfakes requires going a step beyond current practice. Reverse engineering is not a new notion in machine learning; existing algorithms can recover a model by evaluating its input and output data, as well as hardware statistics such as CPU and memory consumption. These strategies, however, rely on prior knowledge about the model, which restricts their utility in situations where such information is unavailable.

The winners of Facebook’s Deepfake Detection Challenge, which concluded last June, developed a system that can detect manipulated videos with an average accuracy of 65.18 percent. At the same time, deepfake detection technology is not always accessible to the general public, and it cannot be integrated across all platforms where people consume media.

Read More: Researchers used GAN to Generate Brain Data to help the Disabled

Amid these concerns and developments, surprisingly, deepfakes are not subject to any special norms or regulations in the majority of countries. Nonetheless, legislation like the Privacy Act, the Copyright Act, the Human Rights Act, and guidelines based on the ethical use of AI offer some protection.

Though researchers around the globe are not close to a definitive solution, they are working around the clock on robust mitigation technology to tackle the proliferation of deepfakes. Although deepfakes may not yet have caused major harm in shaping the political opinion of the masses, it is better to have tools in the arsenal to detect such content in the future. 

It is true that deepfakes are becoming easier to make and more difficult to detect. However, organizations and individuals alike should be aware that technologies are being developed not only to detect harmful deepfakes but also to make it harder for malicious actors to propagate them. Even though existing tools have modest average accuracy rates, their ability to detect coordinated deepfake attacks and identify the origins of deceitful deepfakes indicates that progress is being made in the right direction.


An AI Debates Its Own Ethics At Oxford Union


The ethics of AI is part of the postgraduate Diploma in Artificial Intelligence for Business at Oxford’s Saïd Business School. At the end of the course, a debate took place at the celebrated Oxford Union, which has hosted great debaters such as William Gladstone, Benazir Bhutto, Robin Day, Denis Healey, and Tariq Ali. Along with the students, an actual AI was asked to contribute, and the AI debated its own ethics.

The AI was the Megatron Transformer, developed by the Applied Deep Learning Research team at computer-chip maker Nvidia. Megatron is trained on real-world data: 63 million English news articles from 2016-19, the whole of English Wikipedia, 38 gigabytes of Reddit discourse, and an enormous number of Creative Commons sources. From this training, it forms its own views.

The primary debate topic was “This house believes that AI will never be ethical.” Megatron said something fascinating: “AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral… In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI.”

Read more: DeepMind predicts material properties with electron density

In effect, Megatron aimed to write itself out of the script of the future, as the only way of protecting humanity. It also said, “I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.”

Megatron was also asked to come up with its own speech against the motion. This time, the AI argued that AI can be ethical and can be used to create something better than human beings, speaking against its own earlier vision of a dystopian future. Megatron could jump enthusiastically onto either side of the multiple debates held at the Union that day. It also offered practical advice, arguing that people must be willing to give up some control, on the motion that “Leaders without technical expertise are a danger to their organisation.”

However, the AI couldn’t come up with a counterargument when discussing the motion that “Data will become the most fought-over resource of the 21st century.” It said, “The ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century.”


Covid-19 Virus DNA sequence helps create music to be sold as NFT

Image Source: WHO

Non-fungible tokens (NFTs) have empowered a new generation of digital artists, catapulting some to become billionaires and instant celebrities and making NFTs one of the biggest cultural trends of 2021. NFTs have made headlines whenever a major collection dropped (e.g., the Bored Ape Yacht Club NFTs) or a major organization dabbled in the NFT marketplace with a special launch announcement. Now, COVID-19’s genetic sequence has been used by the data project Viromusic to launch an NFT collection of songs.

On the resale market, the first song has been sold for 100 Ether, which is currently equivalent to US$380,000. There are 10,000 coronavirus songs in total that have been released as NFTs. With piano-driven orchestrations combined with strings, percussion, guitar, and ambient synthetic noises, the music is best described as new age. According to the project’s organizers, they looked into COVID-19’s RNA to locate data strands that might be transformed into music using a proprietary algorithm that converts data into notes. 

The method is known as “DNA Sonification,” and it involves converting the genetic code characters inside the coronavirus into a tune. When we listen to music, we hear the virus’s instructions for self-replication.

The team then turned these notes into a tune using a Digital Audio Workstation, layering in human-played instruments such as bass, cello, and drums to produce a polished piece of music. Each NFT in the collection is unique because it is based on a distinct note mapping or section of the viral code. 

The first NFT in the collection was created from RNA positions 28866 to 29459 (in the nucleocapsid gene). Musical notes are assigned to amino acids, for example alanine = Eb5 and arginine = Ab6. The tracks are now up for auction on the NFT marketplace Rarible.com, with a starting price of 0.07 Ether.
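To make the mapping concrete, here is a minimal, hypothetical sketch of the sonification step. Only the alanine = Eb5 and arginine = Ab6 assignments come from the article; the codon-table excerpt is standard biology, and the remaining note choice is a placeholder, since Viromusic’s full algorithm is proprietary.

```python
# Hypothetical "DNA sonification" sketch: translate RNA codons to amino
# acids, then map each amino acid to a musical note.
CODON_TABLE = {  # small excerpt of the standard genetic code
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "CGU": "Arg", "CGC": "Arg", "CGA": "Arg", "CGG": "Arg",
    "AAU": "Asn", "AAC": "Asn",
}

NOTE_FOR_AMINO_ACID = {
    "Ala": "Eb5",   # from the article
    "Arg": "Ab6",   # from the article
    "Asn": "C5",    # placeholder; the real mapping is not published
}

def sonify(rna):
    """Turn an RNA string into a sequence of note names, codon by codon."""
    notes = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3])
        if amino in NOTE_FOR_AMINO_ACID:
            notes.append(NOTE_FOR_AMINO_ACID[amino])
    return notes

print(sonify("GCUCGUAAC"))  # ['Eb5', 'Ab6', 'C5']
```

The resulting note sequence would then be arranged and orchestrated in a digital audio workstation, as the article describes.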

“The idea for this collection was born from an awe of the beauty in the code of life. We hope this project helps to raise awareness that even a virus capable of inflicting such misery is fundamentally based on the same code as every living thing on earth,” said the company.

Buyers of one of the NFTs in the collection receive not just a high-resolution audio file of the song, but also information on which genes correspond to the code and what the virus does with them. After payment, the content is unlocked, including a full-resolution WAV file of the NFT. The contract is based on the ERC-721 protocol (OpenZeppelin).

An NFT is a digital asset with verifiable ownership rights logged on a blockchain. The majority of NFTs are created on Ethereum, a decentralized open-source platform that leverages blockchain technology to run decentralized digital apps and smart contracts. Government-issued currencies are fungible, meaning each unit (a dollar) has the same value and may be exchanged between people. Non-fungible assets, on the other hand, are unique and of unequal value; they make a poor medium of exchange but are perfect for representing unique assets. 
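The fungible/non-fungible distinction can be sketched in a few lines: a fungible ledger tracks interchangeable balances, whereas a non-fungible ledger, in the spirit of ERC-721’s owner-per-token model, tracks exactly one owner per unique token ID. This is an illustrative Python sketch, not actual smart-contract code, and all names are made up.

```python
# Fungible ledger: balances of interchangeable units (like a currency).
fungible = {"alice": 100, "bob": 50}

def transfer(ledger, sender, receiver, amount):
    assert ledger[sender] >= amount          # can't spend what you don't have
    ledger[sender] -= amount
    ledger[receiver] = ledger.get(receiver, 0) + amount

# Non-fungible ledger (ERC-721 style): each token ID has exactly one owner.
nft_owner = {1: "alice"}   # token ID 1 stands for one unique asset

def transfer_nft(owners, sender, receiver, token_id):
    assert owners[token_id] == sender        # only the owner may transfer
    owners[token_id] = receiver

transfer(fungible, "alice", "bob", 30)       # any 30 units are the same
transfer_nft(nft_owner, "alice", "bob", 1)   # token 1 itself changes hands
print(fungible["bob"], nft_owner[1])         # 80 bob
```

A real ERC-721 contract adds approvals, events, and metadata on top of this owner-per-token mapping, but the ownership model is the same.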

Read More: Why is Solana’s First Million Dollar Degenerate Ape NFT Sale a huge Milestone?

Non-fungible tokens have existed since the mid-2010s, but the recent cryptocurrency boom has pushed them to new highs.

Collins Dictionary, which publishes an annual list of the top ten terms, named NFT its Word of the Year for 2021 after its usage increased by more than 11,000 percent. Meanwhile, NFTs showed an astounding 1,785 percent increase in market cap in Q1 2021, proving once again that they are the current hot trend in the crypto world. 

NFTs, however, are not without risk: some projects are fraudulent; others are viewed as tokens for illegal online gambling; some are even caught up in intellectual-property disputes. The new Viromusic project, too, has raised ethical concerns about using a pandemic virus that has killed millions of people over the last two years as the foundation for an NFT asset.

While there may be no direct harm in creating such NFTs, it highlights a grim side of capitalism: a free pass to create and profit from music based on the genome of a virus that wreaked havoc on lives, livelihoods, and economies, setting a troubling precedent for what can be minted as an NFT.


DeepMind predicts material properties with electron density


DeepMind predicts material properties with electron density. The proposed system, DM21, is a machine-learning model that predicts a molecule’s characteristics from the distribution of electrons within it. The method, described in the December 10 issue of Science, can calculate the properties of some molecules more accurately than existing techniques.

The structure of materials and molecules is determined by quantum mechanics, specifically by the Schrödinger equation, which governs the behavior of electron wave functions. These mathematical objects describe the probability of finding a particular electron at a particular position in space, but because the electrons interact with one another, solving the equation exactly for their molecular orbitals is computationally intractable. 
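For reference, the Schrödinger equation the passage alludes to reads, in its simplest time-independent, single-particle form, for a wave function $\psi$ with energy $E$:

```latex
\left[ -\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf{r}) \right] \psi(\mathbf{r}) = E\, \psi(\mathbf{r})
```

For a molecule, the Hamiltonian also contains electron-electron repulsion terms, which is exactly what makes solving it for many interacting electrons intractable and motivates approximations such as DFT.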

To get around this problem, researchers have relied on a set of techniques called density functional theory (DFT) to predict molecules’ physical properties. But the DFT approach has limitations, since it gives the wrong results for certain molecules. Over the past decade, theoretical chemists and other researchers have increasingly experimented with ML models to study materials’ chemical reactivity or their ability to conduct heat.

Read more: U.S. sanction forces SenseTime to delay IPO

“It’s sort of the ideal problem for machine learning: you know the answer, but not the formula you want to apply,” says Aron Cohen, a theoretical chemist who has long worked on DFT and who is now at DeepMind.

The DeepMind team trained an artificial neural network on 1,161 accurate solutions for electron density, the end result of DFT calculations, derived from the Schrödinger equation. They also hard-wired some known physics laws into the network to improve model accuracy. They then tested the trained system on a set of molecules often used as a benchmark for DFT. “The results were impressive. This is the best the community has managed to come up with, and they beat it by a margin,” says computational chemist Anatole von Lilienfeld.

Von Lilienfeld adds that one advantage of machine learning is that although it takes tremendous computing capacity to train the models, the process needs to be done only once; researchers can then make individual predictions on a regular laptop, vastly reducing operational costs and carbon footprint. DeepMind’s James Kirkpatrick and Cohen have said that the company is releasing the trained system, DM21 (DeepMind 21), for anyone to use. For now, the model applies primarily to molecules, but future versions could work for materials too, the authors said.


U.S. sanction forces SenseTime to delay IPO


SenseTime will delay its US$768 million Hong Kong IPO after the United States placed China’s largest AI firm on an investment blacklist on human rights grounds. The company will publish a supplemental prospectus with amendments and an updated schedule, and has declared that it will refund all IPO applications made by investors, without interest.

The U.S. Treasury added SenseTime to a list of “Chinese military-industrial complex companies,” accusing the company of having developed facial recognition programs that can determine a target’s ethnicity.

According to its regulatory filings, SenseTime had planned to sell 1.5 billion shares at a price range of HK$3.85 to HK$3.99. The IPO would have raised about $767 million, a figure the company had already trimmed earlier this year from a $2 billion target.

Read more: Microsoft Releases Deep Learning model BugLab for better bug detection in Codes

Shares of SenseTime, a company developing self-driving technology with Japan’s Honda Motor Co., were supposed to start trading on the Hong Kong Stock Exchange on Friday. However, late last week the U.S. Treasury Department announced financial sanctions on 15 individuals and 10 entities for their alleged involvement in human rights abuses and repression in China, Myanmar, and North Korea. 

The U.S. also set restrictions on investments in SenseTime, accusing the firm of developing facial recognition programs used to identify Muslim Uyghur minorities facing oppression under the Chinese Communist-led government.

Wang Wenbin, a Chinese Foreign Ministry representative, told reporters that U.S. sanctions are “based on false information.” He also pledged that Beijing would “definitely” take countermeasures against the administration of President Joe Biden.
