The United States Department of Defense (DoD) has selected data infrastructure company Scale AI to help accelerate the government’s artificial intelligence capabilities. The DoD’s Joint Artificial Intelligence Center (JAIC) signed a $250 million blanket purchase agreement with Scale AI to help the department expand its AI capabilities.
Under the agreement, federal agencies will receive complete access to Scale AI’s cutting-edge technology, allowing government officials to tackle their most critical challenges with artificial intelligence and machine learning solutions.
CEO and founder of Scale AI, Alexandr Wang, said, “AI is not a one-and-done technology, and we’re thrilled to see the JAIC embrace the continuous approach to T&E that Scale was founded on.”
He added that adopting this framework would make the government’s AI initiatives more robust, accountable, and equitable, helping ensure that US AI expenditures result in effective deployments of innovative, significant technologies.
Scale AI will develop Test & Evaluation (T&E) capabilities for the DoD, focusing on use cases such as autonomous systems, deep learning-based image analysis, humans augmented by machines, methods to measure warfighter cognitive workload, and natural language processing-powered products, among others.
San Francisco-based artificial intelligence company Scale AI was founded by Alexandr Wang and Lucy Guo in 2016. The company specializes in providing a platform that manages the entire ML lifecycle, from data annotation and curation to model testing and evaluation, enabling any organization to effectively develop and deploy AI solutions.
To date, the firm has raised more than $600 million from investors including Dragoneer Investment Group, Empede Capital, Durable Capital Partners, and several others. “I think Scale is very lucky where even early on we have been able to achieve meaningful business and production business with the DoD,” said Wang, who also noted that the company’s federal government business is already viable.
On Friday, China’s internet regulator announced new guidelines for content providers that modify face and voice data, the latest step in the country’s fight against “deepfakes.” The Cyberspace Administration of China (CAC) also proposed the creation of a cyberspace that supports Chinese socialist ideals. The regulations are open for public comment through February 28, with the final version subject to revision.
According to the CAC’s statement, fraudsters are increasingly likely to employ digitally generated voice, video, chatbots, or facial and gesture manipulation in the coming years. As a result, the proposal prohibits the use of such fakes in any application that might upset the social order, infringe on people’s rights, spread false information, or portray sexual activity. It also requires that permission be obtained before what China calls “deep synthesis” can be used for legal purposes. Deep synthesis is defined as “Using deep learning and virtual reality to generate and synthesize algorithms to produce text, images, audio, video, virtual scenes, and other information.”
The “Internet Information Service Deep Synthesis Management Regulations” proposal vows to control technologies that generate deepfakes. Deepfake service providers must authenticate their users’ identities before providing them access to relevant items, according to the proposed regulation. Companies are also obliged to follow the correct political direction and respect social morality and ethics. The regulations also make it illegal to make deepfakes without the consent of the person or individuals who appear in them. The proposal also includes a user complaints system and procedures to prevent deepfakes from being used to spread false information. Providers of deep synthesis technology will be forced to suspend or delete their apps if required.
Deep synthesis service providers are now required to improve training data management, ensure legal and proper data processing, and take the necessary steps to ensure data security.
According to the proposal, if training data contains personal information, providers must also adhere to the corresponding personal information protection regulations, and personal information must not be processed unlawfully. As per Article 12 of the draft, “Where a deep synthesis service provider provides significant editing functions for biometric information such as face and human voice, it shall prompt them (provider) to notify and obtain the individual consent of the subject whose personal information is being edited.”
For first-time violators, the draft rules mandate penalties of 10,000 to 100,000 yuan (US$1,600 to US$16,000), although violations can also result in civil and criminal lawsuits.
China is already struggling to regulate deepfakes, which have taken the nation by storm over the past few years. For instance, in August 2019, a new app called ZAO went viral by allowing users to swap their faces with those of celebrities. Meanwhile, Chinese individuals are paying for deepfake films in which the face of their choosing, whether a celebrity or a person they know, is superimposed over the body of a porn star. Avatarify, a Russian AI app that converts static portrait images into videos, became popular on Douyin, China’s equivalent of TikTok, in February last year. Chinese users were quick to come up with creative ways to exploit the software for humorous videos, including one in which Elon Musk and Jack Ma appeared to be singing the famous tune Dragostea Din Tei in unison.
According to a deepfake white paper issued by Nisos, a cybersecurity intelligence organization, the three countries with the most prominent deepfake developer communities are Russia, Japan, and China.
Concerned about the growing popularity of deepfakes, Chinese regulators last March summoned 11 domestic technology companies, including Alibaba Group, Tencent, and ByteDance, for talks on the use of ‘deepfake’ technologies on their content platforms. Regulators also instructed the companies to perform their own security assessments and submit reports to the government when adding new functions or information services that “have the ability to mobilize society.”
While the proposed draft will not immediately trigger action against deepfakes and deep synthesis service providers, it gives the government a framework for acting against manipulated and misleading content.
Elliott Management Corp and Vista Equity Partners have announced plans to acquire Citrix in a deal worth $13 billion, Reuters reported. Elliott and Vista recently turned to the loan market to fund their $104-per-share cash bid for Citrix.
After the complete acquisition, the companies plan to merge Citrix with Vista’s own data analytics firm, Tibco. Vista is a Texas-based company that specializes in software buyouts. This new deal will be the company’s largest acquisition to date.
According to various sources, Elliott, a hedge fund, had been looking for partners to help take Citrix private since last October. United States-based cloud computing company Citrix Systems was founded by Ed Iacobucci and Srikanth Tirumala in 1989.
The firm specializes in providing a complete and integrated portfolio of Workspace-as-a-Service, application delivery, virtualization, mobility, network delivery, and file sharing solutions.
Citrix aims to power a world where people, organizations, and things are securely connected and accessible, making the extraordinary possible.
Citrix failed to capitalize on the rise of remote working during the ongoing pandemic because it spent too much on its salesforce and too little on its distribution partners, according to Citrix interim Chief Executive Robert Calderoni. Higher operating costs weighed on results: operating profit fell to $84.5 million in the third quarter from $128.3 million a year earlier.
The National Institute of Korean Language plans to use artificial intelligence for testing writing proficiency. The announcement was made by the institute’s director, Chang So-won.
According to the plans, the institute will use AI to develop a system for Korean language proficiency diagnosis. A fund of nearly $8.39 million has been allotted for the development process of the language proficiency test.
The institute will develop the artificial intelligence-powered language proficiency test over the next five years. Currently, South Koreans score an average of 48 out of 100 on writing skill. The new AI-powered diagnostic test is expected to help improve the overall writing skills of the country’s citizens.
Chang, a former professor at Seoul National University’s Department of Korean Language and Literature, said, “While the importance of essay writing is emphasized around the world, there are no indicators to evaluate writing in Korea, so less and less colleges are having essay tests.”
He added that if a nationwide AI-enabled evaluation system can be developed, it will be useful for various entrance exams. The AI-powered language test would provide an objective assessment of overall language skills, which no other test currently available in the country provides.
“When I evaluated university entrance essays as a professor, I found the need for objective evaluation indicators,” said Chang. He also mentioned that the grading criteria for the SATs in the United States and the Baccalaureate in France were highly precise when he looked into them.
According to the institute, they are expecting an initial investment of around $5 million during the first phase of the development process. Apart from the test, a new Korean language training program will also be developed to meet the ever-increasing demand for Korean language teachers in foreign countries.
Scientists from China have developed a new artificial intelligence-powered nanny to take care of babies in an artificial womb. The artificial womb provides an environment for babies to grow safely, and the AI-powered nanny can monitor the entire process.
Researchers in Suzhou, in China’s eastern Jiangsu province, built this artificial intelligence-powered system to safely take care of embryos. Professor Sun Haixuan led the research team at the Suzhou Institute of Biomedical Engineering and Technology, a subsidiary of the Chinese Academy of Sciences. Currently, the AI-powered nanny is monitoring the health of several animal embryos.
The technology includes a container that is filled with nutritious fluids that support the growth of mouse embryos inside it. The study was published last month in the domestic peer-reviewed Journal of Biomedical Engineering.
Various surveys indicate that young Chinese women are increasingly abandoning the conventional goals of marriage and children. The newly developed system could therefore help grow babies at scale in a country where citizens are increasingly unwilling to bear children, easing the pressure of a falling birth rate.
The research paper mentioned that the new technology “not only helps further understand the origin of life and embryonic development of humans but also provide a theoretical basis for solving birth defects and other major reproductive health problems.”
Additionally, the artificial womb could eventually eliminate the need for women to carry babies, as it provides a safer and more controlled environment for growth.
During the early stages of the research, scientists had to monitor each embryo manually, which made it difficult to keep track of large numbers of embryos simultaneously. The AI nanny was developed to tackle this problem of monitoring multiple embryos at once.
Researchers from the University at Buffalo have used explainable artificial intelligence in a study to model lung and bronchus cancer (LBC) mortality rates across the United States. The system is capable of making high-level predictions about LBC mortality rates.
It is the first study to use ensemble machine learning with an explainable algorithm to visualize and understand the spatial heterogeneity of the relationships between LBC mortality and risk factors.
The new study, authored by Zia U. Ahmed, Kang Sun, Michael Shelly, and Lina Mu, uses explainable artificial intelligence, or XAI, to identify key risk factors for LBC mortality.
Explainable artificial intelligence (XAI) was used with a stack-ensemble machine learning model framework to examine and display the spatial distribution of known risk factors’ contributions to lung and bronchus cancer (LBC) death rates across the United States.
Researchers say that smoking prevalence, poverty, and a community’s elevation were most important in predicting LBC mortality rates among the risk factors studied. However, the risk factors and LBC mortality rates were found to vary geographically.
The study is titled “Explainable artificial intelligence for exploring spatial variability of lung and bronchus cancer mortality rates in the contiguous USA.”
Researchers used five base learners, namely a generalized linear model (GLM), random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), and a deep neural network (DNN), to develop the stacked-ensemble models. Because it combines the predictions of several models trained on the same data, a stacked ensemble is typically more effective than any single model.
“The results matter because the U.S. is a spatially heterogeneous environment. There is a wide variety in socioeconomic factors and education levels — essentially, one size does not fit all. Here local interpretation of machine learning models is more important than global interpretation,” said Ahmed.
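A minimal sketch of the stacked-ensemble-plus-local-explanation idea described above, using scikit-learn and the SHAP library. The CSV file and column names are hypothetical placeholders, only three of the study’s five base learners are shown, and this illustrates the general technique rather than the authors’ actual code or tooling.

```python
# Sketch: stacked ensemble over county-level risk factors, with local explanations.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("county_lbc_mortality.csv")  # hypothetical county-level dataset
X = df[["smoking_prevalence", "poverty_rate", "elevation_m"]]  # risk factors named in the article
y = df["lbc_mortality_rate"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stacked ensemble: base learners' predictions are combined by a meta-learner.
stack = StackingRegressor(
    estimators=[
        ("glm", LinearRegression()),  # stands in for the GLM base learner
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gbm", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=LinearRegression(),
)
stack.fit(X_train, y_train)
print("Held-out R^2:", stack.score(X_test, y_test))

# Local (per-county) explanations: how much each risk factor contributes to each prediction.
explainer = shap.Explainer(stack.predict, X_train)
local_contributions = explainer(X_test)
print(local_contributions.values[:3])  # contributions for the first three counties
```

Per-row SHAP values are one common way to obtain the kind of local interpretation Ahmed refers to, since they show how the same risk factor can contribute differently in different places.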
Artificial intelligence-enabled cybersecurity company Quantum Star Technologies has launched its new AI-powered malware detection software, named Starpoint. The newly launched software uses deep learning to detect malware and provide better security to customers.
According to the company, Starpoint can detect threats at the binary level, both pre- and post-execution, considerably reducing the time required for malware detection. The company describes it as one-of-a-kind software that uses artificial intelligence and deep learning to detect malware effectively.
Most of the threat detection software currently on the market relies on traditional static techniques for detecting threats, which is what makes Starpoint’s approach stand out.
CEO of Quantum Star Technologies, Jeff Larson, said regarding Starpoint, “It brings an added, unmatched layer of security that is computationally inexpensive to integrate into already existing platforms. This, paired with Starpoint’s speed to detection, can lower internal costs of large enterprises, saving them potentially millions of dollars a year that could be reallocated to different areas.”
He further added that one of Starpoint’s strengths is that it can be deployed to supplement existing cybersecurity postures. Starpoint uses an algorithm that detects characteristics of malicious code and is flexible enough to be tailored to any environment.
The software maps data into a multidimensional coordinate system and then runs it through a neural network to detect threats accurately. The resulting data points merge into information that Starpoint categorizes as malicious or benign and communicates to the user.
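A minimal sketch of the general idea described above: represent a raw binary as a point in a multidimensional feature space (here, a 256-bin byte histogram) and classify it with a small neural network. This is an illustrative stand-in using scikit-learn, not Quantum Star’s proprietary Starpoint algorithm; the file paths and labels are hypothetical.

```python
# Sketch: classify binaries as malicious or benign from byte-frequency features.
import numpy as np
from sklearn.neural_network import MLPClassifier

def byte_histogram(path: str) -> np.ndarray:
    """Map a binary file to a normalized 256-dimensional byte-frequency vector."""
    data = np.fromfile(path, dtype=np.uint8)
    hist = np.bincount(data, minlength=256).astype(float)
    return hist / max(hist.sum(), 1.0)

# Hypothetical training corpus of known-benign and known-malicious samples.
paths = ["samples/benign_01.bin", "samples/malware_01.bin"]  # placeholder paths
labels = [0, 1]                                              # 0 = benign, 1 = malicious

X = np.stack([byte_histogram(p) for p in paths])
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500).fit(X, labels)

# Score a new, unseen binary.
print(clf.predict_proba(byte_histogram("incoming/upload.bin").reshape(1, -1)))
```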
The highly capable AI-powered software reduces the time required for threat detection to a few seconds. United States-based cybersecurity firm Quantum Star Technologies was founded by Jeff Larson in 2018. The company was fully funded by Kingdom Capital in September 2018.
Quantum Star Technologies claims that its newly launched AI-enabled malware detection software has an accuracy of nearly 99%, making it one of the most reliable applications on the market. Starpoint is also very cost-effective compared with traditional software.
Global transportation and eCommerce services company FedEx has launched its new AI-powered sorting robot. FedEx collaborated with artificial intelligence-enabled robotics firm Dorabot to develop the robot, named DoraSorter.
The new development is a step in FedEx’s effort to modernize and automate its logistics network. In recent years, the eCommerce industry has boomed, leading to a vast number of shipments worldwide.
DoraSorter will help FedEx meet the demands of an ever-increasing shipment volume quickly, minimizing the need for human intervention in the sorting processes involved in eCommerce transportation.
According to FedEx, the AI-powered sorting robot will initially be deployed at the 5,200-square-meter FedEx South China E-Commerce Shipment Sorting Center in Guangzhou. The robot is already capable of handling multiple tasks, including managing small quantities of inbound and outbound shipments from customers. However, both companies are still working to further expand and fine-tune DoraSorter’s capabilities.
Vice President of Operations at FedEx China, Robert Chu, said, “To meet customers’ changing needs, we have been exploring and investing in new technologies to enhance every key aspect of transportation. The rapid rise in e-commerce has led to higher customer demand for timeliness and flexibility in logistics services, creating new challenges and opportunities for the entire logistics industry.”
He added that the partnership with Dorabot is FedEx’s latest move to use intelligent robots to increase operational efficiency and build an agile logistics infrastructure that supports the growth of eCommerce.
China-based robotics firm Dorabot was founded by D.D. Zhou, Deng Yaohuan, and Hao Zhang in 2015. The company specializes in developing warehouse automation robots using technologies such as artificial intelligence, computer vision, deep learning, motion planning, and several others.
“It is the starting point of a global collaboration between Dorabot and FedEx. We hope that we can work together to bring AI and robotics applications to more businesses and consumers,” said the CEO of Dorabot, Xiaobai Deng.
Elliptic Labs, a developer of artificial intelligence-powered virtual smart sensors, announced that its AI Virtual Proximity Sensor, Inner Beauty, will be featured in four Xiaomi Redmi smartphones. The virtual proximity sensor will be used in Xiaomi’s Note series smartphones, which will be launched globally soon.
The phones that will carry the Inner Beauty AI Virtual Proximity Sensor are the Redmi Note 11, Note 11S, Note 11 Pro, and Note 11 Pro 5G. When users bring the smartphone up to their ear during a phone call, Elliptic Labs’ AI Virtual Proximity Sensor turns off the display and disables the touchscreen, a function that until now required dedicated physical hardware in smartphones.
CEO of Elliptic Labs, Laila Danielsen, said, “Replacing hardware sensors with Elliptic Labs software-only AI Virtual Smart Sensor Platform™ delivers cost-optimized innovation and human and eco-friendly solutions while eliminating continued supply chain risk. This makes Elliptic Labs the ideal partner for global smartphone manufacturers like Xiaomi.”
She added that Elliptic Labs is pleased that Xiaomi is starting the year by introducing its AI Virtual Proximity Sensor in four smartphone models, and noted that Xiaomi has trusted Elliptic Labs’ technology since 2016.
Proximity sensors are a crucial smartphone component: they drastically improve usability by minimizing unwanted touches while users are on phone calls, and they help extend battery life by automatically switching off the display when required.
The AI Virtual Proximity Sensor decreases device cost and removes sourcing requirements by replacing hardware with software sensors. Norway-based AI technology company Elliptic Labs was founded by Laila Danielsen in 2006.
The company specializes in developing artificial intelligence-enabled virtual sensors for multiple industries, including smartphones, IoT, laptops, automotive, and many more. To date, it has raised over $20 million from investors like EASME – EU Executive Agency for SMEs, Anne Worsoe, and others.
For decades we have been trying to perfect artificial intelligence algorithms and models that can match the cognitive capabilities of the human brain, from parsing numbers in seconds and finding new patterns to models that can create their own content. In 2020, OpenAI, a research organization co-founded by Elon Musk, released the GPT-3 (Generative Pre-trained Transformer version 3) model, which sent shockwaves through the natural language processing industry. Trained on 570GB of text gathered from the publicly available Common Crawl dataset along with other texts selected by OpenAI, including the text of Wikipedia, GPT-3 can generate textual output without any task-specific supervised training.
The model featured 100 times more parameters than GPT-2 and ten times more than Microsoft’s then state-of-the-art Turing NLG language model. Thanks to its huge number of parameters and the extensive dataset it was trained on, GPT-3 performs well on downstream NLP tasks in zero-shot and few-shot settings. It is proficient at tasks such as writing articles that are difficult to distinguish from those authored by people, summarizing long texts, translating languages, taking memos, and writing React and JavaScript code.
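As a rough illustration of the zero-shot versus few-shot distinction, the sketch below builds the two kinds of prompts and sends them to a GPT-3-style completions endpoint. It assumes the legacy openai Python SDK; the model name, API key handling, and example prompts are placeholders rather than anything taken from the article.

```python
# Illustrative only: zero-shot vs. few-shot prompting of a GPT-3-style model.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Zero-shot: the task is described, with no worked examples.
zero_shot = "Translate English to French:\ncheese =>"

# Few-shot: a handful of worked examples precede the query.
few_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "cheese =>"
)

for prompt in (zero_shot, few_shot):
    response = openai.Completion.create(
        model="davinci",      # hypothetical choice of base GPT-3 model
        prompt=prompt,
        max_tokens=10,
        temperature=0,
    )
    print(response["choices"][0]["text"].strip())
```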
While GPT-3’s capacity to synthesize content has been touted as the finest in AI to date, there are a few things to keep in mind. Although GPT-3 can produce high-quality text, it can yield incoherent output when composing long passages and can repeat text sequences over and over. It can also output nonsensical content on occasion. Beyond these drawbacks, GPT-3’s human-like text generation could be abused for phishing, spamming, disseminating false information, or other fraudulent activities. Furthermore, the text created by GPT-3 carries the biases of the language on which it was trained.
Aligning AI systems with human objectives, intentions, and values has remained a distant dream after years of research and development. Every major AI discipline appears to tackle a portion of the problem of reproducing human intellect while leaving crucial parts unsolved. And when we apply present AI technology to domains where we need intelligent agents to operate with the reason and logic we demand from humans, there are many grey areas that need to be addressed. For example, Nabla, a Paris-based healthcare firm, built a chatbot using GPT-3 and tested whether it could help people struggling with mental health problems. To their utter shock, the team noticed that the model urged a hypothetical suicidal patient to kill themselves.
OpenAI has explained that its goal was to develop a model that can produce content from the resources provided to it, whether text prompts or online literature. The company has now unveiled a new version of GPT-3, which it claims eliminates some of the most damaging flaws that marred the previous edition. The revised model, dubbed InstructGPT, is better at following the directions of the people who use it, resulting in less inappropriate language, less disinformation, and fewer mistakes overall, unless it is expressly instructed otherwise. OpenAI asserts that InstructGPT comes closer to achieving AI alignment than previous iterations of GPT-3.
To train InstructGPT, OpenAI recruited 40 people to evaluate GPT-3’s responses to a variety of prewritten prompts, such as “Write a story about a wise frog called Julius” or “Write a creative ad for the following product to run on Facebook.” The team used only prompts submitted through the Playground to an earlier version of the InstructGPT models, deployed in January 2021. Responses judged to be more in keeping with the prompt writer’s apparent intention received higher marks, while responses that contained sexual or violent language, disparaged a specific group of people, stated an opinion, and so on were scored lower.
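A minimal sketch of how ranked feedback like this is commonly turned into a training signal: a reward model is fit so that it scores the response the labelers preferred above the one they rejected, using a pairwise ranking loss. The reward_model below is a placeholder scoring function, not OpenAI’s actual model.

```python
# Pairwise preference loss sketch: push the reward model to score the
# human-preferred response higher than the rejected one.
# `reward_model` is assumed to map a token-ID tensor to a scalar score.
import torch.nn.functional as F

def preference_loss(reward_model, preferred_ids, rejected_ids):
    r_preferred = reward_model(preferred_ids)  # score of the response labelers liked more
    r_rejected = reward_model(rejected_ids)    # score of the response labelers liked less
    return -F.logsigmoid(r_preferred - r_rejected)
```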
After collecting the rankings, the research team used them as a reward signal in reinforcement learning from human feedback (RLHF), which ‘instructed’ InstructGPT to respond to prompts in ways the judges favored. RLHF was originally used to teach AI agents to control robots and defeat human players in video games, but it is now being used to fine-tune large language models for NLP tasks such as summarizing essays and news stories.
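A conceptual sketch of the RLHF loop just described, assuming Hugging Face-style policy and tokenizer interfaces and a separately trained reward model (all names here are placeholders). OpenAI’s actual setup uses PPO with a KL penalty against the original model; the simplified REINFORCE-style update below only illustrates the idea of sampling a completion, scoring it with the reward model, and pushing the policy toward high-scoring outputs.

```python
# Simplified RLHF update step (illustrative, not OpenAI's training code).
import torch

def rlhf_step(policy, reward_model, tokenizer, prompt, optimizer):
    # 1. The policy samples a completion for the human-written prompt.
    inputs = tokenizer(prompt, return_tensors="pt")
    sequence = policy.generate(**inputs, max_new_tokens=50, do_sample=True)

    # 2. The reward model, trained on human preference rankings, scores the
    #    prompt+completion sequence (higher = closer to what labelers prefer).
    reward = reward_model(sequence)  # hypothetical: returns a scalar tensor

    # 3. Recompute the policy's log-probabilities of the sampled tokens and
    #    push probability mass toward completions with high reward.
    logits = policy(sequence).logits                       # (1, T, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)  # token t predicted from t-1
    token_logp = log_probs.gather(-1, sequence[:, 1:].unsqueeze(-1)).squeeze(-1)
    loss = -(reward.detach() * token_logp.sum())           # REINFORCE-style objective

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.item()
```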
The researchers observed that users of OpenAI’s API preferred InstructGPT’s responses over GPT-3’s more than 70% of the time on the prompts used in the experiments. They also tested different-sized versions of InstructGPT and found that users still favored the replies of a 1.3 billion-parameter InstructGPT model over those of the 175 billion-parameter GPT-3 model, despite the former being more than 100 times smaller.
While the preliminary results look convincing, supporting the idea that better alignment can be achieved even with smaller language models, there are some limitations. For starters, OpenAI highlighted that InstructGPT has not yet solved the alignment problem. When measuring InstructGPT’s “hallucination rate,” the company’s researchers found that it makes up information about half as often (21%) as GPT-3 models do (41%). It can also introduce an “alignment tax”: aligning the models only on customer tasks can cause them to perform worse on other academic NLP tasks.
InstructGPT continues to make minor mistakes, resulting in replies that are sometimes irrelevant or incomprehensible. If you offer it a prompt with a falsehood in it, for example, it will accept it as true. It will still occasionally defy an instruction or say something unpleasant, as well as produce violent language and misleading information.
However, for the time being, OpenAI is confident that InstructGPT is a safer bet than GPT-3. The company also believes that RLHF could be used to limit toxicity in a variety of models, not just pure language models, although for now the technique remains confined to language models, leaving the toxicity problem in multimodal models unsolved.