OpenAI released Triton 1.0, an open-source Python-like programming language for GPUs. It enables researchers with no CUDA (Compute Unified Device Architecture) experience to write highly efficient GPU code. Triton delivers near-peak hardware performance with relatively little effort and can be up to 2x more efficient than equivalent Torch implementations.
In recent times, deep neural network (DNN) models have achieved state-of-the-art results across many domains, ranging from natural language processing to computer vision. However, deep learning models involve heavy, highly parallel computation, requiring multi- and many-core processors. Such high-performance computing (HPC) needs have increased the demand for GPUs that can process large volumes of data rapidly.
Modern deep learning research is often implemented by combining native framework operators, which may require the creation of many temporary tensors. This approach lacks flexibility, is too verbose, and degrades neural network performance. OpenAI’s Triton mitigates this issue by providing an intermediate language and compiler.
Triton’s success lies in its modular system architecture, centered around Triton-IR, which allows the compiler to automatically perform a wide variety of important program optimizations. The team revisited the traditional “Single Program, Multiple Data” (SPMD) thread execution model for GPUs and proposed a blocked algorithm, useful when performing sparse operations. This block-based approach aggressively optimizes programs for data locality and parallelism.
OpenAI’s Triton aims to provide an open-source environment in which code can be written faster, with higher productivity than CUDA and greater flexibility than other existing domain-specific languages (DSLs). Currently, Triton is compatible only with Linux and supports NVIDIA GPUs (Compute Capability 7.0+). Future releases may support AMD GPUs and CPUs. The foundations of the project are described in the MAPL 2019 publication: Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations.
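The blocked SPMD model is easiest to see on a simple element-wise operation. The sketch below is plain Python, not actual Triton code: the hypothetical `add_kernel` stands in for one Triton program instance that owns a contiguous block of the data (on a real GPU, the instances run in parallel).

```python
def add_kernel(x, y, out, pid, BLOCK=4):
    # One "program instance": handles the contiguous block of elements
    # [pid * BLOCK, (pid + 1) * BLOCK), guarding against the array end.
    for i in range(pid * BLOCK, min((pid + 1) * BLOCK, len(x))):
        out[i] = x[i] + y[i]

def launch(x, y, BLOCK=4):
    out = [0] * len(x)
    n_blocks = -(-len(x) // BLOCK)  # ceiling division: one instance per block
    for pid in range(n_blocks):     # sequential here; parallel on a GPU
        add_kernel(x, y, out, pid, BLOCK)
    return out

print(launch([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))  # [11, 22, 33, 44, 55]
```

Because each instance works on a whole block of data rather than a single element (as in the classic SPMD thread model), the compiler can reason about data locality within the block, which is what enables Triton’s automatic optimizations.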
Defense technology startup Shield AI acquires Martin UAV through a definitive agreement on 28 July 2021. Martin UAV is a Texas-based startup that specializes in developing unmanned aircraft and provides cost-effective aerial services to its customers.
Shield AI aims to integrate Martin UAV’s platform with its own solution, Hivemind, to develop aircraft capable of vertical takeoff and landing for the United States Army and Air Force.
Shield AI’s platform uses reinforcement learning, computer vision, and other artificial intelligence techniques to train various kinds of uncrewed vehicles to perform military operations. Martin UAV has developed one of the world’s best unmanned aerial vehicles, capable of delivering eleven hours of flight time and equipped with a single ducted thrust-vectoring fan.
Company officials claim that its aerial vehicle can fly more than ten times as long as its competitors. The vehicle can carry a maximum payload of twenty-five pounds.
Chief Executive Officer of Martin UAV, Ruben Martin, said, “GPS and communications on the battlefield are no longer assured. A great aircraft without an AI to make intelligent decisions will be sidelined against China, Russia, and an increasing number of adversaries who are fielding electronic warfare and anti-air systems. Shield AI is one of the only companies that has operationalized advanced aircraft autonomy on the battlefield.”
He further mentioned that this acquisition would make the company’s V-BAT UAV the world’s first vehicle capable of performing sustained operations in denied environments. Martin UAV’s vehicles have already been tested by the US military for two years across numerous operations. V-BAT also won a competitive selection round held by the US military earlier this year.
Co-founder of Shield AI, Brandon Tseng, said, “Expeditionary. Intelligent. Collaborative. Expeditionary means capability on edge, within the control of the units who need it most. V-BAT is expeditionary today. Intelligent means aircraft that make their own decisions to execute the commander’s intent to accomplish missions with or without reach-back.”
Researchers at McGill University are using new artificial intelligence technology to predict suicidal behavior in students. The COVID-19 pandemic has severely affected students’ mental health, increasing suicidal behavior.
A research team assembled from various universities is developing artificial intelligence algorithms to recognize factors that help predict early signs of suicidal behavior. This technology will allow doctors to treat such students at an early stage and improve their mental health.
Massimiliano Orri of McGill University said, “Many known factors can contribute to the increased risk in university students, such as the transition from high school to college, psychosocial stress, academic pressures, and adapting to a new environment. These are risks that have also been exacerbated by the health crisis triggered by the COVID-19 pandemic, although there is no clear evidence of an increase in suicide rates during the pandemic.”
According to a Ph.D. student at the University of Bordeaux, suicide is the world’s second leading cause of death among individuals aged 15 to 24. The researchers developed the artificial intelligence algorithm by analyzing data from more than five thousand university students in France, collected between 2013 and 2019.
The researchers detected four factors, including anxiety, self-esteem, and depressive symptoms, that can identify suicidal behavior with an accuracy of 80%. According to a survey conducted by the researchers, around 17% of students, both men and women, had shown suicidal behavior while completing their degree courses.
A professor at the University of Bordeaux, Christophe Tzourio, said, “This research opens up the possibility of large-scale screening by identifying students at risk of suicide using short, simple questionnaires, in order to refer them to appropriate care.”
He further added that this technology could give students an alternative to mental health assessments performed by doctors.
Researchers at Harvard University are using artificial intelligence to search for alien technologies in space. Under Harvard University’s Galileo Project, scientists are trying to locate extinct and active extraterrestrial civilizations in the deeper parts of the universe.
The project uses a number of high-powered telescopes as its primary instruments to gather evidence of the presence of aliens. Scientists are using artificial intelligence algorithms to identify alien-built satellites and unidentified aerial phenomena (UAP).
The project has already received funding of $1.75 million. Prof. Avi Loeb, head researcher of the Galileo Project, said, “We can no longer ignore the possibility that technological civilizations predated us.”
According to the scientists, the data gathered from numerous telescopes will be scanned and analyzed by an artificial intelligence algorithm to identify alien existence. Loeb said, “Science should not reject potential extraterrestrial explanations because of social stigma or cultural preferences that are not conducive to the scientific method of unbiased, empirical inquiry.”
He further mentioned that scientists must now start looking through new telescopes, both literally and figuratively. The impact of finding any extraterrestrial establishment would be enormous, as it would affect both our current technological approach and our society.
Loeb had earlier proposed the theory that ‘Oumuamua, the interstellar object that crossed Earth’s orbit in 2017, was alien technology. He believes that such incidents will boost scientists’ willingness to conduct further research on alien existence.
The Galileo Project also aims to study interstellar objects that enter our solar system and to identify any alien satellites that may be spying on Earth. Avi Loeb has collaborated with renowned scientists such as Stephen Hawking and has published more than a hundred scientific research papers to date.
Microsoft partnered with OpenAI to develop GitHub Copilot, which uses a GPT-3-based algorithm to suggest code to users. Many GitHub users have enjoyed Copilot as a pair programmer, and the majority of them embrace its suggestions while coding. However, a SendGrid engineer reported the first serious bug: the tool can leak sensitive, functional API keys, thereby giving access to the services behind them.
Sam Nguyen, a software engineer at SendGrid, got a list of secret API keys when he asked the AI tool for them. API keys are secret strings used to authenticate requests to a service, such as a database. The developer opened a request reporting this concern with a screenshot showing at least four proposed keys. GitHub CEO Nat Friedman acknowledged the issue, stating that “these secrets are almost entirely fictional, synthesized from the training data.”
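Leaks like this are usually caught by scanning text for long, high-entropy, key-like strings. The sketch below is an illustrative assumption, not GitHub’s or SendGrid’s actual scanner; the pattern, the entropy threshold, and the example key are all made up.

```python
import math
import re

def shannon_entropy(s):
    # Bits of entropy per character; real secrets tend to score high.
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

# Loose pattern for key-like tokens: long runs of base64/hex-ish characters.
KEY_PATTERN = re.compile(r"[A-Za-z0-9_\-\.]{24,}")

def find_suspect_keys(text, entropy_threshold=3.5):
    return [tok for tok in KEY_PATTERN.findall(text)
            if shannon_entropy(tok) >= entropy_threshold]

# A fictional, clearly-labeled example key (not a real SendGrid secret).
snippet = 'sendgrid_key = "SG.hypothetical9xQ2LmA7trkeyZ"'
print(find_suspect_keys(snippet))  # ['SG.hypothetical9xQ2LmA7trkeyZ']
```

Real secret scanners combine many provider-specific patterns with entropy checks like this one, which is why Friedman could say the leaked strings were mostly synthesized rather than copied verbatim.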
Although GitHub’s team is working on this issue, it has prompted many open-source developers to consider migrating away from GitHub. Developers have criticized GitHub Copilot, claiming the tool uses copyrighted source code in an unauthorized and unlicensed way.
One of the developers said, “This product injects source code derived from copyrighted sources into their customers’ software without crediting the licensed source code. This significantly violates the terms of the copyright holders’ work.” Currently, Microsoft has released a public version of GitHub Copilot, which is trained on code from public GitHub repositories.
It later plans to release a commercial product version that supports enterprises in understanding their programming styles. This AI technology will not be limited to Microsoft: OpenAI CTO Greg Brockman said the Codex model will be released this summer for third-party developers to tailor their own applications.
The prospect of communicating digitally with someone from beyond the grave is no longer a figment of science fiction. Digital duplicates of the deceased have turned quite a few heads this year since Microsoft was granted a patent for artificial intelligence technology that could bring dead people ‘back to life’ as chatbots. But while it seems like a significant and unexpected milestone in modern technology, one cannot overlook the ethical conundrum of this feat.
Most recently, freelance writer Joshua Barbeau made headlines for virtually bringing his fiancée, Jessica Pereira, ‘back from the dead’ eight years after she passed away. He paid Project December (a software service that uses artificial intelligence to create hyper-realistic chatbots) five dollars for an account on its website and created a new text bot named ‘Jessica Courtney Pereira.’ He then fed the software Pereira’s old Facebook and text messages, along with some background information. The resulting model was able to imitate his fiancée accurately while chatting.
Project December is powered by GPT-3, an autoregressive language model that uses deep learning to produce human-like text. It was developed by the Elon Musk-backed research organization OpenAI. GPT-3 learned to replicate human writing by consuming a massive corpus of human-created text (including Reddit threads), and can generate everything from academic papers to emails.
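“Autoregressive” means the model produces text one token at a time, with each new token conditioned on everything generated so far. The toy character-level sketch below illustrates only that generation loop; the bigram “model” is a deliberately crude stand-in for GPT-3, and the training corpus is invented.

```python
import random

def train_bigram_model(corpus):
    # For each character, record every character observed to follow it.
    model = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, seed, length, rng=None):
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:             # no observed continuation: stop
            break
        out += rng.choice(followers)  # sample the next char given the last one
    return out

model = train_bigram_model("hello hello help")
print(generate(model, "he", 10))
```

GPT-3 follows the same loop, but conditions on the entire preceding context (not just the last character) using a transformer with billions of parameters; that long-range conditioning is what makes its output coherent, as with the chatbot Barbeau built.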
The incident draws a parallel to the popular dystopian series Black Mirror and the movie Her. Without giving away spoilers: in the episode “Be Right Back,” Martha, a young woman, grieves the death of her lover, Ash, who died in a vehicle accident. At Ash’s funeral, Martha learns about a digital service that will allow her to connect with a chatbot version of her late boyfriend, and later signs up for it. In Her, the lonely protagonist dates Samantha, an intelligent operating system, with tormenting repercussions. Both stories depict the psychological anguish that may befall individuals who rely too much on technology.
In December 2020, Microsoft was awarded a patent by the United States Patent and Trademark Office (USPTO) that outlines an algorithmic technique for creating a talking chatbot of a specific individual using their social data. Instead of following the conventional method of training chatbots, this system would use images, voice data, social media posts, electronic messages, and handwritten letters to build a profile of a person.
According to Microsoft, “The specific person [whom the chatbot represents] may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity, etc.” The technology may potentially create a 2D or 3D replica of the individual.
In early 2021, South Korean national broadcaster SBS introduced a new cover of the 2002 ballad I Miss You by Kim Bum Soo, performed in the voice of folk-rock singer Kim Kwang-seok. The twist: Kim Kwang-seok has been dead for nearly 25 years. AI company Supertone reproduced the late singer’s voice using a voice AI system called Singing Voice Synthesis (SVS). The system learned 20 of Kim’s songs after being trained on over 700 Korean songs to enhance accuracy, enabling it to render a new song in Kim’s style.
While all these innovations sound exciting (and admittedly creepy), they raise important questions about privacy breaches and the possibility of misinformation. For instance, the Microsoft chatbot could be misused to put words in the mouth of the “dead person’s surrogate” that they never said in real life, by using crowd-sourced conversational social data to fill in the gaps.
It is also probable that such an artificial intelligence model may soon be able to “think” for itself. As a result, its subject’s “digital existence continues to grow after the physical being has died away.” In this approach, the digital avatar of the deceased would stay current with events, form new ideas, and evolve into an entity based on a genuine person rather than a replica of who they were when they died.
In addition, the act of replicating someone’s voice carries the risk of fraud, misinformation campaigns, and the spread of fake news. Moreover, artificial intelligence simulations of dead people could have a detrimental effect on real-world relationships. They can also worsen the grieving process if users opt to live in denial through regular contact with a chatbot mimicking the dead.
The Kim Kwang-seok cover may have been well received among fans, but creating new works or resurrected voices using artificial intelligence poses copyright concerns. Who is the legal owner of the resulting work: the creator or team behind the AI software, or the AI itself?
The new fad of reviving the dead is only getting started. In the coming years, humans will witness ever more believable performances of beloved dead relatives, artists, and historical figures as technology advances. Unfortunately, the deceased will not have any control over how their simulated avatars are used. Therefore, such issues need to be addressed legally, ethically, and psychologically if artificial intelligence is to continue to be used in this direction.
Google’s DeepMind partnered with the European Molecular Biology Laboratory’s European Bioinformatics Institute (EMBL-EBI) to develop an AI system called AlphaFold, which predicts the three-dimensional structure of a protein from its sequence of amino acids.
Proteins are extremely complex substances that provide structure to cells and organisms. They differ from one another primarily in their sequence of amino acids, which determines how each protein folds.
Protein folding is the physical process by which a protein chain arranges itself into a unique three-dimensional structure. In 1958, Sir John Kendrew and his co-workers produced a low-resolution 3D structure of a protein (myoglobin). This research led many scientists to work on demystifying the hidden structures of proteins.
DeepMind topped the latest Critical Assessment of protein Structure Prediction (CASP14) with high accuracy. The AlphaFold database and source code are freely available to the scientific community; this contribution will aid much advanced biological research.
There is great scope in the medical field for understanding viral processes and emerging mutations through a deep learning model for structure prediction. This novel machine learning approach combines physical and biological knowledge of protein structure and leverages multiple sequence alignments in the design of the deep learning model.
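A multiple sequence alignment stacks evolutionarily related sequences so that corresponding positions line up; strongly conserved columns hint at structurally important residues, which is one reason such alignments are useful input for structure prediction. The toy sketch below reads per-column conservation off an alignment; the sequences are invented, and this is only an illustration of the input format, not AlphaFold’s actual processing.

```python
from collections import Counter

# Toy multiple sequence alignment (MSA): homologous protein fragments,
# one string per species, '-' marking alignment gaps. Sequences are made up.
msa = [
    "MKT-LV",
    "MKS-LV",
    "MRT-LI",
]

def column_conservation(msa):
    # Fraction of sequences sharing the most common symbol in each column.
    cols = zip(*msa)
    return [Counter(col).most_common(1)[0][1] / len(msa) for col in cols]

print(column_conservation(msa))
```

Columns where the fraction is 1.0 are perfectly conserved across species; variable columns carry co-evolution signals that deep learning models can exploit to infer which residues sit close together in 3D space.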
“This will be one of the most important datasets since the mapping of the Human Genome,” said EMBL Deputy Director-General and EMBL-EBI Director Ewan Birney. AlphaFold has provided structures for the roughly 20,000 proteins of the human proteome, along with the proteomes of 20 other biological organisms, totaling 350,000 protein structures.
All the data is freely available for academic and commercial use under the Creative Commons Attribution 4.0 (CC-BY 4.0) license. In the coming months, the organization plans to expand the database to cover a large proportion of all cataloged proteins (almost 100 million structures).
Huawei fires the head of its autonomous driving department, Su Qing, for criticizing self-driving car giant Tesla at the World Artificial Intelligence Conference.
He made numerous controversial statements, claiming that Tesla’s self-driving vehicles are responsible for the deaths of many individuals.
According to a translated statement, Qing said, “Tesla has shown a very high accident rate in the last few years, and the types of accidents are very similar, from the killing of the first person to the killing of the most recent. I deliberately use the word ‘killing’ here. That may sound serious to everyone. But think about it: when machines enter human society and coexist with humans, they are bound to cause accidents. It’s ugly.”
He further said, “The point is murder, and we want to reduce the possibility of an accident as much as possible. In terms of probability, this is possible.”
Chinese media house Global Times confirmed through a tweet that the escalation was due to Qing’s critical comments on Tesla:
“Su Jing, a senior executive of Huawei’s car unit, was removed from his position for ‘inappropriate remarks’ about #Tesla. Su publicly said in July that Tesla has a ‘relative high accident rate’ and is ‘killing people.’”
Huawei recently clarified its position on this matter by saying the company respects every competitor and their efforts in the autonomous driving industry. It believes in the philosophy of developing new technologies together with the industry.
It is a fact that a few people have died in crashes involving Tesla’s Autopilot feature. Still, the company has been outspoken about it and has clearly acknowledged and defined the limitations of its currently available Autopilot technology.
Su Qing has been removed from his position as head of the intelligent driving solutions department of Huawei’s Intelligent Vehicle BU and transferred to the strategic reserve team, where he will receive additional training.
DataRobot raises $300 million in its Series G funding round and announces plans to acquire the MLOps software startup Algorithmia. With this fresh funding, DataRobot’s market valuation has increased to $6.3 billion.
The company plans to use the funds to conduct research and enhance its augmented intelligence platform. It also wants to integrate Algorithmia’s sharp focus on model serving with DataRobot’s model management platform to provide low-cost model management services, including deep learning operations for natural language processing, to its customers worldwide.
Algorithmia developed an artificial intelligence platform that enables IT professionals to effectively manage high-volume model production in a secure environment and lets businesses integrate MLOps into their existing management platforms.
DataRobot will use Algorithmia’s expertise to develop industry-leading MLOps infrastructure for its customers.
The CEO of DataRobot, Dan Wright, said, “This new investment further validates our vision for combining the best that humans and machines have to offer to power predictive insights, enhanced data-driven decisions, and unprecedented business value for our customers.”
He further added that the company’s demand is increasing on a global scale, and they are grateful to the investors for supporting them in developing better solutions.
Algorithmia is a Washington-based startup founded by Diego Oppenheimer and Kenny Daniel in 2014. It has received total funding of $38.1 million from investors including Gradient Ventures and Madrona Venture Group.
Co-founder and CEO of Algorithmia, Diego Oppenheimer, said, “It’s been clear to us for many years that DataRobot shares this philosophy, and we’re thrilled to combine our dedication of enabling customers to thrive in today’s market by delivering more models in production, faster while protecting their business.”
He also mentioned that they understand the importance of increasing their customer reach to let organizations know the value of their machine learning model. Diego believes that this acquisition will be a step forward towards their goal.
India’s IT industry body NASSCOM announces the dates of its XperienceAI Virtual Summit 2021. After the massive success of last year’s event, NASSCOM will hold the second edition of its virtual summit in August.
The online summit will run from 3 to 6 August 2021, from 10:00 AM to 7:00 PM IST each day. The summit will host veterans of the artificial intelligence industry to discuss innovations that can revolutionize various sectors using artificial intelligence technologies.
Last year the summit hosted more than 3,000 participants from 20 countries. This year, XperienceAI has the theme ‘Artificial intelligence as a catalyst for a better normal.’
It aims to examine certain vital areas that artificial intelligence can help improve, such as socio-economic growth, along with India’s roadmap for artificial intelligence innovation and adoption.
The summit will also focus on developing artificial intelligence solutions to tackle challenges like climate change, and on the role of AI in building a circular economy. More than 60 reputed speakers, including Don Tapscott, Dilip Asbe, Debjani Ghosh, Ashwini Rao, Arunima Sarkar, and Arundhati Bhattacharya, will be present at the event.
Speakers will also discuss the application of artificial intelligence to predict future pandemics. The four-day event will have live workshops, use case discussions, and exclusive keynote sessions. The summit has been sponsored by many industry-leading enterprises like Intel, TCS, Accenture, HCL, and Fractal.
The XperienceAI Summit 2021 will be helpful for people working in various sectors. Startups, coders, policymakers, tech companies, government officials, public sector workers, research scholars, and academics will find the summit useful and informative.
Interested candidates can register for free on the official website of NASSCOM.