Researchers from the University of Washington School of Medicine combined CRISPR technology with a protein designed with artificial intelligence to awaken individual dormant genes by disabling the chemical “off switches” that silence them.
This approach will allow researchers to understand the role of individual genes in normal cell growth, development, aging, and diseases such as cancer. With this approach, researchers can safely upregulate specific genes to influence cell activity without permanently changing the genome or causing unintended mistakes.
The research findings by Hannele Ruohola-Baker and Shiri Levy have been published in the journal Cell Reports. Ruohola-Baker, professor of biochemistry and associate director of ISCRM, led the research. David Baker, also a professor of biochemistry and head of the UW Medicine Institute for Protein Design (IPD), developed the AI-designed protein at the institute.
“This was a very important finding,” said Ruohola-Baker. “TATA boxes are scattered throughout the genome, and current thinking in biology is that the important TATA boxes are very close to the gene transcription site, and the others don’t seem to matter. The power of this tool is that it can find the critical PRC2-dependent elements, in this case, TATA boxes that matter.”
With this new technique and the AI-designed protein, researchers can awaken individual dormant genes and control gene activity without altering the genome’s DNA sequence, by targeting the chemical changes that help package genes in our chromosomes and regulate their activity. Because these modifications occur on top of genes rather than inside them, they are called epigenetic; the chemical modifications responsible for regulating gene activity are called epigenetic markers.
Scientists are interested in epigenetic modifications because epigenetic markers accumulate with time, contribute to aging, and can affect the health of future generations, since we can pass them on to our children.
The Ukrainian government launches its new NFT Museum of War to preserve the memory of the Russian invasion and to support crowdfunding.
“While Russia uses tanks to destroy Ukraine, we rely on revolutionary blockchain tech. @Meta_History_UA NFT-Museum is launched. The place to keep the memory of war. And the place to celebrate the Ukrainian identity and freedom. Check here: https://t.co/IrNV0w54tg,” read the announcement tweet.
The Meta History NFT Museum, a blockchain-based history of the Russian invasion of Ukraine, was announced by Ukraine’s Ministry of Digital Transformation.
On Friday, Ukraine began auctioning a collection of NFTs as part of a cryptocurrency fundraising campaign. The NFT museum is a collection of digital images, each representing a particular day of the war.
The NFTs include images of various incidents such as silhouettes of jets, screengrabs of news headlines, a cartoon-style illustration of an explosion, and others.
Officials said that artists interested in being exhibited at the museum must submit a portfolio of their work, which will be reviewed by art directors to see if the work is appropriate. Once approved, the NFTs will be minted on the Ethereum blockchain.
Meta History’s website states that the NFTs will be created in the chronological order of events, so as to preserve and respect the genuine history.
Moreover, artists will also represent how the war is viewed by the peaceful citizens of Ukraine. The government hopes that by launching this new initiative, it will be able to preserve the memory of historical events, distribute accurate information to the global digital community, and raise funds for Ukraine’s support.
“100 percent of funds from the sale will go directly to the official crypto-accounts of the Ministry of Digital Transformation of Ukraine to support the army and civilians,” mentioned the website.
During the war, cryptocurrencies have considerably helped Ukraine gather funds to support its military operations. To date, crypto fundraising has allowed the Ukrainian government to collect over $65 million, which has turned out to be of great help as the country defends itself against Russia.
Technology giant Microsoft has been recently accused of paying millions of dollars in bribes each year.
Former Microsoft employee and whistleblower Yasser Elabd claims that Microsoft spends more than $200 million each year on bribes and kickbacks tied to the company’s overseas contracting operations in nations such as Ghana, Nigeria, Zimbabwe, Qatar, and Saudi Arabia.
Elabd had a successful run as director of emerging markets for the Middle East and Africa, earning numerous promotions.
In a blog post published on Lioness, he described a $40,000 expenditure request that piqued his interest. According to Elabd, the customer who was to receive the sizable payment was not listed in Microsoft’s internal database of potential clients and was unqualified for the project’s scope.
“The partner in the deal was underqualified for the project’s outlined scope, and he wasn’t even supposed to be doing business with Microsoft: he had been terminated four months earlier for poor performance on the sales team, and corporate policy prohibits former employees from working as partners for six months from their departure without special approval,” mentioned the blog.
The problem was escalated to upper management, and the legal and HR teams halted the $40,000 expenditure. However, the officials did not check further into the Microsoft employees who were coordinating the false deal.
“We are committed to doing business in a responsible way and always encourage anyone to report anything they see that may violate the law, our policies, or our ethical standards,” said Becky Lenaburg, Microsoft vice president and deputy general counsel for compliance and ethics, regarding the incident.
She further mentioned that the company believes it has already examined and resolved these allegations, which date back many years, and that Microsoft has also worked with government agencies to address any issues that arose.
The Government of Madhya Pradesh announces that it plans to introduce artificial intelligence (AI) as a subject in schools across the state.
According to the government, the AI course will be included in the syllabus for students in grade 8 and higher. It is a one-of-a-kind initiative, making Madhya Pradesh the first state in India to introduce artificial intelligence as a school subject.
The announcement was made by the Chief Minister of Madhya Pradesh, Shivraj Singh Chouhan, on Sunday, 27 March.
In addition to the AI course, the MP government will also launch a veterinary telemedicine service, allowing livestock owners to seek advice over the phone on diseases affecting cows and other animals.
Shivraj Singh Chouhan said, “A decision has been taken to arrange veterinary telemedicine facility so that livestock keepers can get advice on the phone in case of any ailment to cows and other animals.”
Farmers will also be able to contact experts over the phone regarding agriculture-related difficulties and crop diseases, according to the chief minister. The decision comes after a two-day brainstorming session of the Madhya Pradesh cabinet. Other significant decisions made by the state cabinet include the establishment of Sanjivani clinics in urban areas and a transportation policy for rural areas.
United States-based facial recognition company Clearview AI announces the release of version 2.0 of its widely used facial recognition platform, which is primarily used by law enforcement agencies.
Clearview 2.0 is a significant upgrade to the company’s platform, with a database of more than 20 billion publicly available facial pictures.
According to Clearview AI, its facial recognition algorithm is the world’s most accurate, and the recently launched second version comes with improved compliance and management capabilities to better support law enforcement investigations.
Clearview AI 2.0 offers training for registered users along with additional verification measures that ensure the tool is being used legally.
Co-founder and CEO of Clearview AI Hoan Ton-That said, “Clearview 2.0 adds several security and functionality enhancements to our already proven technology, all designed to help investigators employ the technology in a more user-friendly way that facilitates quicker matches to build cases that are 100 percent compliant.”
He further added that this is in addition to their database of over 20 billion photos, which is the world’s largest of its kind. The company said that the second version of its facial recognition platform is currently being rolled out to existing clients, including the FBI, the Department of Homeland Security, and multiple local agencies operating in the United States.
According to NVIDIA, inverse rendering is a technique that uses artificial intelligence to mimic how light behaves in the real world, allowing researchers to reconstruct a 3D scene from a collection of 2D photographs taken from various angles.
NVIDIA researchers have developed a highly capable ultra-fast neural network training and rendering system that can perform inverse rendering in a matter of seconds. This method was used by NVIDIA to develop neural radiance fields, or NeRF, a popular new technology.
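As a rough illustration of the underlying idea, the sketch below shows a tiny NeRF-style network that maps a 3D point and a viewing direction to a color and a volume density. It is a simplified approximation of the general NeRF concept, not NVIDIA's Instant NeRF implementation (which relies on a multiresolution hash encoding and fused CUDA kernels); the layer sizes and frequency encoding here are illustrative assumptions.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sin/cos features so the MLP can fit high-frequency detail."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(torch.sin((2.0 ** i) * x))
        feats.append(torch.cos((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    """Maps a 3D point and a viewing direction to an RGB color and a volume density."""
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        enc_dim = 3 * (1 + 2 * num_freqs)  # size of an encoded 3D vector
        self.num_freqs = num_freqs
        self.trunk = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)          # sigma (volume density)
        self.color_head = nn.Sequential(                  # RGB, conditioned on view direction
            nn.Linear(hidden + enc_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        h = self.trunk(positional_encoding(xyz, self.num_freqs))
        sigma = torch.relu(self.density_head(h))
        rgb = self.color_head(torch.cat([h, positional_encoding(view_dir, self.num_freqs)], dim=-1))
        return rgb, sigma

# Query the field at 1,024 random sample points along camera rays.
model = TinyNeRF()
points = torch.rand(1024, 3)
dirs = torch.nn.functional.normalize(torch.rand(1024, 3), dim=-1)
rgb, sigma = model(points, dirs)
print(rgb.shape, sigma.shape)  # torch.Size([1024, 3]) torch.Size([1024, 1])
```

In a full pipeline, these per-point colors and densities would be composited along each camera ray to render an image, and the network would be trained so the rendered images match the input photographs.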
Vice President of Graphics Research at NVIDIA, David Luebke, said, “If traditional 3D representations like polygonal meshes are akin to vector images, NeRFs are like bitmap images: they densely capture the way light radiates from an object or within a scene.”
He further added that Instant NeRF could be as essential to 3D as digital cameras and JPEG compression were to 2D photography, vastly enhancing the speed, convenience, and reach of 3D capture and sharing.
The company claims that this new artificial intelligence-powered solution is the quickest NeRF technology to date, with speedups of up to 1,000x in some circumstances. The model can render the final 3D scene in just a few seconds after a short duration of training on a few dozen still photographs.
Apart from the model, multiple other technologies, including new GPUs, CPUs, and autonomous driving tech, were unveiled during the GTC event.
The government of Telangana announces the launch of its new Forest AI Grand Challenge program under its artificial intelligence mission initiative.
The Telangana government has collaborated with consulting, technology, and digital transformation services company Capgemini to offer this new initiative, which aims to transform the forest department through new innovations.
According to the plan, this program will allow the forest department to collect detailed, large-scale information regarding the estimated number, location, and range-specific behavior of animals in the forest.
Participants will need to develop novel artificial intelligence-powered solutions to help the forest department in various tasks, including wildlife conservation efforts, a matter of prime importance.
“There is tremendous potential for governments and innovators to collaborate through open innovation. I am quite confident that this initiative of T-AIM will help the Telangana Forest Department implement novel AI-based solutions for wildlife conservation,” said Jayesh Ranjan, Principal Secretary, Government of Telangana, during the launch of this one-of-a-kind program.
Interested participants from all across India need to submit their approach notes before April 2022. The authorities will then evaluate the submissions and declare the winners in June 2022.
The Telangana government is making numerous efforts to boost artificial intelligence adoption in the region, ranging from AI in education and agriculture to initiatives like the Revv accelerator program, which helps artificial intelligence startups scale their businesses.
The government says that Telangana will take AI to the next level, aiming to become the top center for AI innovation and implementation, and has collaborated with NASSCOM and multiple other organizations to capitalize on the AI opportunity.
Vice President and Head of CSR at Capgemini, Anurag Pratap, said, “Sustainability is core to the commitment of Capgemini to make the world a better place for our people. In the last few years, we have developed and supported some of the most innovative technology-based projects that focus on ensuring Sustainability.”
He further mentioned that this collaboration with T-AIM is part of a larger initiative to foster and leverage an “Open Innovation Ecosystem” to produce, identify, and promote revolutionary ideas in quality education, sustainability, sustainable cities and communities, and good health and wellbeing.
Microsoft has announced an improvement to its translation services that promises significantly enhanced translations across a range of language pairs, thanks to new machine learning models based on Microsoft’s Project Z-Code. These improvements will increase the quality of machine translations and allow Translator, a Microsoft Azure Cognitive Service that is part of Azure AI, to support languages beyond the most common ones, including those with limited training data. Z-Code is part of Microsoft’s XYZ-Code initiative, which aims to combine text, vision, and audio models across multiple languages to produce more capable and practical AI systems. The update employs a “sparse Mixture of Experts” (MoE) technique, which has resulted in new models scoring between 3% and 15% higher than the company’s earlier models in blind evaluations.
Last October, Microsoft updated Translator with Z-code upgrades, adding support for 12 more languages, including Georgian, Tibetan, and Uyghur. Now the recently released new version of Z-code, called Z-code Mixture of Experts, can better grasp “low-resource” linguistic subtleties. According to Microsoft, Z-code uses transfer learning, an AI approach that applies knowledge from one task to another, similar task, to enhance translation for the estimated 1,500 “low-resource” languages worldwide.
Microsoft’s model, like all others, learned from examples in enormous datasets gathered from both public and private archives (e.g., ebooks, websites such as Wikipedia, and hand-translated documents). Low-resource languages are those with fewer than one million example sentences, which makes creating models for them more difficult, as AI models perform better when given more examples.
The new models learn to translate between various languages simultaneously using the Mixture of Experts. Its fundamental function is to split tasks into several subtasks, which are then delegated to smaller, more specialized models known as “experts.” In this scenario, the model uses more parameters while dynamically picking which parameters to employ for a particular input and then deciding which task to delegate to which expert based on its own predictions. This allows the model to specialize a subset of the parameters (experts) during training. The model then employs the appropriate experts for the job during runtime, which is more computationally efficient than using all of the model’s parameters. To put it another way, it’s a model that contains multiple more specialized models.
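The sparse routing idea can be made concrete with a small sketch. The following is a toy PyTorch illustration of a sparse Mixture-of-Experts layer, not Microsoft's production Z-code architecture; the layer sizes, the number of experts, and the top-2 routing are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy sparse Mixture-of-Experts layer: a router picks the top-k expert
    feed-forward networks for each token, so only a fraction of the layer's
    parameters are used for any given input."""
    def __init__(self, d_model=512, d_ff=1024, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.top_k = top_k

    def forward(self, x):                               # x: (num_tokens, d_model)
        gate_logits = self.router(x)
        weights, indices = gate_logits.topk(self.top_k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e             # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Route a batch of 16 token embeddings through the layer.
layer = SparseMoELayer()
tokens = torch.randn(16, 512)
print(layer(tokens).shape)  # torch.Size([16, 512])
```

The key point the sketch shows is that the total parameter count grows with the number of experts, while the compute per token stays close to that of a dense layer, since each token only activates its top-k experts.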
The new system can translate directly between ten languages, eliminating the need for multiple systems. Translator formerly required 20 different models to cover those ten languages: one for English to French, one for French to English, one for English to Macedonian, one for Macedonian to English, and so on. In addition, Microsoft has recently begun to use Z-Code models to boost additional AI functions, such as entity recognition, text summarization, custom text classification, and key-phrase extraction.
MoEs were initially proposed in the 1990s, and subsequent research publications from organizations such as Google detail experiments with MoE language models containing trillions of parameters. Microsoft, however, asserts that Z-code MoE is the first MoE language model to be used in production.
For the first time, Microsoft researchers worked with NVIDIA to deploy Z-code Mixture of Experts models in production on NVIDIA GPUs. NVIDIA created special CUDA kernels, using the CUTLASS and FasterTransformer libraries, to run MoE layers efficiently on a single V100 GPU. The teams were then able to deploy these models with a more efficient runtime using the NVIDIA Triton Inference Server. Compared to standard GPU (PyTorch) runtimes, the new runtime achieved up to a 27x speedup.
The Z-code team also collaborated with Microsoft DeepSpeed researchers to discover how to train large Mixture of Experts models, such as Z-code, as well as smaller Z-code models, for production settings.
[Figure: Quality gains of Z-code MoE models over existing models; languages are ordered by training data size. Source: Microsoft]
Microsoft used human assessment to compare the new MoE’s quality enhancements to the existing production system, and found that Z-code-MoE systems beat individual bilingual systems, with average increases of 4%. For example, the models upgraded English to French translations by 3.2%, English to Turkish translations by 5.8%, Japanese to English translations by 7.6%, English to Arabic translations by 9.3%, and English to Slovenian translations by 15%.
The Microsoft Z-code-based MoE translation model is now available, by invitation, to Translator customers for document translation. Document Translation is a feature that translates whole documents, or large groups of documents, in a variety of file formats while maintaining their original formatting. It is also worth mentioning that Microsoft Translator is the first machine translation provider to offer live Z-code Mixture of Experts models to customers.
Microsoft confirms that its systems were compromised during a cyberattack carried out by LAPSUS$, a hacker group.
According to Microsoft, LAPSUS$ had hacked into one of its accounts, giving it “limited access” to corporate networks but not customer data.
Microsoft provided clarity on this matter through a recently uploaded blog. LAPSUS$, also known as DEV-0537, is famous for employing a pure extortion and destruction strategy with no ransomware payloads.
Microsoft had already been conducting an internal investigation into the compromise, and the public disclosure of the incident prompted the team to accelerate its response.
“Our cybersecurity response teams quickly engaged to remediate the compromised account and prevent further activity. Microsoft does not rely on the secrecy of code as a security measure, and viewing source code does not lead to elevation of risk,” said Microsoft.
The notorious South American hacking group LAPSUS$ had earlier targeted multiple companies, including giants such as NVIDIA, Samsung, Okta, and Ubisoft.
NVIDIA validated that its network was compromised in a cyberattack that resulted in a leak of its proprietary data and employee login information. LAPSUS$ claimed to have a GPU driver capable of bypassing NVIDIA’s Ethereum mining limiter on the company’s RTX 3000 graphics cards.
Okta mentioned that an attempt was made to compromise the account of a third-party customer support engineer working for one of its subprocessors; the attempt was investigated and contained.
LAPSUS$ initially targeted cryptocurrency accounts, compromising wallets and stealing funds. Recently, however, the group has diversified and now attacks various telecommunication, higher education, and government organizations.
Microsoft says that LAPSUS$ understands the interrelated nature of identities and trust relationships in modern technology ecosystems and exploits its access to one organization to reach that organization’s partners or suppliers.
Moreover, the group does not operate covertly; instead, it publicizes its cyberattacks on companies.
A team from Collaborations Pharmaceuticals, Inc. repurposed a drug discovery AI in a recent work published in the journal Nature Machine Intelligence. In just six hours, it generated 40,000 potential new chemical weapons, some of which were very similar to the deadliest nerve agents ever developed. The publication, Dual use of artificial-intelligence-powered drug discovery, is a wake-up call for organizations working on AI in drug development.
The researchers at Collaborations Pharmaceuticals were using AI to search for molecules that could be used to treat diseases, and part of that process included screening out molecules that could be lethal. The report states that they previously built MegaSyn, a commercial de novo molecule generator guided by machine learning model predictions of bioactivity, with the objective of discovering novel therapeutic inhibitors of human disease targets.
As part of a request from a symposium (the Convergence conference) to study the potentially negative ramifications of new technologies, the team sought to investigate how quickly and easily an artificial intelligence system might be misused if it were assigned a harmful rather than a beneficial objective. Using their AI MegaSyn, which normally rewards bioactivity (how effectively a drug interacts with a target) while penalizing toxicity, they simply flipped the toxicity term so that higher predicted toxicity was scored more favorably, while keeping the bioactivity reward.
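Conceptually, the change the team describes amounts to flipping the sign of one term in the generator's scoring function. The sketch below is purely illustrative; predict_bioactivity and predict_toxicity are hypothetical placeholder functions standing in for trained property predictors, not MegaSyn's actual API, whose details are not public.

```python
# Purely illustrative: the two predictor functions are hypothetical stand-ins for
# the trained ML property predictors that guide a generative model such as MegaSyn.

def normal_objective(molecule, predict_bioactivity, predict_toxicity):
    """Usual drug-discovery objective: reward predicted bioactivity, penalize predicted toxicity."""
    return predict_bioactivity(molecule) - predict_toxicity(molecule)

def inverted_objective(molecule, predict_bioactivity, predict_toxicity):
    """The flipped objective from the experiment: toxicity is rewarded rather than
    penalized, while the bioactivity reward is kept, steering generation toward
    compounds that are both bioactive and highly toxic."""
    return predict_bioactivity(molecule) + predict_toxicity(molecule)

# Dummy predictors for demonstration only.
score = inverted_objective("CCO", lambda m: 0.5, lambda m: 0.25)
print(score)  # 0.75
```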
According to the researchers, ‘The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery.’
Even their studies on Ebola and neurotoxins, which had raised questions about the possible harmful outcomes of their machine learning models, had not alarmed them. They were largely unaware of the harm the AI could cause until they steered MegaSyn toward generating nerve-agent-like compounds similar to VX, alongside other chemical warfare agents. VX is a banned nerve agent that was used to assassinate Kim Jong-nam, the half-brother of North Korean leader Kim Jong Un. It is tasteless and odorless and can cause a human to sweat and twitch even if only a small amount is ingested. It works by inhibiting the enzyme acetylcholinesterase, which aids muscular action. VX is fatal because it prevents the diaphragm, or lung muscles, from moving, paralyzing the lungs; a higher dose can produce convulsions and cause a person to stop breathing. The MegaSyn AI made several startling findings over the six hours it was operational, one of which was suggesting VX itself. It also generated new compounds projected, based on estimated LD50 values, to be more toxic than publicly known chemical warfare agents.
While no actual molecules were synthesized as part of the experiment, the authors noted that numerous firms offer chemical synthesis services and that this space is largely unregulated. They expressed concern that, with a few adjustments, it would be easy to create new, very dangerous substances that could be deployed as chemical weapons.
Collaborations Pharmaceuticals has since deleted the library of generated compounds and is planning to limit the usage of its technologies. The authors propose a hotline for reporting potential misuse to authorities, as well as a code of conduct for everyone working in AI-focused drug discovery, similar to The Hague Ethical Guidelines, which promote ethical behavior in the chemical sciences.
The researchers are now urging a refreshed insight into how artificial intelligence systems could be exploited for malevolent objectives. They believe that more awareness, stronger guidelines, and stricter controls in the research community might help us avoid the dangers that AI capabilities could otherwise lead to.