
Meta launches PyTorch Live to build AI-powered mobile applications


At the PyTorch Developer Day 2021 conference, Meta launched PyTorch Live, which uses a single programming language to build AI-based applications for both Android and iOS. The mission of PyTorch rests on four pillars: bringing cutting-edge research to developers, co-developing with many stakeholders, offering modularity so developers can work with their preferred tools, and remaining performant and production-oriented. Building on these four criteria, Meta introduced PyTorch Live.

PyTorch Live works within the stringent resource restrictions of mobile devices and reduces the workload involved in creating novel ML-based mobile applications. It is a set of tools for building AI-powered mobile applications that run on both Android and iOS.

Usually, building apps that work across different platforms requires expertise in multiple programming languages, which increases the cost of deploying mobile models on different devices. In addition, developers would need to separately configure the project and build a UI (user interface) for each platform, thereby slowing down the app development process.

Instead of writing the same app twice in two different programming languages, PyTorch Live uses JavaScript as a unified language to write and build apps for both platforms.

Read more: Hopkins to use Artificial Intelligence to Promote Healthy Aging 

PyTorch Live is powered by two successful open-source projects, PyTorch Mobile and React Native. PyTorch Mobile is a runtime for performing on-device inference with models deployed in mobile applications, while the React Native library is used to build interactive user interfaces for Android and iOS.

To design and build AI-powered mobile applications, developers can use PyTorch Live’s CLI (command-line interface), Data Processing API, and cross-platform apps. The CLI quickly sets up the mobile development environment and bootstraps mobile app projects, while the Data Processing API is used to prepare and integrate custom models when building a new mobile application. The cross-platform apps use PyTorch Live APIs to build AI-powered mobile apps for Android and iOS. 

Developers can build their own user interfaces around models using PyTorch Live’s core APIs, such as the Camera API and the Canvas API. The Camera API is used to build a UI that identifies objects in an image captured by the user, while the Canvas API is used to build a UI that lets a user draw letters or digits for the model to recognize.

According to Meta, PyTorch Live (available on GitHub) will also let developers work with audio and video data in the near future.


Hopkins to use Artificial Intelligence to Promote Healthy Aging


Johns Hopkins has received a grant to use artificial intelligence to promote healthy aging. The National Institute on Aging has allocated over $20 million to Hopkins to execute its plans to promote healthy aging. 

This new development will considerably help in providing a better lifestyle and living experience to senior citizens. Johns Hopkins will use the allocated funds over five years to build an AI and technology collaboratory (AITC). 

The new collaboratory will have members from the Johns Hopkins University schools of medicine and nursing, the Whiting School of Engineering, and the Carey Business School. The collaboratory will also have members from various industries, senior citizens of the country, and technology developers. 

Read More: DeepMind Makes Huge Breakthrough by Discovering New Insights in Mathematics

Rama Chellappa, a Bloomberg Distinguished Professor of electrical and computer engineering, and Peter Abadir said, “This new enterprise is attempting to disrupt these problems in ways that will lengthen the years that people have to enjoy independent, highly functional lives, free of cognitive impairment.” 

He further added that many older adults suffer from multiple health issues and experience functional and cognitive declines that prevent them from living independently for long. 

“The excitement is that our work can help physicians use the technology as markers for measuring the evolution of age-related diseases, like dementia and Alzheimer’s, and predicting falls using patterns and behaviors of older adults,” added Chellappa. 

He also mentioned that predicting behaviors and understanding how individuals age is an arduous task. Many experts believe that this new initiative will help drastically reduce the total number of deaths among those 65 and older. 

However, the success and reach of this newly launched initiative will depend upon how citizens choose to adopt and use the AI-powered technology.  


Clearview AI to win US patent for Facial Recognition Technology


Artificial intelligence company Clearview AI is set to win a new United States patent for its facial recognition system. This development will allow companies to use Clearview AI’s technology once they pay the required administrative fees. 

Clearview AI’s artificial intelligence-powered face recognition system searches social media and adds images of users to its database. However, various experts and critics are concerned about the potential growth of similar kinds of technologies because of the new Clearview AI patent. 

Experts believe it is essential that governments and lawmakers regulate such facial recognition systems to ensure the ethical use of these new technologies. 

Earlier this year, the Royal Canadian Mounted Police (RCMP) used Clearview AI’s facial recognition system, violating the country’s Privacy Act. The Privacy Commissioner of Canada had submitted a special report regarding the same to the Parliament. 

Read More: Odisha Government to use AI camera to check Torture in Police Custody

Co-founder and CEO of Clearview AI, Hoan Ton-That, said, “There are other facial recognition patents out there — that are methods of doing it — but this is the first one around the use of large-scale internet data. As a person of mixed race, having non-biased technology is important to me.” 

Clearview AI’s system has already been used by various law enforcement agencies, including the FBI and the Department of Homeland Security. The platform has been consistently criticized for illegally fetching and storing images of social media users without their consent. 

According to experts, Clearview AI’s method of harvesting and storing pictures is a complete violation of social media users’ basic right to privacy. Responding to the criticism, Ton-That said, “All information in our datasets are all publicly available info that people voluntarily posted online — it’s not anything on your private camera roll.”

New York-based artificial intelligence firm Clearview AI was founded by Hoan Ton-That and Richard Schwartz in 2017. To date, the company has raised over $38 million across three funding rounds from investors including Kirenaga Partners, Hal Lambert, and Peter Thiel. 


Odisha Government to use AI camera to check Torture in Police Custody


The government of Odisha plans to use AI-powered CCTV cameras to check and monitor any kind of torture of arrested individuals in police custody. The new system will ensure that no accused is mistreated while in police custody. 

According to the plan, all previously installed CCTV cameras in police stations across the state will be replaced with the newly designed artificial intelligence-powered CCTV cameras. 

A report of the National Human Rights Commission pointed out that five people die every day in India while they are under police custody. The new AI camera system aims to reduce this figure to an absolute zero in the state of Odisha. 

Read More: MRI and AI can Detect Early Signs of Tumor Cell Death after Novel Therapy

The AI-powered camera system will use IP-based CCTV cameras, which will be able to send alarms to the respective authorities when police officers assault anyone within the premises of a police station. Apart from torture, the AI surveillance system will also help reduce corruption within the system, including bribery. 

A senior officer at Odisha Computer Application Center said, “Efforts are on to use a defined software through which cameras can detect such activities including money being taken by the police from the accused or suspect.” 

Earlier this year, the Supreme Court of India instructed every police station to install CCTV cameras to help keep track of happenings on their premises. The installation of AI-powered CCTV camera systems has already reached the completion stage in various cities of Odisha, such as Bhubaneswar, Cuttack, Puri, Khurda, and Jagatsinghpur. A total of twenty AI CCTV cameras will be installed to cover multiple locations in police stations. 

Manoj Kumar Mishra, Secretary of State Electronics and Information Technology Department, said, “In our quest to bring in more transparency in government institutions under that 5T charter, the state government will expedite installation of CCTV cameras in all 593 police stations.” He also mentioned that this would enable authorities to monitor police stations using the latest technology. 


DeepMind Makes Huge Breakthrough by Discovering New Insights in Mathematics

Image Credit: Analytics Drift Design Team

DeepMind is a London-based artificial intelligence company owned by Alphabet, the parent company of Google. DeepMind was previously best known for developing a system that could defeat the greatest human players at the strategy game Go, a landmark breakthrough in artificial intelligence. Recently, using DeepMind’s artificial intelligence, Sydney researcher Professor Geordie Williamson, along with colleagues at Oxford, began working on fundamentally new techniques in mathematics.

A knot and a drawing of a complex polyhedron on graph paper.
Image Source: DeepMind

According to a recent study published in the journal Nature, the team of researchers from the universities of Sydney and Oxford has been working with DeepMind to apply machine learning to suggest new lines of investigation and to help prove mathematical theorems. For decades, computers have been used to generate data for experimental mathematics, but the challenge of detecting intriguing patterns has mostly rested on mathematicians’ intuition. 

Now, with the help of machine learning, it is feasible to create far more data than any mathematician could analyze in a lifetime. The premise is that identifying patterns in a machine learning dataset is analogous to finding patterns that connect complex mathematical objects, leading to conjectures (assumptions about how such patterns might work). If these conjectures can be established as correct, they become theorems.

Making a supposition from scratch is a far more difficult and subtle task. To refute a hypothesis, an AI just has to sift through a large number of inputs in search of a single case that contradicts the hypothesis. Developing a hypothesis or proving a theorem, on the other hand, needs insight, talent, and the linking together of several logical processes.
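The asymmetry described here is easy to see in code. As a toy illustration (not DeepMind’s actual system), refuting a conjecture only requires brute-force search for a single counterexample. Euler’s famous polynomial n² + n + 41 yields primes for n = 0 through 39, so the conjecture “it is always prime” survives many checks before one counterexample falsifies it:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test, sufficient for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def find_counterexample(conjecture, candidates):
    """Return the first input that falsifies the conjecture, or None."""
    for x in candidates:
        if not conjecture(x):
            return x
    return None

# Euler's polynomial produces primes for n = 0..39, then fails.
counterexample = find_counterexample(lambda n: is_prime(n * n + n + 41), range(100))
print(counterexample)  # 40, since 40*40 + 40 + 41 = 1681 = 41*41
```

Proving the conjecture for all inputs, by contrast, cannot be done by enumeration at all, which is why conjecture generation and proof remain the harder, more human part of the loop.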

Professor Williamson leveraged DeepMind’s artificial intelligence to get close to proving a 40-year-old conjecture in representation theory concerning Kazhdan-Lusztig polynomials. Known as the combinatorial invariance conjecture, it claims that there exists a relationship between certain directed graphs and polynomials. A directed graph is a set of vertices connected by edges, with each edge having a direction associated with it. Using machine learning techniques, DeepMind was able to establish confidence in the existence of such a relationship and hypothesize that it might be related to structures known as “broken dihedral intervals” and “external reflections.” Professor Williamson used this information to develop an algorithm aimed at settling the combinatorial invariance conjecture. The new algorithm has been computationally tested across over 3 million cases.

Meanwhile, University of Oxford co-authors Professor Marc Lackenby and Professor András Juhász established a startling relationship between algebraic and geometric invariants of knots, producing a wholly new mathematical theorem.

As explained on the official blog of the University of Sydney, in knot theory, invariants are employed to solve the problem of differentiating knots from one another. They also help mathematicians in comprehending knot properties and their connections to other disciplines of mathematics. DeepMind also added that knots have connections with quantum field theory and non-Euclidean geometry.

Read More: DeepMind Trains AI Agent in a New Dynamic and Interactive XLand

The researchers next set out to determine which AI method would be most useful in identifying a pattern that connected two attributes. One approach, in particular, known as saliency maps, proved to be quite beneficial. It’s frequently used in computer vision to figure out which regions of an image contain the most important data. Saliency maps identified knot qualities that were likely connected to one another, and a formula was devised that appeared to be true in all situations that could be evaluated. Lackenby and Juhász then proved that the formula was applicable to a large class of knots.
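The core idea of saliency can be sketched with a toy example (a finite-difference sketch for intuition only, not the gradient-based method the researchers used): perturb each input slightly and see how much the output moves, then rank the inputs by sensitivity.

```python
def saliency(f, x, eps=1e-5):
    """Approximate |df/dx_i| for each input via central finite differences."""
    scores = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        scores.append(abs(f(hi) - f(lo)) / (2 * eps))
    return scores

# A toy "invariant" that secretly depends mostly on its third input.
def invariant(v):
    return 0.1 * v[0] + 0.2 * v[1] + 5.0 * v[2]

scores = saliency(invariant, [1.0, 1.0, 1.0])
most_salient = max(range(len(scores)), key=scores.__getitem__)
print(most_salient)  # 2: the third input dominates the output
```

In the knot-theory work, the analogous ranking pointed the mathematicians toward the handful of invariants worth relating by a formula.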

While knot theory is fascinating in and of itself, it also has a wide range of applications in the physical sciences, including comprehending DNA strands, fluid dynamics, and the interaction of forces in the Sun’s corona.

“Any area of mathematics where sufficiently large data sets can be generated could benefit from this approach,” says Juhász. DeepMind believes that these techniques could even have applications in fields like biology or economics.


MRI and AI can Detect Early Signs of Tumor Cell Death after Novel Therapy


Researchers at the Massachusetts General Hospital recently announced that Magnetic Resonance Imaging (MRI) and artificial intelligence (AI) could be used to detect early signs of tumor cell deaths. A novel virus-based cancer therapy can help point out tumor cell deaths when coupled with MRI and AI. 

Virus-based cancer therapy is a new form of treatment that selectively kills cancer cells while not causing any damage to normal tissues. This form of therapy will drastically help in treating patients suffering from brain tumors. A non-invasive monitoring process has to be performed throughout the therapy to analyze and understand how the virus interacts with cancer cells. 

Christian Farrar, PhD, an investigator at the Athinoula A. Martinos Center for Biomedical Imaging, said, “We programmed an MRI scanner to create unique signal ‘fingerprints’ for different molecular compounds and cellular pH. A deep learning neural network was then used to decode the fingerprints and generate quantitative pH and molecular maps.” 
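A classical way to decode such fingerprints, which the study’s neural-network decoder replaces, is dictionary matching: compare the measured signal against a library of simulated fingerprints and pick the nearest one. A toy sketch under that assumption (the pH labels and numbers below are hypothetical, for illustration only):

```python
import math

def match_fingerprint(measured, dictionary):
    """Return the label of the dictionary entry closest (Euclidean) to the signal."""
    best_label, best_dist = None, math.inf
    for label, simulated in dictionary.items():
        dist = math.dist(measured, simulated)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical simulated fingerprints keyed by tissue pH.
dictionary = {
    "pH 6.8": [0.9, 0.4, 0.1],
    "pH 7.0": [0.7, 0.5, 0.3],
    "pH 7.2": [0.5, 0.6, 0.5],
}
measured = [0.72, 0.49, 0.28]  # stand-in for a noisy acquisition
print(match_fingerprint(measured, dictionary))  # pH 7.0
```

A trained neural network can serve the same role as this lookup but scales far better as the dictionary grows, which is the motivation the researchers describe.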

Read More: SenseTime to open new Artificial Intelligence Casino in Singapore

He further added that this new method of treatment was validated in mouse brain tumor research, where it was proven to be highly effective. The new MRI and AI-powered virus treatment method will drastically reduce the detection time of dangerous cancer cells. 

However, the treatment procedure still has to be scrutinized, and further investigation will help researchers optimize the virus-based therapy. Researcher at Athinoula A. Martinos Center for Biomedical Imaging, Or Perlman, said, “One of the most interesting and key components for the success of this approach was the use of simulated molecular fingerprints to train the machine learning neural network.” 

He also mentioned that this kind of treatment concept could revolutionize the healthcare industry by solving various medical and scientific challenges. The research work has been funded by the European Union’s Horizon 2020 research and innovation program.


Microsoft launches fully managed Azure Load Testing Services


Mandy Whaley, Partner Director of Product, Azure Dev Tools, announced the preview of the Azure Load Testing service. It is a fully managed Azure service that allows testers and developers to optimize an application’s performance, capacity, and scalability. The service works by deliberately simulating traffic against your application to determine how it performs under stress and heavy load. 

Using this service, developers can fix an application’s traffic and load issues before going live. Developers can also predict how smoothly the application will perform with a large set of users. Organizations can also use the load testing service to model different scenarios and check the application’s performance and resiliency when a massive number of end users try to access it simultaneously. 

To work with the Azure Load Testing service, a user first sets up an Azure Load Testing resource, which serves as the infrastructure for running a large-scale load test. Using an Apache JMeter script, a developer simulates the number of virtual users who simultaneously access the application’s endpoints. The script also includes information about endpoints and test configuration settings. To execute the testing process, users upload the script file and run the load test. 
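The "virtual users" concept is independent of the tooling: each user is a concurrent worker firing requests and recording latencies. A minimal Python sketch of that idea (a toy stand-in using a stubbed endpoint, not JMeter or the Azure service):

```python
import threading
import time

def call_endpoint() -> float:
    """Stub for an HTTP request; returns simulated latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for network plus server time
    return time.perf_counter() - start

def virtual_user(requests_per_user: int, latencies: list, lock: threading.Lock):
    """One concurrent user issuing a fixed number of requests."""
    for _ in range(requests_per_user):
        latency = call_endpoint()
        with lock:
            latencies.append(latency)

latencies: list = []
lock = threading.Lock()
users = [threading.Thread(target=virtual_user, args=(5, latencies, lock))
         for _ in range(10)]
for u in users:
    u.start()
for u in users:
    u.join()

print(len(latencies))  # 50 samples: 10 virtual users x 5 requests each
```

In a real test the stub would be replaced by actual HTTP calls against the application’s endpoints, which is exactly what the JMeter script declares.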

Read more: Analytics Services of AWS now goes Serverless

Once the load testing starts, a live update about detailed resource metrics is displayed in the load testing dashboard. The dashboard will display both the client and server-side metrics. 

The client-side metrics give details about the number of virtual users, request-response time, and number of requests per second. The server-side metrics provide information about the number of database reads, type of HTTP responses, and container resource consumption.
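Client-side metrics like these reduce to simple aggregates over the recorded latency samples. A hedged sketch of how such numbers might be derived (illustrative only, not Azure’s implementation; the sample values are made up):

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

def summarize(latencies_ms, duration_s):
    """Aggregate raw per-request latencies into dashboard-style metrics."""
    return {
        "requests": len(latencies_ms),
        "requests_per_second": len(latencies_ms) / duration_s,
        "p50_ms": percentile(latencies_ms, 50),
        "p95_ms": percentile(latencies_ms, 95),
    }

# Ten hypothetical request latencies collected over a 2-second window.
stats = summarize([12, 15, 11, 90, 14, 13, 16, 10, 17, 18], duration_s=2.0)
print(stats["requests_per_second"])  # 5.0
print(stats["p95_ms"])  # 90: the one slow outlier dominates the tail
```

Tail percentiles such as p95 are the numbers worth watching across runs, since averages hide exactly the outliers that load testing is meant to expose.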

Developers can refer to the live dashboard metrics to determine how the application works in different scenarios. They can also compare the test result across various load tests with different numbers of virtual users to understand the application’s performance over time. Azure Load Testing always saves a history of previous test runs with different virtual users and use cases. Using this, a developer can visually compare multiple runs to trace performance regressions of the application across different scenarios.

Instead of performing load testing separately, developers can integrate it into existing CI/CD (continuous integration and continuous deployment) pipelines or workflows. With this approach, load testing becomes part of the end-to-end software development lifecycle.  


Clarity AI raises $450 million in Funding Round led by SoftBank


Sustainable data technology company Clarity AI raised $450 million in its new funding round led by SoftBank Group Corp’s Vision Fund 2. Other investors, including BlackRock, Fifth Wall ClimateTech Fund, Sir Jonathan Ive, Kibo Ventures, Mundi Ventures, and Seaya Ventures, also participated in the round. 

This new development is a result of the growing need for reliable data across the globe. Clarity AI plans to use the newly raised funds to hire new talent, further enhance its product, and expand into new markets to increase its customer base globally. 

CEO and founder of Clarity AI, Rebeca Minguela, said, “The social and environmental challenges the world faces and the corresponding economic opportunities unlocked have put impact assessment at the forefront of the minds of investors and organizations.” 

Read More: Qualcomm and Google announce Partnership on Neural Architecture Search (NAS)

She further added that their technology is showing the clients how their investments are impacting the world. SoftBank Vision Funds are known for backing startups that have the potential to bring innovations in their respective industries. Hence it can be expected that Clarity AI will quickly scale up its operations and product development with the newly generated funds. 

Apart from SoftBank, Clarity AI has also made other deals with leading funding platforms, including the largest fund distribution network, Allfunds. Greater New York-based artificial intelligence company Clarity AI was founded in 2017. The startup specializes in developing enterprise solutions for investors to help them tackle the challenge of insufficient or unequal allocation of capital. 

SoftBank Investment Adviser, Jimi Macdonald, said, “The sustainable investment market is already more than one third of total global assets and growing rapidly. Empowering investors to make informed decisions about social and environmental impact is a real differentiator.”


After Controversial Exit From Google, Timnit Gebru has Founded her Own AI Research Firm

Image Credit: Analytics Drift Design Team

Timnit Gebru was one of Google’s most notable Black female employees until she was fired from her position as co-lead of the company’s Ethical AI team last year. After Gebru’s dramatic departure, the company faced a barrage of criticism in the following months, particularly on Twitter. Gebru claims she was fired via email after refusing to retract her paper on large language models. Now, she has set up a new research institute named DAIR, devoted to the issues she believes were being overlooked at Google.

According to its press release, the Distributed Artificial Intelligence Research (DAIR) Organization is an independent, community-rooted institute established to oppose Big Tech’s ubiquitous influence on AI research, development, and deployment. The MacArthur Foundation, Ford Foundation, Kapor Center, Open Society Foundation, and the Rockefeller Foundation collectively contributed $3.7 million to DAIR.

DAIR intends to chronicle AI’s harms while also developing a vision for AI applications that benefit the communities affected by them. Gebru earned her name in AI by co-authoring a study of facial recognition software’s bias against people of color, which led Big Tech companies like Amazon to change their policies. She was fired from Google a year ago after writing a research article criticizing the company’s profitable AI work on large language models, which can assist with conversational search queries.

The backlash at Google brought to light the inherent conflicts that arise when tech corporations fund or hire academics to investigate the ramifications of the technology they want to benefit from. DAIR, according to Gebru, will question the possible drawbacks of AI since it will be free of the academic politics and pressure to publish that may stifle university research. In other words, emphasis will be on academic paper publication, but without the constant pressure of traditional academia or the paternalistic involvement of multinational companies restricting the researchers. 

This stands in contrast to Google’s move, following the Gebru incident, to impose additional restrictions on the topics its researchers are allowed to investigate.

Read More: UNESCO unveils First Global Agreement On Ethics Of Artificial Intelligence: What Next?

DAIR will also demonstrate AI applications that are unlikely to be created elsewhere, according to Gebru, with the goal of inspiring others to push the technology in new ways. One such effort is developing a public data collection of aerial images of South Africa to investigate how apartheid’s legacy is still inscribed in land usage. According to a preliminary examination of the photos, most unoccupied property built between 2011 and 2017 was turned into wealthy residential districts in a densely populated zone historically confined to non-white people, where many poor people now reside.

Later this month, at NeurIPS, the world’s most prestigious AI conference, DAIR will make its formal debut in academic AI research with a paper on that project. Raesetje Sefala, DAIR’s inaugural research fellow, is the principal author of the publication, which also contains contributions from outside researchers.

Gebru said she hopes to use the funding to break free from the shackles of Big Tech, such as the exclusion of outspoken researchers and the evaluation of potential harms only after an AI system is already in use. DAIR will also prioritize research scrutinizing commercial AI projects, such as large language models, whose negative impacts are typically investigated only after they have been deployed in the real world. Gebru’s ill-fated paper, too, evaluated the recognized problems of so-called large language models like GPT-3, which was taking the tech world by storm last year. She said that ideas such as AI applications that do not employ massive datasets, or that focus on less profit-oriented goals such as language revitalization, were given little thought.


Picsart acquires Artificial Intelligence company DeepCraft


All-in-one photo and video editing application Picsart has announced that it has acquired artificial intelligence and computer vision company DeepCraft. Picsart has not disclosed the valuation of the acquisition deal. 

However, according to sources, the agreement involved both cash and stock in the seven-digit range. With the new acquisition, Picsart will use DeepCraft’s expertise to further enhance its video editing capabilities. 

Picsart is already one of the world’s most popular editing platforms, with over 100 million videos edited every year. DeepCraft’s talented team will ensure that Picsart users get to see new and advanced editing features. 

Read More: SenseTime to open new Artificial Intelligence Casino in Singapore

Chief Technology Officer and Co-founder of Picsart, Artavazd Mehrabyan, said, “DeepCraft is a unique team of deep technology engineers, and we’ve been working with them to build our core technologies for over a year. As we invest further into our video editing capabilities, we are confident the team will play a significant role in building the platform for the next generation of creators.” 

The two companies have already been working under the same roof for over a year, which has let Picsart understand DeepCraft’s capabilities and how they can help Picsart grow further. As part of the acquisition deal, many DeepCraft employees will be joining the Picsart team. 

Co-founder and CEO of DeepCraft, Armen Abroyan, will be joining as Director of Engineering for the Core Engineering team, and Co-founder Vardges Hovhannisyan as Picsart’s Principal Engineer. 

DeepCraft is an Armenia-based research and development company founded in 2017. The firm specializes in developing a wide variety of digital creation and editing tools for professionals and general users.

Co-founder of DeepCraft, Armen Abroyan, said, “It gives us the resources to uplevel our technology, the ability to apply our collective knowledge on a global scale, and the opportunity to advance innovation in the industry as a whole.” He further mentioned that they are excited to be acquired by Picsart, the first unicorn company of Armenia.
