Harvard University has announced the launch of a new institute, the Kempner Institute for the Study of Natural and Artificial Intelligence. The institute will carry out studies to better understand the fundamentals of human and machine intelligence.
Bernardo Sabatini of Harvard Medical School's neurobiology department and Sham Kakade will lead the Kempner Institute. The newly launched initiative has been funded by a $500 million gift from Priscilla Chan and Mark Zuckerberg, and the institute is named after Zuckerberg's mother, Karen Kempner Zuckerberg.
According to officials, the funds will be used to appoint ten new faculty members and build new computing facilities for students. Zuckerberg and Chan have both made significant contributions to Harvard in the past, supporting students, researchers, faculty, and more.
President of Harvard University, Larry Bacow, said, “The Kempner Institute at Harvard represents a remarkable opportunity to bring together approaches and expertise in biological and cognitive science with machine learning, statistics, and computer science to make real progress in understanding how the human brain works to improve how we address disease, create new therapies.”
He also mentioned that the Kempner Institute would help researchers better understand the human body and the world on a broader scale. According to the plans, the Kempner Institute will be established at the new Science and Engineering Complex in Allston by 2022. The institute's prime focus is to train undergraduates, graduate students, and postdoctoral fellows in the fields of human and artificial intelligence.
Individuals from groups underrepresented in Science, Technology, Engineering, and Mathematics (STEM) will be recruited to join the institute and carry out research. Faculty members from various fields, including computer science, neuroscience, applied mathematics, and cognitive science, will work together at the Kempner Institute.
According to Sabatini, the institute will bring in experts from multiple fields to create a new population of researchers through education and funding programs.
Microsoft announced that it has collaborated with Rigetti Computing, a pioneer in full-stack quantum computing. As a part of this partnership, Microsoft will offer Rigetti quantum computers over the cloud to users of its Azure Quantum service.
When the Rigetti system is fully operational, it will be the largest quantum computer available on Azure Quantum. According to the two companies, the integration is expected to be finished and available to consumers in the first quarter of 2022.
Quantum computers are information processing devices that exploit the phenomena of quantum physics. In a classical computer, information is represented as a binary bit, which can be a one or a zero. In a quantum computer, information is represented by a quantum bit, or qubit, which can be placed in a superposition state that represents both zero and one at the same time.
Furthermore, in a classical computer, each bit in a chip works individually. The qubits of a quantum computer, by contrast, are "entangled" with others in the quantum processor, allowing them to collaborate to find a solution. Backed by these two features, superposition and entanglement, quantum computers are in principle exponentially more powerful than classical computers on certain problems.
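Superposition and entanglement can be made concrete with a small state-vector simulation. The sketch below, in plain NumPy and not tied to any vendor's SDK, applies the standard Hadamard and CNOT gates to two qubits to produce a Bell state, the textbook example of entanglement:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a unit vector in C^2.
zero = np.array([1, 0], dtype=complex)          # |0>

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# CNOT entangles two qubits: it flips the second when the first is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00>, apply H to the first qubit, then CNOT across both.
state = np.kron(H @ zero, zero)                 # (|00> + |10>) / sqrt(2)
state = CNOT @ state                            # (|00> + |11>) / sqrt(2): a Bell state

probs = np.abs(state) ** 2                      # measurement probabilities
print(probs.round(3))                           # all weight on 00 and 11
```

Measuring either qubit gives 0 or 1 with equal probability, but the two outcomes are perfectly correlated, which is exactly the cooperation between qubits described above.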
Rigetti quantum computers employ superconducting qubits, which have demonstrated faster execution speeds and more scalability than other commercially available quantum computing systems. Rigetti quantum computers have the potential to help tackle a wide range of practical challenges, including machine learning, drug discovery, renewable energy, logistics optimization, and financial simulations, thanks to their performance characteristics.
Rigetti also excels in quantum programming languages and applications like combinatorial optimization, which entails minimizing functions with a large number of variables. As a result, Rigetti's self-reliance grows, and it becomes a considerably more appealing investment choice, since it has a variety of cards to play.
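To see why combinatorial optimization is hard, consider maximum cut, a problem class often targeted by quantum optimization methods: the number of candidate solutions doubles with every added variable. The sketch below brute-forces a small hypothetical five-node graph classically, purely for illustration:

```python
from itertools import product

# Edges of a small example graph (hypothetical data for illustration).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
n = 5

def cut_size(assignment):
    """Count edges whose endpoints fall on opposite sides of the cut."""
    return sum(assignment[u] != assignment[v] for u, v in edges)

# Classical brute force enumerates all 2^n assignments; this exponential
# blow-up is what quantum optimization approaches hope to tame.
best = max(product([0, 1], repeat=n), key=cut_size)
print(best, cut_size(best))
```

At five nodes there are only 32 assignments; at fifty nodes there are over a quadrillion, which is why better-than-brute-force approaches matter.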
Krysta Svore, General Manager of Microsoft Quantum, had this to say about the collaboration: “Rigetti’s scalable approach to superconducting quantum computers will create new opportunities for the Azure Quantum development community.” Krysta also adds, “We’re working closely with Rigetti to deliver hybrid quantum-classical computing with the performance to tackle problems that were previously out of reach.”
Rigetti is currently attending Q2B 2021, a quantum computing conference, where it will demonstrate a quantum chemistry algorithm running on a Rigetti quantum computer in the cloud using Microsoft's quantum intermediate representation (QIR). Rigetti and Microsoft are working together through the QIR Alliance to make quantum computing more interoperable, which will reduce development effort for everyone in the field.
QIR will also be available on Azure Quantum to enable low latency and parallel execution.
Azure Quantum, according to Microsoft, is the best development environment for simultaneously developing quantum algorithms for multiple platforms while maintaining the flexibility to adjust the same algorithms for different systems. Users can write their algorithms in a variety of programming languages, including Qiskit, Cirq, and Q#, and run them on numerous quantum systems.
In mid-2022, Intel plans to take Mobileye public in the United States. This deal could value Intel’s self-driving unit at more than $50 billion. Chip giant Intel expects to hold on to majority ownership in the unit after the initial public offering (IPO).
Intel, the largest employer in Israel's high-tech industry with nearly 14,000 workers, also expects to retain Mobileye's executive team and has no intention of spinning off its majority ownership in Mobileye.
The company also stated that it would continue to provide technical resources to Mobileye. Mobileye Chief Executive Officer Amnon Shashua said that the partnership yields substantial revenue and free cash flow for Mobileye, funding its autonomous vehicle development.
“Amnon and I determined that an IPO provides the best opportunity to build on Mobileye’s track record for innovation and unlock value for shareholders,” Intel CEO Pat Gelsinger said in the statement.
Mobileye was founded in 1999, and in 2017, Intel bought Mobileye for $15.3 billion, putting it into direct competition with rivals Qualcomm Inc and Nvidia Corp for developing driverless systems for global automakers. Mobileye has taken a different approach from its competitors, using a camera-based system that helps cars with lane change assistance and adaptive cruise control.
Mobileye is currently using lidar units from Luminar Technologies but plans to build its own “lidar” sensor to help its cars map out a 3D view of the road. However, Mobileye has never used Intel’s factories to make its chips. Instead, it relies on Taiwan Semiconductor Manufacturing Co for all of its “EyeQ” chips to date.
A team of MIT researchers has developed a deep learning model, GeoMol, that predicts the 3D shapes of a molecule from the 2D graph of its molecular structure. GeoMol processes molecules in seconds, performs better than previous models, and specifies the 3D structure of each bond independently. Molecules are typically represented as small graphs, where individual atoms are nodes and the chemical bonds that connect them are edges.
In cheminformatics or computational drug development, it is critical to deal with molecules in their native 3D structure. These 3D conformations determine the biological, chemical, and physical characteristics.
Understanding how a molecule will interact with certain protein surfaces requires determining its 3D form. However, it is not a simple process as pharmaceutical companies frequently conduct multiple laboratory tests on various compounds. Furthermore, it is a time-consuming and costly procedure.
According to Octavian-Eugen Ganea, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper, GeoMol might aid these firms in speeding up the drug discovery process by reducing the number of molecules that need to be tested.
GeoMol predicts local atomic 3D structures and torsion angles, avoiding excessive over-parameterization of the geometric degrees of freedom, by using message passing neural networks (MPNNs) to gather local and global graph information. A message passing neural network is a deep learning architecture built specifically to operate on graphs.
At first, the model predicts the lengths of chemical bonds between atoms as well as the angles of those bonds. The arrangement and connectivity of atoms decide which bonds can rotate. The structure of each atom’s surroundings is then predicted separately. It then assembles surrounding rotatable bonds by computing torsion angles and aligning them.
The rotatable bonds can take on a wide variety of values. By employing message passing neural networks, the MIT team can capture much of the local and global environment that influences the prediction. Since a rotatable bond can take multiple values, the research team wants its predictions to reflect that underlying distribution.
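The message passing idea behind these predictions can be sketched with one round of neighborhood aggregation on a toy molecular graph. This is a minimal NumPy illustration of the generic MPNN update, not the GeoMol architecture itself; the graph, feature sizes, and random weights are all stand-ins for what would be learned:

```python
import numpy as np

# Toy molecular graph: atoms as nodes, bonds as edges (a 3-atom chain).
num_atoms = 3
bonds = [(0, 1), (1, 2)]

# Random initial atom features stand in for learned chemical embeddings.
rng = np.random.default_rng(0)
h = rng.normal(size=(num_atoms, 4))        # one feature vector per atom
W_msg = rng.normal(size=(4, 4))            # message transform (would be learned)
W_upd = rng.normal(size=(8, 4))            # update transform (would be learned)

def mpnn_round(h):
    """One round: each atom sums messages from bonded neighbors, then updates."""
    messages = np.zeros_like(h)
    for u, v in bonds:                     # bonds are undirected: send both ways
        messages[u] += h[v] @ W_msg
        messages[v] += h[u] @ W_msg
    # Concatenate own state with aggregated messages, project, apply ReLU.
    return np.maximum(np.concatenate([h, messages], axis=1) @ W_upd, 0)

h = mpnn_round(h)                          # repeated rounds spread information
print(h.shape)                             # one updated 4-vector per atom
```

Stacking several such rounds lets each atom's vector absorb information from progressively larger neighborhoods, which is how an MPNN captures both local and global graph structure.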
To test GeoMol, the MIT researchers used a dataset of molecules containing information on likely 3D shapes they could take. This dataset was developed by Rafael Gomez-Bombarelli, the Jeffrey Cheah Career Development Chair in Engineering, and graduate student Simon Axelrod.
The team compared GeoMol against other machine learning models and methods to see how many of these plausible 3D structures it could capture.
GeoMol beat the other models on almost all of the measures that were assessed.
GeoMol also precisely specifies chirality throughout the prediction process, since it identifies the 3D structure of each bond separately, removing the need for post-process optimization. Chirality has been a key challenge in predicting the 3D structure of molecules because a chiral molecule's mirror image does not interact with the environment in the same manner. This can lead drugs to interact improperly with proteins, potentially resulting in serious adverse effects.
While the age of the Metaverse is fast dawning upon us, it will usher in a new chapter for the NFT marketplace, cryptocurrencies, and video gaming. Amid fears that cryptocurrencies are still a risky gamble, Bitcoin, Ethereum, and many others are slowly gaining approval from Wall Street. Meanwhile, the dynamics between gaming and NFTs are changing with the advent of NFT games.
This year, the NFT marketplace surged, beginning with NFT artwork that leveraged blockchain's transparent and secure digital record to verify the uniqueness of its products. This assured gamers that they could hold digital ownership and resell their game characters for a profit, enabling a new business model for games known as play-to-earn, in which players earn incentives. With the widespread acceptance of NFTs, the gaming industry is on the cusp of a new disruption.
New trends like "play-to-earn", part of the wider Web 3.0 movement, will act as harbingers of a new era of gaming with a real-world economy and new player incentives. Web 3.0 refers to the third generation of the internet, which many predict will be driven by decentralized infrastructure and machine-based data interpretation. As a result, existing business models in the gaming industry may be fundamentally disrupted.
Arcade games gave the initial push to the gaming industry. They were followed by free-to-play games, where gamers could pay for extra power-ups and RPG character skins, creating a digital-items economy strictly controlled by gaming companies. Now comes play-to-earn, which centers on generating real-world value from in-game items and other digital products using non-fungible tokens, cryptocurrencies, and other blockchain technologies.
In play-to-earn, gamers no longer have to spend hundreds of dollars on products that are tied to a single account. Instead, these goods are stored in a non-custodial wallet that gamers can use to play games.
This basically means that gamers can purchase commodities on the open market and use them over the course of a game. Once they finish, they can either sell their items on the open market for real money or use them in another game that interests them. In other words, your assets are no longer bound to the game in which you purchased them. You can leave at any time and recover at least some of the value you put in. One example of a successful play-to-earn game is Axie Infinity, which already has millions of players worldwide and has generated more than US$2.5 billion in trading volume.
Once this becomes mainstream, it will provide the groundwork for the much-desired Metaverse that many social media and gaming companies are attempting to build.
Polygon is one of the most well-known cryptocurrency and NFT companies. As a network that provides scalability solutions for projects built on the Ethereum (ETH) blockchain, Polygon has attracted an increasing number of users and developers interested in NFTs, allowing them to generate and trade these assets at lower costs. This is achieved by adding an additional layer (Layer 2) that works on top of the primary blockchain to speed up transactions, reducing gas fees, the charges tied to the computational cost of blockchain transactions such as NFT purchases.
According to a study from blockchain development platform Alchemy, Polygon already hosts over 3,000 applications. Polygon moved to the top of the list of companies making big investments in blockchain-based gaming when it launched Polygon Studios, a $100 million incubator for NFT game ventures, earlier this year.
Its gaming initiative continues with the recent announcement of a new collaboration with GameOn Entertainment. As part of the partnership, Polygon Studios will contribute half of the non-dilutive funding for ongoing product development costs. The investment will specifically help GameOn spearhead the development, minting, and selling of NFTs in the games it currently manages. GameOn's prime focus will be selling white-label prediction games, fantasy games, and NFT-based games to other entertainment firms, which can then push them out to their user bases, creating a win-win situation for everyone. Because this is built on the Polygon network, it can minimize the costs of developing NFT games.
In a statement, its CEO, Matt Bailey, said, “By leveraging Polygon’s technology, GameOn continues to focus on blockchain and NFTs, bringing innovative gamification to Web 3.0 economies. Through resource-generating partnerships and acquisition mergers, we will continue to redouble our efforts to be the one-stop-shop for gamification, including NFT-based games”.
Building on Polygon's success, its native MATIC cryptocurrency saw massive growth in 2021. Over the course of the year, its value has increased by more than 13,000%. In addition, the network's total number of unique addresses has topped 100 million. The token increased by 70% in October alone, and despite ending November with a loss, it gained more than 15% this week.
EngineeredArts recently unveiled its newly developed humanoid robot, Ameca, which is capable of making complex and highly accurate human-like facial expressions.
The robot comes with a face that has eyes, a nose, lips, a forehead, and cheeks. Developers have made it capable of hyper-realistic human expressions, from awe to surprise.
Ameca was revealed in a YouTube video uploaded by the company where it was making expressions by changing the shape of its facial features. EngineeredArts is a United Kingdom-based robotics company that is currently headed by Will Jackson. The firm specializes in developing various sorts of humanoid robots with multiple features and capabilities.
According to the company, Ameca is the world's most advanced human-shaped robot. Ameca could prove highly valuable for students and developers working to improve human-robot interaction. EngineeredArts plans to unveil more of the robot's capabilities at the CES 2022 event scheduled for January.
The robot does not yet know how to walk but the company plans to upgrade it in the coming years. In the revealed video, the robot can be seen making various hand gestures that are synchronized with its facial expressions. Apart from Ameca, the company has also developed another humanoid robot named Mesmer that can perform similar facial actions.
In the development process, 3D scans of real human expressions were used to model the robot, resulting in the humanoid's highly accurate bone structure and facial features.
At the PyTorch Developer Day 2021 conference, Meta launched PyTorch Live, which uses a single programming language to design AI-based applications for both Android and iOS. The mission of PyTorch is to bring cutting-edge research to developers, co-develop with many stakeholders, remain modular so developers can use their preferred tools, and stay performant and production-oriented. With these four criteria as a base, PyTorch introduced PyTorch Live.
PyTorch Live helps developers work within the stringent resource restrictions of mobile devices and reduces the workload of creating novel ML-based applications. It is a set of tools for building AI-powered mobile applications that run on both Android and iOS.
Usually, building apps that work across different platforms requires expertise in multiple programming languages and therefore increases the cost of leveraging mobile models on different devices. In addition, developers would be required to separately configure the project and build UI (User Interface) that runs on different platforms, thereby slackening the app development process.
Instead of writing the same app twice in two different programming languages, PyTorch Live uses JavaScript as a unified language to write and build apps for both platforms.
PyTorch Live is powered by two successful open-source projects, PyTorch Mobile and React Native. While PyTorch Mobile is a runtime for performing on-device inference with models deployed in mobile applications, the React Native library is used to build interactive user interfaces for Android and iOS.
To design and build AI-powered mobile applications, developers can use PyTorch Live's CLI (command-line interface), Data Processing API, and cross-platform apps. While the CLI quickly sets up the mobile development environment and bootstraps mobile app projects, the Data Processing API is used to prepare and integrate custom models for building a new mobile application. Cross-platform apps use PyTorch Live APIs to build AI-powered mobile apps for Android and iOS.
Users can build their own user interfaces around models using PyTorch Live's core APIs, such as the Camera API and the Canvas API. While the Camera API is used to build a UI that identifies objects in an image captured by the user, the Canvas API is used to build a UI that lets a user draw and have the corresponding letters or digits predicted.
According to Meta, PyTorch Live (GitHub) will also support developers to work with audio and video data in the near future.
Johns Hopkins has received a grant to use artificial intelligence to promote healthy aging. The National Institute on Aging has allocated over $20 million to Hopkins to execute its plans for promoting healthy aging.
This new development will considerably help provide a better lifestyle and living experience for senior citizens. Johns Hopkins will use the allocated funds over five years to build an artificial intelligence and technology collaboratory (AITC).
The new collaboratory will have members from the Johns Hopkins University schools of medicine and nursing, the Whiting School of Engineering, and the Carey Business School. The collaboratory will also have members from various industries, senior citizens of the country, and technology developers.
Rama Chellappa, a Bloomberg Distinguished Professor of electrical and computer engineering, and Peter Abadir said, “This new enterprise is attempting to disrupt these problems in ways that will lengthen the years that people have to enjoy independent, highly functional lives, free of cognitive impairment.”
He further added that there are numerous aged citizens who suffer from multiple health issues and have functional and cognitive declines that restrict them from living an independent life for a long time.
“The excitement is that our work can help physicians use the technology as markers for measuring the evolution of age-related diseases, like dementia and Alzheimer’s, and predicting falls using patterns and behaviors of older adults,” added Chellappa.
He also mentioned that predicting behaviors and understanding how individuals age is an arduous task. Many experts believe the new initiative will help drastically reduce the total number of deaths among those 65 and older.
However, the success and reach of this newly launched initiative will depend upon how citizens choose to adopt and use the AI-powered technology.
Artificial intelligence technology company Clearview AI is set to win a new United States patent for its facial recognition system. The patent will be granted once the company pays the required administrative fees.
Clearview AI’s artificial intelligence-powered face recognition system searches social media and adds images of users to its database. However, various experts and critics are concerned about the potential growth of similar kinds of technologies because of the new Clearview AI patent.
Experts believe it is essential that governments and lawmakers regulate such facial recognition systems to ensure the ethical use of new technologies.
Co-founder and CEO of Clearview AI, Hoan Ton-That, said, “There are other facial recognition patents out there — that are methods of doing it — but this is the first one around the use of large-scale internet data. As a person of mixed race, having non-biased technology is important to me.”
Clearview AI's system has already been used by various law enforcement agencies, including the FBI and the Department of Homeland Security. The platform has been consistently criticized for illegally fetching and storing images of social media users without their consent.
According to experts, Clearview AI's method of harvesting and storing pictures is a complete violation of social media users' basic right to privacy. Responding to the criticism, Ton-That said, “All information in our datasets are all publicly available info that people voluntarily posted online — it’s not anything on your private camera roll.”
New York-based artificial intelligence firm Clearview AI was founded by Hoan Ton-That and Richard Schwartz in 2017. To date, the company has raised over $38 million across three funding rounds from investors including Kirenaga Partners, Hal Lambert, Peter Thiel, and others.
The government of Odisha plans to use AI-powered CCTV cameras to check and monitor any kind of torture of arrested individuals in police custody. The new system will ensure that no accused person is mistreated while in police custody.
According to the plan, all the previously installed CCTV cameras in police stations across the state will be replaced with newly designed artificial intelligence-powered CCTV cameras.
A report of the National Human Rights Commission pointed out that five people die every day in India while they are under police custody. The new AI camera system aims to reduce this figure to an absolute zero in the state of Odisha.
The AI-powered camera system will use IP-based CCTV cameras, which will be able to send alarms to the respective authorities when police officers assault anyone on the premises of a police station. Apart from torture, the AI surveillance system will also help reduce corruption within the system, including bribery.
A senior officer at Odisha Computer Application Center said, “Efforts are on to use a defined software through which cameras can detect such activities including money being taken by the police from the accused or suspect.”
Earlier this year, the Supreme Court of India instructed every police station to install CCTV cameras to help keep track of happenings on their premises. The installation of AI-powered CCTV camera systems has already reached the completion stage in various cities of Odisha, including Bhubaneswar, Cuttack, Puri, Khurda, and Jagatsinghpur. A total of twenty AI CCTV cameras will be installed to cover multiple locations in police stations.
Manoj Kumar Mishra, Secretary of State Electronics and Information Technology Department, said, “In our quest to bring in more transparency in government institutions under that 5T charter, the state government will expedite installation of CCTV cameras in all 593 police stations.” He also mentioned that this would enable authorities to monitor police stations using the latest technology.