Artificial intelligence has long been used for content creation of various kinds. Now the focus has also shifted to content curation, with the end goal of making education more accessible to everyone. Students learning remotely often have to rewatch long college lectures to understand concepts, clear up doubts, or locate specific information; remote workers face the same problem with presentation notes and documents. Artificial intelligence can address these concerns.
Content curation is sourcing relevant, high-quality information from diverse sources and promoting it to increase engagement. Similar to content generation, the process of content curation can be automated using artificial intelligence and machine learning. AI content curation not only provides assistance in content selection but also customizes the result for each person depending on their specific preferences.
For example, businesses and educational institutions produce hundreds of millions of hours of long-form video every day. However, because production and editing costs are enormous, most people are unable to access and benefit from these recordings. To address this issue, the artificial intelligence and machine learning company Educational Vision Technologies (EVT) was founded. EVT provides services that make long-form video content more digestible and easily accessible for workers and university students.
EVT utilizes artificial intelligence and machine learning tools that provide content curation. These services employ machine learning algorithms to divide videos into short, manageable chapters of just a few minutes. Each of these short videos is accompanied by a transcript and notes. The EVT program was first designed with the needs of students with disabilities in mind. However, it also helps students who are unable to attend classes or find making notes difficult.
Users can either upload their own videos for content curation on EVT Bloom or use EVT Learning Systems to upload the required content automatically. The EVT software segments recordings into concise video chapters and automatically generates a searchable voice transcript, an interactive table of contents, quiz questions, and speaker summaries. On web platforms, the software renders the titles of the short videos as headers, so screen readers can announce them and users who are blind or visually impaired can navigate the video material easily.
EVT Bloom processing takes place on Oracle Cloud Infrastructure (OCI), and the content is secured and stored in OCI object storage. EVT has developed its various machine learning microservices on OCI. With the help of engineers from Oracle, EVT prototyped and developed their machine learning microservice infrastructure in the Oracle Cloud.
Given that EVT streams hundreds of educational videos from its website, one would expect the cloud charges to be high. But since Oracle Cloud Infrastructure does not charge egress fees on the first 10 TB of outbound data per month, EVT was able to significantly reduce the cost of video streaming, keeping the service affordable.
Conclusion
AI tools like EVT can significantly boost students' productivity by handling content curation for them. Studies suggest that taking notes while listening to a lecture demands more cognitive effort than playing chess, so overtaxing students' cognitive bandwidth by making them copy everything from the whiteboard or chalkboard hardly makes sense. With affordable AI innovations like EVT, the education system can be transformed to fit the digital world and, at the same time, become more accessible.
A case brought by Fabrizio D’Aloia against Binance Holdings, Poloniex, gate.io, OKX, and Bitkub, over allegations that someone was running a fraudulent clone online brokerage, has set a legal precedent: legal documents may now be served by airdropping them on the blockchain as an NFT, according to a notice released by the U.K. law firm Giambrone & Partners on Tuesday.
D’Aloia claimed to have been tricked into depositing more than 2.1 million Tether and 230,000 USD Coin valued at roughly US$2.33 million into two distinct digital wallets that turned out to be fake. Before he realized he was “a victim of fraudulent conduct” in May, the con artists convinced him to transfer money from his cryptocurrency wallets to trade on the platform under the pretense of being from online brokerage TD Ameritrade via the website tda-finan.com.
On June 24, the High Court of England and Wales authorized D’Aloia to distribute the court documents to people linked to two anonymous digital wallets. This is a historic step against cryptocurrency scammers.
The court also stated that exchanges must ensure that stolen cryptocurrency is neither transferred nor removed from their systems.
This is not the first instance of someone using the blockchain in a lawsuit. According to the legal firms Holland & Knight and Bluestone, the New York Supreme Court granted a restraining order against a hacker via NFT drop in the US$8 million LCX exchange breach case in June.
The LCX or Liechtenstein Cryptoassets Exchange is a Liechtenstein-based cryptocurrency exchange that was hacked in January. At the time, it was reported that the hackers compromised the exchange’s hot wallets, leading to the theft of many digital currencies, including Ether (ETH) and USD Coin (USDC).
According to Giambrone & Partners, who represented D’Aloia, the ruling is a welcome example of a court embracing new technology. It is also a noteworthy verdict because it positions England and Wales among the best jurisdictions in the world for safeguarding victims of crypto asset theft.
An NFT is a unique blockchain-based digital asset. It is cryptographically validated, meaning the creator or author is authenticated within an irrevocable digital record. An NFT could be anything that exists on a machine, from art to a video clip to a music sample.
When a creator mints an NFT, every other copy of the artwork becomes just that: a copy. As a result, the economic value of an NFT derives from the blockchain record that confirms the legitimacy of the work, rather than from the work itself. Like bitcoin or ether, NFTs have instrumental but not inherent value. Nevertheless, thanks to increased adoption and endorsements from celebrities and large corporations, NFTs have gained much traction over the past year.
The U.K. government revealed intentions to create its own NFT earlier this year to establish itself as a “global leader” in the cryptocurrency industry. The government-owned Royal Mint, which is in charge of producing coins for the United Kingdom, has been instructed to develop and issue the NFT by the summer, according to Finance Minister Rishi Sunak.
Andrej Karpathy, director of artificial intelligence (AI) and Autopilot Vision at Tesla, has announced that he is quitting his job after five years with the company.
Karpathy’s resignation comes after Tesla recently laid off 229 annotation employees from its Autopilot team. The company also closed one of its offices in the US. The remaining 47 employees are expected to be employed at Tesla’s Buffalo Autopilot office.
Tesla has laid off workers from its San Mateo office, which employed 276 workers. The layoffs were part of the 10% reduction in the salaried workforce that the CEO of Tesla, Elon Musk, had announced last month.
The workers who were laid off were primarily low-skilled and had low-wage jobs, such as Autopilot data labeling, which involves determining whether Tesla’s algorithm identified an object well or not.
Karpathy joined Tesla five years ago. Before that, in 2017, he was a researcher at OpenAI, the AI nonprofit previously backed by Musk. He had been on a four-month leave, which fueled speculation about whether he would return.
“I have no concrete plans for what’s next but look to spend more time revisiting my long-term passions around technical work in AI, open source and education,” Karpathy tweeted.
Karpathy noted that in his five years, Autopilot graduated from lane keeping to navigating city streets, and said he looks forward to seeing the strong Autopilot team carry that momentum forward.
Image Credits: Sebnem Coskun Anadolu Agency via Getty Images
A group of scientists from Sichuan University in southwest China has created a soft robot fish that “eats” microplastics and might one day help clean up polluted waters around the world. In a study published on June 22 in the peer-reviewed ACS journal Nano Letters, the researchers reported that the bionic fish can also “self-heal” and keep absorbing microplastics even when injured.
Photo: Nano Letters
According to the researchers, the 1.3 cm (half-inch) long robot can swim up to 2.67 body lengths per second, outpacing most artificial soft robots. Microplastics adhere to the surface of the bionic fish because it is made of a synthetic polyurethane resin. Certain components of microplastics form strong chemical bonds and electrostatic interactions with this material, so the robot can continually absorb surrounding microplastics and carry them to a predetermined location.
Nacre, often known as “mother of pearl,” lines the inside of clam shells; it is composed of layers of calcium carbonate mineral-polymer composites with a silk-protein filler, which makes it both robust and flexible. Traditional soft-robot materials such as hydrogels and elastomers, by comparison, degrade readily in water. Taking mother of pearl as inspiration, the research team created an elastomer actuator with gradient nanostructures based on sulfonated graphene and β-cyclodextrin, yielding a polyurethane latex material that can withstand high temperatures.
The research team gradually created a fish-shaped robot that can swim in any direction while being powered by a light source. When a laser is focused on the fish’s tail, the light deforms the material, causing it to bend. Because polyurethane is also biocompatible, it can be safely digested if it is unintentionally consumed by other fish. The nanocomposite material used in its construction had a self-healing efficiency of up to 89%.
Robots today seem designed to take on all kinds of human occupations: carrying heavy objects, performing tedious tasks, delivering food, squeezing into tight spaces for research, medicine, agriculture, or disaster recovery, or serving as disposable stand-ins in potentially fatal situations such as warfare. The soft robot developed at Sichuan University now addresses a long-standing concern: microplastics in water bodies that threaten fragile ecosystems.
Microplastics are tiny remnants of larger plastic debris that measure less than 5mm in size. They could be mistaken for food due to their small size and get lodged in an animal’s digestive system, starving it to death. Before being used, certain plastics are coated with harmful compounds that can kill neighboring marine life when the coating on microplastics dissolves.
While reducing plastic waste and filtering wastewater before it enters the ocean can limit the amount of microplastics in our seas, cleaning water that is already contaminated is far harder. Even banning the production of certain single-use plastics, as Canada recently did, would not rid already polluted waters of microplastics. The tiny particles can also remain trapped deep in fissures and crevices of the seafloor, which large, rigid robots cannot reach. The bionic fish has so far demonstrated its capacity to ingest pollutants in shallow water, but the scientists intend to deploy it in deeper waters as well to study marine pollution.
The research study was funded by the Sichuan Natural Science Fund for Distinguished Young Scholars, the National Natural Science Foundation of China, and the National Key Research and Development Program of China.
Left: Hubble. Right: James Webb. Credit: ESA/NASA/STSCI
On Monday, President Biden hosted a gathering at the White House where he unveiled the James Webb Space Telescope’s first scientific photograph. It is one of the targets that NASA earlier disclosed on Friday, July 8: galaxy cluster SMACS 0723. SMACS 0723 is located in the southern constellation of Volans at a distance of 5.12 billion light-years from our planet. The image showed how this galaxy cluster looked 4.6 billion years ago.
Webb’s First Deep Field Image source: NASA, ESA, CSA, STScI
This is the deepest and sharpest infrared picture of the distant universe taken by NASA’s James Webb Space Telescope to date. The photograph, known as Webb’s First Deep Field, is rich in detail: it is a composite assembled from images acquired at various wavelengths over a period of 12.5 hours. Contrasting the Hubble Space Telescope’s view of SMACS 0723 with JWST’s shows how much additional information the newer observatory captures.
The area of the cosmos depicted in this picture is about the size of a grain of sand held at arm’s length. The images captured by the telescope are anticipated to yield revelations that will improve our understanding of how the cosmos began some 13.7 billion years ago.
Researchers created a machine learning model known as Morpheus and trained it to sift through images, pick out faint blob-shaped objects against the deep background of space, and assess whether they are galaxies and, if so, of what kind. In simple terms, it enables pixel-level morphological classification of cosmological images. NVIDIA, a prominent player in the technology industry, supplied GPUs to accelerate Morpheus on several platforms.
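As a rough illustration of what pixel-level classification means (Morpheus itself uses a trained neural network, which this sketch does not attempt to reproduce), the toy snippet below labels each pixel of a synthetic image as source or background using simple background statistics. The image values and sigma cutoff are made up for demonstration.

```python
# Toy pixel-level classification: label a pixel "source" if it sits
# well above the background level, estimated from the whole image.
# Real pipelines like Morpheus learn these decisions from data.
import statistics

def classify_pixels(image, n_sigma=3.0):
    """Label each pixel 'source' if it exceeds the mean by n_sigma
    standard deviations, else 'background'."""
    flat = [px for row in image for px in row]
    mu = statistics.mean(flat)
    sigma = statistics.pstdev(flat)
    cut = mu + n_sigma * sigma
    return [["source" if px > cut else "background" for px in row]
            for row in image]

# Mostly flat background with one bright "galaxy" pixel in the middle.
image = [
    [10, 11, 10, 9],
    [10, 250, 11, 10],
    [9, 10, 10, 11],
]
labels = classify_pixels(image)
print(labels[1][1], labels[0][0])  # bright pixel vs. a background pixel
```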
The COSMOS-Webb program, the biggest and most ambitious undertaking of the telescope’s first year, depends heavily on Morpheus. A team of nearly 50 researchers will survey half a million galaxies in a region of the sky, looking for the oldest fully formed galaxies in order to examine how dark matter evolved as these structures began hosting stars. They will use the software, which previously assisted with image classification for the Hubble Space Telescope, to automate this procedure.
COSMOS-Webb’s Near-Infrared Camera (NIRCam) will survey a large patch of the sky (0.6 square degrees) over more than 200 hours of observation, while the Mid-Infrared Instrument (MIRI) simultaneously maps a smaller region. COSMOS-Webb will focus on the epoch of reionization, a period of the universe’s development that took place from roughly 400,000 to 1 billion years after the big bang. It will also map the large-scale dark matter distribution of galaxies back to very early times and search for some of the rarest galaxies of the early universe, i.e., those from the first 2 billion years after the big bang.
With multi-band, high-resolution near-infrared imaging of half a million galaxies and an unprecedented 32,000 galaxies in the mid-infrared, COSMOS-Webb will, thanks to the rapid public release of its data, serve as a critical legacy dataset from Webb for researchers worldwide studying galaxies beyond the Milky Way.
Morpheus was trained on the Lux supercomputer at UC Santa Cruz, which has 80 CPU-only compute nodes, each containing two 20-core Intel Cascade Lake Xeon processors, plus 28 GPU nodes containing two Nvidia V100 GPUs each. According to Brant Robertson, an astrophysics professor at the University of California, Santa Cruz, and one of the lead researchers behind Morpheus, this machine learning software will let us see the universe through the James Webb Space Telescope in a way we never have before. Morpheus, initially trained on 7,600 galaxy images captured by NASA’s Hubble Space Telescope, will therefore need to be retrained to adapt to James Webb Space Telescope data.
These capabilities will matter as the telescope delivers a wider and deeper view of the cosmos than ever before, with each image containing more structures than can be examined manually. The most recent version of the software also adds image-processing features such as deblending, which can separate astronomical objects in the sky that appear to overlap.
Image Source: Northrop Grumman
The James Webb Telescope project was first conceived in 1996, with an original cost estimate of US$0.5 billion. It has repeatedly drawn public criticism because it is named after a former NASA administrator during whose tenure the government discriminated against gay and lesbian personnel. After several delays, the US$10 billion telescope was finally launched on Christmas Day 2021. It now orbits L2, the second Sun-Earth Lagrange point, which keeps it out of our planet’s shadow; Hubble, by comparison, orbits Earth at an altitude of about 570 km. Another significant distinction between the two telescopes is that Webb predominantly observes the cosmos in the infrared, whereas Hubble primarily works at optical and ultraviolet wavelengths. This is a major benefit, since infrared views can see through cosmic dust to reveal objects and structures that would otherwise be concealed.
The James Webb Space Telescope’s base is a sun shield composed of thin layers of Kapton, topped by a panel of gold-plated hexagonal mirror segments. The 18 segments, which together form a primary mirror 6.5 meters across, are supported by struts and motorized actuators that can move each one in six degrees of freedom. A larger-than-expected micrometeoroid struck one of these segments in May, following four earlier minor micrometeoroid hits. According to NASA, the telescope has nonetheless performed better than expected, with hardly any data loss.
To concentrate light from distant objects more than 13 billion light-years away, the segments were aligned with one another to within 1/10,000th the thickness of a human hair, so that they act as one large mirror.
A secondary mirror 0.74 meters in diameter, mounted at the end of three long arms, redirects the light reflected from the primary mirror. Photons are then routed to several instruments located behind the primary mirror: a near-infrared camera and spectrograph, a mid-infrared camera and spectrograph system, and a fine guidance sensor that helps point the telescope at targets of interest.
The image shows only one of the first few targets astronomers chose to investigate during the JWST’s initial science operations run. In the following days, new images of the Carina Nebula, the Southern Ring Nebula, and Stephan’s Quintet, along with the light spectrum of WASP-96 b, were released.
One of the biggest and brightest nebulae in the sky, the Carina Nebula lies around 7,600 light-years away. Several enormous stars in this region are many times the size of the Sun. The telescope captured the edge of a gaseous cavity within NGC 3324, a star-forming region of the nebula.
Southern Ring Nebula is a planetary nebula with an expanding cloud of gas, surrounding a dying star. It is around 2,000 light-years from Earth and has a diameter of over half a light-year. Stephan’s Quintet is a collection of five galaxies initially identified as a compact galaxy group in 1877. It is situated in the Pegasus constellation, approximately 290 million light-years away.
WASP-96 b Credit: ESA/NASA/STSCI
Outside our solar system, there is a huge planet called WASP-96 b, mostly made of gas. It is roughly 1,150 light-years away from Earth and revolves around a star similar to the Sun. For 6.4 hours, Webb observed the light coming from the WASP-96 system as the planet traveled across the star.
Indian cryptocurrency builders and traders remain hopeful about the future of digital currency in India, despite regulatory uncertainty around cryptocurrency trading and the recent crash in the crypto market.
At the beginning of this year, the crypto market dropped below the $2 trillion mark and continued to fall steadily before a slight recovery in April. Overall, the cryptocurrency market has slumped by approximately 70% in value from its all-time high in November 2021; tokens like Dogecoin, Avalanche, and Solana have seen dips of up to 90%. Currently, the total cryptocurrency market cap stands at $860 billion.
Despite the all-time low, experts believe India can still play a significant role in the future of cryptocurrency if it moves in the right direction. This view rests on the fact that crypto is still a very new concept; like any novel innovation, it will take time and investment to regulate properly. According to one study, the global cryptocurrency market is expected to reach US$1,902.5 million by 2028, and artificial intelligence will play a significant role in that growth.
The fate of cryptocurrency in India is not as bleak as it might seem, thanks to the government’s ongoing policymaking. The Indian government is currently formulating a policy on Web 3.0, the emerging generation of the internet that could become a decentralized version of the virtual world. Such a policy could let India take part in global strategy development and adapt quickly to this fast-moving field. According to experts, Web 3.0 is the future of cryptocurrency, and by embracing artificial intelligence, India is working to be at the forefront of shaping strategies around it.
How can AI help the fate of the cryptocurrency industry?
Artificial intelligence is already being used quite actively in the cryptocurrency industry. However, many experts believe that the adoption of AI in the crypto market will continue to grow steadily in the upcoming years. The AI market size is expected to reach $360.36 billion by the year 2028. Since the AI industry itself is developing at a drastic pace, it can effectively support the further development of the cryptocurrency industry as well.
Over the past few years, artificial intelligence has been effectively used to create numerous chart patterns and indicators to assist crypto traders in their ventures. With AI tools, traders can make the most out of the volatile cryptocurrency market. However, AI has so much more to offer.
One of the major concerns for crypto traders is that the market stays open 24×7, meaning there is always some activity taking place. It is practically impossible for human traders to track these price movements constantly, but artificial intelligence can monitor the market at all times. For example, whenever a movement appears that might earn a profit, an automated trading bot can take informed action on the trader’s behalf.
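A minimal sketch of such always-on monitoring might look like the following moving-average crossover check. The price series, window sizes, and signal names are hypothetical; a real bot would consume a live exchange feed and layer on risk management.

```python
# Toy trading signal: compare a short-term moving average with a
# long-term one. Short above long suggests upward momentum ("buy"),
# below suggests downward momentum ("sell"). Illustrative only.
def moving_average(prices, window):
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy', 'sell', or 'hold' from the latest price window."""
    if len(prices) < long:
        return "hold"          # not enough history yet
    s = moving_average(prices, short)
    l = moving_average(prices, long)
    if s > l:
        return "buy"
    if s < l:
        return "sell"
    return "hold"

# Rising price series: recent average exceeds the longer average.
prices = [100, 101, 102, 105, 108, 112]
print(crossover_signal(prices))  # → buy
```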
Conclusion
The past few months have been quite damaging to the cryptocurrency market. The majority of the crypto assets fell in value, resulting in massive panic in the market. Experts in the market are predicting different scenarios of developments in the near future. Some experts claim that the prices of crypto assets will continue to dip, while others say they will start to recover in the foreseeable future. However, one thing is certain. As artificial intelligence keeps upgrading every single day, AI will become even more effective in terms of contributing to the cryptocurrency industry in the future.
The Central Bank of Sri Lanka (CBSL) has warned the country against using cryptocurrency in light of the ongoing economic crisis. CBSL said cryptocurrency is largely unregulated, and any dealings related to it should be avoided.
CBSL also reiterated that it does not consider cryptocurrencies ‘legal tender’ in the country and said that the government has not granted a license or any other form of authorization to any crypto-related entity to operate in the nation.
The Central Bank has also not authorized any initial coin offerings (ICO) or mining operations relating to virtual currency exchanges in the nation. According to CBSL, virtual currencies are considered ‘unregulated financial instruments’ and therefore have no regulatory oversight regarding their usage in Sri Lanka.
The warning comes as the country deals with the sovereign-debt crisis that has crippled the local economy. Sri Lanka fell into default this year in May and is struggling to secure essential imports from other nations. According to a report, inflation in the country had touched a year-on-year record of 54.6% in June.
The total market value of final services and goods, measured through the GDP standard, also contracted by 1.6% in the first quarter of 2022. The falling value of the local currency has prompted many Sri Lankans to invest in cryptocurrencies.
Major cryptocurrencies are also witnessing a steady decline. The crypto market has dropped by over 56% in the last few months, from $2 trillion to $873.03 billion. The reversal has led to a similar decline in public stock markets and private market deal flow activities.
The National Institute of Standards and Technology (NIST) of the US Department of Commerce announced last week the selection of four encryption algorithms to be incorporated into the organization’s post-quantum cryptography (PQC) standard. NIST plans to complete this standard over the next two years and is likely to add other algorithms in the future.
The proverb “necessity is the mother of invention” applies to the field of quantum computing. While classical computers express information in binary 1s and 0s, quantum computing applies concepts from quantum physics, such as superposition, entanglement, and quantum interference, to computation. Today, quantum computers are highly sought after for weather forecasting, financial modeling, drug development, and more.
However, in 1994, with the creation of Shor’s algorithm, researchers demonstrated that if progress in quantum computing continued long enough, quantum computers could defeat existing encryption technologies such as the Rivest–Shamir–Adleman (RSA) algorithm and elliptic curve cryptography (ECC). Developed by American mathematician Peter Shor, the algorithm would let a sufficiently large quantum computer factor composite numbers in polynomial time, finding prime factors of numbers far beyond classical reach. (The largest RSA key factored to date, a 795-bit key broken by a group of academics in 2019, was cracked with classical methods.) Beyond RSA and ECC, the long-trusted Diffie–Hellman key exchange, which underpins contemporary cryptographic protocols such as SSL/TLS, PKI, and IPsec, is also projected to fall to quantum computers.
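The reduction Shor's algorithm exploits can be demonstrated classically on a tiny number: factoring N reduces to finding the multiplicative order r of a random base a modulo N, after which a greatest common divisor yields a factor. The quantum speedup lies entirely in finding r; the brute-force order search below is the step a quantum computer performs exponentially faster, shown here for the toy semiprime 15.

```python
# Shor's factoring reduction, run classically on a toy number:
# find the order r of a random base a mod n, then gcd(a**(r//2) - 1, n)
# gives a nontrivial factor (retrying on unlucky choices of a).
import math
import random

def order(a, n):
    """Smallest r > 0 with a**r ≡ 1 (mod n) — the expensive step
    that Shor's quantum period-finding accelerates."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n):
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                  # lucky guess: a shares a factor
        r = order(a, n)
        if r % 2 == 0:
            f = math.gcd(pow(a, r // 2, n) - 1, n)
            if 1 < f < n:
                return f              # nontrivial factor found

n = 15
f = shor_classical(n)
print(f, n // f)  # a nontrivial factorization of 15
```

For n = 15 the orders are tiny; for a 2048-bit RSA modulus the `order` loop is hopeless classically, which is exactly the gap a cryptanalytically relevant quantum computer would close.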
Although it will be years before quantum computers are strong enough to crack public-key encryption, when they do, they might pose a serious danger to financial and personal data and national security. This key drawback is well known in the computing industry. A few businesses have started working on developing, testing, and implementing new encryption algorithms that are resistant to quantum computers. Companies like IBM have already started providing solutions that focus on post-quantum cryptography protection.
Recently, many PQC-focused companies have emerged from stealth. In May, QuSecure, a three-year-old firm with headquarters in San Mateo, California, debuted QuProtect as its first post-quantum cryptography solution. According to QuSecure, QuProtect is an orchestration platform capable of securing both data in transit and data at rest that has been encrypted using the latest post-quantum cryptography algorithms. Another company, PQShield offers post-quantum cryptography hardware, an encrypted messaging platform, and a System on Chip to protect smart cards and security chips from post-quantum attacks.
Since 2016, the National Institute of Standards and Technology has spearheaded the search for post-quantum cryptography technologies to create and test in order to secure such data. It whittled 82 original proposals for the Post-Quantum Cryptography Standardization project down to four final methods covering two tasks: general encryption (where two users establish shared keys) and digital signature authentication (identity verification).
Traditional cryptography such as RSA and ECC rests on algebraic problems like factoring and discrete logarithms, whereas post-quantum cryptography more often rests on geometric problems. One such problem is based on lattices: multidimensional grids of points extending in all directions. The hard task is for a computer to locate the lattice point or vector nearest to a given target inside this lattice.
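In two dimensions this closest-vector problem can be brute-forced in a few lines, which makes clear where the hardness comes from: real lattice schemes use hundreds of dimensions, where enumerating integer combinations becomes astronomically expensive for classical and quantum computers alike. The basis vectors and target below are illustrative.

```python
# Brute-force closest-vector search in a 2D lattice: enumerate integer
# combinations c1*b1 + c2*b2 and keep the one nearest to the target.
# The search cost grows exponentially with dimension — the source of
# lattice cryptography's presumed hardness.
import itertools

def closest_lattice_point(b1, b2, target, search=10):
    """Nearest lattice point to `target` over coefficients in
    [-search, search] for basis vectors b1 and b2."""
    best, best_d2 = None, float("inf")
    for c1, c2 in itertools.product(range(-search, search + 1), repeat=2):
        p = (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])
        d2 = (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
        if d2 < best_d2:
            best, best_d2 = p, d2
    return best

print(closest_lattice_point((3, 1), (1, 2), (7.4, 4.9)))  # → (7, 4)
```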
While SPHINCS+, an algorithm for digital signature verification, employs hash functions, three of the selected algorithms are based on a class of mathematical problems known as structured lattices.
According to NIST, two of the four selected technologies are expected to see the widest use. The first, CRYSTALS-Kyber, will protect online data by creating the cryptographic keys two computers need to exchange encrypted data; it uses relatively small encryption keys and operates comparatively fast. The second, CRYSTALS-Dilithium, is used to sign encrypted data and prove who sent it. It will likely take about two years for the methods to be fully standardized for inclusion in current software and hardware.
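Kyber's actual construction (module lattices with compression and a CCA-secure transform) is far more involved, but the underlying idea — hiding a secret behind small random errors in a lattice — can be shown with a toy Regev-style learning-with-errors scheme. The parameters below are tiny and utterly insecure; they are chosen only so the arithmetic is easy to follow, and the scheme is a conceptual cousin of Kyber, not Kyber itself.

```python
# Toy Regev-style LWE encryption of a single bit, illustrating the
# noise-based hardness behind lattice KEMs like CRYSTALS-Kyber.
# INSECURE demo parameters — for illustration only.
import random

Q, N, M = 97, 4, 20          # modulus, secret dimension, sample count

def keygen():
    s = [random.randrange(Q) for _ in range(N)]            # secret vector
    A = [[random.randrange(Q) for _ in range(N)] for _ in range(M)]
    e = [random.choice([-1, 0, 1]) for _ in range(M)]      # small noise
    b = [(sum(A[i][j] * s[j] for j in range(N)) + e[i]) % Q
         for i in range(M)]
    return s, (A, b)          # (A, b) hides s behind the noise e

def encrypt(pub, bit):
    A, b = pub
    rows = random.sample(range(M), M // 2)                 # random subset
    u = [sum(A[i][j] for i in rows) % Q for j in range(N)]
    v = (sum(b[i] for i in rows) + bit * (Q // 2)) % Q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(u[j] * s[j] for j in range(N))) % Q
    return 0 if min(d, Q - d) < Q // 4 else 1              # round off noise

s, pub = keygen()
for bit in (0, 1):
    assert decrypt(s, encrypt(pub, bit)) == bit
print("both bits recovered")
```

Decryption works because the accumulated noise (at most 10 here) stays below Q//4, so rounding recovers the bit; breaking the scheme without the secret amounts to solving a noisy lattice problem.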
Laurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology, stated in a public statement that the NIST post-quantum cryptography program has taken advantage of the world’s best minds in cryptography to produce this first group of quantum-resistant algorithms that will result in a standard and greatly improve the security of our digital information.
NIST recommends CRYSTALS-Dilithium as the principal method for digital signatures, with FALCON for applications that need smaller signatures than Dilithium can offer. Though SPHINCS+ is slower than the other two, it was approved because it is based on a different mathematical approach and therefore adds diversity. The algorithms are available on the NIST website.
NIST won’t stop at four, either. The organization added that additional candidates are still under consideration and that it will soon announce the finalists of the next round. The four remaining candidate techniques are intended for general encryption and do not rely on hash functions or structured lattices.
To explain the need for multiple standards and a multi-stage strategy, NIST stated that a useful standard offers solutions tailored to various scenarios, employs a variety of encryption techniques, and provides more than one algorithm for each use case.
NIST urges security professionals to investigate the new algorithms and think about how their applications will utilize them while the standard is still being developed, but not to incorporate them into their systems just yet because the algorithms could change marginally before the standard is finished.
In the meantime, the US government’s efforts to build defenses against quantum computing are growing. Recent White House directives called for fast approval of the Bipartisan Innovation Act and underlined that the government and businesses should move forward with NIST’s standards. On May 11, in response to a White House directive to migrate federal government IT systems to post-quantum cryptography, the US House of Representatives Oversight and Government Reform Committee unanimously approved the Quantum Computing Cybersecurity Preparedness Act, put forth by US Representative Nancy Mace (R-SC).
In addition to outlining rough timelines and responsibilities for federal agencies to migrate the majority of the US’s cryptographic systems to quantum-resistant cryptography, the Biden administration’s memorandum also underlines its desire for the US to maintain its leadership in quantum information science (QIS).
The White House wants the US to move to cryptographic systems resistant to a cryptanalytically relevant quantum computer (CRQC) by 2035, although no binding timeline for the transition has been set.
This development comes as quantum computers edge closer to the commercial market. Once quantum computers are ready for commercial use, the PQC algorithms promise to keep data secure against attacks far more powerful than anything present standards can withstand. For instance, as part of its quantum computing roadmap, IBM plans to release Osprey, a 433-qubit processor, by the end of this year, more than tripling the size of IBM Eagle, the 127-qubit processor unveiled in November 2021.
China is also at the forefront of quantum technology development thanks to its extensive research funding, and other countries have ramped up funding as well. Nations are racing to build the first practical quantum computing system, given the security implications quantum technology carries over conventional methods.
For now, one can be confident that these new cryptographic standards will be crucial in helping businesses decide which solutions to deploy to safeguard their data against quantum-era threats, which experts predict could materialize as early as 2030.
On the occasion of the 75th anniversary of India's Independence, Defense Minister Rajnath Singh recently inaugurated the 'Artificial Intelligence in Defence' (AIDef) symposium, which featured an exhibition of AI-enabled solutions.
Among the 75 AI-enabled products displayed and launched by the minister at the symposium were surveillance and reconnaissance systems, autonomous weapon systems, human behavioral analysis software, robotic products, and several other simulators and testing equipment. The defense minister said that a driver fatigue monitoring system, AI-enabled voice transcription software, and welding-defect evaluation software are also part of the defense arsenal.
In light of the Russia-Ukraine war, Singh recalled Vladimir Putin's words that whoever becomes the leader in the AI sphere will become the ruler of the world. Singh emphasized that India has no intention of ruling the world, but said that to avoid being ruled by any other country, India must develop its AI capabilities. He added that India must be ready to tackle any legal, political, and economic battles that may follow from advances in artificial intelligence.
In what seems like an obvious move, the Indian Armed Forces are drawing lessons from the ongoing Russia-Ukraine war and adopting new technologies to prepare for an uncertain future. Particular emphasis is being laid on AI, Machine Learning, Deep Learning algorithms, Robotics, Quantum Labs, Industry 4.0, and much more.
India has seen several technological advances in the defense sector. Recently, the Indian Air Force (IAF), under the Unit for Digitisation, Automation, Artificial Intelligence and Application Networking (UDAAN), inaugurated the Centre of Excellence for Artificial Intelligence at Air Force Station, New Delhi. IAF plans to undertake several proactive steps to embed Artificial Intelligence (AI) and Industry 4.0-based technologies in its war-fighting processes.
Similarly, the Indian Army has been making several conscious efforts to incorporate the latest technologies into its services. The Army is currently working on quantum computing labs, robotic surveillance platforms, 5G, and the real-time application of artificial intelligence in border areas. Special emphasis is being laid on air defense systems backed by automated drone detection, augmented reality, and unmanned combat units for tank formations. For the past couple of months, the Eastern and Northern Commands of the Indian Army have been holding major technology symposiums focused on identifying the forces' requirements and tailoring solutions to operational needs.
The Indian Army also established Quantum Labs in 2021 to transform its current cryptography system, which uses algorithms to encode voice and data for secure transmission. Such security is vital during conflicts, as all major equipment communicates over radio waves. These systems also relay real-time live feeds to commanders, making their job much easier by detecting the enemy, whether man or machine.
All the systems used by the Indian Armed Forces are embedded with AI-oriented machines tuned for anomaly detection and interpretation. They can also detect intrusions at the Line of Control (LoC) and read drone footage.
The adoption of recent technological advances into India's defense arsenal shows the country's commitment to embracing the future of technology, namely artificial intelligence. The focus on AI-driven technologies is growing, and the Indian Army is keen to adopt them in light of the emerging threats of a third world war. With the recent addition of AI technologies, the Indian Defense establishment is expected to keep introducing new advancements in the near future.
Evolv Technology, a company specializing in weapons detection and security screening, has announced that Gillette Stadium will use its AI weapons detection technology, the Evolv Express system.
Evolv Express combines powerful sensor technology with proven AI and analytics to enable more accurate threat detection for safer public venues.
Evolv’s systems are trained to detect weapons and other potential threats while ignoring harmless items like keys, loose change, and cell phones. As thousands of fans gather at the stadium, they can now pass through the entrances without having to stop or wait in long queues.
This technology has been employed to create better customer experiences as fans anticipate the next big game by the New England Patriots and the New England Revolution.
In a statement, stadium officials said the decision to choose Evolv Technology was made after a thorough evaluation. Fans will have a better experience with more sophisticated AI-based systems, which identify and address potential threats in less intrusive ways.
According to Billboard, Gillette Stadium is one of the world’s top-grossing concert venues, with around 65,878 seats. The venue hosts various major ticketed events throughout the year, including international soccer matches, motorsports, NCAA athletics, and high school football state championships.