
Squeezed Quantum microcomb opens a path toward Quantum Computing in real-world conditions


A microresonator-based frequency comb, or microcomb, is a photonic device that generates many optical frequencies in a tiny cavity known as a microresonator. Because the colors are uniformly spaced, microcombs can measure and generate frequencies with extreme precision, and since most measurements can be linked to a frequency, they have a wide range of applications. A single photonic chip can replace tens of lasers, thereby decreasing power consumption in optical communication channels. Microcombs are also used to calibrate spectrographs at astronomical observatories and might help discover exoplanets.
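
The precision claim follows from the comb's structure: every tooth sits at a common offset plus an integer multiple of the line spacing, so fixing just two parameters pins down thousands of optical frequencies at once. A minimal sketch, with illustrative values rather than numbers from the paper:

```python
# The evenly spaced "teeth" of a frequency comb: each line sits at
# f_n = f_ceo + n * f_rep. Measuring an unknown optical frequency then
# reduces to counting comb lines. Values below are illustrative only.

f_ceo = 0.1e12   # carrier-envelope offset frequency, Hz (illustrative)
f_rep = 22.0e9   # repetition rate, i.e. line spacing, Hz (illustrative)

def comb_line(n: int) -> float:
    """Frequency of the n-th comb tooth in Hz."""
    return f_ceo + n * f_rep

# A comb spanning thousands of lines covers a broad optical band.
for n in (8000, 8001, 8002):
    print(f"line {n}: {comb_line(n) / 1e12:.4f} THz")
```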

However, quantum microcomb architectures built on probabilistic quantum states are not scalable without quantum memory. Xu Yi is an assistant professor of electrical and computer engineering at the University of Virginia School of Engineering and Applied Science. In recent research, he and his group demonstrated a deterministic, two-mode-squeezed quantum frequency comb in a silica microresonator on a silicon chip.

Nature Communications published Yi's paper, A squeezed quantum microcomb on a chip. The paper's co-first authors are Mandana Jahanbozorgi, a Ph.D. student of electrical and computer engineering, and Zijiao Yang, a Ph.D. student in physics. Hansuek Lee, assistant professor at the Korea Advanced Institute of Science and Technology, and Olivier Pfister, professor of quantum optics and quantum information at UVA, also contributed to the research.

Read more: Volkswagen Develops New Applications of Automotive Quantum Computing

Yi's research group has created a scalable quantum computing platform on a photonic chip the size of a penny. The photonics-based squeezed quantum microcomb can drastically reduce the number of devices needed to achieve quantum speed.

“The future of the field is integrated quantum optics,” Pfister said. “Only by transferring quantum optics experiments from protected optics labs to field-compatible photonic chips will bona fide quantum technology be able to see the light of day. We are extremely fortunate to have been able to attract to UVA a world expert in quantum photonics such as Xu Yi, and I’m very excited by the perspectives these new results open to us.”

Yi's photonics-based squeezed quantum microcomb is attractive because each light wave has the potential to become a quantum unit. He carried the multiplexing concepts of optical fibers into the quantum realm. Yi's research group created a ring-shaped, millimeter-sized quantum source that efficiently converts photons from a single wavelength to multiple wavelengths. His team verified the generation of 40 qumodes (the fundamental information-carrying units of continuous-variable quantum computers) in the form of 20 two-mode-squeezed comb pairs from a single microresonator on a chip, demonstrating that optical multiplexing of quantum modes can work in integrated photonic platforms. The count of 40 measured qumodes was set by the limited span of the local oscillators, not by the source itself. “We estimate that when we optimize the system, we can generate thousands of qumodes from a single device,” Yi said.
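
To make the multiplexing scheme concrete, here is a hedged sketch of the mode bookkeeping, with hypothetical indices rather than the paper's actual mode map: entangled signal/idler lines sit in pairs placed symmetrically about the pump, so 20 pairs account for the 40 measured qumodes.

```python
# Hypothetical mode map: signal/idler comb lines of a two-mode-squeezed
# microcomb come in pairs symmetric about the pump line (index 0).
pump_index = 0
num_pairs = 20

# (idler, signal) offsets from the pump; indices are illustrative.
pairs = [(pump_index - k, pump_index + k) for k in range(1, num_pairs + 1)]

qumodes = sorted(i for pair in pairs for i in pair)
assert len(qumodes) == 40  # 20 two-mode-squeezed pairs -> 40 qumodes

print(pairs[:3])  # [(-1, 1), (-2, 2), (-3, 3)]
```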


The figure above depicts the experimental setup of Yi's team: a continuous-wave (CW) laser drives both the squeezed microcomb and the local oscillators. All of the experimental data are available here.

Yi's photonics-based system offers two benefits that open a path toward quantum computing in real-world conditions. First, unlike quantum computing platforms that require cryogenic temperatures to cool superconducting electronic circuits, photonic integrated chips can run at room temperature, since photons have no mass. Second, Hansuek Lee fabricated the microresonator on a silicon chip, which means the quantum source can be mass-produced.


Stanford’s ML Algorithm Accurately Predicts the Structure of Biological Macromolecules


Structural biology, a branch of molecular biology, biochemistry, and biophysics, is concerned with the molecular structure of biological macromolecules such as proteins, RNA, and DNA. Macromolecules carry out most cell functions, but they can do so only by folding into specific three-dimensional shapes. Even with the most advanced light microscopes, scientists cannot see the structure of biomolecules, which are too small to resolve in detail.

However, advances in technology and AI have made it easier to determine the 3D structures of biological molecules. Recent structural-biology research at Stanford determined the structures of proteins and RNAs accurately. The work is published in two papers: Hierarchical, rotation-equivariant neural networks to select structural models of protein complexes, which appeared in Proteins in December 2020, and Geometric deep learning of RNA structure, which appeared in Science on August 27, 2021.

Ron O. Dror, Ph.D., associate professor of computer science, led the first study, published in Proteins. The second study was co-led by Dror and Rhiju Das, Ph.D., associate professor of biochemistry. Stanford Ph.D. students Stephan Eismann and Raphael Townshend assisted in both studies, which used an ML algorithm to predict the 3D structures of biological molecules accurately.

Read more: Implantable AI Chip Developed for Classification of Biosignals in Real-time

“Structural biology, which is the study of the shapes of molecules, has this mantra that structure determines function,” said Townshend. Accurate prediction of molecular structure has implications for informed drug design and fundamental biological research, and it allows researchers to explain how different molecules work.

The researchers let the algorithm discover for itself which features make a structural prediction more or less accurate, rather than supplying hand-picked features as input, to avoid biasing it toward particular features. “The problem with these hand-crafted features in an algorithm is that the algorithm becomes biased towards what the person who picks these features thinks is important, and you might miss some information that you would need to do better,” said Eismann.

In this process, the algorithm recovered features that researchers already knew mattered and discovered new characteristics as well. After applying the ML algorithm to proteins, the researchers tested it on ‘RNA Puzzles.’ The tool outperformed all the other puzzle participants.

But why does a protein's shape matter? A protein's structure determines how it interacts with other molecules, and therefore its function. For instance, we now know that antibodies are Y-shaped and DNA polymerase III is donut-shaped. The Y shape lets the immune-system protein bind foreign molecules such as bacteria or viruses with one end while recruiting other immune-system proteins with the other. Misfolded or misshapen proteins, by contrast, stop functioning correctly and can lead to disease: Parkinson's disease, Alzheimer's disease, and cystic fibrosis are all caused by misfolded proteins.

A structure-based understanding of proteins is imperative for developing certain drugs, since drugs work by either supporting or blocking the activity of specific proteins. To turn off or alter a protein, for instance, researchers must use structures to understand how it works together with other proteins. This method was used to develop protease inhibitors, a class of anti-HIV drugs: since HIV protease keeps the virus alive, researchers used its structure to design molecules that block it.

In the Stanford studies, the resulting scoring function substantially outperformed previous methods, consistently producing the best results in community-wide blind RNA structure prediction challenges. The ML algorithm uses only atomic coordinates as inputs and was trained on only 18 known RNA structures, yet it effectively overcame a major limitation of standard deep neural networks in structural biology. Because the algorithm was initially built to assess protein structures and uses no RNA-specific information, the approach can apply to diverse problems in biochemistry, structural biology, materials science, and beyond.
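
To make the coordinates-only idea concrete, here is a minimal sketch of a structure-scoring network, assuming PyTorch. The architecture is a toy stand-in, not the published model; in particular, it omits the rotation equivariance the Stanford networks are built around.

```python
# Toy scorer that sees only raw atomic coordinates, in the spirit of
# learning features from structure instead of hand-crafting them.
# Shapes and layers are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class CoordScorer(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        # Per-atom encoder applied to raw (x, y, z) coordinates.
        self.encoder = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.score = nn.Linear(hidden, 1)  # one accuracy score per model

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (num_atoms, 3). Mean-pooling the atom embeddings makes
        # the score invariant to atom ordering.
        h = self.encoder(coords).mean(dim=0)
        return self.score(h)

model = CoordScorer()
candidate = torch.randn(500, 3)   # 500 atoms with dummy coordinates
print(model(candidate).item())    # higher score = predicted more accurate
```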


NVIDIA unveils Artificial Intelligence Technology for Speech Synthesis


NVIDIA recently unveiled research on speech synthesis that could make voice assistants like Google Assistant and Siri sound far more human-like. Speech-generation technology has improved many times over in recent years, but it still lacks critical elements of human speech, such as rhythm and intonation.

NVIDIA researchers are developing a new speech synthesis technology that would make voice assistants sound richer, producing voice modulations and dynamics closer to those of human speakers.

The complete research will be presented in a session at the Interspeech 2021 conference, commencing on September 3. Researchers from NVIDIA's text-to-speech group have developed a model named RAD-TTS that can achieve the aforementioned qualities in a voice bot.

Read More: Implantable AI Chip Developed for Classification of Biosignals in Real-time

The developers drew on extensive research in conversational artificial intelligence, natural language processing, audio enhancement, automated speech recognition, and related areas to build the RAD-TTS model. The model runs efficiently on NVIDIA GPUs, and the company has open-sourced some of the work through its NVIDIA NeMo toolkit.

NVIDIA mentioned in a blog, “With this interface, our video producer could record himself reading the video script, and then use the AI model to convert his speech into the female narrator’s voice.” 

The blog also mentioned that users could use the technology to tweak the generated speech to improve the narration and flow of videos. Earlier speech synthesis models were not able to produce accurate voice modulations, so they could not add the emotional aspect of speech to narrations. 

NVIDIA’s RAD-TTS comes with a unique feature that can change a speaker’s voice to sound like someone else. “The AI model’s capabilities go beyond voiceover work: text-to-speech can be used in gaming, to aid individuals with vocal disabilities or to help users translate between languages in their own voice,” said NVIDIA. 


Implantable AI Chip Developed for Classification of Biosignals in Real-time


Artificial intelligence (AI) has been bringing positive change to medicine and healthcare. However, implanting AI into the human body has remained a significant technical challenge. A research team led by Dr. Hans Kleemann, Professor Karl Leo, and Matteo Cucchi at Dresden University of Technology has developed a biocompatible implantable AI chip, showing for the first time that such a chip can classify healthy and diseased biological signals in real time. The results were published in the journal Science Advances.

Much effort has gone into developing biocompatible organic materials for biosignal detection. The current research uses a polymer-based fiber network that structurally resembles the human brain and implements the neuromorphic AI principle of reservoir computing. The random placement of the polymer fibers lets data be processed much as the brain processes it, and because the fibers respond nonlinearly, they amplify even the slightest signal changes. The chip, based on organic electrochemical transistors (OECTs) and reservoir computing (RC), is a non-invasive, lightweight implant that consumes less energy than a pacemaker.
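
To see what reservoir computing buys, here is a minimal software sketch, assuming NumPy: a fixed random recurrent network stands in for the chip's random polymer-fiber mesh, and only a simple linear readout is ever trained. The sizes and the toy task are illustrative, not the paper's experiment.

```python
# Reservoir computing in software: a fixed random nonlinear "reservoir"
# projects an input signal into a high-dimensional state; training
# touches only the linear readout. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_steps = 200, 400

W_in = rng.normal(0.0, 0.5, (n_res, 1))        # fixed input weights
W = rng.normal(0.0, 1.0, (n_res, n_res))       # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale for stable dynamics

def reservoir_states(u):
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)  # nonlinear state update
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0.0, 20.0, n_steps))    # toy input "biosignal"
target = np.roll(u, 1)                         # toy task: one-step recall
X = reservoir_states(u)
w_out, *_ = np.linalg.lstsq(X, target, rcond=None)  # train readout only
print(f"readout MSE: {np.mean((X @ w_out - target) ** 2):.2e}")
```

The design choice mirrors the hardware: the random nonlinear reservoir never needs tuning, which is exactly what makes a fixed physical polymer network usable as a computer.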

Read More: Iteris reveals Artificial Intelligence-powered Detection Sensor Vantage Apex

“The vision of combining modern electronics with biology has come a long way in recent years with the development of so-called organic mixed conductors,” explains Matteo Cucchi, Ph.D. student and first author of the paper. “By harnessing the power of neuromorphic computing, such as reservoir computing used here, we have succeeded in not only solving complex classification tasks in real-time, but we will also potentially be able to do this within the human body. This approach will make it possible to develop further intelligent systems in the future that can help save human lives.”

The AI chip achieved 88% accuracy in distinguishing a healthy heartbeat from three common arrhythmias. The potential uses of implantable AI chips are broad: doctors could use them to monitor post-surgery cardiac recovery, reporting arrhythmias and complications to both doctors and patients via smartphone. Such quick reporting would ensure swift medical assistance.


Can adding Hardware Trojans into a Quantum Chip stop Hackers?

Image credit: Technical University of Munich (TUM)

When it comes to improving the efficiency of computers, developers focus on enhancing hardware and software capabilities. But malware can inhibit the functionality of both hardware and software, and that holds true for quantum computers too.

Despite the availability of various cybersecurity solutions, we continue to witness an increase in cyber-attacks and data breaches on a daily basis. While we leverage machine learning tools to fend off cyber threats, cyberattackers are also busy coming up with new and more powerful forms of malicious attacks. But this is not where the security concerns end.

At present, data is transferred across the internet in packets. These packets are generally encrypted, with their keys protected by public-key cryptography (PKC); after receiving the packets, the receiver reassembles and decrypts them. While existing encryption methods are a fairly safe solution today, once quantum computers become mainstream, securing the internet with today's PKC alone will be challenging.
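
For readers unfamiliar with PKC, here is a minimal sketch using Python's widely used cryptography package, with RSA-OAEP standing in generically for public-key encryption; in real internet traffic, PKC typically protects only the key exchange, while symmetric ciphers do the bulk encryption.

```python
# Public-key encryption of a small payload with RSA-OAEP: anyone can
# encrypt with the public key, but only the private key can decrypt.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

packet = b"example packet payload"
ciphertext = public_key.encrypt(packet, oaep)
assert private_key.decrypt(ciphertext, oaep) == packet
```

It is precisely schemes like this, whose security rests on problems such as integer factoring, that a sufficiently large quantum computer running Shor's algorithm would break.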

Hence, scientists are working on technologies to protect against quantum computer attacks and to improve quantum security. 

The following are two popular quantum cybersecurity solutions in use at the moment:

  1. Quantum-safe cryptography: Also referred to as post-quantum cryptography, it creates novel cryptographic techniques that can be implemented on today's classical computers but are immune to attacks from future quantum computers. One method is to increase the size of digital keys so that the number of permutations that must be searched by raw computational power grows dramatically.
  2. Quantum key distribution: This is the act of establishing a shared key between two trusted parties using quantum communication, such that an untrusted eavesdropper cannot learn anything about the key. It is accomplished by sending data as photons of light rather than bits, letting firms exploit photons' measurement-disturbance and no-cloning properties, so a confidential key cannot be covertly duplicated or intercepted in transit (a toy sketch of the basis-sifting step follows this list).
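
Here is that toy sketch of the basis-sifting step at the heart of BB84-style quantum key distribution. It is illustrative only: real QKD transmits photons and adds eavesdropper detection and error correction, all of which this sketch omits.

```python
# Toy BB84 sifting: sender and receiver pick random bases; a bit
# survives into the shared key only where their bases happen to match.
import secrets

n = 16
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0 = +, 1 = x
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# With no eavesdropper, Bob's result matches Alice's bit whenever the
# bases agree; on mismatched bases his measurement is random.
bob_bits = [b if ab == bb else secrets.randbelow(2)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# The parties publicly compare bases (never bits) and keep the matches.
key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
print("sifted key:", key)
```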

Recently, researchers at the Technical University of Munich (TUM) created a computer chip that efficiently supports post-quantum cryptography and deliberately contains four hardware trojans.

The chip is an application-specific integrated circuit (ASIC) based on the open-source RISC-V standard and is a modification of an open-source chip design. The ASIC design was chosen with the aim of showing that the chip can thwart attempts by hackers to decode communications using quantum computers.

The TUM researchers claim their chip is the first post-quantum cryptography device built fully through hardware/software co-design. As a consequence, encrypting with Kyber runs around ten times faster than on chips based entirely on software solutions. According to Georg Sigl, the TUM researcher who led the work, it also consumes around eight times less energy and is almost as flexible. The team published these results in 2020 in the journal IACR Transactions on Cryptographic Hardware and Embedded Systems.

Kyber is a lattice-based algorithm and one of the most promising candidates for post-quantum cryptography. In most cases, a lattice-based cryptography algorithm encodes a secret message as a target point in a lattice, then adds random noise to that point, making it similar to, but not identical with, another lattice point. Without knowing what noise was added, identifying the original target point, and the accompanying secret message, is difficult for both classical and quantum computers, especially when the lattice is exceedingly big.
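
A toy numerical illustration of that noise trick, in the style of learning-with-errors; the parameters are deliberately tiny and insecure, and this is not Kyber itself:

```python
# Hiding a secret behind small noise: recovering s from the public pair
# (A, b = A @ s + e mod q) is plain linear algebra without the noise e,
# and a hard lattice problem with it once the dimensions are large.
import numpy as np

rng = np.random.default_rng(1)
q, n, m = 97, 4, 8                    # toy parameters, far too small

s = rng.integers(0, q, n)             # secret vector
A = rng.integers(0, q, (m, n))        # public random matrix
e = rng.integers(-1, 2, m)            # small noise drawn from {-1, 0, 1}

b = (A @ s + e) % q                   # published "noisy lattice point"
print("public:", A.shape, b)          # s and e never leave the sender
```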

Furthermore, operations such as generating randomness and multiplying polynomials consume a large share of the computing resources in lattice-based cryptography algorithms. As a result, the chip's hardware and control software were designed to work together to generate randomness efficiently and to minimize the complexity of polynomial multiplication.
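
For a sense of what that hardware accelerates, here is the core polynomial product in the ring Z_q[X]/(X^n + 1) that Kyber-style schemes use, written as a naive O(n^2) loop with toy parameters; dedicated hardware instead uses the O(n log n) number-theoretic transform (NTT):

```python
# Schoolbook multiplication in Z_q[X]/(X^n + 1): coefficients that
# overflow past degree n-1 wrap around with a sign flip (X^n = -1).
def negacyclic_mul(a, b, q):
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:
                c[k - n] = (c[k - n] - a[i] * b[j]) % q
    return c

print(negacyclic_mul([1, 2, 3, 4], [5, 6, 7, 8], q=17))
```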

The processor can also execute the SIKE algorithm, which needs much more processing power. According to the team's design estimates, SIKE runs 21 times faster on their chip than on chips using solely software-based encryption. If lattice-based methods are someday deemed no longer secure, SIKE can be viewed as a potential alternative. The team reported these findings in 2020 in the Proceedings of the 39th International Conference on Computer-Aided Design.

Apart from Kyber and SIKE acceleration, the research team is also using the chip to explore smart hardware-trojan detection. The idea behind deliberately incorporating hardware trojans on the chip was to study approaches for detecting this sort of “malware from the chip factory.”

In layman's terms, hardware trojans are additions or alterations to a circuit introduced to wreak havoc by changing the system's intended function. When activated, trojans harm electronics, causing decreased dependability, system failure, remote access to hardware, leaks of sensitive data, and damage to a company's reputation.

Therefore, if attackers succeed in embedding trojan circuitry in a chip design before or during manufacture, the results can be devastating. Worse, hardware trojans are engineered to evade conventional testing practices and optical verification methodologies, and they can undermine even post-quantum cryptography from inside the hardware.

Read More: Google Announces Creating Time Crystal Inside its Sycamore Quantum Computer

One proposed solution is to use 'golden chips' for microscopic inspection and comparison against suspect chips, but very little is known about how attackers insert hardware trojans or put them to use. To learn more, the researchers built four separate hardware trojans into their device, each operating in a unique manner.

Sigl explains, “To develop protective measures, we have to put ourselves in the shoes of attackers, so to speak, and develop and hide Trojans ourselves. That’s why we have built four Trojans into our post-quantum chip that we developed and that work quite differently.”

Over the coming months, Sigl and his colleagues will evaluate the chip's cryptographic capabilities as well as the operation and detectability of the hardware trojans. After that, the chip will be dismantled for research purposes: the proof-of-concept chip will be destroyed layer by layer, each layer photographed individually, and every stage of the process fed to newly created machine learning algorithms. This procedure should make it possible to train algorithms that reconstruct the exact functioning of chips via reverse engineering and recognize hardware functions even without technical information about what the hardware does.

The above findings are published in Proceedings of the 18th ACM International Conference on Computing Frontiers (2021).


Iteris reveals Artificial Intelligence-powered Detection Sensor Vantage Apex


Smart mobility infrastructure management company Iteris has revealed its newly developed artificial intelligence-powered technology, Vantage Apex, which it describes as the world's first full-high-definition, four-dimensional smart radar sensor driven by artificial intelligence algorithms.

The new sensor is powered by Iteris' machine learning and artificial intelligence-enabled image analytics, high-performance computing, and neural network algorithms, enabling it to detect and track vehicles, cyclists, vulnerable pedestrians, and more.

Video captured by Vantage Apex can be viewed at traffic management centers and through a smartphone application named Iteris Video Viewer. The sensor captures crisp 1080p video with 4D detection over a range of more than 600 feet, and its front-fire radar technology helps it identify relevant objects accurately.

Read More: Velodyne Lidar partners with MOV.AI to provide solutions for Industrial and E-commerce robotics

Senior Vice President and General Manager of Advanced sensor technologies at Iteris, Todd Kreter, said, “With the addition of Vantage Apex to Iteris’ market-leading portfolio of smart sensors, transportation agencies now have access to unmatched detection and tracking accuracy of vehicles, pedestrians, and cyclists, as well as an HD video display for TMC monitoring.”

He further mentioned that this would help TMC workers to ensure safety, mobility, and sustainability in high-traffic areas. The new Vantage Apex is fully compatible with the company’s cloud-based detection health monitoring system named ClearGuide SPM and another traffic monitoring application, VantageLive. 

Iteris is a Santa Ana-based artificial intelligence-powered mobility infrastructure management firm that was founded in the year 2004. Earlier, the company acquired a startup named TrafficCast through a $16 million deal to strengthen its research and development department further. 


Velodyne Lidar partners with MOV.AI to provide solutions for Industrial and E-commerce robotics


Lidar and software development company Velodyne Lidar has partnered with robotics engine platform provider MOV.AI to provide autonomous solutions for industrial and e-commerce robotics. The news was announced in a press release published by Velodyne Lidar.

The two companies plan to develop enterprise-grade solutions for commonly faced challenges such as navigation, mapping, and obstacle and risk avoidance.

Integrating MOV.AI's REP with Puck, a high-end sensor manufactured by Velodyne, will allow the companies to develop advanced technological solutions that help automate the e-commerce, manufacturing, logistics, and healthcare industries at a global level.

Read More: India Launches New Drone Rules 2021

Due to the COVID-19 pandemic, several sectors, including manufacturing and e-commerce, are seeing a greater need for automation to keep pace with growing consumer demand. The solution developed by the two companies can play a vital role in this adoption of automation.

CEO of MOV.AI, Motti Kushnir, said, “Through the collaboration with Velodyne, we are able to offer our customers advanced SLAM navigation powered by the Puck, one of the world’s leading lidar sensors.” 

He further mentioned that MOV.AI's platform gives autonomous mobile vehicles access to one of the best navigation systems available, one that can also be customized to each client's needs.

The navigation system gives AMRs (autonomous mobile robots) the ability to perform complex autonomous movements both indoors and outdoors. The jointly developed solution will provide ±2 cm accuracy in over 65% of environmental and terrain conditions.

Velodyne is a United States-based sensor and software company founded by David Hall in 1983. The firm has raised total funding of $375 million from industry-leading companies such as Ford, Hyundai, Baidu, and Nikon.

Velodyne Lidar's executive director of Europe, Erich Smidt, said, “We see extensive potential in this space, with the global AMR market size expected to reach USD 8.70 billion in 2028, at a CAGR of 23.7 percent from 2021 to 2028, according to Fortune Business Insights.”


Peak AI Raises $75 million in Series C Funding Round


Decision intelligence startup Peak AI has raised $75 million in Series C funding. The third funding round for the Manchester, England-based startup was led by SoftBank Vision Fund 2, with Series B investors Oxx, MMC Ventures, Arete, and Praetura Ventures also participating.

With $21 million raised in Series B in February this year, the company has now raised a total of $119 million. The Series C funding will be used to expand into new markets, continue building the platform, and hire some 200 new people in the coming quarters.

Unlike technology companies that build AI tools and platforms as their business goal, Peak aims its platform at non-tech companies as a business service. Peak AI has built a decision intelligence platform used by brands, manufacturers, retailers, and others to build personalized customer experiences and monitor stock levels. Its current customers include the likes of KFC, Molson Coors, Asos, Speedy, Nike, and Marshalls.

Read more: Databricks raises $1.6 billion in Series H funding round

“In Peak, we have a partner with a shared vision that the future enterprise will run on a centralized AI software platform capable of optimizing entire value chains,” Max Ohrstrand, senior investor for SoftBank Investment Advisers said in a statement. “To realize this, a new breed of the platform is needed, and we’re hugely impressed with what Richard and the excellent team have built at Peak. We’re delighted to be supporting them on their way to becoming the category-defining, global leader in Decision Intelligence.”

Peak claims that its customers have seen return on ad spend double, revenues rise by an average of 5%, supply chain costs fall by 5%, and inventory holdings shrink by 12%. However, it is not a no-code platform: it is directed at engineers and data scientists, helping them identify operational processes that might benefit from AI tools and build those tools without heavy lifting.


Databricks raises $1.6 billion in Series H funding round


Data and artificial intelligence solutions firm Databricks has raised $1.6 billion in its recently held Series H funding round, led by Counterpoint Global. Other investors, including Baillie Gifford, ClearBridge Investments, and UC Investments, also participated in the round.

The fresh funding raises Databricks' valuation to $38 billion. With the new funds, the company plans to further widen its lead in the rapidly booming data lakehouse market, where it is already among the industry leaders.

Chief Data Officer of AT&T, Andy Markus, said, “We leverage Data Lakehouse in Databricks for our most granular data as well as real-time data pipelines supporting key AI/ML applications.”

Read More: Cognigy launches Conversational AI Analytics Suite Cognigy Insights

He further mentioned that Databricks helps AT&T modernize its data ecosystem and migrate it to the cloud, which in turn helps it provide a better customer experience. To strengthen its data lakehouse efforts, Databricks also announced that it had appointed former Salesforce executive Andy Kofoid as its new President of Global Field Operations.

The company allows its clients to build and manage their data lakehouse on popular platforms like Microsoft Azure, Google Cloud, and AWS, letting them run every data and analytics workload on a single platform.

San Francisco-based artificial intelligence company Databricks was founded by Ali Ghodsi, Andy Konwinski, Ion Stoica, Scott Shenker, and Reynold Xin in 2013. The enterprise has a workforce of over 5,000 employees and has raised total funding of $3.6 billion to date.

Co-founder and CEO of Databricks, Ali Ghodsi, said while talking about the funding, “This marks a thrilling new chapter that will allow us to accelerate our pace of innovation and further invest in the success of data-driven organizations on their journey to the lakehouse.” 

He also added that this new investment comes after a massive surge in demand for the company’s data lakehouse platform and its increased adoption rate in the industry.


India Launches New Drone Rules 2021


The Civil Aviation Ministry of the Indian government has notified the liberalized Drone Rules 2021, which are set to transform core economic sectors such as mining, agriculture, infrastructure, logistics, emergency response, transportation, surveillance, geospatial mapping, defense, and law enforcement. The Drone Rules 2021 replace the Unmanned Aircraft Systems Rules 2021.

The new rules come just a week after the Indian government gave ten organizations conditional permission to use drones for various purposes for one year. The rules were notified on July 15, and comments were invited from industry and stakeholders until August 15.

Under the new rules, coverage has been increased from drones weighing up to 300 kg to those weighing up to 500 kg, bringing drone taxis and heavy payload-carrying drones within scope. In addition, no flight permission is required to fly drones up to 400 feet in green zones, or up to 200 feet in the area between 8 and 12 km from an airport perimeter.
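
Those two thresholds are simple enough to encode directly. The helper below is purely hypothetical: the zone labels and the function are illustrative assumptions, not part of the official rules, and certainly not legal guidance.

```python
# Hypothetical check of the two altitude ceilings described above.
def permission_required(zone: str, altitude_ft: float) -> bool:
    """True if prior flight permission would be needed under the rules."""
    if zone == "green":
        return altitude_ft > 400       # free up to 400 ft in green zones
    if zone == "airport_8_12km":
        return altitude_ft > 200       # free up to 200 ft in this band
    return True                        # conservatively assume yes elsewhere

print(permission_required("green", 350))            # False
print(permission_required("airport_8_12km", 250))   # True
```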

Read more: Volkswagen Develops New Applications of Automotive Quantum Computing

According to the New Drone Policy 2021, no pilot license is required for non-commercial use of nano drones and micro-drones or for research and development organizations. There is no restriction on drone operations by foreign-owned companies registered in India, and no security clearance is mandatory before registration or license issuance. Fees no longer scale with drone size and have been reduced to nominal levels.

“We are going to ensure drone application in transportation, logistics, defense, mining, infrastructure sectors, and more. It will provide more jobs. Our aim is to make India a global drone hub by 2030,” Civil Aviation Minister Jyotiraditya Scindia said at a press conference.

The Digital Sky platform, an initiative of the Ministry of Civil Aviation (MoCA), will go a long way toward providing a secure platform and will support drone-technology frameworks such as NPNT (No Permission, No Takeoff). However, NPNT, designed to enable digital flight permissions and to manage unmanned aircraft traffic efficiently, will not take off for a few more months because of the complexities involved. The New Drone Policy also mandates safety features such as real-time tracking beacons and geo-fencing. The significant reduction in authorizations and pre-approvals should foster innovation and bring a notable boost to dronepreneurs.
