
How can AI help the fate of cryptocurrency in India?

Indian cryptocurrency builders and traders remain hopeful about the future of digital currency in India, despite the regulatory uncertainty around crypto trading and the recent crash in the crypto market.

At the beginning of this year, the crypto market dropped below the $2 trillion mark and kept falling steadily before a slight recovery in April. Overall, the cryptocurrency market has slumped by approximately 70% from its all-time high in November 2021, and tokens such as Dogecoin, Avalanche, and Solana have seen dips of up to 90%. The total cryptocurrency market cap currently stands at $860 billion.

Despite the downturn, experts believe that India can still play a significant role in the future of cryptocurrency if it forges ahead in the right direction. Crypto is still a very new concept, and like any other novel innovation, it will require time and investment to regulate properly. According to one study, the global cryptocurrency market is expected to reach $1,902.5 million by 2028, and artificial intelligence will play a significant role in that growth.

Read More: Central Bank Of Sri Lanka Warns Against Cryptocurrency Amid Economic Crisis

The fate of cryptocurrency in India is not as bleak as it might seem, thanks to a government that is actively making decisions in this area. The Indian government is currently formulating a policy on Web 3.0, the new generation of the internet that could become the decentralized version of the virtual world. This policy can enable India to take part in global strategy development and keep pace with a fast-moving space. According to experts, Web 3.0 is the eventual future of cryptocurrency, and by embracing artificial intelligence, India is working to be at the forefront of creating strategies around it.

How can AI help the fate of the cryptocurrency industry?

Artificial intelligence is already used quite actively in the cryptocurrency industry, and many experts believe its adoption in the crypto market will keep growing steadily in the coming years. The AI market size is expected to reach $360.36 billion by 2028, and since the AI industry itself is developing at such a rapid pace, it can effectively support the further development of the cryptocurrency industry as well.

Over the past few years, artificial intelligence has been used effectively to create numerous chart patterns and indicators that assist crypto traders. With AI tools, traders can make the most of the volatile cryptocurrency market. However, AI has much more to offer.

One of the major concerns for crypto traders is that the market stays open 24x7, which means there is always some activity taking place. It is practically impossible for traders to analyze these price movements constantly, but artificial intelligence is more than capable of monitoring the market around the clock. For example, whenever a movement appears that might earn a profit, an automated trading bot can take informed action on the trader's behalf.
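
A minimal sketch of what such an always-on monitoring loop might look like is shown below. The price feed here is a simulated random walk standing in for a real exchange API, and the symbol, window, and threshold values are arbitrary placeholders; real trading bots layer far more sophisticated models and risk controls on top of this pattern.

```python
import random
import statistics
import time
from collections import deque

def fetch_price(symbol: str) -> float:
    """Placeholder price feed: a random walk standing in for a real exchange API."""
    fetch_price.last = getattr(fetch_price, "last", 20_000.0) * (1 + random.gauss(0, 0.002))
    return fetch_price.last

def run_bot(symbol="BTC-USDT", window=20, threshold=0.01, steps=100):
    """Poll prices continuously and emit signals when price deviates from its moving average."""
    prices = deque(maxlen=window)
    for _ in range(steps):
        price = fetch_price(symbol)
        prices.append(price)
        if len(prices) == window:
            avg = statistics.fmean(prices)
            if price < avg * (1 - threshold):
                print(f"{symbol}: {price:,.2f} is {threshold:.0%} below the moving average -> consider buying")
            elif price > avg * (1 + threshold):
                print(f"{symbol}: {price:,.2f} is {threshold:.0%} above the moving average -> consider selling")
        time.sleep(0.01)  # a real bot would poll the exchange on its own schedule

if __name__ == "__main__":
    run_bot()
```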

Conclusion 

The past few months have been quite damaging to the cryptocurrency market. The majority of crypto assets fell in value, causing widespread panic. Market experts are predicting different scenarios for the near future: some claim that prices will continue to dip, while others expect a recovery in the foreseeable future. One thing is certain, however. As artificial intelligence keeps improving, it will contribute even more effectively to the cryptocurrency industry.

Central Bank of Sri Lanka warns against cryptocurrency amid economic crisis

The Central Bank of Sri Lanka (CBSL) has warned the country against using cryptocurrency in light of the ongoing economic crisis. CBSL said cryptocurrency is largely unregulated, and any dealings related to it should be avoided. 

CBSL also reiterated that it does not consider cryptocurrencies legal tender in the country and said the government has not given a license or any other form of authorization to any crypto-related entity to operate in the nation.

The Central Bank has also not authorized any initial coin offerings (ICO) or mining operations relating to virtual currency exchanges in the nation. According to CBSL, virtual currencies are considered ‘unregulated financial instruments’ and therefore have no regulatory oversight regarding their usage in Sri Lanka. 

Read More: Commonwealth Bank Of Australia Introduces AI To Protect Customers From Scams 

The warning comes as the country deals with a sovereign-debt crisis that has crippled the local economy. Sri Lanka defaulted in May this year and is struggling to secure essential imports. According to one report, year-on-year inflation touched a record 54.6% in June.

Gross domestic product, the total market value of final goods and services, also contracted by 1.6% in the first quarter of 2022. The falling value of the local currency has prompted many Sri Lankans to invest in cryptocurrencies.

Major cryptocurrencies are also witnessing a steady decline. The crypto market has dropped by over 56% in the last few months, from $2 trillion to $873.03 billion. The reversal has led to a similar decline in public stock markets and private market deal flow activities.

NIST announces four post-quantum cryptography algorithms

The National Institute of Standards and Technology (NIST) of the US Department of Commerce announced last week the selection of four encryption algorithms to be incorporated into the organization's post-quantum cryptography (PQC) standard. NIST plans to complete this standard over the next two years and is likely to add other algorithms in the future.

The proverb that necessity is the mother of invention applies to the field of quantum computing. While classical computers express information in binary, i.e., 1s and 0s, quantum computing exploits concepts from quantum physics such as superposition, entanglement, and quantum interference that classical computing cannot. Today, quantum computers are highly sought after for weather forecasting, financial modeling, drug development, and more.

However, in 1994, with the creation of Shor's algorithm, researchers demonstrated that if progress in quantum computing continued long enough, quantum computers could defeat existing encryption technologies such as the Rivest-Shamir-Adleman (RSA) algorithm and elliptic curve cryptography (ECC). Developed by American mathematician Peter Shor, Shor's algorithm is a quantum method for factoring composite numbers in polynomial time, sidestepping the constraints that make factoring intractable for classical machines; with a large enough quantum computer, it could locate the prime factors behind today's keys. The largest RSA key factored so far with classical methods was a 795-bit key, broken in 2019 by a group of academics. The long-trusted Diffie-Hellman key exchange technique, which underpins contemporary cryptographic protocols such as SSL, TLS, PKI, and IPsec, is also projected to be broken by quantum computers, in addition to RSA and ECC.
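
The quantum speedup in Shor's algorithm comes from finding the period (order) of a^x mod N exponentially faster than any known classical method; the surrounding number theory is classical. The toy Python sketch below brute-forces that order classically for a tiny modulus, purely to illustrate how a period yields the prime factors. It is not a quantum implementation and does not scale to real key sizes.

```python
import math
import random

def classical_order(a, n):
    """Brute-force the order r of a modulo n (the step a quantum computer does exponentially faster)."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_factor(n):
    """Classical post-processing of Shor's algorithm on a toy composite n."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:                      # lucky guess: a already shares a factor with n
            return g, n // g
        r = classical_order(a, n)
        if r % 2 == 1:
            continue                   # need an even order
        x = pow(a, r // 2, n)
        if x == n - 1:
            continue                   # trivial square root of 1, try another a
        p, q = math.gcd(x - 1, n), math.gcd(x + 1, n)
        if p > 1 and q > 1 and p * q == n:
            return p, q

print(shor_factor(15))   # e.g. (3, 5)
```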

Although it will be years before quantum computers are powerful enough to crack public-key encryption, when they are, they could pose a serious danger to financial and personal data and to national security. This drawback is well known in the computing industry, and a few businesses have started developing, testing, and implementing new encryption algorithms that are resistant to quantum computers. Companies like IBM have already started providing solutions that focus on post-quantum cryptography protection.

Recently, many PQC-focused companies have emerged from stealth. In May, QuSecure, a three-year-old firm headquartered in San Mateo, California, debuted QuProtect as its first post-quantum cryptography solution. According to QuSecure, QuProtect is an orchestration platform capable of securing both data in transit and data at rest encrypted with the latest post-quantum cryptography algorithms. Another company, PQShield, offers post-quantum cryptography hardware, an encrypted messaging platform, and a System on Chip to protect smart cards and security chips from post-quantum attacks.

Since 2016, the National Institute of Standards and Technology has been spearheading the search for post-quantum cryptography technology to create and test in order to secure such data. It whittled the 82 original proposals for the Post-Quantum Cryptography Standardization project down to four final methods covering two tasks: general encryption (where two parties exchange keys) and digital signature authentication (identity verification).

Traditional cryptography such as RSA and ECC rests on algebraic problems, whereas most post-quantum schemes rest on geometric ones. One of these geometric problems is based on lattices, multidimensional grids of points that spread out in all directions; the hard task is to locate the lattice points or vectors closest to a given target.
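
To make the lattice idea concrete, the toy sketch below brute-forces the closest lattice point to a target in two dimensions. The basis vectors and target are arbitrary illustrative values; real lattice-based schemes work in hundreds of dimensions, where no efficient classical or quantum algorithm for this kind of problem is known.

```python
import itertools
import math

# A 2-D lattice is every integer combination c1*b1 + c2*b2 of the basis vectors b1, b2.
b1, b2 = (4.0, 1.0), (1.0, 3.0)          # arbitrary basis vectors
target = (7.3, 5.1)                       # arbitrary point off the lattice

best_point, best_dist = None, math.inf
for c1, c2 in itertools.product(range(-10, 11), repeat=2):   # brute force, only feasible in tiny dimensions
    point = (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])
    dist = math.dist(point, target)
    if dist < best_dist:
        best_point, best_dist = point, dist

print(f"closest lattice point to {target}: {best_point} (distance {best_dist:.3f})")
```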

Three of the selected algorithms are based on a class of mathematical problems known as structured lattices, while the fourth, SPHINCS+ (an algorithm for digital signatures), employs hash functions.

According to NIST, two of the four selected technologies are expected to be employed most often. One, known as CRYSTALS-Kyber, will protect online data by creating the cryptographic keys required for two computers to exchange encrypted data; it uses relatively small encryption keys and operates comparatively fast. The second, CRYSTALS-Dilithium, is used to sign encrypted data and prove who sent it. It will probably take about two years for the methods to be fully standardized and incorporated into current software and hardware.
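
For readers who want to experiment, the Open Quantum Safe project ships implementations of these candidate algorithms. The sketch below assumes its Python bindings (the oqs package, installed alongside liboqs) with Kyber enabled; algorithm names and availability depend on the installed liboqs version, so treat this as an illustration rather than production guidance.

```python
import oqs  # Open Quantum Safe bindings: pip install liboqs-python (requires liboqs)

# Key encapsulation with CRYSTALS-Kyber: both sides end up with the same shared secret.
with oqs.KeyEncapsulation("Kyber512") as receiver, oqs.KeyEncapsulation("Kyber512") as sender:
    public_key = receiver.generate_keypair()                      # receiver publishes a public key
    ciphertext, secret_sender = sender.encap_secret(public_key)   # sender encapsulates a fresh secret
    secret_receiver = receiver.decap_secret(ciphertext)           # receiver recovers the same secret
    assert secret_sender == secret_receiver
    print("shared secret established:", secret_sender.hex()[:32], "...")
```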

Laurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology, said in a public statement that the NIST post-quantum cryptography program has drawn on the world's best minds in cryptography to produce this first group of quantum-resistant algorithms, which will lead to a standard and greatly improve the security of our digital information.

Also Read: Can adding Hardware Trojans into Quantum Chip stop Hackers?

NIST advises CRYSTALS-Dilithium as the principal method for digital signatures, with FALCON for applications that require smaller signatures than Dilithium can offer. It noted that although SPHINCS+ is slower than the other two, it was approved because it is based on a different mathematical approach and thus adds diversity. The algorithms are available on the NIST website.
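
Signing with the selected schemes follows a similar pattern to key encapsulation. Again, this is a sketch assuming the oqs Python bindings with Dilithium enabled; the exact algorithm identifier depends on the installed liboqs build.

```python
import oqs

message = b"software update v2.1"

# Signer generates a keypair and signs the message with the secret key.
with oqs.Signature("Dilithium2") as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(message)

# Anyone holding the public key can verify the signature.
with oqs.Signature("Dilithium2") as verifier:
    assert verifier.verify(message, signature, public_key)
    print("signature verified")
```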

NIST will not stop at four. The organization added that additional candidates are still being considered and that it will announce the finalists of the next round soon. The four remaining techniques under consideration are intended for general encryption and do not employ hash functions or structured lattices in their approaches.

Explaining the need for multiple standards and a multi-stage strategy, NIST stated that a useful standard provides solutions tailored to various scenarios, employs a variety of encryption techniques, and offers more than one algorithm for each use case.

NIST urges security professionals to investigate the new algorithms and think about how their applications will utilize them while the standard is still being developed, but not to incorporate them into their systems just yet because the algorithms could change marginally before the standard is finished. 

Meanwhile, the US government's efforts to build defenses against quantum computing are growing. Recent White House directives called for swift passage of the Bipartisan Innovation Act and underlined that government and businesses should move forward with NIST's standards. The Quantum Computing Cybersecurity Preparedness Act, put forward by US Representative Nancy Mace (R-SC), was unanimously approved by the US House Committee on Oversight and Reform on May 11 in response to a White House directive to advance the migration of federal government IT systems to post-quantum cryptography.

In addition to outlining rough timelines and responsibilities for federal agencies to migrate the majority of the US’s cryptographic systems to quantum-resistant cryptography, the Biden administration’s memorandum also underlines its desire for the US to maintain its leadership in quantum information science (QIS).

The White House wants the US to move to cryptographic systems resistant to a cryptanalytically relevant quantum computer (CRQC) by 2035, although there is no binding timeline for the transition.

This development comes as quantum computers approach the commercial market, and the PQC algorithms promise far stronger protection than present standards once they arrive. For instance, as part of its quantum computing roadmap, IBM is set to release a 433-qubit processor named IBM Osprey by the end of this year, more than tripling the size of IBM Eagle, the 127-qubit processor unveiled in November 2021.

China is also at the forefront of quantum technology development thanks to its extensive research funding, and other countries have been increasing theirs as well. Nations are racing to build the first practical quantum computing system because of the security implications quantum technology carries for conventional methods.

For now, one can be confident that introducing these new cryptographic standards will be crucial in helping businesses decide which solutions to deploy in their environments to safeguard their data against post-quantum threats, which experts predict could materialize as early as 2030.

Indian Defense Forces upgrade their arsenal with Artificial Intelligence

On the occasion of the 75th anniversary of India's Independence, Defense Minister Rajnath Singh recently inaugurated the 'Artificial Intelligence in Defence' (AIDef) symposium, which included an exhibition of AI-enabled solutions.

Among the 75 AI-enabled products displayed and launched by the minister at the symposium were surveillance and reconnaissance systems, autonomous weapon systems, human behavioral analysis software, robotic products, and several other simulators and testing equipment. The defense minister said that a driver fatigue monitoring system, AI-enabled voice transcription software, and software for evaluating welding defects are also included in the defense arsenal.

Referring to the Russia-Ukraine war, Singh recalled Vladimir Putin's remark that whoever leads in AI will rule the world. Singh emphasized that India has no intention of ruling the world, but said that to avoid being ruled by any other country, India must develop its AI capabilities. He added that India must be ready to tackle any legal, political, and economic battles that may follow from advances in artificial intelligence.

Read More: UK Govt Launches Future Of UK Defense Artificial Intelligence

In what seems like an obvious move, the Indian Armed Forces are drawing lessons from the ongoing Russia-Ukraine war and adopting new technologies to prepare for an uncertain future. Particular emphasis is being laid on AI, machine learning, deep learning algorithms, robotics, quantum labs, Industry 4.0, and much more.

India has seen several technological advances in the defense sector. Recently, the Indian Air Force (IAF), under its Unit for Digitisation, Automation, Artificial Intelligence and Application Networking (UDAAN), inaugurated the Centre of Excellence for Artificial Intelligence at Air Force Station, New Delhi. The IAF plans to take several proactive steps to embed artificial intelligence (AI) and Industry 4.0-based technologies in its war-fighting processes.

Similarly, the Indian Army has been making several conscious efforts to incorporate the latest technologies in its services. The Army is currently working on quantum computing labs, robotic surveillance platforms, 5G, and the real-time application of artificial intelligence in border areas. Special emphasis is being laid on air defense systems which are backed by automated drone detection systems, augmented reality, and unmanned combat units for tank formations. For the past couple of months, the Eastern and Northern Commands of the Indian Army have been holding major technology symposiums focusing on identifying the forces’ requirements and customizing the operational requirements.

The Indian Army also established Quantum Labs in 2021 to transform its current cryptography system, which uses algorithms to encode voice and data for secure transmission. The system keeps transmissions secure during conflicts, as all major equipment communicates over radio waves. These systems relay real-time live feeds to commanders, making their job much easier by detecting the enemy, whether man or machine.

The systems used by the Indian Armed Forces embed AI-oriented modules tuned for anomaly detection and interpretation. They can also detect intrusions at the Line of Control (LoC) and interpret drone footage.

The adoption of recent technological advances in India's defense arsenal shows the country's commitment to embracing the future of technology, i.e., artificial intelligence. The focus on AI-driven technologies is growing, and the Indian Army is keen to adopt them in light of emerging threats. With these recent additions, the Indian defense establishment is expected to keep introducing new advancements in the near future.

Gillette Stadium to use AI weapons detection technology by Evolv 

Evolv Technology, a company specializing in weapons detection and security screening, has announced that Gillette Stadium will use its AI weapons detection technology, the Evolv Express system.

Evolv Express amalgamates powerful sensor technology with proven AI and analytics to help ensure more accurate threat detection for safer public venues. 

Evolv’s systems are trained to detect weapons and other potential threats while ignoring harmless items like keys, loose change, and cell phones. As thousands of fans gather at the stadium, they can now pass through the entrances without having to stop or wait in long queues. 

Read More: Oxford High School Tests AI Gun Detection System

This technology has been employed to create better customer experiences as fans anticipate the next big game by the New England Patriots and the New England Revolution. 

Stadium officials said in a statement that they chose Evolv Technology after a thorough evaluation, and that fans will have better experiences with more sophisticated AI-based systems that identify and address potential threats in less intrusive ways.

According to Billboard, Gillette Stadium is one of the world's top-grossing concert venues, with 65,878 seats. The venue hosts various major ticketed events throughout the year, including international soccer matches, motorsports, NCAA athletics, and high school football state championships.

Introducing Autocast: Dataset to Enable Forecasting of Real-World Events via Neural Networks

Forecasting future events, or whether an industry-specific trend will come to dominate a market, is challenging. A research team from UC Berkeley, MIT, the University of Illinois, and the University of Oxford recently presented Autocast, a dataset containing thousands of forecasting questions and an accompanying date-based news corpus for evaluating the automatic forecasting abilities of neural network models. They also curated IntervalQA, a dataset of numerical questions and calibration metrics. Both datasets are introduced in a paper titled Forecasting Future World Events with Neural Networks.

According to the researchers, there are two types of forecasting. In statistical forecasting, predictions are made using either ML time-series models or more conventional statistical time-series methods such as autoregression. The models are built and fine-tuned by humans, but individual forecasts are not adjusted by hand. This works well when the variable being forecast has many prior observations and little distribution shift. In judgmental forecasting, by contrast, human forecasters make predictions based on their own judgment. They frequently incorporate information from a variety of sources, such as news, common sense, general knowledge, and reasoning, and may also use statistical models. This type of forecasting is used when historical data are scarce.

Until now, forecasting has been employed in only a select few areas, since it depends on scarce human expertise. This has inspired scientists to use ML to automate forecasting, for example by automating human reasoning, quantitative modeling, and information retrieval. Compared with human forecasters, ML models could offer certain benefits: they can parse and comprehend data rapidly and find patterns in noisy, high-dimensional data where human intuition may not suffice. Moreover, knowledge of the outcomes of historical events can bias human reasoning, whereas ML models can judge historical data on the basis of patterns rather than an inclination toward outcomes they already know.

The team enumerates their key contributions as follows:

  1. Introducing Autocast, a forecasting dataset with a wide range of topics (such as politics, economics, society, and science) and time horizons.
  2. Pairing the dataset with a substantial news corpus arranged by date, a standout feature that enables exhaustive evaluation of model performance on historical forecasts.
  3. Showing that current language models struggle with forecasting, with accuracy and calibration well below a strong human baseline.

The team assembled 6,707 forecasting questions from three open forecasting competitions (Metaculus, Good Judgment Open, and CSET Foretell) to create the Autocast dataset. These questions typically attract broad public interest (national elections rather than municipal polls, for example) and have clear resolution criteria. They are either multiple-choice or true/false, or ask for a number or a date.

Participants in these forecasting competitions start forecasting a question on a specific day (the “start date”), and then revise it several times until the “close date.” The forecast is resolved at a later time, and participants are graded according to all of their projections. 

The resolution date usually, though not always, falls immediately after the close date; it can also occur before the scheduled close date, as when predicting the timing of an event. From the start date to the close date, the time series of projections forms the "crowd" forecast, which aggregates over participants. Each Autocast entry includes the question, the start and close dates, the resolution criteria, the answer, and the time series of crowd forecasts.
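
As a rough illustration, one such record might be represented along the following lines. The field names below are hypothetical and chosen for readability; they are not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ForecastQuestion:
    """Illustrative record for one Autocast-style forecasting question (field names are hypothetical)."""
    question: str                      # e.g. "Will the measure pass before 2023?"
    qtype: str                         # "true/false", "multiple-choice", "numerical", or "date"
    start_date: date                   # when forecasting opens
    close_date: date                   # last day forecasts are accepted
    resolution: str                    # the eventual answer once the question resolves
    crowd_forecast: list = field(default_factory=list)  # aggregated crowd probabilities over time

q = ForecastQuestion(
    question="Will the measure pass before 2023?",
    qtype="true/false",
    start_date=date(2022, 1, 3),
    close_date=date(2022, 6, 30),
    resolution="yes",
    crowd_forecast=[0.35, 0.41, 0.58],   # one entry per day from start to close
)
```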

To determine whether retrieval-based methods could improve performance by selecting relevant articles from the news corpus, the researchers first evaluated the QA model UnifiedQA-v2 (Khashabi et al., 2022) and the text-to-text model T5 (Raffel et al., 2020) without retrieval on Autocast. These models are trained on a wide range of tasks and generalize well to many unseen language problems. The team reported results on classification questions using zero-shot prompting for UnifiedQA; for numerical questions, they reported random performance, since the UnifiedQA models were not trained on them, to allow comparison with the other baselines. T5, meanwhile, was fine-tuned for true/false and multiple-choice questions using its original output head, and an additional linear output head was added to produce numerical responses.

For retrieval, the team encoded articles obtained with the lexical search method BM25 (Robertson et al., 1994; Thakur et al., 2021), followed by cross-encoder reranking, using a Fusion-in-Decoder (FiD; Izacard and Grave, 2021) model. The frozen fine-tuned FiD model creates an embedding of each day's top news article between a question's open and close dates and feeds these embeddings to an autoregressive large language model such as GPT-2. The team explains that FiD can be seen as a simple extension of T5 to incorporate retrieval, since it uses T5 to encode the retrieved passages together with the question.
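
A minimal sketch of the BM25 retrieval step, using the open-source rank_bm25 package, is shown below. The corpus and question are toy stand-ins for the paper's dated news articles, and the cross-encoder reranking and FiD encoding stages are omitted.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Toy stand-in for the dated news corpus.
corpus = [
    "Central bank raises interest rates amid inflation concerns",
    "New vaccine approved for emergency use",
    "Parliament schedules national election for November",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

question = "When will the national election be held?"
top_articles = bm25.get_top_n(question.lower().split(), corpus, n=2)
print(top_articles)   # the articles most lexically relevant to the forecasting question
```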

The results show that the retrieval-based approach significantly outperforms UnifiedQA-v2 and T5 on Autocast, and its performance improves as the number of parameters rises. This suggests that larger models are better able to learn to extract relevant information from retrieved articles than smaller models.

Overall, the study demonstrates that retrieval from a sizable news corpus can be used to effectively train language models on past forecasting questions.

Read More: MIT Team Builds New algorithm to Label Every Pixel in Computer Vision Dataset

Although performance is still below the human expert baseline, it can be improved by scaling up the models and strengthening information retrieval. The team is confident that Autocast's approach to letting large language models forecast global events will have major practical advantages in a variety of applications.

The group also pointed out that the quantitative answers in the Autocast training set span several orders of magnitude, and that Autocast contains fewer than 1,000 numerical training questions. Calibrating predictions over values spanning several orders of magnitude from text inputs has not been addressed in prior work on calibration for language models. They therefore compiled IntervalQA, an additional dataset of numerical estimation problems, and offered metrics to gauge calibration. Its problems involve making calibrated predictions of fixed numerical quantities rather than forecasts.

The questions were taken from the following NLP datasets: SQuAD, 80K Hours Calibration (80k, 2013), Eighth Grade Arithmetic (Cobbe et al., 2021), TriviaQA (Joshi et al., 2017), Jeopardy, MATH (Hendrycks et al., 2021b), and MMLU (Hendrycks et al., 2021a). When these datasets were filtered for questions with numerical answers, the researchers obtained roughly 30,000 questions.

The Autocast dataset and code are available on the project’s GitHub. 

AI program PLATO can learn and think like human babies

According to a new study published in Nature Human Behaviour, researchers have created an AI program called PLATO that can learn and think like human babies.

PLATO, an acronym for Physics Learning through Auto-encoding and Tracking Objects, was trained using a series of coded videos created to depict the same rudimentary knowledge babies have in the first few months of their life. 

Extending developmental psychologists' work on infants, the researchers built and open-sourced a dataset of physical concepts. The concepts were introduced to the AI through clips of balls falling to the ground, disappearing behind other objects and reappearing, bouncing off each other, and so on.

Read More: Babies To Unlock The Next Generation Of AI, Research Says

The data set built by the researchers covered these five concepts that infants understand: 

  • permanence (objects will not suddenly disappear)
  • solidity (solid objects cannot pass through each other)
  • continuity (objects move consistently through space and time)
  • unchangeableness (object properties, such as shape and size, do not change)
  • directional inertia (objects move consistently with the principles of inertia)

When PLATO was shown videos of impossible scenarios that defied physics or the concepts it had learned, the software expressed surprise, or the AI equivalent of it. The AI could recognize that something had happened that broke the laws of physics.

Technically speaking, the researchers detected evidence of violation-of-expectation (VoE) signals, just like in infant studies, showing the AI understood the concepts it was taught.
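
Conceptually, a violation-of-expectation signal can be read as prediction error: a model predicts the next object states, and surprise spikes when the observed states diverge from the prediction. The sketch below illustrates that idea with a trivial constant-velocity predictor; it is not PLATO's actual architecture.

```python
import numpy as np

def predict_next(positions):
    """Trivial physics prior: assume constant velocity from the last two observed positions."""
    return positions[-1] + (positions[-1] - positions[-2])

def surprise(observed, predicted):
    """Violation-of-expectation as squared prediction error."""
    return float(np.sum((observed - predicted) ** 2))

track = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([2.0, 0.0])]
expected = predict_next(track)               # ball should continue to [3, 0]

print(surprise(np.array([3.0, 0.0]), expected))   # ~0: physically plausible, little surprise
print(surprise(np.array([3.0, 5.0]), expected))   # large: object "teleported", high surprise
```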

However, PLATO cannot precisely replicate a three-month-old baby yet. The AI showed little surprise when shown scenarios that did not involve any objects, or when the testing and training scenarios were quite similar.

Meta launches AI content verification tool Sphere 

The social media conglomerate and Facebook parent Meta has announced a new tool called Sphere, an AI program that taps the vast repository of information on the open web to provide a knowledge base for artificial intelligence and other systems. Wikipedia is the first known user of Sphere, deploying it to automatically scan entries and identify whether citations are strongly or weakly supported; in that sense, Sphere is primarily an AI content verification tool. Meta's research team has open-sourced Sphere, which is built on 134 million web pages.

The online encyclopedia Wikipedia has 6.5 million entries and adds some 17,000 articles each month on average. It is no surprise that the crowdsourced content requires constant editing, and while a team of editors oversees it, the task is tedious and grows by the day. It is not just the size but the mandate that makes the task daunting, considering how many students, educators, and others rely on Wikipedia as a repository of record.

The Wikimedia Foundation, which oversees Wikipedia, has been weighing new ways of leveraging all the data available on the open encyclopedia. Last month, it announced the Wikimedia Enterprise tier and its first two commercial customers, Google and Internet Archive. The companies, which use Wikipedia-based data for their commercial interests, will now have more formal service agreements in place.

Read More: Meta AI’s New AI Model Can Translates 200 Languages With Enhanced Quality

Meta continues to be weighed down by a poor public reputation stemming partly from accusations that it allows misinformation and toxic ideas to spread freely. Against that backdrop, launching a content verification tool like Sphere can be seen as a PR exercise. Still, if the tool works, it sends the message that people inside the organization are trying to push back against misinformation and propaganda.

The announcement that Meta will work with Wikipedia does not reference Wikimedia Enterprise directly. It is more generalized, referencing the addition of more tools for Wikipedia to ensure that its content is verified and accurate. However, this is something potential customers of the Wikimedia Enterprise service will want to know when considering paying for the service. 

Meta has confirmed that there is no financial contract in this deal; Wikipedia is not becoming a paying customer of Meta or vice versa. To train the Sphere model, Meta created a new dataset, WAFER, of 4 million Wikipedia citations, which is more intricate than any previously used for this sort of research. Meta also recently announced that Wikipedia editors are using a new AI-based language-translation tool it built, so a deeper collaboration is taking shape.

In a statement, Meta said the company's goal is to eventually build a platform that helps Wikipedia editors systematically spot citation or content issues at scale and fix them quickly. Sphere is still in the production and implementation phase, so for now editors will likely start by selecting the passages that need verifying.
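
As a rough illustration of the underlying idea, the toy sketch below flags a citation as weakly supported when no retrieved passage is semantically close to the claim. It uses the open-source sentence-transformers library with a small public embedding model and a made-up threshold; it is not Meta's actual Sphere pipeline.

```python
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")   # small public embedding model

claim = "The bridge was completed in 1932 after six years of construction."
passages = [
    "Construction of the bridge began in 1926 and finished in 1932.",
    "The river freezes over most winters.",
]

claim_vec = model.encode(claim, convert_to_tensor=True)
passage_vecs = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(claim_vec, passage_vecs)[0]   # cosine similarity of the claim to each passage

best = float(scores.max())
print("best supporting score:", round(best, 3))
print("citation looks", "supported" if best > 0.5 else "weakly supported")   # 0.5 is an arbitrary cutoff
```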

Colombia Unveils Guide for Implementing Blockchain for Public Projects

Colombia’s Ministry of Information Technologies and Communications, MinTIC, has produced a roadmap guide outlining the steps to using blockchain in state-level initiatives. The document explains blockchain and its fundamental components and outlines the standards that particular projects should adhere to, based on their own requirements.

The document guide, which is published in Spanish, is titled “Reference Guide for the Adoption and Implementation of Projects with Blockchain Technology for the Colombian State.” It discusses the fundamentals of blockchain and the several enterprises that might profit from using blockchain technology. Furthermore, it stipulates that such implementations will be governed by national laws. The document states, “A blockchain technology project in the public sector requires a detailed review of the requirements of the general challenge to be resolved and the usability of the distributed database depending on the project type.”

The guide also suggests that the country’s current legal system, which requires state entities to abide by what is expressly established in Colombian law, should apply to the implementation of this technology. 

The ministry also made reference to some earlier blockchain-related projects. In more detail, they include the partnership between the Bank of Colombia and R3 to use Corda for various settlement cases, as well as RITA, a network created by a national university that uses blockchain to secure and verify the authenticity of academic diplomas.

Recently, MinTIC announced a new blockchain use case aimed at helping people who need land ownership certificates. The project, recently completed by a third-party firm called Peersyst Technology, uses the XRP Ledger as its base to register and confirm the legitimacy of these certificates, and aims to speed up the issuing process by quickly distributing 100,000 certificates to landowners.

Also Read: JPMorgan identifies new use for Blockchain in Trading and Lending

This is not Colombia's first venture into the world of blockchain. Last year, the Central Bank of Colombia (Banco de la República), IDB Group, and Banco Davivienda piloted Colombia's first blockchain bond. According to a public announcement from Banco de la República, the bond would be issued, placed, traded, and settled over blockchain technology using smart contracts for the Colombian securities market.

AI-tocracy Dystopia: China Claims to Have Built AI Software to Test Loyalty to the Chinese Communist Party

In a rather shocking development, researchers from the Comprehensive National Science Center in Hefei, China, claimed to have created artificial intelligence software capable of "mind-reading." Additionally, they claimed in a now-deleted video that the system could be used to gauge a person's commitment to the Chinese Communist Party.

The announcement has been seen as a step toward "AI-tocracy" in China, stoking fears of living in a Big Brother dystopia of the kind depicted in George Orwell's novel 1984.

According to experts, the software monitors and analyzes the emotions, facial expressions, and brain waves of a person undergoing "thought and political education"; however, it is unclear exactly how the system measures allegiance or reads minds.

The institution boasted about its "mind-reading" software in a Weibo video titled "The Smart Political Education Bar," released on July 1. According to a translation by Radio Free Asia, the software will be used on party members to reinforce their determination to be grateful to the party, listen to the party, and follow the party.

The institution said the AI software watched a subject's behavior to assess how attentive he was to the party's thought education as he scrolled through online material promoting party ideology at a kiosk. The software would then evaluate the person's "emotional identification" and "learning attentiveness" to determine whether they met the loyalty bar or required further training. A thorough review of the biometric tool's performance is no longer feasible, because the research and video were taken down following a public outcry.

However, before the post was removed from its website, the Hefei Comprehensive National Science Center said in a statement that it had urged 43 party members on the research team to participate in party classes while being observed by the new software. The alleged biometric tool's release (and disappearance) comes weeks after a New York Times investigation exposed how China is using biometric mass surveillance on a considerably greater scale than previously thought.

China is notorious for spying on both its own citizens and those living in the territories it occupies, whether Taiwan or Hong Kong. The Chinese government has also repeatedly come under fire for using AI and facial recognition technology to track and police Uighurs, an ethnic minority group held by the Chinese Communist Party in "reeducation" camps. According to the Senate Foreign Relations Committee, the party has imprisoned between 1 million and 3 million Uighurs in such camps.

Also Read: The Rise of China in the Autonomous Vehicle Industry

Though this is certainly not the first time brainwave scanning has been applied to human beings, using it to assess CCP loyalty highlights how governments can exploit AI for their own interests. It also adds to the fears of Chinese citizens who already live under a myriad of sensors, and it is all the more striking because China already has user privacy laws backed by the AI ethics guidelines it announced last year.
