
Controversy sparked as AI-generated art piece wins top spot at a competition

A designer has sparked controversy after his AI-generated artwork won first place at a competition in the US, with critics calling the win a threat to human artists everywhere.

Jason M Allen and his Theatre D’opera Spatial image beat more than a dozen other entries in the digital arts/digitally manipulated photography category at the Colorado State Fair.

The winning artwork was created using Midjourney, an AI tool that turns lines of text into realistic graphics. The award included a $300 cash prize. AI tools for generating images have been around for years, with companies such as OpenAI and Google being notable investors in these text-to-image systems.

Read More: Thailand Rolls Out New Rules On Advertising For Crypto Companies 

However, several individuals have taken to social media to express their anger over the award. They argued that it took away from the hard work humans invest in physically creating unique art. One social media user said on Twitter that Midjourney could be fun, but it should never be used to cheat other artists.

Some expressed concern that it could endanger their livelihoods. In contrast, others said AI-generated art should have its own separate category in the future, something Allen also suggested as a way to avoid further controversy in an interview with the Pueblo Chieftain newspaper.

The two judges for the category were unaware that Allen’s submission was AI-generated, but they also said that it would not have changed their decision, as they were looking for how the art tells a story and evokes spirit.

Thailand rolls out new rules on advertising for crypto companies 

The Securities and Exchange Commission (SEC) of Thailand has rolled out new rules and regulations on advertising for crypto companies after the industry came under scrutiny from authorities.

The new rules include providing a balanced view of potential risk and returns and clearly showing investment risks in advertisements, the SEC said on Thursday. Information on advertising terms must also be given to regulators.

The SEC said that operators must give it details of ad spending, including the use of bloggers and influencers, along with terms and time frames. It added that operators have 30 days to comply with the new rules.

Read More: US Orders NVIDIA And ARM To Stop Selling AI Chips To China 

Crypto companies in Thailand advertise extensively on digital media, and billboards promoting the industry can also be seen throughout the capital, Bangkok. Recently, Thailand’s regulator has also imposed fines on crypto companies, with the operations of several firms hit by a worldwide slump in the value of digital currencies.

An executive from local operator Bitkub, Samret Wajanasathian, was fined $231,670.75 (8.5 million baht) this week for insider trading. Samret said he would appeal the decision.

Last month, Thailand’s fourth-largest lender by assets, SCB X PCL, scrapped a $500 million acquisition of Bitkub over regulatory concerns.

Rival operator Zipmex was fined 1.92 million baht on Wednesday for suspending withdrawals in July. The startup said it was closely assessing the terms of the penalty with its legal counsel.

Microsoft launches first cloud data center region in Qatar 

Microsoft has launched its very first cloud data center region in Qatar in collaboration with the Qatari Ministry of Communications and Information Technology. 

Microsoft’s 55th cloud region globally will join one of the biggest cloud infrastructures in the world, enhancing Qatar’s regional and global competitiveness and consolidating its digital transformation. It will also boost local growth by supporting economic diversification, fostering talent, and attracting foreign investment. 

The launch ceremony, titled Qatar Digital Journey to the Future, was attended by several ministers, senior officials in the commercial sectors, and Microsoft officials. The government has already passed a series of laws to encourage investment in the digital economy. 

Read More: US Orders NVIDIA And ARM To Stop Selling AI Chips To China

With the opening of the Microsoft center, local and international businesses will be able to host their cloud data in Qatar, benefiting from high levels of reliability and performance. Customers can now use Microsoft Azure to develop advanced apps in a secure cloud environment using artificial intelligence, data analytics, the Internet of Things, and hybrid cloud capabilities. 

Ralph Haupter, president of Microsoft Europe, Middle East, and Africa, said that customers in Qatar are using Microsoft cloud to innovate, achieve their goals, and accomplish a lot with less effort. He said Qatar’s first large-scale cloud data center would provide more opportunities to accelerate digital transformation. 

Several agencies in Qatar are using Microsoft cloud data centers to develop their digital capabilities, including the MCIT through its national programs, the TASMU platform, the Qatar digital government, and the Supreme Committee for Delivery and Legacy.

ProcTHOR by Allen Institute generates embodied AI environments 

Using large-scale training data in computer vision and natural language processing (NLP) models has strengthened results and enabled new findings. Recently deployed models such as CLIP, DALL-E, GPT-3, and Flamingo pre-train their neural architectures on massive task-agnostic data, which results in remarkable performance at downstream tasks, including zero- and few-shot settings. Lately, embodied AI simulators have been gaining attention, strengthened by physics, manipulators, object states, deformable objects, fluids, and real-sim counterparts. However, scaling them up to tens of thousands of scenes is challenging. Given this, researchers at the Allen Institute developed ProcTHOR for the procedural generation of embodied AI environments. The name is short for Procedural-THOR, where THOR stands for “The House Of inteRactions.”

What is Embodied AI?

Embodied AI is AI that controls a physical thing, such as a robot or an autonomous vehicle. It is an interdisciplinary field combining natural language processing, reinforcement learning, computer vision, physics-based simulation, navigation, and robotics. This approach aims to give machines a mind-body relationship similar to human embodiment: how our mind and body react to complex movements and situations. Embodied AI starts with embodied agents, virtual robots, and egocentric assistants training in realistic 3D simulation environments. It is typically driven by reinforcement learning, a type of machine learning in which the machine learns to take suitable actions to maximize reward in a given situation. Researchers in embodied AI are trying to move away from purely algorithm-led approaches and instead attempt to understand how biological systems work, derive principles of intelligent behavior, and apply those principles to artificial systems. 
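The reward-maximization loop described above can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms. The toy corridor world, reward values, and hyperparameters below are illustrative assumptions, not part of any particular embodied AI system:

```python
import random

# Toy 1-D "corridor" world: states 0..4, goal at state 4.
# Actions: 0 = move left, 1 = move right. Reward 1.0 only on reaching the goal.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3  # learning rate, discount, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
for _ in range(500):                  # training episodes from random start states
    s = random.randrange(GOAL)
    done = False
    while not done:
        # Epsilon-greedy action selection
        a = random.randrange(2) if random.random() < EPSILON else Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q(s, a) toward reward + discounted best future value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [q.index(max(q)) for q in Q[:GOAL]]
print(policy)  # greedy action in each non-goal state
```

After training, the greedy policy moves right in every state, i.e. toward the reward; an embodied agent in a 3D simulator learns in the same way, only with far richer states and actions.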

The embodiment hypothesis dates back to 2005, when Linda Smith proposed that intelligence emerges from interaction with an environment and is a result of sensorimotor activity. Even though the initial hypothesis was centered on psychology and cognitive science, recent research growth in embodied intelligence comes from computer vision. While the applications of embodied AI seem to have great potential, so far only a handful of manufacturers and startups have benefited from it. Some researchers believe embodied AI can be combined with existing Internet of Things (IoT) devices to take life-saving decisions on the spot within milliseconds. 

Read more: MIT Team Builds New algorithm to Label Every Pixel in Computer Vision Dataset

What is ProcTHOR?

ProcTHOR is a machine learning framework based on AI2-THOR, used for the procedural generation of embodied AI environments. AI2-THOR is an open-source interactive environment containing four types of scenes for embodied AI. The ProcTHOR framework can procedurally construct fully interactive, physics-enabled settings for embodied AI research. The PRIOR team at the Allen Institute for AI developed it and described it in the research paper ‘ProcTHOR: Large-Scale Embodied AI Using Procedural Generation.’ ProcTHOR aims to train robots within a virtual environment and then transfer the learning to real life. 

ProcTHOR allows random sampling of large datasets of varied, interactive, customizable, and high-performing virtual environments to train and evaluate embodied agents. For example, given a room specification, say a three-bedroom (3BHK) house, ProcTHOR can generate many different floor plans that meet the requirement. The environments in ProcTHOR are completely interactive and support navigation, object manipulation, and multi-agent interaction.

This framework is a state-of-the-art application of machine learning that extends AI2-THOR, inheriting its huge asset library, robotic agents, and precise physics simulation. Pre-training with ProcTHOR improves downstream performance and yields strong zero-shot performance. Zero-shot learning is a significant technique in machine learning in which models classify objects or data for which they have seen very few to almost no labeled examples. ProcTHOR has five key characteristics:

  • Diversity: One can create a wide variety of rich environments with ProcTHOR. The framework provides many options for every embodied AI task, like diversity of floor plans, assets, materials, object placements, and lighting. 
  • Interactivity: The ability to interact with objects in the environment is fundamental to embodied AI tasks. ProcTHOR has agents with arms for manipulating objects. 
  • Customizability: ProcTHOR gives users complete power of customization, from rooms to material and lighting specifications. 
  • Scale: ProcTHOR provides 16 different scene specifications and 18 semantic asset groups, which seed the generation process and can yield a practically unlimited number of assets and scenes. Each environment/house created with ProcTHOR can therefore be scaled to best match the requirements.
  • Efficiency: ProcTHOR represents scenes as JSON files and loads them into AI2-THOR at runtime, keeping the memory overhead of storing houses remarkably low. Furthermore, ProcTHOR delivers high framerates for training embodied AI models, and the scene generation process is automatic and fast.
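As a rough illustration of the Efficiency point, a procedurally generated house can be held as a plain JSON-serializable structure and handed to the simulator at load time. The field names below are invented for illustration and are not the actual ProcTHOR scene schema:

```python
import json

# A minimal, hypothetical house specification in the spirit of ProcTHOR's
# JSON scene format. Field names here are illustrative assumptions, not the
# actual ProcTHOR schema.
house = {
    "rooms": [
        {"id": "kitchen|0", "roomType": "Kitchen",
         "floorPolygon": [[0, 0], [4, 0], [4, 3], [0, 3]]},
        {"id": "bedroom|1", "roomType": "Bedroom",
         "floorPolygon": [[4, 0], [7, 0], [7, 3], [4, 3]]},
    ],
    "objects": [
        {"assetId": "Fridge_1", "room": "kitchen|0", "position": [0.5, 0.0, 0.5]},
        {"assetId": "Bed_3", "room": "bedroom|1", "position": [5.5, 0.0, 1.5]},
    ],
}

# Because the scene is plain JSON, it can be stored, diffed, and streamed
# cheaply, then instantiated by the simulator at runtime.
blob = json.dumps(house)
restored = json.loads(blob)
print(len(restored["rooms"]), len(restored["objects"]))
```

Keeping scenes as lightweight data rather than baked 3D assets is what lets thousands of houses be sampled and loaded with little memory overhead.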

ProcTHOR-10k 

ProcTHOR-10k is a sample set of 10,000 fully interactive houses obtained through ProcTHOR’s procedural generation process. In addition, it contains sets of 1,000 validation and 1,000 testing houses for evaluation. The assets are split across train, validation, and test, totaling 1,633 unique assets across 108 asset types. 

There are two essential requirements for large-scale training in an embodied AI simulator:

Scene statistics: The houses in ProcTHOR-10k are generated by applying 16 different room specifications. The room specification allows varying the distribution of the size and complexity of houses. ProcTHOR covers a broader spectrum of scenes than other embodied AI simulators, including AI2-iTHOR, RoboTHOR, Gibson, and HM3D. 

Rendering speed: High rendering speed is essential for large-scale training because training algorithms require millions of iterations to converge. GPU experiments were performed to measure throughput by recording how many rendering processes could be distributed among the GPUs: 15 processes for a 1-GPU experiment and 120 processes for an 8-GPU experiment. A comparison between ProcTHOR, iTHOR, and RoboTHOR concluded that ProcTHOR provides higher framerates and renders fast enough to train large models in a reasonable amount of time.

Read more: Jio Haptik uses Microsoft Azure Cognitive Services to improve Hindi conversational AI

Training and scalability in ProcTHOR

Earlier methods of building embodied AI environments demanded a lot of work from 3D designers, who had to create 3D elements, organize them in suitable configurations inside sizable spaces, and create the proper textures and lighting for these scenes. Alternatively, specialized cameras were moved through various real-world scenarios, and the resulting photos were pieced together to produce 3D reconstructions of the scenes. With these strategies, it was impossible to scale scene repositories up by several orders of magnitude. ProcTHOR can handle orders of magnitude more scenes than current simulators because it generates arbitrarily large collections of settings. Additionally, it supports dynamic material randomization, which randomizes particular asset colors and materials each time an environment is loaded into memory for training.

Training inside ProcTHOR is a multi-stage process, including room specification, connecting rooms, lighting, object placement, and more. The paper mentioned above demonstrates the potential of ProcTHOR with the ProcTHOR-10k sample of 10,000 generated houses and a simple neural network. An ablation analysis shows the advantages of scaling up from 10 to 100 to 1K and then to 10K scenes, and it suggests that even more benefits could be obtained by using ProcTHOR to create still larger settings. Agents trained on ProcTHOR-10K with minimal neural architectures (no depth sensor, only RGB channels, no explicit mapping, and no human task supervision) produce state-of-the-art models for various navigation and interaction benchmarks. With no fine-tuning on the downstream benchmarks, the paper also shows strong zero-shot performance, frequently outperforming earlier state-of-the-art systems that access the downstream training data. The code used in the research will be made publicly available shortly; until then, ProcTHOR-10K has been released in a Google Colab notebook.
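The zero-shot idea, performing a task with no labeled examples of the target classes, can be illustrated with a toy attribute-matching classifier. The attribute vectors and class names below are made up for illustration and have nothing to do with ProcTHOR’s actual benchmarks:

```python
import math

# Toy illustration of zero-shot classification: an input is matched against
# *descriptions* of classes never seen in training, here via cosine similarity.
# The attribute vectors and class names are invented for this sketch.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hand-built "attribute" vectors: [has_stripes, has_hooves, domestic]
class_descriptions = {
    "zebra": [1.0, 1.0, 0.0],   # never seen in training, known only by attributes
    "horse": [0.0, 1.0, 1.0],
}
observed = [0.9, 0.8, 0.1]      # features extracted from a new input

best = max(class_descriptions, key=lambda c: cosine(observed, class_descriptions[c]))
print(best)
```

The same principle scales up: an agent pre-trained on diverse procedural scenes can act in a benchmark environment it was never fine-tuned on, because the benchmark is "described" well enough by what the agent already knows.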

Among frameworks for building embodied AI environments, ProcTHOR has made a name for itself because of its procedural approach to generation. Furthermore, the dataset produced for ProcTHOR enables the training of simulated embodied agents in far more diverse environments.

Seoul Green-Lights Beta Test Run of Metaverse Seoul project

The Seoul Metropolitan Government (SMG) announced last year that it would be the first large city to enter the metaverse. The aim is to establish a virtual communication ecosystem for all facets of the city government, now known as “Metaverse Seoul,” covering economic, cultural, tourist, educational, and civic services in three stages beginning this year. 

On Wednesday, Seoul had a closed beta test run of the first stage of its metaverse project Metaverse Seoul. This “Introduction” phase will be followed by “Expansion” (2023 to 2024) and finally, “Settlement” (2025 to 2026).

The test run included a virtual recreation of Seoul City Hall and Seoul Plaza, where people could engage in interactive activities and games. It also featured a virtual counseling room where young people could meet and talk about their issues with mentors in the metaverse. Officials believe this will assuage worries about feeling awkward when talking to a possible mentor in person.

About 3,200 users of Seoul Learn, the city’s online learning platform, signed up to take part in the test run, along with professionals from the Seoul IT Tech Governance Group. Seoul will work on improvements based on the suggestions made by these participants.

The first phase of Metaverse Seoul is anticipated to go live by the end of November. Following that, a multitude of facilities and services, like the “Virtual Mayor’s Office,” “Seoul FinTech Lab,” “Invest Seoul,” and “Seoul Campus Town,” would be gradually introduced. Through a designated “Virtual Tourist Zone,” the metaverse will also provide virtual copies of well-known tourist sites, including Gwanghwamun Plaza, Deoksugung Palace, and Namdaemun Market. Visitors could also explore digital reconstructions of lost historical places like the Donuimun gate, which was razed during the Japanese colonial period.

Read More: UAE’s AI Minister Demands Laws and Actions Against Crimes in Metaverse

As part of the city’s Seoul Vision 2030 plan, the South Korean capital has invested KRW 3.9 billion (approximately €2.8 million) in the project. The mayor, Oh Se-hoon, stated that the project intends to make Seoul a city of coexistence, a worldwide leader, a secure city, and a future emotional city. Residents of Seoul will soon be able to don VR headsets to attend mass gathering events, speak with avatar authorities, and see authentically recreated landmarks.

Other initiatives that have been mentioned include the Seoul Lantern Festival, which will be hosted in the metaverse beginning in 2023 and will be accessible worldwide. The city also announced that it will utilize the platform to produce services for the socially disadvantaged, such as safety and convenience content for individuals with disabilities, and that it will expand the platform to all municipal government sectors to increase efficiency.

Seoul is only one of a rising number of cities globally exploring ways to use metaverse technology to manage public services better, engage residents, and increase engagement with companies and downtown areas. Other cities on this growing list include Dubai and Santa Monica. In July, Dubai unveiled its own metaverse strategy, which seeks to make the city a global hub for the metaverse community and one of the top ten metaverse economies in the world.

If used effectively, the metaverse offers a real chance to enhance municipal services and citizens’ quality of life. To accomplish this, urban authorities must be at the core of metaverse city initiatives.

Reddit Acquires Spiketrap to Improve Ads Targeting

In recent years, Reddit has continually put effort into strengthening its ad targeting and optimization capabilities, this time with the acquisition of audience contextualization company Spiketrap. Though the specifics of the deal were not disclosed, Reddit has said that Spiketrap’s AI-powered contextual analysis tools will assist it in areas such as ad quality scoring and will enhance the prediction models that enable auto-bidding. In other words, this acquisition will help Reddit better understand its graph and match relevant ads to it, resulting in improved ad performance.

Reddit wants to make it simpler for advertisers to target relevant audiences based on interests; therefore, the agreement represents the company’s expanding commitment to its advertising business. The acquisition continues Reddit’s record of purchasing AI-powered businesses like MeaningCloud in July and Spell in June. The MeaningCloud platform is a natural language processing platform that enables developers to create apps that can extract meaning from written information, such as text on Reddit’s forums. Spell is a SaaS AI platform that enables technology teams to construct and execute machine learning algorithm experiments at scale.

In today’s millennial- and Gen Z-dominated internet culture, people often use emoticons, incomplete sentences, memes, inside jokes, and other unstructured language when conversing with one another. Hence, measuring how people engage with online content is crucial, and simple statistics based on what they click, see, or vote for are not enough. 

Contextual analysis offers deeper insights than simple quantitative statistics such as engagement counts or link clicks can provide. For instance, by utilizing metrics like audience sentiment, impact ratio, and conversation trends, the data collected from a campaign by Spiketrap’s proprietary AI, Clair AI, may give an advertiser more insight into not only how but also why people are talking about a campaign. 

Clair AI is known for its ability to glean operational insights from unstructured data instantly. Reddit will be able to track popular articles in real time using the Clair AI system, which will then map those stories against online community engagement to trace information flows more accurately. 

Read More: Reddit Launches New NFT Avatar Marketplace for its users

Another Spiketrap product called Emotion AI can identify the tone of a message, such as enthusiasm, sarcasm, or toxicity. Reddit has struggled with toxicity over the years, frequently having to ban harassing and nasty subreddits as well as those that advertisers would want to avoid.

Spiketrap is a California-based company that was established in 2016 and provides specialized audience analytics and media solutions to businesses in the gaming and media sectors.

Reddit reports that the Spiketrap team has joined the organization and will lead several initiatives for its advertising business in the future. Reddit is presently concentrating on fusing Spiketrap’s tools, technology, and resources with its own; the company emphasizes that a plan for its commercial operations will be part of this transition.

Baidu unveils its first superconducting quantum computer Qianshi

The Chinese company Baidu has unveiled its first superconducting quantum computer, Qianshi, which it touts as the world’s first all-platform quantum hardware-software integration solution. 

This is the company’s first superconducting quantum computer combining its hardware platform and its software stack, called Liang Xi, on an industrial scale. Superconductivity is the ability of a material to conduct electricity without resistance, producing no heat and wasting no energy. Several businesses have introduced quantum systems using superconducting components in recent years. The software stack Baidu has developed includes:

  • A quantum machine learning toolkit called Paddle Quantum
  • A quantum error processing toolkit dubbed QEP
  • Several other components

The 10-qubit quantum computer, which integrates various practical quantum applications, was shown at the Quantum Create 2022 conference in Beijing. These applications include quantum algorithms for simulating protein folding and designing new materials for novel lithium batteries. The company further emphasizes that the performance of other commercially accessible quantum computers is currently limited to 7 qubits. This innovation results from four years of extensive research and development by Baidu’s Institute for Quantum Computing. The division has already begun developing Qianshi’s successor, which Baidu claims will have a 36-qubit superconducting quantum processor with couplers.
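To give a flavor of what such a hardware-software stack computes, the sketch below simulates a two-qubit circuit’s statevector in plain Python, preparing a Bell state. This is a generic illustration of qubit simulation, not Baidu’s Liang Xi or Paddle Quantum API:

```python
import math

# A minimal two-qubit statevector simulation, illustrating the kind of
# computation a quantum software stack performs for the user.
# State order: amplitudes for |00>, |01>, |10>, |11>.
state = [1.0 + 0j, 0j, 0j, 0j]  # start in |00>

def apply_hadamard_q0(s):
    """Hadamard on the first qubit (the left bit in |q0 q1>)."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def apply_cnot(s):
    """CNOT with qubit 0 as control, qubit 1 as target: swaps |10> and |11>."""
    return [s[0], s[1], s[3], s[2]]

state = apply_cnot(apply_hadamard_q0(state))
probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)  # Bell state: equal probability of |00> and |11>
```

Simulating n qubits this way requires tracking 2^n amplitudes, which is why platforms like Qianshi pair classical software stacks with real superconducting hardware as qubit counts grow.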

Read More: NIST announces four post-quantum cryptography algorithms

According to Runyao Duan, director of the Institute for Quantum Computing at Baidu Research, “with Qianshi and Liang Xi, users can create quantum algorithms and use quantum computing power without developing their quantum hardware, control systems, or programming languages.” Duan claims that with the help of Baidu’s inventions, anybody with a smartphone may now access quantum computing from anywhere at any time. Baidu’s platform is also readily compatible with a wide range of quantum processors, allowing for “plug-and-play” access.

US orders NVIDIA and ARM to stop selling AI chips to China

The ongoing heated competition between the US and China to dominate the tech industry took a surprising turn on Wednesday. Chip manufacturer NVIDIA said that US officials have asked it to stop shipping two of its top computing processors for artificial intelligence research to China, a sign of rising tensions in the global semiconductor conflict.

In order to address the possibility that the covered products could be utilized in, or diverted to, a military end use, the US government has imposed a new licensing prerequisite for any future export to China (including Hong Kong) and Russia, according to a Securities and Exchange Commission filing. With tensions with Beijing elevated after visits to Taiwan by House Speaker Nancy Pelosi and other US leaders this month, the new regulation represents a substantial uptick in the Biden administration’s attempts to avert future supply chain problems and compete with the Chinese government.

Without the supply of American chips from firms like NVIDIA and its rival Advanced Micro Devices (AMD), Chinese enterprises will be unable to cost-effectively perform advanced computing tasks such as voice and image recognition and processing satellite imagery for weapons. The new license requirement, according to NVIDIA, will affect the export of systems like DGX that use its A100 and H100 processors, which are designed to accelerate machine learning workloads. Furthermore, it may jeopardize the completion of the H100, NVIDIA’s flagship processor announced this year. 

According to the filing, the licensing obligation also applies to any future NVIDIA integrated circuit that achieves peak performance and chip-to-chip I/O performance levels that are equal to or greater than those of the A100, as well as any system that employs those circuits.
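The licensing rule as described can be read as a simple two-condition threshold test. The reference numbers below are illustrative placeholders, not figures from the filing or the regulation:

```python
# Sketch of the licensing rule described in NVIDIA's filing: a future chip
# needs a license if BOTH its peak performance AND its chip-to-chip I/O
# bandwidth meet or exceed the A100's levels. The threshold values below
# are assumed placeholders for illustration only.
A100_PEAK_TFLOPS = 312.0   # illustrative reference value
A100_IO_GBPS = 600.0       # illustrative reference value

def needs_export_license(peak_tflops: float, io_gbps: float) -> bool:
    """True when a hypothetical chip meets or exceeds both A100-level thresholds."""
    return peak_tflops >= A100_PEAK_TFLOPS and io_gbps >= A100_IO_GBPS

print(needs_export_license(312.0, 600.0))  # at the A100's level: requires a license
print(needs_export_license(200.0, 900.0))  # fast I/O alone: does not
```

The conjunction matters: a chip that is strong on only one axis falls outside the rule, which is why cut-down variants can remain exportable.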

If clients choose not to buy the company’s alternative product offerings, or if the US government delays granting licenses or refuses licenses to important customers, NVIDIA stands to lose US$400 million in sales to China.

According to an AMD representative, new licensing requirements would prevent its MI250 artificial intelligence chips from being sold to China. On the plus side, the MI100 chips are not likely to be affected, and the company does not anticipate the new regulation will have a material impact on business.

Read More: From SIGGRAPH to Jetson AGX Orin Production Modules: Latest Announcements by NVIDIA

The US Department of Commerce declined to disclose the new criteria it has established for AI chips that can no longer be exported to China, but it did state that it is evaluating its China-related rules and procedures to keep sophisticated technologies out of the wrong hands.

The US is pursuing a comprehensive approach to implementing the additional measures needed, related to technologies, end uses, and end users, to preserve US national security and foreign policy objectives, a spokeswoman told Reuters.

After the news broke, NVIDIA’s shares dropped 6.6% after hours, while AMD’s fell 3.7%. According to Stacy Rasgon, a financial analyst at Bernstein, the announcement indicated that around 10% of NVIDIA’s data center sales, which investors have closely followed in recent years, came from China, and that the sales decline was probably manageable for NVIDIA.

The chip restriction comes after NVIDIA predicted last week that the current quarter would see a significant decline in sales due to an underperforming gaming market. According to NVIDIA, third-quarter revenue would total US$5.90 billion, a 17% decrease from last year. NVIDIA revealed US$6.7 billion in sales for the second quarter, which was much lower than estimates. However, it reported revenue from its data center business rose by 61% over the same period last year.

This is not the first time the United States has sought to restrict chip sales to Chinese vendors. In 2020, the government of former President Donald Trump prohibited suppliers from providing the telecom giant Huawei with processors built with American technology without a special license. This pushed Huawei toward chip makers based outside the US to meet its needs. 

China’s government has been investing heavily in its domestic chip industry over the past few years in an effort to support businesses that can compete with US, South Korean, and Japanese industry behemoths. However, the US government’s restriction will put a dent in its chip supply. The nation has already been struggling to meet new demand amid the semiconductor crisis worsened by the COVID-19 pandemic. Early indications of shifting demand caused some computer companies to stockpile chips and place advance orders as the pandemic worsened, leaving other companies scrambling for components. Meanwhile, due to the rapid changes in demand brought on by the pandemic, the costs of transporting cargo containers and of air freight worldwide have skyrocketed. 

The ban met a negative response from China. Wang Wenbin, a representative of the Chinese foreign ministry, stated on Thursday that the US was trying to impose a technology blockade on China. He said the restriction demonstrated America’s efforts to uphold its technological hegemony.

Shu Jueting, a spokesperson for the Chinese commerce ministry, said that the move endangered the stability of the world’s supply and industrial chains and the legitimate rights and interests of Chinese businesses. According to Shu, this action would interfere with supply networks and the global economic recovery, and the US side should immediately halt its misguided measures, treat businesses from all nations fairly, including those from China, and do more to support global economic stability.

For now, if China fails to find direct local replacements, it will be compelled to rely on lower-end processor alternatives from NVIDIA that were not prohibited. Even though it would be more expensive, this effort to replicate the lost processing capability would not operate at the same speeds.

Jio Haptik uses Microsoft Azure Cognitive Services to improve Hindi conversational AI

One of the leading conversational AI companies, Jio Haptik, is using Microsoft Azure Cognitive Services to enhance existing Hindi conversational AI models’ accuracy. 

This popular AI translation model that enables end-to-end conversations in Hindi, English, and Hinglish is integral to allowing users to interact with the Intelligent Virtual Assistants at Jio Mobility. 

The one-of-its-kind model has helped the Jio Mobility team provide support across multiple categories, increasing local-language queries by 2.5 times and reducing human interventions by 80%.

Read More: US Officials Order Nvidia To Stop Exporting Computing Chips For AI Work To China

With Jio Mobility, Haptik provides chatbots across three platforms: WhatsApp Business account, MyJio App, and Jio website. This solution has assisted Jio Mobility in engaging in 2M+ conversations from customers in Hindi and Hinglish with an 80% decrease in human interventions and a 2.5 times increase in localized queries.

Microsoft, in collaboration with Jio Haptik, facilitates vernacular support for users by leveraging transfer learning from multilingual and monolingual data to create this unique language model. Their unified efforts have helped Jio Mobility’s customer care feature a one-of-a-kind conversational service in Hindi and Hinglish.

Basic translation models have existed in the market for a long time. Still, they are not equipped to handle the grammatical errors and unstructured sentences that conversational agents must cope with to perform well. This one-stop solution by Azure Cognitive Services and Jio Haptik, however, manages Hindi in both Roman script (Hinglish) and Devanagari, allowing users to switch between languages while maintaining high accuracy for short sentences and domain-specific conversations.
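One sub-problem such a system must solve is routing an incoming message by script before any translation happens. The heuristic below is a toy sketch for illustration, not the Azure Cognitive Services or Haptik implementation:

```python
# Toy script detector: decide whether a message is written in Devanagari or
# in romanized Hindi / English (Hinglish and English both use Latin letters,
# so they would then need a second, model-based classification step).
# This heuristic is an illustrative assumption, not a production approach.
def script_of(text: str) -> str:
    # Count codepoints in the Devanagari Unicode block (U+0900..U+097F)
    dev = sum(1 for ch in text if "\u0900" <= ch <= "\u097F")
    # Count ASCII letters (Roman script)
    rom = sum(1 for ch in text if ch.isascii() and ch.isalpha())
    if dev == 0 and rom == 0:
        return "unknown"
    return "devanagari" if dev >= rom else "roman"

print(script_of("नमस्ते, कैसे हैं आप?"))      # devanagari
print(script_of("namaste, kaise hain aap?"))  # roman
```

A real pipeline would follow this routing step with transliteration and translation models tuned for code-switched text.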

US officials order Nvidia to stop exporting computing chips for AI work to China

Chip designer Nvidia has said that US officials have ordered it to stop exporting two top computing chips required for artificial intelligence (AI) work to China. 

The move could cripple the ability of Chinese firms to carry out advanced work like image recognition and hamper Nvidia’s expectation of generating $400 million in sales in China this quarter. Nvidia shares fell about 4% after hours. 

The company said that the ban, which affects its H100 and A100 chips designed to speed up machine learning tasks, could interfere with the completion of development of the H100, the flagship chip Nvidia announced this year. AMD shares were down 2% in after-hours trading. 

Read More: Woman Accidentally Receives $10.5 Million From Crypto.Com

AMD had received new license requirements to stop its MI250 artificial intelligence chips from being exported to China, but it believes its MI100 chips will not be affected. AMD said it does not believe the new rules will negatively impact its business.

Nvidia said US officials told it the new rule would address the risk that the covered products might be used in, or diverted to, a military end use or by a military end user in China.

The announcement signals a significant escalation of the US crackdown on China’s technological capabilities as tensions bubble over the fate of Taiwan, where chips for Nvidia and almost every other major chip firm are manufactured. 

Without American chips from companies like Nvidia and its rival Advanced Micro Devices, Chinese organizations will be unable to cost-effectively carry out the kind of advanced computing used for image and speech recognition, among many other tasks.
