
Ola Electric to venture into electric bikes 


After achieving remarkable milestones in the e-scooter arena in recent months, Ola Electric is now venturing into the production of electric bikes. Bhavish Aggarwal, the founder and CEO of Ola Electric, hinted at the move in a tweet.

Exact specification and production details for the Ola Electric bikes are not yet available and are expected to be released soon. Judging by the replies to the CEO's tweet, however, people seem excited by the news.

Ola's EV startup Ola Electric has witnessed a tremendous response to its recently launched scooters, and the company says it has built 100,000 vehicles within a period of 10 months. With three products, the S1 Air, S1, and S1 Pro, the company posted record sales during the last festive season.

Read More: IIT Kanpur Offers A Master’s Degree Course In Cybersecurity

Talking about this landmark, Bhavish said, "Since beginning our journey to electrify India, we have unleashed the potential of EVs in our nation, providing customers with a product and experience that is far superior to anything a petrol alternative can provide. This accomplishment is just the start."

Notably, Ola has overtaken Hero Electric and now shows the best month-on-month sales growth in the segment. According to sources, sales have grown by 60% every month.


Neural Acoustic Fields: MIT-IBM Watson team uses acoustic information to build ML model


A machine learning model called Neural Acoustic Fields (NAFs) has been developed by researchers from MIT and the IBM Watson AI Lab to forecast what sounds a listener will hear in various 3D settings. The machine-learning model can mimic what a listener would hear at different locations by simulating how any sound in a room would travel across the space using spatial acoustic information.

The neural acoustic fields system can understand the underlying 3D geometry of a room from sound recordings by precisely modeling the acoustics of a scene. The researchers may utilize the acoustic data collected by their system to create realistic visual reconstructions of a space, akin to how people use sound to infer the elements of their physical surroundings.

This approach might aid artificial intelligence agents in better comprehending their surroundings in addition to its potential uses in virtual and augmented reality. According to Yilun Du, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and co-author of a paper describing the model, an underwater exploration robot could sense things that are farther away by simulating the acoustic properties of the sound in its environment.

Du admits that most researchers have so far concentrated solely on simulating vision. These models often combine a neural renderer with an implicit representation that has been trained in order to capture and render visuals of a scene simultaneously. By leveraging the multiview consistency between visual observations, these methods can extrapolate images of the same scene from unique viewpoints. However, because humans have a multimodal perception, sound is just as essential as vision, which opens up an attractive research area on improving how sound is used to describe the environment.

Previous studies on capturing a location's acoustics called for carefully planning and designing its acoustic function, an approach that cannot be applied to arbitrary scenes. According to the study report from MIT, despite recent improvements in learned implicit functions that have produced ever better visual representations of the world, learning spatial auditory representations has not made similar strides. A variant of machine-learning model known as an implicit neural representation has been employed in computer vision research to produce continuous, smooth reconstructions of 3D scenes from images. These models make use of neural networks, composed of layers of linked nodes, or neurons, that process data to perform a task.

Read More: Mechanical Neural Network: Architectured Material that adapts to changing conditions

The MIT researchers used a similar model to depict how sound continuously permeates a scene. However, that attempt failed!

This inspired the team to develop neural acoustic fields, an implicit model that captures how sounds travel in a spatial environment. Neural acoustic fields encode an impulse response in the Fourier frequency domain to capture the complex signal structure of impulse responses.

NAFs can be used to create or enhance existing feature maps of rooms. (Credit: Luo et al)
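
For intuition, here is a minimal Python sketch (not the authors' code) of what a Fourier-domain encoding of an impulse response looks like; the exponentially decaying noise stands in for a measured room response, and the sample rate and window size are assumed:

```python
# Minimal sketch: represent a room impulse response as a log-magnitude
# short-time Fourier transform, the kind of compact frequency-domain
# encoding described above. The IR here is synthetic, for illustration only.
import numpy as np
from scipy import signal

fs = 16000                                    # sample rate in Hz (assumed)
t = np.arange(fs) / fs                        # 1 second of samples
ir = np.random.randn(fs) * np.exp(-6.0 * t)   # toy exponentially decaying IR

# Each STFT column is the spectrum of a short slice of the response.
freqs, frames, Z = signal.stft(ir, fs=fs, nperseg=256)
log_mag = np.log(np.abs(Z) + 1e-8)            # magnitude-only encoding
print(log_mag.shape)                          # (frequency bins, time frames)
```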

Modeling acoustic propagation in a scene as a linear time-invariant system enables neural acoustic fields to continuously map every emitter and listener location pair to a neural impulse response function that can be applied to any sound. Working with sound also let the team sidestep the vision models' reliance on photometric consistency, the phenomenon where an object looks roughly the same from any vantage point, which does not hold for sound. Instead, the neural acoustic fields method exploits the reciprocal nature of sound (swapping the locations of the source and the listener does not change how the sound is perceived) as well as the influence of local features such as furniture or carpeting on sound as it travels and bounces. The model randomly samples source and listener locations and learns from a grid of objects and architectural features.

The NAF system is based on techniques originally developed for computer vision systems. (Credit: Luo et al)
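
Conceptually, the trained field acts like a function that can be queried with an emitter position, a listener position, and a frequency-time coordinate, returning a value of the impulse response. The toy sketch below shows that interface only; the architecture and input encoding are invented, and the real model also conditions on learned local geometric features:

```python
# Toy stand-in for a NAF-style query function (architecture invented for
# illustration): maps (emitter xyz, listener xyz, frequency, time) to one
# log-magnitude value of the impulse response at that query point.
import torch
import torch.nn as nn

naf = nn.Sequential(
    nn.Linear(3 + 3 + 2, 256), nn.ReLU(),  # emitter, listener, (freq, time)
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),                     # predicted log-magnitude
)

emitter = torch.tensor([1.0, 0.5, 1.2])    # source position in meters
listener = torch.tensor([3.0, 2.0, 1.5])   # listener position in meters
ft = torch.tensor([0.25, 0.10])            # normalized (frequency, time) query
print(naf(torch.cat([emitter, listener, ft])))  # untrained, illustrative only
```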

Researchers feed the final neural acoustic fields model with visual information about an acoustic setting as well as spectrograms showing what a piece of audio would sound like if the emitter and listener were positioned at specific points in the room. The model then forecasts what that audio would sound like at any location a listener might move to.

The machine learning model produces an impulse response that depicts how a sound changes as it propagates through the environment. The researchers then apply this impulse response to various sounds to hear how they would change as a person moves around a room.
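
Because propagation is treated as a linear time-invariant system, "applying" an impulse response to a sound is simply a convolution. A minimal sketch, with synthetic placeholders standing in for the source audio and the model's predicted response:

```python
# Render what a listener would hear by convolving dry source audio with
# the impulse response predicted for that position; both signals here are
# synthetic placeholders.
import numpy as np
from scipy.signal import fftconvolve

fs = 16000
dry = np.random.randn(fs)                         # 1 s of source audio
decay = np.exp(-8.0 * np.arange(fs // 2) / fs)
ir = np.random.randn(fs // 2) * decay             # placeholder impulse response

wet = fftconvolve(dry, ir)[: len(dry)]            # sound at the listener
```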

The researchers found that their methodology consistently produced more precise sound models when compared to other techniques for modeling acoustic data. Their model also had a far higher degree of generalization to new locations in a scene than previous approaches since it incorporated local geometric information.

Additionally, researchers discovered that incorporating the acoustic knowledge their model picks up into a computer vision model can improve the visual reconstruction of the scene. In other words, a neural acoustic fields model could be used backwards to enhance or even build a visual map from scratch.

The researchers intend to continue improving the model so that it can generalize to new scenes. They also plan to apply this method to more complex impulse responses and larger scenes, such as entire buildings or even a whole town or city.

Chuang Gan, a principal research staff member at the MIT-IBM Watson AI Lab, believes this new method may present novel opportunities to develop multimodal immersive experiences for metaverse applications.

The research team also mentions the limitations of their neural acoustic fields model. Like previous work on encoding spatial acoustic fields, their method does not model phase. A magnitude-only approximation may be sufficient for many tasks, but it cannot faithfully reproduce spatial acoustic effects that depend on phase. The NAF model also requires a precomputed acoustic field, a prerequisite shared with earlier acoustic field studies. Though this is not a drawback for many applications, the researchers believe the ability to generalize from very small training samples could create new opportunities. Finally, like earlier research using implicit neural representations, the model is fitted to a particular scene; whether it is possible to forecast the acoustic field of new scenes remains unclear.


You may be ‘watched’ at Qatar FIFA World Cup

Image Credit: CFP

For the upcoming football World Cup in Qatar, FIFA revealed in August that 15,000 cameras with facial recognition technology would be used to monitor the whole event, including spectators. According to the organizers' chief technology officer, Niyas Abdulrahiman, the cameras monitoring football fans across eight stadiums and on the streets of Doha would usher in a new norm, a new trend in venue management; he called it Qatar's gift to the world of sport.

The facial recognition-based surveillance is a facet of Qatar’s attempts to monitor security risks, including terrorism and hooliganism, during the competition, which is anticipated to draw over 1 million spectators. The eight stadiums where the matches will be played will be under the technological command and control of the Aspire Command and Control center, which will also manage the surveillance network. All neighboring metro trains and buses would be monitored by the control center. 

The College of Engineering at Qatar University (QU), in partnership with the Supreme Committee for Delivery and Legacy, has created an intelligent crowd management and control system with components for crowd counting, face recognition, and abnormal event detection. Using data from drones, the university research team first created a crowd-counting method that uses dilated and scaled neural networks to extract useful features and estimate crowd densities. The team has also worked on a face identification system that uses a multitask convolutional neural network to handle faces in various poses; for this, a cascade structure integrates a pose estimation algorithm with a face identification module. Left-side, frontal, and right-side captures of faces served as the training data for the CNN-based pose estimation method. To eliminate irrelevant information (e.g., background content), a skin-based face segmentation approach centered on structure-texture decomposition and a color-invariant description has been developed.
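
As a rough illustration of the dilated-network idea (this is not the QU team's code; the layer sizes and structure are invented), a crowd-counting model typically predicts a density map whose integral over the image gives the estimated head count:

```python
# Hypothetical dilated-CNN crowd-density estimator, sketched in PyTorch.
import torch
import torch.nn as nn

class CrowdDensityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Front end: plain convolutions extract low-level features.
        self.frontend = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # Back end: dilated convolutions widen the receptive field
        # without further downsampling, which helps in dense crowds.
        self.backend = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),  # single-channel density map
        )

    def forward(self, x):
        return self.backend(self.frontend(x))

model = CrowdDensityNet()
frame = torch.randn(1, 3, 256, 256)              # e.g., one drone video frame
density = model(frame)
print("estimated count:", density.sum().item())  # untrained, so arbitrary
```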

Other security issues have been brought up concerning the forthcoming World Cup event, in addition to the usage of biometric technologies to survey attendees. Visitors entering Qatar will be required to download two smartphone applications that may jeopardize their personal privacy and data security. Access to events will be managed by the Hayya Card, a digital identification card that can only be obtained by uploading a passport scan and a clear photo of your face.

Facial recognition is not the only technology Qatar's World Cup organizers are deploying to monitor football fan activity. FIFA said earlier that the tournament would use semi-automated offside detection technology, which will let officials make judgments more quickly and help the game flow.

These facial recognition-based monitoring devices have been implemented at football stadiums and clubs around the world in recent years. Valencia CF and the biometrics company FacePhi signed a contract in June 2021 to develop and implement face recognition technology at Mestalla Stadium for the following season. 

Read More: Iranian Government Admits using Facial Recognition to Identify Women Violating Hijab Rule

A face recognition system developed by Russian technology company NtechLab was previously utilized by local law enforcement to identify and detain more than 40 people during World Cup-related activities in Moscow in 2018. NEC, a Japanese company that also provided its face recognition cameras for the 2020 Tokyo Olympics, provided facial recognition stadium security for the Brazil 2014 World Cup.

Facial recognition technology has not always been successful in monitoring crowds, since there have been instances where things went wrong. At the 2017 Champions League final in Cardiff, UK, facial scanning technology falsely labeled almost 2,000 spectators as potential offenders. After a court ruling, the system was shelved, only to be redeployed early this year.


Meta lays off more than 11,000 employees amid other steps to become leaner and more efficient


Mark Zuckerberg, Meta's CEO, announced that the company is downsizing, laying off approximately 13% of its workforce, more than 11,000 employees. The layoffs are part of a set of measures, including a hiring freeze through Q1, that Meta is taking to cut discretionary spending.

Zuckerberg said that the company kept investing heavily in digitization after the world moved online due to the COVID-19 pandemic. At the time, digitization brought in considerable revenue, which led companies to anticipate outsized revenue growth even after the pandemic ended. However, it did not work out as expected, and digital trends have returned to their pre-pandemic levels.

Read More: Zoho to Expand R&D as it Crosses the US$1 Billion Milestone

To survive the paradigm shift, the company needs to become more capital-efficient. Zuckerberg added, “We’ve cut costs across our business, including scaling back budgets, reducing perks, and shrinking our real estate footprint.”

Meta will compensate all affected US employees with 16 weeks of severance pay plus two additional weeks for every year of service; an employee with four years at the company would receive 24 weeks, for example. The company will also provide immigration support to help employees on work visas. Other benefits include payment for all remaining PTO, continued health insurance coverage, career support from an external vendor, and access to job leads.


Why is Twitter laying off employees post Musk takeover?


Elon Musk finally closed his US$44bn acquisition of the social media platform Twitter on October 28, after the court put the lawsuit over the previously abandoned deal on hold. Since the takeover, the platform has been in the limelight for various reasons, including potent rumors, employee layoffs, and accusations that it is dropping or weakening its content policies. The takeover has raised speech concerns worldwide, especially given that Musk is a self-confessed "free speech absolutist." To head off further debate on the issue, Musk tweeted that no policy changes had been made.

Nevertheless, the platform remained in the spotlight as Musk dissolved the board of directors and installed himself as the 'sole director.' Within the first week, Musk fired Chief Executive Parag Agrawal, while others, like Chief People and Diversity Officer Dalana Brand, resigned after the takeover.

Moving into the second workweek under Musk, the platform was expected to lay off over 25% of its employees, bring in higher-profile hires, and restructure the microblogging platform.

However, as many as 3,700 people were laid off worldwide on Friday, November 4, of which over 180 were from the marketing and communications department in India alone. Soon after, many of these people were contacted again because they had been fired "in error" or were "too essential" to the changes Twitter was working on.

The company has been sued for mass layoffs in California, where employees have allegedly been sacked without valid notice. Following the abrupt suspension of their access to corporate services, including email and Slack, several employees discovered they had been fired. Another class-action lawsuit was filed in San Francisco federal court following the layoffs, now being called a “whiplash.”

However, the changes Musk's takeover brought to the employee portfolio stem from Twitter's performance over the last few years despite its massive workforce. The social media platform was losing as much as US$4m a day, leaving the new director little choice but to cut less significant roles. Musk clarified that the company offered three months of severance pay, roughly 50% more than the legal requirement.

Read More: MSU Researchers Release ‘DANCE,’ A Python Library For Deep Learning

The shift may not seem significant; however, it has disproportionately affected underrepresented groups such as women, Hispanic, and Black employees. In 2021, Twitter saw a considerable increase in the proportion of Black and Latinx employees, partly due to the company's ability to accommodate remote work. These groups make up a larger share of those who wish to work remotely than their male or white counterparts.

Many believe that Twitter timed the layoffs to save on the year-end compensation owed to employees. Musk has been vocal in denying such accusations, by tweeting, as always.

Jack Dorsey, Twitter's former CEO and co-founder, broke his silence on the layoffs and admitted he had hired employees too quickly, creating the pressing need to downsize after Musk took over. The changes Twitter is experiencing after Musk's takeover were long-awaited, given the platform's performance and spending. For instance, the platform spent 50% more than DeepMind on research and development!

Nevertheless, Musk is encouraging those still working at the company to release new features rapidly. The company plans to roll out the Blue subscription in India, a feature that provides additional access to the most engaged people on the platform. Musk has also brought in around 50 professionals, including people from Tesla's Autopilot team, Neuralink, and the Boring Company, to add more experts to the team.


Is Tesla paving the way for humanoids by developing its bots to exhibit enormous strength?


Tesla, owned and operated by Elon Musk, has been working on developing functional prototypes of humanoid robots. The company introduced its first bot as a concept in August 2021 under the name Tesla Bot, later renamed Optimus. A little over a year later, the bot entered an operational prototype phase and was presented to the world on AI Day in September 2022. Another prototype, named Bumble C, was presented the same day; it walked on stage without backup support and could swiftly move its arms and legs.

The company claimed that the bots would be an essential part of the company’s portfolio in the coming decade. Even though people do not believe robots will soon be ready for prime time, they already play numerous roles in daily life. Several robots are currently employed in manufacturing automobiles, and vacuum robots clean homes daily. On the other hand, humanoid robots are only seen in a few experimental ventures. 

Nonetheless, Tesla is optimistic about the future of humanoids like Optimus. In a new video posted by Tesla to promote job openings in the Actuators Team, Tesla Bots were seen exhibiting enormous strength while lifting a piano weighing half a ton (500kg)!

The video shows one of the two actuators quickly lifting the piano using hydraulic power, enough to make people wonder about the power future robots could wield.

After watching the Tesla Bot actuator do the heavy lifting, attendees of Tesla AI Day 2022 who were earlier critical of Optimus and Bumble C, feeling that humanoids consume too much time and effort, may now be impressed.

While the lift looks simple in the video, because the hydraulic pumps sit outside the robot and have all the power they need, it becomes far more complicated when the actuator must be installed inside a compact humanoid body. This is one of the reasons the development of humanoid robots is so challenging.

A single humanoid comprises not one but several such actuators, whose power and size are constrained compared to individual actuators operating externally. Moreover, the actuators used in humanoid robots must support a dynamic range of activities to make the robots useful in the real world. Actuators also require a lot of energy, and the batteries needed to power them can add excessive weight, posing yet another challenge in balancing the weight-to-power ratio.
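
A back-of-the-envelope calculation shows why such lifts strain an on-board energy budget; the lift speed below is assumed purely for illustration:

```python
# Rough numbers for the half-ton piano lift described above.
m, g = 500.0, 9.81        # piano mass (kg) and gravitational acceleration (m/s^2)
force = m * g             # ~4.9 kN just to hold the load against gravity
v = 0.2                   # assumed lift speed (m/s)
power = force * v         # ~1 kW of mechanical output while lifting
print(f"force: {force/1000:.1f} kN, mechanical power: {power:.0f} W")
```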

Read More: Tesla’s second AI Day to be held in Palo Alto

Several other challenges arise when developing a humanoid, of which motion planning and leveraging artificial intelligence are among the toughest. A humanoid must integrate data from its sensors, model its surroundings, and decide what objectives it intends to accomplish. It should be able to plan its movements and interactions with people and objects, and then command its controllers accordingly to move. All of this should happen in a way that is pleasant for others, so there are no uncanny effects: a phenomenon known as the uncanny valley, where something that looks human-like, but not believably so, becomes unsettling and even frightening.

Another reason Tesla's bots and other humanoids might face criticism is the weight of expectations. People often expect humanoids to function just like them, performing all human tasks smoothly. However, robots are a very long way from being able to work the way humans do. People should instead focus on the tasks for which the robots have been designed.

While Tesla has now mastered the prototype in exhibiting the strength of its components, the next step is to see how it performs when placed in a humanoid body.

Nonetheless, Tesla is adamant about its plans to build a 73-kg humanoid prototype powered by a 2.3kWh battery and make it available at a price under US$20,000. Musk confirmed that the prototype would be ready for moderate-scale production by the end of 2023. So, in the coming decade, people may expect to see a fully operational humanoid robot with massive strength and human-like abilities.


IIT Kanpur offers a master’s degree course in cybersecurity

IIT Kanpur (IITK) is all set to offer a master's degree program in cybersecurity for working professionals, organized by IITK's computer science and engineering department. The new program will be taught by world-class faculty members and researchers from IIT Kanpur.

The last date to apply for the new course is 12 November. Although no GATE score is required to enroll, students must have strong academic and professional records and clear the interview process. After completing the course, they will have access to the IIT Kanpur placement cell and incubation center.

Read more: Password attacks rise to 921 every second globally: Microsoft 

Before enrolling in the new cybersecurity course, students must have completed a four-year bachelor's degree or a master's degree in computer science (such as an MCA, ME, or M.Tech) with at least 55% marks, and have a minimum of two years of work experience.

The new cybersecurity course consists of live online and self-paced sessions delivered through the AI-powered iPearl.ai platform, on which students can interact live with IIT Kanpur faculty. The program lets students apply course learnings to projects while working in teams and building a peer network. Final exams will be conducted in major Indian cities, and students can meet the experts and experience the IITK campus during visits.

The program allows candidates to complete the course in one to three years. Candidates are trained in the latest technologies, tools, techniques, and concepts in cybersecurity. The syllabus includes cryptography, computer networking, web security, network security, infrastructure security, operating systems principles, and IoT security.


Zoho to Expand R&D as it Crosses the US$1 Billion Milestone


Zoho, an Indian business software company, announced plans to expand its R&D as it crosses the US$1 billion milestone. The R&D investments, along with a proposal to open over 100 network PoPs (points of presence) worldwide in the next five years, were announced at Zoholics India, the company's annual conference.

Zoho has touched nearly every aspect of software development and web-based business activity, from starting as a network-management company called AdventNet Inc. in 1996 to offering "Zoho Remotely" to facilitate work from home in 2020.

Sridhar Vembu, Zoho's CEO and co-founder, said that the company has always believed in humility, as one cannot create new energy or food by writing code. He added that the company's slower growth from 2021 to 2022 was a reminder that humility runs the world, and that Zoho should pursue its long-term goals by focusing on saving customers' money rather than making supernormal profits.

Read More: Microsoft’s Copilot is Being Sued for Violating Copyright Law

Vembu said, “Unfortunately, recent developments in our industry amidst a backdrop of rapidly deteriorating global economic outlook are a rude reminder of our own limits as technologists.”


Zoho owns its own data centers, currently 12 of them, including two in India. It runs its proprietary software on approximately 14 network PoPs and more than 150 monitoring PoPs that let users monitor their websites' performance. Zoho grew by approximately 77% in 2021 with its top five offerings: Zoho One, CRM Plus, EX by Zoho People, Zoho Workplace, and the finance suite led by Zoho Books. Last year, Vembu was awarded the Padma Shri, India's fourth-highest civilian honor.


Israel to Trial Self-driving bus Pilot Program worth US$17 million

A photo of the Dan bus company's new electric buses at the Reading terminal in Tel Aviv on February 22, 2022. (Avshalom Sassoni/Flash90)

In an effort to reduce the notorious traffic congestion in the nation and enhance services to promote public transportation, Israel is poised to trial self-driving public buses over the next two years.

The Israel Innovation Authority (IIA) announced on Sunday that four consortiums had been chosen to conduct the trial across the nation. The pilot trial will begin with the first phase of operating and testing in a secured and controlled environment to demonstrate technological, regulatory, safety, and business viability. Then the trial will move to the second phase, where they will run self-driving bus lines on public roads within a range that will grow over the course of the two-year trial project.

As part of the national plan for autonomous public transportation, first unveiled in April, the state is contributing half of an investment of NIS 61 million (US$17 million) in the pilot, with the aim of examining the viability of integrating autonomous vehicles into Israel's public transportation system.

The IIA will provide the groups with half of the funding for their pilots, and they will work inside a unique, cutting-edge regulatory framework created by the National Public Transport Authority of the Transportation Ministry, which will also provide them with licenses and supervision.

Traffic congestion has long plagued Israeli roads, and optimizing public transportation would enhance service, the passenger experience, and safety standards. Switching to a fleet of driverless buses would also help the state transport authorities deal with a shortage of bus drivers.

The four companies participating in the public transportation pilot trial include the two main bus companies in Israel, i.e., Egged (which has the largest intercity routes) and Dan (which exclusively operates in the Gush Dan region). The other two companies are Metropoline (which offers bus routes in southern Israel to and from Tel Aviv and Beersheba as well as within and between communities in the Sharon region) and Nateev Express, based in Nazareth, which runs routes across the Upper Galilee in the north.

Read More: Waymo’s Robotaxi Service to hit roads in Los Angeles

These four transportation companies will collaborate with their various groups of startups and organizations in Israel, France, Turkey, Norway, and the United States.

For instance, Egged is collaborating with an unnamed French auto manufacturer whose autonomous car is now being tested in 20 nations. The Dan company will team up with its longtime partner, mobility and ride-sharing company Via Transportation (with whom it operates the Bubble shuttle system in and around Tel Aviv), as well as EasyMile, the French developer of a battery-powered autonomous electric bus, and the Israeli firms Enigmatos, an automotive cybersecurity startup, and Ottopia, which produces assistive technology for autonomous vehicles. EasyMile already has fleets of self-driving buses in France and Germany.

Metropoline will also work with Ottopia, Michigan-based autonomous driving software platform Adastec, Turkish bus manufacturer Karsan, and Norwegian fleet management company Applied Autonomy for its trial. Adastec and Applied Autonomy are also participating in other pilot initiatives for self-driving technology in Michigan and Norway, respectively. Nateev Express will collaborate with Israeli company Imagry, the creator of a camera-based, Level 4-5 self-driving platform, to run autonomous shuttles in and around the Chaim Sheba Medical Center, which is Israel’s largest hospital, situated in Ramat Gan.

According to the IIA, the initiative will allow policymakers to map the infrastructure needed to run an autonomous public transportation system and examine the commercial potential of such a venture. Groups who successfully implement and finish the two-year trial will be granted contracts to expand their services in Israel.


NVIDIA introduces A800 GPU chips to replace banned A100 chips for China


The American semiconductor design company NVIDIA introduced a replacement chip with a slower processing speed for its second-largest market almost three months after the United States blocked China’s access to two of its high-end microchips. The A800 chip, which replaces the banned A100 chip, was specifically created for the Chinese market and will enable the company to continue selling its products to Chinese customers while also adhering to the new US export control regulations.

The A100 processor has been used to power supercomputers, artificial intelligence, and high-performance data centers in sectors ranging from biotech to finance to manufacturing. A100 and NVIDIA’s enterprise AI chip H100 were added to the U.S. export control list in order to address the possibility that the listed products may be routed to or utilized by a “military end use” or “military end user” in China and Russia. Although the Biden administration has pushed for “tech decoupling,” NVIDIA’s introduction of A800 to the Chinese market highlights the US chipmakers’ dependency on China.

Except for their interconnect speeds, the A100 and A800 are nearly identical. The A800 operates at 400 gigabytes per second, while the A100 operates at 600 gigabytes per second. Under the performance threshold set by the new rules, NVIDIA cannot sell chips with interconnect rates of 600 gigabytes per second and above. The A800 GPU, which entered production in the third quarter, satisfies the US government's criteria for reduced export restrictions and cannot be programmed to exceed the limit. However, Reuters reports that NVIDIA declined to comment on whether it had consulted the Commerce Department about the A800 chip.


NVIDIA was not the only company forced to comply with the new US export regulations; AMD was another chip manufacturer that was asked to stop selling AI chips to China or face government action.

Read More: NVIDIA probes power cable burns due to RTX 4090 graphic card

In an earlier statement, NVIDIA said that if Chinese customers do not choose its alternative product offerings, or if the US government does not grant licenses promptly, the export restrictions instituted by the Biden administration could cost the company almost US$400 million in sales.

Fortunately, the supply and sale of A800 chips can soften the blow the ban dealt to the Chinese market. Chip distributors in China, including Omnisky, are now advertising the A800 in their product catalogs, and even retailers on the Taobao online marketplace are offering it.
