
Coinbase to Invest in India and Hire 1,000 Employees


Online cryptocurrency trading platform Coinbase announces that it has invested $150 million in Indian startups operating in the Web3 and crypto industries, along with plans to hire more than 1,000 new employees in India this year.

This is a step toward the company’s goal of expanding its operations in India, which has the largest number of crypto owners in the world. To date, Coinbase has invested in two Indian unicorn startups in the crypto space, namely CoinSwitch Kuber and CoinDCX.

Coinbase’s Indian tech hub opened last year and already employs over 300 people throughout the country. With the addition of 1,000 more employees, Coinbase will be able to accelerate the growth of the Indian crypto market considerably.

Read More: Viz.ai raises $100 million in Series D Funding Round

“India has built a robust identity and digital payments infrastructure and implemented it at rapid scale and speed. Combined with India’s world-class software talent, we believe that crypto and web3 technology can help accelerate India’s economic and financial inclusion goals,” mentioned Coinbase in a blog post.

Last week, Coinbase Ventures announced a partnership with Builders Tribe to launch and organize a startup pitch event for India’s Web3 startups on April 8th. 

The announcement comes at a time when the country’s government is yet to decide the fate of cryptocurrencies and blockchain technology in India. 

According to recent reports, the Indian government is currently drafting a list of frequently asked questions to provide more clarity on its position on income tax and GST treatment of virtual assets.

“FAQ on taxation of cryptocurrency and virtual digital assets is in works. Although FAQs are for information purposes and do not have legal sanctity, the law ministry’s opinion is being sought to ensure that there is no loophole,” a government official told PTI. 

Moreover, a matter of concern for investors is that various digital wallets that support crypto trading, such as MobiKwik, have suspended their support for crypto transactions. Coinbase, which launched UPI support for crypto purchases in India, also took the feature down just three days after it went live.


Chipotle Recruits Robots to Prepare Your Next Order of Tortilla Chips

Courtesy: Chipotle

We have seen robots delivering food, assembling car parts, painting, and even dancing to BTS songs. Now, with Chipotle testing a robot that makes tortilla chips, robotics solutions are moving into restaurant chains to help ease ongoing labor challenges. Chipotle is using a version of Miso Robotics‘ arm-based robot, dubbed Chippy, tailored to making tortilla chips. Chippy can deliver your tortilla chips with a dash of additional guacamole, too, because the bot not only understands how to recreate Chipotle’s recipe but also how to make “subtle changes.”

Chippy is now being tested at Chipotle’s Cultivate Center, an innovation hub in Irvine, California, and will be available later this year at a Southern California location. Following that, the company will rely on staff and customer input to design a more comprehensive rollout strategy.

Chipotle is testing out an autonomous kitchen assistant, Chippy, which offers a robotic solution for making chips in restaurants.
Courtesy: Chipotle

For the time being, Chippy uses artificial intelligence to replicate Chipotle’s exact recipe for cooking chips, which includes corn masa flour, water, and sunflower oil, as well as a sprinkle of salt and a spritz of lime juice, all without compromising on quality and taste.

This isn’t Chipotle’s first use of artificial intelligence in its operations. The chain also features a concierge chatbot, Pepper, which helps ensure that customers have a smooth experience on the Chipotle app and Chipotle.com.

This is also not the first time the industry has turned to robots to address labor shortage woes at restaurants. Last month, for instance, White Castle announced the deployment of Flippy 2, a burger-flipping robot made by Miso Robotics, at more than 100 of its locations. Miso’s Flippy Wings is also being tested at Buffalo Wild Wings, which is owned by Inspire Brands. Fast-food chains like Sonic, McDonald’s, and Checkers have experimented with robot-assisted drive-thrus. During the pandemic, a Latin American restaurant in Dallas used a trio of robots as waiters, while robots also cooked and served food to reporters during the Beijing Winter Olympics. And in 2019, the startup Blendid launched Chef B, a smoothie-making robot, on the University of San Francisco’s campus to make its The Classic, Strawberries and Cream, and The Foggy Don smoothies.

According to Miso Robotics CEO Michael Bell, the labor issue isn’t going away anytime soon, and there is huge demand for restaurants to automate operations. Many restaurant owners anticipate that recruiting workers will remain challenging until at least 2023, according to a February National Restaurant Association report, even as the industry’s workforce grows by an estimated 400,000 jobs.

Read More: Does deployment of robotic dogs at US-Mexico border pose a serious ethical conundrum?

Apart from addressing the labor shortage, another impetus for firms like Chipotle to adopt robotics is cost savings. The unfortunate flip side is that cutting costs can also cost human workers the jobs they need. In other words, while businesses aim to minimize operational expenses, people are left struggling to secure stable employment in an ever-shifting economic landscape.


Manchester Launches Center to Design AI Robots for Real-World Applications


The University of Manchester announces the launch of its new center to design artificial intelligence (AI)-powered robots for real-world applications. 

With this new center, researchers will develop novel AI-enabled robots to tackle some of the most significant challenges that the world faces today. 

For instance, researchers at the center are developing robotic systems capable of exploring the harshest settings, such as those found in the nuclear sector, power generation, or agriculture.

Read More: Amazon and Johns Hopkins announce new AI institute

The new facility organized an inaugural workshop to offer a strategic focus to Manchester’s robotics and AI community and to share expertise. Researchers at the facility will also consider several ethical aspects while developing the robots, to ensure they adhere to social norms and responsibilities.

Vice Dean for Research and Innovation in the University’s Faculty of Science and Engineering, Prof Richard Curry, said, “Robotics is now an important field that can be found in research areas across the University’s academic portfolio – which is not surprising, as robotic and autonomous systems are being applied in all parts of our lives.” 

He further added that they are giving their diverse, world-class work in robotics and AI a new focus by establishing this new Manchester center of excellence in robotics and AI. 

According to the University of Manchester, the following are a few of the research areas that will be pursued at the center:

  • Designing mechatronics control systems focusing on bio-inspired solutions.
  • Building new software engineering and AI methodologies for autonomous system verification.
  • Researching human-robot interaction focusing on the use of brain-inspired approaches to control robots. 
  • Studying ethics and human-centered robotics challenges.

“Manchester’s robotics community has achieved a critical mass of expertise – however, our approach in the designing of robots and autonomous systems for real-world applications is distinctive through our novel use of AI-based knowledge,” said Angelo Cangelosi, Professor of Machine Learning and Robotics at Manchester. He also mentioned that the University of Manchester is among the best-positioned institutions in the world in the field of autonomous systems.


Viz.ai raises $100 million in Series D Funding Round


Medical imaging and emergency treatment solutions provider Viz.ai raises $100 million in its recently held Series D funding round, led by Tiger Global and Insight Partners.

Multiple other investors such as Scale Ventures, Kleiner Perkins, Threshold, GV (formerly Google Ventures), Sozo Ventures, CRV, and Susa also participated in the funding round. 

The new funding was raised by Viz.ai at a whopping valuation of $1.2 billion. According to the company, it will use the fresh funds to support its rapid growth, expand the Viz Platform to detect and triage additional diseases, and increase its global customer base. 

Read More: Elon Musk to NOT join Twitter’s Board, says CEO Parag Agrawal

“Viz.ai is the stand-out AI healthcare company; they are first-in-class in intelligent care coordination, with a solid foundation of clinical evidence supporting the value delivered to healthcare providers and patients,” said John Curtius, Partner at Tiger Global. 

Additionally, Viz.ai announced the launch of its new artificial intelligence-powered life sciences platform. The company claims the new AI platform will revolutionize clinical trials and treatment methods.

Viz.ai is currently hiring clinical, product, engineering, and business minds to support growth in multiple locations across the globe. 

United States-based technology company Viz.ai was founded by Chris Mansi, David Golan, and Manoj Ramachandran in 2016. The firm specializes in developing deep learning and artificial intelligence solutions to help healthcare practitioners better treat their patients. To date, Viz.ai has raised more than $251 million from multiple investors over seven funding rounds. 

Co-founder and CEO of Viz.ai, Chris Mansi, said, “We will continue to invest heavily in cutting edge technology and services to integrate deeply into the clinical workflow, allowing us to automate disease detection, and increase diagnostic rates and enhance workflows across the entire hub and spoke health system.” 

Chris further added that the company is committed to providing better, faster, and more equitable access to life-saving treatments to patients.


Elon Musk to NOT join Twitter’s Board, says CEO Parag Agrawal


Twitter CEO Parag Agrawal recently shared a tweet disclosing that Elon Musk has chosen not to join Twitter’s board of directors.

This development comes after Musk bought a 9.2% stake in the company, making him Twitter’s single largest shareholder.

Agrawal and outgoing board member Jack Dorsey had shown their support for Musk and welcomed him to join Twitter’s board when it was revealed that he had invested a whopping $2.9 billion to acquire his stake in the company.

Read More: New AI tool can identify Depressed Twitter Users

“We announced on Tuesday that Elon (Musk) would be appointed to the Board contingent on a background check and formal acceptance. Elon’s appointment to the board was to become officially effective 4/9 (9 April 2022), but Elon shared that same morning that he will no longer be joining the board,” said Parag Agrawal in his recent tweet, addressing Twitter users and his team members.

Prior to this episode, Agrawal had several conversations with the Twitter board and directly with Musk about his joining. Twitter had earlier said it had reached an agreement with Musk to give him a seat on the board of directors that would last until its annual shareholders meeting in 2024.

Soon after acquiring the shares, Musk posted various tweets inviting users to share their opinions on possible changes to the microblogging platform, such as the addition of an edit button, an authentication checkmark feature, integration of Dogecoin payments, and several others.

Additionally, Musk suggested that Twitter should charge for its subscription membership, adding that the fee should be proportionate to local affordability and payable in local currency.


New AI tool can identify Depressed Twitter Users


Researchers from Brunel University London and the University of Leicester have developed a novel artificial intelligence (AI) tool that accurately identifies depressed Twitter users by extracting and analyzing 38 data points from a user’s public profile.

The algorithm considers factors like post contents, posting times, and the user’s social circle to determine whether a user is depressed.

The developers of the AI algorithm claim it achieves a respectable accuracy of nearly 90%. Due to various causes, including social stigma and ignorance of their own mental state, a vast proportion of prospective depression sufferers around the world never seek professional care.

Read More: Microsoft Offers Detection Guidance on Spring4Shell Vulnerability

This negligence then leads to severe delays in diagnosis and treatment. The new AI algorithm can considerably help change the scenario and might open new ways for future diagnosis. 

Director of Brunel’s Institute of Digital Future, and co-author of the study, Abdul Sadka, said, “We tested the algorithm on two large databases and benchmarked our results against other depression detection techniques. In all cases, we’ve managed to outperform existing techniques in terms of their classification accuracy.” 

The AI tool filters out profiles with fewer than five tweets and then uses natural language processing to repair misspellings and abbreviations in the remaining profiles. According to the researchers, such technology could potentially detect a user’s depression before they publish something online, allowing social media platforms to raise alerts that might help identified users get diagnosed and treated earlier.
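The article does not disclose the study’s full feature set or model, but the pipeline it describes (filtering out sparse profiles, normalizing text, extracting content, timing, and social-circle features, then classifying) can be sketched roughly as below. The feature definitions, the five-tweet threshold location, and the random-forest classifier are illustrative assumptions, not the authors’ implementation.

```python
# A minimal sketch of the reported pipeline; the study's 38 data points
# and model are not public here, so these features are placeholders.
from dataclasses import dataclass
from typing import List

from sklearn.ensemble import RandomForestClassifier


@dataclass
class Profile:
    tweets: List[str]      # raw tweet texts
    post_hours: List[int]  # hour of day (0-23) for each post
    followers: int
    following: int


def eligible(p: Profile, min_tweets: int = 5) -> bool:
    # The researchers discard profiles with fewer than five tweets.
    return len(p.tweets) >= min_tweets


def normalize(text: str) -> str:
    # Placeholder for the NLP step that repairs misspellings and
    # expands abbreviations before feature extraction.
    return text.lower().strip()


def extract_features(p: Profile) -> List[float]:
    # Stand-ins for the 38 data points spanning post contents,
    # posting times, and the user's social circle.
    texts = [normalize(t) for t in p.tweets]
    avg_words = sum(len(t.split()) for t in texts) / len(texts)
    night_ratio = sum(h < 6 for h in p.post_hours) / max(len(p.post_hours), 1)
    follow_ratio = p.followers / max(p.following, 1)
    return [avg_words, night_ratio, follow_ratio]  # ...35 more in the study


def train(profiles: List[Profile], labels: List[int]) -> RandomForestClassifier:
    # labels: 1 = depressed, 0 = not depressed (from annotated datasets)
    kept = [(p, y) for p, y in zip(profiles, labels) if eligible(p)]
    X = [extract_features(p) for p, _ in kept]
    y = [y for _, y in kept]
    return RandomForestClassifier(n_estimators=200).fit(X, y)
```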

Moreover, the tool could also be used for other purposes, including sentiment analysis, employee screening, and criminal investigation.

“The next stage of this research will be to examine its validity in different environments or backgrounds, and more importantly, the technology raised from this investigation may be further developed to other applications, such as e-commerce, recruitment examination or candidacy screening,” said co-author of the study Huiyu Zhou. 

The study has been published in IEEE Transactions on Affective Computing.


OpenAI unveils DALL-E 2, an updated version of its text-to-image generator

Source: OpenAI

In January 2021, OpenAI launched DALL-E, a 12-billion-parameter version of GPT-3 trained to produce pictures from text descriptions using a dataset of text-image pairs. Named in a portmanteau of the artist Salvador Dalí and the robot WALL-E, DALL-E was an instant hit in the AI community thanks to its astounding performance, and it also received extensive mainstream media coverage. It can synthesize items that don’t exist in the actual world by combining diverse concepts, and it can conduct prompt-based image-to-image translation tasks. Recently, OpenAI unveiled DALL-E 2, an upgrade to its text-to-image generator that incorporates a higher-resolution and lower-latency version of the original system.

DALL-E 2 results for “Teddy bears mixing sparkling chemicals as mad scientists, steampunk.”
Source: OpenAI

When DALL-E was first announced, OpenAI stated that it would continue to improve the system while studying potential risks such as bias in image generation and the spread of misinformation. It has sought to overcome these challenges with technical precautions and a new content policy, while simultaneously decreasing the model’s computational load and advancing its core capabilities.

DALL-E 2 variation of “A photo of an astronaut riding a horse” in a photorealistic style.
Source: OpenAI

DALL-E 2 is based on OpenAI’s CLIP image recognition system, which was trained to inspect a given image and summarize its contents the way a person might describe them. OpenAI inverted this process to create “unCLIP,” which begins with the description and works backward toward an image.

Users can now select and modify particular portions of existing images, add or delete items and their shadows, mash two images into a single collage, and create variants of an existing image. Furthermore, the output images are 1,024 × 1,024 pixels, up from the 256 × 256 pixels produced by the previous version.

DALL-E 2 works in two stages: the first generates a CLIP image embedding from a text caption, and the second decodes a picture from that embedding. The results are impressive, and they might have a significant impact on the art and graphic design industries, particularly video game businesses, which hire designers to painstakingly develop worlds and concept art.
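Schematically, the two stages compose as below. This is a structural sketch only: random number generators stand in for the trained CLIP text encoder, diffusion prior, and diffusion decoder from the DALL-E 2 paper, so the output here is noise rather than a real picture.

```python
import numpy as np

# Structural sketch of unCLIP's two stages with toy stand-ins for the
# trained networks; only the data flow matches the real system.
rng = np.random.default_rng(0)


def clip_text_encoder(caption: str) -> np.ndarray:
    # Real system: a transformer maps the caption to a CLIP text embedding.
    return rng.standard_normal(768)


def prior(text_emb: np.ndarray) -> np.ndarray:
    # Stage 1: predict a CLIP *image* embedding from the text embedding.
    return text_emb + 0.1 * rng.standard_normal(768)


def decoder(image_emb: np.ndarray, caption: str) -> np.ndarray:
    # Stage 2: a diffusion decoder conditioned on the image embedding
    # produces the pixels (here, a 1024x1024 RGB placeholder).
    return rng.standard_normal((1024, 1024, 3))


caption = "an astronaut riding a horse in a photorealistic style"
image = decoder(prior(clip_text_encoder(caption)), caption)
print(image.shape)  # (1024, 1024, 3)
```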

DALL-E 2 produces images that are several times larger and more detailed than the original’s. This enhancement is possible due to the transition to a diffusion model, a form of image generation that begins with pure noise and refines the image over time, making it a little more like the requested image at each step until no noise is left at all.
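The refinement loop can be illustrated with a toy example. Here a hand-made nudge toward a target array stands in for the trained denoising network, which in a real diffusion model predicts the noise to remove at each step; everything below is an assumption for illustration.

```python
import numpy as np

# Toy illustration of the diffusion idea: start from pure noise and
# nudge the sample toward the "requested image" at every step.
rng = np.random.default_rng(0)
target = rng.uniform(size=(8, 8))  # stand-in for the requested image

def denoise_step(x: np.ndarray, t: int) -> np.ndarray:
    # A real model predicts and removes noise; this toy simply moves x
    # a fraction of the way toward the target, finishing exactly at t=0.
    return x + (target - x) / (t + 1)

x = rng.standard_normal((8, 8))  # pure noise
for t in reversed(range(50)):
    x = denoise_step(x, t)

print(np.abs(x - target).max())  # ~0.0: the noise has been refined away
```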

DALL-E 2 can also generate a smart replacement for a given area in an image. Furthermore, you can provide the system with an example image, and it will produce as many variations of it as you like, ranging from very near copies to artistic revisions.

Unlike the earlier version, which was open for everyone to play with on the OpenAI website, the new version is currently available only for testing by verified partners, who are limited in what they may submit or produce with it. Testers are prohibited from uploading or creating images that are “not G-rated” or “could cause harm,” such as hate symbols, nudity, obscene gestures, or “big conspiracies or events relating to important ongoing geopolitical events.” They must also explain how AI was used to create the images, and they cannot share the images with others via an app or website.

Read More: OpenAI announced Upgraded Version of GPT-3: What’s the catch?

The existing testers are also prohibited from exporting their created works to a third-party platform. However, OpenAI aims to incorporate it into the group’s API toolkit in the future, allowing it to power third-party apps. Meanwhile, if you wish to try DALL-E 2 for yourself, you can sign up for the waitlist on OpenAI’s website.


Amazon and Johns Hopkins announce new AI institute


Global technology giant Amazon announces a partnership with Johns Hopkins University (JHU) to establish the new JHU + Amazon Initiative for Interactive AI (AI2AI).

The jointly established artificial intelligence institute will primarily pioneer AI developments in machine learning, computer vision, natural language processing, and speech processing.

The Johns Hopkins Whiting School of Engineering’s new JHU + Amazon Initiative for Interactive AI will take advantage of the university’s world-class expertise in interactive AI. 

Read More: TCS’ Conversational AI Platform recognized by Celent

According to the announcement, the project’s inaugural director will be Sanjeev Khudanpur, an associate professor in the Department of Electrical and Computer Engineering. 

“Hopkins is already renowned for its pioneering work in these areas of AI, and working with Amazon researchers will accelerate the timetable for the next big strides. I often compare humans and AI to Luke Skywalker and R2D2 in Star Wars: They’re able to accomplish amazing feats in a tiny X-wing fighter because they interact effectively to align their complementary strengths,” said Khudanpur.

He further added that he is thrilled about the potential of the Hopkins AI community banding together under the banner of this endeavor and mapping the future of transformational, interactive AI alongside Amazon researchers. 

The funding from Amazon will be used for multiple purposes in this project, such as awarding annual fellowships to Ph.D. students, supporting collaborative research projects led by JHU faculty, and several others.

The two organizations have collaborated before, with four Johns Hopkins faculty members joining Amazon as part of its Amazon Scholars program.

“We value the challenges that they bring us and the life-changing potential of the solutions we will create together, and look forward to strengthening our work together over the coming years,” said Ed Schlesinger, Benjamin T. Rome Dean of the Whiting School of Engineering. 


EU Provides Regulatory Approval for First Autonomous X-ray-Analyzing AI


The European Union (EU) has given regulatory approval to the first AI tool capable of fully autonomous X-ray analysis. Designed by medical imaging company Oxipit, the autonomous AI imaging suite, called ChestLink, was developed to operate alongside medical practitioners and support their clinical workflow. ChestLink scans X-rays for anomalies and sends findings directly to patients when their X-rays are normal. When the tool detects a possible health issue, it transmits the X-ray to a radiologist for manual assessment. And if the AI is not confident whether a patient’s X-ray is normal, it likewise sends its findings to a radiologist as a precaution, to guarantee that no underlying health risks are missed.
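That routing behavior amounts to a three-way triage. A minimal sketch follows, assuming a classifier that outputs an abnormality probability and a confidence score; the thresholds, names, and signature are illustrative, not Oxipit’s published implementation.

```python
from enum import Enum, auto

# Illustrative triage routing for the behavior described above;
# thresholds and names are assumptions, not Oxipit's implementation.

class Route(Enum):
    REPORT_NORMAL = auto()         # autonomous report: scan looks healthy
    RADIOLOGIST = auto()           # possible finding: manual assessment
    PRECAUTIONARY_REVIEW = auto()  # low confidence: radiologist double-checks

def triage(p_abnormal: float, confidence: float,
           abnormal_thr: float = 0.5, conf_thr: float = 0.9) -> Route:
    if confidence < conf_thr:       # model unsure the scan is normal
        return Route.PRECAUTIONARY_REVIEW
    if p_abnormal >= abnormal_thr:  # likely anomaly detected
        return Route.RADIOLOGIST
    return Route.REPORT_NORMAL      # confidently normal

# A confidently normal scan is reported without radiologist involvement.
assert triage(p_abnormal=0.02, confidence=0.98) is Route.REPORT_NORMAL
```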

According to Oxipit, most X-rays in primary care are trouble-free, so automating the procedure for such scans could reduce radiologists’ workloads.

In the EU, the technology currently has CE Class IIb certification, indicating that it complies with safety regulations. The certification is comparable to FDA clearance in the United States, but the requirements are significantly different: a CE mark is easier to get, takes less time, and doesn’t involve as much review as FDA clearance. The FDA examines a device to see if it is safe and effective, and it frequently requests additional information from manufacturers.

ChestLink spent more than a year in numerous pilot tests at various locations, evaluating 500,000 real X-rays before being certified, and it made zero “clinically relevant” errors during these tests. Oxipit is hopeful that ChestLink will also be certified by the FDA for potential use in the United States.

Read More: EU Parliament Passes Privacy-Busting Crypto Rules

The first clinical deployment of ChestLink is scheduled for early 2023. For the sake of safety and certainty, Oxipit will begin by assigning ChestLink retrospective analyses, i.e., X-rays that have previously been evaluated by radiologists. Once it passes that real-world test, ChestLink will be deployed for preliminary analysis under the supervision of Oxipit and medical institution personnel. In the final stage of implementation, ChestLink will move to autonomous prospective reporting. And if a radiologist ever needs to manually evaluate an X-ray that ChestLink has given a clean bill of health, staff will be able to swiftly trace the application’s decisions on a real-time analytics page.


Boeing partners with Microsoft to accelerate Digital Transformation


Aircraft manufacturing company Boeing partners with technology giant Microsoft to accelerate its digital transformation journey. 

This strategic partnership with Microsoft will allow Boeing to use the Microsoft Cloud and AI capabilities to upgrade its IT infrastructure and mission-critical applications with intelligent new data-driven solutions, allowing for new ways of working, operating, and conducting business. 

Boeing was one of the first companies to use the Microsoft Cloud, hosting multiple digital aviation applications on Microsoft Azure and leveraging artificial intelligence to improve customer outcomes and streamline operations.

Read More: Microsoft Offers Detection Guidance on Spring4Shell Vulnerability

According to the plan, Boeing will leverage Microsoft Cloud and AI capabilities to upgrade essential infrastructure and optimize business processes, among several other tasks.

Chief Information Officer and Senior Vice President of Information Technology & Data Analytics at Boeing, Susan Doniz, said, “Today’s announcement represents a significant investment in Boeing’s digital future. Our strategic partnership with Microsoft will help us realize our cloud strategy by removing infrastructure restraints, properly scaling to unlock innovation, and further strengthening our commitment to sustainable operations.” 

She further added that Microsoft’s proven collaboration strategy, trusted cloud technologies, and extensive industry knowledge will assist them in achieving their transformation goals and strengthening Boeing’s digital foundation. 

This new development will allow Boeing to extract meaningful, long-term value from its vast stores of data, confirming the two companies’ shared commitment to leading aerospace innovation in the future. Boeing is a global leader in designing, manufacturing, and servicing commercial airplanes and defense equipment, with a customer base spread across 150 countries.

EVP and Chief Commercial Officer of Microsoft, Judson Althoff, said, “Boeing and Microsoft have been working together for more than two decades, and this partnership builds on that history to support Boeing’s digital future by helping it optimize operations and develop digital solutions that will benefit the global aviation industry.” 

He also mentioned that by providing flexible, agile, and scalable intelligent and data-driven solutions on a secure and compliant platform, the Microsoft Cloud and its AI capabilities would serve as a significant component of Boeing’s digital aviation strategy.
