
Movio to make videos featuring human avatars using generative AI


Movio, a two-year-old startup, is leveraging machine learning and generative AI techniques such as generative adversarial networks (GANs) to make videos featuring talking human avatars. The platform is developing a Canva-style drag-and-drop interface for video design.

Users can first pick from a range of templates, and then they can add a hyper-realistic avatar to be the spokesperson of the video, with speech generated by text input. The AI-made human’s outfit, face, and voice can be swapped with a click.

Movio can synthesize only talking heads for now. Still, it is working on algorithms that can generate whole-body movement, bringing the company closer to its aim of being an all-in-one AI video production platform.

Read More: UK Bank Santander To Block Crypto Exchanges Next Year 

The startup charges users based on video length, which correlates with the script they submit, plus a premium fee for customized faces, a feature particularly popular in corporate training. Moreover, Movio has opened its API to third-party websites, some of which use its engine to create pop-up customer support avatars.
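To make the integration concrete, here is a minimal sketch of how a third-party site might call a talking-avatar video API of this kind. The endpoint, field names, and response shape are illustrative assumptions, not Movio's documented interface.

```python
# Hypothetical sketch of calling a talking-avatar video API like Movio's.
# The URL, fields, and response keys are assumptions for illustration only.
import requests

API_URL = "https://api.example-avatar-video.com/v1/videos"  # placeholder URL

payload = {
    "template_id": "corporate-training-01",  # start from a template
    "avatar_id": "custom-face-123",          # premium: customized face
    "voice_id": "en-US-female-2",            # voice can be swapped per video
    "script": "Welcome to today's onboarding session.",  # speech from text
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=30,
)
response.raise_for_status()

# Since billing is tied to script length, a response would plausibly echo an
# estimated duration alongside a URL for the rendered video.
job = response.json()
print(job.get("video_url"), job.get("estimated_duration_seconds"))
```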

Movio’s paying user base is nearing 1,000. It has raised around $9 million so far from investors including Sequoia Capital China, IDG, and, most recently, Baidu Ventures.


Tesla’s Autopilot is facing unprecedented scrutiny. But why?


Elon Musk has championed Tesla’s driver-assistance systems, Autopilot and Full Self-Driving (FSD), as innovative advancements that can improve road safety while positioning the electric vehicle maker as a technology leader. Despite that, Tesla has been battling its biggest challenge since the launch of Autopilot in 2015, as it faces severe scrutiny over the feature.

Autopilot is at the center of a series of lawsuits and investigations over fatal Tesla car accidents, which raises the question of what is driving such unprecedented scrutiny. Let’s look at some of the legal and regulatory challenges Tesla has been dealing with:

Lawsuits 

A Model S driver charged with manslaughter in 2019 after a fatal accident while using Autopilot in Los Angeles is facing a trial set to start on November 15. Although Tesla was not charged, the Autopilot system and the company’s claims about it are expected to be in focus, as several US senators have demanded an investigation. Legal experts are watching the trial closely as a test case for the culpability of a human driver in a car that partly drives itself. The family of the people who died in the crash has also sued the EV maker, claiming Tesla should have taken action to safeguard against abuse of Autopilot.

Read More: Tesla To Start Mass Production Of Cybertruck At The End Of 2023 

Another lawsuit against Tesla will go to trial in February. It concerns an accident in which 50-year-old Tesla Model 3 owner Jeremy Banner died when his EV struck a tractor-trailer at a highway intersection in Florida. This will be the first civil lawsuit related to Autopilot to go to trial. Apart from this, a 2018 crash of a Tesla Model X killed its driver, Apple engineer Walter Huang, when it slammed into a concrete divider on a freeway in Mountain View, California. A lawsuit by his wife is set to go to trial in March.

The National Transportation Safety Board investigated the Florida and California accidents and blamed both Tesla and the drivers. The NTSB said drivers rely too much on the Autopilot system, while Tesla failed to restrict Autopilot use or adequately monitor driver attentiveness.

Investigations

Tesla is also facing investigations by the US Department of Justice, California’s DMV, and NHTSA over claims that the company’s EVs can drive themselves. The Department of Justice investigation could potentially conclude with criminal charges against Tesla or its executives.

In August, California’s Department of Motor Vehicles (DMV) accused Tesla of “deceptive practices” in advertising that suggested its driver-assistance technology enabled autonomous vehicle control. The DMV could potentially suspend Tesla’s license to sell EVs in California and ask the company to make restitution to drivers. It is also conducting an independent safety review that could force Tesla to apply for regulatory permits to operate its vehicles in the state.

In June, the National Highway Traffic Safety Administration upgraded its defect investigation into 830,000 Tesla vehicles with Autopilot, a required step before it could seek a recall. The auto safety regulator is reviewing whether Tesla vehicles adequately ensure drivers are paying attention. Since 2016, NHTSA has opened nearly 40 special investigations involving 19 deaths in crashes involving Tesla vehicles.

Conclusion

It is evident from the lawsuits and investigations above that Tesla’s Autopilot appears to be a contributing factor in some of these accidents, though Tesla’s exact liability remains unclear. Despite their names, Autopilot and Full Self-Driving have significant limitations.

In a letter to California’s Department of Motor Vehicles, a Tesla lawyer acknowledged that Full Self-Driving struggles to react to a wide range of driving situations and should not be considered a fully autonomous driving system. Germany’s federal motor transport authority, KBA, also found abnormalities while investigating Tesla’s Autopilot function. The software and sensors cannot control the car in many situations, which is why drivers need to keep their eyes on the road and hands close to the wheel. For now, it seems only fair that Tesla cooperates with the ongoing lawsuits and investigations and, if found at fault, takes responsibility for its shortcomings.


Kerala-based DCUBE Ai collaborates with Australian startups to provide healthcare to astronauts


DCUBE Ai, a machine learning applications company based in Thiruvananthapuram, is venturing into space tech. At the recent Bengaluru Space Expo, it signed an MoU with Australian space tech startups AltData and SABRN. The companies will collaborate with the University of Adelaide to provide healthcare services to astronauts.

While SABRN is creating a health pod, or E-Lifepod, to monitor astronauts’ well-being, DCUBE Ai’s artificial intelligence will support the pod’s analytics and sensor integration. DCUBE Ai is one of six companies to have signed such MoUs with Australian startups.

The MoU is part of the International Space Investment initiative between Australia and India. As part of the project, the partners will develop hardware and software that must be tested in orbit.

Read More: Password Attacks Rise To 921 Every Second Globally: Microsoft 

The company said PSLV missions are a possibility for testing and validating the hardware for unmanned deep-space missions. However, its immediate focus remains on AI/ML capabilities for space tech.

DCUBE Ai creates software solutions using emerging technologies for medium and large enterprises, solving clients’ business challenges. It provides services in technology consulting, custom software application development, software maintenance, and integration using technologies including IoT, AI, and cloud.


Microsoft’s Copilot is Being Sued for Violating Copyright Law


Matthew Butterick, an attorney and programmer, is suing Microsoft over its Copilot tool for violating copyright law. Along with Microsoft, GitHub, the software developer platform, and OpenAI, the artificial intelligence company, are also being sued for related damages.

Copilot is an AI-powered programming assistant developed by GitHub in collaboration with Microsoft and OpenAI that can produce code in several programming languages. Copilot uses OpenAI’s Codex, an artificial intelligence model based on GPT-3, to produce code recommendations. Codex was trained on GitHub-hosted open-source code, enabling the model to pick up on common coding patterns.
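For context, here is a minimal sketch of requesting a completion from Codex using the OpenAI Python library as it existed around 2022 (the Completion API and the code-davinci-002 model have since been deprecated); the prompt and parameters are illustrative.

```python
# Minimal sketch: asking Codex (the GPT-3-based model behind Copilot) to
# complete a code snippet, using the circa-2022 OpenAI Python library.
import openai

openai.api_key = "YOUR_API_KEY"

completion = openai.Completion.create(
    model="code-davinci-002",  # Codex model trained on public code
    prompt="# Python function that reverses a string\ndef reverse_string(s):",
    max_tokens=64,
    temperature=0,             # low temperature for a deterministic suggestion
)

print(completion.choices[0].text)
```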

Butterick claims that Copilot reproduces code without disclosing its source, infringing both the rights of the programmers who wrote that code and the terms of various open-source licenses. He therefore filed the lawsuit on behalf of all those who suffered damages.

Read More: NITI Aayog’s Notion of Responsible AI

This is not the first time that Copilot has come under fire. Sam Nguyen, a software engineer at SendGrid, reported that Copilot was leaking functional API keys. Another developer claimed that a large chunk of code generated by Copilot was his copyright.

The Joseph Saveri Law Firm, which filed Butterick’s lawsuit in the Northern California federal district court, said it would make an effort to contact affected stakeholders via a separate website.


Humanoid Robots: Hype for Economic Gains or Threat to Humans?


The fascination with humanoid robots is understandable, and several companies are already developing cutting-edge humanoid robots that promise widespread use of intelligent machines in various applications and contexts. According to a new Goldman Sachs estimate, the market for humanoid robots, like the newly debuted Tesla Bot, might reach US$150 billion annually within the next 15 years. The rise of humanoid robots in the news may corroborate this bold prediction, but are we humans prepared to embrace such robots as a part of our everyday lives? Haven’t movies like the Terminator series, Avengers: Age of Ultron, and X-Men: Days of Future Past warned us otherwise?

Goldman Sachs has been following the Tesla-led resurrection of humanoid robots and recently published a fresh analysis laying out a potential early market investment case. According to the report, the market for humanoid robots might be worth US$154 billion by 2035, in line with what Tesla CEO Elon Musk himself has predicted.

Goldman Sachs asserted that the introduction of Tesla’s humanoid robot prototype, “Optimus,” has rekindled discourse about the market potential presented by such innovation. The company believes that in 10–15 years, a market size of at least US$6 billion would be feasible, allowing humanoid robots to meet 4% of the US manufacturing labor shortage gap by 2030 and 2% of the world’s need for elderly care by 2035.

The company further explained its “blue-sky scenario”: Goldman Sachs anticipates a market of up to US$152 billion by 2035, close to the global market for electric vehicles and one-third of the market for smartphones as of 2021. In that scenario, labor shortage problems, such as those in manufacturing and elderly care, could be addressed to a large extent.

Tesla Optimus Prototype

It’s interesting to note that Goldman Sachs acknowledges Tesla as the driving force behind the ‘renaissance’ of humanoid robots, yet the firm advises investors to buy into motion-component suppliers to profit from this new market.

2022 has been a remarkable year for humanoid robots. Robots are often portrayed negatively in Hollywood as incarnations of evil. While the assassin robot is a common trope in Western popular culture, many people in other regions of the world view robots as saviors, caretakers, or assets.

Ameca

Last month, the Museum of the Future (MOTF) in Dubai unveiled a humanoid robot named Ameca as a new addition to its staff. Ameca can welcome people, give directions, and speak numerous languages. The museum’s official Instagram account posted a video of Ameca and an MOTF employee identified as Aya introducing themselves, and the video went viral right away. While many viewers were astounded, others were concerned for the future of humanity. Ameca was created by Engineered Arts; at the moment, its bottom half is not functional, so it cannot move. The robot’s creators say they are developing a version that will be more humanlike.

Grace

In the same month, researchers at the Jewish General Hospital in Montreal started a pilot study to determine whether Grace, a humanoid robot, can help seniors overcome loneliness. The GeriPARTy Laboratory team created Grace to listen when people talk and then generate a response. She can also lead reading exercises, tell jokes, and discuss subjects that older adults find interesting. As part of the research, Résidence Pearl & Theo will get visits from the humanoid robot twice weekly for a period of eight weeks. During each 30-minute session, her mission will be to keep elders in nursing facilities company and help overcome social isolation. According to the residents, unlike human companions, Grace can work the whole day without getting tired or bored and does not have to cope with the mental strain of handling strong emotions.

Sona

Residents of the Gulmohar Garden Society in Jaipur, India, saw the Sona 3.5 AI Humanoid Robot and Sona 2.5 Service Robot participate in the Diwali celebrations. These robots, created by Club First Robotics in Jaipur, can perform various tasks. For instance, Sona 2.5 oversaw the cafeteria’s food service while Sona 3.5 filed grievances, collected comments, and answered questions about the society. During the festivities, Xena 5.0 handled all security responsibilities, turned on the fire suppression system, irrigated the garden, and kept an eye on everything on camera.

Ai-Da

October also saw Ai-Da, the world’s first ultra-realistic AI robot artist, questioned by a committee in the UK Parliament, marking the first time in history a robot appeared before the UK’s upper house, the House of Lords. Ai-Da testified before the Communications and Digital Committee as part of an inquiry into the future of the creative industries. The humanoid robot, clad in an orange shirt and denim dungarees and devised by Aidan Meller in collaboration with University of Oxford researchers, addressed questions on the possible challenges artificial intelligence and technology pose to creativity, and spoke openly about its ability to create and enhance creative disciplines, including poetry and painting.

Read More: Boston Dynamics Pledges not to Arm Robots with Weapons

All these inventions have highlighted the caring and resourceful side of humanoid robots. However, many are still not on board with the idea. The quest to develop universal humanoid robotic solutions is complicated and rarely provides the best answer to any given real-world issue. In contrast to the examples above, it is less clear in what direction, and for precisely what real-world applications, humanoid robots such as Tesla’s Optimus, which is now stealing the limelight, will be deployed. Critics have not hesitated to call Optimus a demo robot whose ulterior purpose might be to boost Tesla’s stock. The Goldman Sachs report may highlight a possible economic benefit from the humanoid robot hype, but questions remain about how long the technology will take to reach commercialization, about ethical issues, and more.

Sure, movies have also portrayed robots as our aides, like C-3PO from Star Wars, Wall-E from Wall-E, or Baymax from Big Hero 6, but reality may not align with them. While humanoid robots like Grace make for good PR, we cannot ignore the uncanny valley effect, one of the most significant psychological phenomena and challenges in robotics. As we inch closer to making robots that are aesthetically and functionally near-copies of humans, we risk the discomfort of the uncanny valley. A future that promises a new age of humanoid robots also threatens to push us into a forced reality of living with these humanmade bipedal bots, leaving people feeling anxious, paranoid, and uneasy around them. As we perfect humanoid robots to make them more efficient, we also need to account for usability and the uncanny valley effect before signing up for mass adoption.


NITI Aayog’s Notion of Responsible AI


Due to the rising applicability of artificial intelligence, the government has decided to focus its efforts on several sectors that can embrace and benefit from it. NITI Aayog is a government body developing itself as a state-of-the-art center for resources and skills. It promotes research and policy development for the government while dealing with contingency issues. This body has now embarked on making AI responsible in the country.

In June 2018, NITI Aayog released a discussion paper named National Strategy for Artificial Intelligence (NSAI) as a part of its mandate entrusted in the Budget Speech of 2018-2019. The NSAI discussion paper highlighted the potential of artificial intelligence (AI) and its large-scale adoption in the country and made recommendations to ensure responsible utilization and management. The paper also included a roadmap for implementing AI in five public sectors and described “AI for All” as the guiding philosophy for upcoming AI design, development, and implementation in India.

Besides research, NITI Aayog has also collaborated with companies like Amazon Web Services and Intel to establish innovation centers. It set up the Experience Studio at its Delhi headquarters to facilitate innovation among industry experts, government stakeholders, and startups.

More recently, NITI Aayog has published a series of papers discussing “Responsible AI,” the practice of designing, developing, and using artificial intelligence with good intent. The series was promoted using the hashtag #AIForAll.

NITI Aayog published the first edition, “Principles for Responsible AI,” a two-part approach paper defining the ethical design, development, and use of artificial intelligence in India, along with enforcement methods for putting these principles into practice. This edition presents case studies and considerations in the context of ‘Narrow AI’ solutions, categorized as ‘systems considerations’ and ‘societal considerations.’ The former arise from system design and deployment methods, while the latter stem from ethical challenges raised by AI applications.

This edition also builds on Capgemini’s report highlighting that approximately 85% of organizations had ethical concerns about using AI. It discusses other relevant concerns like job loss due to automation, malicious intent that comes with technology, targeted propaganda, etc.

Read More: Google is Developing an AI App that Creates Images from Text

The second part of the Responsible AI series, “Operationalizing Principles for Responsible AI,” identifies a series of actions that organizations should adopt when embracing AI responsibly. Written in collaboration with the World Economic Forum Centre for the Fourth Industrial Revolution, the paper divides the necessary actions between the government and the private sector. A particular focus is placed on the government’s role in ensuring responsible AI adoption and managing the actions of the private sector.

NITI Aayog recently released the third edition of the series, “Responsible AI for All: Adopting the Framework – A use case approach on Facial Recognition Technology.” For this paper, NITI Aayog collaborated with the Vidhi Centre for Legal Policy and tested the principles and actions of the previous two releases on a first use case, Facial Recognition Technology (FRT). The paper describes FRT as a technology with several common uses across the country.

However, the technology is a debated topic both domestically and internationally because of its hazards to human rights, such as privacy. As part of its effort to make AI more responsible, the organization will work closely with the Ministry of Civil Aviation to launch the Digi Yatra Program, which will incorporate facial recognition (FR) and facial verification (FV) technologies to enhance the travel experience. FV will also be used at various airports for passenger identification, ticket validation, and other checks as operational requirements demand.
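To illustrate the distinction being drawn here: facial verification is a 1:1 check of a face against a claimed identity, while facial recognition is a 1:N search across a gallery. The sketch below shows both using cosine similarity over embeddings; real systems derive the embeddings from a trained face model, whereas the vectors and threshold here are stand-in assumptions.

```python
# Illustrative sketch: facial verification (1:1) vs. facial recognition (1:N).
# Real systems compute face embeddings with a trained network; the embeddings
# and the match threshold below are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Enrolled gallery: one embedding per registered passenger.
gallery = {pid: rng.normal(size=128) for pid in ["P001", "P002", "P003"]}
probe = gallery["P002"] + rng.normal(scale=0.05, size=128)  # face at the gate

# Verification: does the probe match the identity on the boarding pass?
THRESHOLD = 0.8  # assumed operating point
claimed_id = "P002"
print("verified:", cosine_similarity(probe, gallery[claimed_id]) >= THRESHOLD)

# Recognition: search the whole gallery for the best-matching identity.
best_id = max(gallery, key=lambda pid: cosine_similarity(probe, gallery[pid]))
print("best match:", best_id)
```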

Working in such a technical field and realizing its potential is a significant step forward for the government. The public sector is embracing AI not only to ensure that such technologies are used ethically and responsibly in the private sector but also to enhance its own services. There are many use cases where governments can apply artificial intelligence, from emergency services and public interaction to virtual assistants and more.


Key announcements at Google AI@ Event 2022 


Google held its inaugural AI@ event last Wednesday at the company’s Pier 57 offices in New York City to highlight its latest work in AI. The event focused on new advances and early research from Google’s AI and research teams in climate change and disaster management, generative AI, language translation, healthcare, responsible AI, accessibility, and creativity. Here is a rundown of the announcements made during the event.

Flood Hub

Google announced a new, improved platform called Flood Hub, which analyzes enormous meteorological data sets to show the potential for flooding in various countries. Google has been using AI to anticipate floods since 2018, warning users through Google Search and Maps, beginning with flood patterns in India’s Patna region. Three years later, an expanded version of the technology helped reach an estimated 23 million individuals in India and Bangladesh with 115 million flood alerts via Google Search and Maps. With the latest update, the feature is available in 18 additional countries: Brazil, Colombia, Sri Lanka, Burkina Faso, Cameroon, Chad, the Democratic Republic of Congo, Ivory Coast, Ghana, Guinea, Malawi, Nigeria, Sierra Leone, Angola, South Sudan, Namibia, Liberia, and South Africa. Google said that if Flood Hub performs as planned, it may be able to anticipate floods in affected regions up to seven days in advance.

Wildfire Detection System

In 2020, Google began providing a map feature that displays wildfire boundaries to users in the US in near real time, and in 2021 it introduced a wildfire layer to Maps, with the US retaining more granular tracking of specific incidents. Since July, the company has kept tabs on more than 30 significant wildfire incidents in the United States and Canada.

During the Google AI@ Event 2022, Google said it is introducing an improved AI-powered wildfire monitoring system to the United States, Canada, Mexico, and some regions of Australia. The system uses machine learning algorithms built on satellite data from NASA and the National Oceanic and Atmospheric Administration to track and forecast the development of wildfires in real-time. The feature’s initial focus is assisting first responders in deciding how to put out the fire effectively. Since the largest change is on the back end, users might notice few differences in how they use the product.

AI Test Kitchen for Imagen

In May, Google released its AI text-to-image model, Imagen, which uses diffusion models that create high-resolution images by mapping noise back to data. At the AI@ event, Google revealed that Imagen would be added to the AI Test Kitchen app for season 2, albeit only as a restricted demo with two modes: City Dreamer, which lets users create images of themed cities, and Wobble, which lets users build AI-created monsters from similar language prompts. Google uses the AI Test Kitchen app as a platform to test some of its AI models and solicit user input, aiming to minimize inadvertent harm by letting consumers see its technological prowess and offer feedback through these tightly constrained use cases. A condensed version of the company’s controversial LaMDA chatbot can also be found in the app.

AI Test Kitchen is now accessible in English for Android and iOS users in Australia, Canada, Kenya, New Zealand, the United Kingdom, and the United States.

Read More: How AI Image Generators are Compounding existing Gender and Cultural Bias woes?

Generative Video Content

For the first time, Google used Imagen Video and Phenaki to produce a long, coherent video from text prompts for the Google AI@ event. Imagen Video, essentially an extension of Imagen, is a text-to-video generative AI model that can create high-definition videos from text input; the text-conditioned video diffusion model can produce videos at a maximum resolution of 1280×768 and a frame rate of 24 fps. Like Imagen Video, Phenaki is a model capable of realistic video synthesis when given a series of textual prompts.

The video demo followed a single blue helium balloon, the subject of Google’s AI-generated, super-resolution video, as it moved across a park before hitting a stray giraffe. The video was interspersed with the relevant written prompts, shown on an adjacent screen every few seconds.

Speaking at the event, Google Brain principal scientist Douglas Eck noted that it is quite challenging to produce videos with both high quality and coherence over time. Movies, and any other media that employ pictures and video to tell a cohesive story, rely heavily on that mix of visual quality and continuity over time.

Wordcraft

Google is also paying attention to the generative side of conversational AI amid the explosive expansion of the market for visual synthetic media. The software titan said it had begun early-stage experimentation with Wordcraft, a text generator built on its LaMDA dialog system. In contrast to text-editing programs like WordTune or Grammarly, Wordcraft aims to aid in the creation of fiction rather than merely enhancing spelling and grammar. On the Wordcraft Writers Workshop website, a group of 13 authors has been using Wordcraft to create brand-new stories that you can read.

1000 Languages Initiative

At the event, Google announced its ambitious 1,000 Languages Initiative, an effort to create a single AI language model that supports the 1,000 most-spoken languages worldwide, including Luganda, Swahili, Lingala, and others. For this project, the company created a Universal Speech Model (USM) trained on more than 400 languages. To gather audio samples of various regional languages, Google is also collaborating with South Asian local governments, NGOs, and academic institutions.

Google already has a vast language portfolio, but it wants to go further. More than 7,000 languages are spoken worldwide, yet just a few are represented online today. The initiative will concentrate on improving that representation as AI models are trained.

Robots that write code

At the event, Google debuted an internally developed software tool, Code as Policies (CaP), that decreases the amount of work required to train a robot to carry out new jobs. CaP shows that large language models trained to write code can be repurposed to produce robot policy code in response to natural-language instructions. The objective is to enable robotic systems to write their own code, sparing human developers the trouble of going in and reprogramming things when new information comes in.

It’s available on GitHub under an open-source license.

CaP relies on Google’s PaLM-SayCan paradigm, which lets robots interpret open-ended human instructions and respond appropriately and safely in a physical setting. It expands on the PaLM-SayCan research by allowing language models to carry out complex robotic tasks using the full expressiveness of general-purpose Python code. With CaP, Google advocates prompting language models to write robot code directly. Apart from PaLM-SayCan, CaP also builds on previous work in automated code completion, like GitHub’s Copilot and OpenAI’s GPT-3.

In addition to writing new code, the tool may use software libraries, which are pre-packaged collections of code that perform common activities. CaP also uses third-party libraries and APIs to create the most appropriate code for a given situation, including supporting instructions in non-English languages and even emojis.

CaP’s capabilities were evaluated by Google researchers in a series of internal tests. In one experiment, the researchers examined whether CaP could instruct a robot to move toy blocks around a table. When given the directive to “arrange the blocks in a square around the middle,” CaP was able to produce code that enabled the robot to do just that.
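As a rough illustration of the idea, here is the kind of Python policy code a CaP-style system might emit for that instruction. The robot helpers (get_block_positions, move_block, get_table_center) are hypothetical stand-ins for a real robot API, not Google’s actual interface.

```python
# Sketch of generated policy code for "arrange the blocks in a square around
# the middle". The helper callables are hypothetical robot-API stand-ins.
import numpy as np

def arrange_blocks_in_square(get_block_positions, move_block, get_table_center,
                             side: float = 0.2):
    """Place each block on a corner of a square centered on the table."""
    center = np.asarray(get_table_center())        # (x, y) on the tabletop
    half = side / 2.0
    corners = [center + np.array([dx, dy])
               for dx in (-half, half) for dy in (-half, half)]
    blocks = get_block_positions()                 # {block name: (x, y)}
    for name, corner in zip(blocks, corners):
        move_block(name, tuple(corner))            # pick-and-place primitive

# Stub demo: print the planned placements instead of driving a real robot.
if __name__ == "__main__":
    positions = {"red": (0.1, 0.1), "blue": (0.4, 0.2),
                 "green": (0.2, 0.5), "yellow": (0.5, 0.5)}
    arrange_blocks_in_square(
        get_block_positions=lambda: positions,
        move_block=lambda n, p: print(f"move {n} -> {p}"),
        get_table_center=lambda: (0.3, 0.3),
    )
```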

Healthcare AI 

With a mission to offer accessible healthcare solutions, Google announced it is creating a low-cost ultrasound tool in collaboration with Northwestern Medicine to help nurses and midwives in underdeveloped areas without access to sonographers. Using an Android app and a portable ultrasound monitor, nurses and midwives in the US and Zambia are trialing a system that assesses a fetus’s gestational age and position in the womb. The AI application will assist medical professionals in collecting and interpreting ultrasound images, detecting problems early in pregnancy and enabling timely care.

At the event, Google also said it would expand its collaboration with caregivers and public health authorities to provide access to diabetic retinopathy screening using its Automated Retinal Disease Assessment (ARDA) technology. More than 150,000 people have already been screened by having a photo taken of their eyes with a smartphone.

Responsible AI

In an effort to reaffirm the company’s commitment to responsible AI, Google vice president of engineering research Marian Croak highlighted possible drawbacks of the technology on show at the event. These include concerns about prejudice and toxicity being amplified by algorithms, deepfakes further eroding faith in news, and false information making it difficult to tell truth from fiction. According to Croak, part of addressing this involves research that gives people more influence over AI systems, so that they can collaborate with the systems rather than letting the machine take complete charge.

Croak asserted that Google’s AI Principles prioritize people, the avoidance of harm, and safety over its standard economic objectives. She said Google conducts adversarial testing on a continuous basis, and its researchers establish quantitative benchmarks across every aspect of its AI technologies that can be evaluated and verified. These initiatives are carried out by a broad group of researchers, including social scientists, ethicists, and engineers.


Bengaluru Metro launched a QR ticketing service on WhatsApp 

WhatsApp and Bengaluru Metro Rail Corporation Limited (BMRCL) have recently launched Namma Metro’s first WhatsApp-based chatbot QR ticketing service.

The chatbot is integrated with Unified Payments Interface (UPI) payments on WhatsApp, allowing Namma Metro passengers to purchase tickets and recharge their travel passes within WhatsApp. As per BMRCL, the chatbot is the first transit service to enable end-to-end QR ticketing on WhatsApp. It is available to passengers in both English and Kannada.

The official Namma Metro app is readily available on the Google Play Store. Its services include purchasing QR tickets, recharging metro cards, providing feedback, and using the travel planner for nearest-station lookups, fare information, and train departure times, along with purchasing single-journey tickets via WhatsApp payments authorized with a UPI PIN. Easy cancellation and refunds are available on QR tickets, which are offered at a 5% discount off the token fare.

Read more: Password attacks rise to 921 every second globally: Microsoft 

The managing director of BMRCL, Anjum Parwez, said the WhatsApp-based QR ticketing service is meant to make travel more comfortable. Passengers have been using the service to plan their trips with QR tickets since November 1. BMRCL collaborated with WhatsApp to offer the QR ticket service, and no additional charge applies for purchasing tickets on either platform.


Password attacks rise to 921 every second globally: Microsoft 

As per Microsoft’s Digital Defense Report 2022, the volume of password attacks has risen to an estimated 921 attacks every second globally, a 74% increase in just one year.

According to the same report, attacks against remote management devices have also surged, with more than 100 million attacks detected in May 2022, a fivefold increase over the past year.

Microsoft’s defense report last year stated that it synthesized over 24 trillion security signals daily; that figure rose to 43 trillion in 2022. Microsoft uses sophisticated data analytics and AI algorithms to understand and protect against digital threats, attacks, and criminal activity.

Read more: Meta’s new AI solves International Math Olympiad problems

In 2021, Microsoft blocked 9 billion endpoint threats, 31 billion identity threats, and 32 billion email threats. To date, Microsoft has taken down more than 10,000 domains used by cybercriminals and 600 used by nation-state actors. As per the report, nation-state actors targeted the IT sector in 22% of cases, NGOs in 17%, the education sector in 14%, and government in 10%.

The report claimed that 93% of Microsoft’s ransomware incident response engagements revealed insufficient privileged access and lateral movement controls. The most common factors leading to weak protection against ransomware are weak identity controls, ineffective security operations, limited data protection, and missing multi-factor authentication.

Microsoft employs more than 8,500 security and threat intelligence experts, including engineers, researchers, data scientists, geopolitical analysts, investigators, frontline responders, cybersecurity experts, and threat hunters, across 77 nations.

Read more: NVIDIA collaborates with Mozilla Common Voice for Speech AI


Mayo Clinic Researchers Introduce a Novel ML-based Diffusion Model for Medical Imaging of the Brain


Mayo Clinic researchers have been looking beyond standard generative ML models for realistic medical imaging and have introduced denoising diffusion probabilistic models (DDPMs), an ML-based diffusion technique, for generating medical images. DDPMs are a relatively new class of generative ML models that enable the generation of labeled synthetic images.

Traditionally, generative ML models are used to learn from medical imaging data and generate realistic images that are not patient-specific. Researchers use these synthetic images to study medical conditions and abnormalities without compromising patient privacy. The drawback is that such models generate unlabeled imaging data, which is of only limited use in real-world applications.

Moreover, the volume of medical imaging data with specific abnormalities is considerably smaller than that with common pathologies. This results in insufficiently large and imbalanced imaging datasets for training the models, making them less accurate.

Read More: Researchers Propose a Novel AR Localization and Mapping Technique, ‘LaMAR’

Diffusion models are based on Markov chain theory and generate synthetic output by gradually denoising an image of pure Gaussian noise. The process is repeated step by step for every image, making these models significantly slower than other generative models. Nevertheless, diffusion models outperform the alternatives because they can extract more representative features from the input medical imaging data.
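As a minimal sketch of that "gradual denoising" (reverse) process, the loop below follows the standard DDPM sampling update; the noise-prediction network is an untrained placeholder, and the schedule values are illustrative rather than taken from the Mayo Clinic paper.

```python
# Minimal numpy sketch of DDPM sampling: start from pure Gaussian noise and
# denoise it step by step. predict_noise is an untrained placeholder; in the
# paper it is a trained network (conditioned on the slice and inpainting mask).
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # illustrative linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    """Stand-in for the learned noise-prediction network eps_theta(x, t)."""
    return np.zeros_like(x)

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 64))           # x_T: a pure-noise "image"

for t in reversed(range(T)):            # reverse (denoising) process
    eps = predict_noise(x, t)
    coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t])
    x = (x - coef * eps) / np.sqrt(alphas[t])
    if t > 0:                           # add noise at every step but the last
        x += np.sqrt(betas[t]) * rng.normal(size=x.shape)
```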

As part of this research, the team created a tool that can take 2D axial slices from the FLAIR sequence of a brain MRI and inpaint a pre-defined area of a slice with a realistic image. The inpainted region can represent components including tumors, the surrounding edema, or tumor-free brain tissue.


The diffusion model presented in “Multitask Brain Tumor Inpainting with Diffusion Models: A Methodological Report” will enable medical practitioners to add or remove tumoral and tumor-free tissue in brain MRI slices even with limited data. The researchers have also made the code available online.
