Friday, January 16, 2026

Top 10 Deepfake Apps for Android and iOS

Deepfake apps

Have you seen the video of Barack Obama calling Donald Trump a ‘complete dipshit’? It wasn’t real. It was a deepfake, a video manipulated by replacing a person’s voice and facial features with someone else’s. Recent advances in AI have led to a rapid increase in deepfake apps and videos in the public domain.

Deepfake technology uses deep learning, typically autoencoders or Generative Adversarial Networks (GANs), to create convincingly fake videos or images.
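The classic face-swap pipeline trains one shared encoder with a separate decoder per identity, then performs the swap by decoding one person’s latent code with the other person’s decoder. The toy NumPy sketch below illustrates only that shared-encoder, twin-decoder idea on synthetic vectors; it is a minimal linear stand-in, not a production deepfake model, and all data, dimensions, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": two identities represented as 8-D vectors around different means.
A = rng.normal(0.0, 0.1, (200, 8)) + 1.0   # identity A samples
B = rng.normal(0.0, 0.1, (200, 8)) - 1.0   # identity B samples

d, k = 8, 3                              # input and latent dimensions
We = rng.normal(0.0, 0.1, (d, k))        # shared encoder weights
Wa = rng.normal(0.0, 0.1, (k, d))        # decoder specialized to identity A
Wb = rng.normal(0.0, 0.1, (k, d))        # decoder specialized to identity B

def mse(X, W_dec):
    """Reconstruction error of X through the shared encoder and one decoder."""
    return float((((X @ We) @ W_dec - X) ** 2).mean())

loss_before = mse(A, Wa)

lr = 0.05
for _ in range(3000):
    for X, W_dec in ((A, Wa), (B, Wb)):
        Z = X @ We                            # encode
        err = 2.0 * (Z @ W_dec - X) / X.size  # d(MSE)/d(reconstruction)
        grad_dec = Z.T @ err                  # decoder gradient
        grad_enc = X.T @ (err @ W_dec.T)      # encoder gradient (chain rule)
        W_dec -= lr * grad_dec
        We -= lr * grad_enc

loss_after = mse(A, Wa)

# The "swap": encode a face of identity A, decode it with B's decoder.
swapped = (A[:1] @ We) @ Wb
```

Real deepfake models replace the linear maps with deep convolutional networks and add adversarial losses, but the swap mechanism is the same: a shared latent representation with identity-specific decoders.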

Although creating a realistic deepfake video on phones and conventional computers is challenging, various deepfake apps and websites are available for creating hilarious images and videos without violating anyone’s privacy.

Here are the 10 best deepfake apps and websites you can experiment with for creating funny videos and images:

1. Reface

Image Credit: Reface

Reface is one of the most popular deepfake apps on Android, letting users superimpose their face on photos, videos, and GIFs. Its built-in engine produces remarkably realistic fake pictures and videos; your friends and family may not even realize they’re fake. You can share your creations on social media and watch the reactions.

It was nominated for the Google Play Users’ Choice Awards in 2020. Reface also has a gender-swap feature that shows how you would look as another gender.

Android | iOS

2. ZAO

Image Credit: ZAO App

ZAO is a Chinese deepfake app that went viral across the web for its face-swapping feature. Much like Snapchat’s face-swap filter, ZAO lets two people switch faces, often with hilarious results. You can use ZAO to insert your face into your favorite TV shows and movies. This free deepfake app needs only one photo to explore thousands of trendy clothing styles, hairstyles, and makeup looks.

The app quickly became controversial over its privacy policy, which granted the creators rights to any images uploaded to the app. In effect, once you added your face, you could no longer stop the creators from using it for whatever purpose they liked. The resulting backlash was massive, and ZAO has since responded to the concerns and revised its original privacy policy.

Android | iOS

3. Faceswap

Image Credit: Faceswap

Faceswap is a top deepfake app for creating funny and surprising photos to share with friends and acquaintances. Besides delivering accurate results, its key strength is an easy interface with numerous functions. Its most distinctive feature is swapping faces with animals: you can trade faces with a brave tiger or a cute kitten.

Android | iOS

4. Celebrity Face Morph

Image Credit: Celebrity Face Morph

If you’re a huge fan of Wolverine, Jack Sparrow, Superman, or other famous figures, Celebrity Face Morph is the perfect deepfake app for you. You can choose a character from an enormous list, which includes sports celebrities, movie stars, animals, and politicians, and morph a snap of your face into them. The app uses AI neural networks and powerful image-recognition technology for instant morphing with no Photoshop skills required.

Android

5. Collage.Click Face Switch

Image Credit: Face Switch Collage.Click

Collage.Click Face Switch lets users swap faces and claims to be the best face-swap app on smartphones. Whereas typical face-swap apps simply cut a circle around your facial features and paste it into another image, GIF, or video, Collage.Click blends the copied features naturally into the target. The app is available for free on both Android and iOS devices.

Android | iOS

6. B612

Image Credit: B612

B612 is a famous photo editor app that has various editing functions and filters. However, B612 also allows users to create a deepfake image. You only have to capture a picture with your camera, and the app will combine your face with a selected photo. Since it’s a photo editing app, you can edit the resulting photo with dozens of different effects and filters. 

Android | iOS

7. Wombo

Image Credit: Wombo

Wombo is an AI-powered lip-sync deepfake app that transforms any face in a still image into a singing face, with a list of songs to choose from. The results look animated rather than realistic. Once you download and launch Wombo, add a selfie and pick a song; the app will work its magic and turn your selfie into a quirky singing clip.

Android | iOS

8. Jiggy

Image Credit: Jiggy

Jiggy is another deepfake maker for creating fake but fun content on social media. With Jiggy, you can animate still images with stickers, GIFs, or videos. You can use this bizarre and fun content to prank your friends or family. Besides a full-body swap, you’ll find over 100 GIFs and unique dances to animate your photos and share them among your friends. 

Android | iOS

9. Facemagic

Image Credit: Facemagic

FaceMagic is a deepfake video maker that lets you change faces in any video, image, or GIF in a few simple steps. You only have to upload a photo from your gallery, and the AI-powered app produces the face swap in a matter of seconds. Beyond face swaps, you can make a photo dance, morph your face or a friend’s face into celebrities, and swap genders.

Android | iOS

10. Anyface

Image Credit: Anyface

Anyface is a deepfake generator for surprising your friends with an unusual look. It animates your photos and brings them to life: just choose a selfie or photo, and Anyface works its magic. The app can also make photos talk using pre-loaded phrases, and it offers attractive filters to improve the image. Its straightforward interface and easy-to-use tools mean no editing skills are required.

Android


European Parliament votes for Ban on Facial Recognition: Why is it a Historic moment?

european parliament facial recognition ban
Image Source: The Verge

The benefits of emerging technologies are invariably accompanied by drawbacks that surface once they are deployed. Facial recognition arrived with the promise of better biometrics, and hence more effective security and monitoring, but it is now widely viewed as a technology that invades personal freedom. In what is being called a historic moment, the European Parliament has voted for a total ban on predictive policing and biometric mass surveillance.

The European Parliament stated in its explanation of the resolution that the use of AI by law enforcement currently poses a number of risks, including opaque decision-making, discrimination, intrusion into privacy, and challenges to personal data protection, human dignity, and freedom of expression and information.

These potential concerns are exacerbated in the sector of law enforcement and criminal justice, the European Parliament warned. According to the official statement, this is because mass surveillance using facial recognition may impair the presumption of innocence, the fundamental rights to liberty and security of the individual, and the rights to an effective remedy and a fair trial.

In the resolution passed on Tuesday, MEPs also decided to restrict AI systems such as the iBorderCtrl ‘virtual border agent,’ social scoring systems, the use of AI in court judgments, and private facial recognition databases such as the Clearview AI system. A majority of parliamentarians voted in favor of the comprehensive resolution on the use of artificial intelligence in criminal law and policing. The final vote passed 377-248, with 62 abstentions.

“This is a huge win for all European citizens,” said Petar Vitanov (S&D), the resolution’s author.

According to the MEPs, citizens should be monitored only when they are accused of committing a crime. They expressed concern about algorithmic bias in AI and concluded that both human supervision and legal safeguards are essential to combat it. The lawmakers cited evidence that AI-based identification systems misidentify minority ethnic groups, LGBTI+ individuals, seniors, and women at higher rates. As a result, they believe algorithms should be transparent, traceable, and well documented, with open-source alternatives used whenever possible.

Though Parliament passed the resolution, it is not legally binding. However, it comes as the European Union is drafting new AI rules that would affect both the public and private sectors.

Brando Benifei (S&D), the primary negotiator for the AI Act, has asked for a total prohibition on facial recognition, as have practically all of his co-negotiators from other political parties in Parliament. This is in stark contrast to measures established in several EU member countries, eager to employ these technologies to strengthen their security forces.

The use of facial recognition software has indeed escalated since the onset of the pandemic. However, the software was already popular years before the pandemic, and its shortcomings have been well documented. Critics have exposed how police departments and government agencies across the globe have misused enormous databases of images in investigations.

On one hand, companies have ceased or reduced their use of the technology in light of evidence that it performs worse on people of color. On the other, it continues to spread: some federal agencies aim to use it more, and it is being deployed everywhere from retail malls to accommodation venues.

Read More: UN warns that Artificial Intelligence can Threaten Human Rights

It’s encouraging to see major international organizations like the EP working to rein in unrestrained AI-powered facial recognition systems. Privacy and respect for citizens’ basic rights should be the utmost priorities when deciding how, or whether, this technology is rolled out to the rest of the world, and it appears that Parliament agrees. MEPs argue that facial recognition isn’t ready for prime time and that a robust legislative framework to protect personal privacy must be created before police can consider deploying it.

The European Union executive had proposed draft legislation in April to regulate high-risk applications of artificial intelligence technology, which included a ban on social scoring and a blanket restriction on the use of remote biometric surveillance in public.

However, civil society, the European Data Protection Board, the European Data Protection Supervisor, and a number of MEPs swiftly expressed their dissatisfaction with the Commission’s plan.

Meanwhile, in the United States, there is no federal policy on the legal use of facial recognition technology. While Congress has vented its outrage at the CEOs of Facebook and Google over various issues, legislators have yet to translate that anger into meaningful privacy legislation, let alone legislation that addresses biometric monitoring.

However, passing privacy legislation alone won’t suffice, as critics fear Silicon Valley lobbyists will exert excessive influence over its content. The government might then pass a bill that still permits misuse of facial recognition software by law enforcement, federal agencies, and the corporations that scavenge private data. Even though China has created a national set of guidelines on the ethical use of AI, they remain full of loopholes.

Because the ‘Artificial Intelligence Act’ is still being discussed and developed, the European Parliament’s resolution serves mainly as a catalyst at this stage: MEPs are signaling what will be acceptable and where further protections should be implemented.

The European Parliament’s decision on AI mass surveillance comes only days after UN High Commissioner for Human Rights Michelle Bachelet called for a prohibition on the deployment of AI systems that violate human rights rules.

According to Bachelet, the sale of AI systems, like face recognition systems that monitor individuals in public areas, should be postponed until appropriate anti-violation procedures are in place.


Tata Power teams up with Bluwave-ai for AI automation of real-time electricity operations

tata power and bluwave-ai
Image Credit: Analytics Drift Team

Tata Power, one of India’s major integrated power utilities, has announced a three-year commercial deal with BluWave-ai, the world’s first renewable energy artificial intelligence firm. Tata Power reached the agreement following a successful trial project in which the BluWave-ai cloud platform was used to create intra-day and day-ahead dispatches for its power scheduling operations.

India has enacted legislation requiring accurate energy scheduling and introducing a real-time market to facilitate the integration of renewable energy into the national grid. As a result, electricity distribution firms now face harsh fines if they deviate from their projected energy use, a risk exacerbated by inaccurate power scheduling. As a leader in its field, Tata Power has opted to use artificial intelligence (AI) to optimize power scheduling and thereby address the new regulatory changes.

The company has implemented several AI solutions, including the Central Control Room for Renewable Assets (CCRA), which employs machine learning for loss estimation, forecasting, and alerts and notifications.

Tata Power’s Coastal Gujarat Power Limited (CGPL) and Maithon Power Plant (MPL) facilities employ pit-to-plant Coal Supply Management (Coal SCM) and Management Strategic Review (MSR) systems to optimize coal supply and order inventory management. The company’s Mumbai Distribution team has also introduced a sentiment analysis tool for email categorization and routing, which will aid in the proactive evaluation of consumer demands.

Mr. Sanjay Banga, President, T&D, Tata Power, said the company is working with BluWave-ai to operationalize artificial intelligence in its day-to-day power distribution in Mumbai. “Working with AI-enabled system improvements via cloud computing in real-time operations enhances our baseline systems, resulting in higher operational efficiency and accuracy,” he explains.

Aside from BluWave’s forecast, the PSCC team has developed and deployed change overload prediction and RTM optimization, which use neural networks and linear programming, respectively, to ensure optimal power purchase and hence keep power purchase costs at the optimum levels.
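BluWave-ai’s and the PSCC team’s optimization methods are proprietary, so purely as an illustration of what “optimal power purchase” means, the sketch below implements a simple greedy merit-order dispatch that fills demand from the cheapest sources first. The source names, capacities, and costs are invented for the example.

```python
def dispatch(demand_mw, sources):
    """Greedy merit-order dispatch: fill demand from the cheapest sources first.

    sources: iterable of (name, capacity_mw, cost_per_mwh) tuples.
    Returns a dict mapping source name -> MW scheduled.
    """
    plan, remaining = {}, demand_mw
    for name, capacity, cost in sorted(sources, key=lambda s: s[2]):
        take = min(capacity, remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
    if remaining > 1e-9:
        raise ValueError("insufficient capacity to meet demand")
    return plan

# Illustrative (invented) sources: name, capacity in MW, marginal cost.
sources = [("coal", 100, 4.5), ("solar", 50, 2.5), ("hydro", 30, 3.0)]
plan = dispatch(120, sources)   # cheapest first: solar, then hydro, then coal
```

A real dispatch optimizer must also handle forecast uncertainty, ramp limits, and network constraints, which is where the neural-network forecasts and linear programming mentioned above come in.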

“Our team at BluWave-ai has sought out innovative early adopters of complex AI technologies to onboard our products. We have focused on working with leading global energy companies, such as Tata Power, to build the world’s premier AI cleantech company,” said Devashish Paul, CEO of BluWave-ai.

He went on to say that the company’s solutions in Canada handled the first real-time AI electric utility dispatch and stand-alone industrial applications. BluWave-ai quickly trained its platform for the Indian market by integrating real-time data from Tata Power Mumbai DISCOM with its operations. Customers can use this technology to get a significant financial benefit from the 35,000+ annual electricity dispatches.

Read More: Indian Artificial Intelligence Market to value at US$7.8 Billion says IDC

Since February 2020, the BluWave-ai software-as-a-service (SaaS) platform has been operational around the clock, 365 days a year. Its grid energy optimization platform balances the cost, availability, and reliability of different energy sources, both renewable and non-renewable, against energy demand in real time. It also adjusted quickly to last year’s massive COVID-related shutdowns and subsequent Mumbai lockdowns. The current agreement is for three years, with a five-year extension option.

Recently, Tata Power and BluWave-ai were recognized for their leadership in the Utility Distribution category with an India Smart Grid Forum (ISGF) top-level ‘Diamond Trophy.’ The award acknowledges the benefit of BluWave-ai’s method for forecasting energy consumption load at Tata Power.


How StockEdge uses AI to provide Enhanced Stock Investment Experience to Customers

StockEdge AI Stock Investment

The Indian stock market has been witnessing an all-time boom since the pandemic, with Indian citizens increasingly open to investing in stocks. A recent SBI report noted that over 44.7 lakh new retail investor accounts were opened during the first two months of 2021, and the total number of individual investors in the country hit a record 142 lakh this year.

The rising number of individuals willing to invest in the stock market has allowed many technological solutions and platforms to enter the consumer market to aid people in investing effortlessly.

Companies are now using artificial intelligence and machine learning technologies to provide predictive analytics services to customers for giving them a better understanding of the possible performance of stocks before making an investment. 

Read More: Introducing LEO: Caltech’s Humanoid Robot can Fly, Walk, and Jump

Most of the available platforms generate real-time stock analysis that consumers use to decide whether or not to invest in a stock. However, not every individual can devote their whole day to checking trends and making accurate investing decisions.

StockEdge targets precisely those individuals who need a platform where they can find accurate stock signals, consolidated at the end of the day. StockEdge is an online research and analytics platform that provides its users with the right learning resources and impactful data analytics to help them make independent investment decisions.

StockEdge is a Kolkata-based fintech company founded by Vinay Pagaria, Vivek Bajaj, and Vineet Patawari. The company specializes in providing ‘end of the day’ analytics that help customers effectively invest in the stock market without the need to track stock signals 24/7. 

To better understand the use of technologies like artificial intelligence and machine learning in stock analytics, we connected with the Co-Founder and CTO of StockEdge, Vinay Pagaria. “Our motto as a company is to simplify finance for our users and take the jargon away,” said Pagaria.

StockEdge provides various technical and fundamental analysis tools, mainly focused on statistical studies, that give customers an easy understanding of indicators such as moving averages, the relative strength index, and financial ratios. StockEdge runs the available data through its defined filtering mechanisms to generate accurate technical and fundamental analytics, shifting the entire process to its back-end tech for an easy-to-use, seamless customer experience.

For example, users can search “Stocks with high Relative Strength Index,” and the StockEdge application will automatically analyze stocks and show users the required results. “I have always felt that good analytics will always be a combination of artificial intelligence and human intelligence,” said Vinay Pagaria. 
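As an illustration of the kind of screen described above, here is a minimal sketch of the textbook Relative Strength Index calculation (Wilder’s smoothing) plus a “high RSI” filter. This is the standard formula, not StockEdge’s internal implementation, and the function names and threshold are hypothetical.

```python
def rsi(closes, period=14):
    """Relative Strength Index using Wilder's smoothing (textbook formula)."""
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for gain, loss in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + gain) / period
        avg_loss = (avg_loss * (period - 1) + loss) / period
    if avg_loss == 0:
        return 100.0            # no losses in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

def high_rsi(universe, threshold=70.0, period=14):
    """Screen a {name: [closing prices]} universe for RSI above a threshold."""
    return [name for name, closes in universe.items()
            if rsi(closes, period) > threshold]
```

An RSI above 70 is conventionally read as “overbought” and below 30 as “oversold,” which is the sort of result such a search would surface.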

StockEdge firmly believes that human intelligence should drive artificial intelligence in order to generate better outcomes. The company currently uses AI for two main purposes: first, to deliver better analytics to customers, and second, to analyze user behavior.

StockEdge uses artificial intelligence and machine learning to analyze vast amounts of gathered data related to the stock market and generate unified results. For instance, the Indian stock market allows companies to report the names of their investors in a manually typed format, which creates confusion as an investor’s name can be spelled or written differently by two companies. This issue makes it difficult for individuals to analyze the market.

StockEdge’s artificial intelligence algorithms group these related entries and club them into a single entity, giving customers a consolidated view of each investor’s portfolio and actions. Earlier, StockEdge used a rule-based approach to provide this unique feature, but it eventually integrated artificial intelligence solutions to automate the process of identifying and clubbing related entities.

Vinay Pagaria said, “Once we did it manually, we had a lot of data with which we could train the AI algorithm.” This feature reduces the time StockEdge spends on analytics while delivering close-to-accurate, insightful results to its customers. It also makes StockEdge stand out from the competition, as no other platform uses artificial intelligence algorithms in this way.
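StockEdge’s trained model is proprietary, but the earlier rule-based approach the article mentions can be sketched as a simple name-normalization step: strip punctuation and honorifics, sort the tokens, and club names that share the same canonical key. The names and the honorific list below are purely illustrative.

```python
import re
from collections import defaultdict

# Honorifics to strip; this list is illustrative, not StockEdge's actual rules.
HONORIFICS = {"mr", "mrs", "ms", "shri", "smt", "dr"}

def normalise(name):
    """Canonical key: lowercase, drop punctuation and honorifics, sort tokens."""
    tokens = re.sub(r"[^a-z ]", " ", name.lower()).split()
    return " ".join(sorted(t for t in tokens if t not in HONORIFICS))

def club(names):
    """Group name variants that normalise to the same canonical key."""
    groups = defaultdict(list)
    for name in names:
        groups[normalise(name)].append(name)
    return list(groups.values())
```

Rules like these break down on misspellings and abbreviations, which is exactly where a model trained on the manually clubbed data, as Pagaria describes, takes over.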

The fintech firm also provides a sentiment analysis tool that assigns scores to companies’ concall summaries, helping customers decide which reports to focus on.

StockEdge also uses artificial intelligence and machine learning tools to analyze and predict customer behavior and deliver a personalized user experience. It gathers data from the actions users take in the application and uses it to send personalized recommendations, reminders, and notifications.

The company has developed an artificial intelligence algorithm that analyzes critical customer information like past purchases, activity, and more to forecast the chances of customers renewing their StockEdge subscriptions. As the StockEdge application is a subscription-based platform, this artificial intelligence tool helps the sales team to focus more on customers who are more likely to make a purchase. This helps the company effectively utilize time in order to increase sales. 

StockEdge also offers a social platform service to its customers named StockEdge Club that allows customers to interact with the company’s analytics team. This feature helps customers learn more about stocks and the stock market. It is a stock market-oriented community that lets users share learnings and ideas related to investments. 

When asked about stock price prediction tools, Vinay Pagaria said that the company had earlier tried multiple algorithms for predicting stock price, but they achieved mixed results. He said, “It is not that simple and straightforward that artificial intelligence does everything and tells every morning what to buy and what to sell.” According to him, the stock price prediction AI algorithm is still a very open-ended subject across the world, and more research is needed to achieve high-accuracy results that can be shared with the retail audience. 

“We do not give direct tips in terms of buying and selling. We have always restricted ourselves to be a knowledge and unbiased research tool provider, which people can trust,” he added. 

While talking about projects in the pipeline, Vinay Pagaria mentioned that the company is working on algorithms that can be used to score stocks using various existing parameters. This feature will help StockEdge categorize stocks based on the scores to help customers make informed investment decisions. 

StockEdge is committed to providing a transparent learning and research service to its customers by continuously evolving with time to help users invest in the Indian stock market. 


Introducing LEO: Caltech’s Humanoid Robot can Fly, Walk, and Jump

caltech leonardo
Source: Caltech

Caltech researchers have created a bipedal robot that combines walking and flying to produce a new style of locomotion, making it extremely agile and capable of complex motions. Dubbed LEONARDO (LEO for short), it is a bipedal robot with propeller thrusters that allow it to balance well and move quickly.

The project’s goal is to look at the confluence of walking and flying from the standpoints of dynamics and control, in order to “offer unparalleled walking capabilities and address the difficulties posed by hybrid motion,” as the project’s engineers put it.

LEO can undertake a wide range of robotic activities, including high-voltage line inspection and monitoring of high bridges, thanks to its combination of walking and flying abilities.

Weighing just 2.58 kg and standing 0.75 meters tall, LEO is primarily made of carbon fiber. It is light enough for its drone-like thrusters to lift it off the ground, and it can walk on a slackline, hop on a trampoline, and even skateboard. LEO is the first robot to achieve precise balance control through the combined use of multi-joint legs and propeller-based thrusters.

LEO consumes 544 watts while walking on the ground: 445 watts for the propellers and 99 watts for the electronics and legs. Its power usage nearly doubles while flying, but flight is much faster: the robot has a cost of transport (a measure of self-movement efficiency) of 108 when walking at 20 cm/s, dropping to 15.5 when flying at 3 m/s.
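These efficiency figures can be sanity-checked with the standard definition of cost of transport, CoT = P / (m·g·v). A quick sketch using the numbers reported above (the value of g and the rounding are my assumptions):

```python
def cost_of_transport(power_w, mass_kg, speed_ms, g=9.81):
    """Dimensionless cost of transport: CoT = P / (m * g * v)."""
    return power_w / (mass_kg * g * speed_ms)

# 544 W walking at 0.20 m/s with a 2.58 kg body: close to the reported 108.
walking_cot = cost_of_transport(544.0, 2.58, 0.20)

# Conversely, a CoT of 15.5 at 3 m/s implies roughly 1.2 kW of flight power.
flying_power_w = 15.5 * 2.58 * 9.81 * 3.0
```

The walking figure reproduces the article’s value to within rounding, which is a useful cross-check on the reported power budget.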

This version of LEO differs substantially from the one we originally met two years ago. Most significantly, instead of “LEg ON Aerial Robotic DrOne,” “Leonardo” now stands for “LEgs ONboARD drOne.”

Future Version of LEONARDO

The redesigned LEO now has considerably lighter legs driven by lightweight servo motors, allowing it to fly. The two coaxial propellers have been replaced with four slanted propellers, enabling attitude control in all directions, and the four tilting electric thrusters are timed to let LEO make jumps. Everything, including computers, batteries, and a new software stack, is now onboard, which means no complicated wiring and full autonomy.

According to Caltech, LEO is still a prototype, a proof of concept to determine whether a bipedal flying robot can execute tasks that would be difficult or impossible for ground robots or aerial drones on their own. Researchers envision that a full-fledged version might take on tough or risky jobs in the foreseeable future, such as inspecting and repairing damaged infrastructure, installing equipment in hard-to-reach locations, and responding to natural catastrophes and industrial accidents. A robot similar to LEO might eventually deliver sensitive equipment to the surface of a celestial body such as Mars or Saturn’s moon Titan. On the grim side, the agile bipedal flyer might also be employed for defense or warfare.

The Caltech team also hopes to develop adaptable landing gear for VTOLs (vertical take-off and landing) on challenging terrain. Researchers are already considering methods to improve LEO’s energy efficiency by redesigning the legs so that it relies less on the propellers for balance while walking. The team is also focusing on making it more self-aware so that it can analyze its surroundings and choose the best course of action. Further, they are working on improving the performance of the LEO by creating a stiffer spindle design capable of supporting more robot weight and increasing propeller thrust. 

Read More: Xiaomi Launches new open source Quadruped robot CyberDog

Soon-Jo Chung, the corresponding author of the paper published this week in the journal Science Robotics, noted in a statement, “We drew inspiration from nature.” “A complex yet intriguing behavior happens as birds move between walking and flying,” said Chung, who is also Bren Professor of Aerospace and Control and Dynamical Systems.

Chung explains that the manner in which a human in a jet suit controls their legs and feet when landing or taking off is comparable to how LEO employs synchronized control of distributed propeller-based thrusters and leg joints. From a dynamics and control aspect, the Caltech team wanted to investigate the interface between walking and flying.

Bipedal robots can navigate complex, real-world terrain using the same actions as humans, such as jumping, jogging, and even climbing stairs, though rough terrain still hinders them. Flying robots avoid contact with the ground entirely, but face their own limitations, including high energy consumption in flight and limited payload. LEO aims to close the gap between aerial and pedestrian locomotion, two realms rarely combined in present robotic systems.


Microsoft demonstrates face analysis with synthetic data alone

face analysis with synthetic data

Microsoft has published a study, ‘Fake it till you make it: face analysis with synthetic data alone,’ demonstrating that facial analysis algorithms can be trained entirely on synthetic data before being used in real-life scenarios. According to the software giant, the face biometrics scientific community has long employed graphics-based synthesis of training data.

However, the paper argues for a new method to bridge the domain gap between the real and synthetic applications when considering human faces. Microsoft’s new research synthesizes data by combining a procedurally-generated parametric 3D face model with a comprehensive library of hand-crafted assets to circumvent the issue. These assets render diverse training images and high realism.

Traditionally, researchers have employed a combination of data mixing, domain adaptation, and domain-adversarial training. The new process combines synthetic data with hand-crafted assets that generate rich labels that are otherwise impossible to label by hand. Researchers also have complete control over variation and diversity in a data set.

Read more: Byju’s launches innovation hub: Byju’s Lab

The procedurally constructed synthetic 3D faces are based on an initial face template; they are realistic and expressive, and are then combined with randomized expressions and textures. The researchers assembled a training dataset of 100,000 synthetic face images and evaluated the synthetic data on face analysis tasks such as face parsing and landmark localization.

The trained networks never saw a single real image, and researchers used label adaptation to minimize human-annotated labels. According to the Microsoft team, the major difficulty was converting the jawline in 3D facial images into a 2D face outline.


ManCorp Innovation Labs: Digitizing the course of Justice in India

ManCorp innovation lab

In the first week of April 2021, the apex court of India unveiled an AI-powered initiative called SUPACE, the Supreme Court Portal for Assistance in Courts Efficiency. SUPACE addresses one of the country’s most widespread problems: the massive backlog of pending cases at all levels of courts. It helps judges and law clerks quickly extract information from large amounts of legal data in their research.

“The role of AI will be the collection and analysis of data. It will process the facts and make it available to the judges seeking input for the decision. We will not let it spill over to decision making,” the then Chief Justice of India SA Bobde said during the launch.

ManCorp, a startup based in Pune, Nagpur, and Delhi, helped build this AI system. ManCorp leverages technology to simplify the justice delivery process and has conducted pilot projects in the High Courts of Patna, Bombay, and Jharkhand. In Patna, it built an AI solution to help allocate cases; in Bombay, a system for digitizing handwritten and printed text with Optical Character Recognition; and in the Jharkhand High Court, a chatbot named Jharkhand Samvad.

ManCorp Innovation Labs was founded in 2018 by Manthan Trivedi, Vishnu Gite, and Rathin Deshpande. The company started as a deep-tech think tank and evolved into an R&D lab. Although it has a strong foothold in Indian government organizations, its primary focus is B2B.

A new smart-solutions product from ManCorp is an integrated system that brings together multiple tools, such as a PDF reader, Microsoft Word, and Zoom or Google Meet, on one platform. ManCorp is also working with the Income Tax Appellate Tribunal on an automated defect detection system, which aims to automate checks on income tax filings so that mistakes can be quickly flagged and rectified.

TESLA receives Fresh Patent for Self Improving Neural Network

Tesla neural network processor

Tesla recently received a patent for what it claims is a method of building neural networks (NNs) capable of "self-improvement." The patent, titled 'System and method for handling errors in a vehicle neural network processor,' proposes a technique for a neural network to identify errors that occur during its execution. The processor can receive an error report from the error detectors and then signal that a pending neural network result is contaminated, all without the NN's execution being terminated.

In other words, Tesla has patented a method for a neural network to detect and flag a mistake. The patent is a continuation of a 2019 patent application of the same name, which outlines a method for safely handling mistakes in self-driving software. Instead of risking delays in driving responses due to data input mistakes, a signal is issued to ignore the incorrect data and continue processing as normal.

Streams of real-time input data are received during self-driving operations in Tesla’s software and utilized to train its neural network as well as trigger a vehicle response to what’s being analyzed. If data is incorrect or the processing is delayed, the real-world consequences might be devastating if not managed appropriately. Sensor data might get stale quickly in a fast-moving vehicle, causing the self-driving software to respond to an environment that no longer exists. Accidents, property damage, injury, and death may occur as a result of this. The approach proposed in Tesla’s patent application aims to eliminate such processing delays entirely, therefore improving the system’s safety.

Tesla's vision is to modernize and advance deep learning technologies for autonomous cars. These systems have historically relied on data recorded via sensors, and Tesla recognizes that more sensors are required as the data gets more nuanced. The captured sensor data is generally used as input for the deep learning algorithms that perform autonomous driving. In traditional learning systems, the sensor data is made compatible with the deep learning system by transforming it from the sensor's format into a format accepted by the system's first input layer. This process introduces another major bottleneck, however: compression and downsampling can degrade the signal quality of the original sensor data.
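The format conversion described above, reshaping raw sensor output into the layout a network's first layer expects, typically looks something like the following generic sketch. The layouts and normalization here are illustrative assumptions, not Tesla's actual pipeline.

```python
import numpy as np

def sensor_to_input(frame: np.ndarray) -> np.ndarray:
    """Convert a raw HxWxC camera frame (uint8) into a normalized
    CxHxW float tensor, the layout many network input layers expect."""
    assert frame.dtype == np.uint8 and frame.ndim == 3
    x = frame.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
    x = np.transpose(x, (2, 0, 1))         # HWC -> CHW channel ordering
    return x

# Toy 4x6 RGB frame with a saturated red channel.
frame = np.zeros((4, 6, 3), dtype=np.uint8)
frame[..., 0] = 255
tensor = sensor_to_input(frame)
print(tensor.shape)  # (3, 4, 6)
```

The patent's point is that every such conversion step (and any compression or downsampling folded into it) adds latency and can discard information from the original sensor signal.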

Meanwhile, designing a neural network for a specific application can be challenging since various neural networks may have varied hardware and software needs, imposing complicated limitations on setups. Developers must draw inferences depending on the available alternatives for each configuration setting, such as which algorithms to implement, which data layout to utilize, and so on.

Read More: Tesla AI Day Announcements: What to Expect from the AV Leader

In the patent summary, Tesla offers insight into the invention by giving several examples. One describes a mechanism for dealing with neural network faults: the neural network processor contains an error detector configured to identify a data error associated with the network's execution. As mentioned earlier, the neural network controller can receive the data error report from the error detector and, upon getting that report, indicate that the pending result is tainted without stopping the execution of the neural model.

In another scenario, the system's neural network processor runs a neural network related to autonomous vehicle operation, and an interrupt controller linked to the processor handles interrupt requests from various sources. The interrupt controller may receive the error signal from the neural processor, and the associated data can then be accessed in a variety of ways.

The final example presents a solution for addressing failures in a vehicle's neural network processor. It includes the following steps:

  • Receiving an error report when the vehicle's neural network processor encounters an error while operating the vehicle.
  • Using the error report to determine the kind of error.
  • In response, identifying the data associated with that error.
  • Signaling that a pending result of the vehicle's neural network processor is corrupt while allowing the processor to continue operating.
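In rough terms, the flow the patent describes, flagging the pending result as tainted while execution continues, might look like the following toy sketch. The class and method names are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class PendingResult:
    value: float
    tainted: bool = False

@dataclass
class NNProcessor:
    """Toy model of the patent's flow: an error detector reports a data
    error, the controller marks the pending result tainted, and execution
    continues instead of aborting."""
    results: list = field(default_factory=list)

    def run(self, inputs):
        for x in inputs:
            result = PendingResult(value=x * 2.0)  # stand-in for NN inference
            if self.detect_error(x):
                # Signal corruption without terminating execution.
                result.tainted = True
            self.results.append(result)

    @staticmethod
    def detect_error(x) -> bool:
        # NaN or an implausibly large sensor reading counts as a data error.
        return x != x or abs(x) > 1e6

proc = NNProcessor()
proc.run([1.0, float("nan"), 3.0])
print([r.tainted for r in proc.results])  # [False, True, False]
```

The key property is that the bad middle input does not halt the loop: downstream consumers see a result marked as corrupt and can discard it, while fresh sensor data keeps being processed.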

Byju’s launches innovation hub: Byju’s Lab

Byju's launches innovation hub

Today, Byju's launched an innovation hub called Byju's Lab to develop cutting-edge technologies, incubate new ideas, and deliver breakthrough solutions. The hub will operate out of the United Kingdom, the US, and India. Byju's Lab aims to assemble a productive team of high-caliber individuals to make tech-enabled education transparent and accessible to a large number of people.

The Lab will create entry-level and experienced jobs for Machine Learning (ML) and AI professionals. Byju’s wants to make digital learning more engaging, interactive, democratized, and personalized and empower students to become lifelong learners. 

Byju's is now a global company thanks to steady growth and international expansion. With the launch of Byju's Lab, it is looking to hire a diverse set of candidates in the UK, US, and India, harnessing the global talent pool and new technologies to positively impact digital learning experiences across the world.

Read more: Google’s DeepMind Makes a Profit for the First Time

Dev Roy, Chief Innovations and Learning Officer, Byju’s said, “As we continue to grow and experiment, we will operate at the intersection of business and technology to make innovation real and relevant for our end-customers. We are looking at strengthening our team and look forward to working with bright and curious minds to transform the way children learn.” 

Meanwhile, according to regulatory documents, Byju's has raised about ₹2,200 crores in funding from a clutch of investors, including XN Exponent Holdings Ltd, Oxshott Venture Fund X LLC, and others. Funding from a slew of investors has placed Byju's, a major edtech company, among the most-valued start-ups in the country. Byju's has 6.5 million paid subscribers and over 100 million registered students. With the surging COVID-19 pandemic, the edtech space has seen strong growth in India and globally.

EPFL open sources ‘deepImageJ’ plugin for Deep Learning–based image analysis

deepImageJ plugin

For the past five years, techniques to analyze images have evolved drastically, from traditional mathematical and observation-based methods to data-driven processing and artificial intelligence. A team of engineers from EPFL and Universidad Carlos III de Madrid (UC3M) has developed a plugin that brings artificial intelligence into image analysis, primarily for life-science research. The plugin, called 'deepImageJ,' is described in a paper in Nature Methods, and the work was conducted under EPFL's Center for Imaging.

Major developments in image processing have made the detection and identification of valuable information more accessible, faster, and automated. In the life sciences, deep learning models have gained increasing attention for bioimage analysis. However, implementing them often requires coding skills, such as Python, which limits who can apply them to new problems. To make image analysis effortless, experts from EPFL and UC3M, in association with EPFL's Center for Imaging, developed deepImageJ, an open-source plugin.

Deep-learning models are effective in many domains that rely on imaging for diagnostics and drug development. For instance, deep learning can be used in bio-imaging to process vast collections of images, detect anomalies in organic tissue, detect synapses between nerve cells, and determine the structure of cell membranes and nuclei. 

In biomedical research, the plugin is ideal for image classification, identifying specific elements, and predicting experimental results. Deep learning models are artificial neural networks trained for a specific research purpose, with the trained network saved as a computer model. Typical applications resemble CCTV systems that perform facial recognition or mobile cameras that enhance photos.
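The workflow deepImageJ packages up, load an image, run a saved model, get back a label map, can be approximated with a generic sketch. Here a single hand-made averaging filter plus a threshold stands in for a trained segmentation network; nothing below is deepImageJ's actual API.

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Naive 'same'-padded 2D convolution (cross-correlation)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# "Saved model": a 3x3 averaging filter followed by a threshold,
# a stand-in for a real trained segmentation network.
kernel = np.full((3, 3), 1 / 9.0)

image = np.zeros((8, 8))
image[2:5, 2:5] = 1.0          # a bright blob, e.g. a cell nucleus

mask = convolve2d_same(image, kernel) > 0.5   # binary label map
print(mask.sum())              # pixels labeled as belonging to the blob
```

The point deepImageJ makes is that a biologist should only ever see the first and last steps, open an image and inspect the resulting label map, with the model itself distributed as a ready-to-run bundle.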

Read More: Deep learning framework will enable material design on unseen domain

Training these models requires a prior understanding of complex mathematics and machine learning algorithms, experience most life scientists do not have. To assist them, a consortium of European researchers has started developing a repository of pre-trained deep learning models for biomedical imaging. The deepImageJ plugin bridges the gap between artificial neural networks and researchers, breaking down this barrier for life scientists.

As an open-source plugin, deepImageJ will speed the dissemination of new developments in biomedical research. As a collaborative effort, it brings engineers, computer scientists, mathematicians, and biologists together. EPFL's Center for Imaging took on the challenge of building a system that makes it easier for life scientists to contribute. The project is currently headed by Daniel Sage and Michael Unser, the Center's academic director, together with Arrate Barrutia, associate professor at UC3M. Many researchers have already started developing virtual seminars, training materials, and online resources to help users exploit the full potential of AI.
