
Indian Government launches Face Recognition system for Pensioners

Indian government face recognition pensioners

The government of India recently announced the launch of a new face recognition system that will be used to issue digital life certificates. The newly launched system will help pensioners and elderly individuals receive their pensions quickly. 

According to the government, the face recognition system will improve the ease of living for senior citizens. The new system was unveiled by the Minister of State for Personnel Jitendra Singh on Monday. The system will be particularly useful for senior citizens who are unable to submit fingerprints as biometric proof or have difficulty doing so. 

The Ministry of Personnel, Public Grievances and Pensions mentioned that the face recognition system would benefit 68 lakh retired government employees, along with workers under the Employees' Provident Fund Organisation and state governments. 

Read More: Last few days to apply for Generation Google Scholarship for Women in the Asia Pacific region

Minister Jitendra Singh said, “The central government has been sensitive to the needs of pensioners and to ensuring the ease of living for them. Soon after coming to power in 2014, the government decided to introduce and implement digital certificates for pensioners. This unique technology will further help pensioners.” 

He also thanked the Ministry of Electronics and Information Technology and the Unique Identification Authority of India for developing this face recognition technology. The launch marks the introduction of standard software named Bhavishya, which will be used to process every pension case. The software will be used by all the ministries of the Indian government. 

To use this face recognition service, individuals must have a smartphone with a 5-megapixel camera, an internet connection, and an Aadhaar card. According to PTI, “The identity of a pensioner or family pensioner will be determined using a face recognition technique under this capability. Any Android-based smart phone will be able to submit a Life Certificate utilising this technology.” 

Interested people can visit here or download the smartphone application named AadhaarFace ID to register for this facial recognition system. 


Last few days to apply for Generation Google Scholarship for Women in the Asia Pacific region

Generation Google Scholarship for women

Technology giant Google earlier launched its Generation Google Scholarship program in the Asia Pacific region. The program is meant exclusively for women and provides them with training in computer science. 

The Generation Google Scholarship program helps students build critical computer science skills to make them industry-ready. The program has been open for quite some time, and the last date for submitting applications is around the corner. 

Eligible applicants should not miss this opportunity and should submit their application forms by 10th December 2021. Google encourages women with a keen interest in computer science to apply for this unique program. The selected applicants will receive a scholarship of $1000 for the 2022-2023 academic year. 

Read More: Amazon introduces Trn1 chips to speed up the training process of ML models

Below mentioned are the eligibility criteria to apply for the Generation Google Scholarship program – 

  1. Applicants must be enrolled in a full-time bachelor’s degree program for 2021-2022.
  2. After the completion of this scholarship program, applicants must be in the second year of their bachelor’s degree from a recognized university in the Asia Pacific region. 
  3. Applicants must be enrolled in computer science, computer engineering, or other programs of related fields. 
  4. Must have a good academic record.
  5. Applicants should have a passion for improving the representation of underrepresented groups in computer science and technology.

Interested candidates can apply here.

Apart from the Generation Google Scholarship program, Google recently also launched a new scholarship program to train Indian job seekers in digital technologies. The program is offering 100,000 free scholarships to help job seekers gain the skills that the market currently requires. Google has tied up with many industry-leading companies, including Tata, Accenture, Wipro, Tech Mahindra, and others, to provide employment opportunities to applicants. 


Amazon introduces Trn1 chips to speed up the training process of ML models

Amazon Trn1 Chips

On November 30, 2021, Amazon introduced new Amazon EC2 offerings powered by AWS-designed chips. The newly launched hardware helps developers and customers improve the performance and energy efficiency of their ML workloads. At the AWS re:Invent event, Amazon launched Graviton3 processors, Trn1 instances, and Nitro SSDs. Trn1 gained the most attention among ML enthusiasts, as Trn1 instances will be capable of delivering networking bandwidth of about 800 gigabits per second. 

This feature makes Trn1 well suited for large-scale, multi-node distributed training use cases like natural language processing, object detection, recommendation engines, and image recognition. The company claims the processors are also optimized for high-performance computing, media encoding, batch processing, scientific modeling, ad serving, and distributed analytics. 

In the traditional cloud ML process, 90% of the cost of ML operations is spent on inference. To address this, Amazon came up with a processor called Inferentia in 2019. It delivers the performance and throughput needed for machine learning inference at a lower price than GPU-based instances. 

Read more: Q-learning algorithm to generate shots for walking Robots in Soccer Simulations

Like inference, ML training is also costly, since it requires high-performance computing with parallel processing. To simplify the training of ML models, Amazon last year introduced Trainium, a chip designed specifically for machine learning workloads. 

Yesterday, Amazon released Trn1, considered a successor to the previously launched Inferentia and Trainium offerings. The key feature of Trn1 is that it speeds up ML model training by internally performing highly parallel math operations with high computing power. The newly released instances provide 25 percent higher performance compared to their predecessors. 

In Trn1, the company doubled the networking bandwidth to 800 gigabits per second from the 400 gigabits per second of previous offerings. The increase in bandwidth brings down latency and, the company claims, provides the fastest ML training available among cloud services. Thousands of Trn1 instances can be networked together to train even the most complicated machine learning models, with trillions of parameters.

To preview Trn1 instances, visit the link.


Nasscom launches a new Center of Excellence for IoT and AI in Visakhapatnam

nasscom center of excellence iot ai

The National Association of Software and Service Companies (NASSCOM) has partnered with the Ministry of Electronics and Information Technology and the government of Andhra Pradesh to launch a new Center of Excellence for IoT and AI in Visakhapatnam. 

The center is located in the Andhra University Campus and will promote the development of innovative emerging technologies in various sectors, including robotics. The center will provide all the infrastructure required for developers to build cutting-edge solutions and also help promote entrepreneurship by providing incubation facilities. 

The new Center of Excellence was inaugurated by Union Minister Rajeev Chandrasekhar, Andhra Pradesh minister Mekapati Goutham Reddy, and various other government officials, including P. V. G. D. Prasad Reddy. Researchers at the center will have all the necessary facilities to develop solutions for real-world challenges. 

Read More: AI-powered Lie Detector tool that reads Micro Facial Expressions

President of Nasscom, Debjani Ghosh, said, “The COE’s, the Center of Excellence have become almost a melting point that beautifully connects the different ecosystems to understand the big problems that technology can solve, brainstorms the best use technology to address these challenges or problems, and jointly co-create solutions.” 

The newly launched Center of Excellence will use the capabilities of artificial intelligence and the internet of things to drive transformation across industries, startups, and academia. Mekapati Goutham Reddy, Minister for Industries & Commerce, Information Technology and Skill Development, Government of Andhra Pradesh, said that if the state can become a world leader in nine advanced technologies, namely artificial intelligence, robotic process automation, edge computing, quantum computing, virtual & augmented reality, blockchain, IoT, 5G, and cyber security, it can reach a trillion-dollar economy. 

“It is absolutely essential that the Center of Excellence becomes not just academic extensions of university, but they become living, breathing growing centers of energy, dynamism, entrepreneurship, and technology development of the kind that we must deliver on in the coming months and years,” said Union Minister of State for Skill Development and Entrepreneurship & Electronics and Information Technology, Rajeev Chandrasekhar.


AI-powered Lie Detector tool that reads Micro Facial Expressions

AI lie detector tool

Scientists from Tel Aviv University have come up with a new AI-powered lie detector system that reads micro facial expressions of humans to determine the authenticity of statements. 

Professor Yael Hanein and Dino Levy led the research team during the development of this new artificial intelligence-enabled lie detector. Researchers conducted several experiments where they tracked and analyzed micro-expressions of humans that disappear in 40 to 60 milliseconds. 

The technology is not yet perfect, but according to the researchers, it is the most accurate facial-analysis lie detector developed to date, delivering an accuracy rate of over 73%. 

Read More: Q-learning algorithm to generate shots for walking Robots in Soccer Simulations

While developing the lie detector system, researchers experimented on 48 individuals, asking them to move their eyebrow or cheek muscles. The system uses electrodes that monitor muscle movements near the eyebrows and cheeks to generate results. 
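The team's actual model is not public, but the underlying idea, classifying a statement as truth or lie from muscle-movement features picked up by the electrodes, can be sketched with a generic logistic-regression classifier. The feature encoding and data below are synthetic placeholders, not the study's.

```python
import math

def train_lie_classifier(samples, labels, epochs=200, lr=0.5):
    """Logistic regression on muscle-movement features (e.g. cheek and
    eyebrow activation amplitudes), labeled truth (0) or lie (1).
    A stand-in for the study's unpublished learning pipeline."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            z = max(-30.0, min(30.0, z))      # clamp to avoid overflow
            p = 1.0 / (1.0 + math.exp(-z))    # predicted probability of "lie"
            g = p - y                         # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Return 1 ("lie") if the decision function is positive, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

On synthetic data where lies produce stronger micro-movements, such a classifier separates the two classes; the real system additionally has to extract those features from raw electrode signals within the 40-60 millisecond window.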

Behavioral neuroscientist and project co-head, Dino Levy, said, “We successfully detected lies in all participants and did it significantly better than untrained human detectors. Interestingly, individuals who were able to successfully deceive their human counterparts were also poorly detected by the machine learning algorithm.” 

According to experts, the newly developed artificial intelligence-powered lie detection tool is extremely promising and could play a vital role in various industries and security areas, including border security. 

“Since this was an initial study, the lie itself was very simple. However, in reality, longer lies have chunks of deception and truth both,” added Levy. 

The research team is now training the AI lie detector tool using advanced machine learning techniques based on the data collected from the trials conducted during the initial stage.


Amazon launches SageMaker Canvas at its re:Invent 2021 conference

amazon SageMaker Canvas
AWS CEO Adam Selipsky kicks off re:Invent 2021. Image Source: AWS

Amazon launched SageMaker Canvas, which allows users to develop machine learning models without writing any code, during a keynote talk at its re:Invent 2021 conference yesterday. AWS users will be able to run a machine learning workflow through a point-and-click user interface, using SageMaker Canvas to produce predictions and publish the findings. 

The keynote began with a narration of how AWS’s EC2 instances were the first true game-changer for the company, and of its upcoming plans focused on Arm-based Graviton processors. At the event, Amazon unveiled its third-generation self-developed server processor Graviton3, IoT TwinMaker, the self-developed cloud AI training instance Trn1, and AWS Mainframe Modernization. Amazon also announced the preview of a new managed service called AWS Private 5G that helps enterprises set up and scale private 5G mobile networks in their facilities in days instead of months.

Aimed at users who lack extensive artificial intelligence knowledge or training, SageMaker Canvas supports multiple problem types, such as binary classification, multi-class classification, numerical regression, and time series forecasting. The perk of such broad support is that business-critical use cases like fraud detection, churn reduction, and inventory optimization can be solved without writing a single line of code.

According to AWS CEO Adam Selipsky, Canvas allows customers to browse and access petabytes of data from both cloud and on-premises data sources, such as Amazon S3, Redshift databases, and local files. Using automated machine learning technology, Canvas creates models, and users can then explain and analyze these models, as well as share them with others to contribute and deepen findings. They can also integrate data sets with a single click, train reliable models, and then create updated predictions as new data is available.

Read More: Microsoft Lobe — A Free Platform For No-Code Machine Learning

SageMaker Canvas automates the most time-consuming aspects of data preparation. The application aids in the detection of errors such as missing spreadsheet fields and automates the tedious effort required in merging data from several sources.

After curating the training dataset, businesses can begin creating their AI model, and SageMaker Canvas can estimate the accuracy of a model before it is launched. They can analyze the estimate and, if necessary, adjust their datasets to increase accuracy.

SageMaker Canvas examines hundreds of candidate models and selects the most effective one for the task the user wants to automate. With a few clicks, workers can train a model on their own datasets.

SageMaker Canvas employs Amazon SageMaker’s sophisticated AutoML technology, which automatically trains and constructs models depending on your dataset. SageMaker Canvas can then use this information to choose the optimal model for your dataset and deliver single or bulk predictions. Business analysts may easily exchange models with data scientists using SageMaker Canvas, which is connected with SageMaker Studio.

Building a machine learning model often necessitates not just coding expertise but also experience with AI-specific development tools like TensorFlow. Enterprise AI initiatives might be difficult in numerous ways due to the demand for specific expertise.

If a corporation lacks AI knowledge in-house, it may need to outsource experts to help with machine learning initiatives. Meanwhile, companies that already have the requisite technical know-how can face difficulties too: when launching AI or machine learning initiatives, business users frequently need developer support, which slows down the rate at which businesses can implement AI. Hence, no-code solutions are being increasingly adopted across industries to broaden the use and deployment of machine learning-powered AI tools. According to Gartner, by 2024, no-code development will account for 80% of all ICT goods and services.

Amazon SageMaker Canvas is now available in the US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Europe (Ireland) AWS Regions.


To read more about SageMaker Canvas, visit here.


Spotify CEO invests in AI-powered Defense Technology startup Helsing

Spotify CEO invests Helsing

Daniel Ek, CEO of the popular music streaming platform Spotify, has invested over $133 million in artificial intelligence-powered defense technology startup Helsing. Ek made the investment through his investment company, Prima Materia, during Helsing’s Series A funding round. 

Additionally, Daniel Ek will join Helsing’s board of directors as part of the investment. Helsing plans to use the newly raised funds to integrate artificial intelligence technology into military equipment and weapons to expand their capabilities. 

According to the company, the developed AI-powered equipment will first be made available to French, German, and British militaries. Helsing aims to provide an information advantage to armies of democratic countries with the use of artificial intelligence. 

Read More: Xenobots: the world’s first living robot that can reproduce

The company mentioned, “We believe AI in defence and security must be based on both the highest ethical standards and strong democratic oversight. We follow a multi-domestic approach, providing each country with sovereign, national-eyes-only technology access.” As of now, Helsing operates in the United Kingdom and Germany, but it plans to open an office in France by the first quarter of 2022 to expand into new markets. 

Berlin-based artificial intelligence defense company Helsing was founded by Gundbert Scherf, Niklas Kohler, and Torsten Reil in 2021. The startup specializes in developing AI-enabled security systems for governments and militaries. 

However, the investment by Daniel Ek has drawn constant criticism on various social media platforms, including Twitter. According to one report, over 80% of viewers of Daniel’s tweet about the investment were unhappy with the development. 

Critics on Twitter argued that music is a channel for attaining peace, and that the investment made by the CEO of Spotify completely contradicts that philosophy. Many exasperated Spotify users also shared their intention to unsubscribe from Spotify. 


Q-learning algorithm to generate shots for walking Robots in Soccer Simulations

Q-learning soccer robot

RoboCup is an annual international robotics competition in which teams of robots from around the world play soccer against each other. Behind the game is a research project that uses footballing robots in a quest to develop machines that could one day assist humans. The ultimate goal of the tournament is to create a team of robots capable of competing against, and beating, human soccer players in a match by 2050.

The humanoid robots play soccer using vision, which helps them make decisions depending on the robot’s position; each robot automatically analyzes the field using information already programmed into it. The robots consist of hardware for the brain, webcams for vision, and a body full of motorized parts that allow them to pursue the ball around the pitch and shoot at goal. Using its camera, a robot determines the field setting around it, then transmits that information over Wi-Fi to the other robots on its team before making a move. 

A critical factor in robotic soccer is that the robots in a team can simultaneously make decisions about shooting at goal. 

Read more: Researchers used GAN to Generate Brain Data to help the Disabled

Scientists and AI researchers who design these soccer robots can test their computational tools in the RoboCup 3D soccer simulation league. The simulator replicates the real RoboCup environment, allowing AI scientists to design and implement strategies and methods for robots to score more goals.

The ultimate goal of the teams participating in the league is to score more goals than the other team. To achieve this, researchers from China and Iran recently created a new simulation technique that enhances the robots’ ability to shoot the ball efficiently while walking. Their approach is based on the Q-learning algorithm, which allows the robots to generate shots at goal even while walking. 

To generate shots in simulation, inverse kinematics and point-analysis approaches are typically followed. If these mathematical methods succeed in simulation, they can be applied to real-world soccer robots as well. The paper is published in SpringerLink, where the researchers explain the Q-learning approach used in the 3D soccer simulation league.
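The paper's exact state, action, and reward design is not reproduced here, but the core tabular Q-learning loop such an approach builds on can be sketched as follows. The environment `step` function, states, and actions below are illustrative stand-ins for the soccer simulator, not the authors' implementation.

```python
import random

def train_q_table(states, actions, episodes, step, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning: learn action values Q(s, a) from simulated episodes.

    `step(s, a)` is an environment function returning (next_state, reward, done);
    here it stands in for the soccer-simulation step, which is not public.
    """
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = random.choice(states)
        done = False
        while not done:
            # epsilon-greedy: mostly exploit the best known shot parameters,
            # occasionally explore a random one
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: q[(s, x)])
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the best action in the next state
            best_next = 0.0 if done else max(q[(s2, x)] for x in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q
```

The same update rule applies whether the actions are discrete moves in a toy grid or discretized kick parameters for a walking robot; only the environment model changes.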


Xenobots: the world’s first living robot that can reproduce

world’s first living robots that can reproduce

Researchers from the University of Vermont, the Wyss Institute for Biologically Inspired Engineering at Harvard University, and Tufts University have discovered a new form of biological reproduction and applied it to the first self-replicating living robots. The same team first reported robots built from the stem cells of the African clawed frog, Xenopus laevis, in 2020. Now, they have designed a Xenobot, the world’s first living robot that can reproduce.

The Xenobot is a computer-designed and hand-assembled organism that can swim out into a tiny dish, find single cells, and gather hundreds of them inside its Pac-Man-shaped mouth to assemble baby Xenobots. These new Xenobots look and move just like their parents and can, in turn, go out, find cells, and build copies of themselves. 

In a Xenopus laevis frog, these embryonic cells would develop into skin. Here, however, rather than becoming tadpoles, this computer-designed collection of cells, which carries an unaltered frog genome, replicates in a way that differs from how frogs normally reproduce. These frog cells are replicating in a way that no known animal or plant species does. 

Read more: UC San Diego uses deep learning to map the human cell, finds new cell components

A Xenobot parent made of 3,000 cells forms a sphere, but such a system typically dies out, since it is challenging to get it to keep reproducing. However, an AI program running on the Deep Green supercomputer cluster could test millions of body shapes, such as triangles, pyramids, starfish, and squares, to find one that would make the Xenobot cells more effective at replication. 

The AI program came up with strange designs, including one that resembled Pac-Man. Once scientists changed the initial Xenobot parent structure to the Pac-Man shape, the parents built children, grandchildren, great-grandchildren, and further generations of Xenobots. The right design remarkably extended the number of generations the Xenobots could produce. 
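The real simulator and fitness function are far more complex, but the search the AI program performs, proposing candidate parent shapes and keeping those whose simulated replication performance is highest, follows a familiar evolutionary-search pattern. The shape encoding and scoring below are invented for illustration only.

```python
import random

def evolve_shape(score, generations=100, pop_size=20, mutation=0.1):
    """Evolutionary search over candidate parent shapes.

    A shape is encoded as a list of 8 numbers in [0, 1] (a toy stand-in for
    the cell layouts the real pipeline evolves); `score(shape)` returns how
    effective the simulated parent is at replication.
    """
    pop = [[random.random() for _ in range(8)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]      # keep the best replicators
        children = []
        for parent in survivors:
            # mutate each shape parameter slightly, clamped to [0, 1]
            child = [min(1.0, max(0.0, v + random.gauss(0, mutation)))
                     for v in parent]
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)
```

Elitist selection guarantees the best design found so far is never lost, which is what lets such a search discover non-obvious winners like the Pac-Man shape.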

The notion of self-replicating biotechnology can seem exhilarating, but for the research team the goal was to understand the property of replication. These millimeter-sized living machines were entirely contained in the laboratory, easily extinguished, and vetted by federal, state, and institutional ethics experts. Bongard, one of the research scientists, points out that technology that lets you quickly tell an AI system, “We need a biological tool that does X and Y and suppresses Z,” could be beneficial for producing solutions that matter deeply, such as vaccines during the COVID pandemic. 


OpenAI announces new Residency program for AI talents, will offer them full-time roles

Open AI Residency Program

Artificial intelligence research and deployment company OpenAI has announced a new Residency program for AI talent that will provide participants with full-time job opportunities at the company. 

The program will help engineers and researchers who do not currently work in the artificial intelligence domain join OpenAI, as the residency aims to recruit talent from underrepresented fields of technology and science. It is a six-month course in which all residents will be provided a stipend of $750 for the entire duration. 

The curriculum of the program is unique, as it will provide hands-on experience by allowing learners to work with OpenAI’s team rather than giving them only theoretical knowledge. OpenAI has started accepting applications for the spring 2022 intake, and the last date for submitting applications is 14th January 2022. 

Read More: Researchers used GAN to Generate Brain Data to help the Disabled

Chief scientist of OpenAI, Ilya Sutskever, said, “We’ve welcomed incredible new talent to OpenAI through our Fellows and Scholars programs, who have made major research contributions and helped advance OpenAI’s goal of building beneficial AGI.” 

He further added that the residency program targets talented people who aim to contribute to the artificial intelligence domain but cannot find a medium to do so. The course will focus on teaching learners the most practical artificial intelligence skills to help them contribute to the sector.

Individuals from unconventional educational backgrounds can readily apply for this residency program. OpenAI will also provide relocation and immigration support to highly talented applicants. The program will offer two distinct tracks, AI software engineering and AI research. The former is suitable for individuals with an engineering background, and the latter for people looking to shift into a research scientist role from a non-ML scientific field. 

The OpenAI Residency program is set to commence on 18th April 2022, and the selection list will be declared by mid-March. Interested candidates can apply here.
