On November 30, 2021, GitHub announced the winter cohort of GitHub Externships for students in India. GitHub Externships were first announced at GitHub Satellite India 2021 to help college students gain real-world experience and collaborate on open source projects with other developers on GitHub. The program builds on GitHub’s Campus Program, which allows students to engage with Indian companies via fellowship programs.
With support from partners including Appwrite, TigerGraph, DeepSource, Hoppscotch, FamPay, Symbl.ai, Zeeve, Abridged Inc, Saraverse, Leverice, and many more, GitHub will nurture talent and enable the next generation of developers to drive the country’s digital future.
The program teaches the skills needed to succeed in an enterprise environment. Through it, students are mentored on real-world projects, work with cloud technologies, and improve their software development skills.
Maneesh Sharma, General Manager, GitHub India, said, “Students hold a special place in our hearts. They are our next generation of developers who will continue to drive software innovation forward for our digital nation. That’s why, as we continue to support India’s talented student developers, we’re thrilled to announce the GitHub Externships Winter Cohort.”
The summer cohort, the first season of GitHub Externships, received 1,300 applications from 175 higher education institutions across India. Of these, 62 students were selected as GitHub Externs to work with mentors from 18 organizations. During the program, students received hands-on experience on diverse projects, enhancing their job prospects.
The winter season of GitHub Externships will include a DevSecOps track to help students build secure software development skills. Interested students can apply until December 14, 2021. Third-year students from higher education institutions in India that have signed up for the GitHub Campus Program are eligible to apply.
Computer vision and deep learning firm SenseTime has revealed plans to open a new artificial intelligence-powered casino in Singapore. The AI casino will feature various camera systems to detect suspicious customer behavior and perform many other complex tasks.
However, the launch of the AI casino has been delayed by technical difficulties. The COVID-19 pandemic is another factor behind the delay, as SenseTime mentioned that its business was affected by multiple restrictions imposed by governments across the globe, making it difficult for the company to commercialize its new products.
An AI casino project official said, “SenseTime was very proud of the Genting project. They wanted to do a big announcement with the government and resort, but it keeps getting delayed.”
Highly advanced CCTV cameras installed in the AI casino will monitor and identify customers as they move from one camera’s field of view to another. According to SenseTime, the main focus of the artificial intelligence system is to reduce the chances of fraudulent activity within the premises, which is quite common in casinos across the globe.
SenseTime faced a major challenge: the existing CCTV cameras installed in the casino could not generate high-quality images, especially in low-light areas, making footage difficult for the AI system to analyze. Beijing-based artificial intelligence company SenseTime was founded by Bing Xu, Li Xu, and Xiaogang Wang in 2014. The firm specializes in face recognition systems that can be applied to payments and image analysis.
The company ranked fifth in China’s top 10 AI companies list in 2017. Many industry-leading companies, including China Mobile Communication, China UnionPay, Huawei Technologies, Xiaomi, and JD.com, use SenseTime’s facial recognition solutions. The company also provides text and vehicle recognition systems to companies operating in multiple industries.
Amazon announced that its cloud-based analytics services Amazon Redshift, Amazon EMR, Amazon MSK, and Amazon Kinesis are now available as serverless, on-demand services.
Usually, developers have to manage and configure cluster instances to perform analytics operations in the cloud, and it is time-consuming for customers to simultaneously manage storage based on the demands of those operations. To avoid this, AWS made its analytics services serverless, easing customers’ work and delivering high performance even with large-scale data. Users no longer have to set up and manage clusters, since AWS takes care of this behind the scenes.
Since AWS analytics is serverless, users can also consume it on demand, paying only for the time they actually use the service. Billing is per second of warehouse use; charges stop when work stops or the warehouse is idle. In other words, users pay only for the capacity consumed by their workloads on a per-second basis. To control costs further, users can specify daily, weekly, or monthly usage limits to avoid overcharges and keep spending predictable. This lets users and developers achieve low-cost analytics compared to other cloud service providers.
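The per-second, capped billing model described above can be sketched in a few lines. This is only an illustration of the idea; the rate and cap values are hypothetical and are not actual AWS pricing.

```python
# Illustrative sketch of a pay-per-second billing model with a usage cap.
# The rate and cap values here are hypothetical, not actual AWS pricing.

def billed_cost(active_seconds, rate_per_second, daily_cap=None):
    """Charge only for the seconds the warehouse was active; idle time is
    free. An optional daily cap limits the maximum charge."""
    cost = active_seconds * rate_per_second
    if daily_cap is not None:
        cost = min(cost, daily_cap)
    return round(cost, 2)

# A workload that ran for 90 minutes at a hypothetical $0.002/second:
print(billed_cost(90 * 60, 0.002))
# The same workload with a $5 daily usage limit applied:
print(billed_cost(90 * 60, 0.002, daily_cap=5.0))
```

Idle time costs nothing in this model, which is why the article notes that charges stop as soon as the warehouse stops working.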
AWS’s serverless analytics services are currently available in public preview. Amazon Redshift Serverless is available for public preview in the following AWS Regions: US East (N. Virginia), US West (N. California and Oregon), Europe (Frankfurt and Ireland), and Asia Pacific (Tokyo).
For serverless Redshift, the company provides $500 in upfront AWS credits to try the serverless data warehouse public preview. Users receive the credits once they create a database with serverless Redshift, and the credits can be used to cover compute, storage, and snapshot usage of Amazon Redshift Serverless.
Note: Amazon’s Redshift, EMR, and MSK are currently available for public preview, while Amazon Kinesis is generally available.
Amazon Web Services (AWS) has announced the launch of a new scholarship program for artificial intelligence and machine learning. The $10 million program aims to train underrepresented individuals in critical artificial intelligence skills and make them industry-ready.
Amazon has collaborated with Udacity and Intel to provide scholarships and educational content to more than 2,000 learners annually. Amazon will also offer career mentorship and access to various ML technologies for selected students.
The newly launched initiative aims to nurture a diverse and talented workforce for artificial intelligence and machine learning technologies in the future. The program will allow students to access multiple free training modules and tutorial videos on the basics of machine learning from any part of the world.
Vice president of Amazon machine learning at AWS, Swami Sivasubramanian, said, “Machine learning will be one of the most transformational technologies of this generation. If we are going to unlock the full potential of this technology to tackle some of the world’s most challenging problems, we need the best minds entering the field from all backgrounds and walks of life.”
It is a four-month-long virtual program that teaches students above the age of 16 the foundational skills required to kick-start a career in artificial intelligence and machine learning.
The course will also provide monthly “Ask Me Anything” webinars, where students can resolve their doubts and get mentorship from experts on the AWS team.
In order to participate in the scholarship program, students must successfully complete prerequisites within AWS DeepRacer Student and submit their applications on the official website of Udacity before the deadline.
Interested candidates can apply for the AWS scholarship here.
The Snapdragon 8 Gen 1 Mobile Platform is Qualcomm’s most powerful mobile platform to date, and it is the first in the company’s line-up to use a new branding approach that avoids the triple-digit naming tradition of its predecessors. From the cutting-edge process used to build the chips to its updated CPU, GPU, and AI processing engines, to its extensive camera and imaging technologies, and its comprehensive array of wireless connectivity options, Snapdragon 8 Gen 1 boasts significant advancements in virtually every aspect of the platform.
Image Source: Qualcomm
The Snapdragon 8 Gen 1 is built on a cutting-edge 4nm process and includes a combination of eight Arm CPU cores. The upgraded Kryo CPU complex comprises a single high-performance Prime Cortex-X2 core (up to 3GHz), three Cortex-A710 Performance cores (up to 2.5GHz), and four Cortex-A510 Efficiency cores (up to 1.8GHz). The Prime core is used for threads that need the most priority (and performance), the Performance cores do the remainder of the heavy lifting, and the Efficiency cores handle less-demanding background operations.
Image Source: Qualcomm
Through Snapdragon 8 Gen 1, Qualcomm Technologies will offer high-precision, low-latency AI on low-power devices such as IoT hardware, medical imaging equipment, automobiles, and mobile devices using Google Cloud’s Vertex AI NAS, while maintaining memory and energy efficiency.
While the NAS will be accessible on Qualcomm’s new flagship Gen 1 mobile platform, it will gradually be rolled out throughout the Qualcomm range.
According to Qualcomm, once NAS is integrated with Qualcomm’s 7th Gen Artificial Intelligence (AI) Engine, it will be used to “accelerate neural network development and differentiation” for Snapdragon mobile, ACPC, XR, the Snapdragon Ride automotive platform, and IoT activities.
Thanks to an improved Qualcomm Hexagon processor with double the shared memory and a tensor accelerator that is twice as fast, the on-board 7th Gen Qualcomm Artificial Intelligence (AI) Engine is reportedly 4X faster than the one in its predecessor, the Snapdragon 888+ 5G. The design also includes the 3rd Generation Qualcomm Sensing Hub, which handles the always-on sensors while consuming less power than its predecessors. Qualcomm claims a 1.7X gain in battery efficiency in addition to the massive AI performance boost.
Qualcomm claims that the Snapdragon 8 Gen 1’s Adreno GPU renders graphics 30% faster than the Snapdragon 888, while also outperforming it in power efficiency by a stunning 25%.
For developers, Google Cloud Vertex AI NAS will be included in the chipmaker’s Neural Processing SDK, operating on the Qualcomm AI Engine. Platforms that use the AI Engine will be able to benefit from “optimizations and performance enhancements,” adds Qualcomm. This will also allow the company to build and optimize new AI models in weeks rather than months.
Vertex AI Neural Architecture Search was launched by Google Cloud in May as a single platform for designing, deploying, and maintaining AI models. Vertex AI requires over 80% fewer lines of code to train models than existing platforms. Google claims it is the same framework that is used internally to power Google, with capabilities ranging from computer vision to language and structured data.
While Vertex AI is an assortment of tools, Qualcomm emphasized the Neural Architecture Search component. Its goal, as the name suggests, is to find better AI model architectures. NAS lets data scientists tune a model for specific hardware without training candidates manually, and, based on the use case, they can impose limits on the model’s size or other characteristics.
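The core idea of a hardware-constrained architecture search can be shown with a toy example: enumerate candidate architectures, discard those that break a hardware constraint (here, a parameter budget), and keep the best-scoring survivor. This is only a conceptual sketch; the search space, scoring proxy, and budget below are invented for illustration and bear no relation to Vertex AI NAS’s actual algorithms.

```python
import itertools

# Toy illustration of the idea behind neural architecture search (NAS):
# filter candidate architectures by a hardware constraint, then pick the
# best-scoring feasible one. All numbers here are invented.

def param_count(depth, width):
    # Rough parameter count for a stack of `depth` dense layers of `width` units.
    return depth * width * width

def proxy_score(depth, width):
    # Stand-in for validation accuracy; a real NAS trains or estimates this.
    return 1.0 - 1.0 / (1 + depth * width ** 0.5)

PARAM_BUDGET = 500_000  # hardware constraint: max parameters allowed on-device

candidates = itertools.product([2, 4, 8, 16], [64, 128, 256, 512])
feasible = [(d, w) for d, w in candidates if param_count(d, w) <= PARAM_BUDGET]
best = max(feasible, key=lambda dw: proxy_score(*dw))
print(best, param_count(*best))
```

A real search replaces the proxy score with trained-model metrics and the budget with measured latency or memory on the target chip, but the constrain-then-optimize structure is the same.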
“The ability to utilize Google’s NAS technology to create and optimize new AI models in a condensed time frame is a game changer for our business,” said Ziad Asghar, Vice President of product management at Qualcomm. Ziad added, “We are happy to be the first chipset company to work with Google Cloud on NAS and eager to roll out this technology to further our momentum in connecting the intelligent edge.”
Artificial intelligence and machine learning firm BigBear.ai has secured first place in the Naval Information Warfare Systems Command’s (NAVWAR) Artificial Intelligence and Networks Advanced Naval Technology Exercise challenge.
L3Harris, one of the global leaders in avionics, tactical communications, and geospatial systems, secured second place. The contest was arranged to address the gap between current and future warfare techniques and technologies.
The prize challenge was organized to support the US Navy’s Overmatch initiative, which focuses on modernizing naval warfare using artificial intelligence-powered weapons and sensors in a Naval Operational Architecture.
Chief Technology Officer of BigBear.ai, Brian Frutchey, said, “This challenge allowed us to demonstrate how our automated course of action assessment AI can assist the Navy in enabling warfighters to make critical decisions quickly in operationally relevant maritime environments.”
He further added that the company is proud to have participated in the competition in support of Naval initiatives. The competition drew numerous participants from various fields, including commercial, government, and academia.
Science and Technology director at NAVWAR, Carly Jackson, said, “The participants had less than three months but the results we are seeing are quite compelling. By quickly leveraging the lab infrastructure and expertise resident across the Naval Research and Development Establishment, this new type of digital platform-powered ANTX enables us to identify and field technologies, components, or algorithms at the speed of the threat.”
She also mentioned that the participating teams sought to motivate industry-leading companies to bring new innovations to their platforms and architectures.
Earlier this month, BigBear.ai also partnered with Palantir, a leading data analysis and visualization software developer. With the strategic partnership, both companies plan to develop new artificial intelligence-powered solutions to provide actionable insights into complex business decisions and grow their respective customer base globally.
This week, Dataiku debuted version 10 of its unified AI platform. Dataiku allows businesses to turn raw data into advanced analytics and data visualizations, and it provides a seamless online version of the Enterprise AI platform to help smaller businesses drive development quickly and successfully.
Dataiku 10 comes with a built-in set of tools to assist IT administrators and data scientists in evaluating, monitoring, and comparing models in development or production. With integrated industry solutions, dedicated workspaces for business users, and accelerators for exploratory data analysis, geospatial analytics, and machine vision, Dataiku 10 enables enterprises to deliver AI results faster.
The new Dataiku 10 version has received a substantial upgrade that focuses on three key features and themes:
Scaling analytics and ML activities in a secure manner
The MLOps capabilities in Dataiku 10 have been enhanced to enable IT operators and data scientists to analyze, monitor, and compare machine learning models in development and production. Teams can gain a better understanding of the behavior and performance of live models with automatic drift analysis and enhanced “what-if” simulations.
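Drift monitoring of the kind described above generally compares a live feature’s distribution against its training-time baseline and flags a significant shift. The sketch below shows one simple way to do this; the threshold and data are invented, and Dataiku’s own drift analysis is considerably more sophisticated than this generic illustration.

```python
import statistics

# Generic illustration of drift monitoring: flag drift when a live
# feature's mean moves more than `threshold` baseline standard deviations
# away from the baseline mean. Not Dataiku's actual implementation.

def mean_shift_drift(baseline, live, threshold=2.0):
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(live) - mu) / sigma
    return shift > threshold, round(shift, 2)

baseline = [10, 11, 9, 10, 12, 10, 11, 9]  # feature values at training time
live_ok = [10, 9, 11, 10, 10]              # similar distribution: no drift
live_bad = [15, 16, 14, 17, 15]            # shifted distribution: drift

print(mean_shift_drift(baseline, live_ok))
print(mean_shift_drift(baseline, live_bad))
```

Production systems typically use full distribution tests rather than a mean shift, but the baseline-versus-live comparison is the common core.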
To ease the work of data scientists, Dataiku’s AI platform offers no-code GUIs for data preparation and AutoML features to perform ETL, train models, and assess their quality. This is designed for technically skilled users who do not know how to code, allowing them to carry out numerous data science activities. Through the no-code GUI, users can decide which ML models the AutoML algorithm may utilize and conduct simple feature manipulations on the supplied data.
Following training, the page offers visuals to help in model interpretability, including not just regression coefficients, hyperparameter selection, and performance indicators, but also more advanced diagnostics such as subpopulation analysis.
Governance and oversight
AI professionals, risk managers, and key stakeholders may use Dataiku’s AI Governance to comprehensively manage projects and models, as well as monitor the overall progress of the AI portfolio. Customers can view all of their models in a central model registry, regardless of whether they were created in Dataiku or with third-party tools like MLflow. Superior AI oversight is provided through structured frameworks for project workflows, approvals, and project qualification.
Accelerate value realization with business solutions and accelerators
Customers can choose the appropriate tools for their goals from projects built for various use cases across many sectors, helping organizations expedite value realization. These tools include new geographic analytics, native deep learning capabilities, assisted data exploration, and better visual and interactive insights.
According to Clément Stenac, CTO and co-founder of Dataiku, “We have always been convinced that non-specialists must also be involved throughout the organization in order to be able to apply AI on a large scale. Dataiku 10 makes that possible.” By focusing on the above three core areas, Dataiku 10 aims to increase the engagement of AI adjacent roles such as IT operators, risk managers, project managers, and domain experts.
Nitin Gadkari, Minister for Road Transport & Highways in the Government of India, said that the government aims to halve the total number of road accidents by the end of 2024. According to him, road safety is now the government’s top priority, and it plans to use AI solutions to achieve this goal.
Currently, India records 150,000 deaths and over 300,000 serious physical injuries annually because of road accidents. The government will use artificial intelligence to manage and predict traffic conditions.
The AI solution would help authorities drastically reduce any chances of a traffic mishap. Gadkari said, “We have implemented advanced technology and electronic monitoring systems for safe and efficient traffic movement on Indian highways. We have implemented several initiatives in collaboration with governments, NGOs, industries and various institutions within civil society.”
He further added that the government understands the need for collaborative effort at the grassroots level to minimize road accidents. The government plans to deploy artificial intelligence solutions for the following purposes:
Improving lane discipline on National Highways.
Vehicle overspeed and seat-belt detection.
Post-accident forensic investigation.
Identifying accident patterns and black spots.
Driver fatigue and sleep indicator.
Advanced vehicle collision system.
“AI can be used to combine data from all applicable sources above with data used to make appropriate changes at the policy level. The private sector has expanded cooperation and can combine it,” said Gadkari. He also mentioned that the government has plans to provide technical assistance to engineering colleges and institutions to investigate and submit accident analysis reports using artificial intelligence.
Chandigarh authorities took a similar approach earlier this year to manage the city’s traffic more effectively. An artificial intelligence-powered system analyzes video footage collected from various CCTV cameras to calculate traffic light durations and maintain a smooth, uninterrupted flow of traffic.
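One simple way such adaptive signal timing can work is to allocate each approach’s share of a fixed green-time budget in proportion to its observed vehicle count, clamped to safe minimum and maximum durations. The sketch below is purely conceptual: the counts (which a real system would derive from CCTV footage) and all timing constants are hypothetical, and Chandigarh’s actual system is not documented here.

```python
# Conceptual sketch of count-proportional traffic signal timing.
# All counts and timing constants are hypothetical.

def green_durations(vehicle_counts, cycle_budget=120, min_green=10, max_green=60):
    """Split a green-time budget (seconds) across approaches in proportion
    to vehicle counts, clamped to [min_green, max_green]."""
    total = sum(vehicle_counts.values())
    durations = {}
    for approach, count in vehicle_counts.items():
        share = cycle_budget * count / total if total else min_green
        durations[approach] = int(min(max(share, min_green), max_green))
    return durations

counts = {"north": 40, "south": 10, "east": 25, "west": 5}  # e.g. from CCTV analysis
print(green_durations(counts))
```

The clamping step matters in practice: even an empty approach needs a minimum green phase for safety, and a congested one cannot monopolize the cycle.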
At the AWS re:Invent 2021 conference, Amazon launched a new robotic service called AWS IoT RoboRunner. With IoT RoboRunner, developers and enterprises can build fleet management applications for robots.
RoboRunner assists enterprises in managing and optimizing the lifecycle of various robot fleets since it offers an automated infrastructure for fleet management. This service can be used in public warehouses, hospitals, retail shops, supermarkets, shipping harbors, and even in homes for domestic purposes.
Usually, enterprises rely on robots made by different vendors to perform various fleet operations. Because each robot has its own control software, data formats, and processing speeds, it is difficult for enterprises to control all of them from a centralized application.
Since each robot has a unique architecture, it is also difficult for enterprises to build management applications tailored to their use cases. Effective robotics automation requires optimizing robots for specific applications and orchestrating tasks so that individual robots can perform a series of tasks together.
Building and deploying different robots for different operations is therefore challenging for enterprises, requiring complex integration and deep technical knowledge to build the necessary software.
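The interoperability problem described above is essentially an adapter problem: each vendor exposes a different control API, so a common layer is needed before one application can dispatch tasks to the whole fleet. The vendor classes and method names below are invented for illustration; AWS IoT RoboRunner’s actual interfaces are not shown here.

```python
# Conceptual sketch of normalizing heterogeneous robot vendor APIs behind
# a single adapter interface. All classes and method names are invented.

class VendorABot:
    def start_job(self, destination):      # vendor A's native call
        return f"A-bot moving to {destination}"

class VendorBBot:
    def execute(self, task_type, target):  # vendor B's very different call
        return f"B-bot {task_type} -> {target}"

class RobotAdapter:
    """Wraps a vendor-specific robot behind one common send_to() method."""
    def __init__(self, robot):
        self.robot = robot

    def send_to(self, destination):
        if isinstance(self.robot, VendorABot):
            return self.robot.start_job(destination)
        return self.robot.execute("move", destination)

# One fleet application can now dispatch to both vendors uniformly.
fleet = [RobotAdapter(VendorABot()), RobotAdapter(VendorBBot())]
for robot in fleet:
    print(robot.send_to("dock-7"))
```

A fleet-management service generalizes this pattern: it maintains the shared task and destination model while per-vendor adapters translate commands at the edge.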
To ease the process of deploying different robots in enterprise warehouses, AWS is releasing IoT RoboRunner after using it successfully to manage fleets of different robots in its own facilities. Amazon manages more than 350,000 robots in its fulfillment centers and warehouses worldwide.
To preview AWS IoT RoboRunner, check this link.
The government of India recently announced the launch of a new face recognition system that will be used to provide life certificates. The newly launched system will help pensioners and elderly individuals receive their pensions quickly.
According to the government, the face recognition system will improve the ease of living for senior citizens. The new system was unveiled by Minister of State for Personnel Jitendra Singh on Monday. It will be especially practical for senior citizens who are unable to submit, or have difficulty submitting, fingerprints as biometric proof.
The Ministry of Personnel, Public Grievances and Pensions mentioned that the face recognition system would benefit 68 lakh retired government employees, along with workers under the Employees’ Provident Fund Organisation and state governments.
Minister Jitendra Singh said, “The central government has been sensitive to the needs of pensioners and to ensure the ease of living for them. Soon after coming to power in 2014, the government decided to introduce and implement digital certificates for pensioners. This unique technology will further help pensioners.”
He also thanked the Ministry of Electronics and Information Technology and the Unique Identification Authority of India for developing this face recognition technology. The launch marks the introduction of standard software named Bhavishya, which will be used to process every pension case. The software will be used by all the ministries of the Indian government.
To use this face recognition service, individuals must have a smartphone with a 5-megapixel camera, an internet connection, and an Aadhaar card. According to PTI, “The identity of a pensioner or family pensioner will be determined using a face recognition technique under this capability. Any Android-based smart phone will be able to submit a Life Certificate utilising this technology.”
Interested people can visit here or download the smartphone application named AadhaarFace ID to register for this facial recognition system.