iHub-Data, located on the IIIT Hyderabad campus, announces the launch of its new course on machine learning for chemistry and drug design.
The course aims to meet the growing demand for skilled professionals in the artificial intelligence and machine learning industry as these technologies reshape the modern world.
In conjunction with IIIT-H, iHub-Data is delivering a one-of-a-kind course on “Machine Learning for Chemistry,” with a focus on drug development.
It is a twelve-month certification course that includes theoretical lectures from renowned professors in computer science and the natural sciences, including Prof. Deva Priyakumar, Prof. C. V. Jawahar, Prof. Girish Verma, Dr. Maitreya Maity, and others.
Indian students, researchers, and professionals with a science background who want to learn about artificial intelligence and machine learning technologies and their applications in fields such as chemistry, biology, and pharmaceutical science can apply for the newly launched course. However, participants are required to have a +2 level understanding of mathematics.
Students and researchers interested in developing interdisciplinary skills in solving computationally complex problems in natural sciences should opt for this program.
iHub-Data and IIIT Hyderabad have meticulously designed the course to impart key skills and knowledge to learners including tutorials to aid in the development of practical skills.
Learners will gain hands-on experience with various machine learning and deep learning methods using tools and libraries such as Python, PyTorch, Scikit-Learn, NumPy, and pandas. The cost of this 12-week course is Rs 7,500 for undergraduate and master’s students, Rs 15,000 for Ph.D. and postdoctoral students, and Rs 30,000 for industry experts.
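To illustrate the kind of workflow such a course typically covers, here is a minimal, hypothetical sketch that combines pandas, NumPy, and scikit-learn to classify compounds from synthetic "molecular descriptor" features. The column names and the toy labeling rule are invented for illustration; they are not part of the actual curriculum.

```python
# Hypothetical sketch: predicting a binary "activity" label from
# synthetic molecular-descriptor features with scikit-learn.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Fake dataset: 200 compounds, 5 numeric descriptors each.
df = pd.DataFrame(
    rng.normal(size=(200, 5)),
    columns=["mol_wt", "logp", "tpsa", "h_donors", "h_acceptors"],
)
# Toy label: "active" if a simple linear combination is positive.
df["active"] = (df["mol_wt"] + 0.5 * df["logp"] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="active"), df["active"], test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

In real drug-design work the descriptors would come from chemistry toolkits rather than random numbers, but the fit/score pattern is the same.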
Interested candidates can register for this certification course from the official website of iHub-Data.
Baidu AI Cloud announces the launch of its new artificial intelligence-powered on-device sign language platform that can translate speech to hand signs in real-time.
The recently released AI technology creates digital avatars for sign language translation and live interpretation within minutes. Baidu aims to help the deaf and hard-of-hearing (DHH) population break down communication barriers by increasing the accessibility of automated sign language translation using the translator.
Baidu has released two all-in-one AI sign language translators that offer a one-stop solution with a simple setup process and plug-and-play functionality. According to the company, the translator will be deployed during the 2022 Beijing Winter Paralympic Games.
With its “action fusion algorithm,” the platform has categorized approximately 11,000 actions based on the National Universal Sign Language Dictionary, ensuring that all digital sign language gestures are as coherent and expressive as human sign language.
The production and management expenses of digital avatars have been significantly decreased because of AI’s technological enablement, allowing artificial intelligence sign language to scale and serve more deaf and hard-of-hearing people.
There are 27.8 million deaf and hard-of-hearing (DHH) people in China, yet there is a severe scarcity of skilled experts to meet their needs, with only 10,000 sign language translators. Therefore, Baidu’s new technology will considerably help such people effectively communicate with the world.
Baidu recruited over 500 Chinese professors and students with hearing loss to help expand and vet the sign language corpus, which will help Baidu maintain high accuracy standards for its speech-to-sign translator.
Tiantian Yuan, associate dean of Tianjin University of Technology’s Technical College for the Deaf, said that she and her students are immensely excited about the new technology.
Microprocessor design company Graphcore debuts in the 3D artificial intelligence (AI) chip sector with its new Wafer-on-Wafer technology.
Graphcore developed the new technology in collaboration with global semiconductor manufacturing giant TSMC, which also manufactures processors for Graphcore. According to the company, its new chip, named Bow, is the first in the world to use Wafer-on-Wafer technology.
For real-world AI applications, Graphcore’s new Bow IPU processor can handle up to 350 trillion processing operations per second, providing up to 40% higher performance and 16% better power efficiency than its predecessors.
Bow Pod256 provides over 89 PetaFLOPS of AI computation, while the superscale Bow Pod1024 provides 350 PetaFLOPS of AI computation. This enables machine learning researchers to keep up with the continually rising size of AI models while also achieving new levels of machine intelligence.
Bow Pod is now available, and the company has started shipping the product across the globe. The United States Department of Energy has become one of the first customers of GraphCore’s newly launched product.
Graphcore’s co-founders explained that the design stacks one wafer for AI processing, architecturally compatible with the GC200 IPU processor, with 1,472 independent IPU-Core tiles capable of running more than 8,800 threads and 900MB of In-Processor Memory, on top of a second wafer carrying the power-delivery die.
They further added that the two wafers are bonded together to create a new 3D die in the Bow IPU using Wafer-on-Wafer technology.
United Kingdom-based microprocessor design firm Graphcore was founded by Nigel Toon and Simon Knowles in 2016. The company specializes in designing processors and intelligence processing units (IPUs) for artificial intelligence and machine learning applications. Earlier this year, Graphcore also opened its first office in India as the country witnesses an artificial intelligence revolution.
To date, Graphcore has raised more than $680 million over seven funding rounds from investors including Ontario Teachers’ Pension Plan, Sequoia Capital, Fidelity International, and many others.
Non-profit medical center Cedars-Sinai announces the launch of its new artificial intelligence unit in its medicine division to study the deployment of AI solutions in the healthcare industry.
Sumeet Chugh, associate director of the Smidt Heart Institute and a renowned expert in sudden cardiac arrest, is in charge of the newly formed Artificial Intelligence in Medicine (AIM) team.
Cedars-Sinai, in collaboration with AIM, has developed a number of critical programs in which AI solutions are increasingly used. Cardiovascular imaging, sudden cardiac arrest, COVID-19, and clinical genetics are among AIM’s main priorities.
However, in the coming years, it plans to further expand its operations in multiple fields, including public health, medical, and surgical issues.
Sumeet Chugh said, “Using a disease-based approach, AIM will enable cross-disciplinary connections between clinicians, scientists, and trainees at Cedars-Sinai at multiple levels.”
He further added that they aspire to be innovators and stewards of patients’ healthcare interests and needs and, most importantly, to apply findings directly to patient care. Chugh and his colleagues are ensuring that clinically relevant questions from the Cedars-Sinai Health System are ethically reviewed, evaluated, validated, and implemented.
The Enterprise Data Intelligence team at Cedars-Sinai, led by Mike Thompson, has a history of applying artificial intelligence to improve patient care at the hospital level. In the Journal of Nuclear Medicine, AIM recently released a study that used AI-powered algorithms to predict heart attack risk in patients who already had coronary artery disease.
Shlomo Melmed, M.B., Ch.B., said, “Dr. Chugh has extensive experience using artificial intelligence to solve clinical problems for sudden cardiac arrest, one of our most difficult conditions.”
He also mentioned that the new division would use the Cedars-Sinai systemwide clinical data repository to propose clinically relevant solutions to key health challenges under Chugh’s supervision.
Reinforcement learning has been a cornerstone of the latest developments in artificial intelligence applications. In the past few years, researchers have leveraged reinforcement learning algorithms to build cutting-edge models in robotics, gaming (AlphaGo), and self-driving vehicles.
Reinforcement learning aims to direct how machine learning models, also known as agents, should act in a given environment. The scope of its use is expanding, attracting more interest from the scientific community.
However, the primary problem with most reinforcement learning algorithms is that they can only tackle the particular task they were trained on and cannot generalize across tasks or domains. This is because most reinforcement learning agents are trained on limited or single application-specific data. As a result, these agents tend to become overly reliant on a single extrinsic reward, reducing their capacity to generalize in the real world. Hence, scientists are working on building new RL models that can also deliver satisfactory results in real-world scenarios, as well as models that take less time to find the solution that yields the maximum reward.
One of the most exciting opportunities for reinforcement learning research has been motion planning in self-driving vehicles. A self-driving vehicle (or an autonomous car) is a vehicle that travels between locations without the assistance of a human driver using a mix of sensors, cameras, radar, and artificial intelligence (AI). To be considered entirely autonomous, a vehicle must be able to go to a predefined location without human intervention on roads that have not been redesigned for its usage.
The most important task for a self-driving vehicle is interacting with its surroundings. The first phase is perception, in which the vehicle is assumed to be traveling in an open-context environment and the model is trained on all potential scenes and scenarios in the real world. This is where a reinforcement learning agent comes in handy: it takes in environmental data and moves from one state to the next according to a set of rules, so as to maximize rewards. These rewards can be either short-term, such as safe driving, or long-term, such as arriving at the destination early.
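The loop described above (observe a state, act, collect a reward, move to the next state) can be sketched with tabular Q-learning on a toy one-dimensional "road." This is purely an illustrative example of the state/action/reward cycle, not the motion-planning systems used in actual autonomous vehicles.

```python
# Illustrative tabular Q-learning on a 1-D "road": the agent starts at
# cell 0 and receives a reward only for reaching the goal cell.
import random

N_STATES, GOAL = 6, 5            # cells 0..5; cell 5 is the destination
ACTIONS = (-1, 1)                # move left or move right
ALPHA, GAMMA = 0.5, 0.9          # learning rate and discount factor
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):             # training episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS)              # explore randomly (off-policy)
        s2 = min(max(s + a, 0), N_STATES - 1)   # environment transition
        r = 1.0 if s2 == GOAL else 0.0          # extrinsic reward at the goal
        # Q-learning update toward reward plus discounted future value
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should always drive toward the goal (move right).
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)
```

Because the reward appears only at the goal, the discounted value propagates backward through the table; after training, the greedy policy moves right from every cell.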
According to a report published on arXiv last Wednesday, scientists at the University of California, Berkeley have constructed a wheeled robot that can traverse kilometers of residential terrain. The robot stays on pathways and avoids barriers it has not encountered before. Critically, it does not map its environment, as some other systems, such as AI algorithms for autonomous driving, do.
Instead of a detailed map, it uses heuristics gleaned from thirty hours of footage of prior trips, plus some overhead landscape maps, to generate an enhanced schematic of how stations along the route connect to one another. Dhruv Shah, a Ph.D. candidate, and Sergey Levine, an assistant professor at UC Berkeley, co-authored the study titled “ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints.” Last year, Shah and Levine presented a predecessor method named “RECON,” which stands for “Rapid Exploration Controllers for Outcome-driven Navigation.” As the names suggest, both ViKiNG and RECON draw heavily on reinforcement learning.
Over the course of 18 months, RECON was trained by having the wheeled robot, a Clearpath Robotics Jackal autonomous ground vehicle, do “random walks” across various locations such as parking lots and fields, capturing hours of footage via mounted RGB cameras, LiDAR, and GPS. RECON learned “navigational priors” thanks to a neural network that compressed and uncompressed image input as an “information bottleneck,” a signal processing method first proposed by Naftali Tishby and colleagues in 2000.
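RECON's bottleneck is a learned neural network, but the underlying compress-then-reconstruct idea can be illustrated with a much simpler linear stand-in. In this hypothetical NumPy sketch, 16-dimensional "observations" that secretly lie on a 2-D subspace are squeezed through a two-number code and reconstructed; this demonstrates only the bottleneck concept, not the actual RECON system.

```python
# Linear "information bottleneck" sketch: compress 16-D observations
# through a 2-D code and reconstruct them. This is a PCA-style stand-in
# for the learned encoder/decoder described in the article.
import numpy as np

rng = np.random.default_rng(1)
basis = rng.normal(size=(2, 16))
X = rng.normal(size=(200, 2)) @ basis     # 200 samples on a 2-D subspace

# Compress: project onto the top-2 principal directions (the bottleneck).
U, S, Vt = np.linalg.svd(X, full_matrices=False)
code = X @ Vt[:2].T                       # 200 x 2 compressed codes
X_hat = code @ Vt[:2]                     # reconstruct back to 16-D

print(np.allclose(X, X_hat))              # near-perfect reconstruction
```

Because the synthetic data is exactly rank 2, two numbers per sample recover it almost perfectly; a real visual navigation system learns a nonlinear version of the same compression from camera frames.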
During the test phase, RECON was presented with an image of a destination, e.g., a specific building, and tasked with figuring out how to travel to that new location. RECON created an improvised map in the form of a graph of steps along a path to that destination. Using these tactics, the Jackal robot was able to navigate up to 80 meters toward a destination in settings it had never experienced before, even where other robot-navigation methods had failed.
Next, the University of California, Berkeley team extended RECON with one specific addition in ViKiNG: they supply either overhead satellite photos of the new landscape or overhead maps to Jackal’s software. Unlike RECON, which conducts an uninformed search, ViKiNG incorporates geographic hints in the form of approximate GPS locations and overhead maps, according to Shah. When exploring a new area, this allows ViKiNG to achieve faraway goals up to 25 times farther than the farthest goal reached by RECON, and to accomplish targets up to 15 times faster. When outfitted with ViKiNG, Jackal travels well beyond RECON’s 80 meters, traversing over 3 kilometers (nearly two miles) from start to finish.
ViKiNG builds upon its predecessor program, RECON, by adding “hints” in the form of overhead satellite or overhead schematic data of the landscape (Shah et al., 2022).
Sources note that the ViKiNG program incorporated a further 12 hours of footage from “teleoperated” trips, in which a human led the Jackal along pathways such as sidewalks and hiking trails to build up those prior examples.
Further effort and trial-and-error testing are required to handle a vehicle driving at high speed amid unseen elements such as jaywalking pedestrians. The team is hopeful that the present study will lay the groundwork for full-scale autonomous cars. For now, the University of California describes ViKiNG as the first step toward a “sidewalk delivery robot.” At the same time, it is a major win for the application of reinforcement learning in self-driving vehicles.
The Indian government’s Department of Science and Technology (DST) has partnered with semiconductor manufacturing giant Intel to boost artificial intelligence (AI) readiness in the country.
The newly announced joint program named ‘Building AI Readiness among Young Innovators’ was officially launched on the National Science Day by Union Minister of State for Science and Technology, Dr. Jitendra Singh.
The program aims to improve digital readiness among Indian students participating in DST’s INSPIRE-Awards MANAK scheme from grades 6 to 10. The newly launched program will teach students the fundamentals of AI and its various disciplines, such as computer vision, natural language processing, statistical data, and more.
The launch of Building AI Readiness among Young Innovators was held as part of the National Science Day celebrations, which marked the end of a science week organized under the ‘Azadi ka Amrit Mahotsav’ program. This new development is also a step toward Intel’s commitment to promoting artificial intelligence readiness in India.
Dr. Jitendra Singh said, “Scientists must be classified in three categories as per age – the first one being children below 15 years. Every day, we come across incredible innovations by young minds.” He further added that we need to identify them when they are young and offer them mentorship along with an opportunity to experiment and explore.
The program will combine both technology and socio-economic aspects in order to create an AI-ready generation by equipping students with the knowledge and skills necessary to use AI effectively.
“We are collaborating with governments globally to build a digital-first mindset and expand access to the AI skills needed for current and future jobs,” said Shweta Khurana, Director APJ – Government Partnerships & Initiatives, Global Government Affairs Group, Intel.
She also mentioned that they are looking forward to working with the Department of Science and Technology to help India become more digitally ready.
Multinational retail corporation Walmart announces the launch of its new artificial intelligence-powered technology named Choose My Model that allows shoppers to try clothes virtually.
The recently released feature will considerably help buyers in shopping and aid Walmart in providing a better buying experience to its customers.
Customers can choose from 50 models with heights ranging from 5’2″ to 6’0″ and sizes ranging from XS to XXXL using this feature.
Additionally, Walmart plans to provide 70 new model options in the coming weeks, allowing customers to choose from a more extensive range of sizes, skin tones, and hair colors.
Earlier, Walmart had acquired Zeekit, and this new technology is a product developed after the acquisition. Walmart’s goal with Zeekit is to provide a more inclusive, immersive, and personalized digital experience that more closely resembles physical shopping.
Choose My Model will allow customers to visualize how items will appear on them without going to a store.
Executive Vice President of Apparel and Private Brands at Walmart US, Denise Incandela, said, “We want to have a best in class shopping experience online, and we feel like this is shopping of the future, and we wanted to lead the way.”
She further added that everything boils down to giving the customer the assurance they need to make that purchase. Choose My Model is currently available on select items from Free Assembly, Scoop, ELOQUII Elements, Sofia Jeans by Sofia Vergara, Time and Tru, Athletic Works, No Boundaries, Terra & Sky, Avia, and numerous others.
Walmart mentioned it has received much positive feedback from customers on its Choose My Model feature and plans to expand its capabilities in the future.
“The extraordinary, positive customer feedback out of the gate underscores our opportunity and ability to solve a common online shopping problem and build a true, personal connection between Walmart and our customers,” added Denise.
Online learning platform Udacity announces that it has partnered with Standard Chartered to offer 100 scholarships to science, technology, engineering, and mathematics (STEM) students and technology professionals.
The fully-funded scholarships will be provided to individuals from Bengaluru and Chennai. Those who get selected as one of the 100 scholarship winners will begin their Nanodegree program.
The primary goal of Udacity and Standard Chartered is to develop a pipeline of qualified candidates who will be considered for job openings at Standard Chartered after successful completion of the program. After finishing the course, participants will obtain an official diploma and a portfolio of real-world projects.
Udacity and Standard Chartered’s courses will impart the fundamental knowledge and skills the market demands to make learners industry-ready. Learners will be considered for immediate job opportunities at Standard Chartered in the data science, full-stack engineering, and blockchain development fields.
Udacity and Standard Chartered have developed three programs to train learners for specific roles, namely:
Data Scientist – Learners will be taught subjects like NLP, computer vision, TensorFlow, deep learning, artificial intelligence, and more to help them gain real-world data science experience.
Full-stack Software Engineer – The course will teach data modeling, SQL, full-stack app development on AWS, Azure applications, and more.
Blockchain Developer – The program will impart essential skills like cryptography basics, blockchain fundamentals, infrastructure security, blockchain architecture, and many more.
The selected individuals will be placed in complex projects and use cases, including artificial intelligence, machine learning, and blockchain at Standard Chartered. The application process started on 2nd March and will continue till 1st April 2022.
The course will commence on 11th April and end on 11th August 2022. The Nanodegree program also includes multiple opportunities to participate in networking and mentoring sessions to learn industry best practices.
Interested candidates can apply for this scholarship from the official website of Udacity.
Technology giant Microsoft and NASSCOM announce their second AI Gamechangers awards to boost artificial intelligence adoption in India.
The newly announced award program will provide startups, enterprises, academia, governments, and NGOs a platform to showcase their innovations in the artificial intelligence space and also promote the adoption of such technologies.
In India, over 5,000 AI patents have been filed in the past decade, 94% of them in the last five years. Initiatives like AI Gamechangers will therefore also motivate other companies and bodies to develop new products and showcase them to a larger audience.
The winners will be featured in NASSCOM’s flagship Xperience AI Summit’s annual ‘AI Gamechangers’ compendium.
President of NASSCOM, Debjani Ghosh, said, “India was ranked 8th in the top 10 countries by AI patent families on a global level, an impressive accomplishment considering India had no AI-related patent filing prior to 2002.”
She further added that AI has enormous growth potential, but realizing it will require scalable partnerships among stakeholders, the correct mix of skills, a strong focus on incentivizing R&D, data availability, and rules that enable responsible AI.
By 2025, data and AI could add $450-500 billion to India’s GDP, accounting for 10% of the $5 trillion target. It is therefore important that initiatives are taken to promote the adoption of artificial intelligence solutions in the ever-developing world.
National Technology Officer at Microsoft India, Dr. Rohini Srivathsa, said, “We have a huge opportunity to make AI work at scale for the country, enabling investment, jobs, and inclusive growth.”
Salesforce announces Churn Predictions for Tableau CRM, a new artificial intelligence-powered feature for communication service providers. According to recent data, users whose needs are not satisfied are more likely to discontinue their contracts and switch to a different service provider. Therefore, the new feature will allow companies to identify such users and take measures to retain them.
Communication service providers (CSPs) can create a single picture of their customers with Churn Predictions for Tableau CRM, which includes revenue and service history, historical customer service interactions, and a measure of account health.
In addition, the artificial intelligence-powered feature also comes with predictive analytics capabilities that display suitable product recommendations and additional discounts for customers to help CSPs retain their customers.
Employees who are in direct contact with customers can use this functionality to better anticipate customer needs and make recommendations for the next steps to improve customer satisfaction.
Gartner analyst Jason Wong said, “To telcos, it’s a big deal because churn is everything to them. (They want to) retain as many customers as possible because switching costs are very expensive — especially when you factor in some of the device giveaways.”
Churn Predictions combines signals and insights from customer interactions and data, including the type of customer service, history, and resolution information, with external data, such as billing, network, and data consumption, using artificial intelligence.
The tool then analyzes the gathered data to provide predictive analytics that help users identify which customers may be at a higher risk of terminating their services.
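The general pattern of churn scoring (train a classifier on account signals, then rank customers by predicted risk) can be sketched in a few lines. This is a hypothetical illustration on synthetic data; the feature names and model here are invented and are not Salesforce's actual implementation.

```python
# Hypothetical churn-scoring sketch (not Salesforce's actual model):
# train a classifier on synthetic usage/billing features, then rank
# customers by predicted churn risk for retention outreach.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "support_tickets": rng.poisson(2, n),       # service-interaction history
    "monthly_bill": rng.normal(60, 15, n),      # billing signal
    "data_usage_gb": rng.normal(20, 8, n),      # network/consumption signal
})
# Toy ground truth: more tickets and higher bills raise churn odds.
logit = 0.8 * df["support_tickets"] + 0.03 * df["monthly_bill"] - 4
df["churned"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X, y = df.drop(columns="churned"), df["churned"]
model = LogisticRegression(max_iter=1000).fit(X, y)
df["churn_risk"] = model.predict_proba(X)[:, 1]   # probability of churn

# Surface the five highest-risk accounts for a retention offer.
print(df.sort_values("churn_risk", ascending=False).head())
```

In practice the same ranking step is what drives "next best action" recommendations: agents see the highest-risk accounts first, along with the features that contributed to the score.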
Churn Predictions for Tableau CRM is now broadly accessible, and it is a part of Salesforce’s Communications Cloud offering for telecom firms.