The Indian Institute of Management (IIM) Ahmedabad has launched a new research center for artificial intelligence and data science, named the Brij Disa Center for Data Science and Artificial Intelligence (CDSA).
According to IIM officials, the center will conduct research to develop artificial intelligence solutions for challenges faced by businesses, policymakers, and the government. Researchers at CDSA will collaborate with data-intensive enterprises to build case studies for classroom teaching.
The center will also host seminars and conferences to educate a large number of people in the field of artificial intelligence and data science. Director of IIM Ahmedabad, Errol D’Souza, said, “Data Science and artificial intelligence are increasingly impacting businesses across the world. AI has become an inevitable part of our lives.”
“To harness the power of these advanced technologies for augmenting businesses, the need of the hour is to bring together different stakeholders onto the same platform, conduct intensive research to identify challenges, determine potential, and provide impactful insights,” he further explained.
Experts believe the center will be extremely useful, as there is massive worldwide demand for new solutions in the data science and artificial intelligence industry.
CDSA will publish an annual report on the adoption of artificial intelligence in the country, helping practitioners understand the challenges the industry is facing and enabling them to develop solutions for them.
Co-chairperson of CDSA, Prof Anindya S Chakraborti, said, “AI has transitioned from being a field of science to an integral element of our daily lives. There is a pressing need to harness the power of Data Science and Artificial Intelligence.”
He further mentioned that the center would be a game-changer, as it would provide important insights about the industry that would help policymakers and scientists.
The Bangalore traffic police department is all set to introduce new artificial intelligence technology on the streets to check violations of traffic norms. The technology will free traffic police personnel to invest their time in other necessary duties.
The advanced artificial intelligence-powered technology will be able to detect violations such as riding without a helmet, rash driving, jumping red signals, driving without a seatbelt, and using a mobile phone while driving.
Under its new program, the Integrated Traffic Enforcement Management System (ITeMS), the Bangalore traffic police will install more than three hundred artificial intelligence-enabled CCTV cameras on the streets of the city by the end of this year.
A senior traffic police officer said, “The cameras can alert the command center whenever a vehicle with a history of traffic violations passes by. The police at the next signal can stop this particular vehicle. This will also reduce unnecessary stopping of vehicles for checks.”
The artificial intelligence technology will automatically generate an e-challan for violators of traffic norms and send it to their mobile phones. Eighty dedicated cameras will be installed at locations like TC Palya, BTMC Shantinagar, ACS Center, and Deve Gowda Junction to check for red light violations using RLVD (Red Light Violation Detection) technology.
With this new technology, offenders won’t be able to escape the police, as the e-challan generation process will be completely automated. The system will also include an Automatic Numberplate Recognition System (ANRS).
A transportation expert, Vinoba Isaac, said, “This is standard equipment used in many cities. Bosch had installed ten cameras on an experimental basis a few years ago. The advantage is that violations can be detected without the intervention of police.”
He further mentioned that this technology would also be helpful in reducing many corrupt practices of police personnel.
The Indian Institute of Management (IIM) Jammu has launched a new executive program in machine learning and artificial intelligence for business. It is a seven-month-long online course that will teach participants Python, R programming, and Tableau.
Graduates and diploma holders from recognized institutes can apply for the new program. The program would considerably increase learners’ knowledge and prepare them to meet current industry requirements.
The first round of registrations will end on 25th September 2021. The unique program is specially designed for working professionals who intend to gain knowledge about artificial intelligence and machine learning to stand apart from others.
It will also help learners add value to their respective businesses or companies. The course will begin on 30th September 2021 and will cover topics like exploratory data analysis and visualization, statistics for business, big data analysis, applications of various machine learning and artificial intelligence technologies, and AI at the production level.
The course will also feature real-life case studies from Harvard University and IIM Bangalore. Interested candidates can submit their applications through the official website of IIM Jammu. The course is priced at Rs 71,500, excluding tax. Early birds will also get a discount of Rs 7,000. Participants with more than 60% attendance will be given a certificate after completing the course.
They would also get a chance to get IIM Jammu Executive Alumni status. All the classes will be conducted online on Sundays from 3:30 PM to 6:30 PM. Earlier this month, IIM Jammu had invited job applications to fill various non-teaching positions.
Artificial intelligence startup ThirdAI raises $6 million in its seed funding round led by Neotribe Ventures. Other investors like Cervin Ventures and Firebolt Ventures also participated in the funding round.
ThirdAI plans to use the fresh funds to conduct research and development operations for its deep learning technology to democratize artificial intelligence.
The company’s unique platform allows businesses to increase the processing speed of deep learning tasks without dedicated hardware. The firm also plans to use the funds to expand its workforce and invest in computing resources.
The CEO of ThirdAI, Anshumali Shrivastava, said, “When we looked at the landscape of deep learning, we saw that much of the technology was from the 1980s, and a majority of the market, some 80%, were using graphics processing units.”
He further added that they aim to change the traditional computing process that requires expensive hardware and engineers. The company has developed a solution that uses general-purpose central processing units to compute vast amounts of data quickly, without the need for any additional graphics processing units.
Shrivastava first had the idea while studying mathematics at Rice University. Houston-based startup ThirdAI was founded by Anshumali Shrivastava, Paul Holzhauer, and Tharun Medini on April 15th, 2021. It took the researchers over ten years to develop ThirdAI’s unique deep learning technology.
The founder and managing partner of Neotribe Ventures, Swaroop Kolluri, said, “It is not just the computing, but the memory and ThirdAI will enable anyone to do it, which is going to be a game-changer. As technology around deep learning starts to get more sophisticated, there is no limit to what is possible.”
He also added that the technology is at a very early stage, and that he invested in ThirdAI because he believes the company is capable of enhancing its deep learning tool further.
Tech-driven prestige makeup company IL Makiage has acquired Israeli computer vision startup Voyage81. No information has been shared regarding the valuation of the acquisition deal.
Voyage81 is an industry-leading deep-tech artificial intelligence-based computational imaging startup that specializes in developing hyperspectral imaging applications for smartphones.
With this acquisition, IL Makiage plans to integrate Voyage81’s platform with its own to enhance its machine learning capabilities, helping IL Makiage bring innovations to the beauty and wellness industry at a larger scale. The acquisition will also help IL Makiage launch a new brand in the beauty industry.
CEO of IL Makiage, Oran Holtzman, said, “For the past two years, we have been searching for computational imaging solutions that can work in beauty and wellness to advance our existing AI capabilities further. I have met dozens of computer vision startups but could not find a technology that can fit our industry and was strong enough to fulfill our goals.”
He further added that integrating Voyage81’s patented technology and experienced team with their data science department would enable the brand to deliver groundbreaking innovations.
Voyage81 is currently developing new camera hardware in collaboration with many industry-leading smartphone manufacturers that would allow users to capture high-quality images in low lighting conditions. The hardware will use Voyage81’s artificial intelligence-powered hyper-scale imaging technology to carry out its operations.
Co-founder and CEO of Voyage81, Dr. Boaz Arad, said, “Combining Voyage81’s physics-based algorithms with IL MAKIAGE’s existing data science team and utilizing the company’s one billion+ data points and unprecedented daily incoming data flow, will further boost our AI vision capabilities.”
He also added that together the two companies would be able to achieve technological superiority over competitors and conquer the next frontier in the beauty and wellness industry.
With the proliferation of the Internet of Things and social networking channels, the world is generating a massive flow of data for commercial and scientific use. We frequently hear how artificial intelligence (AI) has advanced with the introduction of massive datasets enabled by the emergence of social media and our growing reliance on digital solutions in everyday life. While privacy regulations restrict the use of user data, a more pressing concern is that customer behavior is rapidly changing, and historical data risks becoming obsolete by the time it is gleaned, processed, and prepared for AI training. Further, the presence of bias in an AI algorithm can make it ineffective.
While training an artificial intelligence model, developers feed the input data and the expected outcomes to the model — based on this, the model can configure its own rules to make the most out of the given information. Hence the adage: AI is as good as the data it trains on. So, to push technological development, we need more data. The more data a model can train on, the better the model will perform.
It is imperative that the models are trained on a high-quality dataset as well. However, with new privacy regimes like the GDPR in Europe and the CCPA in California, accessing real-time quality data is not always possible. Fortunately, there is an exception. According to the GDPR, data protection rules apply to all information relating to a natural person who is identified or identifiable. The rules do not extend to anonymous information — information that does not relate to an identified or identifiable natural person, or personal data rendered anonymous in such a way that the individual is no longer identifiable.
This has led to the rise of synthetic data — data that is synthesized with an emphasis on privacy. Synthetic data rests on the assumption that the synthesized data is mathematically and statistically equivalent to the real-world dataset it replaces. This enables data analysts to derive the same mathematical and statistical conclusions from examining a synthetic dataset as they would from analyzing the actual data. According to Gartner, by 2024, 60% of the data employed in the development of AI and analytics projects will be synthetically generated.
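As a minimal illustration of this statistical-equivalence idea (a NumPy sketch under simplifying assumptions — real tools use far richer generative models, and the feature names here are invented), one can fit a simple parametric model to real data and draw a synthetic stand-in from it:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for a "real" dataset: 2 correlated features (say, age and a score).
real = rng.multivariate_normal(mean=[40.0, 0.6],
                               cov=[[25.0, 0.8], [0.8, 0.04]],
                               size=5000)

# Fit a simple generative model: the sample mean and covariance of the real data.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# Draw a synthetic dataset from the fitted model. No row corresponds to a
# real individual, yet aggregate statistics closely match the original.
synthetic = rng.multivariate_normal(mean=mu, cov=sigma, size=5000)

print(np.abs(synthetic.mean(axis=0) - real.mean(axis=0)))
```

An analyst computing means, variances, or correlations on `synthetic` would reach essentially the same conclusions as on `real`, which is the property the passage above describes.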
It can be excessively expensive and tedious to collect real-world data with the required features and variety. After data collection, annotating data points with the right labels is a must, as mislabeled data can lead to incorrect model results. These procedures can take months and require significant time and money. Moreover, when data must be gathered about a rarely occurring event, the final dataset may contain too few relevant data points to support an informed decision.
Synthetic data does not require manual data capture and can have almost flawless annotations because it is generated programmatically. It is automatically labeled and can contain rare but critical corner cases, allowing better prediction of rare events. For instance, collecting data to train an autonomous vehicle to maneuver on roads filled with potholes may not be possible; manufacturers can instead use synthetic data to test the vehicles under the desired conditions.
At present, there are two widely used methods to generate synthetic data:
Variational Autoencoder: An autoencoder whose encodings are regularized during training so that its latent space is well-structured and can be sampled to create fresh data. The encoder compresses the original data into a latent representation, and the decoder produces an output that approximates the original data. The system is trained to minimize the reconstruction error between the encoded-decoded data and the initial data, with a regularization term that keeps the latent space smooth.
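The encode → sample → decode flow and the two loss terms can be sketched in NumPy. This is a single forward pass with randomly initialized linear layers — the layer sizes and weight names are illustrative assumptions, and a real VAE would train these weights by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
x = rng.normal(size=(8, 16))            # batch of 8 inputs with 16 features
d_in, d_latent = 16, 4

# Randomly initialized encoder/decoder weights (training is omitted here).
W_mu = rng.normal(size=(d_in, d_latent)) * 0.1
W_logvar = rng.normal(size=(d_in, d_latent)) * 0.1
W_dec = rng.normal(size=(d_latent, d_in)) * 0.1

# Encoder: map each input to the parameters of a Gaussian in latent space.
mu, logvar = x @ W_mu, x @ W_logvar

# Reparameterization trick: sample z = mu + sigma * eps, keeping it differentiable.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# Decoder: reconstruct the input from the latent sample.
x_hat = z @ W_dec

# Loss = reconstruction error + KL divergence that regularizes the latent space.
recon = np.mean((x - x_hat) ** 2)
kl = -0.5 * np.mean(np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))
print(recon, kl)
```

Minimizing `recon` drives the encoded-decoded output toward the input, while the KL term pulls each latent Gaussian toward a standard normal so that new data can later be generated by sampling z directly.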
Generative Adversarial Network: A GAN comprises two neural network models — a generator and a discriminator. The generator is trained to create plausible fake data from a random source, while the discriminator is trained to tell the difference between the generator’s simulated data and actual data. The process continues until the discriminator can no longer distinguish natural from synthetic data.
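The adversarial loop can be sketched with a deliberately tiny model: a linear generator and a logistic-regression discriminator on 1-D data, with hand-derived gradients. This is an illustrative toy under strong simplifying assumptions, not a practical GAN (real GANs use deep networks and an autodiff framework):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from N(4, 1.25). The generator must learn to mimic it.
real_mu, real_sd = 4.0, 1.25

# Generator G(z) = w*z + c; discriminator D(x) = sigmoid(a*x + b).
w, c = 1.0, 0.0
a, b = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    xr = rng.normal(real_mu, real_sd, batch)      # real batch
    z = rng.normal(size=batch)
    xf = w * z + c                                # fake batch from the generator

    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    pr, pf = sigmoid(a * xr + b), sigmoid(a * xf + b)
    grad_a = np.mean((pr - 1) * xr) + np.mean(pf * xf)
    grad_b = np.mean(pr - 1) + np.mean(pf)
    a -= lr * grad_a
    b -= lr * grad_b

    # --- Generator update: push D(fake) toward 1 (non-saturating loss) ---
    pf = sigmoid(a * xf + b)
    dg = (pf - 1) * a                             # dL_G / d(fake sample)
    w -= lr * np.mean(dg * z)
    c -= lr * np.mean(dg)

print(c, w)  # learned generator offset and scale
```

Each iteration alternates the two updates described above; as the generator’s offset and scale approach the real distribution’s, the discriminator’s outputs for real and fake batches become harder to tell apart.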
Other methods for generating synthetic data include the Wasserstein GAN, the Wasserstein Conditional GAN, and the Synthetic Minority Oversampling Technique (SMOTE).
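Of these, SMOTE is simple enough to sketch directly: it oversamples a rare class by interpolating between a minority point and one of its nearest minority-class neighbours. The following is a bare-bones NumPy version, without the refinements of library implementations such as imbalanced-learn:

```python
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples from X_min of shape (n, d).

    Each synthetic point lies on the segment between a randomly chosen
    minority sample and one of its k nearest minority-class neighbours.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Squared pairwise distances within the minority class.
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # a point is not its own neighbour
    nn = np.argsort(d2, axis=1)[:, :k]     # indices of the k nearest neighbours

    base = rng.integers(0, len(X_min), n_new)        # anchor points
    nbr = nn[base, rng.integers(0, k, n_new)]        # one neighbour per anchor
    u = rng.random((n_new, 1))                       # interpolation factor
    return X_min[base] + u * (X_min[nbr] - X_min[base])

# Example: a rare-event class with only 6 observations, upsampled to 20.
minority = np.array([[1.0, 2.0], [1.2, 1.9], [0.9, 2.2],
                     [1.1, 2.1], [1.3, 2.0], [1.0, 1.8]])
synthetic = smote(minority, n_new=20, k=3)
print(synthetic.shape)  # (20, 2)
```

Because every synthetic point is a convex combination of two genuine minority samples, the new data stays inside the region the rare class already occupies — which is exactly how SMOTE addresses the rare-event imbalance discussed earlier.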
The main advantage of generating synthetic data is that it nullifies the risk of privacy violations. While it is more difficult to extract sensitive information from training sets and parameters than from plain input/output, studies clearly show that reverse-engineering models and recreating training data are possible. Since a synthetic dataset contains no information about real people while retaining the key properties of a real dataset, it advances the cause of privacy in artificial intelligence models. Amid alarming news of data leaks and the failures of traditional data anonymization tools, synthetic data is a better solution. Its anonymity allows organizations to comply with privacy rules and avoid being penalized for non-compliance. Synthetic data also enables rapid prototyping, development, and testing of sophisticated computer vision systems without jeopardizing human identities.
The excitement revolving around the use of data has been tempered by outrage about data privacy. With statistical fidelity similar to that of real data, synthetic data promises to address not only privacy but also data unavailability, data restrictions, and AI bias concerns. When producing synthetic data, it is crucial to step back and look at the entire dataset to determine what changes might improve the model’s ability to predict the expected results. Although research is still in its infancy, several innovative solutions have emerged with a combined focus on academia and industry, going a long way toward ushering in a shift from real data to synthetic data this decade.
In recent years, scientists have employed artificial intelligence to enhance translation across programming languages and automatically fix problems. Machines can now construct increasingly sophisticated word representations thanks to advances in natural language processing (NLP).
NLP models can recognize the human voice and written text, interpret it in a machine-readable format, and communicate in human language rather than code. Every year new iterations of existing NLP models are introduced that can perform tasks like writing emails, articles, sentiment analysis, text extraction, etc., with better accuracy. Despite these advancements, a lack of diversity in artificial intelligence can result in additional systemic issues. For instance, NLP research primarily concentrates on spoken languages, ignoring the more than 200 signed languages in the world and the nearly 70 million people who use them to communicate.
Although signed languages make up a large portion of the world’s languages, they are rarely included in the training datasets for NLP models. Hence there is a rising demand for technology that can handle signed languages, along with growing recognition of their importance.
Kayo Yin, a master’s student at the Language Technologies Institute, recently co-authored an article advocating for the inclusion of signed languages in NLP research. The paper titled, “Including Signed Languages in Natural Language Processing,” won the Best Theme Paper Award at this month’s 59th Annual Meeting of the Association for Computational Linguistics.
However, bringing about this change won’t be easy, as sign languages are not universal. They vary from country to country and even between regions of a large country. For instance, the thumbs-up gesture is considered a sign of approval in India and the USA, but giving somebody a thumbs-up in Greece, Iran, Russia, Sardinia, or parts of West Africa could get you in trouble! Similarly, while the V gesture means victory in the USA, in ASL it stands for the number 2, and in China and Thailand it is used when posing for photos. The same sign created considerable controversy for George H. W. Bush when he flashed it to an Australian audience with the palm pointing inward, which is considered a huge insult.
While humans can fully comprehend the nuances of a language, computers may not be as adept. For example, a model may find it challenging to process abstract input like a sarcastic comment, or to learn that books is the plural of book yet the plural of deer is deer. So when it comes to signed languages, NLP models must first be carefully trained for the target audience and then gradually upgraded to function in diverse situations.
Yin says researchers need to work hand in hand while developing the NLP models using signed language datasets. “We can’t fully understand signed language if we only look at the visuals,” she adds. Meanwhile, Yin is happy that her paper was received well by natural language processing researchers and people studying and using signed languages. Yin hopes that the paper motivates people to make a significant change in the community.
Renowned investment company Temasek launches a new platform named AIcadium, a global center for artificial intelligence technologies that will provide solutions to businesses across the globe for better workflows and outcomes.
The new platform uses machine learning technology to deliver the best services to its clients. Under the supervision of Temasek’s head of AI strategy and solutions, Michael Zeller, AIcadium is building a strong and experienced team of researchers, data scientists, and engineers to carry out its operations.
Temasek wanted to increase its network portfolio at a global scale by launching a product that would enable enterprises to achieve better business outcomes using artificial intelligence technologies.
In a blog post, AIcadium quoted Michael Zeller: “Temasek is committed to driving digital transformation across its portfolio. We recognize that in order to meet the global challenges we will face in the coming years, companies across industries need to be efficient, sustainable, and growth-oriented, often in exponential ways.”
Using the vast amount of data available with Temasek, AIcadium will provide the best available artificial intelligence solutions for businesses to tackle various commonly faced challenges. The platform uses deep learning technology to analyze multiple use cases offered by organizations.
It then feeds the gathered data to its artificial intelligence model to provide solutions for business challenges like predictive maintenance and computer vision-powered employee safety assurance. AIcadium is also capable of performing accurate natural language processing across various languages.
Michael Zeller said, “Artificial Intelligence has a key role to play in enabling our portfolio and talent to be resilient, as well as ready to seize future opportunities and growth. Initiatives like Aicadium are how we deploy our capital to catalyze novel solutions to transform businesses and further social impact.” Data scientists interested in joining AIcadium can find the company at the KDD Data Science Conference 2021, held from 14th to 18th August.
Artificial intelligence (AI) and drones are predicted to alter the course of combat and warfare in the future. While the military plans to leverage the computer vision capabilities of artificial intelligence to hunt down submarines, detect enemy intrusions, or decode messages using machine learning, several countries around the world have given the nod to using drones in the name of national security. The information gathered by these military drones can assist in solving several problems at once and determining the best course of action. However, not everyone is on board with using AI-powered drones!
Difficult geographical locations pose a daunting challenge in terms of accessibility, and mobility becomes harder still in regions where the army does not have authorization. Against this backdrop, the military may consider it critical to study population activities in general, and drones in particular, for their ability to address significant issues in transportation and space maneuvering. Also, advancing military research on the use of artificial intelligence in drones would mean global dominance in technology and weaponry.
Most recently, President Joe Biden stated that the United States of America will engage in “intense rivalry” with the People’s Republic of China. This means being able to confront Beijing for control of global trade, influence trade and technology laws, and, if necessary, fight and win a war with the world’s second-biggest economy. The Pentagon is already researching war scenarios in which AI is permitted to operate on its own after receiving commands from a human. Though the Pentagon promises to build an “ethical” AI army, that will not be easy.
The drone market has witnessed huge growth thanks to commercial applications like aerial photography and medical and grocery delivery — all at a fraction of the cost of their military counterparts. This is precisely why commercial drones have outpaced military ones in adoption.
However, arming these commercially available drones with human-like cognitive skills via artificial intelligence would transform them into potent targeted weapons available to rebel militias and terrorists for a fraction of the cost of the military drones used by the US government. They could misuse the technology for collecting intelligence, surmounting ground-based physical barriers, and carrying out highly effective airstrikes.
Meanwhile, leaders want drones to have more autonomy, allowing warfighters to delegate crucial duties to them. Commercial providers, on the other hand, have yet to attain advanced autonomy despite several attempts. As a result, military groups worldwide are seeking systems that can operate autonomously in highly complicated, disputed, and congested settings, such as GPS-denied areas or military camps protected by severe electromagnetic interference. This has pushed national defense bodies to turn to startups for technical support.
For instance, since 2018, US Special Operations Command has been using Shield AI’s autonomous technologies onboard smaller quadcopter drones. The company claims that its patented Hivemind AI technology is well-suited to allowing unmanned aircraft to carry out a variety of duties, including “infantry clearance operations” and “breaching integrated air defense systems with unmanned aircraft.” Amid the Pentagon’s demand for VTOL drones that have a smaller footprint during vertical take-off and landing (helpful in crowded areas) and operate efficiently in GPS-denied areas, Shield AI will be integrating its capabilities with the V-Bat drone.
The V-Bat drone has become a new favorite among US military personnel. With a wingspan of 10 feet and a 183cc two-stroke engine powering a ducted-fan propulsion and control system, it can reach top speeds of around 90 knots and altitudes of up to 20,000 feet. The drone was recently selected as one of the finalists in the Navy’s Mi2 Challenge, a competition that aims to “accelerate the identification and evaluation of Unmanned Aerial Systems (UAS) capable of operating in austere deployed situations without ancillary support systems.” Meanwhile, the Army Expeditionary Warrior Experiment, which looks for “concepts and capabilities at the lowest tactical echelon in support of Multi-Domain Operations (MDO),” is investigating the V-Bat for future usage.
China is not lagging behind either. This year the country unveiled a shark-shaped drone to aid in the surveillance and tracking of hostile ships and submarines. Developed independently by Beijing-based Boya Gongdao Robot Technology, the stealthy shark drone can travel at speeds of six knots and will assist the country’s military with surveillance and search-and-destroy tasks. It was revealed at the 7th China Military Intelligent Technology Expo. This news came after researchers from Harbin Engineering University revealed they had developed an AI-automated underwater drone capable of identifying and destroying enemy craft without human input.
This month, South Korea’s Defense Acquisition Program Administration (DAPA) announced that the country will test grenade-launching drones in 2022 that can be controlled remotely over a two-kilometer range while carrying gunpowder-filled 40mm rounds. Even Russia is ready with its Forpost drones, Kronshtadt Orion combat drones, Okhotnik-B (“Hunter”) stealth combat drones, Orlan-10 surveillance drones, and more.
This may not be the first time we are hearing news of the possible use of artificial intelligence-powered military drones, as scattered incidents have already taken place. In September 2019, Iran attacked Saudi Arabia with drones and cruise missiles. Turkey has developed a drone named Kargu-2 that allegedly “hunted down” retreating soldiers loyal to Libyan General Khalifa Haftar. However, much remains unknown about how autonomous the drone was. According to its manufacturer, Defense Technologies and Trade (STM), Kargu-2 uses machine learning-based object classification to select and engage targets and also has swarming capabilities that allow 20 drones to work together.
While nations are busy adding killer drones to their arsenals, some are simultaneously working on anti-drone solutions too. US defense startup Epirus has developed Leonidas, a technology that can disable a hostile drone while leaving a friendly drone a few feet away unharmed. Using super-dense gallium nitride power amplifiers and AI algorithms to stabilize, focus, and direct energy at precise frequencies, Leonidas can take out both large fixed-wing drones and small quadcopters.
In July this year, Anduril Industries was given a five-year contract for up to $99 million by the Pentagon’s Defense Innovation Unit to make the company’s counter-drone artificial intelligence technology available across the military. The autonomous c-UAS solutions from Anduril ingest surveillance data to detect, monitor and notify military users of possible threats.
While leveraging artificial intelligence in military drones seems like a plausible future, there are certain caveats that need immediate attention.
Experts warn that the datasets used to train these autonomous weapons to classify and recognize objects like buses, vehicles, and humans may not be sufficiently complex or robust, and that the AI system may learn the wrong lessons. In addition, the black-box nature of these algorithms makes it hard to understand why the system made a particular decision, which complicates both training and legal accountability.
Even during data transmission between drones in a swarm and to the ground control unit, slower, more segmented point-to-point links can cause significant latency. Much research is needed on time-sensitive targeting over the network and on transmitting intelligence in real time across a widely dispersed array of integrated combat “node points” of drones.
To make matters worse, artificial intelligence models still have trouble correctly identifying objects and faces in the field. When a picture is slightly altered by introducing noise of any type, the models are easily confused. This creates problems both when deploying a drone attack and when disabling a swarm of drones. Critics also point out that the use of facial recognition in drones (especially military surveillance drones) raises ethical, moral, and legal dilemmas.
However, on the brighter side, if more countries acquire armed drones, there is a strong likelihood to set up international legal frameworks, as well as democratic principles such as transparency, accountability, and the rule of law.
Bengaluru-based conversational artificial intelligence company Senseforth.AI raises $14 million from Fractal in a corporate funding round. Senseforth.AI plans to use the funding to expand into the global market and tap into Fractal’s customer base.
The investor will also use this opportunity to further enhance its artificial intelligence platform with the help of Senseforth.AI’s expertise. It is being predicted that by 2022 more than 30% of the customer support sector will adopt conversational artificial intelligence solutions for their operations.
Senseforth.AI’s platform A.ware enables businesses to quickly train and deploy conversational AI models cost-effectively and efficiently. The platform also helps reduce operational costs and increase productivity.
Co-founder and the Chief Executive Officer of Senseforth.AI, Shridhar Marri, said, “This strategic investment creates a new growth blueprint for Senseforth.ai. We are thrilled to deliver on our vision “to make technology human-like,” enabling continuous Human-AI interaction, transforming complex business processes.”
He also mentioned that this funding would allow them to invest more in the research and development process in order to offer better services to its customers.
Senseforth.AI was founded by Krishna Kadiri, Shridhar Marri, and Ritesh Radhakrishnan in 2017. The firm has raised a total of $16 million in funding to date. Senseforth.AI’s platform also offers pre-built business bots for a better customer service experience.
Co-founder and Vice-Chairman of Fractal, Shrikant Velamakanni, said, “Senseforth.ai’s founders Shridhar, Krishna and Ritesh have built a great team and a robust platform that’s deployed at scale with marquee clients like HDFC Bank.”
He further added that Senseforth.AI’s technology is the current market leader in the conversational AI industry, and that they are thrilled to be part of the next phase of the company’s growth.