Songtradr, the world’s first data-powered, full-stack music licensing platform, has acquired artificial intelligence (AI) metadata and music search firm Musicube. Neither company disclosed the valuation of the recently signed deal.
The acquisition expands Songtradr’s portfolio of tech-enabled music solutions, which includes licensing the right music for any content, driving higher ROI, and enabling companies to quantitatively engage their target audiences.
Musicube’s technology further strengthens Songtradr’s B2B music solutions, which combine data, tech-enabled intelligence, and creative excellence.
“Their (Musicube) impressive team of passionate musicologists and data scientists understand the power of data and its relationship with music, which ultimately benefits our brand and agency customers as well as music rights holders,” said Songtradr CEO Paul Wiltshire.
Wiltshire added that outstanding metadata enrichment technology is a crucial component of the B2B music supply chain and a prerequisite for having the world’s top B2B music search and recommendation technology, a goal this acquisition will help Songtradr achieve.
By leveraging neural networks and its own artificial intelligence, Musicube’s semantic search has achieved product leadership in quality and data depth.
Germany-based AI metadata and music search company Musicube was founded by Agnes Chung and David Hoga in 2019. The startup specializes in providing an artificial intelligence-powered solution that offers rich metadata to music labels, publishers, rights holders, and anyone looking for song discoverability tools. The solution’s database contains over 50 million song titles with ISRCs from all contributors and over 500 keywords.
“Songtradr is the most exciting music company we have seen in a very long time,” said CEO and Founder of Musicube, David Hoga. He also mentioned that they are overjoyed to be joining this extraordinary team of music lovers and engineers, since every interaction they have had has been marked by both cordiality and ambitious goals.
Tech giant Axon recently hit pause on its ‘dystopian’ Taser drone project, which it had formally begun developing as a deterrent to school mass shootings. The project was described as “non-lethal and remotely operated.” However, the company’s AI Ethics Board erred on the side of caution, warning that such a system would require strict monitoring and real-time surveillance.
In recent years, Axon has grown into a law enforcement software and hardware behemoth. In the wake of recent mass shootings, it aimed to produce Taser-equipped drones, position them at prospective school shooting targets, and surround those sites with surveillance cameras capable of real-time streaming.
The AI Ethics Board was briefed and deliberated about the taser-equipped drones. Nine of the twelve board members said, “After several years of work, the company has failed to embrace the values that we have tried to instill. We have lost faith in Axon’s ability to be a responsible partner.”
Despite these deliberations, the board was informed at short notice that Axon planned to announce the concept worldwide, bypassing its vote of caution. The panel has since lost members over Axon’s recent technology decisions in the context of mass shootings.
This unanticipated series of events led Axon to pause work on the project. CEO Patrick W. Smith wrote in response, “In light of feedback, we are pausing work on this project and refocusing to further engage with key constituencies to fully explore the best path forward.” Axon still hopes to refine the drone system so that it can deploy and incapacitate an active shooter within 60 seconds of being notified.
Apple has introduced its redesigned MacBook Air and an upgraded 13-inch MacBook Pro, both powered by the new M2 chip, which takes the exceptional performance and capabilities of the M1 even further.
M2 marks the beginning of the second generation of Apple’s M-series chips. With industry-leading power efficiency, a unified memory architecture, and custom technologies, M2 brings superior performance and capabilities to Apple’s most popular Mac notebooks, viz. the MacBook Air and the 13-inch MacBook Pro.
Apple’s new flagship chip features a next-generation 8-core CPU with advances in both its performance and efficiency cores. It also includes Apple’s next-generation GPU, which now has up to 10 cores, two more than the M1 chip. M2 delivers 100GB/s of unified memory bandwidth and supports up to 24GB of fast unified memory, enabling it to handle larger and more complex workloads with remarkable ease.
M2 also incorporates a next-generation media engine and a powerful ProRes video engine for hardware-accelerated encoding and decoding. This design dramatically speeds up video workflows, allowing systems with M2 to play back more streams of 4K and 8K video than before.
With an M2 chip, the latest MacBook Air features a 13.6-inch Liquid Retina display, up to 18 hours of battery life, a 1080p FaceTime HD camera, a four-speaker sound system, and MagSafe charging.
At the same time, the 13-inch MacBook Pro delivers excellent performance with up to 24GB of unified memory, ProRes acceleration, and about 20 hours of battery life, courtesy of M2.
IBM is set to follow its roadmap for achieving large-scale, practical quantum computing by announcing plans to build 4000+ qubit quantum computers by the end of 2025.
The company aims to build a modular architecture that will enable more qubits in quantum computers. Along with that, IBM is working on a software orchestration layer to distribute the workloads across classical and quantum resources.
IBM announced its quantum roadmap in 2020, and since then, the company has delivered on its targets on schedule. IBM is set to launch Osprey, a 433-qubit processor, later this year, while Condor, the first universal quantum processor with more than 1,000 qubits, is expected in 2023.
Moreover, IBM is on track to meet its 2023 goals for Qiskit Runtime and other cloud-built workflows, which bring a serverless approach to the core quantum software stack and give developers greater flexibility and simplicity.
The serverless approach will be a pioneering step in achieving the effective and efficient distribution of problems across quantum and classical systems, creating the fabric of quantum-centric supercomputing.
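The orchestration idea described above can be illustrated with a toy dispatcher. This is a hypothetical sketch, not IBM's actual API: every function and task name here is invented for illustration, and the real software orchestration layer is far more sophisticated.

```python
# Toy illustration (not IBM's actual orchestration software) of routing
# subtasks of a workload to quantum or classical backends.

def run_classical(task):
    """Stand-in for executing a task on classical compute resources."""
    return f"classical:{task}"

def run_quantum(task):
    """Stand-in for submitting a circuit to a quantum processor."""
    return f"quantum:{task}"

def orchestrate(tasks):
    """Route tasks tagged 'q' to the quantum backend, the rest classical,
    mimicking the 'fabric' that distributes a problem across both."""
    results = []
    for name, kind in tasks:
        backend = run_quantum if kind == "q" else run_classical
        results.append(backend(name))
    return results

jobs = [("preprocess", "c"), ("sample-circuit", "q"), ("postprocess", "c")]
print(orchestrate(jobs))
# ['classical:preprocess', 'quantum:sample-circuit', 'classical:postprocess']
```

In a serverless model, a developer would only declare the tasks; deciding where and how each one runs would be handled by the platform.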
According to IBM CEO Arvind Krishna, there are a few hurdles that might hinder the plans to build the 4000 qubit system. Scaling the quantum computer systems, communicating amongst them, and getting the software to work and scale from a cloud into the computers are some of the problems Krishna mentioned.
To enable learning, the newly designed artificial skin includes a new form of processing system based on ‘synaptic transistors,’ which replicate the brain’s neural networks.
According to the researchers, it could aid in developing a next-generation series of robots with human-like sensibility.
Researchers from the University of Glasgow also released a video on YouTube demonstrating a robot hand that employs smart skin and has a remarkable ability to learn and respond to external stimuli.
The newly developed e-skin is inspired by how the human peripheral nervous system processes impulses from the skin to decrease latency and battery waste.
The researchers printed a grid of 168 synaptic transistors made from zinc-oxide nanowires directly onto a flexible plastic surface to create an electronic skin capable of a computationally efficient, synapse-like response. The synaptic transistors were then connected to a skin sensor located on the palm of a fully articulated, human-shaped robot hand.
“What we’ve been able to create through this process is an electronic skin capable of distributed learning at the hardware level, which doesn’t need to send messages back and forth to a central processor before taking action,” said lead researcher Ravinder Dahiya.
He added that the team sees this as a significant step forward in their efforts to develop large-scale printed neuromorphic electronic skin that can respond appropriately to stimuli.
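The "distributed learning at the hardware level" idea can be sketched in software. The following is a minimal, hypothetical model, not the Glasgow team's implementation: a sensor node holds a local "synaptic weight," decides by itself whether to trigger a reflex, and strengthens its response with repeated stimulation, so no round trip to a central processor is needed.

```python
# Hypothetical sketch of synapse-like local processing in one e-skin node.
# Names, thresholds, and the learning rule are illustrative only.

class SynapticNode:
    """One skin sensor with a local, trainable 'synaptic weight'."""

    def __init__(self, threshold=1.0, learning_rate=0.2):
        self.weight = 0.5           # synaptic strength (arbitrary units)
        self.threshold = threshold  # firing threshold for the reflex
        self.lr = learning_rate

    def sense(self, pressure):
        """Process a pressure stimulus locally; return True if the
        reflex fires. No central processor is consulted."""
        fired = self.weight * pressure >= self.threshold
        # Each repeated stimulus strengthens the synapse, so the node
        # 'learns' to react to a stimulus it has felt before.
        self.weight += self.lr * pressure
        return fired

node = SynapticNode()
responses = [node.sense(1.2) for _ in range(5)]
print(responses)  # [False, False, True, True, True]
```

The first touches go unanswered, but after repeated stimulation the node fires on its own, a crude analogue of the learned, low-latency response the researchers describe.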
Last year, Meta AI collaborated with researchers at Carnegie Mellon to develop a kind of synthetic skin named ReSkin that can be used in robotic hands. The one-of-a-kind robotic skin can help artificial intelligence-powered robots retrieve information such as item weight, texture, temperature, and state.
Artificial intelligence and deep learning startup Landing AI has added new edge capabilities to its LandingLens platform with the launch of LandingEdge.
With LandingEdge, manufacturers can deploy deep learning visual inspection solutions to edge devices on the manufacturing floor, allowing them to detect product flaws more accurately and consistently.
The company’s flagship product, the LandingLens platform, includes a variety of capabilities to assist teams in developing and deploying reliable and repeatable inspection systems for various jobs in a production setting.
Thanks to the enhanced edge capabilities, LandingLens users can now more easily integrate with industrial infrastructure to interact with cameras, apply models to images, and generate predictions that support real-time decision-making on the manufacturing floor.
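The camera-to-prediction loop described above can be sketched as a simple pipeline. This is a hypothetical illustration, not Landing AI's API: the function names are invented, and the "model" is a stand-in pixel-darkness heuristic rather than a trained deep learning network.

```python
# Hypothetical edge inspection loop of the kind LandingEdge enables.
# All names are illustrative; the real system uses trained vision models.

def capture_frame(camera_source):
    """Stub for grabbing the next image from a factory-floor camera."""
    return camera_source.pop(0)

def defect_score(frame):
    """Stand-in for a deep learning model: the fraction of pixels
    darker than a threshold is treated as a 'defect' signal."""
    dark = sum(1 for px in frame if px < 50)
    return dark / len(frame)

def inspect(camera_source, threshold=0.2):
    """Run inspection over all frames and return pass/fail decisions
    that a controller could act on in real time, on-device."""
    decisions = []
    while camera_source:
        frame = capture_frame(camera_source)
        decisions.append("FAIL" if defect_score(frame) > threshold else "PASS")
    return decisions

# Two simulated 8-pixel grayscale frames: one clean, one with dark defects.
frames = [[200] * 8, [200, 10, 12, 200, 8, 200, 200, 15]]
print(inspect(frames))  # ['PASS', 'FAIL']
```

Running the decision on the edge device itself, rather than a remote server, is what keeps the loop fast enough for real-time line control.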
Vice President of Outreach Technology and Vision Technology at Landing AI, David L. Dechow, said, “These products mark huge steps in bringing deep learning solutions to the factory floor that are easily integrated to perform an automated inspection for a broad range of applications.”
He further added that they give manufacturers and systems integrators more powerful tools to swiftly adopt inspection solutions that cut costs, boost productivity, and improve consumer product satisfaction.
Moreover, LandingLens has also been enhanced to train deep learning models up to seven times faster than before.
Recently, Landing AI also joined the NVIDIA Metropolis Partner program to accelerate AI performance and edge deployment. This will help the company improve its quality control in manufacturing and industrial applications.
United States-based technology startup Landing AI was founded by a former Chief Scientist at Baidu and founding lead of the Google Brain team, Andrew Ng, in 2017. To date, the startup has raised $57 million from several investors like McRock Capital, Intel Capital, Insight Partners, Samsung Catalyst Fund, and others over two funding rounds.
Under pressure from its employees, Google has canceled a talk by Thenmozhi Soundararajan, a US-based Dalit activist who was scheduled to speak on caste equity.
The talk, an initiative of Google’s Diversity, Equity, and Inclusion (DEI) program for employee sensitization, was being coordinated by Tanuja Gupta, a senior manager at Google News, who has since quit the company in protest.
Ahead of the talk, which was scheduled for April 18, organizers including Gupta allegedly received inflammatory mass emails from a group of pro-Hindu employees who accused Soundararajan of being anti-Hindu and claimed that their lives would be at risk if the talk proceeded.
A report by the Washington Post alleges that the cancellation was a direct consequence of pressure from the company’s South Asian employees.
After hearing about the talk’s cancellation, Thenmozhi wrote an email to Sundar Pichai, CEO of Google, expressing how alarming the situation was and how the issue must be addressed to achieve an equitable society.
Thenmozhi Soundararajan, a world-renowned anti-caste campaigner and former president of the Ambedkarites Association of North America (AANA), has led many causes that have drawn global attention to the discriminatory caste system in South Asian culture. Her California-based non-profit, Equality Labs, advocates for the civil rights of Dalits, formerly referred to as “untouchables” in a millennia-old system of social hierarchy.
Diversity, Equity, and Inclusion (DEI) programs, a standard part of most multinational companies today, primarily focus on race, gender, and sexual orientation. However, Dalit activists have lobbied for years to include caste sensitization in these programs.
Google has released a new Google Cloud architecture diagramming tool to help organizations go from architecture to implementation in a few steps. The idea is to guide users with a cloud use case from ‘ideas’ to ‘implementation’ via architecture.
The architecture diagram forms the foundation of the implementation journey. It allows users to share ideas, collaborate with people, and integrate the designs. To design a full-fledged architecture diagram, users need assistance in the form of a ‘reference architecture.’ This reference diagram can be tweaked to fit the purpose.
Sometimes users know where to start, but often they do not have a clue. Furthermore, transitioning from architecture to actual implementation can be intimidating. The Google Cloud architecture diagramming tool was introduced to address these challenges: it helps translate a built architecture into a deployed application with a few clicks.
The tool’s interface provides a centralized listing of all Google Cloud products and services, organized into categories like database and compute. Images and icons are built into the interface so users can assemble an architecture via drag and drop. The tool is also integrated with the Google Cloud Developer Cheat Sheet, letting users review the explanations and documentation associated with each component.
Beyond architecture-building assistance, the tool also includes 10+ prebuilt reference architectures that can be used as starting points. These references cater to common use cases like microservices, websites, compute, ML, and data science.
Once the architecture is ready, its components can be easily deployed in Google Cloud with just one click!
Image Source: ST Telemedia Global Data Centre, Singapore
The artificial intelligence (AI) adoption rate has grown exponentially in the past few years. While developed countries like the USA, China, and Japan are at the forefront of adopting the technology, countries like Singapore are not trailing behind either. Most recently, Minister for Communications and Information Josephine Teo announced the piloting of A.I. Verify, the world’s first AI Governance Testing Framework and Toolkit, at the World Economic Forum Annual Meeting in Davos in May this year. A.I. Verify provides a means for companies to measure and demonstrate how safe and reliable their AI products and services are.
The Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), which oversees the country’s Personal Data Protection Act, created the new toolkit to bolster the nation’s commitment to encouraging the ethical use of AI. This development adheres to the guidelines imposed in the Model AI Governance Framework in 2020 and the core themes of the National AI Strategy in 2019. Through self-conducted specialized testing and process inspections, A.I. Verify aims to improve transparency in the usage of AI between organizations and their stakeholders. Being the first of its type, A.I. Verify is ready to help organizations navigate the complex ethical issues that arise when AI technology and solutions are used.
IMDA also said that A.I. Verify abides by globally established AI ethics standards and norms, including those from Europe and the OECD, and covers critical aspects such as repeatability, robustness, fairness, and social and environmental wellness. The framework also incorporates testing and certification regimes covering components like cybersecurity and data governance.
The new toolkit is now available as a minimum viable product (MVP), which includes ‘basic’ capabilities for early users to test and provide feedback on for product development. It performs technical testing based on three principles, “fairness, explainability, and robustness,” combining widely used open-source libraries into a self-assessment toolbox: SHAP (SHapley Additive exPlanations) for explainability, the Adversarial Robustness Toolbox for adversarial robustness, and AIF360 and Fairlearn for fairness testing.
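To make the fairness-testing piece concrete, here is a minimal, hand-rolled version of the kind of metric libraries like Fairlearn and AIF360 automate: the demographic parity difference, i.e., the gap in positive-prediction rates between demographic groups. This is an illustrative sketch, not A.I. Verify's own code.

```python
# Hand-rolled demographic parity difference, the sort of group-fairness
# metric that Fairlearn/AIF360 compute for toolkits like A.I. Verify.

def demographic_parity_difference(predictions, groups):
    """Return the max gap in positive-prediction rate across groups.
    0.0 means all groups receive positive predictions at the same rate."""
    counts = {}  # group -> (positives, total)
    for pred, grp in zip(predictions, groups):
        pos, total = counts.get(grp, (0, 0))
        counts[grp] = (pos + pred, total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Toy example: group 'a' gets a positive outcome 75% of the time,
# group 'b' only 25% of the time.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
print(demographic_parity_difference(preds, groups))  # 0.5
```

A self-assessment toolbox would compute metrics like this over a model's actual predictions and flag large disparities in the generated report.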
The MVP Testing Framework addresses five pillars of concern for AI systems, encompassing 11 widely recognized AI ethics principles: transparency; explainability; repeatability/reproducibility; safety; security; robustness; fairness; data governance; accountability; human agency and oversight; and inclusive growth with social and environmental well-being.
The five pillars are as follows:
transparency in the usage of AI and its systems;
knowing how an AI model makes a decision;
guaranteeing AI system safety and resilience;
ensuring fairness and no inadvertent discrimination by AI;
and providing adequate management and monitoring of AI systems.
Organizations can participate in the MVP piloting program, gaining early access to the MVP and using it to self-test their AI systems and models. This also allows for developing international standards and creating an internationally applicable MVP to reflect industry needs.
Finally, A.I. Verify intends to analyze deployment transparency, support organizations in AI-related ventures, evaluate products or services to be offered to the public, and guide prospective AI investors through AI’s advantages, risks, and limits.
The pilot toolkit also generates reports for developers, managers, and business partners, covering essential areas that influence AI performance and putting the AI model to the test. It is packaged as a Docker container, so it can be quickly installed in the user’s environment. The toolkit currently supports binary classification and regression algorithms from popular frameworks such as scikit-learn, TensorFlow, and XGBoost, among others.
According to IMDA, the test framework and tools will let AI system developers undertake self-testing not only to ensure that a product meets market criteria but also to provide a common platform for presenting test results. Overall, A.I. Verify aims to validate the claims AI system developers make about their AI use and the performance of their products, rather than to define ethical norms.
However, the innovation has its limitations. According to IMDA, the toolkit does not ensure that the AI system under examination is free of biases or security issues. Furthermore, the MVP cannot define ethical criteria; it can only verify statements made by AI system creators or owners about their systems’ methodology, usage, and verified performance.
Because of these constraints, it is difficult to say how A.I. Verify will aid stakeholders and industry participants in the long term. For now, it is unclear how developers will ensure that the information they provide to the toolkit prior to self-assessment is accurate rather than speculative, a technological challenge A.I. Verify will have to overcome.
Singapore intends to cooperate with AI system owners and developers worldwide to collect and produce industry benchmarks for the creation of global AI governance standards. It participates in ISO/IEC JTC 1/SC 42 on AI to promote the interoperability of AI governance frameworks and the development of international AI standards, and is working with like-minded governments and partners; for instance, it has collaborated with the US Department of Commerce to ensure interoperability between their AI governance frameworks.
According to IMDA, several organizations have already tested and provided comments on the new toolset, including Google, Meta, Microsoft, Singapore Airlines, and Standard Chartered Bank. With industry input and comments, more functionalities will be gradually introduced.
Singapore hopes to strengthen its position as a leading digital economy and AI-empowered nation by introducing the toolkit as it continues investing in and developing AI capabilities. While Singapore aspires to be a leader in creating and implementing scalable and impactful AI solutions by 2030, it is evident that the country places a substantial value on encouraging ethical AI practices.
Sudalai Rajkumar, head of artificial intelligence and machine learning at Growfin, has become a Kaggle Quadruple Grandmaster, a milestone he announced in a post on his LinkedIn account.
Kaggle, a Google subsidiary, is an online community of data scientists and machine learning experts.
Kaggle lets users search and publish datasets, study and build models in a web-based data science environment, collaborate with other data scientists and ML experts, and compete in data science contests.
With this new development, Sudalai Rajkumar becomes the third Indian to receive the title of Kaggle Quadruple Grandmaster.
Apart from Rajkumar, Abhishek Thakur, Chris Deotte, Bojan Tunguz, and Rohan Rao have previously achieved this title. Rajkumar has been working with Growfin.ai since 2021 and currently serves as the company’s head of AI & ML.
In his LinkedIn post, he mentioned, “Elated to join the elite group of Quadruple Kaggle Grandmasters. Thank you all for the love and support.”
Rajkumar has a Bachelor of Engineering degree from PSG College of Technology and a degree in Business Analytics and Intelligence from the Indian Institute of Management Bangalore. Before joining Growfin.ai, Rajkumar had worked with companies like H2O.ai, Argoid, and GeoIQ.io.