
Amazon launches SageMaker Canvas at its re:Invent 2021 conference

AWS CEO Adam Selipsky kicks off re:Invent 2021. Image Source: AWS

Amazon launched SageMaker Canvas, a tool that allows users to develop machine learning models without writing any code, during a keynote at its re:Invent 2021 conference yesterday. SageMaker Canvas gives AWS users a point-and-click interface for running a machine learning workflow end to end, generating predictions, and publishing the findings.

The keynote opened with an account of how EC2 instances were AWS's first true game-changer and of the company's upcoming plans for its Arm-based Graviton processors. At the event, Amazon unveiled its third-generation in-house server processor, Graviton3; IoT TwinMaker; its in-house cloud AI training chip, Trn1; and AWS Mainframe Modernization. Amazon also announced the preview of AWS Private 5G, a new managed service that helps enterprises set up and scale private 5G mobile networks in their facilities in days instead of months.

Designed for users who lack extensive artificial intelligence knowledge or training, SageMaker Canvas supports multiple problem types, including binary classification, multi-class classification, numerical regression, and time series forecasting. This breadth means business-critical use cases such as fraud detection, churn reduction, and inventory optimization can be tackled without writing a single line of code.

According to AWS CEO Adam Selipsky, Canvas allows customers to browse and access petabytes of data from both cloud and on-premises data sources, such as Amazon S3, Redshift databases, and local files. Using automated machine learning technology, Canvas creates models that users can then explain and analyze, as well as share with others who can contribute to and deepen the findings. Users can also combine datasets with a single click, train reliable models, and generate updated predictions as new data becomes available.

Read More: Microsoft Lobe — A Free Platform For No-Code Machine Learning

SageMaker Canvas automates the most time-consuming aspects of data preparation. The application aids in the detection of errors such as missing spreadsheet fields and automates the tedious effort required in merging data from several sources.

After curating the training dataset, businesses can begin building their AI model. SageMaker Canvas estimates a model's accuracy before it is launched, so teams can review the estimate and, if necessary, adjust their datasets to improve it.

SageMaker Canvas examines hundreds of AI models and selects the most effective one for the task the user wants to automate. With a few clicks, workers can train the selected model on their own datasets.

SageMaker Canvas employs Amazon SageMaker's AutoML technology, which automatically trains and builds models based on your dataset. SageMaker Canvas uses this to choose the optimal model for your data and deliver single or bulk predictions. Because Canvas is integrated with SageMaker Studio, business analysts can easily share models with data scientists.
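For readers who would rather stay in code, the AutoML engine that Canvas builds on is also reachable through the SageMaker Python SDK. Below is a minimal sketch of that flow; the S3 bucket, IAM role, and target column are hypothetical placeholders, and exact parameters may differ across SDK versions:

```python
# Minimal sketch of driving SageMaker's AutoML engine from the Python SDK.
# The bucket, role ARN, and CSV layout below are placeholders, not real resources.
import sagemaker
from sagemaker.automl.automl import AutoML

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # hypothetical role

automl = AutoML(
    role=role,
    target_attribute_name="churned",                 # column to predict (hypothetical)
    output_path="s3://my-bucket/automl-output",      # hypothetical bucket
    max_candidates=10,                               # cap the number of trial models
    sagemaker_session=session,
)

# Train on a CSV in S3; AutoML explores preprocessing and model candidates.
automl.fit(inputs="s3://my-bucket/training/churn.csv", wait=True)

# Deploy the best candidate behind a real-time endpoint.
predictor = automl.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```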

Building a machine learning model often requires not just coding expertise but also experience with AI-specific development tools such as TensorFlow. The demand for such specialized expertise can hamper enterprise AI initiatives in numerous ways.

If a corporation lacks in-house AI knowledge, it may need to bring in outside experts for machine learning initiatives. Meanwhile, companies that already have the requisite technical know-how can face difficulties too: business users frequently need developer support to launch AI or machine learning initiatives, which slows the rate at which businesses can implement AI. Hence, no-code solutions are being increasingly adopted across industries to boost the use and deployment of machine learning-powered AI tools. According to Gartner, by 2024, no-code development will account for 80% of all ICT goods and services.

Amazon SageMaker Canvas is now available in the US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Europe (Ireland) AWS Regions.


To read more about SageMaker Canvas, visit here.


Spotify CEO invests in AI-powered Defense Technology startup Helsing


Daniel Ek, CEO of the popular music streaming platform Spotify, has invested over $133 million in artificial intelligence-powered defense technology startup Helsing. Ek made the investment through his investment company, Prima Materia, during Helsing's Series A funding round.

As part of the investment, Ek will join Helsing's board of directors. Helsing plans to use the newly raised funds to integrate artificial intelligence technology into military equipment and weapons to expand their capabilities.

According to the company, the developed AI-powered equipment will first be made available to French, German, and British militaries. Helsing aims to provide an information advantage to armies of democratic countries with the use of artificial intelligence. 

Read More: Xenobots: the world’s first living robot that can reproduce

The company stated, “We believe AI in defence and security must be based on both the highest ethical standards and strong democratic oversight. We follow a multi-domestic approach, providing each country with sovereign, national-eyes-only technology access.” Helsing currently operates in the United Kingdom and Germany, and it plans to open an office in France by the first quarter of 2022 to expand into new markets.

Berlin-based artificial intelligence defense company Helsing was founded by Gundbert Scherf, Niklas Kohler, and Torsten Reil in 2021. The startup specializes in developing AI-enabled security systems for governments and militaries.

However, the investment has drawn constant criticism on various social media platforms, including Twitter. According to one report, over 80% of those who viewed Ek's tweet about the investment were unhappy with the development.

Twitter users argued that music is a channel for attaining peace and that an investment in defense technology by Spotify's CEO contradicts that philosophy. Many exasperated Spotify users also said they were considering unsubscribing from the service.


Q-learning algorithm to generate shots for walking Robots in Soccer Simulations


RoboCup is an annual international robotics competition in which teams of robots from around the world play soccer against one another. Behind the game is a research project that uses footballing robots in a quest to develop machines that could one day assist humans. The tournament's ultimate goal is to create a team of robots capable of competing against and beating human players in a soccer match by 2050.

The humanoid robots play soccer using vision: each robot makes decisions based on its position and analyzes the field using information already programmed into it. The robots consist of onboard computing hardware for the brain, webcams for vision, and a body full of motorized parts that lets them pursue the ball around the pitch and shoot for goal. Using its camera, a robot assesses the field around it and then communicates over Wi-Fi with its teammates to decide on a move.

A critical factor in robotic soccer is that the robots on a team can simultaneously make decisions about shooting at the goal.

Read more: Researchers used GAN to Generate Brain Data to help the Disabled

The scientists and AI researchers who design these soccer robots can test their computational tools in the RoboCup 3D Soccer Simulation League, which replicates the real RoboCup environment in simulation and lets researchers design and implement AI strategies and methods that help robots score more goals.

Every team participating in the league ultimately aims to outscore its opponent. To that end, researchers from China and Iran have recently created a new simulation technique, based on the Q-learning algorithm, that enhances robots' ability to shoot the ball efficiently while walking.

To generate shots in simulation, the approach combines inverse kinematics and point analysis. If these mathematical methods succeed in simulation, they can be applied to real-world soccer robots as well. The paper, published on SpringerLink, explains in detail the Q-learning approach used in the 3D Soccer Simulation League.
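The paper's exact formulation is not reproduced here, but the core of any Q-learning method is a table (or function) that maps state-action pairs to expected long-term reward and is updated as the agent acts. The toy Python sketch below illustrates that update rule with made-up shooting states and actions; it is not the researchers' actual simulation code:

```python
# Toy tabular Q-learning loop; the states, actions, and rewards here are
# illustrative stand-ins, not the paper's actual shot-generation setup.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration
ACTIONS = ["shoot_left", "shoot_center", "shoot_right"]
Q = defaultdict(float)                   # Q[(state, action)] -> expected reward

def choose_action(state):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Standard Q-learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One hypothetical step: robot 2 m from goal at a slight angle takes a shot.
state, next_state = ("dist_2m", "angle_left"), ("dist_0m", "scored")
action = choose_action(state)
update(state, action, reward=1.0, next_state=next_state)
```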


Xenobots: the world’s first living robot that can reproduce


Researchers from the University of Vermont, the Wyss Institute for Biologically Inspired Engineering at Harvard University, and Tufts University have discovered a new form of biological reproduction and used it to create the first self-replicating living robot. The same team first reported building organisms from the stem cells of the African clawed frog, Xenopus laevis, in 2020. Now they have designed the Xenobot, the world's first living robot that can reproduce.

A Xenobot is a computer-designed, hand-assembled organism that can swim out into a tiny dish, find single cells, and gather hundreds of them inside its Pac-Man-shaped mouth to assemble baby Xenobots. These offspring look and move just like their parents and can, in turn, go out, find cells, and build copies of themselves.

In a Xenopus laevis frog, these embryonic cells would ordinarily develop into skin. Although the cells carry an unaltered frog genome, rather than growing into tadpoles, this computer-designed collection of cells replicates in a way entirely different from how frogs reproduce, a form of replication seen in no known animal or plant species.

Read more: UC San Diego uses deep learning to map the human cell, finds new cell components

A Xenobot parent, made of about 3,000 cells, forms a sphere, but such a system typically dies out because it is hard to get it to keep reproducing. With the help of an AI program running on the Deep Green supercomputer cluster, however, the team could test millions of candidate body shapes, such as triangles, pyramids, starfish, and squares, to find one that would let the Xenobot cells replicate more effectively.

The AI program came up with strange designs, including one that resembled Pac-Man. Once scientists rebuilt the initial Xenobot parents in the Pac-Man shape, the parents produced children, grandchildren, great-grandchildren, and further generations of Xenobots. The right design markedly extended the number of generations the Xenobots could produce.
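The article does not detail the search procedure, but shape searches of this kind are commonly run as an evolutionary loop: mutate candidate designs, score each one in simulation, and keep the best. The toy sketch below illustrates that loop with a hypothetical grid encoding and a stand-in fitness function in place of the team's physics simulator:

```python
# Toy evolutionary search over body shapes; the shape encoding and the
# fitness function are illustrative stand-ins for the team's simulator.
import random

def random_shape(size=5):
    # Encode a body plan as a binary grid: 1 = cell present, 0 = empty.
    return [[random.randint(0, 1) for _ in range(size)] for _ in range(size)]

def mutate(shape):
    # Flip one randomly chosen cell of the parent design.
    child = [row[:] for row in shape]
    i, j = random.randrange(len(child)), random.randrange(len(child[0]))
    child[i][j] ^= 1
    return child

def fitness(shape):
    # Stand-in score; the real work evaluates replication in a physics sim.
    return sum(sum(row) for row in shape) + random.random()

population = [random_shape() for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                     # keep the top half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
```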

The notion of self-replicating biotechnology can seem exhilarating; for the research team, though, the goal was to understand the property of replication. These millimeter-sized living machines were entirely contained in the laboratory, easily extinguished, and vetted by federal, state, and institutional ethics experts. Bongard, one of the research scientists, points out that a technology where you can quickly tell an AI system, "We need a biological tool that does X and Y and suppresses Z," could be valuable for producing solutions that matter deeply, such as vaccines during the COVID pandemic.


OpenAI announces new Residency program for AI talents, will offer them full-time roles


Artificial intelligence research and deployment company OpenAI has announced a new Residency program that offers AI talent a pathway to full-time roles at the company.

The program is meant to help engineers and researchers who do not currently work in the artificial intelligence domain join OpenAI, as it aims to recruit talent from underrepresented fields of technology and science. It is a six-month course in which all residents will be provided a stipend of $750 for the entire duration.

The curriculum of the program is unique in that it gives learners hands-on experience working with OpenAI's team rather than theoretical knowledge. OpenAI has started accepting applications for the spring 2022 intake, and the last date for submitting applications is 14th January 2022.

Read More: Researchers used GAN to Generate Brain Data to help the Disabled

Chief scientist of OpenAI, Ilya Sutskever, said, “We’ve welcomed incredible new talent to OpenAI through our Fellows and Scholars programs, who have made major research contributions and helped advance OpenAI’s goal of building beneficial AGI.” 

He further added that the residency program targets people who want to contribute to the artificial intelligence domain but cannot find a way in. The course will focus on the most practical artificial intelligence skills needed to contribute to the sector.

Individuals from unconventional educational backgrounds can readily apply for the residency program, and OpenAI will provide relocation and immigration support to highly talented applicants. The program offers two tracks: AI software engineering and AI research. The former suits individuals with an engineering background, and the latter suits people looking to shift into a research scientist role from a non-ML scientific field.

The OpenAI Residency program is set to commence on 18th April 2022, and the selection list will be declared by mid-March. Interested candidates can apply here.


Researchers used GAN to Generate Brain Data to help the Disabled


On November 18, 2021, researchers at the USC Viterbi School of Engineering published a paper on generating synthetic brain activity data. The paper, published in Nature Biomedical Engineering, reports that the researchers trained an AI model to produce neural data for building a brain-computer interface (BCI) system.

Brain-computer interface technology is user-specific and must be trained on each patient's particular dysfunction, which requires large amounts of brain data. Collecting that data is sometimes difficult, expensive, or even impossible, as when paralyzed individuals cannot produce sufficient brain signals. To sidestep the difficulty of collecting real-world data for the BCI algorithm, the researchers turned to synthetic data generated artificially by computers.

The brain data is generated using generative adversarial networks (GANs), computational frameworks that pit two neural networks against each other to produce new synthetic data instances. The computer-generated brain data is then used to train an algorithm for BCI systems.
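As a rough illustration of that adversarial setup (not the study's actual architecture), the PyTorch sketch below trains a toy generator against a toy discriminator on random stand-in vectors in place of recorded neural data:

```python
# Minimal GAN training loop in PyTorch; the tiny MLPs and random "real"
# data are toy stand-ins, not the study's spike-train models.
import torch
import torch.nn as nn

DIM, NOISE = 32, 8
G = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(), nn.Linear(64, DIM))
D = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(16, DIM)            # placeholder for recorded neural data
    fake = G(torch.randn(16, NOISE))

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = loss_fn(D(real), torch.ones(16, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator call fakes real.
    g_loss = loss_fn(D(fake), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```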

Read more: Synopsys to use AI for optimizing Samsung’s latest Smartphone designs

According to the researchers, GAN-synthesized neural data speeds up overall training of the BCI system by a factor of 20 compared with traditional methods.

A brain-computer interface works by feeding neural signals called spike trains into its algorithm. The BCI system analyzes the brain signals or impulses and converts them into digital instructions, allowing impaired patients to operate digital devices like computer cursors or joysticks by mental commands alone.
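As a toy illustration of that decoding step (not the USC team's method), the sketch below maps one bin of simulated spike counts to a 2-D cursor velocity with a linear decoder whose weights are random placeholders rather than learned values:

```python
# Toy sketch of a BCI decoding step: binned spike counts from a few
# "neurons" are mapped to a 2-D cursor velocity by a linear decoder.
import numpy as np

rng = np.random.default_rng(0)
N_NEURONS = 16

# Simulated spike counts in one 50 ms bin (placeholder for real recordings).
spike_counts = rng.poisson(lam=3.0, size=N_NEURONS)

# Linear decoder: velocity = W @ spikes + b. In a real BCI the weights
# would be learned from training data, synthetic or recorded.
W = rng.normal(scale=0.05, size=(2, N_NEURONS))
b = np.zeros(2)

cursor_velocity = W @ spike_counts + b  # (vx, vy) for this time bin
print(f"Decoded cursor velocity: {cursor_velocity}")
```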

People with paralysis, motor dysfunction, and locked-in syndrome can all benefit from BCI systems to enhance their quality of life. BCI devices come in various forms, such as caps, helmets, and hair bands, that continuously monitor brain signals once the patient wears them.


UNESCO unveils First Global Agreement On Ethics Of Artificial Intelligence: What Next?

Image Source: UNESCO

The United Nations Educational, Scientific, and Cultural Organization (UNESCO) has published a global standard on artificial intelligence (AI) ethics that almost 200 member nations are expected to embrace.

The standard establishes shared values and concepts that will guide the creation of the legal infrastructure required to support AI’s healthy growth. According to UNESCO, there are various advantages to using AI, but there are also drawbacks, such as gender and ethnic bias, substantial risks to privacy, dignity, and agency, dangers of mass surveillance, and greater use of inaccurate AI technologies in law enforcement.

This is not the first time concerns about accountability in the use of AI technologies have come into the limelight. While those concerns have ushered in discussions on AI ethics and regulatory checks against the possibility of AI's misuse outstripping its benefits and promises, a gap remains.

While AI technologies offer enormous promise for social and economic growth, policymakers face complicated and divergent hurdles in governing them. Bias, stereotyping, and prejudice are among the issues AI brings to the table in these discussions. Amid these concerns, AI-generated analysis is increasingly being used to make decisions in both the public and private sectors. Hence, UNESCO has asked for AI to be created in a way that promotes fair outcomes.

Presenting the 28-page document, formally titled “Recommendation on the Ethics of Artificial Intelligence,” Audrey Azoulay, director-general of UNESCO, stated that the recommendation focuses on the following points:

  • Individuals will be better protected by ensuring openness, agency, and control over their personal data, which will go beyond what digital companies and governments are doing. It emphasizes that everyone should be allowed to access and even delete their personal data records.
  • The standard strictly prohibits the use of artificial intelligence (AI) systems for social scoring and mass surveillance. It insists that while building regulatory frameworks, member states should keep in mind that ultimate responsibility and accountability must always rest with people and that artificial intelligence technology should not be given legal authority in itself.
  • The ethical impact assessment is designed to assist nations and businesses in assessing the impact of AI systems on persons, society, and the environment as they develop and deploy AI systems.
  • The standard suggests that governments evaluate the AI system’s direct and indirect environmental impacts throughout its life cycle, including its carbon footprint, energy consumption, and the environmental effect of raw material extraction for manufacturing AI technologies. 

The Recommendation will include provisions to prevent real-world biases from being repeated online, as well as tangible policy actions based on universal values and principles. It will also instruct UNESCO to assess each country’s progress in the field of AI to assist them in the implementation phase.

In 2018, Audrey Azoulay, the Director-General of UNESCO, announced an ambitious initiative to establish an ethical framework for the use of artificial intelligence across the world. Three years later, owing to the mobilization of hundreds of experts worldwide and extensive international talks, UNESCO's 193 member nations have officially embraced this AI ethics framework.

According to Azoulay, “The world needs rules for artificial intelligence to benefit humanity. The recommendation on the ethics of AI is a major answer. It sets the first global normative framework while giving states the responsibility to apply it at their level.” Azoulay asserts that UNESCO will support its 193 member states in its implementation and ask them to report regularly on their progress and practices.

Meta (formerly known as Facebook) has been at the center of various controversies surrounding the ethics of AI in recent years. Cambridge Analytica, a now-defunct British political consulting business, exploited Facebook’s data to sway the Brexit referendum in the United Kingdom and Donald Trump’s victory in the United States. In 2018, Timnit Gebru, a former employee of Google’s Ethical AI team, and Joy Buolamwini, a researcher, demonstrated that face recognition software was less accurate in recognizing women and persons of color than it was in identifying white men.

UNESCO's proposals come less than a month after China released its own set of AI ethical principles, focusing on user rights and aligning with its ambition of becoming a global AI leader by 2030.

Read More: China releases Guidelines on AI ethics, focusing on User data control

Meanwhile, AI experts are concerned that few African voices have been included in the worldwide ethical regulations that offer guidelines for AI research. This is crucial, as African countries are investing more in AI and machine learning research and development today. Data Science Africa, Data Science Nigeria, and the Deep Learning Indaba with its satellite IndabaX events, which have taken place in 27 African nations to date, demonstrate the interest and public investment in the AI disciplines.

Surprisingly, China, one of UNESCO's 193 member nations, came out in favor of the 28-page recommendation, even as surveillance cameras with biometric facial recognition dot its cityscapes. China is already in hot water for its role in the technologically aided persecution of the Muslim Uyghur minority in the autonomous region of Xinjiang, as well as its struggle against Hong Kong's democracy movement, fueling fears of AI being used to control public behavior.

Hundreds of cities, from Dubai to Nairobi and Moscow to Detroit, have installed cameras with facial recognition technology (FRT), with the assurance of feeding data to central command centers as part of smart-city crime solutions. It is worth noting that most of these city governments did not obtain public approval for the unbridled use of facial recognition in the name of law enforcement.

On the brighter side, several countries, such as Belgium and Luxembourg, have stated their opposition to the use of facial recognition technology.

The success of this initiative rests with governments around the world and their willingness to adhere to the protocols and guidelines that will ensure the ethical use of AI in the long run. A comprehensive follow-up will be necessary to ensure that the UNESCO recommendation can successfully offer universal standards for policy and legislation. As per the document, governments should create a regulatory framework that lays out a mechanism for conducting ethical impact assessments on AI systems to foresee outcomes, minimize risks, avoid adverse repercussions, increase citizen engagement, and address societal concerns. Algorithms, data, and design processes should all be auditable, traceable, and explainable.

While this is not a one-day assignment, experts hope this will encourage more conversations. Other international bodies have been working on AI ethics as well. For instance, the Organization for Economic Cooperation and Development (OECD) issued “Principles on Artificial Intelligence” in 2019, which supports “respect [for] human rights and democratic ideals” while adopting the technology.


Kolkata Police to use AI technology to identify traffic norms violators


Kolkata Police has announced that it plans to use AI technology to identify traffic norm violators. The police handle thousands of traffic violation cases daily, and the new artificial intelligence-powered solution will help them carry out their day-to-day operations.

The AI system will use over 2,500 CCTV cameras installed across the city to recognize helmet-less bike riders and parking violations. The system will process complaints and send them to violators automatically, a task that would consume a great deal of time if done manually.

The police department has requested a session with experts to understand the functionalities and capabilities of the newly developed artificial intelligence surveillance system. The final plan regarding the deployment of the AI system is yet to be discussed. 

Read More: Synopsys to use AI for optimizing Samsung’s latest Smartphone designs

“There are some plans in this regard. It is too early to comment till anything gets finalized,” said DCP Traffic, Ajit Sinha. The artificial intelligence-powered video analytics software will be installed at the Lalbazar police control room and will scan the number plates of traffic norm violators in real time.

The system will also reduce the chances of any injury caused to police officers while physically stopping violators. The AI system will be connected to the RTO server so that it can instantaneously send e-challans to traffic norm violators through SMS. It is believed that the deployment of the new AI system will also help in reducing any sort of corrupt practices. 

Earlier this year, Bengaluru police also introduced a similar AI-powered system on its streets to check traffic violations. The system, named the Integrated Traffic Enforcement Management System (ITeMS), can recognize various traffic rule violations, including red-light jumping, rash driving, helmet-less riding, mobile phone use while driving, and many more.


Synopsys to use AI for optimizing Samsung’s latest Smartphone designs


Electronic design automation and semiconductor company Synopsys has announced that it is using an AI design system to optimize Samsung's latest smartphone chip designs. Samsung has already used the technology successfully to create a high-performance design on an advanced process technology.

Apart from smartphone designs, Synopsys has collaborated with Samsung to develop and design next-generation Exynos processors that will feature in Samsung's upcoming smartphones. Synopsys used its DSO.ai platform to help Samsung develop performance-oriented chipsets for its smartphones.

Samsung will use the same award-winning DSO.ai platform, coupled with the Synopsys Fusion Compiler RTL-to-GDSII solution, to optimize the designs of its new smartphones. Synopsys's DSO.ai uses reinforcement learning to achieve better results. The DSO.ai technology has been deployed at various stages of the chip design process, helping Samsung reduce research time and engineering effort.

Read More: Worldwide AI Software Market to Reach $62 Billion in 2022

Chairman and co-CEO of Synopsys, Aart De Geus, said, “This pivotal moment in semiconductor history will breathe new life into Moore’s law. We congratulate Samsung on this remarkable achievement, and we look forward to catalyzing its next 1000x.” 

He further added that autonomous chip design once existed only in science fiction, but Synopsys' technology has made it possible in the real world. Experts suggest this technology is a breakthrough in the microchip manufacturing and design industry, as it will considerably help architects design better processors in less time with reduced manual effort.

EVP of infrastructure and design technology center at Samsung Electronics, Thomas Cho, said, “Not only have we demonstrated that AI can help us achieve PPA targets for even the most demanding process technologies, but through our partnership, we have established an ultra-high-productivity design system that is consistently delivering impressive results.”

Earlier this year, Synopsys acquired Concertio, an artificial intelligence-enabled real-time performance optimization firm, to further strengthen its capabilities as a Silicon to Software partner for electronics product companies.


Pony.ai Receives Clearance for Paid Autonomous Robotaxi Services in Beijing


Full-stack autonomous driving company Pony.ai has received clearance for a paid autonomous robotaxi service in Beijing, China. The approval allows Pony.ai to provide its self-driving taxi service in the Beijing Intelligent Networked Vehicle Policy Pilot Zone, located in the southern part of the capital city.

Pony.ai will now be able to charge passengers fees for rides in its autonomous taxis. The development allows the company to deploy up to 100 autonomous taxis in the designated area, a step toward its goal of commercializing robotaxis at a global level.

CEO and Co-founder of Pony.ai, James Peng, said, “Supportive policies, development of safe technology, and public acceptance are the keys to accelerating commercialization of the autonomous driving industry, and Pony.ai has conducted abundant testing of the application scenarios and product forms of autonomous driving over the past five years.” 

Read More: Standard AI to drive future of Autonomous Checkouts at Retail Stores

He further added that this approval marks the next step in self-driving car development, as the new policy will allow Pony.ai to test and validate its commercialization model. Prior to this policy, Pony.ai's PonyPilot+ robotaxi service operated free of charge within a restricted area of 60 km².

PonyPilot+ uses a smartphone application and an in-car solution named PonyHi to provide a safe, comfortable, and hassle-free experience to its passengers. CTO and Co-founder of Pony.ai, Tiancheng Lou, said, “Today, our autonomous driving technology has advanced beyond conventional driving scenarios, demonstrating the technological capability to handle more complex driving scenarios and at a larger scale.” 

He also mentioned that emerging technologies are transforming Pony.ai's mobility offerings at a much faster rate than expected. Earlier this year, Pony.ai also started testing its robotaxis in various regions of California in the United States, with plans to commercialize the deployment by 2022. To fund the deployment and commercialization, Pony.ai is also considering going public in the US.

Along with Pony.ai, Chinese technology group Baidu also received approval to launch paid autonomous robotaxis in the same locality. According to Baidu, this will be the company’s Apollo Go service’s first-ever commercial deployment.
