
DataRobot Raises $300 Million And Plans To Acquire Algorithmia


DataRobot has raised $300 million in its Series G funding round and announced plans to acquire Algorithmia, a startup that develops MLOps software. With the fresh funding, DataRobot’s valuation has risen to $6.3 billion. 

The company plans to use the funds to conduct research and enhance its augmented intelligence platform. It also said it wants to combine Algorithmia’s sharp focus on model serving with DataRobot’s model management platform to provide low-cost model management services, including deep learning operations for natural language processing, to its customers worldwide. 

Algorithmia developed an artificial intelligence platform that enables IT professionals to manage high-volume model production effectively in a secure environment and lets businesses integrate MLOps into their existing management platforms. 

Read More: Toyota Unveils a Basketball-Playing Robot in Tokyo Olympics 2020

DataRobot will use Algorithmia’s expertise to develop industry-leading MLOps infrastructure for its customers. 

The CEO of DataRobot, Dan Wright, said, “This new investment further validates our vision for combining the best that humans and machines have to offer to power predictive insights, enhanced data-driven decisions, and unprecedented business value for our customers.” 

He further added that the company’s demand is increasing on a global scale, and they are grateful to the investors for supporting them in developing better solutions. 

Algorithmia is a Washington-based startup founded by Diego Oppenheimer and Kenny Daniel in the year 2014. It has received a total funding of $38.1 million from investors like Gradient Ventures and Mandora Venture Group. 

Co-founder and CEO of Algorithmia, Diego Oppenheimer, said, “It’s been clear to us for many years that DataRobot shares this philosophy, and we’re thrilled to combine our dedication of enabling customers to thrive in today’s market by delivering more models in production, faster while protecting their business.” 

He also mentioned that they understand the importance of expanding their customer reach to show organizations the value of their machine learning models. Oppenheimer believes this acquisition will be a step forward towards that goal.


NASSCOM Announces XperienceAI Virtual Summit 2021


India’s information technology trade association NASSCOM announces the dates of its XperienceAI Virtual Summit 2021. After the massive success of last year’s event, the second edition of the virtual summit will be held this August. 

The online summit will run from 3rd August to 6th August 2021, from 10:00 AM to 7:00 PM IST. It will host veterans from the artificial intelligence industry to discuss innovations that use AI technologies to revolutionize various sectors. 

Last year, the summit hosted more than 3,000 participants from 20 countries. This year’s theme is ‘Artificial intelligence as a catalyst for better normal.’ 

Read More: Google’s Waymo Argues That UK Government Shouldn’t Cap Autonomous Vehicles On The Road

The summit aims to explore key areas where artificial intelligence can drive improvement, such as socio-economic growth and India’s roadmap for AI innovation and adoption. 

The summit will also focus on developing artificial intelligence solutions to tackle challenges like climate change, as well as the role of AI in building a circular economy. More than 60 reputed speakers, including Don Tapscott, Dilip Asbe, Debjani Ghosh, Ashwini Rao, Arunima Sarkar, and Arundhati Bhattacharya, will be present at the event. 

Speakers will also discuss the application of artificial intelligence to predict future pandemics. The four-day event will have live workshops, use case discussions, and exclusive keynote sessions. The summit has been sponsored by many industry-leading enterprises like Intel, TCS, Accenture, HCL, and Fractal. 

The XperienceAI Summit 2021 will be useful and informative for people across sectors, including startups, coders, policymakers, tech companies, government officials, public-sector workers, research scholars, and academicians. 

Interested candidates can register for free on the official website of NASSCOM. 


IMD to leverage Artificial Intelligence for Better Weather Forecasting

Image Credit: The Week

The India Meteorological Department (IMD) plans to employ artificial intelligence (AI) and machine learning (ML) to ensure more accurate weather forecasts. 

On Sunday, IMD Director General Mrutyunjay Mohapatra said in an official statement that artificial intelligence can enhance weather forecasting, especially for issuing nowcasts, which help predict extreme weather events 3-6 hours in advance. He noted that while AI and machine learning are not yet as prevalent in weather forecasting as in other fields, this step marks a new beginning in the area.

According to Mohapatra, the Ministry of Earth Sciences has asked research organizations to examine how artificial intelligence (AI) can be leveraged for fine-tuning weather forecasting processes. He also stated that the IMD intends to collaborate on this idea with the Indian Institutes of Information Technology (IIITs) at Prayagraj and Vadodara and IIT-Kharagpur for the technology upgrade. The IMD has also teamed up with Google to provide precise short-term and long-term weather forecasts.

The IMD issues forecasts for extreme weather, such as thunderstorms and dust storms. However, thunderstorms, which bring lightning and torrential rain, are more difficult to anticipate than cyclones because they are short-lived phenomena that form and disperse in a relatively small window of time.

At present, the IMD uses radars, satellite imagery, and other tools to issue nowcasts, which provide information on extreme weather events expected in the next 3-6 hours. The satellite imagery comes from the INSAT series of geosynchronous satellites and from the Real-Time Analysis of Products and Information Dissemination (RAPID) system, a weather data explorer application that serves as a gateway and provides quick interactive visualization and 4-dimensional analysis capabilities.

It also relies on ISRO for ground-based observations from Automatic Weather Stations (AWS) and the Global Telecommunication System (GTS), which measure temperature, sunshine, wind direction and speed, and humidity. Combined with cloud motion, cloud-top temperature, and water vapor content, these data aid rainfall estimation, weather forecasting, and the tracking of cyclone genesis and direction. 

Read More: IBM Uses Blockchain And Artificial Intelligence To Help Farmers 

Despite the vast dataset, weather is a dynamic phenomenon: temperature, wind speed, tides, and other variables are never constant. Further, global warming and the greenhouse effect have introduced more unpredictability into oceanic and wind activity, which are key parameters in weather forecasting. 

The pattern-recognition capabilities of artificial intelligence and machine learning make them suitable tools for weather prediction and forecasting. AI models are fed enormous quantities of weather data and trained to identify storms that can bring lightning or tornadoes. This enables the models to predict the likelihood of thunderstorms or other meteorological events from weather and climate datasets, letting meteorologists make more accurate predictions and thus save lives and money. 
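As a rough illustration of this idea (a toy sketch on synthetic data, not the IMD's or Google's actual system; the feature names are assumptions for the example), a simple classifier can be trained to flag storm-prone conditions:

```python
import numpy as np

# Toy sketch only: a synthetic "storm vs. calm" classifier, NOT a real
# forecasting system. Feature choices and distributions are illustrative.
rng = np.random.default_rng(0)

# Synthetic features: [pressure drop (hPa), relative humidity (%), wind shear (m/s)]
calm  = rng.normal([2.0, 55.0, 5.0],  [1.0, 8.0, 2.0], size=(200, 3))
storm = rng.normal([9.0, 85.0, 15.0], [1.5, 6.0, 3.0], size=(200, 3))
X = np.vstack([calm, storm])
y = np.array([0] * 200 + [1] * 200)  # 1 = storm-prone conditions

# Standardize features so gradient descent behaves well.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Logistic regression trained with plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted storm probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Real nowcasting models ingest radar, satellite, and station data rather than three hand-picked numbers, but the training loop, fit a model to labeled past weather, then score new conditions, follows the same pattern.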

Also, these artificial intelligence models depend on the computational power of supercomputers for large-scale data processing and pattern recognition. As per IMD, an increase in computing power, from 1 teraflop to 8.6 teraflops, has helped the nodal department in better processing observational data.


Qualcomm And Foxconn Announce High Performance Artificial Intelligence Edge Box


Microchip manufacturing giant Qualcomm and Foxconn announce their new high-performance artificial intelligence edge box, unveiling the design and launch of the Gloria Edge Box in a press release.

The Edge Box is powered by the Qualcomm Cloud AI 100 accelerator. This high-end chip can perform more than seventy trillion artificial intelligence operations per second. 

The Gloria Edge Box uses the Snapdragon 865 mobile processor to support up to 24 Full HD cameras. The technology can be used for applications such as traffic analysis, security, and intelligent retail solutions. 

Read More: IBM, MIT, and Harvard release Common Sense AI Dataset at ICML 2021

Senior Vice President and General Manager of Computing and Edge Cloud at Qualcomm, Keith Kressin, said, “We anticipate broad areas of adoption of Foxconn Industrial Internet’s Gloria platform using our leading performance per watt AI solution, the Qualcomm Cloud AI 100. We expect customer adoption of Gloria to include a variety of environments such as retail centers, warehouses, data centers, and factories.” 

He also mentioned that the company is excited to work with Foxconn Industrial Internet to boost the adoption of artificial intelligence-powered edge applications. BKAV Corporation will be the first enterprise to adopt the new solution from Qualcomm and Foxconn. 

The solution can also connect to 5G networks using sub-6 GHz and mmWave WAN connectivity powered by the Qualcomm Snapdragon X55 5G Modem-RF System. 

Gloria Edge will deliver best-in-class performance with very low power consumption at a competitive price, company officials said. The solution will help businesses and governments adopt artificial intelligence services and aid them in building smarter cities. 

The expected commercial launch of Gloria Edge Box is in the second quarter of 2022. Chief Technology Officer of Foxconn Industrial Internet, Dr. Tai Yu Chou, said, “We are pleased to collaborate with Qualcomm Technologies to develop and announce the availability of the outstanding Gloria AI Edge Box.”


Toyota Unveils a Basketball-Playing Robot in Tokyo Olympics 2020


Japanese car manufacturer Toyota recently unveiled its new basketball-playing robot at the Tokyo Olympics 2020.

Toyota revealed the artificial intelligence-powered robot during a Group B game between France and the United States at the Saitama Super Arena in Japan. 

The humanoid robot was able to shoot difficult three-point shots accurately during the halftime of the official match. Viewers across the globe were astounded to see the capabilities of the basketball-playing robot and took no time to express themselves on Twitter. 

Read More: Artificial Intelligence Model of Microsoft can Help Detect Heat Waves in India

Toyota’s third-generation humanoid robot, CUE 3, is six feet ten inches tall and has sensors around its torso to gauge its distance from the basketball hoop and the angle needed for a perfect basket. 

It also uses motorized arms and knees for movement. The robot has bagged a place in Guinness World Records for the most consecutive basketball free throws by a humanoid robot. 

President of Association for Advancing Automation, Jeff Burnstein, said, “If you think about the future where robots work in our daily lives, they’re going to have to be extremely accurate, have great vision, great dexterity and great mobility.” 

He further added that this new sports-playing robot is a step towards a bright future for robotics. Toyota officials said that the robot takes 15 seconds to shoot a ball, which, according to them, is a bit slow. 

Toyota first launched the CUE prototype back in 2017; it was capable of shooting balls but required human intervention for movement and positioning. 

Toyota has deployed numerous artificial intelligence-powered robots at the Tokyo Olympics to assist humans with operations like transportation, entertainment, and more.


IBM, MIT, and Harvard release Common Sense AI Dataset at ICML 2021

Common Sense AI dataset
Image Credit: Unsplash

Creating intelligent AI takes more than finding patterns in data. Models must also understand humans’ intuitive decision-making, since we make decisions based on the intentions, beliefs, and desires of others. At the 2021 International Conference on Machine Learning (ICML), researchers from IBM, MIT, and Harvard University released a Common Sense AI dataset. It is part of a multi-year project with the U.S. Department of Defense’s Defense Advanced Research Projects Agency (DARPA). The Machine Common Sense project aims to develop models of intuitive psychology and to see whether AI can reason the way human infants do. 

At ICML, researchers unveiled AGENT (Action, Goal, Efficiency, coNstraint, uTility), a benchmark for evaluating whether machines grasp the core concepts of intuitive psychology. The AGENT benchmark comprises a large dataset of 8,400 3D animations categorized under four scenarios: Goal Preferences, Action Efficiency, Unobserved Constraints, and Cost-Reward Trade-offs. 

This approach to training AI models mirrors how psychologists evaluate an infant’s intuitive abilities. The researchers also introduced two baseline machine learning models, BIPaCK and ToMnet-G, based on Bayesian inverse planning and the Theory of Mind neural network, respectively. 

Commonsense reasoning has been a bottleneck for researchers in natural language processing and other artificial intelligence fields. Intuitive psychology gives us the ability to have meaningful social interactions by understanding and reasoning about other people’s states of mind. ML models, however, lack this power of intuition and require extensive datasets for training. AGENT aims to bridge this gap and build AI that manifests the same common sense as a young child.

Read More: Artificial Intelligence Model of Microsoft can Help Detect Heat Waves in India

“Today’s machine learning models can have superhuman performance. It is still unclear if they understand basic principles that drive human reasoning. For machines to successfully be able to have social interaction like humans do among themselves, they need to develop the ability to understand hidden mental states of humans,” said Abhishek Bhandwaldar, Research Engineer, MIT-IBM AI Lab.

Like other infant studies, this project has two phases in each trial: familiarization and test. The 8,400 3D animations last between 5.6 s and 25.2 s at a frame rate of 35 fps. “With these videos, we constructed 3,360 trials, divided into 1,920 training trials, 480 validation trials, and 960 testing trials. All training and validation trials only contain expected test videos,” the researchers said.

Researchers compared the two machine learning algorithms built on traditional human psychology methods on AGENT with human performance. “Overall, we find that BIPaCK achieves a better performance than ToMnet-G, especially in tests of strong generalization,” reads the paper.

This study shows that we can teach AI models how humans make intuitive decisions. It also found that current ML models lack generalization and need pre-training or more advanced architectures. The researchers claim AGENT can be used as a diagnostic tool for developing better models of common sense AI.


Intel Accelerates Process And Packaging Innovations


In a recent press release, Intel Corporation announced its new roadmap towards process acceleration and groundbreaking packaging innovations. The company revealed its technologies that would power the upcoming products beyond 2025. 

Two new technologies were unveiled in the press release: Intel’s new transistor architecture, called RibbonFET, and the world’s first backside power delivery technology, named PowerVia. 

After a decade-long wait, Intel has come up with a new transistor architecture for its processors. The company is also preparing to bring High Numerical Aperture Extreme Ultraviolet (High-NA EUV) Lithography into production. 

Read More: Intel Beats Its Revenue Estimate For Q3 In 2021

Senior Vice President and General Manager of Technology Development at Intel, Dr. Ann Kelleher, said, “We led the transition to strained silicon at 90nm, to high-k metal gates at 45nm, and to FinFET at 22nm. Intel 20A will be another watershed moment in process technology with two groundbreaking innovations: RibbonFET and PowerVia.” 

She further added that Intel has a long history of foundational process innovations that have helped the industry grow manyfold. The chip manufacturing giant has also introduced a new naming structure for its process nodes that lets customers understand its products and their capabilities more easily. 

The fresh node names are Intel 7, Intel 4, Intel 3, and Intel 20A. The company is also developing a node called Intel 18A, expected to launch in 2025. According to company officials, Amazon Web Services will be the first customer to use Intel Foundry Services (IFS) packaging solutions and will provide the necessary insights. 

The firm also plans to collaborate with other players in the United States and Europe to continue research on more innovations as deep collaborative efforts in the ecosystem are essential for enabling high-volume manufacturing. 

Pat Gelsinger, the CEO of Intel, said, “Building on Intel’s unquestioned leadership in advanced packaging, we are accelerating our innovation roadmap to ensure we are on a clear path to process performance leadership by 2025.”


Cornell University Finds a Method to Introduce Malware in Neural Network

Image Credit: DoodleMaths

A team of researchers from Cornell University has recently demonstrated how neural networks can be embedded with malware and go undetected. 

In the paper titled “EvilModel: Hiding Malware Inside of Neural Network Models,” posted to the arXiv preprint server last Monday, the team shows that malware embedded in a neural network can dupe automated detection tools. The trick is to embed the malware directly into the artificial neurons in a way that has little or no impact on the network’s performance while remaining undetected. 

The team, led by researchers Zhi Wang, Chaoge Liu, and Xiang Cui, notes that with the widespread use of neural networks, this method could emerge as a popular medium for launching malware attacks. 

Instead of following codified rules, a neural network lets computers learn by emulating the neural structure of the brain. Neural networks are a core technique of machine learning, a branch of computer science based on fitting complex models to data. The artificial neurons of a neural network are created with specialized software running on digital computers. Each neuron has numerous inputs and a single output: the neuron computes a weighted sum of its inputs and passes the result through a nonlinear function termed an activation function. The outcome, the neuron’s output, then becomes an input to a number of other neurons. 

A typical neural network has three types of layers:

  1. Input layer: receives the data fed into the network
  2. Hidden layer: where weighted sums and activations are computed, using weights learned during training
  3. Output layer: produces the network's outputs after training
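The weighted-sum-plus-activation computation described above can be sketched as a single forward pass through a hypothetical toy network (random weights, no training):

```python
import numpy as np

# Hypothetical toy network: one forward pass through the three layer types
# described above (input -> hidden -> output).
rng = np.random.default_rng(42)

x = rng.normal(size=4)        # input layer: 4 feature values
W1 = rng.normal(size=(4, 3))  # weights into the hidden layer (3 neurons)
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 2))  # weights into the output layer (2 neurons)
b2 = np.zeros(2)

def relu(z):
    """Activation function: passes positives through, zeroes out negatives."""
    return np.maximum(z, 0.0)

hidden = relu(x @ W1 + b1)  # each neuron: weighted sum, then nonlinearity
output = hidden @ W2 + b2   # output layer produces the final scores
print(output.shape)         # (2,)
```

In a trained network, `W1` and `W2` would be learned from data; here they are random, since only the flow of computation matters for the illustration.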

Read More: Scientists Are Enabling Artificial Intelligence To Imagine And Visualize Things

The researchers found that by keeping the hidden layers intact during the malware embedding process, changing some neurons has minimal effect on performance. In their experiment, the team replaced around 50% of the neurons in the AlexNet model and still kept the model’s accuracy above 93.1%. According to the authors, a 178 MB AlexNet model may hide up to 36.9 MB of malware in its structure and still produce results within a 1% accuracy loss. They also claimed that when tested against 58 antivirus engines in VirusTotal, the malware went undetected, verifying the feasibility of the method. This highlights an alarming possible use of steganography, the practice of concealing a message (here, malware) within another message.

To add the malware, attackers first need to design their own neural network; they can add extra layers to hold more malware. Next, they train the network on the prepared dataset to obtain a well-performing model. Attackers can also employ existing models if they are suitable and well trained. 

After training, the attacker selects the best layer, embeds the malware, and then assesses the model’s performance to confirm that the accuracy loss is acceptable. If the loss exceeds the target value, the attacker retrains the model until it reaches the desired results. 

While a larger neural network gives more room to hide malware, on the brighter side, the embedded malware cannot execute on its own. To run it, a separate malicious program must extract the malware from the poisoned model and reassemble it into a working form.
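For intuition only, here is a simplified sketch of this style of hiding, not the paper's exact embedding scheme: arbitrary payload bytes overwrite the least significant mantissa byte of float32 weights, which barely perturbs their values, and a separate extraction step is needed to recover the payload.

```python
import numpy as np

# Simplified, illustrative sketch (NOT EvilModel's actual method). Assumes a
# little-endian platform, where byte 0 of a float32 is the least significant
# mantissa byte.
payload = b"hello"  # stands in for an arbitrary binary blob

weights = np.array([0.1234, -0.9876, 0.5555, 0.0421, -0.3141], dtype=np.float32)
raw = weights.view(np.uint8).copy().reshape(-1, 4)  # 4 raw bytes per weight

# Overwrite the lowest-order byte of each weight with one payload byte.
for i, byte in enumerate(payload):
    raw[i, 0] = byte

stego = raw.reshape(-1).view(np.float32)
print(np.abs(stego - weights).max())  # perturbation is tiny (< 1e-4 here)

# The payload cannot run from inside the model; a separate extractor must
# pull the bytes back out and reassemble them.
recovered = bytes(int(raw[i, 0]) for i in range(len(payload)))
assert recovered == payload
```

Because only low-order mantissa bits change, the model's outputs shift negligibly, which illustrates why scanners that inspect weight files struggle to flag the hidden bytes.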

According to the study, the harm due to malware can still be prevented if the target device validates the model before executing it. Traditional approaches like static and dynamic analysis can also be used to detect it.


Artificial Intelligence Model of Microsoft can Help Detect Heat Waves in India


Microsoft recently announced the launch of a new artificial intelligence model that can help detect heat waves in India. The tech giant collaborated with the Sustainable Environment and Ecological Development Society (SEEDS) to develop the new model, called ‘SunnyLives.’ 

Earlier, this platform was used to predict cyclones and floods in southern India. The collaboration targets helping more than 125,000 individuals by giving them advance notice of possible heat wave disasters. 

Microsoft said that the second phase of the artificial intelligence model would begin by assessing heat wave risks in India’s urban core heat wave zones. Corporate affairs director of Microsoft India, Manju Dhasmana, said, “The partnership with SEEDS brings the power of technologies such as cloud and artificial intelligence by marshaling relief resources more efficiently and effectively. It is one of the efforts to reduce the damage.” 

Read More: Google’s Waymo Argues That UK Government Shouldn’t Cap Autonomous Vehicles On The Road

She further mentioned that the artificial intelligence-powered platform would help frontline workers make better-informed decisions about their operations. Microsoft built the solution to create a disaster-resilient community in the country using the latest technologies under its AI for Humanitarian Action initiative. 

The AI platform can accurately predict disaster impacts using satellite images and advanced algorithms. Experts suggest that the world will witness more heat waves than usual due to climate change driven by industrialization and urbanization. 

Microsoft plans to extend the platform to predict earthquakes, wildfires, and biological disasters in the future. 

SEEDS used this platform’s generated data to rescue more than 1,100 families in Odisha during the Yaas cyclone disaster.


NASA Is Using Artificial Intelligence To Calibrate Images Of Sun


Researchers at the National Aeronautics and Space Administration (NASA) recently announced that they are using artificial intelligence to calibrate images of the Sun. 

NASA launched its Solar Dynamics Observatory (SDO) back in early 2010 to conduct research and capture high-definition images of the Sun. 

The new artificial intelligence-powered technology is helping scientists precisely calibrate captured images at a quick pace in order to generate accurate, usable data. NASA uses the Atmospheric Imaging Assembly (AIA) aboard the SDO to capture images of the Sun across various wavelengths of ultraviolet light every 12 seconds. 

Read More: European Union’s Artificial Intelligence Law Could Cost Over $36 Billion

Because the AIA’s instruments degrade over time, scientists developed an artificial intelligence solution to generate consistent data for research purposes. Dr. Luiz Dos Santos, a solar physicist at NASA’s Goddard Space Flight Center in Maryland, said, “It’s also important for deep space missions, which won’t have the option of sounding rocket calibration.” 

He further added that they are now tackling two challenges at once. Scientists used machine learning to train an algorithm to recognize solar flares, feeding it solar flare images across multiple wavelengths so it could identify flares of every type. 

The algorithm used the AIA’s data to compare and understand solar flare images. After numerous rounds of training, it learned to determine the level of calibration required for different images. 

Dos Santos said it is a remarkable breakthrough that has enabled researchers to identify structures across multiple wavelengths. NASA plans to integrate machine learning and artificial intelligence into many other projects as the technology continues to develop. 

NASA said, “For the future, this may mean that deep space missions — which travel to places where calibration rocket flights aren’t possible — can still be calibrated and continue giving accurate data, even when getting out to greater and greater distances from Earth or any stars.”
