
Cadence Plans to Apply Big Data to Optimize Workloads


Cadence, a leading intelligent system design provider, announced that it plans to apply big data and artificial intelligence to optimize verification workloads. The suite of applications, called Verisium, is built on the Cadence Joint Enterprise Data and AI (JedAI) Platform.

Verisium marks a generational transition in EDA (electronic design automation) from single-run, single-engine algorithms to algorithms that use big data and AI to optimize numerous runs of multiple engines during a complete SoC design and verification campaign.

Once Verisium is deployed, all verification data, including waveforms, coverage, reports, and log files, is consolidated on the Cadence JedAI Platform. Machine learning models built on this data, along with additional proprietary metrics mined from it, enable a new class of tools that significantly increase verification productivity.

Read More: Google Translate to venture into Sanskrit with AI

Cadence describes the first few Verisium apps as follows:

  1. AutoTriage: Builds ML models that predict and categorize test failures with similar root causes, helping automate the triage of regression failures (see the sketch after this list).
  2. SemanticDiff: An algorithmic approach to comparing source code revisions of an IP or SoC, classifying the changes, and ranking which ones affect the system’s behavior in order to find probable bug hotspots.
  3. Debug: A system offering interactive and post-process debug flows that combine waveform, schematic, driver-tracing, and SmartLog technologies, delivering a debug solution from IP to SoC and from single-run to multi-run.
  4. WaveMiner: Uses AI to analyze waveforms from multiple runs and identify which signals, and at what times, are most likely responsible for a test failure.
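To make the AutoTriage idea concrete, here is a minimal, hypothetical sketch of ML-based failure bucketing: clustering regression-failure log messages so that failures with similar root causes land in the same group. The log strings, the cluster count, and the use of scikit-learn are illustrative assumptions, not Cadence’s implementation.

```python
# Hypothetical illustration of ML-based failure triage (not Cadence's code):
# group regression failures with similar causes by clustering their log messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

failure_logs = [
    "ERROR: timeout waiting for axi_rvalid at 1200ns",
    "ERROR: timeout waiting for axi_rvalid at 2300ns",
    "ASSERT failed: fifo overflow in pkt_buffer",
    "ASSERT failed: fifo overflow in pkt_buffer after reset",
]

# Turn each log message into a TF-IDF vector, then cluster similar messages.
vectors = TfidfVectorizer().fit_transform(failure_logs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, log in sorted(zip(labels, failure_logs)):
    print(f"bucket {label}: {log}")
```

In a real flow, each bucket would then be triaged once rather than failure by failure.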

Google’s DeepMind introduces new AI system AlphaTensor to solve math problems


Google’s DeepMind has introduced a new artificial intelligence system called AlphaTensor that can discover novel, efficient, and provably correct algorithms for fundamental tasks such as matrix multiplication.

The system searches for the fastest way to multiply two matrices, a question that has remained open for half a century. In a paper published in the journal Nature, the researchers said that improving the efficiency of algorithms for such fundamental computations can have a widespread impact on the overall speed of many computations.

“AlphaTensor discovered algorithms that are more efficient than state-of-the-art for many matrix sizes. Our AI-designed algorithms outperform human-designed ones, which is a major step forward in the field of algorithmic discovery,” DeepMind said in a statement.

Read More: DeepMind’s AlphaFold Predicts 3D Structure Of Every Known Protein: Insight Into Its Milestone

The researchers converted the problem of finding efficient matrix multiplication algorithms into a single-player game in which the number of possible algorithms to consider is far greater than the number of atoms in the universe. They then trained AlphaTensor agents with reinforcement learning to play this game, starting without any knowledge of existing matrix multiplication algorithms.

“Through learning, AlphaTensor gradually improves over time, re-discovering historical fast matrix multiplication algorithms such as Strassen’s, eventually surpassing the realm of human intuition and discovering algorithms faster than previously known. It improves on Strassen’s two-level algorithm in a finite field for the first time since its discovery 50 years ago. These algorithms for multiplying small matrices can be used as primitives to multiply much larger matrices of arbitrary size,” DeepMind said.
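For context on what improving on Strassen means, the sketch below shows the classic Strassen multiplication of two 2×2 matrices using 7 scalar multiplications instead of the naive 8; AlphaTensor searches for analogous low-multiplication decompositions for other matrix sizes. This is a standard textbook illustration, not DeepMind’s code.

```python
# Classic Strassen multiplication of two 2x2 matrices: 7 multiplications instead of 8.
# Applied recursively to matrix blocks, this cuts the asymptotic cost of matrix
# multiplication below O(n^3); AlphaTensor searches for similar decompositions.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B

    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)

    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

# Quick check against the naive 8-multiplication result.
A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == [[19, 22], [43, 50]]
```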


Interior AI: New AI Image Generator that Lets You Redesign Interiors


Interior AI is a new image generator that lets users redesign interiors and generate new designs and functions for a space. The application takes as input a 2D image of an interior, whether a picture downloaded from the internet or a photo taken by the user. It then alters this image to match one of 16 pre-selected styles, ranging from Minimalist, Art Nouveau, and Biophilic to Baroque and Cyberpunk.

Users can also assign the space a different function, such as kitchen, living room, outdoor patio, or even home gym, generating an entirely new interior design. By helping people discover fresh concepts and inspiration for improving their homes, the tool can be seen as a step beyond earlier platforms and technologies.
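The article does not say which model powers Interior AI, but the workflow it describes, an input photo restyled toward a chosen design, matches a generic image-to-image diffusion pipeline. Below is a minimal sketch using the open-source diffusers library and Stable Diffusion as stand-ins; the model ID, prompt, and strength value are assumptions for illustration, not Interior AI’s actual implementation.

```python
# Hypothetical sketch of restyling a room photo with an image-to-image diffusion
# pipeline. This is an assumed stand-in, not Interior AI's actual implementation.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

room = Image.open("living_room.jpg").convert("RGB").resize((768, 512))

# Higher strength lets the model deviate further from the original photo.
result = pipe(
    prompt="living room interior, minimalist style, natural light",
    image=room,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

result.save("living_room_minimalist.jpg")
```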

A number of other applications, such as Ikea Kreativ, instead employ augmented reality, showing how an object would appear in a space by superimposing 3D imagery through a smartphone camera.

Read More: Google Cloud Unveils a New AI-powered Medical Imaging Suite

Some professionals in related disciplines, such as architects and interior designers, are apprehensive about these technological advances. Others see them as tools that designers can employ to speed up their workflows and broaden their sources of inspiration.

Finding fresh, original ideas is only one of an interior designer’s or architect’s key responsibilities; they must also understand the constraints and potential of the space they are working with. The job entails selecting the option that best satisfies a challenging mix of subjective and objective requirements.


Google Cloud Unveils a New AI-powered Medical Imaging Suite


After pioneering the use of artificial intelligence and computer vision in many of its applications, Google Cloud has unveiled a new AI-powered Medical Imaging Suite intended to address challenges in the development of imaging tools.

Until now, healthcare providers and imaging centers have either procured software from IT companies, image repositories, or third-party vendors, or have had to build customized algorithms with image classification tools.

Jeff Cribbs, a distinguished VP analyst at Gartner, described these constrained choices and added that with the Medical Imaging Suite, Google Cloud is extending its low-code AI development approach to healthcare-oriented applications.

Read More: Intel’s self-driving company Mobileye files for an IPO

Ginny Torno, director of innovation and IT clinical and research services at Houston Methodist, said, “This Google product provides a platform for AI developers and facilitates image exchange.” She noted that the development is not inherently unique but offers an edge over alternatives because of interoperability opportunities that others cannot provide.

Google claims that the Medical Imaging Suite addresses many typical issues companies face when creating AI and machine learning models, with components such as:

  • Cloud Healthcare API: Offers a fully managed, scalable, enterprise-grade development environment with automated DICOM de-identification.
  • AI-assisted annotation tools from Nvidia and MONAI, natively integrated with any DICOMweb viewer, that automate the difficult and repetitive work of annotating medical images.
  • Access to BigQuery and Looker for advanced analytics over petabytes of imaging data (a rough illustration follows this list).
  • Vertex AI to speed up the construction of AI pipelines, requiring up to 80% fewer lines of code for modeling.
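As a rough illustration of the BigQuery analytics path mentioned above, the sketch below queries DICOM metadata that has been exported to a BigQuery table. The project, dataset, and table names are hypothetical placeholders, not names provided by Google.

```python
# Hypothetical sketch: analytics over DICOM metadata exported to BigQuery.
# Project, dataset, and table names are placeholders, not Google-provided names.
from google.cloud import bigquery

client = bigquery.Client(project="my-imaging-project")

query = """
    SELECT Modality, COUNT(*) AS instance_count
    FROM `my-imaging-project.imaging.dicom_metadata`
    GROUP BY Modality
    ORDER BY instance_count DESC
"""

# Print how many stored instances exist per imaging modality (CT, MR, etc.).
for row in client.query(query).result():
    print(row.Modality, row.instance_count)
```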

Meta to carry out quiet layoffs at Facebook to slash headcount


Meta Platforms may soon be carrying out “quiet layoffs” at Facebook to slash its headcount as global headwinds and plummeting ad spending pose severe problems for Big Tech firms.

A Business Insider report said that before a recent weekly Q&A session between the staff and chief executive officer Mark Zuckerberg, executives told directors across the company they should select at least 15% of their teams to be labeled as “needs support” in an internal review process.

This selective restructuring hints at a possible layoff of about 15% of the workforce, or about 12,000 employees. According to the report, the potential layoffs were revealed last week in a post by a Meta worker on Blind – an app popular with tech workers that requires a valid company email address to use anonymously.

Read More: Meta Launches AI Software Tool AITemplate To Switch Between Underlying Chips

“This 15% will likely be put on PIP (performance improvement plan) and be let go,” the person wrote, prompting hundreds of comments from other Meta workers, who debated how many people would be sacked.

In Facebook’s employee-review process, someone “in need of support” is considered to be performing below the benchmark goals. Such employees are put on a PIP, which, more often than not, results in layoffs.

With so many people deemed to be underperforming and some being given 30 days to find a new position at the company or leave, one staffer said Meta was conducting “quiet layoffs.” Last week, Meta announced a pause in hiring and subsequent restructuring as recession fears loomed large across the globe.


WHO launches an AI-powered digital health worker, Florence


In collaboration with the Qatar Ministry of Health, the World Health Organization (WHO) has launched version 2.0 of its AI-powered digital health worker, Florence. Florence offers guidance on eating healthily, being more active, quitting tobacco and e-cigarettes, and strategies for relieving stress, and can also provide details on the COVID-19 vaccine and other subjects. Florence 2.0 is a standard upgrade over the previous version and is available in Arabic, English, French, Chinese, Spanish, Hindi, and Russian.

Andy Pattison, who leads WHO’s digital channels team, said that over the last few years digital technologies have played a vital role in helping people worldwide lead healthier lives. During the COVID-19 pandemic in particular, AI digital health workers like Florence have helped combat misinformation and raise awareness.

Florence also addresses the mental health impact of the pandemic, which WHO estimates left about 1 in every 8 people with some form of mental disorder. Pattison said, “The AI health worker Florence is a shining example of the potential to harness technology to promote and protect people’s physical and mental health.”

Read More: Intel to Take on AMD’s Xilinx with Future Edge FPGAs

Dr. Yousuf Al Maslamani, the official healthcare spokesperson for the FIFA World Cup Qatar 2022, expressed his gratitude for the partnership with WHO to develop Florence and raise awareness of key health issues facing the broader public.

In order to interact with researchers, public health organizations, entrepreneurs, and policymakers, WHO released the beta version of Florence 2.0 at the WISH conference. WHO also plans to keep developing the digital health worker to help address more pressing health issues facing the world today.


Microsoft open-sources FarmVibes.AI, a toolkit of AI algorithms for optimizing farm operations


Microsoft Corporation has open-sourced FarmVibes.AI, its newly released toolkit of AI algorithms for optimizing farm operations. FarmVibes.AI is one of the many technologies Microsoft has created as part of Project FarmVibes. The program aims to enable more effective farming through software and connected devices like sensors. Microsoft also intends to open-source all the technologies it created for Project FarmVibes.

The FarmVibes.AI toolkit includes four AI algorithms. The first, Async Fusion, can combine farm sensor data with satellite and drone imagery, making it easier to create farm maps that can be used to determine the best way to carry out agricultural operations (a toy illustration of the idea follows below).
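The sketch below is a toy illustration of the fusion idea only, not the FarmVibes.AI API: it aligns irregular in-field sensor readings with satellite observation dates, producing the kind of joined table a downstream model could learn from. The column names and values are invented for the example.

```python
# Toy illustration of fusing asynchronous data sources (not the FarmVibes.AI API):
# pair each satellite observation with the nearest preceding in-field sensor reading.
import pandas as pd

sensors = pd.DataFrame({
    "time": pd.to_datetime(["2022-06-01 06:00", "2022-06-03 18:00", "2022-06-07 12:00"]),
    "soil_moisture": [0.31, 0.27, 0.22],
})

imagery = pd.DataFrame({
    "time": pd.to_datetime(["2022-06-02", "2022-06-06"]),
    "ndvi": [0.55, 0.61],  # vegetation index from a satellite overpass
})

fused = pd.merge_asof(
    imagery.sort_values("time"), sensors.sort_values("time"),
    on="time", direction="backward",
)
print(fused)  # one row per overpass, with the latest sensor reading attached
```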

The second algorithm, SpaceEye, makes the satellite data behind these farm maps easier to obtain. When clouds sit over a farm, getting up-to-date optical satellite imagery is usually challenging; SpaceEye reconstructs the obscured imagery using readings from satellite-based radar instruments, which work even under cloud cover.

Read More: Accenture and Mars, a global confectionery leader, to develop “Factory of the Future” using AI.

Farm operators can predict temperature and wind speed using the third algorithm, DeepMC. DeepMC combines forecasts from weather stations with data from internet-connected agricultural sensors to identify the most suitable time for farm operations.

The fourth algorithm that comes with FarmVibes.AI aids farmers in their efforts to promote sustainability. Microsoft claims that it can calculate the impact of various agricultural methods on the volume of carbon stored in a farm’s soil. The tool is also useful for other jobs, such as finding strategies to increase crop yields.


Google announces its own AI movie generator Imagen Video


Google has announced its artificial intelligence (AI) video generator, Imagen Video. The system is still in development, but the company says it will be capable of producing 1280×768 videos at 24 frames per second from a written prompt.

According to Google’s research paper, Imagen Video will have stylistic abilities, such as generating videos in the style of famous artists like Vincent van Gogh. It will also be able to generate 3D rotating objects while preserving their structure and to render text in various animation styles.

Google hopes its AI-video model can significantly decrease the difficulty of high-quality content generation. Imagen Video builds on Google’s Imagen, a text-to-image program similar to OpenAI’s DALL-E.

Read More: OpenAI’s DALL-E Is Now Available To Everyone

As Google’s research team describes it, Imagen Video takes a text description and generates a 16-frame, three-frames-per-second video at 24×48 pixel resolution. The system then upscales it and predicts additional frames, producing a final 128-frame, 24-frames-per-second video at 720p.
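Taking the figures above at face value, a quick back-of-the-envelope calculation shows how much the cascade has to upsample in time and space; the breakdown into specific super-resolution stages is detailed in Google’s paper and is not reproduced here.

```python
# Rough upsampling factors implied by the figures quoted above (article numbers only).
base_frames, base_fps, base_h, base_w = 16, 3, 24, 48
final_frames, final_fps, final_h, final_w = 128, 24, 768, 1280

print("temporal upsampling:", final_frames / base_frames)              # 8x more frames
print("frame-rate increase:", final_fps / base_fps)                    # 3 fps -> 24 fps
print("spatial upsampling:", final_h / base_h, "x", final_w / base_w)  # per-axis scale
```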

Google says that Imagen Video has been trained on 14 million video-text pairs and 60 million image-text pairs, as well as the LAION image-text dataset, which was used to train Stable Diffusion.

Among the examples provided by Google are a panda chewing on bamboo, a shot zooming into a choppy sea filled with pirate ships, and an astronaut riding a horse. It is worth noting that all published results from Imagen Video were selected by Google, and no independent testers have yet tried the program.


Tesla to remove ultrasonic sensors from EVs amid scrutiny


Tesla says it will begin removing ultrasonic sensors from its electric vehicles this month as it moves to rely solely on cameras for its safety and driver-assistance features.

Tesla EVs currently carry 12 ultrasonic sensors on their front and rear bumpers. These short-range sound sensors are used mainly for parking assistance and for detecting nearby objects.

Tesla began dropping radar sensors last year amid a chip shortage. According to Elon Musk, the company can achieve full autonomy with cameras alone, but he has missed his target of rolling out self-driving taxis, which would require no drivers.

Read More: Tesla Shares Fall After Production And Deliveries Lag Due To Logistic Hurdles

Tesla faces growing public, regulatory, and legal scrutiny over its Autopilot system following crashes. The company said it would remove ultrasonic sensors from Model Y and Model 3 globally over the next few months. It will be followed by Model X and Model S in 2023.

Tesla notes that the transition will temporarily limit some automated parking features but will not affect crash safety ratings. While other self-driving developers and tech firms use multiple sensors, including expensive lidar, Tesla relies solely on cameras and artificial intelligence to help the vehicle perceive its environment.

Philip Koopman, a professor at Carnegie Mellon University, said the question now is how well the cameras can see objects close to the vehicle, where their view can sometimes be limited.


Amazon discontinues Amazon Glow, its video-calling device for kids


A year after its release, Amazon is discontinuing Amazon Glow, its video-calling and gaming device for kids. Bloomberg broke the news of the discontinuation, which Amazon confirmed. The device, aimed at helping kids aged 3-9 stay remotely connected with their loved ones, is no longer available on Amazon’s website.

Amazon Glow was developed during the pandemic to bridge the distance between kids and their loved ones, joining Amazon’s lineup of original hardware products such as Astro and Echo. At its annual hardware event last September, the company announced the unusual z-shaped device, which combined video calling with games.

The device projects a 19.2-inch touch-responsive display for hands-on activities like games, while adults can join in through an interactive video call. An Amazon Kids+ subscription, starting at $4.99 per month, was required to use Glow, which combines the experience of a game system, an arts-and-crafts center, and a children’s library with interactive video chat.

Read More: Google and Amazon criticize Microsoft over cloud computing changes

The device was originally priced at $300 and available by invitation only, but in late March 2022 Amazon made it generally available. A little over six months after that public rollout, the company is halting sales of Amazon Glow.

Amazon spokesperson Tim Gillman stated, “At Amazon, we think big, experiment, and invest in new ideas to delight customers. We also continually evaluate the progress and potential of our products to deliver customer value, and we regularly make adjustments based on those assessments. We will be sharing updates and guidance with Glow customers soon.” The statement gives no reason for the discontinuation, though Amazon says more information will follow for Glow customers.
