
Meta building AI that processes Speech and Text like Humans


Meta, formerly known as Facebook, has announced that it is building a new artificial intelligence (AI) model that can process speech and text like the human brain. 

The research initiative is a step toward helping Meta better understand how the human brain processes speech and text. 

Meta is collaborating with neuroimaging center Neurospin (CEA) and Inria to carry out this research. 

Read More: Synthesis AI raises $17 million in Series A Funding Round

According to the company, it is comparing how AI language models and the brain respond to the same spoken or written sentences to guide the creation of AI that can process voice and text as efficiently as humans. 


“Over the past two years, we’ve applied deep learning techniques to public neuroimaging data sets to analyze how the brain processes words and sentences,” Meta mentioned in a blog post. 

The post also said that although AI has come a long way in recent years, it is still far from understanding language as efficiently as humans do. To date, the researchers have found that the language models that best predict the next word from context are also the ones that most closely mimic brain activity. 

Meta’s team models numerous brain scans from public data sets recorded with functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG), a scanner that captures snapshots of brain activity millisecond by millisecond. 

The company says these scans are essential to meet deep learning’s data requirements. In partnership with Inria, Meta evaluated several language models against the brain responses of 345 volunteers who listened to complex narratives while their brain activity was recorded with fMRI. 
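Comparisons of this kind are commonly scored with an encoding model: fit a linear map from a language model’s activations to the recorded brain responses and measure how well it predicts held-out data. The sketch below illustrates that idea with random placeholder arrays; it is not Meta’s actual pipeline, and the dimensions are invented.

```python
# Illustrative "brain score": fit a ridge regression from language-model
# activations to brain responses and correlate predictions on held-out data.
# All arrays are random placeholders, not Meta's data or pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 768))  # model activations: (words, features)
Y = rng.normal(size=(1000, 50))   # brain responses:   (words, voxels/sensors)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
mapping = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(X_tr, Y_tr)
Y_pred = mapping.predict(X_te)

# Score: mean per-voxel correlation between predicted and measured responses.
scores = [np.corrcoef(Y_pred[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
print(f"mean brain score: {np.mean(scores):.3f}")  # ~0 for random data
```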

The researchers also discovered evidence of long-range predictions in the brain, an ability that continues to challenge language models. 

“For example, consider the phrase ‘Once upon a …’ Most language models today would typically predict the next word, ‘time,’ but they’re still limited in their ability to anticipate complex ideas, plots, and narratives like people do,” mentioned Meta. 
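The next-word half of that example is easy to reproduce with any off-the-shelf causal language model. The sketch below uses GPT-2 via the Hugging Face transformers library purely as a stand-in; it is not one of the models from Meta’s study.

```python
# Next-word prediction with an off-the-shelf causal language model
# (GPT-2 as a stand-in; not one of the models used in Meta's study).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Once upon a", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits[0, -1].softmax(dim=-1)  # distribution over the next token
top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(int(idx))!r}: {float(p):.3f}")
# ' time' typically tops the list.
```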


Synthesis AI raises $17 million in Series A Funding Round


Artificial intelligence startup Synthesis AI has raised $17 million in a series A funding round led by 468 Capital. Other investors, including Sorenson Ventures, Strawberry Creek Ventures, Bee Partners, PJC, iRobot Boom Capital, and Kubera Venture Capital, also participated in the round. 

Synthesis AI said it plans to use the raised capital to expand its team and increase its product portfolio to help its clients develop advanced computer vision models faster than ever. 

The company also intends to deepen its research into the convergence of CGI and AI, focusing on neural rendering, mixed training, and sophisticated human behavior modeling. 

Read More: Wang Wei Min wins $100k prize in challenge to build AI Models that detect Deepfakes

Florian Leibert, partner at 468 Capital, said, “Synthesis AI is uniquely positioned to win in the emerging synthetic data space. The breadth and depth of Synthesis AI’s platform, the quality of the team, and the extensive list of Fortune 50 customers firmly establish Synthesis AI as a category leader.” 

Leibert further added that they are pleased to assist Synthesis AI as it pursues its goal of fundamentally changing how AI models are created. 

This new development comes after the launch of OpenSynthetics, Synthesis AI’s first dedicated community for developing and exploiting synthetic data in AI/ML and computer vision. The one-of-a-kind platform allows developers to exchange tools and strategies for creating and using synthetic data. 

United States-based synthetic data technology startup Synthesis AI was founded by Yashar Behzadi in 2019. To date, the company has raised more than $24 million from multiple investors. 

“Synthetic data is at an inflection point of adoption, and our goal is to develop the technology further and drive a paradigm change in how computer vision systems are built,” said Behzadi. 

He also mentioned that the industry would soon be able to fully build and train computer vision models in virtual worlds, allowing for more advanced and ethical AI.


Rushi Bhatt to lead the Compass India Development Center in Bengaluru


Rushi Bhatt, a renowned artificial intelligence expert, will now lead the Compass India Development Center located in Bengaluru, Karnataka, and serve as the company’s Senior Director and Head of AI. 

Bhatt, along with Joseph Sirosh, CTO of Compass, recently inaugurated the technology innovation and development center in Bengaluru. It is the company’s second development center outside of the United States, following the Compass IDC in Hyderabad. 

According to the company, its newly established IDC in Bengaluru will implement state-of-the-art AI and machine learning advancements. Compass plans to hire extensively to expand its team in areas like AI, ML, and cloud. 

Read More: Tata Motors unveils new EV Concept AVINYA

The company claims that it will use technologies including AI, ML, mobile apps, cloud, data intelligence, and data analytics to focus on increasing R&D and innovation throughout its technology stack. 

Bhatt said, “The Compass Bengaluru IDC is an important milestone in taking forward Compass’s vision of investing in Data, AI, and ML.” He further added that the IDC intends to use its technical talent pool to lead the next wave of cutting-edge innovation in developing agile, digital solutions that provide a consistent experience for all stakeholders in the real estate ecosystem. 

The IDC’s existing team has helped develop solutions across a wide range of areas, including CRM, marketing, client servicing, and 3D virtual tours. 

United States-based real estate technology company Compass was founded by Mike Weiss, Ori Allon, Robert Reffkin, and Ugo Di Girolamo in 2012. The company is most recognized for offering an online marketplace for purchasing, renting, and selling real estate. 

“Since our inception here in India, the technology talent that we have acquired has played an integral role in helping Compass establish itself as the biggest Real Estate Brokerage in the US. These engineers are not only helping us grow as a company, but they are also impacting the lives of thousands of Agents in the US,” said Joseph Sirosh. 

He also mentioned that AI, ML, Cloud, Mobile Development, and other technologies are poised to revolutionize every element of the Real Estate Ecosystem, and he is confident that India’s IT expertise will play a critical role in this revolution.


Wang Wei Min wins $100k prize in challenge to build AI Models that detect Deepfakes


Wang Wei Min, a research scientist from Singapore, has won a $100,000 prize in a challenge to develop artificial intelligence (AI) models that can detect deepfakes. 

Wang single-handedly beat 469 other participants in the competition to win the reward. The Trusted Media Challenge was organized by AI Singapore, the National Research Foundation’s artificial intelligence programme office. 

It was a five-month-long competition that required participating teams to build AI models for detecting deepfakes or digitally modified videos. 

Read More: Google to open largest Office outside of US in Hyderabad

Wang, a National University of Singapore graduate, developed an algorithm that distinguished genuine videos from those with digitally altered faces or audio with an accuracy of 98.53 percent. 

Wang was also offered a $300,000 start-up grant to commercialize the technology he developed. Deepfakes are fake media in which a person’s appearance is replaced with someone else’s in an existing photograph or video using artificial intelligence and machine learning. 

This technology has gained immense popularity over the years, and Wang says, “Deepfakes, good or bad, is an emerging technology that you simply cannot ignore.” Deepfakes are made with deep learning, typically by training generative neural network architectures such as autoencoders. 
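The classic autoencoder face-swap scheme trains one shared encoder with a separate decoder per identity; a swap is produced by decoding person A’s face with person B’s decoder. Here is a minimal Keras sketch of that idea, with toy layer sizes invented for illustration; it is not the pipeline of any particular deepfake tool.

```python
# Toy sketch of the autoencoder face-swap idea: one shared encoder, one
# decoder per identity. Dimensions are illustrative, not a real pipeline.
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(64, 64, 3))
encoder = tf.keras.Sequential(
    [layers.Flatten(), layers.Dense(256, activation="relu"), layers.Dense(128)],
    name="shared_encoder",
)

def make_decoder(name):
    return tf.keras.Sequential(
        [
            layers.Dense(256, activation="relu"),
            layers.Dense(64 * 64 * 3, activation="sigmoid"),
            layers.Reshape((64, 64, 3)),
        ],
        name=name,
    )

decoder_a, decoder_b = make_decoder("decoder_a"), make_decoder("decoder_b")

# Train autoencoder A on person A's faces and B on person B's; both share
# the encoder. A swap = encode a frame of A, decode it with decoder B.
auto_a = Model(inp, decoder_a(encoder(inp)))
auto_b = Model(inp, decoder_b(encoder(inp)))
auto_a.compile(optimizer="adam", loss="mse")
auto_b.compile(optimizer="adam", loss="mse")
```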

However, apart from being used to create entertainment content, the technology has also been used multiple times for spreading misinformation across the globe. 

Wang said, “Technology is not just part of the problem, it can also be part of the solution. However, technology cannot be the only solution to misinformation. It must be accompanied by a broader set of measures across society.” 

According to Wang, he decided to participate in the competition because the challenges currently facing the media match his research interests, and he is passionate about using AI to solve real-world problems.


Tata Motors unveils new EV Concept AVINYA


Automobile manufacturing giant Tata Motors has unveiled its new electric vehicle (EV) concept, named AVINYA. 

The new concept EV is built on the company’s third-generation design and embodies Tata’s vision of a fully electric vehicle. Tata has also released a video to showcase its new concept car. 

The name AVINYA comes from a Sanskrit word signifying innovation, which aligns with the specifications of this high-tech concept electric vehicle. 

Read More: Bengaluru boy develops India’s first AI, sensor-based bicycle counter

AVINYA promises unprecedented mobility with more space and comfort. Tata’s generation 3 architecture features a more flexible design, improved connectivity, superior driver-assistance technologies, and better performance and efficiency. 

The AVINYA Concept has a distinctive daytime running light (DRL) that runs across the front of the car, along with butterfly doors and voice-activated technologies. 

Moreover, the vehicle is built with eco-friendly materials and adopts a screen-less cabin concept to minimize the chances of driver distraction. 

According to the company, the new concept EV is a vehicle that combines the best features of a premium hatchback with an SUV’s luxury and versatility and the space and functionality of an MPV. 

Chairman of Tata Sons and Tata Motors, N. Chandrasekaran, said, “While making the AVINYA Concept a reality, the central idea was to offer a mobility solution like no other – a state of the art software on wheels that is well designed, sustainable, and reduces the planet’s carbon footprint.” 

He further added that green mobility is at the heart of TPEM (Tata Passenger Electric Mobility), and the AVINYA Concept is an excellent embodiment of the company’s values: a design that will not only speed up the adoption of electric vehicles but also lead it. 

This is Tata’s second EV debut this month and follows a $1 billion cash infusion six months ago. The development signifies the company’s deep interest in and commitment to bringing innovation to the EV industry. 

Managing Director of Tata Motors Passenger Vehicles and Tata Passenger Electric Mobility, Shailesh Chandra, said, “The AVINYA Concept is the fruition of our first idea built on our Pure EV GEN 3 architecture, enabling us to produce a range of globally competitive EVs.” 

He also mentioned that their objective for pure EVs is to provide wellness and rejuvenation while traveling, supported by cutting-edge technologies, intending to increase the overall quality of life.


Google to open largest Office outside of US in Hyderabad


Technology giant Google is set to open its largest office outside of the United States in Hyderabad, India. 

The information was revealed at the commencement of the project, in the presence of IT Minister K.T. Rama Rao, IT Principal Secretary Jayesh Ranjan, and Google Country Head and Vice President Sanjay Gupta. 

According to Google, the new facility could accommodate thousands of people over the next few years. 

Read More: HPE unveils Two Revolutionary Solutions to Boost AI and ML adoption

The campus, built on a 7.3-acre plot in Gachibowli, will feature a massive 3.3-million-square-foot structure. Google bought the land back in 2019 and now plans to build its largest office outside the US there. 

“I am pleased that Google is deepening its roots in Hyderabad through this landmark building which incorporates sustainability into its design, keeping in mind Hyderabad’s large and future-focused talent pool,” said IT Minister K.T. Rama Rao. 

Google currently maintains roughly ten offices in Hyderabad, among the most of any city in the country. With the new campus, Google wants to double its workforce in Hyderabad, which serves as a support center for Google AdWords, Gmail, Google Docs, Google Maps, YouTube, and more. 

The new campus is a step in Google’s future plans; once launched, it will house a highly skilled workforce in a resilient and adaptive workplace. 

Additionally, Google and the State government signed a Memorandum of Understanding (MoU) to strengthen their relations to pursue the Digital Telangana mission. 

As a part of the MoU, Google will team up with the Telangana Academy for Skills and Knowledge (TASK) to offer Google Career Certificate Scholarships to individuals interested in fields such as IT, UX design, data analytics, and project management. 

“Our previous MoUs with them have resulted in some great initiatives that have positively affected citizens from all walks of life. This time we are focusing on making a step-change in communities such as youth, women, and students and in citizen services,” added Rama Rao. 


HPE unveils Two Revolutionary Solutions to Boost AI and ML adoption


In an effort to boost the adoption of artificial intelligence technologies in enterprise settings, Hewlett Packard Enterprise (HPE) has launched two new artificial intelligence (AI) solutions. One introduces a decentralized machine learning system that lets remote or edge installations share updates to their models; the other is geared at helping companies develop and train machine learning models at scale.

The first, HPE Swarm Learning, accelerates insights at the edge by sharing and unifying AI model learnings without sacrificing data privacy. Target insights range from disease diagnosis to credit card fraud detection. 

HPE Swarm Learning is the first privacy-preserving decentralized machine learning solution for edge or dispersed sites, created by HPE’s R&D unit, Hewlett Packard Labs. Through the HPE Swarm API, the solution provides users with containers that can be effortlessly incorporated into AI models. Users may then share AI model learnings both within their company and with industry peers to enhance training without having to divulge actual data.

The majority of AI model training now takes place in a single location, using centralized integrated datasets. Due to the necessity to transport huge amounts of data back to the same source, this methodology can be inefficient and costly. It could also be inhibited by data privacy and ownership restrictions and regulations that restrict data exchange and mobility, resulting in inaccurate and biased models. In contrast, companies can make faster choices at the point of impact by training models and using insights at the edge, resulting in improved experiences and results.

HPE positions Swarm Learning as the only solution that allows enterprises to leverage dispersed data at its source to develop machine learning models that learn fairly while maintaining data governance and privacy. It uses blockchain technology to securely enroll members, dynamically elect a leader, and combine model parameters, lending robustness and security to the swarm network and ensuring that only the learnings acquired at the edge are shared, never the data itself. In simpler terms, HPE Swarm Learning establishes a peer-to-peer network between nodes so that model parameters can be transferred safely.
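HPE’s Swarm API is not shown in the announcement, so the following is only a generic sketch of the underlying idea: each node trains on its private data, and only model parameters travel to a merge step. The logistic-regression model, the data, and the merge-by-averaging rule are all illustrative assumptions.

```python
# Generic sketch of swarm-style learning: nodes train locally on private
# data and a leader merges parameters. Only weights move between peers,
# never raw data. This illustrates the concept, not HPE's Swarm API.
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """A few steps of logistic-regression gradient descent on one node."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))          # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)      # gradient step
    return w

def merge(updates):
    """The elected leader's job: combine peers' parameters (here, a mean)."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(1)
nodes = [(rng.normal(size=(100, 8)), rng.integers(0, 2, 100)) for _ in range(3)]

global_w = np.zeros(8)
for _ in range(10):                                # swarm training rounds
    global_w = merge([local_train(global_w, X, y) for X, y in nodes])
```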

HPE Swarm Learning is available as part of a containerized Swarm Learning Library that can operate on Docker, within virtual machines, and is hardware agnostic. HPE also mentioned that TigerGraph is already using HPE Swarm Learning in conjunction with their data analytics platform to spot odd behavior in credit card transactions.

Hewlett Packard Enterprise also unveiled the HPE Machine Learning Development System, an end-to-end solution that combines a machine learning software platform, compute, accelerators, and networking to create and train more accurate AI models faster and at larger scale. The new system builds on HPE’s acquisition of Determined AI, combining its comprehensive ML platform, now formally transitioned into the HPE Machine Learning Development Environment, with HPE’s AI and HPC solutions. Apart from the HPE Machine Learning Development Environment training platform, its software and services stack includes container management (Docker), cluster management (HPE Cluster Manager), and Red Hat Enterprise Linux.

According to the company, customers can cut the traditional time-to-value for developing and training machine learning models from weeks to days. Standing up infrastructure to support model creation and training at scale has traditionally been a lengthy, multistep procedure entailing the acquisition, installation, and administration of a highly parallel software ecosystem and infrastructure.

The HPE Machine Learning Development System helps businesses avoid the high costs and complexity of implementing AI infrastructure by combining software, specialized compute such as accelerators, networking, and services in a single solution, allowing them to quickly build and train optimized ML models at scale and achieve value faster. With distributed training, automated hyperparameter optimization, and neural architecture search, all fundamental to modern ML workflows, it can scale AI model training with minimal code rewrites or infrastructure revisions and help increase model accuracy.
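In its simplest form, automated hyperparameter optimization is a search over trial configurations, which platforms like this automate and distribute across many GPUs. The sketch below shows bare-bones random search with a placeholder training function; it is a generic illustration, not the HPE or Determined AI API.

```python
# Bare-bones random search over hyperparameters. ML platforms automate
# and parallelize this kind of loop; the training function here is a
# placeholder, and none of this reflects HPE's actual API.
import random

def train_and_eval(lr, batch_size, depth):
    """Placeholder: a real trial would train a model and return val accuracy."""
    return random.random()

space = {
    "lr": lambda: 10 ** random.uniform(-5, -1),        # log-uniform learning rate
    "batch_size": lambda: random.choice([32, 64, 128, 256]),
    "depth": lambda: random.randint(2, 8),             # number of layers
}

trials = [{k: sample() for k, sample in space.items()} for _ in range(20)]
best = max(trials, key=lambda cfg: train_and_eval(**cfg))
print("best config found:", best)
```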

The core architecture is built on HPE Apollo 6500 Gen10 server nodes with eight Nvidia A100 80GB GPUs and Nvidia Quantum InfiniBand networking. Up to 4TB of RAM and 30TB of NVMe local scratch storage are available on Apollo nodes, with HPE Parallel File System Storage as an option. To manage the system, there are additional ProLiant DL325 servers that operate as service nodes and are connected to the enterprise network through an Aruba CX 6300M switch.

The platform combines optimized compute, accelerators, and interconnect, all critical performance drivers for scaling models across a variety of workloads, from a modest configuration of 32 GPUs to a larger one of 256 GPUs. In a 32-GPU configuration, the HPE Machine Learning Development System delivers around 90% scaling efficiency for workloads like Natural Language Processing (NLP) and Computer Vision. In addition, internal testing shows that the system with 32 GPUs is up to 5.7 times faster on an NLP task than another offering with 32 similar GPUs but a suboptimal interconnect.
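Scaling efficiency here means the actual speedup divided by the ideal linear speedup; the quick calculation below shows what the quoted 90% figure implies for a 32-GPU job.

```python
# Scaling efficiency = actual speedup / ideal (linear) speedup.
# At 90% efficiency, 32 GPUs deliver roughly a 28.8x speedup, not 32x.
gpus = 32
efficiency = 0.90
speedup = gpus * efficiency
print(f"{gpus} GPUs at {efficiency:.0%} efficiency -> ~{speedup:.1f}x speedup")
```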

Read More: Microsoft and HPE test AI on International Space Station

The HPE Machine Learning Development System is a fully integrated solution that includes pre-configured and installed AI infrastructure for rapid model development and training. HPE Pointnext Services will provide on-site installation and software configuration as part of the service, allowing users to instantly build and train machine learning models to generate faster, more accurate insights from their data.

HPE also revealed that Aleph Alpha, a German AI company, is using the HPE Machine Learning Development System to train its multimodal AI, which incorporates NLP and computer vision. By combining image and text processing in five languages with almost human-like context understanding, the models push the boundaries of modern AI for language- and image-based use cases such as AI assistants for drafting complex texts, higher-level summaries, searches for highly specific information across hundreds of documents, and drawing on specialized knowledge in a conversational context.

Aleph Alpha set up the HPE Machine Learning Development System quickly and began training its models in hours rather than weeks, merging and monitoring hundreds of GPUs along the way. According to Jonas Andrulis, founder and CEO of Aleph Alpha, the company was pleasantly surprised by the system’s efficiency and its performance of more than 150 teraflops.

“While running these massive workloads, combined with our ongoing research, being able to rely on an integrated solution for deployment and monitoring makes all the difference,” says Jonas.

Both solutions are currently available to users, and Swarm Learning can also be coupled with the Machine Learning Development System.


Bengaluru boy develops India’s first AI, sensor-based bicycle counter


An eighteen-year-old from Bengaluru has developed India’s first artificial intelligence (AI)-powered, sensor-based bicycle counter. 

According to Nihar Thakkar, the creator of this AI system, the technology is an electronic device that counts cycles at a specific location for a set period and is installed to encourage cycling. 

In collaboration with the Directorate of Urban Land Transport (DULT) and Doddanekundi residents, the newly developed cycle counter has been installed near the Outer Ring Road cycle lane in Doddanekundi, Bengaluru. 

Read More: Meta says its new AI can formulate Sustainable Concrete

“This is the first live bicycle counter in the country and will help collect usage data of bicycles using the dedicated lane, which will make the impact of the cycle lane more visible and encourage cycling. The live bicycle counter is installed under the SuMA (Sustainable Mobility Accord) project being funded by DULT as a pilot project,” Thakkar told The Indian Express. 

He further added that the counter can distinguish bicycles from all other vehicles and counts only cyclists, in both directions, on the cycle lane and the service road. The bicycle counter is powered by an AI sensor camera and a machine learning algorithm, and the gathered data can be used to promote cycling and plan more dedicated bike lanes. 

Apart from creating the AI-powered bicycle counter, Thakkar also founded Urban Flow, which develops AI-powered tools for cycle-lane enforcement, traffic analysis, and live bicycle counters. 

“At Urban Flow, we are building AI-based tools that will help the enforcement of traffic rules. Most of the dedicated cycle lanes are being misused by two wheelers,” said Thakkar. He also mentioned that the technology uses an artificial intelligence-enabled camera that processes video footage in real-time to collect and send evidence of traffic violations to authorities.
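Urban Flow’s code is not public, but the counting step of such systems is conceptually simple: track each detected bicycle’s centroid across frames and register a count when it crosses a virtual line, with the crossing direction giving the travel direction. The sketch below assumes detections arrive from some upstream object detector and uses invented pixel coordinates.

```python
# Generic sketch of a camera-based bicycle counter: track detected
# bicycle centroids and count crossings of a virtual line, per direction.
# Detections are assumed to come from an object detector upstream;
# this is illustrative, not Urban Flow's actual system.

COUNT_LINE_X = 320  # virtual line position in pixels (illustrative)

def update_counts(prev_x, curr_x, counts):
    """Count a crossing when a tracked bicycle moves across the line."""
    if prev_x < COUNT_LINE_X <= curr_x:
        counts["eastbound"] += 1
    elif curr_x < COUNT_LINE_X <= prev_x:
        counts["westbound"] += 1

counts = {"eastbound": 0, "westbound": 0}
# Example track: one bicycle's centroid x-position over consecutive frames.
track = [300, 310, 318, 325, 340]
for prev_x, curr_x in zip(track, track[1:]):
    update_counts(prev_x, curr_x, counts)
print(counts)  # {'eastbound': 1, 'westbound': 0}
```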


Research Team Develops TinyML neural network, FOMO for real-time object detection


Researchers at Edge Impulse, a platform for building ML models for the edge, have created a novel machine learning solution that enables real-time object detection on devices with limited CPU and storage capacity. The solution, called Faster Objects, More Objects (FOMO), is a novel deep learning architecture with the potential to open up new computer vision applications.

While cloud-based deep learning solutions have seen tremendous acceptance, they are not suitable for every circumstance, as many applications need inference on the device. Internet connectivity is not assured in some situations, such as drone rescue operations, and in applications that require real-time machine learning inference, the delay induced by the round trip to the cloud can prove risky. Edge devices promise to resolve these latency issues, but traditional deep learning algorithms often fail to run on resource-constrained edge hardware.

TinyML is an approach for adapting machine learning models to resource-constrained embedded devices, which are battery-powered and have minimal computing power and memory. Because conventional machine learning models will not run on these devices, TinyML transforms and optimizes models for the smallest unit of an electronic device, the microcontroller.
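A common open-source route for this transformation is TensorFlow Lite’s converter with quantization, which shrinks a trained model enough to fit a microcontroller’s memory budget. The sketch below uses a tiny placeholder model; Edge Impulse’s own tooling wraps a similar pipeline but is not shown here.

```python
# Typical TinyML workflow: quantize a trained Keras model with the
# TensorFlow Lite converter so it fits a microcontroller's memory.
# The model here is a tiny placeholder, not a production network.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"model size: {len(tflite_model) / 1024:.1f} KB")
```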

The ability to run a deep learning model on such a small device contributes significantly to the democratization of artificial intelligence. Edge computing devices were created to meet the internet of things’ (IoT) need for computational capacity closer to endpoints. TinyML algorithms can run on small, low-power devices at the end of an IoT network even without connectivity, ensuring that devices can process data right where it is produced and identify and respond to problems in real time, independent of latency and bandwidth. Processing data at its source also secures it from hackers who target centralized data stores.

Looking ahead, making TinyML more accessible to developers will be crucial for turning otherwise-wasted data into meaningful insights and for developing innovative applications across a variety of sectors.

The memory and processing requirements of most object-detection deep learning models exceed the capabilities of tiny CPUs. On the other hand, FOMO only takes a few hundred kilobytes of memory, making it ideal for TinyML.

FOMO is based on the premise that not all object-detection applications need the high-precision output that state-of-the-art deep learning models can give. Users can scale deep learning models to very small sizes while making them effective by identifying the most appropriate tradeoff between accuracy, performance, and memory. 

This is a remarkable milestone because, unlike image classification, which only predicts whether a certain type of object is present in an image, object detection requires the model to recognize more than one object as well as the bounding box of each occurrence. As a result, object detection models are far more demanding than image classification networks and require considerable processing power.

FOMO predicts the object’s center rather than a bounding box, because many object detection applications only care about the location of objects in the frame rather than their sizes. Detecting centroids is substantially more computationally efficient than bounding-box prediction and requires less data.
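One way to make the centroid idea concrete is to look at how the training labels change: bounding boxes collapse to centroids on a coarse output grid, with one class score per cell. The numpy sketch below is an illustrative reconstruction, not Edge Impulse’s actual label format; the grid size and classes are invented.

```python
# Illustrative FOMO-style label construction: reduce bounding boxes to
# object centroids on a coarse output grid (one class score per cell).
import numpy as np

IMG, GRID, N_CLASSES = 96, 12, 2          # 96x96 input -> 12x12 output grid
CELL = IMG // GRID

def boxes_to_centroid_grid(boxes):
    """boxes: list of (x_min, y_min, x_max, y_max, class_id) in pixels."""
    target = np.zeros((GRID, GRID, N_CLASSES + 1))
    target[..., 0] = 1.0                   # background class everywhere
    for x0, y0, x1, y1, cls in boxes:
        cx, cy = (x0 + x1) // 2, (y0 + y1) // 2     # centroid, not a box
        gx, gy = min(cx // CELL, GRID - 1), min(cy // CELL, GRID - 1)
        target[gy, gx] = 0.0
        target[gy, gx, cls + 1] = 1.0      # one-hot object class in that cell
    return target

grid = boxes_to_centroid_grid([(10, 10, 30, 30, 0), (60, 50, 90, 80, 1)])
print(grid.shape)  # (12, 12, 3)
```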

FOMO also makes structural changes compared to conventional deep learning architectures. The team explains that single-shot object detectors consist of a collection of convolutional layers that extract features and several fully connected layers that predict the bounding box. The convolutional layers extract visual features in hierarchical order: the first layers detect simple patterns in various orientations, such as lines and edges. Each convolutional layer is typically paired with a pooling layer, which shrinks the layer’s output while keeping the most prominent feature in each area.

After that, the output of the pooling layer is passed into the next convolutional layer, which extracts higher-level features like corners, arcs, and circles. The feature maps zoom out and can recognize intricate things like faces and objects when more convolutional and pooling layers are added. Finally, the fully connected layers flatten the final convolution layer’s output and attempt to forecast object class and bounding box.

Read More: MIT researchers develop an AI model that understands object relationships

In FOMO, the fully connected layers and the last few convolutional layers are dropped. This reduces the neural network’s output to a scaled-down version of the input image, with each output value representing a small patch of it. The network is then trained with a specialized loss function so that each output unit predicts the class probabilities for its corresponding patch of the input. The result is essentially a heatmap for each category of object. 
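In code, that truncate-and-heatmap recipe looks roughly like the following: take MobileNetV2, cut it at an intermediate block that still has spatial resolution, and replace the head with a 1x1 convolution that classifies each cell. The cut point, input size, and class count below are illustrative guesses, not Edge Impulse’s exact implementation.

```python
# Rough FOMO-style model: truncate MobileNetV2 partway, then attach a
# 1x1-conv "per patch" classifier head instead of fully connected layers.
# The cut point and sizes are illustrative, not Edge Impulse's exact code.
import tensorflow as tf

N_CLASSES = 2
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, alpha=0.35, weights=None
)
# Cut at an intermediate block so the output keeps spatial resolution
# (12x12 here) instead of the final, heavily downsampled feature map.
features = base.get_layer("block_6_expand_relu").output

# Each output cell predicts class probabilities for its image patch,
# producing a heatmap: background + object classes per cell.
heatmap = tf.keras.layers.Conv2D(N_CLASSES + 1, 1, activation="softmax")(features)

fomo = tf.keras.Model(base.input, heatmap)
fomo.summary()
```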

This approach is different from pruning, a popular type of optimization algorithm that compresses neural networks by omitting parameters that are not relevant to the model’s output.

FOMO’s architecture offers a number of major advantages. For starters, FOMO works with existing architectures like MobileNetV2, a popular deep learning model for image classification on edge devices. 

FOMO also reduces the memory and compute needs of object detection models by dramatically reducing the size of the neural network. It is 30 times quicker than MobileNet SSD and can run on devices with less than 200 kilobytes of RAM, according to Edge Impulse.

There are also certain limitations to be aware of. FOMO works best when objects have similar sizes, which may not hold if there is one very large object in the foreground and many small objects in the background. Additionally, if objects are too close together or overlap, they will occupy the same grid square, lowering the detector’s accuracy. To some extent, this can be mitigated by decreasing FOMO’s cell size or boosting the image resolution. The team at Edge Impulse claims FOMO is very effective when the camera is in a fixed position, such as scanning products on a conveyor belt or counting automobiles in a parking lot.

The Edge Impulse team hopes to improve on this work in the future, for example by downsizing the model to under 100 kilobytes and improving its transfer-learning capabilities.


SparkCognition and University of Texas at Austin partners for AI and Robotics


Artificial intelligence technology startup SparkCognition has partnered with the University of Texas at Austin to advance AI and robotics. 

The primary aim of the new partnership is to advance artificial intelligence in robotics and its applications in industry. 

As a part of this collaboration, Texas Robotics will use SparkCognition’s HyperWerx facility for development and education to bring the physical world together with cutting-edge AI technology. 

Read More: IIT Jodhpur Researchers develop AI algorithm to detect Cataracts

The HyperWerx facility, located on 50 acres in the greater Austin area, is an AI testing ground for robotics, unmanned aerial vehicles (UAVs), factory automation, oil and gas exploration, etc. 

According to the company, its HyperWerx campus is a one-of-a-kind space dedicated to designing, developing, experimenting, and commercializing AI-powered physical solutions. 

Dr. Peter Stone, Director of Texas Robotics and a Professor in the University’s Department of Computer Science, said, “HyperWerx allows us the opportunity to evaluate our robotics innovations via hands-on experiments under realistic conditions, thus enriching our understanding of what these systems are capable of, as well as facilitating educational experiences.” 

He further added that commercial advances in AI and robotics bring capabilities to the market that promise to make workplaces safer, more sustainable, and more productive for everyone. 

Texas Robotics represents researchers and students from UT Austin’s Cockrell School of Engineering, College of Natural Sciences, and Department of Computer Science, among other departments and colleges. UT Austin students will now have access to industry and technical specialists and the newest aerial and terrestrial development equipment at HyperWerx. 

Chief Science Officer at SparkCognition, Professor Bruce Porter, said, “Texas Robotics is a great representation of the university’s dedication to innovation in robotics and the use of artificial intelligence to fuel industrial change.” 

He also mentioned that the university could speed their research through physical experimentation and close collaboration by bringing this investment in innovation to their HyperWerx facility, ultimately driving the commercialization of robotics into society.
