Chick-fil-A, a popular restaurant chain in the United States, is testing an autonomous delivery system at two of its Austin, Texas, locations.
The system was developed by US-based robotic delivery startup Refraction AI.
Refraction AI’s approach to last-mile delivery aims to make the cost, safety, and sustainability benefits of self-driving technology attainable while meeting the steadily rising demand for delivery across product categories.
Chick-fil-A, the fast-food brand known for its chicken sandwiches, said on Tuesday that it has partnered with Refraction AI to deploy a fleet of self-driving delivery robots.
Luke Steigmeyer, Operator of Chick-fil-A 6th & Congress, said, “Autonomous delivery using Refraction’s robots creates an exciting new opportunity to extend the Chick-fil-A experience to a growing number of delivery guests.”
Steigmeyer further added that the platform would enable them to provide rapid, high-quality, and cost-effective meal delivery within a mile radius of their restaurant, all while contributing to the clean and safe environment of the community they serve.
According to the robotics startup, its Robot-as-a-Service platform combines self-driving technology, teleoperations, and a delivery robot that runs alongside the road or in a bike lane.
The platform avoids the speed, distance, and legal constraints of sidewalk travel without compromising safety. An additional benefit of the Robot-as-a-Service system is that it can complete last-mile deliveries with 90% lower carbon emissions and 80% less energy consumption.
After successful testing at the two stores, Chick-fil-A plans to expand the autonomous delivery system to other locations.
CEO of Refraction AI, Luke Schneider, said, “We are thrilled about working with Chick-fil-A, an organization that is admired and respected as much for its commitment to the communities it serves as it is for the innovation and quality of its business.”
He also mentioned that the two like-minded organizations have joined forces to demonstrate a smart, rational approach to delivery.
AI pundits believe that the key to a successful artificial intelligence system is building one that matches humans’ ability to grasp and learn any language and perform a task. Many AI systems can think, plan, and learn about a task, parse and represent insights gained from a dataset, and communicate using NLG-NLP algorithms. However, most of them handle only a single kind of task, e.g., solving quadratic equations, captioning an image, or playing chess.
DeepMind has taken advantage of recent developments in large-scale language modeling to create a single generalist agent that can handle more than just text outputs. Earlier this month, DeepMind unveiled a novel “generalist” AI model called Gato. The agent operates as a multi-modal, multi-task, multi-embodiment network, meaning that the same neural network (i.e., a single architecture with a single set of weights) can perform all tasks even though they involve intrinsically distinct types of inputs and outputs.
DeepMind announced the model on Twitter: “Gato 🐈 a scalable generalist agent that uses a single transformer with exactly the same weights to play Atari, follow text instructions, caption images, chat with people, control a real robot arm, and more: https://t.co/9Q7WsRBmIC”
DeepMind also published a paper titled ‘A Generalist Agent,’ which detailed the model’s capabilities and training procedure. DeepMind argues that the generalist agent can be tuned with a little more data to perform even better on a wider range of jobs. The researchers point out that a general-purpose agent reduces the need for hand-crafting policy models for each area, increases the volume and diversity of training data, and allows for ongoing improvements at the data, compute, and model scale. A general-purpose agent can also be seen as a first step toward artificial general intelligence (AGI), the field’s ultimate objective.
A modality, in layman’s terms, refers to the manner in which something occurs or is perceived. Most people associate the word with sensory modalities, such as vision and touch, which represent our major communication pathways. When a research topic or dataset involves several such modalities, it is called multimodal. To make real progress in comprehending the world around us, AI must be able to interpret and reason about multimodal data.
According to the Alphabet-owned AI lab, Gato can play Atari video games, caption images, chat, and stack blocks with a real robot arm – overall performing 604 distinct tasks.
Though DeepMind’s preprint describing Gato is not especially detailed, it does make clear that the design is deeply anchored in the transformers used in natural language processing and text generation. Gato is trained not only on text but also on images, torques acting on robotic arms, button presses from computer games, and so on. Essentially, Gato ingests all of these input types and decides, based on context, whether to produce intelligible text (for example, to chat, summarize, or translate), torque commands (for robotic arm actuators), or button presses (to play games).
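The paper’s description of this flattening can be made concrete with a small sketch. The snippet below is a loose illustration, not DeepMind’s actual tokenization scheme: the vocabulary layout, bin count, and helper names are assumptions, though Gato does discretize continuous values into bins that share one token space with text.

```python
import numpy as np

# Illustrative vocabulary layout -- an assumption, not Gato's actual scheme.
TEXT_VOCAB = 32_000   # text tokens occupy [0, TEXT_VOCAB)
NUM_BINS = 1024       # continuous values are discretized into this many bins


def tokenize_continuous(values, low=-1.0, high=1.0):
    """Discretize continuous inputs (e.g. robot-arm torques) into bins,
    then shift them into their own region of the shared vocabulary."""
    values = np.clip(np.asarray(values, dtype=float), low, high)
    bins = ((values - low) / (high - low) * (NUM_BINS - 1)).astype(int)
    return (TEXT_VOCAB + bins).tolist()


def serialize_timestep(text_tokens, proprioception, action):
    """Flatten one observation/action pair into a single token stream."""
    return (
        list(text_tokens)
        + tokenize_continuous(proprioception)
        + tokenize_continuous(action)
    )


# Example: a toy timestep mixing a text token, joint readings, and a torque.
print(serialize_timestep([17], [0.1, -0.3], [0.5]))
```

Once everything lives in one token stream like this, a single next-token predictor can, in principle, serve every modality at once.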
Gato demonstrates the adaptability of transformer-based machine learning architectures by showing how they can be used for a range of applications. In contrast to earlier neural networks that were specialized for playing games, interpreting text, or captioning photos, Gato is versatile enough to accomplish all of these tasks on its own, with a single set of weights and a relatively simple architecture. Previous specialized networks required the integration of numerous modules, an integration largely dictated by the problem to be solved.
Researchers gathered data from a variety of tasks and modalities to train Gato. Vision and language training drew on MassiveText, a large text dataset comprising web pages, books, news stories, and code, as well as vision-language datasets including ALIGN (Jia et al., 2021) and COCO Captions.
Once serialized into a flat sequence of tokens, the data was batched and processed by a transformer neural network. While any general sequence model can predict the next token, the researchers chose a transformer for its simplicity and scalability. They employed a decoder-only transformer with 1.2 billion parameters, 24 layers, and an embedding size of 2048. What makes Gato interesting is that it is orders of magnitude smaller than single-task systems like GPT-3: at “only” 1.2 billion weights it is smaller than OpenAI’s GPT-2 language model and nearly two orders of magnitude below GPT-3’s 175 billion weights. Parameters are system components learned from training data that determine how well a system handles a task, such as text generation.
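Those architecture numbers are self-consistent: a standard decoder-only block carries roughly 12·d_model² weights (4·d² in the attention projections plus 8·d² in a 4×-wide feed-forward layer), so a quick back-of-the-envelope check, ignoring embeddings and layer norms, lands almost exactly on the reported figure.

```python
# Back-of-the-envelope parameter count for a decoder-only transformer
# (standard architecture assumed; ignores embeddings, biases, layer norms).
n_layers = 24
d_model = 2048

attention = 4 * d_model**2           # Q, K, V, and output projections
mlp = 2 * d_model * (4 * d_model)    # up- and down-projection, 4x hidden width
per_layer = attention + mlp          # 12 * d_model^2 per block

total = n_layers * per_layer
print(f"{total / 1e9:.2f}B parameters")  # ~1.21B, matching the reported 1.2B
```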
When a prompt is given, it is tokenized to form the initial sequence. The environment then produces the first observation, which is also tokenized and appended to the sequence. Gato samples the action vector autoregressively, one token at a time. Once all tokens of the action vector have been sampled, the action is decoded and sent to the environment, which steps and produces a new observation, and the process repeats. According to the DeepMind researchers, the model always attends to all prior observations and actions within its 1024-token context window. DeepMind notes in its paper that the loss is masked so that Gato only predicts action and text targets.
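A schematic rendering of that loop may help. In the sketch below, `model`, `env`, and the tokenizer helpers are stand-in interfaces, not DeepMind’s code; only the 1024-token window is taken from the paper.

```python
# Schematic sketch of the control loop described in the paper.
CONTEXT_WINDOW = 1024  # Gato's reported context length


def control_loop(model, env, prompt_tokens, tokenize_obs, decode_action,
                 action_len):
    """Observe, sample an action vector token by token, act, repeat."""
    sequence = list(prompt_tokens)      # tokenized prompt starts the sequence
    obs, done = env.reset(), False
    while not done:
        sequence += tokenize_obs(obs)   # append the tokenized observation
        action_tokens = []
        for _ in range(action_len):
            # Condition on at most the last 1024 tokens of history.
            token = model.sample_next(sequence[-CONTEXT_WINDOW:])
            action_tokens.append(token)
            sequence.append(token)
        # Decode the sampled tokens and step the environment.
        obs, done = env.step(decode_action(action_tokens))
```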
The study showed that transformer sequence models work well as multi-task policies in real-world settings, including visual and robotic tasks. Gato also illustrates the ability to use prompting to learn new tasks rather than training a model from scratch.
Gato was assessed on a range of tasks, including simulated control, robotic stacking, and ALE Atari games. Gato exceeded the 50% expert-score threshold on 450 of the 604 tasks in the experiments. DeepMind also found that Gato’s performance improves as the number of parameters increases: alongside the main model, the researchers trained two smaller variants with 79 million and 364 million parameters. In the benchmark results, average performance rises steadily with scale. This phenomenon has previously been observed in large language models and was thoroughly investigated in the 2020 paper “Scaling Laws for Neural Language Models.”
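Such scaling trends are conventionally quantified by fitting a power law, which is a straight line in log-log space, as in the Kaplan et al. paper cited above. The scores below are invented placeholders purely to show the fitting procedure for three model sizes matching Gato’s variants.

```python
import numpy as np

# Placeholder (parameter count, mean normalized score) pairs for the three
# Gato variants -- the scores are invented purely to demo the procedure.
params = np.array([79e6, 364e6, 1.2e9])
scores = np.array([0.42, 0.51, 0.60])

# Scaling laws are typically fit as score ~ a * params^b, i.e. a straight
# line in log-log space.
b, log_a = np.polyfit(np.log(params), np.log(scores), 1)
print(f"fitted exponent b = {b:.3f}")
```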
Demis Hassabis, the co-founder of DeepMind, congratulated the team in a tweet, saying, “Our most general agent yet!! Fantastic work from the team!”
Nando de Freitas, a research director at DeepMind, reacted on Twitter: “Someone’s opinion article. My opinion: It’s all about scale now! The Game is Over! It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, … 1/N” https://t.co/UJxSLZGc71
However, not everyone is on board with Gato being touted as an AGI agent. According to David Pfau, a staff research scientist at DeepMind, the team amalgamated the policies of a group of individually trained agents into a single network, which, he argued, is neither as surprising nor as exciting as the hype around Gato suggests.
The Gato model is certainly a significant step forward, but it also raises the question of how far AGI research has actually progressed.
AGI is a term used to describe AI systems that can function fully independently while executing activities that require human-level intellect. By that definition, DeepMind’s Gato is far from general intelligence in any form: a generally intelligent system can acquire new skills without prior training, which is not the case with Gato.
In evaluations, the ‘AGI’ agent outperforms a dedicated machine learning program at directing a robotic Sawyer arm that stacks blocks. But the captions it generates for photographs are frequently subpar, sometimes misgendering people, and its capacity to hold an ordinary chat conversation with a human interlocutor is similarly dismal, occasionally yielding contradictory and illogical responses.
In their 40-page report, DeepMind reveals that when asked what the capital of France is, the system occasionally responds with ‘Marseille’ and on other occasions with ‘Paris.’ Such errors, according to the researchers, can presumably be rectified with additional scaling.
Gato also has memory limitations that make it difficult to learn a new activity by conditioning on a prompt, such as demonstrations of desired behavior.
Because of accelerator memory constraints and the exceptionally long sequences produced by tokenized demonstrations, the maximum context length does not allow the agent to attend over a sufficiently informative context.
In addition, its performance on Atari 2600 video games is inferior to that of most specialized machine learning algorithms meant to compete in the benchmark Arcade Learning Environment.
Furthermore, a single AI system capable of performing several tasks isn’t new. In fact, Google recently began employing a system known as the multitask unified model, or MUM, in Google Search for tasks ranging from spotting interlingual variations in a word’s spelling to linking a search query to an image. However, the variety of tasks addressed and the training approach are possibly unique to Gato.
To summarize, while Gato brings a fresh spin to the AGI domain by performing multi-modal, multi-task activities, it still falls short of being labeled an AGI model.
The Institute of Technical Education (ITE) Singapore and global technology giant Microsoft have partnered to equip more than 4,000 students with responsible artificial intelligence (AI) skills.
As part of this collaboration, Microsoft has opened a new AI Lab at ITE College East, which will serve as an innovation hub for ITE students to learn about AI principles, skills, and applications across sectors.
This is a step towards fulfilling the ever-increasing demand for a skilled workforce in emerging new-age technology fields.
ITE is offering an AI-focused curriculum to students commencing their ITE experience in the 2022 Academic Year as part of its new 3-year Higher Nitec full-time program. “ITE is committed to equipping our students with AI social and technical skills for real-world applications and enabling them to be AI-ready for work,” said Low Khah Gek, Chief Executive Officer of ITE.
Low further added that, thanks to the partnership with Microsoft, ITE has been able to design a comprehensive AI-focused curriculum with a unique ‘movie-fication’ pedagogy that will excite and inspire students’ creativity.
According to Microsoft, students from all three ITE Colleges taking the Higher Nitec in IT Applications Development, IT Systems and Networks, Cyber & Network Security, or Business Information Systems can enroll in the elective course. Applications for these courses have already opened.
Director of Public Sector Group, Microsoft Singapore, Lum Seow Khun, said, “Singapore continues to deploy AI at a national scale as it moves to develop its position as a Smart Nation through Industry Transformation plans. Organizations continue to see a gap in AI skill sets and talent as they adopt machine learning, deep learning, and natural language processing.”
Lum also mentioned that, through collaborations like these, Microsoft seeks to leverage digital literacy as a driver of growth while bridging the gap between skills and employment for a resilient, digitally inclusive Singapore.
Artificial intelligence and digital technology company Pactera EDGE has partnered with the International Institute of Information Technology Hyderabad (IIIT-H) to launch a new program aimed at startups.
The AI Innovation Challenge is a program aimed at early-stage startups with solutions that can solve challenges in the retail and manufacturing industries.
According to officials, the program primarily focuses on leveraging computer vision for human activity detection and visual inspection in manufacturing.
The organizations are looking for startups whose solutions address specific problem statements in terms of strategy, technology, desired outcomes, and impact.
Selected startups will take part in a four-month structured immersion program to help them pivot their product toward particular domain use cases, along with an equity-free award of INR 12 lakh. IIIT-H will provide research and business mentorship to the chosen startups, leveraging its strong technical research capabilities and extensive experience incubating startups.
“Pactera EDGE believes in reinventing customer experience using revolutionary AI technologies and giving our customers the winning EDGE in the digital era,” said Dinesh Chandrasekar, CIO, Pactera EDGE.
Pactera EDGE also stated that it intends to provide mentorship to these concepts in areas such as technology, business strategy, design, and product development.
Prof Ramesh Loganathan, Head of Outreach and Professor of Co-innovation at IIIT Hyderabad, said, “CIE-IIITH is very happy to be partnering with Pactera EDGE for this market access program. As an early-stage incubator, helping startups with market access is the most significant challenge.”
He further added that the assistance of such business programs is a critical facilitator in achieving this aim.
Interested startups can apply for the program through the official IIIT-H website.
Global audit, consulting, risk management, and financial advisory services provider Deloitte has partnered with the Indian Institute of Technology (IIT) Roorkee to offer its students fellowships in artificial intelligence (AI) and advanced analytics.
This strategic partnership will considerably help IIT Roorkee students gain industry experience through work-study programs.
The artificial intelligence and machine learning (ML) immersion fellowship programs are specifically designed to build the next generation of the workforce.
Prof. Ajit K. Chaturvedi, Director, IIT Roorkee, said, “The coming together of IIT Roorkee and Deloitte will create new opportunities for both of us. In fact, this partnership has the potential to strengthen the AI roadmap of India.”
According to officials, this new partnership is in line with the Indian government’s “Digital India” goal, which aims to create a digitally empowered society. Experts believe that AI proficiency will be critical in addressing existing, emerging, and future employment prospects, and this collaboration will contribute towards building an industry-ready workforce in the country.
Both organizations will primarily focus on the following:
Design and deliver AI and machine learning certification courses for the Deloitte AI Academy, which educates Deloitte practitioners.
Offer IIT Roorkee researchers and students the opportunity to collaborate on AI projects with Deloitte through a work-study program.
Promote AI fluency among ambitious students and the general public through online learning courses.
Managing Principal, Businesses, Global and Strategic Services at Deloitte, Jason Girzadas, said, “We at Deloitte are committed to developing new talent with the right skill sets to deliver on the benefits of AI for business and all of society.”
He also mentioned that the partnership with IIT Roorkee will educate future business leaders and impart AI competency, increasing the pool of business-ready AI talent as Deloitte strives to assist its customers’ journeys toward becoming AI-fueled enterprises.
Ai-Da, a humanoid robot developed by English gallerist Aidan Meller and Cornish robotics company Engineered Arts, has painted a portrait of Queen Elizabeth to celebrate her Platinum Jubilee.
The portrait, titled Algorithmic Queen, was created to depict the many facets of technological advancement that have occurred over Queen Elizabeth’s 70-year reign.
In the words of the developer of Ai-Da, the portrait provides a marker for “how far things have come” in the monarch’s life. The painting was layered and scaled to produce the final multidimensional portrait of Queen Elizabeth.
Ai-Da is the world’s first artificial intelligence-powered robot that can paint just like an artist. The robot, named after Ada Lovelace, the first female computer programmer, uses face recognition technology to scan photographs and feed them into an algorithm that controls her robotic arm’s movement.
The developers describe the robot as “the first ultra-realistic humanoid artist.” Ai-Da was first introduced in 2019 at Oxford University and has since traveled the world to showcase its capabilities. Ai-Da is updated regularly, allowing it to extend and refine its abilities as technology evolves.
Apart from painting capabilities, Ai-Da can also effectively converse with others.
Ai-Da said, “I’d like to thank Her Majesty the Queen for her dedication and for the service she gives to so many people. She is an outstanding, courageous woman who is utterly committed to public service.” The robot further added that it thinks she’s a wonderful person, and it wishes Her Majesty the Queen a very happy Platinum Jubilee.
Global semiconductor manufacturing giant Intel announces that it has selected a diverse Ohio-based team led by Gilbane Building Company to manage the early excavation work for its two new leading-edge chip factories in Ohio.
Intel has also partnered with McDaniel’s Construction Corp., Northstar Contracting Inc., and GTSA Construction Consulting to execute the operation.
According to the company, this new development is a part of Intel’s IDM 2.0 plan. In this initiative, Intel will accelerate chip production to satisfy the increasing demand for sophisticated semiconductors, powering a new generation of innovative products from Intel and servicing the needs of foundry customers.
Gilbane will oversee the team’s efforts to prepare the site for the development of Intel’s planned facilities and to promote economic inclusion so that diverse businesses have long-term prospects.
Dan Moncrief III, chairman and CEO of McDaniel’s Construction, said, “Since our inception over 37 years ago, we have strived to be a leader in our market sector. We believe that the past hard work and sacrifices have put us in a position to be an integral team member for this project.”
He further added that this opportunity should offer them enough exposure to allow them to continue growing in the near future.
Earlier this year, Intel announced plans to build a $20 billion microchip manufacturing facility in Ohio. Intel estimates that the massive industrial plant will sit on almost 1,000 acres and initially employ over 3,000 people.
In related news, Intel CEO Pat Gelsinger recently said that a shortage of advanced equipment for making semiconductors could hold up global expansion plans. According to him, supply timeframes for chipmaking equipment for the company’s upcoming chip facilities in the United States and Europe have been greatly prolonged.
Technology giant Meta, formerly known as Facebook, selects Microsoft Azure as its preferred cloud provider to advance artificial intelligence (AI) innovation and deepen PyTorch collaboration.
Meta will employ Azure’s supercomputing capability to boost AI research and development for its Meta AI group as part of this partnership.
According to Microsoft, Meta will use a dedicated Azure cluster of 5,400 GPUs running on Azure’s latest virtual machine (VM) series (NDm A100 v4, with NVIDIA A100 80GB Tensor Core GPUs).
Both companies also plan to work together to increase PyTorch usage on Azure and help developers go from experimental to production faster.
“With Azure’s compute power and 1.6 TB/s of interconnect bandwidth per VM, we are able to accelerate our ever-growing training demands to better accommodate larger and more innovative AI models,” said Jerome Pesenti, Vice President of AI, Meta.
He further added that they are also excited to collaborate with Microsoft to extend that experience to customers using PyTorch in their research and production workflows.
According to the plan, Microsoft will release new PyTorch development accelerators in the coming months to help developers quickly deploy PyTorch-based solutions on Azure. Moreover, Microsoft will provide PyTorch with enterprise-grade support, allowing customers and partners to use PyTorch models in production on both the cloud and the edge.
Recently, the open-source machine learning platform Hugging Face also partnered with Microsoft to launch its new Hugging Face Endpoints on Azure. Hugging Face Endpoints, available through Azure Machine Learning, allows clients to deploy Hugging Face models with just a few clicks or lines of Azure SDK code, drastically improving usability.
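For readers unfamiliar with what such endpoints wrap, the snippet below shows the standard open-source way to invoke a Hugging Face model locally with the `transformers` library; the Azure-specific deployment calls are omitted, and the model name is simply a common public checkpoint chosen for illustration.

```python
# Standard local invocation of a public Hugging Face model with the
# open-source `transformers` library (managed Endpoints host the same
# kind of model behind an HTTP API).
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
text = (
    "Meta selected Microsoft Azure as a preferred cloud provider and will "
    "use a dedicated GPU cluster to accelerate AI research and development."
)
print(summarizer(text, max_length=30, min_length=10)[0]["summary_text"])
```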
Global technology giant NVIDIA announces that researchers using its hardware have curated a database of over 100,000 brain images generated with artificial intelligence (AI) and high-performance computing (HPC).
Ian Buck, general manager and vice president of NVIDIA’s Accelerated Computing, revealed this information at a recently held event.
According to him, using NVIDIA’s Cambridge-1 supercomputer and artificial intelligence, researchers at King’s College in London have compiled the world’s largest archive of synthetic brain images.
The database, which comprises 100,000 brain images, is being made publicly available to healthcare experts to promote the study of cognitive diseases. Jorge Cardoso, a researcher at King’s College and a founding member of MONAI, curated and provided the images.
Cardoso said, “In the past, many researchers didn’t want to work in healthcare because they couldn’t get good data, but now they can.”
He further added that once they realized the models had learned the distribution of brain types, they no longer needed the dataset itself; it was, in effect, part of the model.
The realistic 3D brain images, which can be male or female, young or elderly, can be customized to meet specific study requirements. NVIDIA’s Cambridge-1 supercomputer supplies the considerable computational power needed to process each image’s 16 million 3D pixels (voxels).
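For a sense of scale, the quoted 16 million figure matches a 256³ voxel grid, a common resolution in neuroimaging; the grid size here is an assumption used only to do the arithmetic.

```python
import numpy as np

# A 256^3 grid is an assumed, commonly used neuroimaging resolution; it
# yields ~16.8 million voxels, in line with the quoted 16 million figure.
volume = np.zeros((256, 256, 256), dtype=np.float32)
print(f"{volume.size / 1e6:.1f} million voxels")   # 16.8 million voxels
print(f"{volume.nbytes / 1e6:.0f} MB per volume")  # ~67 MB at float32
```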
According to NVIDIA, the Cambridge-1 system comprises 80 DGX A100 systems, Bluefield-2 DPUs, 640 NVIDIA A100 Tensor Core GPUs, and NVIDIA HDR InfiniBand networking.
The 100,000 images will be stored in a national archive, Health Data Research UK, and the models will be shared for future use. Cardoso praised the work as pointing in many directions at once, as if unloading the contents of several minds.
Technology startup One AI has raised $8 million in its recently held funding round. Investors including Ariel Maislos, Tech Aviv, SentinelOne Inc. Chief Executive Officer Tomer Weingarten, and numerous others participated in the round.
According to the company, it plans to use the freshly raised funds to build out enhanced natural language processing features on its platform.
One AI provides a set of natural language processing models that have been pre-packaged for corporate use cases. Developers can seamlessly integrate One AI’s technology with their software using an application programming interface.
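As a rough sketch of what integrating such an API might look like, the snippet below posts text to a hosted NLP pipeline over HTTP. The endpoint URL, header, and payload shape are hypothetical placeholders, not One AI’s documented contract.

```python
import requests

# Hypothetical endpoint, header, and payload -- placeholders only; consult
# One AI's actual API documentation for the real contract.
API_URL = "https://api.example.com/nlp/pipeline"
API_KEY = "YOUR_API_KEY"

payload = {
    "input": "The delivery was late and the package arrived damaged.",
    # Chain pre-built capabilities, e.g. sentiment analysis then a summary.
    "steps": [{"skill": "sentiment"}, {"skill": "summarize"}],
}
response = requests.post(API_URL, json=payload,
                         headers={"api-key": API_KEY}, timeout=30)
print(response.json())
```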
Co-founder of One AI, Yochai Levi, told TechCrunch, “We believe that the technology is nearing its maturity point, and after building NLP from scratch several times in the past, we decided it was time to productize it and make it available for every developer.”
He further added that despite the market’s rapid growth, advanced NLP is still mostly employed by experts, Big Tech, and governments. Levi believes the answer is One AI’s platform, a collection of NLP models trained for specific business use cases.
One AI’s pre-built neural networks can organize documents by subject, extract customer sentiment from help-desk tickets, and generate text summaries, and developers can use them alone or in combination.
One AI currently employs 22 people, including ten natural language processing researchers, and as per reports, the company plans to hire new employees after this funding round.