Facebook parent Meta Platforms will freeze hiring and restructure amid an uncertain macroeconomic environment, Chief Executive Mark Zuckerberg told employees. Several tech companies have recently been forced to slash headcount as advertisers trim spending ahead of a looming recession.
During a weekly Q&A session, Zuckerberg said, “I had hoped the economy would have more clearly stabilized by now, but from what we are seeing, it does not yet seem like it has, so we want to plan somewhat conservatively.”
Meta has cut its plans to hire engineers by at least 30% this year. Zuckerberg reportedly also said that Meta would reduce budgets across most teams and that individual teams would have to decide how to handle headcount changes.
Zuckerberg’s warning of a possible reduction in headcount over the next year dates back to the second-quarter earnings call. Meta had signaled hiring freezes in broad terms in May, but exact figures have not previously been reported.
In July, Zuckerberg warned that Meta would steadily reduce headcount growth and that many teams would shrink so the company could shift energy to other areas. Priority areas include Reels, Meta’s TikTok competitor, and Zuckerberg’s metaverse. Meta had over 83,500 employees as of June 30 and added 5,700 new hires in the second quarter.
OpenAI’s DALL-E is now available to everyone. The company has removed the waitlist, opening access to its text-to-image generator DALL-E 2. OpenAI unveiled DALL-E, a multimodal generative neural model, in 2021, showing that natural-language prompts could instruct a large neural network to produce high-quality images and perform a range of other generation tasks.
OpenAI has been hesitant to make DALL-E available to the general public. Text-to-image systems can produce photorealistic and nonconsensual imagery, which could be harmful: such images may be used for harassment, propaganda, disinformation, and similar abuse.
Bias is another problem. Because text-to-image models are trained on massive databases of photos scraped from the internet, they reproduce society’s inequities. To counteract these effects, OpenAI has taken various steps, such as removing sexual and violent imagery from its training data and refusing to generate images from similarly graphic prompts.
OpenAI said, “In the past months, we have made our filters more robust at rejecting attempts to generate sexual, violent, and other content that violates our content policy, and building new detection and response techniques to stop misuse.”
Anyone who registers for DALL-E access will receive 50 free credits, followed by 15 additional credits each month. A single credit covers one image generation, one edit of an existing image, or one “inpainting” or “outpainting” request.
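For developers, OpenAI also offers DALL-E programmatically through its Images API, rolled out separately from the consumer app. Below is a minimal, hedged sketch using the 0.x-era openai Python client; the prompt text, file names, and sizes are illustrative assumptions rather than anything specific to this announcement.

```python
import os
import openai  # pip install openai (0.x-era client shown here)

# Hedged sketch of generating and editing images through OpenAI's Images API.
# Prompt text and file names are illustrative assumptions.
openai.api_key = os.environ["OPENAI_API_KEY"]

# One generation request -- the kind of operation a single credit covers.
generation = openai.Image.create(
    prompt="an astronaut lounging in a tropical resort, digital art",
    n=1,
    size="1024x1024",
)
print(generation["data"][0]["url"])

# "Inpainting": edit only the region of an existing image defined by a mask.
with open("original.png", "rb") as image, open("mask.png", "rb") as mask:
    edit = openai.Image.create_edit(
        image=image,
        mask=mask,
        prompt="replace the masked area with a small sailboat",
        n=1,
        size="1024x1024",
    )
print(edit["data"][0]["url"])
```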
Cassie the robot en route to a Guinness World Record
By sprinting 100 meters in 24.73 seconds, the bipedal robot Cassie set a Guinness World Record. Cassie, a bipedal robot with ostrich-like knees, is the first to use machine learning to control a running gait on outdoor terrain. It joins a group of running bipedal robots that includes Boston Dynamics’ Atlas humanoid and Mabel, described as the world’s fastest knee-equipped bipedal robot.
Cassie was created by Agility Robotics, an Oregon State University spin-off, and debuted in 2017 as a platform for robotics research. The robot has advanced substantially since then, completing a 5-km jog in just over 53 minutes on a single charge in 2021. It could have finished that run faster: roughly six and a half minutes were lost to troubleshooting after the robot overheated and after it lost one of its legs during a turn.
While it cannot match the lightning speed of the world’s finest athletes, Cassie is an impressive feat of robotics and engineering, particularly given that it operates without cameras or external sensors.
The record run took place on May 11, 2022, but Oregon State University only recently released the footage and an update. Cassie ran the historic 100-meter sprint at Oregon State University’s Whyte Track and Field Center. The robot started from a standing posture and finished in the same position without tripping, a requirement for the World Record. To do this, Cassie had to use two neural networks, one for running fast and another for standing still, and transition between them seamlessly.
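The dual-network setup can be pictured as two learned controllers with a blend between them. The following is a minimal sketch of that idea, with invented network sizes, weights, and blending rule; it is not Cassie’s actual control stack.

```python
import numpy as np

# Hypothetical sketch of the dual-policy idea: one learned controller for
# sprinting, another for standing, and a smooth blend between them.
# Dimensions, weights, and the blending rule are illustrative assumptions.

class MLPPolicy:
    """Tiny feed-forward policy mapping robot state to joint targets."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(obs_dim, hidden))
        self.w2 = rng.normal(scale=0.1, size=(hidden, act_dim))

    def act(self, obs: np.ndarray) -> np.ndarray:
        return np.tanh(np.tanh(obs @ self.w1) @ self.w2)

run_policy = MLPPolicy(obs_dim=40, act_dim=10, seed=1)    # "trained" to sprint
stand_policy = MLPPolicy(obs_dim=40, act_dim=10, seed=2)  # "trained" to stand still

def controller(obs: np.ndarray, phase: float) -> np.ndarray:
    """Blend the two policies; `phase` ramps 0 -> 1 at the start of the run
    and back down near the finish so the handoff stays smooth."""
    return phase * run_policy.act(obs) + (1.0 - phase) * stand_policy.act(obs)

# Example: halfway through the acceleration phase.
joint_targets = controller(np.zeros(40), phase=0.5)
```

Ramping the blend weight rather than switching abruptly is one simple way to keep the transition smooth, which is the property the record attempt required.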
Cassie was created by Oregon State robotics professor Jonathan Hurst with the help of a 16-month, $1 million grant from the Defense Advanced Research Projects Agency, or DARPA. Since Cassie’s debut in 2017, students at Oregon State have been investigating machine learning alternatives in the Dynamic Robotics and AI Lab with assistance from artificial intelligence professor Alan Fern. They were funded by the National Science Foundation and the DARPA Machine Common Sense initiative.
According to Fern, the Dynamic Robotics and AI Lab combines physics with AI techniques more frequently applied to data and simulation to produce innovative results in robot control. Its students and researchers come from a variety of academic disciplines, including mechanical engineering, robotics, and computer science.
According to the Oregon State University blog, Cassie trained for the equivalent of a year in a simulation environment, compressed into a single week through parallelization, a computing approach that runs many processes and calculations at once and let Cassie work through a variety of training scenarios concurrently.
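The parallelization idea, many simulated rollouts running at once so long stretches of simulated practice fit into far less wall-clock time, can be sketched as follows. The toy dynamics, rewards, and process counts are illustrative assumptions, not the lab’s actual training setup.

```python
import multiprocessing as mp
import numpy as np

# Illustrative sketch of parallelized simulation training: many rollouts run
# concurrently. The "simulation" below is a stand-in, not Cassie's physics.

def rollout(seed: int) -> float:
    """Run one simulated training episode and return its total reward."""
    rng = np.random.default_rng(seed)
    state, total_reward = np.zeros(4), 0.0
    for _ in range(1000):                       # simulated control steps
        action = rng.normal(size=4)             # placeholder policy
        state = 0.99 * state + 0.01 * action    # toy dynamics
        total_reward += -np.linalg.norm(state)  # toy reward
    return total_reward

if __name__ == "__main__":
    with mp.Pool(processes=8) as pool:          # 8 concurrent simulations
        rewards = pool.map(rollout, range(64))  # 64 episodes in parallel batches
    print(f"mean episode reward: {np.mean(rewards):.2f}")
```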
In April, we shared the news that Chipotle was testing Chippy, a tortilla-chip-making robot, at its Cultivate Center, an innovation hub in Irvine, California. Now, Chipotle Mexican Grill has announced it will test Chippy in a restaurant in Fountain Valley, California, next month.
Companies and restaurants are testing robotics and automation to speed up operations and free up staff from mundane jobs. Currently, Chipotle employees need to cook and season the chips, which takes time.
Chipotle is using its “stage-gate approach,” as it does with the rest of its new technology and menu items, to test and learn from staff and consumers before deciding how to roll out the technology nationwide. The restaurant chain revealed on Tuesday that the robot would make its debut at a location in Fountain Valley, Orange County.
Chippy can prepare a serving of tortilla chips in 50 seconds; it may take a human two minutes to do the work. A team member puts chips into a hopper, and an automated arm dispenses 8 or 9 ounces of them into a frying basket. After cooking them, Chippy transfers them to a bowl, adds salt and freshly squeezed lime juice, and then pours the mixture into a pan for bagging.
Chipotle is not the first food chain to turn to robotics and automation to ease manual labor woes. Starbucks has adopted new technologies for brewing drip coffee, serving food, and making cold coffee beverages more quickly. Elsewhere, Panera Bread, McDonald’s, White Castle, and Buffalo Wild Wings are all experimenting with automated drive-thru ordering to shorten wait times.
Besides Chippy, Chipotle is testing a cook-to-needs kitchen management system in some of its Southern California locations. The system produces demand-based cooking and ingredient-preparation estimates to maximize throughput and freshness while reducing food waste. Using machine learning, it automatically generates real-time production plans for each restaurant, monitors ingredient levels, and notifies the crew how much prep work is needed before cooking can begin.
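Conceptually, such a cook-to-needs system boils down to forecasting near-term demand, comparing it with what is on hand, and telling the crew what to prep. A minimal sketch of that loop follows; the ingredients, quantities, and forecasting rule are invented for illustration and are not Chipotle’s or PreciTaste’s actual logic.

```python
from dataclasses import dataclass

# Hypothetical sketch of a demand-based prep planner: forecast near-term
# demand, compare it with ingredient levels, and report the shortfall to prep.
# All numbers and names are illustrative assumptions.

@dataclass
class Ingredient:
    name: str
    on_hand_lbs: float
    lbs_per_order: float

def forecast_orders(recent_hourly_orders: list[int]) -> float:
    """Naive forecast: weighted average favoring the most recent hours."""
    weights = range(1, len(recent_hourly_orders) + 1)
    return sum(w * o for w, o in zip(weights, recent_hourly_orders)) / sum(weights)

def prep_plan(ingredients: list[Ingredient], recent_hourly_orders: list[int],
              horizon_hours: float = 2.0) -> dict[str, float]:
    """Return extra pounds of each ingredient to prep for the planning horizon."""
    expected_orders = forecast_orders(recent_hourly_orders) * horizon_hours
    plan = {}
    for ing in ingredients:
        shortfall = max(0.0, expected_orders * ing.lbs_per_order - ing.on_hand_lbs)
        if shortfall > 0:
            plan[ing.name] = round(shortfall, 1)
    return plan

stock = [Ingredient("chicken", 30.0, 0.3), Ingredient("guacamole", 5.0, 0.1)]
print(prep_plan(stock, recent_hourly_orders=[40, 55, 70]))  # crew notification
```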
Chipotle’s new PreciTaste-powered kitchen management system is being trialed at eight locations in Orange County, California. According to Chipotle, preliminary findings show that the pilot is successfully optimizing kitchen operations for staff while ensuring a steady supply of fresh ingredients for customers.
Chippy was created by Miso Robotics, a California-based firm that previously built Flippy, a kitchen bot that could prepare 300 burgers a day and, in a later iteration called Flippy Lite, expanded to frying fries. Unlike human workers, Flippy and Chippy never need breaks and never complain about their working conditions. Other Miso Robotics products include Sippy, an automated beverage dispenser, and CookRight, a coffee monitoring system that Panera Bread is testing.
Intel has announced its plans to start selling graphics chips for video gamers next month. It aims to get a piece of a lucrative market dominated by competitors Nvidia and Advanced Micro Devices (AMD).
Intel dominates the market for the processors at the computational heart of personal computers, but it has long ceded video gaming graphics chips to Nvidia and AMD. On Tuesday, Intel Chief Executive Pat Gelsinger signaled the company would re-enter that field, announcing a graphics card for gamers slated to be available on October 12.
Intel is pitching its new cards to gamers tired of paying sky-high prices for the fastest, most advanced graphics chips.
Intel’s cards will start at $329; Nvidia last week unveiled a new generation of graphics processors priced at up to $1,599. Intel’s parts are not expected to challenge those high-end cards on performance, instead aiming to be price-competitive with older-generation chips.
The graphics-chip market has grown rapidly in recent years, driven by a surge in video gaming during the pandemic and because those chips have proved adept at the artificial intelligence calculations now gaining wider adoption.
Nvidia’s sales of video gaming chips were around $2 billion in its latest quarter, while AMD made $1.7 billion in its video gaming segment. The market has cooled amid economic headwinds and shakiness in consumer spending in recent months.
Telefonica, one of the biggest telecom companies in Spain and Europe, has partnered with chipmaker Qualcomm on metaverse-related initiatives. Under the agreement, Telefonica’s telecom infrastructure will serve as a platform for deploying experiences created with Qualcomm technology.
The technology, Snapdragon Spaces, is a full-stack developer platform that lets designers focus on creating these experiences, especially for augmented reality headsets. Spaces is also device-independent, so metaverse experiences built with it can run on any headset on the market, including the Meta Quest line of devices.
Telefonica will include this technology in initiatives to be developed via its Metaverse Hub, a location dedicated to metaverse, Web3, and augmented reality initiatives.
Daniel Hernandez, Telefonica’s VP of devices & consumer IoT, said that extended reality (XR) would bring a new dimension to the real and virtual world, allowing people to communicate, do business, socialize, and be entertained in new ways. He added that they are preparing for the future, building the infrastructure, evolving services, establishing partnerships, and upgrading equipment that will allow them to bring innovative new devices and services to customers.
The interest of the two companies in the metaverse and augmented reality technology is not new, as they have previously invested and established partnerships in the area. Qualcomm CEO Cristiano Amon, while giving his take on the metaverse in May, said it would be a massive opportunity for the companies involved. The company recently signed a deal with Meta to develop metaverse-specific silicon for Meta’s next line of headsets.
Telefonica has also been involved in metaverse projects of its own. The company invested an undisclosed amount in Gamium, a Spanish open-world metaverse, through Wayra, its open innovation platform.
Meta, Facebook’s parent company, has rolled out parental supervision tools and a ‘Family Center’ program on its social media application Instagram to keep young users safe while letting parents manage the time their children spend on the app.
Meta has been working closely with parents and guardians and has been stressing the need to educate them about digital services and the safety aspects involved, a company official said.
“Over the years, Meta introduced many age-appropriate features and resources that have enhanced the experience of young people on Instagram,” said Natasha Jog, Head of Public Policy at Facebook India (Meta).
According to her, the latest supervision tools let parents track the time their wards spend on Instagram and see which accounts they follow.
Parents and guardians would also be informed when their children raise a complaint on Instagram. “With the launch of these supervision tools, Meta is trying to strike a balance between young people’s need for some autonomy while using Instagram and also allow supervision in a way that supports conversations,” Jog said.
A massive fire broke out at Tesla’s Gigafactory Berlin on Monday. According to local publications, the blaze started when a large pile of cardboard and wood caught fire; around 800 cubic meters of paper, cardboard, and wood went up in flames.
According to a report by Electrek, the fire started around 3:00 am CEST on Monday, roughly 6:30 am Indian Standard Time. It reportedly first broke out near a recycling area in the northeastern portion of the factory.
The report stated that more than 50 firefighters, including 12 from Tesla’s own firefighting brigade, struggled to extinguish the blaze. It took them hours to bring the fire fully under control. No injuries or casualties have been reported.
The citizens’ initiative Grünheide (BI), which opposes the Tesla project on environmental grounds, demanded after the incident that production at Gigafactory Berlin be halted. A few months ago, the same group called for Tesla’s production permit to be revoked after a paint leak at the factory.
This is not the first time Tesla has faced issues with burning cardboard. Its Fremont, California facility earlier reported a fire in cardboard pallets in a workplace parking lot.
Tesla was also fined by the EPA in 2019 for failing to properly dispose of highly flammable paints and solvent mixtures. The Fremont Fire Department also ordered the company to equip the facility with modern emergency response tools.
A new UNESCO report on artificial intelligence in education asks the government to treat the ethics of AI in education as an utmost priority and to develop effective public-private partnerships so that all students and teachers have access to the latest technology.
The 10-point recommendation by the UNESCO New Delhi Office also wants technology service providers to place data ownership with the students. A statement said that the State of the Education Report introduces AI in order to demystify a subject surrounded by misconceptions, and provides an overview of the opportunities and challenges in the Indian education sector that AI can address.
To align India’s educational curriculum with the 21st century and prepare students for an artificial intelligence economy, India’s National Education Policy (NEP) 2020 places immense emphasis on imparting the required technical knowledge at all levels of education. It highlights the integration of AI into education systems to promote quality and skill-based education. With this report, UNESCO offers a glimpse of varied suggestions and dimensions for future uses of AI in the school setting.
The UN agency said that the publication is expected to serve as a helpful reference tool for enhancing and influencing programs and policies related to technologies such as Artificial Intelligence.
On Monday, September 26, AI startup Hugging Face and ServiceNow Research, ServiceNow’s R&D arm, introduced BigCode, an ambitious new project that seeks to develop “state-of-the-art” AI systems for code in an open and responsible manner. The main objective is to eventually build a dataset for training code-generating systems, which will then be used to train a 15-billion-parameter prototype model on ServiceNow’s internal graphics card cluster.
Experts note that DeepMind’s AlphaCode, Amazon’s CodeWhisperer, and OpenAI’s Codex, which powers GitHub’s Copilot service, offer a fascinating preview of what AI is capable of today in computer programming. However, only a few of these systems have been made publicly accessible or open-sourced: Codex is available only through OpenAI’s paid API, and GitHub has started charging for access to Copilot, reflecting companies’ interest in the commercial opportunities of code-generating tools.
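As a rough illustration of what paid access to such a model looked like in practice, here is a hedged sketch of calling a Codex-family model through OpenAI’s completion API with the 0.x-era Python client; the model name and prompt are illustrative of that period, and availability has since changed.

```python
import os
import openai  # pip install openai (0.x-era client shown here)

# Hedged sketch of calling a hosted code-generation model through OpenAI's
# paid API, as Codex was offered at the time. Model name and prompt are
# illustrative; client interfaces and model availability have changed since.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="code-davinci-002",  # Codex-family model of that era
    prompt="# Python function that reverses a string\ndef reverse_string(s):",
    max_tokens=64,
    temperature=0,
)
print(response["choices"][0]["text"])
```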
Anyone with a background in professional AI research and the time to contribute is welcome to join BigCode, which was inspired by Hugging Face’s BigScience initiative and its BLOOM model, an effort to open-source highly sophisticated text-generating systems. The application form is now live.
BigCode is attempting to resolve some of the concerns that have come up regarding the use of AI-powered code generation, especially when it comes to fair use. It will achieve this by jointly developing a code-generating system that will be open-sourced under a license that will permit developers to reuse it subject to certain terms and conditions.
The initial objective of BigCode is to create a dataset of code gathered in the most ethically acceptable manner possible. In contrast to other efforts that simply scrape all of GitHub for code, BigCode’s developers promise to go to great lengths to ensure that only files from repositories with permissive licenses are included in the training dataset. They say they will develop “responsible” AI practices along the way for training and sharing code-generating systems of all kinds, and will seek input from relevant parties before announcing any policy changes.
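The license-filtering step can be pictured as a simple predicate applied to crawled files. The sketch below uses an invented metadata format and license list purely for illustration; it is not BigCode’s actual pipeline.

```python
# Hypothetical sketch of permissive-license filtering as described above.
# The license identifiers and metadata format are illustrative assumptions.

PERMISSIVE_LICENSES = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause", "isc"}

def keep_file(file_record: dict) -> bool:
    """Keep a source file only if its repository carries a permissive license."""
    license_id = (file_record.get("repo_license") or "").lower()
    return license_id in PERMISSIVE_LICENSES

crawled_files = [
    {"path": "src/app.py", "repo_license": "MIT", "content": "print('hi')"},
    {"path": "core/gpl_code.c", "repo_license": "GPL-3.0", "content": "..."},
    {"path": "lib/utils.js", "repo_license": None, "content": "..."},
]

training_set = [f for f in crawled_files if keep_file(f)]
print([f["path"] for f in training_set])  # only the permissively licensed file survives
```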
Hugging Face and ServiceNow could not specify a date for when the project will be finished. However, over the coming months they anticipate researching a range of code-generation technologies, including auto-completion and code-synthesis systems that operate across many domains, tasks, and programming languages. They added that the prototype model built on the dataset would be smaller than AlphaCode, which has approximately 41.4 billion parameters, but larger than Codex, which has 12 billion.