FTX, the crisis-struck cryptocurrency exchange, has filed for US bankruptcy protection after witnessing the digital currency equivalent of a bank run on Friday.
A week of rumors about the platform’s financial troubles sent Bitcoin to a two-year low, and the currency was trading at $16,861 (€16,256) by Friday evening.
Following the Chapter 11 filings, FTX US and FTX.com said they had initiated precautionary steps to move all digital assets to cold storage, and that the process was expedited that evening to mitigate damage after unauthorized transactions were observed.
FTX, its affiliated crypto trading firm Alameda Research, and about 130 of its other companies have begun voluntary Chapter 11 bankruptcy proceedings in Delaware.
Chapter 11 is a US mechanism that lets a company restructure its debts under court supervision while continuing to operate. In its bankruptcy petition, FTX Trading said it has $10 billion to $50 billion in liabilities, $10 billion to $50 billion in assets, and more than 100,000 creditors.
The platform’s founder and CEO, Sam Bankman-Fried, resigned after failing to raise billions to stave off collapse as traders hastened to withdraw $6 billion in just 72 hours.
A restructuring expert, John J. Ray III, has been appointed FTX’s new CEO. Reuters cited sources saying FTX struggled to raise about $9.4 billion from rivals and investors to save itself after customer withdrawals.
Apple is planning to release Apple Reality Pro, its mixed reality (MR) headset, in March 2023. The company will collaborate with Pegatron, a Taiwanese electronics manufacturer, to push the headset into mass production starting in 2023. The headset is rumored to be a somewhat limited release, available only to commercial clients, given the premium price Apple products carry.
The MR headset will likely resemble a pair of ski goggles. It is expected to be more compact and lighter than Meta’s Quest Pro VR headset (which weighs 722 g) and to be made of “mesh textiles, aluminum, and glass.” The Apple headset can also scan irises, a feature the Quest Pro does not offer, letting users log in to their accounts as soon as they put the headset on.
Additionally, Apple Reality Pro will have 14 cameras, two of which can track the wearer’s leg positions to portray them accurately in the virtual world. In contrast, none of the ten cameras on Meta’s Quest Pro can track leg positions.
The headset offers 4K resolution per eye and runs realityOS (rOS), an offshoot of iOS. Its processor is modeled on the M2 chip Apple recently unveiled for Macs and iPads.
The Quest 2 cost US$399 and sold 15 million units, but the more recent Quest Pro, priced at US$1,499, has not gained the same momentum in the market. Whatever the Apple headset costs, it will likely cater to a small potential market.
On Sunday, Karnataka minister Ashwath Narayan said the new theme park near the Kempegowda statue would offer a metaverse experience for visiting tourists.
The announcement came in response to a Twitter user’s query about the Karnataka government’s efforts to boost tourism, along the lines of Gujarat’s Statue of Unity initiative. Narayan replied that the government is constructing a theme park at the Statue of Prosperity site that depicts Kempegowda’s vision of Bengaluru. The theme park would have cultural symbols, museums, tiny lakes, and metaverse experiences to depict the state’s rich heritage and tech prowess, the tweet added.
Definitely, yes! 👍🏻
The Govt is also constructing a theme park at the #StatueOfProsperity location that reflects Kempegowda’s idea of Bengaluru. The theme park would have tiny lakes, cultural symbols, museums, and Metaverse experiences to depict rich heritage & tech prowess. https://t.co/AMF3gcDgf0
The Karnataka minister shared the blueprint of the theme park, which, according to him, will attract tourists to the airport. “The theme park will have sacred soil and water which will be collected from various villages of Karnataka, including 31 districts. This park in front of the Kempegowda statue will serve as a major attraction for all tourists who visit Bengaluru,” he said.
Prime Minister Narendra Modi unveiled the statue of Kempegowda, the ruler known as the fortifier of Bengaluru, on Friday. The statue is also known as the ‘Statue of Prosperity.’ The prime minister also unveiled the new terminal 2 at the Kempegowda International Airport.
The new terminal is expected to start operations by December, serving approximately 25 million passengers annually.
Palmer Luckey, the founder of Oculus (whose Quest headsets are now branded Meta Quest), has created a VR headset that kills its user in real life if they die in a video game. Luckey wrote in a blog post that he has so far built half of a true NerveGear: the part that kills the player.
Luckey referenced the fictional SAO (Sword Art Online) incident, in which thousands of gamers were trapped inside a death game escapable only by completing it. If a player’s hit points drop to zero, the headset bombards the player with powerful microwaves, likely killing them.
In SAO, players wear NerveGear, a VR head-mounted display that transports the user’s mind into the game. SAO’s success sparked massive enthusiasm for Oculus by making VR feel more grounded and believable, and fans have been drawn to the idea of NerveGear ever since.
NerveGear perfectly recreates reality through a neural interface, tying real life to the virtual avatar; on top of that, pumped-up graphics make the gaming experience feel more natural. Luckey said he used three explosive charge modules tied to a narrow-band photosensor that detects when the screen flashes red at a specific frequency, signaling that the game is over.
He said, “When an appropriate game-over screen is displayed, the charges fire, instantly destroying the user’s brain.” Luckey also expressed his plan to incorporate an anti-tamper mechanism that will make it impossible to remove or destroy the headset.
MIT researchers recently reported that models trained on synthetic data can deliver real performance improvements over traditionally trained models. Training on synthetic data can also sidestep some of the privacy, copyright, and ethical issues of using real data.
Researchers teach machines to identify human actions using massive video datasets showing humans performing actions. However, building such datasets is expensive, and it can violate privacy by capturing personal information such as people’s faces and license plate numbers. To avoid these issues, researchers have started using synthetic datasets.
Synthetic datasets are artificially generated rather than captured from real-world events; they are produced by machines using 3D models of scenes, objects, and humans. MIT researchers built a synthetic dataset of 150,000 videos capturing various human actions and used it to train machine learning models. They then applied those models to six real-world video datasets to see how well they recognized human actions.
The researchers found that the synthetic-trained models performed better than those trained on the real datasets. This result suggests that pretraining on synthetic data can help machine learning models reach higher accuracy.
Rogerio Feris, a principal scientist and manager at the MIT-IBM Watson AI Lab and a co-author of the research, said the goal was to replace real-data pretraining with synthetic-data pretraining: although creating the actions in synthetic data has an upfront cost, once that is done you can generate unlimited images or videos by changing poses, lighting, and more.
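The core idea — generate unlimited cheap, labeled synthetic samples, train on them, then evaluate on real data — can be sketched in a few lines. The toy generator and nearest-centroid classifier below are purely illustrative stand-ins, not the researchers’ actual video models or datasets.

```python
import random

random.seed(0)

# Toy "synthetic data generator": each action class is a cluster of
# 2-D feature vectors around a known centre. In the real work these
# would be rendered videos of 3-D human models performing actions.
CENTRES = {"wave": (0.0, 0.0), "jump": (5.0, 5.0), "run": (0.0, 5.0)}

def synth_sample(label, noise=1.0):
    cx, cy = CENTRES[label]
    return (cx + random.gauss(0, noise), cy + random.gauss(0, noise)), label

# 1) Generate as much synthetic training data as we like -- cheap once
#    the generator exists, with no privacy or copyright concerns.
train = [synth_sample(lbl) for lbl in CENTRES for _ in range(200)]

# 2) "Train" a nearest-centroid classifier on the synthetic set.
centroids = {}
for lbl in CENTRES:
    pts = [x for x, l in train if l == lbl]
    centroids[lbl] = (sum(p[0] for p in pts) / len(pts),
                      sum(p[1] for p in pts) / len(pts))

def predict(x):
    return min(centroids, key=lambda l: (x[0] - centroids[l][0]) ** 2
                                        + (x[1] - centroids[l][1]) ** 2)

# 3) Evaluate on "real" data drawn from a slightly shifted, noisier
#    distribution, standing in for real-world videos.
real = [synth_sample(lbl, noise=1.5) for lbl in CENTRES for _ in range(100)]
accuracy = sum(predict(x) == l for x, l in real) / len(real)
print(f"accuracy on real-like data: {accuracy:.2f}")
```

The sketch also shows why the approach is attractive: once the generator exists, step 1 can produce arbitrarily many labeled samples at no extra annotation cost.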
Artificial intelligence-based frameworks and techniques aim to mimic humans and accomplish tasks on their behalf far more efficiently. Along similar lines, researchers have been trying to simulate focal adjustment in computer vision, an AI subfield that enables computers to analyze digital images the way human eyes pick out coarse objects in their surroundings. The area is challenging because modeling all the fine-grained details of visual input, and then adjusting focal points accordingly, is tedious.
Researchers from Microsoft have pioneered a new architecture, FocalNets (Focal Modulation Networks) — neural networks with focal modulation — to build better computer vision systems. Computer vision has advanced significantly with the help of transformers, specifically vision transformers, whose self-attention (SA) mechanism makes them highly effective for vision tasks by letting each token quickly gather the information it needs from the others.
However, self-attention operates only over a fixed set of prepared tokens with a particular scope and granularity. Moreover, the mechanism’s quadratic complexity in the number of tokens raises efficiency concerns, especially for high-resolution inputs.
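The quadratic cost is easy to see with concrete numbers: with n tokens, self-attention compares every token with every other, so the attention matrix has n² entries. The patch sizes below are illustrative, not taken from the paper:

```python
def attention_entries(image_side, patch_side):
    """Tokens and self-attention-matrix entries when an
    image_side x image_side image is split into patch_side x patch_side
    patches (one token per patch)."""
    tokens = (image_side // patch_side) ** 2
    return tokens, tokens ** 2

# A 224x224 image with 16x16 patches -> 14*14 = 196 tokens.
print(attention_entries(224, 16))   # (196, 38416)

# Quadrupling the side length to 896x896 multiplies the token count
# by 16 -- but the attention matrix by 256.
print(attention_entries(896, 16))   # (3136, 9834496)
```

This is why high-resolution dense prediction is exactly where an attention-free alternative such as focal modulation pays off.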
While developing FocalNets, researchers have entirely replaced the self-attention mechanism with a module for focal modulation inspired by focal attention, a technique that aggregates coarse-grained visuals at multiple levels.
Focal modulation is a straightforward element-wise multiplication mechanism that enables modulator-based interaction of the model and the input. The modulator is derived using a two-step focused aggregation procedure. The first step, called focal contextualization, pulls contexts at different granularities from local to global scales. The second uses gated aggregation to pack the modulator with all context features at different granularity levels.
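In spirit, the two steps can be written down very compactly. The 1-D toy below is our own simplified illustration, not the paper’s implementation (FocalNets use learned depth-wise convolutions, linear projections, and learned gates; here the windows and gates are fixed constants):

```python
def local_avg(x, radius):
    """Focal contextualization: average each position's neighbourhood
    of the given radius (larger radius = coarser, more global context)."""
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def focal_modulation(x, radii=(1, 2, 4), gates=(0.5, 0.3, 0.2)):
    """Toy 1-D focal modulation: build a modulator by gated aggregation
    of multi-scale contexts, then modulate the input element-wise."""
    # Step 1: contexts at different granularities, local to global.
    contexts = [local_avg(x, r) for r in radii]
    # Step 2: gated aggregation packs all levels into one modulator.
    modulator = [sum(g * c[i] for g, c in zip(gates, contexts))
                 for i in range(len(x))]
    # Element-wise multiplication of input and modulator.
    return [xi * mi for xi, mi in zip(x, modulator)]

signal = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
print(focal_modulation(signal))
```

Note the contrast with self-attention: each output position depends only on aggregated summaries of its surroundings, not on pairwise comparisons with every other position, which is where the efficiency gain comes from.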
The model exhibits a dynamic, interpretable learning behavior when finding and identifying objects in photos and videos. Without any dense supervision, it learned to distinguish objects and adjust the focus in a manner consistent with human annotation.
Researchers also experimented with FocalNet using some of the traditional techniques applied to vision transformers. They used overlapped patch embeddings to downsample the input and observed an improvement in the model regardless of its size. They also tried making FocalNets deeper but thinner; these alterations led to smaller models but significantly higher runtime, since they increased the number of sequential blocks. The idea was to see how flexible FocalNets can be and how they relate to other architectural designs, such as PoolFormer and depth-wise convolution.
FocalNets were tested on standard tasks like ImageNet classification, and their performance was evaluated against other vision networks like Vision Transformers (ViT), ConvNeXT, and Swin Transformers. The findings show that FocalNet consistently outperformed the others. Its attention-free architecture of targeted modulation considerably enhanced dense visual prediction with high-resolution picture input.
The research paper aims to educate the community on newer computer vision systems that can work accurately even with high-resolution visual inputs using FocalNets. The only concern is that the model might be biased toward its training data, since it is trained on massive web-crawled image collections, and at that scale any bias in the data may be amplified. Nevertheless, FocalNets are a significant step forward in developing computer vision applications. The researchers plan to undertake a more comprehensive study to analyze whether focal modulation can be extended to other domains, such as natural language processing (NLP).
FIFA has announced the launch of the FIFA AI Metaverse League, a game playable during the FIFA World Cup Qatar 2022, in partnership with Altered State Machine. The novel “smart football” game uses AI-based characters to give players a metaverse experience.
With the global metaverse market expected to approach $1 trillion by 2030, even sports governing bodies are becoming part of it. Aaron McDonald, co-founder of Altered State Machine, said the launch would help casual gamers transition to metaverse gaming. He added, “We are honored to be building the first Web3 game with FIFA. We look forward to bringing the world’s most popular sport into the metaverse.”
Players can unleash a team of AI-powered footballers on the streets of the Metaverse in this ground-breaking 4-v-4 football game, set in stylized playing arenas of well-known locations where street football is played worldwide. The game offers a prediction challenge where players have to put their football knowledge to the test and compete against others to earn prediction points.
Players can score “epic rewards” like Vivo smartphones, Adidas gift cards, gaming setups, Adidas AL RIHLA balls, and much more. Additionally, players can create their own customized and collectible street football team with which they can play, train, and trade.
The FIFA AI Metaverse League will soon be available as a mobile app with multiple modes: a ‘League’ mode offering tactical play in a cutting-edge football-manager experience, and a ‘Mayhem Mini’ mode where players can casually play alongside AI characters.
Swiss luxury watchmaker Rolex is set to enter the metaverse. The company recently filed a trademark application covering non-fungible tokens (NFTs), cryptocurrencies, and virtual goods.
The details of the trademark application filed by Rolex with the United States Patent and Trademark Office (USPTO) were shared in a tweet by trademark and patent attorney Michael Kondoudis. The tweet indicates that the watchmaker has extensive plans for its luxury brand in the metaverse.
According to the tweet, the trademark application (serial number 97655284) was filed on October 31. It covers non-fungible tokens as well as the exchange and transfer of virtual currencies.
Luxury watchmaker #ROLEX has filed a trademark application claiming plans for:
Along with conducting virtual goods auctions for digital collectibles like art and watches, the company is also looking forward to setting up online spaces for sellers and buyers of virtual products such as watch parts.
Rolex intends to firmly establish its brand in the metaverse by creating NFTs, NFT marketplaces, and NFT-backed media. The luxury watchmaker also intends to market its brand via product placements in online games as part of its virtual expansion.
Zoom is working with Tesla to incorporate a video conferencing feature into its new range of electric vehicles. Nitasha Walia, Zoom’s group product manager, said during the company’s Zoomtopia 2022 event that video conferencing would come standard in all new Tesla models very soon.
“You’ve been zooming from your home, office, phone, and even your TV. We’re going to make it even easier for you to make zoom calls from anywhere,” Walia said during the event.
Zoom has also announced various new features at the event. With Calendar Clients and Zoom Mail, users will no longer need to leave the Zoom platform in order to access their email and calendar.
Popular calendar and email services will be integrated directly into Zoom. That way, users can quickly access their scheduling and communications to get their work done more efficiently.
“Zoom Mail, Calendar Clients, and Calendar Services will be available in beta only in Canada and the US at launch,” said the company.
Coming in early 2023, Zoom Spots is a persistent, video-enabled space integrated into the Zoom platform, designed to foster inclusive discussions and keep colleagues connected.
MTV Hustle 2.0, the second season of India’s leading rap show, has launched the first Indian AI rapper, BotHard, in collaboration with the DDB Mudra Group. The rapper uses artificial intelligence and rhyming sensibilities to generate hip-hop music.
With hip-hop’s growing popularity, musicians are turning to AI to generate synthetic raps. FN Meka, an AI rapper, was created with the same aim, but the project failed miserably after backlash over raps that were offensive to the Black community.
Hustle 2.0 recently concluded by crowning MC Square as the season’s winner and launching BotHard, the first Indian AI rapper. The name BotHard stems from the Hindi colloquial phrase “Bohot Hard,” commonly used in raps. BotHard was built using GPT-3, OpenAI’s popular language model that can code, generate video games, write emails, and now, apparently, rap.
Rahul Mathew, Chief Creative Officer at DDB Mudra Group, said, “While this idea is built on cutting-edge technology, the most exciting part for us was how it had to stay true to hip-hop culture for it to be accepted by the audience.”
EPR, the rap sensation of Hustle 2.0, challenged BotHard to a rap battle. In his opinion, such battles will show how precise and intuitive human rappers are, something artificial intelligence has yet to match.
In the video, BotHard can be seen dissing a human rapper. Evidently, AI rappers still have a long way to go before they can match human intent and humility.