ftNFT uses a fraction of its stores to sell genuine NFT art products
SoftConstruct, a technology developer and service provider, has announced the opening of two physical non-fungible token (NFT) art stores (ftNFT) in Dubai, United Arab Emirates. The two stores have been built thanks to SoftConstruct-powered web3 facilitator Fastex Ecosystem, in what the company calls “a visionary concept” that it hopes will bring the digital and physical worlds together.
Both ftNFT locations were chosen for their strategic placement in the bustling Dubai Mall and the Mall of the Emirates. The stores will offer limited-edition goods along with other items created by artists. Visitors may purchase authentic NFT artwork there, learn about Web3, experiment with cutting-edge technology, and speak with an on-site SoftConstruct specialist on NFTs.
This in-store experience has been designed to provide face-to-face support for customers from professionals specially trained and certified by SoftConstruct, who will help with discovery, engagement, and delivery.
According to the announcement, all NFTs offered for sale at the ftNFT store outlets will be art pieces rather than digital representations of value that might be exchanged, transferred, utilized as a payment or exchange mechanism, or used for investment reasons. Therefore, the NFTs won’t be exchanged or provided digitally through a virtual asset platform.
SoftConstruct is well-known for its persistent drive to attempt new things and let its ideas expand in new directions, as it has always done with its goods and trademarks. With 300+ partners and 16+ locations globally, SoftConstruct, home to 8+ businesses that provide cutting-edge IT solutions for various sectors, is once again changing the game.
At the IBM Quantum Summit 2022, IBM announced ten new products intended to help the global quantum computing ecosystem. The first announcement was the launch of a 433-qubit IBM quantum processor called Osprey. Qubits are the basic units of information in a quantum computer.
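A quick aside on what a qubit count measures: unlike a classical bit, a single qubit holds a superposition of the two basis states,

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad \alpha,\beta \in \mathbb{C},\quad |\alpha|^2 + |\beta|^2 = 1 .
```

An $n$-qubit register is described by $2^n$ complex amplitudes, so a 433-qubit device such as Osprey works in a state space of dimension $2^{433}$, far beyond what classical memory can enumerate.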
IBM Osprey is the most powerful quantum processor to date, with 433 qubits, more than three times as many as the 127-qubit Eagle, its previous flagship. Like Eagle, Osprey uses multilevel wiring to provide flexibility for signal routing and device layout, and it has integrated filters for reducing noise and improving stability.
The IBM director of research, Dario Gil, said, “IBM Osprey is a big processor, but next year IBM is planning to launch a new processor with over 1,000 qubits, and a 4,000-qubit processor in 2025. IBM is also launching Quantum System Two, a modular quantum computing system to scale to larger and larger systems over time.”
IBM’s Quantum System Two is designed to combine multiple processors into one system with communication links. The system is expected to be online by 2023 and will be a building block for a quantum-centric supercomputing environment. Besides Eagle and Osprey, IBM has 20 quantum processors worldwide, like Q5 Yorktown, Q14 Melbourne, Q20 Austin, Q50 prototype, Q53, and more.
Facebook’s parent company Meta announced yesterday that they would be firing more than 11,000 employees in a bid to reduce costs following disappointing earnings and a drop in revenue. The reductions, part of the first significant budget cut since the foundation of Facebook in 2004, depict a sharp slowdown in digital advertising revenue, an economy on the brink of recession, and Mark Zuckerberg’s heavy investment in a speculative virtual-reality push, the metaverse.
Between January 2019 and September 2022, Meta invested $36 billion in Reality Labs. Spending so much on an iteration of the Internet’s imagined future without any profitable results has led many to think that Zuckerberg squandered the capital. But it does make you wonder: what caused such a catastrophe?
What went wrong?
At the beginning of the pandemic, the world moved online rapidly, and the surge in e-commerce produced outsized revenue growth. “Many people predicted this would be a permanent acceleration that would continue even after the pandemic ended. I did too, so I made the decision to significantly increase our investments. Unfortunately, this did not play out the way I expected,” said Zuckerberg in his message to the employees. Not only has online commerce bounced back to prior trends, but the macroeconomic downturn, ads signal loss, and increased competition have caused Meta’s revenue to be much lower than he and his team had expected.
Secondly, Apple announced significant changes to its privacy policy last year to give users greater control of their data, adding opaqueness and complexity for Meta. Data collected from apps and website cookies has been the backbone of the digital economy, and restrictions on the same by Apple have dealt a massive blow to Meta’s revenue.
Meta has also failed to onboard early users for the metaverse and has fallen short of expectations. Before the layoffs were announced, Meta had been signaling cost cuts by shrinking the company’s real estate footprint and withdrawing some employee benefits.
According to the company’s most recent Q3 2022 financial report, Reality Labs’ revenue was $285 million, down 49% year over year, while its expenses were $4.0 billion, up 24%, primarily because of employee-related costs and technology development expenses, the firm said. The unit’s operating loss widened to $3.7 billion from $2.6 billion in the corresponding period of the previous fiscal year. Last week, shares of Meta plunged 24.5% as investors and analysts digested the company’s third-quarter earnings miss and weak fourth-quarter outlook. Shares closed at $97.94, the lowest price since 2016.
Meta’s social virtual reality platform, Horizon Worlds, has seen less than 200,000 monthly active users, far from Meta’s plan to add 500,000 monthly active users. The tech giant recently unveiled its latest VR headset, Meta Quest Pro, priced at $1,499. The company had previously released a few headsets, including the Quest 2. Zuckerberg acknowledged that the headset’s R&D and production costs were among the most significant contributors to Reality Labs’ losses.
Meta is investing billions of dollars to make the metaverse possible for the few people interested, and to some extent it has succeeded. Using an Oculus headset, one can already enter existing virtual worlds such as Discronia, The Room VR, Puzzling Places, and many more games and apps. However, until mass mainstream adoption arrives, these virtual worlds may be reduced to ghost towns.
Entrepreneur Krishnan Sundararajan, who is building Loka, an Indian metaverse, believes that Meta should have focused more on users, not just infrastructure. “A platform is only valuable if there are enough users on it. This is something that many other new Metaverse projects have gotten wrong,” he says.
Many entrepreneurs building virtual world projects believe in the winning formula of mainstreaming the metaverse for various use cases such as productivity, work, fitness, music, art, and more. But not all of them agree on the importance of VR. Unlike Meta, some of these entrepreneurs don’t believe VR will be widely adopted, especially in developing countries whose citizens will find headsets extremely expensive.
Startups, individual creators, and communities are paramount in building and shaping the metaverse. Meta and Zuckerberg have previously stressed that millions of creators and developers worldwide must come together to build an open and interoperable metaverse. Whether that vision materializes remains to be seen; in any case, the lack of rules, frameworks, and interchange standards on interoperability presents significant constraints for the metaverse.
Conclusion
It is evident from Zuckerberg’s remarks that the layoffs come as the company tries to become more capital-efficient by cutting costs wherever possible amid its ongoing financial slump. It looks like Meta will shift its resources to high-priority growth areas such as its AI discovery engine, ads, and business platforms, as well as its metaverse project.
However, there is a need for a clear framework to govern how brands can interact with the metaverse, enabling more startups to enter the segment and forging tie-ups with brands to connect with a mainstream audience. This network effect would allow Meta to tap various possibilities in commerce and retail and create new revenue streams. To do so, it may have to draw inspiration from its earlier incarnation, Facebook. It appears Zuckerberg needs to rethink his core strategy and take a user-first approach as he embarks on an ambitious journey to get Meta back on track.
After widespread layoffs at big tech firms like Twitter, Intel, and Microsoft, Salesforce has now laid off some of its employees. However, the exact number of affected employees has yet to be revealed.
According to a recent report from Protocol, the main reasons behind the layoffs were economic uncertainty and product challenges. The report stated that more than 2,500 employees would be affected by the job cuts, and that hundreds of employees have been placed on a 30-day performance review.
A Salesforce spokesperson told CNBC, “Our sales performance process drives accountability, which might lead to leaving the business for some, and we support them through their transition.”
When Salesforce announced layoffs in August 2020, it offered affected employees a 60-day notice period along with severance, placement services, and a few months of benefits. Investors expect massive returns from Salesforce, which has historically directed its profits toward growth. As per the report, Salesforce is also facing pressure from activist investor Starboard.
In October 2022, when Salesforce laid off around 90 contract workers, it implemented a hiring freeze until January 2023. At that time, one of the spokespersons revealed that limited hiring continues in Salesforce, but most departments have already reached their hiring goals for the financial year. The organization has ended contracts with some of its recruiting contractors temporarily.
A new association focused on blockchain and cryptocurrency has been established in Abu Dhabi’s free economic zone to advance the development of blockchain and crypto ecosystems throughout the Middle East, North Africa, and Asia.
The zone was formed to support the advancement of fintech companies in the UAE. According to its website, the nonprofit organization will strive to create commercial opportunities, facilitate regulatory solutions, and invest in education to enable industry growth.
The founder of a Dubai-based international risk and compliance consulting firm, Jehanzeb Awan, will lead the association.
Other association members include Stuart Isted, Crypto.com’s Middle East and Africa general manager; Richard Teng, regional head of Binance’s Middle East and North Africa (MENA); and CEO of BitOasis, Ola Doudin.
After achieving remarkable milestones in the e-scooters arena in recent months, Ola Electric is now venturing into the production of electric bikes. The Founder and CEO of Ola Electric, Bhavish Aggarwal, hinted at the move in a tweet.
The exact details of the specifications and production for the Ola Electric bikes are not yet available and are expected to be released soon. However, considering the comments to the CEO’s tweet, people do seem excited at the news.
Ola’s new EV startup Ola Electric has witnessed a tremendous response to its three recently launched scooters, and the company says it has produced 100,000 vehicles within a period of 10 months. With three products, the S1 Air, S1, and S1 Pro, the company made record sales during the last festive season.
Talking about this landmark, Bhavish said, “Since beginning our journey to electrify India, we have unleashed the potential of EVs in our nation, providing customers with a product and experience that is far superior to anything a petrol alternative can provide. This accomplishment is just the start.”
Notably, Ola has overtaken Hero Electric and now has the best month-on-month sales growth in the segment. According to sources, sales have grown 60% every month.
A machine learning model called Neural Acoustic Fields (NAFs) has been developed by researchers from MIT and the IBM Watson AI Lab to forecast what sounds a listener will hear in various 3D settings. The machine-learning model can mimic what a listener would hear at different locations by simulating how any sound in a room would travel across the space using spatial acoustic information.
The neural acoustic fields system can understand the underlying 3D geometry of a room from sound recordings by precisely modeling the acoustics of a scene. The researchers may utilize the acoustic data collected by their system to create realistic visual reconstructions of a space, akin to how people use sound to infer the elements of their physical surroundings.
This approach might aid artificial intelligence agents in better comprehending their surroundings in addition to its potential uses in virtual and augmented reality. According to Yilun Du, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and co-author of a paper describing the model, an underwater exploration robot could sense things that are farther away by simulating the acoustic properties of the sound in its environment.
Du admits that most researchers have so far concentrated solely on simulating vision. These models often combine a neural renderer with an implicit representation that has been trained in order to capture and render visuals of a scene simultaneously. By leveraging the multiview consistency between visual observations, these methods can extrapolate images of the same scene from unique viewpoints. However, because humans have a multimodal perception, sound is just as essential as vision, which opens up an attractive research area on improving how sound is used to describe the environment.
Previous studies on capturing a location’s acoustics called for careful planning and design of its acoustic function, which cannot be applied to arbitrary scenes. According to the study report from MIT, despite recent improvements in learned implicit functions that have produced ever-better visual world representations, learning spatial auditory representations has not made similar strides. A variant of machine-learning model known as the implicit neural representation has been employed in computer vision research to produce continuous, smooth reconstructions of 3D scenes from images. These models make use of neural networks, which are composed of layers of linked nodes, or neurons, that analyze data to perform a task.
The MIT researchers first tried using a similar model to depict how sound continuously permeates a scene. However, that direct approach failed.
This inspired the team to work on neural acoustic fields, an implicit model that reflects how sounds travel in a spatial environment. Neural acoustic fields encode and transmit an impulse response in the Fourier frequency domain to capture the complex signal representation of impulse responses.
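As a rough illustration of that frequency-domain encoding, the snippet below builds a synthetic, exponentially decaying impulse response (a stand-in for a measured room response, not data from the paper) and represents it by its Fourier magnitudes and phases; the phase is the component a magnitude-only model discards:

```python
import numpy as np

# Hypothetical impulse response: an exponentially decaying burst of noise,
# standing in for a measured room response (not data from the paper).
rng = np.random.default_rng(0)
sr = 16000                                  # sample rate in Hz (assumed)
t = np.arange(sr // 2) / sr                 # half a second of response
impulse_response = rng.standard_normal(t.size) * np.exp(-6.0 * t)

# Encode the response in the Fourier frequency domain, as NAFs do: the
# network learns to predict these per-frequency coefficients.
spectrum = np.fft.rfft(impulse_response)
magnitude = np.abs(spectrum)                # what a magnitude-only model predicts
phase = np.angle(spectrum)                  # the part the authors note NAFs omit

# Round-tripping magnitude + phase recovers the original response exactly.
recovered = np.fft.irfft(magnitude * np.exp(1j * phase), n=impulse_response.size)
print(np.allclose(recovered, impulse_response))  # True
```

The round trip above only works because the phase is kept; dropping it is the compactness trade-off the researchers discuss in their limitations.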
NAFs can be used to create or enhance existing feature maps of rooms. (Credit: Luo et al)
To enable neural acoustic fields to continuously map all emitter and listener location pairs to a neural impulse response function that can be applied to any sound, acoustic propagation in a scene is modeled as a linear time-invariant system. Working with sound rather than visuals also let the team sidestep the vision models’ dependence on photometric consistency, the phenomenon whereby an object looks roughly the same regardless of where you stand, which does not apply to sound. Instead, the neural acoustic fields method exploits the reciprocal nature of sound (exchanging the locations of the source and the listener has no effect on how the sound is perceived), as well as the influence of local elements such as furniture or carpeting on the sound as it travels and bounces. The model randomly picks locations and learns from experience by using a grid of objects and architectural features.
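To make that mapping concrete, here is a toy, untrained stand-in for such a field: a small random-weight numpy MLP (not the authors’ architecture, which is trained and also conditions on local geometric features) that maps an emitter/listener position pair to impulse response magnitudes, with the query symmetrized so the reciprocity property holds by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a neural acoustic field: a random-weight MLP mapping an
# (emitter, listener) position pair to the magnitudes of a short impulse
# response. Weights are untrained; sizes are invented for illustration.
W1 = rng.standard_normal((6, 64)) * 0.1
W2 = rng.standard_normal((64, 257)) * 0.1   # 257 frequency bins (assumed)

def acoustic_field(emitter_xyz, listener_xyz):
    x = np.concatenate([emitter_xyz, listener_xyz])
    h = np.tanh(x @ W1)                      # hidden layer
    return np.abs(h @ W2)                    # nonnegative magnitudes

# Reciprocity: swapping emitter and listener should leave the response
# unchanged. One simple way to bake this in is to symmetrize the query.
def symmetric_field(a, b):
    return 0.5 * (acoustic_field(a, b) + acoustic_field(b, a))

src = np.array([1.0, 0.5, 1.2])
lis = np.array([3.0, 2.0, 1.5])
resp = symmetric_field(src, lis)
print(np.allclose(resp, symmetric_field(lis, src)))  # True: reciprocity holds
```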
The NAF system is based on techniques originally developed for computer vision systems. (Credit: Luo et al)
Researchers feed the final neural acoustic field model both visual information about an acoustic setting and spectrograms showing what an audio piece would sound like with the emitter and listener positioned at specific points around the room. The model then forecasts what the audio would sound like at any location the listener might move to in the scene.
The machine learning model produces an impulse response that depicts how a sound would alter as it spreads through the environment. The researchers then apply this impulse response to various sounds to hear how they would change as a person moves about the room.
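In signal processing terms, that rendering step is a convolution. A minimal numpy sketch, using synthetic stand-in signals (a unit click and two invented decay envelopes, not responses predicted by the actual model):

```python
import numpy as np

# A dry source sound (a short click) and two hypothetical impulse responses
# the model might predict at two listener positions: nearby (loud, fast
# decay) and far across the room (quiet, slow decay).
dry = np.zeros(100)
dry[0] = 1.0
t = np.arange(400)
ir_near = np.exp(-0.05 * t)
ir_far = 0.3 * np.exp(-0.01 * t)

# What the listener hears at each position is the dry sound convolved with
# the position-dependent impulse response.
heard_near = np.convolve(dry, ir_near)
heard_far = np.convolve(dry, ir_far)

# The far rendering arrives quieter and rings longer.
print(heard_near[0], heard_far[0])   # 1.0 0.3
```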
The researchers found that their methodology consistently produced more precise sound models when compared to other techniques for modeling acoustic data. Their model also had a far higher degree of generalization to new locations in a scene than previous approaches since it incorporated local geometric information.
Additionally, researchers discovered that incorporating the acoustic knowledge their model picks up into a computer vision model can improve the visual reconstruction of the scene. In other words, a neural acoustic fields model could be used backwards to enhance or even build a visual map from scratch.
The researchers intend to continue improving the model so that it can be generalized to new scenarios. Additionally, they plan to apply this method to more complex impulse responses and larger scenes, like entire buildings or even a whole town or city.
The MIT-IBM Watson AI Lab’s principal research staff member Chuang Gan believes that this new method may present novel opportunities to develop multimodal immersive experiences for metaverse applications.
The research team also mentions the limitations of their neural acoustic fields model. Like previous spatial acoustic field coding studies, their method does not model the phase. While a magnitude-only approximation may be sufficient for many tasks, it may not reproduce believable spatial acoustic effects, in a compact and continuous manner, for tasks that depend on the phase. The NAF model also needs a precomputed acoustic field, a prerequisite shared with earlier acoustic field studies. Though this is not a drawback for many applications, the researchers believe the ability to generalize from very small training samples could create new opportunities. Finally, like earlier research using implicit neural representations, the model is fitted to a particular scene; it is still unclear whether the acoustic field of new scenes can be forecast.
For the upcoming football World Cup in Qatar, FIFA revealed in August that 15,000 cameras with facial recognition technology would be used to monitor the whole event, including spectators. According to the organizers’ chief technology officer, Niyas Abdulrahiman, the cameras monitoring football fans across eight stadiums and on the streets of Doha would herald a new norm, a new trend in venue management; he called it Qatar’s gift to the world of sport.
The facial recognition-based surveillance is a facet of Qatar’s attempts to monitor security risks, including terrorism and hooliganism, during the competition, which is anticipated to draw over 1 million spectators. The eight stadiums where the matches will be played will be under the technological command and control of the Aspire Command and Control center, which will also manage the surveillance network. All neighboring metro trains and buses would be monitored by the control center.
The College of Engineering at Qatar University (QU), in partnership with the Supreme Committee for Delivery and Legacy, has created an intelligent crowd management and control system with modules for crowd counting, face recognition, and abnormal event detection. Using data from drones, the university research team initially created a crowd-counting method that uses dilated and scaled neural networks to extract useful characteristics and estimate crowd densities. The team has also worked on a face identification system that uses a multitask convolutional neural network to handle faces in various poses. For this, a cascade structure was used to integrate a pose estimation algorithm with a face identification module. Left-side, frontal, and right-side captures of faces served as the training data for the CNN-based pose estimation method. To eliminate unnecessary face information (e.g., background content), a skin-based face segmentation approach centered on structure-texture decomposition and a color-invariant description was developed.
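The cascade structure described here can be sketched generically: a first-stage pose estimator routes each face to a pose-specific recognizer. The thresholds, field names, and recognizers below are invented stubs for illustration, not QU’s trained networks:

```python
# Generic cascade sketch: stage 1 estimates the pose, and its output
# selects which pose-specific recognizer handles the face. Both stages
# are stand-in stubs, not the actual CNN modules from the QU system.

def estimate_pose(face):
    # Stub: pretend a yaw angle decides the pose bucket the per-pose
    # recognizer was trained on (thresholds are invented).
    yaw = face["yaw"]
    if yaw < -15:
        return "left"
    if yaw > 15:
        return "right"
    return "frontal"

RECOGNIZERS = {
    "left": lambda face: f"id-left:{face['name']}",
    "frontal": lambda face: f"id-frontal:{face['name']}",
    "right": lambda face: f"id-right:{face['name']}",
}

def cascade_identify(face):
    pose = estimate_pose(face)            # stage 1: pose estimation
    return RECOGNIZERS[pose](face)        # stage 2: pose-specific identification

print(cascade_identify({"yaw": 20, "name": "fan42"}))   # id-right:fan42
```

The benefit of the cascade is that each second-stage recognizer only ever sees faces in the pose range it was trained on.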
Other security issues have been brought up concerning the forthcoming World Cup event, in addition to the usage of biometric technologies to survey attendees. Visitors entering Qatar will be required to download two smartphone applications that may jeopardize their personal privacy and data security. Access to events will be managed by the Hayya Card, a digital identification card that can only be obtained by uploading a passport scan and a clear photo of your face.
Qatar’s World Cup organizers are not deploying facial recognition technology solely to monitor football fan activity. Earlier, FIFA said that the tournament would use semi-automated offside detection technology. With this technology, officials will be able to make judgments more quickly, which will assist the game’s progress.
These facial recognition-based monitoring devices have been implemented at football stadiums and clubs around the world in recent years. Valencia CF and the biometrics company FacePhi signed a contract in June 2021 to develop and implement face recognition technology at Mestalla Stadium for the following season.
A face recognition system developed by Russian technology company NtechLab was previously utilized by local law enforcement to identify and detain more than 40 people during World Cup-related activities in Moscow in 2018. NEC, a Japanese company that also provided its face recognition cameras for the 2020 Tokyo Olympics, provided facial recognition stadium security for the Brazil 2014 World Cup.
Facial recognition technology has not always been successful in monitoring crowds, since there have been instances where things went wrong. At the 2017 Champions League final in Cardiff, UK, facial scanning technology falsely labeled almost 2,000 spectators as potential offenders. After a court ruling, the system was shelved, only to be redeployed early this year.
Mark Zuckerberg, Meta’s CEO, announced that the company is downsizing as it lays off approximately 13% of its employees, amounting to over 11,000 people. The layoffs are part of a series of steps, like freezing hiring through Q1, that Meta will take to cut discretionary spending.
Zuckerberg said that the company invested heavily in digitization after the world moved online due to the COVID-19 pandemic. At the time, digitization brought in considerable revenue, which led companies to anticipate outsized revenue growth even after the pandemic ended. However, it did not work out as expected, and digital activity has since returned to pre-pandemic levels.
To survive the paradigm shift, the company needs to become more capital-efficient. Zuckerberg added, “We’ve cut costs across our business, including scaling back budgets, reducing perks, and shrinking our real estate footprint.”
Meta will compensate all affected US employees with 16 weeks of severance pay plus two additional weeks for every year of service. The company will also provide immigration support to help stationed employees. Other remuneration includes payment for all remaining PTO, equity vesting, health insurance coverage, career support with an external vendor, and access to job leads.
Elon Musk finally closed his US$44bn acquisition of the social media platform Twitter on October 28, after the previously postponed lawsuit between the two sides was resolved. Since the takeover, the platform has been in the limelight for various reasons, including potent rumors, employee layoffs, and accusations that it has dropped or weakened its content policies. Musk’s takeover has raised speech concerns worldwide, especially given his self-confessed “free speech absolutist” stance. To head off further debate on the issue, Musk tweeted against any policy changes:
Again, to be crystal clear, Twitterâs strong commitment to content moderation remains absolutely unchanged.
In fact, we have actually seen hateful speech at times this week decline *below* our prior norms, contrary to what you may read in the press.
Nevertheless, the platform remained under the light as Musk dissolved the board of directors while affixing his control as the “sole director.” Within the first week, Musk fired Chief Executive Parag Agrawal, while others, like the Chief People and Diversity Officer Dalana Brand, resigned after the takeover.
Moving into the second workweek under Musk, the platform was expected to lay off over 25% of its employees to bring in people with higher profiles and restructure the microblogging platform to make it more accurate and accountable.
Twitter needs to become by far the most accurate source of information about the world. That’s our mission.
However, as many as 3,700 people were laid off worldwide, of whom over 180 were from its marketing and communications department in India alone, on Friday, November 4. Soon after, many of these people were contacted again because they had been fired “in error” or were “too essential” for the changes Twitter was working on.
The company has been sued for mass layoffs in California, where employees have allegedly been sacked without valid notice. Following the abrupt suspension of their access to corporate services, including email and Slack, several employees discovered they had been fired. Another class-action lawsuit was filed in San Francisco federal court following the layoffs, now being called a “whiplash.”
However, the changes Musk’s takeover brought to the employee portfolio resulted from Twitter’s performance over the last few years despite its massive workforce. The social media platform was losing as much as US$4m per day, leaving the new director with little choice but to lay off people in less significant roles. Musk clarified that the company offered three months of severance pay, roughly 50% more than the legal requirement.
The paradigm shift may not seem significant; however, it has disproportionately affected underrepresented groups such as women, Hispanic people, and Black people. In 2021, Twitter saw a considerable increase in the proportion of Black and Latinx employees, partly due to the company’s ability to accommodate remote work. These groups make up a more significant share of those who wish to work remotely than their male or white counterparts.
Many believe that Twitter is laying people off to avoid paying year-end compensation to employees. Musk, as always, has been vocal on Twitter in denying such accusations.
Jack Dorsey, Twitter’s former CEO and co-founder, broke his silence on the layoff allegations, saying he had grown the company too quickly, which created the pressing need to downsize the platform after Musk took over. The changes Twitter is experiencing post-takeover were long-awaited, given the platform’s performance and spending. For instance, the platform spent 50% more than DeepMind on research and development!
Nevertheless, Musk is encouraging those still working at the company to release new features rapidly. The company plans to roll out the Blue subscription in India, a feature that provides additional access to the most engaged people on the platform. Elon has also brought in around 50 professionals, including people from Tesla’s Autopilot team, Neuralink, and The Boring Company, to get more experts on board.