
OpenAI’s DALL-E now offers Outpainting Feature to Extend Existing Images and Artworks

Crop of an Outpainting created by DALL-E user Emma Catnip

In a blog post, OpenAI announced "Outpainting," a new DALL-E feature that lets users express themselves more creatively through natural language descriptions. Outpainting enables users to extend an image beyond its original borders by brushing over areas outside the initial canvas and adding visual elements in the same style. Put simply, the new feature continues a painting or panorama beyond its visible edges.

GIF from OpenAI
Original: ‘Girl with a Pearl Earring’, Johannes Vermeer | Outpainting: August Kamp

DALL-E, a text-to-image generator unveiled last year, builds on the language understanding and context offered by GPT-3 and its underlying architecture to produce a convincing image that corresponds to a prompt. The updated version, DALL-E 2, was released to a limited number of users in April this year and recently surpassed the 100,000-user mark.

DALL-E 2’s existing ‘Inpainting’ edit functionality allows adjustments inside a generated image. Outpainting, by contrast, extrapolates beyond the original image data, letting users produce large-scale images in any aspect ratio. To preserve the context of the original picture, the feature takes already-present visual elements such as reflections, shadows, and textures into account.
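Conceptually, Outpainting works like inpainting with the editable region placed outside the original picture: the source image sits on a larger, partly transparent canvas, and the model fills in the empty area in a matching style. The sketch below illustrates that idea with OpenAI's public image-edit endpoint and the legacy 0.x openai Python SDK; the file names and prompt are illustrative assumptions, and this is not the implementation behind the DALL-E web editor.

```python
# Hedged sketch: extend a square image to the right via the OpenAI
# images/edit endpoint (legacy 0.x `openai` Python SDK assumed).
# File names and the prompt are illustrative only.
import openai
from PIL import Image

openai.api_key = "sk-..."  # assumption: normally read from an environment variable

# Keep the right half of the original on the left of a new transparent canvas;
# the transparent right half is the region the model is asked to fill in.
original = Image.open("original.png").convert("RGBA")      # assumed 1024x1024
canvas = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
canvas.paste(original.crop((512, 0, 1024, 1024)), (0, 0))
canvas.save("canvas.png")

# With no separate mask supplied, the endpoint treats transparent pixels
# as the editable area, so it "outpaints" the empty right half.
response = openai.Image.create_edit(
    image=open("canvas.png", "rb"),
    prompt="a wide panoramic landscape continuing the original painting",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # generated tile, to be stitched onto the original
```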

Previously, the size and aspect ratio of DALL-E 2 creations were constrained: the model could not produce an image larger than 1,024 by 1,024 pixels or in any other shape. With Outpainting, users are now limited mainly by their credit balance. The initial image and each additional outpainted section cost one credit apiece. Everyone receives 50 free generation credits during their first month and 15 credits each subsequent month; additional credits can be purchased in blocks of 115 for US$15.

Read More: Imagen vs DALL.E

DALL-E 2 is one of the best text-to-image AI tools available today. It is opening doors for a new generation of artists who may previously have been held back by physical limitations, a lack of time, or the inability to attend art school. However, it has also drawn backlash over bias and over violent and sexual content. With the Outpainting feature, new concerns could arise quickly if anyone can add to already-existing works of art.

All DALL-E 2 users currently have access to outpainting, although it is only available on desktop. OpenAI has promised to expand these functionalities to smaller display devices in the upcoming months. If you want to participate in the public beta, sign up for the waitlist.


Samsung Reveals a Second Data Breach This Year: Are You One of the Affected?

Second Samsung Data breach 2022

Samsung recently revealed a cybersecurity data breach that occurred in late July. The incident, discovered on August 4, resulted in a breach of personal information including names, contact and demographic information, dates of birth, and product registration information, according to a September 2 notice from Samsung to customers. The company told customers that social security numbers and credit card information stored in its systems were not affected by the breach.

Samsung said in the notice that it had taken measures to safeguard the compromised systems. It has also teamed up with a well-known outside cybersecurity firm and is collaborating with law enforcement on the matter. In addition, the company has created an FAQ page on its website with answers to common questions and recommended actions.

Although the number of people impacted has not yet been disclosed, Samsung warns that if you have received a notification, your data may have been compromised. Samsung advises those affected by the breach to be on the lookout for phishing scams, regularly monitor their credit reports, read the security notice FAQ, and review the company’s privacy policy. If you’re worried, you can ask questions regarding the incident by email at seainfo@email.support.samsung.com.

Read More: How well can Vertical Federated Learning solve machine learning’s data privacy Issues?

It’s not the first time Samsung has struggled with a security problem, nor is it its first in 2022. Back in March, Samsung disclosed that it had experienced a cybersecurity breach that exposed confidential business information. The leaked information is believed to have contained source code from its partners, including proprietary data from Qualcomm, a US chipmaker that supplies chipsets for Samsung Galaxy smartphones marketed in the US. However, according to Samsung, neither its employees’ nor customers’ personal information was impacted. The company assured at the time that it had taken precautions to prevent attacks in the future. The Lapsus$ hacker organization had previously claimed responsibility for the breach.


Google and Amazon criticize Microsoft over cloud computing changes


Google and Amazon have criticized Microsoft’s cloud computing changes, saying they limit competition and discourage customers from switching to rival cloud service providers.

The US giant on Monday announced amended licensing deals and other changes that take effect on October 1, saying they would make it easier for cloud service providers to compete. However, the cloud services of Amazon, Google, Alibaba, and Microsoft itself are excluded from the new deals.

Microsoft’s move comes after smaller European Union (EU) competitors took their complaints about its cloud service practices to EU antitrust regulators, which, as a result, questioned market players on the issue and the impact they have experienced. Amazon, which leads the cloud market ahead of Microsoft and Google, was scathing in its criticism.

Read More: Controversy Sparked As AI-Generated Art Piece Wins Top Spot At A Competition

A spokesperson for Amazon’s cloud service unit AWS said that Microsoft is now doubling down on the same disruptive practices, implementing even more restrictions in an unfair attempt to limit the competition it faces rather than listening to its customers and restoring fair software licensing in the cloud for everyone.

Marcus Jadotte, Google’s vice president for government affairs and policy, was equally critical. He said in a tweet that the promise of the cloud is flexible, elastic computing without contractual lock-ins.

He added that customers should be able to move freely across platforms and choose the best technology for them rather than what works best for Microsoft.


Russian Astronauts Finish Outfitting European Robotic Arm during their Latest EVA

European robotic arm (ERA) is pictured extending out from the Nauka multipurpose laboratory module

On Friday, Roscosmos’s Expedition 67 Commander Oleg Artemyev and Flight Engineer Denis Matveev completed their spacewalk at 5:12 p.m. EDT. They accomplished their primary goals, which included relocating the European robotic arm’s exterior control panel from one operating area to another and testing a rigidizing mechanism intended to make grabbing payloads easier. After exiting through the hatch of the Poisk docking compartment airlock, the duo set about outfitting the European robotic arm on the International Space Station’s (ISS) Nauka laboratory module. The team also moved a Strela telescoping boom from the Zarya module to the Poisk module.

The European robotic arm (ERA) was built for the European Space Agency (ESA) by a European team under the direction of Airbus Defence and Space in the Netherlands. Launched into orbit in July 2021, the robotic arm is designed to function like a human arm but can lift up to 17,600 pounds (8 tonnes) when operating outside the orbiting space station. The arm, which is the first robot capable of anchoring itself to the ISS, has two hands and can move back and forth by crossing one hand over the other like an inchworm. By transferring payloads as they arrive at the space station without the need for spacewalking humans, the arm lessens the workload on the ISS astronauts. It can also help transport astronauts during spacewalks. Apart from the ERA, the ISS already has the Canadian-built Canadarm2 and the Japanese robotic arm, which currently assist station maintenance, operations, and research.

Read More: NASA leverages AI-based CFD to Develop Hypersonic Missiles

According to information released by ESA, the European robotic arm successfully carried out its maiden transfer outside of the International Space Station on August 24. The arm maneuvered in accordance with instructions from Russian cosmonauts (astronauts) on board to release a tiny payload from a single pin latch on the Nauka science module. The payload was then put back in its original position after being transferred to the other side by the robot. The process took around six hours, and then the robot resumed its hibernation.

The main purpose of the latest spacewalk was to complete unfinished tasks from the August 17 EVA (extravehicular activity, or spacewalk), designated VKD-54. Russian spacewalks are numbered sequentially, and a follow-up or unplanned excursion receives a letter suffix (“a,” “b,” etc.) appended to the number of the spacewalk it continues, so Friday’s EVA was designated VKD-54a. Officially, VKD-54a was the eighth spacewalk of 2022 and the 253rd overall in support of the International Space Station. It was also Matveev’s fourth spacewalk and Artemyev’s eighth overall in his career as a cosmonaut.

For their journey outside the Station, both men donned Russian Orlan-MKS spacesuits, with Artemyev’s having red stripes and Matveev’s having blue stripes.

On Friday, September 2, at 13:25 UTC, the hatch on the Poisk module was opened, signaling the formal commencement of the EVA. A little while later, Artemyev and Matveev exited the airlock.

The two cosmonauts moved to the Nauka module once they were outside and set up a payload adapter platform there. They then began relocating the External Man Machine Interface (EMMI) control panel, which enables spacewalkers to manually operate the European Robotic Arm during EVAs. Artemyev and Matveev carried the EMMI from the side to the front of Nauka, then mounted the control panel on a handrail and connected it to the BLT3 base point via a cable.

The EMMI was then turned on, and Artemyev was instructed to test the controls, which worked according to plan. Next, the team verified and configured several settings on each of the European Robotic Arm’s end effectors using a rotating torquing tool, confirming with ground control in Moscow intermittently to ensure each setting was accurate.

To reposition the European Robotic Arm, Artemyev and Matveev then unhooked End Effector 1’s launch restraint ring. The arm was commanded to slowly approach and eventually grip the BLT2 base point on Nauka, and the testing went off without a hitch.

After the European Robotic Arm’s movement testing was over, Artemyev was advised to return to the EMMI for more inspections, which were successfully carried out. The EMMI was then put into storage mode, and Artemyev moved to complete mounting a set of soft handrails to the side of the Nauka module. Since the two cosmonauts were working significantly ahead of schedule, mission control decided to proceed with completing get-ahead tasks before the next EVA. One of these was creating a translation route by extending the Strela boom from its base on the forward end of the Zarya module to the Poisk airlock module.

Although this task had been postponed from a previous spacewalk, it was nevertheless crucial, since subsequent EVAs will use the translation route to relocate Nauka’s airlock and radiator.


Google Wear OS Play Store gets its new redesign


According to September’s Google Play system update notes, the Play Store on Wear OS has begun receiving a redesign. The new look was first shown off at Samsung’s Unpacked event last month, where the Galaxy Watch 5 and Watch 5 Pro were unveiled. The redesign starts with the search button, which is now housed in a pill instead of a circle.

An ‘Explore all’ section with three app suggestions sits next to a ‘See more’ button, showing each application’s name, rating, and icon in one place. ‘Recommended for you’ and ‘Now trending’ are followed by large cards for ‘Music streaming,’ ‘Watch faces,’ ‘Healthy mind & body,’ and ‘Essential watch apps.’

The last section allows one to browse and install applications from other devices, such as one’s phone, with the option to open the Play Store on that device. ‘Settings’ and ‘Manage apps’ round out the page. Unfortunately, checking for app updates is still buried there.

Read More: Controversy Sparked As AI-Generated Art Piece Wins Top Spot At A Competition

By contrast, the current design is text-heavy and dull. Promoting apps directly on the main feed can encourage people to download more for their wearables and can further boost developer interest. It is a sign that wearable design is shifting toward showing most content in one view rather than making users dive into different menus.

The redesign was spotted on version 31.2.10-26 of Google Play for Wear OS. However, the revamp is rolling out via a server-side update and is not yet showing up for most people.


Controversy sparked as AI-generated art piece wins top spot at a competition


A designer has sparked controversy after his AI-generated piece of art won the top position at a competition in the US, with critics calling the win a threat to human artists everywhere.

Jason M. Allen’s image ‘Théâtre D’opéra Spatial’ beat more than a dozen other entries in the digital art/digitally manipulated photography category at the Colorado State Fair.

The winning artwork was created using Midjourney, an AI tool that turns lines of text into realistic graphics; the award included a $300 cash prize. AI image-generation tools have been around for years, with companies such as OpenAI and Google investing heavily in text-to-image systems.

Read More: Thailand Rolls Out New Rules On Advertising For Crypto Companies 

However, several people took to social media to express anger over the award, arguing that it devalued the hard work humans invest in physically creating unique art. One Twitter user said Midjourney could be fun, but it should never be used to cheat other artists.

Some expressed concern that it could endanger their livelihoods, while others said AI-generated art should have its own category in the future, something Allen also suggested in an interview with the Pueblo Chieftain newspaper as a way to avoid further controversy.

The two judges for the category were unaware that Allen’s submission was AI-generated, but they said it would not have changed their decision, as they were judging how well the art tells a story and evokes spirit.


Thailand rolls out new rules on advertising for crypto companies 


The Securities and Exchange Commission (SEC) of Thailand has rolled out new advertising rules for crypto companies after the industry came under scrutiny from authorities.

The new rules include providing a balanced view of potential risk and returns and clearly showing investment risks in advertisements, the SEC said on Thursday. Information on advertising terms must also be given to regulators.

The SEC said operators must provide it with details of their advertising spending and campaigns, including the use of bloggers and influencers, along with terms and time frames. It added that operators have 30 days to comply with the new rules.

Read More: US Orders NVIDIA And ARM To Stop Selling AI Chips To China 

Crypto companies in Thailand advertise extensively on digital media, and billboards promoting the industry can be seen throughout the capital, Bangkok. Thailand’s regulator has also recently imposed fines on crypto companies, with the operations of several firms hit by a worldwide slump in the value of digital currencies.

An executive from local operator Bitkub, Samret Wajanasathian, was fined $231,670.75 (8.5 million baht) this week for insider trading. Samret said he would appeal the decision.

Last month, Thailand’s fourth-largest lender by assets, SCB X PCL, scrapped a planned $500 million acquisition of Bitkub over regulatory concerns.

Rival operator Zipmex was fined 1.92 million baht on Wednesday for suspending withdrawals in July. The startup said it was closely assessing the terms of the penalty with its legal counsel.


Microsoft launches first cloud data center region in Qatar 


Microsoft has launched its very first cloud data center region in Qatar in collaboration with the Qatari Ministry of Communications and Information Technology. 

The 55th region for Microsoft globally will join the biggest cloud infrastructure in the world, enhancing Qatar’s regional and global competitiveness and consolidating its digital transformation. It will also boost local growth by supporting economic diversification, fostering talent, and attracting foreign investment. 

The launch ceremony, titled Qatar Digital Journey to the Future, was attended by several ministers, senior officials in the commercial sectors, and Microsoft officials. The government has already passed a series of laws to encourage investment in the digital economy. 

Read More: US Orders NVIDIA And ARM To Stop Selling AI Chips To China

With the opening of the Microsoft center, local and international businesses will be able to host their cloud data in Qatar, benefiting from high levels of reliability and performance. Customers can now use Microsoft Azure to develop advanced apps in a secure cloud environment using artificial intelligence, data analytics, the Internet of things, and hybrid cloud capabilities. 

Ralph Haupter, president of Microsoft Europe, Middle East, and Africa, said that customers in Qatar are using Microsoft cloud to innovate, achieve their goals, and accomplish a lot with less effort. He said Qatar’s first large-scale cloud data center would provide more opportunities to accelerate digital transformation. 

Several agencies in Qatar are using Microsoft cloud data centers to develop their digital capabilities, including the MCIT through its national programs, the TASMU platform, the Qatar digital government, and the Supreme Committee for Delivery and Legacy.


ProcTHOR by Allen Institute generates embodied AI environments 

ProcTHOR by Allen Institute researchers generates embodied AI environments

Large-scale training data has strengthened computer vision and natural language processing (NLP) models and driven new findings. Recently deployed models such as CLIP, DALL-E, GPT-3, and Flamingo pre-train their neural architectures on massive task-agnostic data, which results in remarkable performance on downstream tasks, including zero- and few-shot settings. Lately, embodied AI simulators have been gaining attention and are being strengthened with physics, manipulators, object states, deformable objects, fluids, and real-sim counterparts. However, scaling them up to tens of thousands of scenes is challenging. Given this, Allen Institute researchers developed ProcTHOR for the procedural generation of embodied AI environments. The name is short for procedural THOR, where THOR stands for "The House Of inteRactions."

What is Embodied AI?

Embodied AI is AI that controls a physical thing, such as a robot or an autonomous vehicle. It is an interdisciplinary field combining natural language processing, reinforcement learning, computer vision, physics-based simulation, navigation, and robotics. The approach aims to give machines a mind-body relationship similar to human embodiment, that is, the way our minds and bodies react to complex movements and situations. Embodied AI starts with embodied agents, such as virtual robots and egocentric assistants, training in realistic 3D simulation environments. The learning is typically driven by reinforcement learning, a type of machine learning in which an agent learns to take suitable actions to maximize reward in a given situation. Researchers in embodied AI are trying to move away from purely algorithm-led approaches toward understanding how biological systems work, deriving principles of intelligent behavior from them, and applying those principles to artificial systems.
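To make the perceive-act-reward loop concrete, the sketch below steps a simulated agent through a few actions in AI2-THOR (the simulator that ProcTHOR, discussed below, builds on) and accumulates a toy reward. The random action choice and hand-written reward are illustrative assumptions, not any published training setup.

```python
# Minimal sketch of an embodied-AI interaction loop (assumes `pip install ai2thor`).
# The reward below is purely illustrative.
import random
from ai2thor.controller import Controller

controller = Controller(scene="FloorPlan1")   # a built-in iTHOR kitchen scene

ACTIONS = ["MoveAhead", "RotateLeft", "RotateRight"]
total_reward = 0.0

for step in range(20):
    action = random.choice(ACTIONS)           # a trained policy would choose here
    event = controller.step(action=action)    # the agent acts in the simulator
    rgb_observation = event.frame             # egocentric RGB frame (numpy array)
    success = event.metadata["lastActionSuccess"]
    # Toy reward: small penalty per step, larger penalty for bumping into things.
    reward = -0.01 if success else -0.1
    total_reward += reward

controller.stop()
print("episode return:", total_reward)
```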

The embodiment hypothesis dates back to 2005, when Linda Smith proposed that intelligence emerges from an agent's interaction with its environment and is a result of sensorimotor activity. Although the initial hypothesis was rooted in psychology and cognitive science, recent growth and research in embodied intelligence has come from computer vision. While the applications of embodied AI seem to have great potential, so far they have benefited only a handful of manufacturers and startups. Some researchers believe embodied AI could be combined with existing Internet of Things (IoT) devices to take life-saving decisions on the spot within milliseconds.

Read more: MIT Team Builds New algorithm to Label Every Pixel in Computer Vision Dataset

What is ProcTHOR?

ProcTHOR is a machine learning framework based on AI2-THOR that procedurally generates embodied AI environments. AI2-THOR is an open-source interactive environment containing four types of scenes for embodied AI. The ProcTHOR framework can construct fully interactive, procedurally generated, physics-enabled settings for embodied AI research. It was developed by the PRIOR team at the Allen Institute for AI and described in the research paper ‘ProcTHOR: Large-Scale Embodied AI Using Procedural Generation.’ ProcTHOR aims to train robots within a virtual environment so that the learning can then be applied in real life.

ProcTHOR allows random sampling of large datasets of varied, interactive, customizable, and high-performing virtual environments for training and evaluating embodied agents. For example, given a room specification, say a 3BHK (three-bedroom) house, ProcTHOR can generate many different floor plans that meet the requirement. The environments in ProcTHOR are fully interactive and support navigation, object manipulation, and multi-agent interaction.

The framework is a state-of-the-art application of machine learning that extends AI2-THOR, inheriting its large asset library, robotic agents, and precise physics simulation. Pre-training with ProcTHOR improves downstream performance and yields strong zero-shot performance. Zero-shot learning is a significant technique in machine learning in which models classify objects or data given very few to almost no labeled data points. ProcTHOR by Allen Institute researchers has five key characteristics:

  • Diversity: One can create many varieties of rich environments with ProcTHOR. The framework provides many options for every embodied AI task, including diversity of floor plans, assets, materials, object placements, and lighting. 
  • Interactivity: Interacting with objects in the environment is fundamental to embodied AI tasks. ProcTHOR provides agents with arms for manipulating objects. 
  • Customizability: ProcTHOR gives users full control over customization, from rooms to material and lighting specifications. 
  • Scale: ProcTHOR provides 16 different scene specifications and 18 semantic asset groups. These seed the generation process with a practically unlimited number of assets and scenes, so each environment or house created with ProcTHOR can be scaled to best fit the requirements.
  • Efficiency: ProcTHOR represents scenes as JSON files and loads them into AI2-THOR at runtime, making the memory overhead of storing houses remarkably low (see the sketch after this list). Furthermore, ProcTHOR delivers high framerates for training embodied AI models, and the scene generation process is automatic and fast.
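The efficiency point is easiest to see in code: a generated house is just a JSON-serializable specification that only becomes a live scene when handed to AI2-THOR. A minimal sketch, assuming the ai2thor package, a build recent enough to accept ProcTHOR house dictionaries, and a hypothetical house_0.json on disk:

```python
# Hedged sketch: a ProcTHOR house is plain JSON; it only becomes a live scene
# when passed to AI2-THOR. The file path below is hypothetical.
import json
from ai2thor.controller import Controller

with open("house_0.json") as f:                    # one generated house specification
    house = json.load(f)

controller = Controller(scene=house)               # JSON spec becomes the scene at runtime
event = controller.step(action="RotateRight")      # interact with it like any built-in scene
print(event.metadata["lastActionSuccess"])
controller.stop()
```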

ProcTHOR-10k 

ProcTHOR-10K is the dataset released by the Allen Institute researchers, containing a sample of 10,000 fully interactive houses obtained through ProcTHOR's procedural generation process. In addition, it contains sets of 1,000 validation and 1,000 test houses for evaluation. The assets are split across train, validation, and test, amounting to 1,633 unique assets across 108 asset types.
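The released houses can be pulled down programmatically. A minimal sketch, assuming the Allen Institute's prior package and that the dataset is published under the id "procthor-10k" with the train/validation/test splits described above:

```python
# Hedged sketch: fetch ProcTHOR-10K via the `prior` package
# (assumes `pip install prior` and the dataset id "procthor-10k").
import prior

dataset = prior.load_dataset("procthor-10k")
print(dataset)                        # expected to report the train/val/test splits

train_houses = dataset["train"]       # the 10,000 generated training houses
house = train_houses[0]               # one house as a JSON-like specification
print(len(train_houses), type(house))
```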

There are two essential requirements for large-scale training in an embodied AI simulator:

Scene statistics: The scene statistics of the houses in ProcTHOR-10K come from applying 16 different room specifications, which vary the distribution of house sizes and complexity. ProcTHOR covers a broader spectrum of scenes than other embodied AI simulators, including AI2-iTHOR, RoboTHOR, Gibson, and HM3D.

Rendering speed: High rendering speed is essential for large-scale training because training algorithms need millions of iterations to converge. GPU experiments were performed to record how many simulation processes could be distributed across the GPUs: 15 processes in the single-GPU experiment and 120 processes in the eight-GPU experiment. A comparison between ProcTHOR, iTHOR, and RoboTHOR concluded that ProcTHOR delivers higher framerates and renders fast enough to train large models in a reasonable amount of time.
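Framerate claims like these can be sanity-checked on one's own hardware by timing repeated simulator steps, as in the rough sketch below. This is a single-process measurement under assumed default settings, not the benchmarking protocol used in the paper.

```python
# Hedged sketch: rough, single-process framerate check for an AI2-THOR scene.
# Results vary heavily with hardware, resolution, and settings.
import time
from ai2thor.controller import Controller

controller = Controller(scene="FloorPlan1", width=224, height=224)

n_steps = 200
start = time.time()
for _ in range(n_steps):
    controller.step(action="RotateRight")   # a cheap, always-valid action
elapsed = time.time() - start
controller.stop()

print(f"~{n_steps / elapsed:.1f} simulator steps per second")
```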

Read more: Jio Haptik uses Microsoft Azure Cognitive Services to improve Hindi conversational AI

Training and scalability in ProcTHOR

Earlier approaches to building embodied AI environments either demand a lot of work from 3D designers, who must create 3D assets, organize them into suitable configurations inside sizable spaces, and set up appropriate textures and lighting, or rely on 3D scanning, in which specialized cameras are moved through real-world scenes and the resulting photos are stitched together into 3D reconstructions. With these strategies, it is impossible to scale scene repositories up by several orders of magnitude. ProcTHOR, by contrast, can handle orders of magnitude more scenes than current simulators because it can generate an arbitrarily large collection of settings. Additionally, it supports dynamic material randomization, which randomizes particular asset colors and materials each time an environment is loaded into memory for training.

Training inside ProcTHOR is a complex process spanning several levels, including room specification, connecting rooms, lighting, object placement, and more. The paper mentioned above demonstrated the potential of ProcTHOR with the ProcTHOR-10K dataset of 10,000 generated houses and a simple neural network. An ablation analysis shows the advantages of scaling from 10 to 100 to 1K and then to 10K scenes, and suggests that even more benefits could be obtained by using ProcTHOR to create still larger sets. Agents trained on ProcTHOR-10K with minimal neural architectures (no depth sensor, only RGB channels, no explicit mapping, and no human task supervision) produce state-of-the-art results on various navigation and interaction benchmarks. With no fine-tuning on the downstream benchmarks, the authors also report strong zero-shot performance, frequently outperforming earlier state-of-the-art systems that access the downstream training data. The code used in the ProcTHOR research will be made publicly available shortly; until then, ProcTHOR-10K has been released in a Google Colab notebook.

Among frameworks for building embodied AI environments, ProcTHOR by Allen Institute researchers has made a name for itself because of its procedural approach to generation. Furthermore, the dataset produced with ProcTHOR enables the training of simulated embodied agents in far more diverse environments.


Seoul Green-Lights Beta Test Run of Metaverse Seoul project

Metaverse Seoul

The Seoul Metropolitan Government (SMG) announced last year that it would be the first major city to enter the metaverse. The aim is to establish a virtual communication ecosystem, now known as “Metaverse Seoul,” for all facets of the city government, covering economic, cultural, tourism, educational, and civic services in three stages beginning this year.

On Wednesday, Seoul held a closed beta test run of the first stage of Metaverse Seoul. This “Introduction” phase will be followed by “Expansion” (2023 to 2024) and finally “Settlement” (2025 to 2026).

The test run included a virtual recreation of Seoul City Hall and Seoul Plaza, where people could engage in interactive activities and games. It also featured a virtual counseling room where young people can meet mentors in the metaverse and talk about their issues. Officials believe this will ease worries about feeling awkward talking to a prospective mentor in person.

About 3,200 users of Seoul Learn, the city’s online learning platform, signed up to take part in the test run, along with professionals from the Seoul IT Tech Governance Group. Seoul will work on improvements based on the suggestions made by these participants.

The first phase of Metaverse Seoul is anticipated to go live by the end of November. Following that, a multitude of facilities and services, such as the “Virtual Mayor’s Office,” a “Seoul FinTech Lab,” “Invest Seoul,” and “Seoul Campus Town,” will be gradually introduced. Through a designated “Virtual Tourist Zone,” the metaverse will also provide virtual copies of well-known tourist sites, including Gwanghwamun Plaza, Deoksugung Palace, and Namdaemun Market. Visitors will also be able to explore digital reconstructions of lost historical places like the Donuimun gate, which was razed during the Japanese colonial period.

Read More: UAE’s AI Minister Demands Laws and Actions Against Crimes in Metaverse

As part of the city’s Seoul Vision 2030 plan, the South Korean capital has invested KRW 3.9 billion (approximately €2.8 million) in the project. The mayor, Oh Se-hoon, stated that the project intends to make Seoul a city of coexistence, a worldwide leader, a secure city, and an emotional city of the future. Residents of Seoul will soon be able to don VR headsets to attend mass gathering events, speak with avatar officials, and see authentically recreated landmarks.

Other initiatives that have been mentioned include the Seoul Lantern Festival, which will be hosted in the metaverse beginning in 2023 and will be accessible worldwide. The city also announced that it will utilize the platform to produce services for the socially disadvantaged, such as safety and convenience content for individuals with disabilities, and that it will expand the platform to all municipal government sectors to increase efficiency.

Seoul is only one of a rising number of cities globally exploring ways to use metaverse technology to manage public services better, engage residents, and increase participation with companies or downtown areas. Other cities in this growing list include Dubai and Santa Monica. In July, Dubai unveiled its own metaverse strategy, which seeks to make the city a global hub for the metaverse community and one of the top ten metaverse economies in the world.

If used effectively, the metaverse offers a real chance to enhance municipal services and the quality of life for citizens. To accomplish this, urban authorities must be at the core of metaverse city initiatives.
