South Korea's Ministry of Science and ICT, headed by Minister Lim Hyesook, has announced a pan-government metaverse strategy as part of the Digital New Deal 2.0 initiative.
The metaverse strategy focuses mainly on four aspects. First, it aims to strengthen the metaverse ecosystem and the environment in which metaverse platforms can grow. Second, it seeks to nurture experts in the metaverse field and let people attend metaverse events without regional restrictions. Third, it will support leading metaverse companies with facilities such as demonstration spaces and funding.
Finally, the strategy sets ethical principles for the metaverse to deter illegal and unethical conduct while protecting digital assets and copyrights. Under the strategy, the government will also offer metaverse education for non-technical people.
Minister Lim Hyesook described the metaverse as an unexplored digital continent with unlimited potential, a place where young people can take on challenges, grow, and leap into a wider world. The Ministry will provide support and facilities to help South Korea become a leading global metaverse nation.
Clip Studio Paint, one of the most popular applications for digital illustration, has removed its new AI “Image Generator palette” after significant backlash from its users, according to a statement.
Clip Studio Paint had announced the new AI tool on November 29; only three days later, the company said it had decided to drop the feature.
“After our initial announcement, we received a great deal of feedback from the community and, hence, will no longer implement the image generator palette,” the statement reads. “We were so occupied with how generative AI technology can be used creatively that we lost track of what our core users desire from Clip Studio Paint as a creative tool.”
In its earlier announcement of the tool, the company had assured users that the generator would be powered by Stable Diffusion, a powerful text-to-image AI model, and added that it would not harvest users’ work to further train the model.
AI text-to-image generators have been controversial among artists, particularly digital illustrators, because their models were built by ingesting and analyzing vast numbers of images, many used without the artists’ consent. Illustrators are wary of such AI even as an assistive tool, hence the backlash to Clip Studio Paint’s announcement.
The world’s largest semiconductor manufacturer, Intel Corp, is shifting operations from traditional hubs, including Taiwan and China. Steve Long, corporate vice president of Intel, says that the company’s semiconductor opportunities are now moving to India due to its geographic benefits and government policy support like the Make in India campaign.
Until now, Taiwan has dominated semiconductor manufacturing thanks to its grip on the production of technologically advanced nodes. The Indian government has embarked on a mission to make India a strategic chip manufacturer by developing and manufacturing such nodes.
Long said, “Governance initiatives like Make-in-India are driving design opportunities from historical regions in Taiwan, or China or other parts of Southeast Asia to India. We see a big opportunity here.”
Beyond policy support, Intel is collaborating with several Indian telecommunications companies to enhance 4G and 5G technologies that leverage vRAN (virtualized radio access network) and O-RAN (open RAN) services.
Long expressed his excitement about working with Indian carriers because of their extensive design capabilities, which companies like Nokia and Ericsson have overlooked. He said that with Intel’s help, these newer companies could grow and export their capabilities.
DroneAcharya, a drone innovation startup, will open its IPO, to be listed on the BSE, on December 13. The company plans to offer over 62.90 lakh shares via a book-building process at a price band of ₹52-54 per share.
DroneAcharya Aerial Innovations is a Pune-based drone ecosystem start-up founded by Prateek Srivastava. The company provides land and underwater surveying services and aspires to produce 100% indigenous, customized drones.
It is one of the first private ventures to receive DGCA (Directorate General of Civil Aviation) certification and an RPTO (Remote Pilot Training Organization) license, which it obtained in March 2022. Since then, the company has trained over 180 drone pilots.
The company recently announced plans to acquire over 100 new drones and train 500+ pilots and instructors starting in 2023. Srivastava said, “Being a 40+ people team, we are now embarking on a 2.0 vision of growth and value creation – with DroneAcharya IPO listing and investment plans fortified.”
Beyond the IPO, the company has also launched several short, industry-relevant drone and GIS courses to help young Indians upskill and build modern careers within the drone ecosystem.
Microsoft has hired former Everbridge Chief Technology Officer (CTO) John Maeda as the new vice president of design and artificial intelligence, according to his LinkedIn account.
His LinkedIn post announcing his move to Redmond, Wash.-based Microsoft received more than 110 comments and over 840 reactions.
Maeda most recently worked as CTO of Everbridge, a Boston-based publicly traded critical event management (CEM) software provider. The company has a partner program for channel partners, service partners, and other vendors.
Everbridge hired Maeda in 2020 as its chief customer experience officer. In this role, Maeda led “Everbridge’s technology and product vision at the company, city, and country levels with a focus on large language models (AI/ML), outcomes-driven approaches to visualization, and the Japanese craft of kintsugi, as applied to digital products/services,” according to his LinkedIn.
Maeda previously worked at technology services provider Publicis Sapient for about a year, leaving the company in 2020 as executive vice president and chief experience officer, according to his LinkedIn account.
Adobe Stock will accept images generated by AI on its service, the company said in a blog post on Monday.
Unlike stock image services such as Getty Images, which has prohibited AI-generated illustrations on its platform, Adobe is embracing content created with generators like DALL-E, which is now open to everyone, and Stable Diffusion. These generators turn text prompts into art and otherworldly images.
“Adobe Stock contributors are using AI technologies to increase their earning potential, diversify their portfolios, and expand their creativity,” Sarah Casillas, senior director at Adobe, said in the blog post. Adobe Stock will accept art created with such models on the condition that it is labeled as such.
Leading up to Monday’s announcement, Adobe had been quietly testing AI-generated images. Casillas said the company was pleasantly surprised by the results. “It meets our quality standards and has been performing well,” she said.
Citing possible copyright issues, Getty Images said in September that it would not allow AI-generated images on its service. Adobe, however, has crafted terms intended to avoid such risks.
Creators must hold the rights to their art before they submit it to Adobe Stock, and they must read the terms and conditions for AI tools. They cannot submit images containing logos, notable people, famous characters, or real places. If they stick to the terms and conditions, artists may earn royalties on their AI-generated content.
Recently, OpenAI unveiled a prototype general-purpose chatbot that exhibits a remarkable diversity of new text-generation capabilities. The company’s language interface, known as ChatGPT, has become extremely popular online as users speculate about its potential to replace everything from playwrights to Google Search to college essays. Meanwhile, in an unexpected turn of events, Twitter CEO Elon Musk said on Sunday that he had “paused” OpenAI’s access to the microblogging platform’s database for training after finding out about it.
Musk stated in a tweet that he would like to learn more about the governance and future revenue plans of ChatGPT. He also noted that OpenAI, which he co-founded, started as a non-profit, open-source project; neither is true any longer.
Musk’s tweet sparked wide discussion. OpenAI CEO Sam Altman weighed in, tweeting: “Interesting to me how many of the ChatGPT takes are either ‘this is AGI’ (obviously not close, lol) or this approach can’t really go that much further.”
Musk responded that ChatGPT is “scary good,” adding, “We are not far from dangerously strong AI.”
Altman replied that he agrees we are getting close to dangerously powerful AI, in the sense that an AI could pose a significant cybersecurity risk, and that actual AGI might arrive in the next decade, so that risk must be taken very seriously too.
Today, Altman tweeted that ChatGPT has crossed 1 million users in less than a week since its launch last Wednesday.
Machine learning algorithms are fueled by data. Gathering relevant data is the most crucial and challenging step in creating a robust machine learning model that can successfully execute tasks like image classification. Unfortunately, just because data is becoming more abundant does not mean everyone can use it. Collecting diverse real-world data is complex, error-prone, time-consuming, and can cost millions of dollars. As a result, reliable outcomes are often out of reach, since there is a dearth of credible training data with which to train machine learning algorithms effectively. This is where synthetic data comes to the rescue!
Synthetic data is created by a computer, using 3D models of environments, objects, and humans to swiftly produce varied renderings of particular scenes or behaviors. It is becoming increasingly valuable because it avoids the copyright constraints and ethical ambiguity that come with real data, and it fills the gaps when real data is scarce or when existing image data fails to capture the nuances of the physical world.
By bridging the gap between reality and its representation, synthetic data prevents machine learning models from committing errors that a person would never make. However, there is a significant bottleneck: synthesis starts off simple but becomes more difficult as the quantitative and qualitative demands on the image data increase. Developing an image-generation system that yields useful training data requires expert domain knowledge.
To address such issues, MIT researchers from the MIT-IBM Watson AI Lab gathered a dataset of 21,000 publicly accessible image-generating programs from the internet, rather than building unique image-generating algorithms for a specific training purpose. These programs, each only a few lines of code and used without edits or modifications, generate a wide range of graphics with simple color and texture patterns; they include procedural models, statistical image models, models based on the architecture of GANs, feature visualizations, and dead leaves image models. The team then trained a computer vision model using this extensive collection of basic image-generating programs.
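For intuition, consider the dead leaves model, one of the program families mentioned above: it simply layers randomly colored, randomly sized disks on top of one another. The sketch below is an illustrative reconstruction of that idea, not the researchers’ code:

```python
# Illustrative "dead leaves" image generator: layers of random disks.
# A sketch of the general idea only, not the MIT researchers' code.
import random
from PIL import Image, ImageDraw

def dead_leaves(size=256, n_disks=400, seed=0):
    rng = random.Random(seed)
    img = Image.new("RGB", (size, size))
    draw = ImageDraw.Draw(img)
    for _ in range(n_disks):
        x, y = rng.randrange(size), rng.randrange(size)
        r = rng.randint(4, size // 4)  # disk radius
        color = tuple(rng.randrange(256) for _ in range(3))
        draw.ellipse((x - r, y - r, x + r, y + r), fill=color)
    return img

dead_leaves().save("dead_leaves_sample.png")
```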
According to the researchers, good synthetic data for training vision systems has two essential characteristics: naturalism and diversity. Interestingly, the most naturalistic data is not necessarily the best, because naturalism can come at the cost of diversity. The goal is not to replicate real images but to capture their key structural properties while remaining diverse.
Because these simple programs run so efficiently, the researchers did not need to render images in advance to train the model; they found they could generate images and train the model at the same time, which sped up the process.
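A minimal sketch of such a generate-while-training loop might look like the following. This is hypothetical PyTorch code under simplifying assumptions: the `synth_batch` stand-in and its random labels are invented for illustration, and the actual work draws on the image-generating programs above with its own training objectives:

```python
# Sketch: generate synthetic images on the fly while training (hypothetical).
import torch
import torch.nn as nn

def synth_batch(batch_size=32, size=64, n_classes=10):
    # Stand-in generator with arbitrary labels; in the paper's setting,
    # images would come from short image-generating programs instead.
    imgs = torch.rand(batch_size, 3, size, size)
    labels = torch.randint(0, n_classes, (batch_size,))
    return imgs, labels

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    imgs, labels = synth_batch()  # images are produced as they are
    loss = loss_fn(model(imgs), labels)  # consumed, never stored on disk
    opt.zero_grad()
    loss.backward()
    opt.step()
```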
The researchers pre-trained computer vision models for both supervised and unsupervised image classification tasks using their enormous dataset of image-generating programs. In supervised learning the image data are labeled; in unsupervised learning the model learns to categorize images without labels.
Compared to previous synthetically trained models, the models trained on this large dataset of programs classified images more accurately. The researchers also demonstrated that adding more image programs to the dataset further improved model performance, suggesting a straightforward method for increasing accuracy.
The accuracy levels were still inferior to those of models trained on actual data, but their method reduced the performance difference between models trained on real data and those trained on synthetic data by an impressive 38%.
The researchers also pre-trained with each image-generating program individually in order to identify the factors that influence model accuracy. They found that a model performs better when a program generates a more diverse set of images, and that vibrant images whose scenes occupy the full canvas are the most effective at boosting performance.
Through this research, the team raises questions about the true complexity of the computer vision problem: if very short programs can generate data that trains a high-performing computer vision system, then building such a system may be simpler than previously thought and may not require enormous data-driven pipelines to reach adequate performance. Further, their methods make it possible to train image classification systems when image datasets are unavailable, sidestepping the expense, bias, privacy concerns, and ethical issues of data collection. The researchers clarified that they are not advocating eliminating datasets from computer vision entirely, since real data may still be needed for evaluation, but rather exploring what can be done in the absence of data.
Flipkart is investigating how Web3 can reshape the future of commerce, consumption, and value creation, and transform the shopping experience for millions of people, through its partnership with Polygon and a new Blockchain-eCommerce Centre of Excellence (CoE).
The collaboration follows a number of recent forays into Web3 by Flipkart. Over the past year, the leading e-commerce company has been testing Web3 projects through Flipkart Labs, its innovation arm, introduced earlier this year to incubate concepts that bring innovation to the Indian e-commerce sector. Through Labs, Flipkart has explored NFTs, virtual immersive stores, and other blockchain use cases related to Web3 and metaverse commerce.
Before its recent festival season sale, Flipkart collaborated with Polygon, the Ethereum scaling protocol, on Flipverse, its interactive virtual shopping platform on the metaverse. Flipverse was created by eDAO, a company established by Polygon, in partnership with 23 teams from the tech, design, Web3, and brand industries. Through NFTs that encouraged new forms of engagement and community exploration, Flipverse reoriented the traditionally top-down relationship between customers and brands.
Jeyandran Venugopal, Chief Product and Technology Officer, Flipkart, said, “With the COE, we look forward to working with Polygon and leveraging their expertise and technical know-how to successfully onboard users not just to the value proposition of Web3 or Metaverse commerce but also Web3 in general.”
This partnership aims to advance research and development at the nexus of Web3 and experiential retail, which will increase adoption and impact in India and throughout the world, according to Sandeep Nailwal, co-founder of Polygon. He added that the Blockchain-eCommerce Centre of Excellence “will be a driving force in the future development of e-commerce.”
The State Bank of India, ICICI Bank, Yes Bank, and IDFC First Bank are the first four banks to participate in the Reserve Bank of India’s testing of its retail central bank digital currency (CBDC), the digital rupee (e₹-R), in Mumbai, New Delhi, Bengaluru, and Bhubaneswar.
This pilot program will ultimately include participation from four other banks: Bank of Baroda, Union Bank of India, HDFC Bank, and Kotak Mahindra Bank. It will also be introduced in more cities, including Ahmedabad, Gangtok, Guwahati, Hyderabad, Indore, Kochi, Lucknow, Patna, and Shimla.
The introduction of a digital currency by the RBI this fiscal year is intended to advance the digital economy and facilitate effective currency management, as per Finance Minister Nirmala Sitharaman’s Budget 2022–23 Speech from earlier this year.
The RBI states that a CBDC is legal tender issued in digital form by a central bank. It holds exactly the same value as fiat money and may be exchanged for it one-to-one. CBDC can be transacted via blockchain-backed wallets, which make payments final and reduce settlement risk.
Because the CBDC is freely convertible against physical money, the digital currency can be exchanged for cash equivalent to paper notes. Unlike UPI, using e-rupees does not require a consumer to have a bank account.
In contrast to cryptocurrencies, the digital rupee has the key benefit of being centralized, i.e., administered by a single entity, which lowers the volatility risk associated with the likes of Bitcoin and Ethereum. It can also aid in preventing fraud: with inherent programmability and controlled traceability, a CBDC can combat fraud proactively, whereas the existing system relies on post-facto inspections to do so.
A report describing the Reserve Bank of India’s ambitions for the digital rupee, or “e-rupee,” was published earlier in October. It also outlined the reasons for the implementation of a CBDC and how it would be tested in distinct phases.
As per the official announcement by the central bank on Tuesday, the digital rupee will be distributed through intermediaries like banks and will be produced in the same denominations as present paper money and coins.
Users will be able to process e₹-R transactions using a digital wallet provided by the participating banks and stored on mobile devices, the central bank explained, adding that both person-to-person (P2P) and person-to-merchant (P2M) payments will be possible. A customer can pay a vendor by scanning the QR code displayed at the point of sale. The digital currency can be converted into other forms of money, such as bank deposits, as needed, but will not accrue any interest.
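To make the scan-to-pay flow concrete, here is a hypothetical sketch of how a merchant QR code could encode a payment request. The JSON layout and field names are invented for illustration; the actual e₹-R payload format has not been published:

```python
# Hypothetical merchant payment QR code for a CBDC wallet.
# The JSON payload below is invented for illustration; the real
# e-rupee format is not public.
import json
import qrcode  # pip install qrcode[pil]

payment_request = {
    "merchant_id": "VENDOR-0042",  # assumed identifier scheme
    "amount_inr": 250,             # amount in rupees
    "currency": "e-rupee",
}

img = qrcode.make(json.dumps(payment_request))
img.save("merchant_payment_qr.png")  # the customer's wallet app scans this
```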
The Reserve Bank of India stated that the pilot would evaluate the stability of the complete creation, distribution, and retail use of digital rupees in real-time. Based on the insights learned from this pilot, it would eventually test more e-rupee features and applications.
To get started, download the CBDC app and provide a phone number associated with a bank account. After you successfully register, the app issues you a digital wallet with a unique ID. You may then add money to the wallet by transferring funds from your bank account, and the app lets you choose currency in whatever denominations you like. To load ₹20,000, for example, you might ask for 20 units of ₹500, 50 units of ₹100, and 100 units of ₹50. Once you confirm, digital cash in these denominations appears in your wallet.
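As a quick sanity check on that arithmetic, a few lines of Python (purely illustrative, not part of any CBDC app) confirm that the requested denominations sum to the target amount:

```python
# Check that a denomination request adds up to the target amount.
# Purely illustrative arithmetic; not part of any real CBDC app.
request = {500: 20, 100: 50, 50: 100}  # denomination -> number of units

total = sum(denom * units for denom, units in request.items())
print(total)            # 20000
assert total == 20_000  # matches the 20,000 rupees the wallet was asked to load
```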
Several nations are exploring centralized digital currencies. While some are carrying out research, others have launched trial programs or formally implemented digital money.
In 2020, the Bahamas introduced the Sand Dollar, one of the first digital currencies issued by a central bank. In order to test integrating their domestic CBDCs, the central banks of Sweden, Norway, and Israel have started a project with the Bank for International Settlements. In October this year, the Central Bank of Nigeria celebrated the first anniversary of the launch of Africa’s first digital currency, the e-Naira.
This month in the United States, a coalition of banking institutions led by the Federal Reserve Bank of New York, HSBC, Mastercard, and Wells Fargo announced the launch of the Regulated Liability Network, a proof-of-concept digital money network. Through the Venus Initiative, France and Luxembourg settled a bond worth 100 million euros (US$104 million) using an experimental CBDC. And on Monday, the National Bank of Ukraine unveiled plans for an electronic hryvnia that could be used for a variety of purposes, including the issuance and exchange of virtual assets.