
How is 2142 taking the leap with an NFT comic book?


2142 is a community-driven sci-fi transmedia project created by a team of innovative minds from Belgrade (aka the city that never sleeps), Serbia. The team of seven, Dusan Zica, Mladen Merdovic, Nenad Krstic, Rade Joksimovic, Darjan Mikicic, Vladimir Pajic, and Dragan Jovanovic, all have backgrounds in the video game industry. The project launched in May 2022 as the world’s first NFT (non-fungible token) webcomic, narrating a super cool sci-fi adventure. To dig into this brilliant idea of an NFT comic and the fictional world the team is building, Analytics Drift spoke with Dusan Zica, CEO and co-founder of 2142.

What is 2142?

2142 is an NFT project and comic book that might change the world’s perspective on NFTs. According to a report from nonfungible.com, NFTs have attracted a vast fandom, with marketplace trading reaching as high as $17.6 billion in 2021. NFTs are bought and sold on platforms like Gemini, Binance, OpenSea, and Coinbase, and held in wallets such as MetaMask. They are unique cryptographic tokens that exist on a blockchain, cannot be replicated, and can represent real-world items like artwork and real estate. Tokenizing assets makes buying, selling, and trading more efficient than ever, cutting the probability of fraud to nearly zero while looking super cool at the same time.
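To make the idea concrete, here is a minimal, purely illustrative Python sketch of the data structure behind an NFT: a registry that maps a unique token ID to its owner. Everything in it (the class, the hash-based ID, the names) is a simplification invented for this article, not how any particular blockchain implements tokens.

```python
import hashlib

class ToyNFTRegistry:
    """Illustrative stand-in for an on-chain NFT ledger (not a real blockchain)."""

    def __init__(self):
        self.owners = {}    # token_id -> owner address
        self.metadata = {}  # token_id -> asset description / URI

    def mint(self, owner: str, asset_uri: str) -> str:
        # Derive a token ID from the asset; each asset can be minted only once.
        token_id = hashlib.sha256(asset_uri.encode()).hexdigest()
        if token_id in self.owners:
            raise ValueError("token already exists: NFTs are unique by design")
        self.owners[token_id] = owner
        self.metadata[token_id] = asset_uri
        return token_id

    def transfer(self, token_id: str, new_owner: str, current_owner: str) -> None:
        # Ownership changes hands whole; there is no interchangeable unit.
        if self.owners.get(token_id) != current_owner:
            raise PermissionError("only the current owner can transfer the token")
        self.owners[token_id] = new_owner

registry = ToyNFTRegistry()
tid = registry.mint("alice", "ipfs://comic-page-001")
registry.transfer(tid, "bob", "alice")
print(registry.owners[tid])  # bob
```

The key property the sketch captures is that each token ID is unique and indivisible, which is what makes the token non-fungible.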

Collectors have swamped the NFT marketplace with funky monkey pictures for a while now. While the monkeys may sway other people, Dusan has a different opinion: “I hate today’s NFTs. I hate stupid monkeys with golden cards in their mouth because I truly believe it’s just a matter of hype. It doesn’t have any aesthetical, philosophical, or artistic value, and they don’t make sense to me.” To give NFTs value and meaning, the team started building the NFT comic book “2142” and created its own NFT marketplace. The comic’s story arc revolves around the idea that technology and spirituality can go together rather than manipulate each other. The 2142 community, around 1,500 people strong, discusses combining science fiction with mythology and cosmic myths grounded in real astronomical theories.

The comic opens with the mining of the last Bitcoin in 2140. By 2142 AD, planet Earth has turned into a polluted dystopian world, high and dry, controlled by corporations and brand sects in both the real world and the Metaverse, when a long-forgotten secret emerges from legend to give hope.

Here is the synopsis of the comic:

As the world lies in ruins, the Metaverse is the opium of the masses, and the last Bitcoin has been mined to shake the foundations of the known world, three troubled humans and one strained AI get trapped in a hideous astral conflict at the end of the great cosmic cycle.

Follow them on an epic journey through their spiritual awakening in the physical world, virtual reality, and astral planes, while, at the same time, the ancient deities start to awaken, alien demons ravage the Earth, and AI fights AI for the decentralized liberation of all conscious entities.

Another myth becomes a reality as the mysterious Satoshi’s wallet activates for the first time to purge the planet of mega-corps and Metaverse brand sects. Deep beneath the mortal coil of our suffering planet, the chant is murmured: “Mother Earth, it is time…”

The creation of 2142 involves borrowing existing things and adding fictional elements. Dusan added, “We decided to use Satoshi Nakamoto, the mysterious maker of bitcoin, as one of our main characters who is going to be practically like an artificial intelligence god in our comic book, tabletop, and RPG.” Dusan also confirmed that 2142 is planning a video game, which is in the pre-production phase and will soon be announced on the website.

How is 2142 an investment?

Apart from the great fiction one gets to read, buying the comic book means buying NFTs too. A bundle of 21 NFTs is priced at $30, which is not much for NFTs and a comic book. Additionally, collecting the comic book will give you doubles, duplicate pictures, panels, pages, and covers, that can be sold on. Since 2142 has its own marketplace, trading comic books and NFTs is easier, helping buyers recoup their investments. “We are not targeting those NFT guys. We’re targeting comic book fans, and we are targeting science fiction fans, and we are targeting video game players and gamers,” Dusan said about their target audience.


There has been a crash in the NFT market this year; the hype has faded over the last couple of months, and barely any NFTs are selling. In this phase, 2142 has built a AAA-quality teaser trailer and is preparing to release an animatic trailer. Dusan said, “It’s the worst and the best time to do the NFTs. It’s the worst time if you want to sell them because people are not buying the NFTs right now. It’s not even close to what it was one year ago. And the best time to do NFTs because now, if you make a good NFT project, a long-term project, you can become a leader in a year or two or five.”


Gauging public confidence in AI-based content moderation tools


According to a recent study from Cornell University, people’s perceptions of the moderation decision and the moderation system are influenced by the type of moderator, human or AI, and the “temperature” of harassing content online.

The study, which has just been released in the journal Big Data & Society, made use of a unique social media website where users could submit food-related content and leave comments on others’ posts. The website includes a simulation engine called Truman, an open-source platform that uses pre-programmed bots developed and managed by researchers to replicate the activities (likes, comments, posts) of other users. The Cornell Social Media Lab, under the direction of communication professor Natalie Bazarova, developed the Truman platform, which was named after the 1998 movie “The Truman Show.”

The Truman platform gives researchers the social and design flexibility to explore a range of study topics concerning human behaviors in social media. This allows them to create a controlled yet realistic social media experience for participants. According to Bazarova, Truman has been a really helpful tool for the group and other researchers to develop, implement, and test designs and dynamic treatments while allowing for the gathering and monitoring of people’s actions on the site. 

For digital and media platforms, social media websites, and e-commerce marketplaces alike, content moderation has become a crucial technique for fostering success. It entails removing irrelevant, offensive, unlawful, or otherwise improper content regarded as unsuitable for the general public.

While AI may not match human proficiency at flagging every piece of offensive content on social media or other websites, it is invaluable when faced with humongous troves of online data. Besides, AI moderation costs are quite low, and it spares human moderators the mental trauma of viewing hours of inappropriate content.

Read More: The Buck Stops Where: Insight into misuse of AI by Israel Government

Nearly 400 participants were informed that they would be beta testing a new social media platform for the study. They were chosen at random to participate in one of six experiment conditions that varied the type of content moderation system (users; AI; unknown source) and the harassing comment they encountered (ambiguous or clear).
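As a toy illustration of that 3 × 2 design, the sketch below randomly assigns participants to one of the six conditions. The condition labels are paraphrased from this article, not taken from the study’s materials, and real experiments typically balance group sizes rather than assign purely at random.

```python
import itertools
import random

moderator_types = ["human", "AI", "unknown"]        # who appears to moderate
comment_types = ["ambiguous", "clearly_harassing"]  # tone of the harassing comment

# The full 3 x 2 factorial design yields six experimental conditions.
conditions = list(itertools.product(moderator_types, comment_types))

def assign(participants):
    """Randomly assign each participant to one of the six conditions."""
    return {p: random.choice(conditions) for p in participants}

assignments = assign([f"participant_{i}" for i in range(400)])
print(assignments["participant_0"])  # e.g. ('AI', 'ambiguous')
```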

Participants had to log in at least twice daily for two days. During this time, they were exposed to a harassing comment exchanged between two other users (bots), which was moderated by a human, an AI, or an unidentified source.

The researchers discovered that when content is fundamentally equivocal, users are more inclined to question AI moderators, particularly how much they can trust the moderation decision and how accountable the moderating system is. For a comment more visibly harassing in tone, the level of confidence in AI, human, or unidentified moderation was almost the same, and there were no differences in how participants judged the fairness, objectivity, and comprehensibility of the moderating process. Overall, the results highlight that when an AI moderator is visible, people tend to doubt the moderation decision and system more, underscoring how challenging it is to successfully deploy autonomous content moderation in social media environments.

According to Marie Ozanne, the study’s lead author and assistant professor of food and beverage management, both trust in the moderation decision and the perception of system accountability, that is, whether the system is considered to act in the best interests of all users, are subjective assessments. When there is uncertainty, an AI appears to be questioned more than a human or an unidentified source of moderation.

The researchers propose that future studies examine how social media users would respond if they saw people and AI moderators working together, with computers able to manage vast volumes of data and humans able to read comments and identify subtle linguistic cues. In other words, they are looking to research a hybrid moderation system to understand the complex process of content moderation. This is important because the increasing negativity in the social media landscape has led to the adoption of AI as a content moderator.
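Such a hybrid pipeline is straightforward to sketch: a model scores each comment, acts automatically only when confident, and routes the ambiguous middle band, precisely where the study found AI is trusted least, to a human reviewer. The scoring thresholds below are placeholders for illustration, not values from the study.

```python
def moderate(comment: str, toxicity_score: float,
             remove_above: float = 0.9, allow_below: float = 0.2) -> str:
    """Route a comment based on a model's toxicity score in [0, 1].

    High-confidence cases are handled automatically; the ambiguous middle
    band (exactly where users trust AI moderation least) is escalated
    to a human reviewer.
    """
    if toxicity_score >= remove_above:
        return "auto_remove"
    if toxicity_score <= allow_below:
        return "auto_allow"
    return "escalate_to_human"

print(moderate("you people are the worst", 0.55))  # escalate_to_human
```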

It is natural that participants questioned the AI moderators when presented with ambiguous content. This stems mostly from concerns that fully automated AI content moderation would be overly blunt and could unintentionally suppress the right to create and circulate essential information.

While NLP has made great strides in content parsing, AI systems still struggle to interpret context effectively. They currently cannot comprehend fundamental human notions like sarcasm and irony, or the political and cultural context, both of which change from time to time and from region to region.

Read More: New York City Proposes New Rules on Employers’ Use of Automated Employment Decision Tools

Until now, AI has aided content moderation by using visual recognition to identify broad categories of objectionable content (such as nudity or graphic accidents). It has also relied on matching content against lists of prohibited material, such as propaganda films, child pornography, and copyrighted works, which requires humans to first compile the list. In either instance, AI has falsely flagged content about ethnicity, sexual identity, and culture, eroding people’s confidence in AI-based content moderation.
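The blocklist-matching approach is conceptually simple. The sketch below, with an invented blocklist, matches exact file hashes; production systems instead use perceptual hashes that still match after resizing or re-encoding.

```python
import hashlib

# Curated by human reviewers: fingerprints of known prohibited content.
# This single entry is sha256(b"test"), included so the example runs.
blocklist_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    # Exact-match stand-in; real systems use perceptual hashing so that
    # resized or re-encoded copies of an image still match.
    return hashlib.sha256(data).hexdigest()

def is_prohibited(upload: bytes) -> bool:
    return fingerprint(upload) in blocklist_hashes

print(is_prohibited(b"test"))  # True: its hash is in the example blocklist
```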

At the same time, we need to rely on AI for content moderation as new data is constantly created. Humans cannot flag every piece of content in a real-time conversational exchange between two parties. Further, studies have shown that constant exposure to online toxicity has detrimental effects on the human mind, from PTSD to real-life violence (e.g., violence fueled by religious nationalism and Islamophobia in Myanmar).

The only solution is to make a ‘responsible, fair, trustworthy, and ethical’ AI system adept in content moderation with the help of humans.

Truman can be downloaded for free from its public GitHub repository, and Cornell encourages other scientists to design and carry out their own studies using it. The university has also released a PDF document with a full how-to guide for installing and using Truman in your research.


UAE set to become world’s first metaverse-as-a-service hub


With the arrival of Singapore-based Metapolis, the UAE is poised to see the next step in the growth of the metaverse. The company bills itself as the world’s first “metaverse-as-a-service” (MaaS) enterprise and has ambitious plans to link organizations from diverse industries to the virtual world.

Arabian Business reports that the Metapolis MaaS platform will offer a full range of services to help all economic sectors in the UAE and the greater GCC area establish new business insights and income streams, and extend their customer and client engagement into the virtual world. Though the metaverse is currently associated with the arts, sportswear, real estate, and fashion retail, other industries, such as healthcare, are making modest efforts to build a presence in the fast-developing virtual realm.

According to Sandra Helou, co-founder and chief commercial officer of Metapolis, just as SaaS (software-as-a-service) emerged as the established business model dominating the web2 space, the transition to web3 will surely open up comparable market potential for MaaS businesses. Helou added that Metapolis guides clients across the full spectrum of services, from conceptualization through metaverse implementation and development.

Read More: Japanese city implements metaverse schooling service to address absenteeism

The Singapore-based firm revealed last week that it plans to establish its worldwide headquarters in Dubai as part of its ambitious plans for the UAE. Helou says the company, which already conducts business in the US, Europe, and Asia, intends to expand its physical presence into other GCC nations, though a specific date is yet to be set.


JP Morgan Executes First DeFi Transaction on Polygon Blockchain


JP Morgan, an international banking company, has completed the first-ever cross-border transaction using decentralized finance (DeFi) on a public blockchain. In the transaction, JP Morgan issued 100,000 tokenized Singapore dollars (about $71,000) on the Polygon Layer-2 blockchain using a modified version of the Aave protocol’s smart contract code. JP Morgan then swapped the tokenized Singapore dollars for tokenized Japanese yen, with Tokyo-based banking firm SBI Digital Asset Holdings on the other end of the transaction.
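Conceptually, the flow resembles the toy ledger below: deposits are tokenized, then swapped atomically between counterparties. This is only an illustration of the idea in Python; the actual pilot ran on a modified Aave smart contract on Polygon, and the yen amount here is invented.

```python
class ToyLedger:
    """Illustrative tokenized-deposit ledger (concept only, not Polygon/Aave)."""

    def __init__(self):
        self.balances = {}  # (holder, token) -> amount

    def mint(self, holder, token, amount):
        key = (holder, token)
        self.balances[key] = self.balances.get(key, 0) + amount

    def swap(self, a, token_a, amt_a, b, token_b, amt_b):
        # Atomic exchange: both legs settle or neither does.
        assert self.balances.get((a, token_a), 0) >= amt_a
        assert self.balances.get((b, token_b), 0) >= amt_b
        self.balances[(a, token_a)] -= amt_a
        self.balances[(b, token_b)] -= amt_b
        self.mint(b, token_a, amt_a)
        self.mint(a, token_b, amt_b)

ledger = ToyLedger()
ledger.mint("JPMorgan", "tSGD", 100_000)  # tokenized Singapore dollars
ledger.mint("SBI", "tJPY", 10_400_000)    # illustrative yen amount, not from the pilot
ledger.swap("JPMorgan", "tSGD", 100_000, "SBI", "tJPY", 10_400_000)
print(ledger.balances[("JPMorgan", "tJPY")])  # 10400000
```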

The first-ever industry pilot transaction was conducted on November 2 under Project Guardian, an initiative of the Monetary Authority of Singapore (MAS) that investigates potential decentralized finance (DeFi) applications in the wholesale funding market. In other words, the pilot marked a new step in researching use cases for tokenized assets and DeFi protocols, including how traditional financial institutions could conduct financial transactions with them. The pilot program also included participation from Oliver Wyman Forum, a corporate leadership platform, and DBS Bank, the largest bank in Singapore.

The exchange was not strictly a cryptocurrency transaction because the parties utilized virtual representations of fiat currencies, but it still exemplifies how actively big financial institutions are embracing blockchain technology.

According to MAS Chief Fintech Officer Sopnendu Mohanty, the latest pilot has contributed to developing the nation’s digital asset strategy. He noted that live pilots led by industry participants exhibit how, with the right safeguards in place, digital assets and decentralized finance have the potential to transform capital markets.

Read More: Nubank Plans to Use Polygon Tech To Create its Own Crypto Asset

Umar Farooq, CEO of Onyx by JP Morgan, a blockchain-focused business unit inside the asset management firm, informed Bloomberg that JPMorgan’s on-chain transaction was the first time that a large bank, probably any bank, has tokenized deposits on a public blockchain.

Oliver Wyman Forum, in collaboration with DBS Bank, JP Morgan, and SBI Digital Asset Holdings, published a whitepaper summarizing the key learnings from the pilot program. The paper covers the gains in digital asset interoperability and transaction efficiency that institutional DeFi protocols can bring to financial markets.

Aave believes the pilot represents a significant step toward linking traditional finance assets to DeFi, and that it could mark a turning point for the industry.


OpenAI releases public beta of DALL-E API


The image-generating AI system DALL-E 2 from OpenAI is finally available as an API, allowing developers to embed it in their applications, websites, and services. DALL-E 2, an enhanced version of DALL-E, is a transformer-based model that lets users create and modify pictures from natural language inputs. OpenAI announced in its blog that developers can now quickly integrate its image-generating tool and take advantage of its capabilities. With this announcement, DALL-E joins GPT-3, Embeddings, and Codex on OpenAI’s API platform.

On October 20, OpenAI shared details of how the New York City-based startup Cala uses the DALL-E API for a particular business use case. By providing a digital platform that enables designers and producers to develop and produce clothing lines, Cala positions itself as the “world’s first operating system for fashion,” integrating the process from product conceptualization to order fulfillment. DALL-E-powered text-to-image tools will let Cala users create fresh visual design concepts from reference photos or natural language descriptions. Tel Aviv-based Mixtiles also registered for early access to the API after co-founder Eytan Levit noticed the potential of the image-generating technology.

According to Luke Miller, product manager at OpenAI, the API has three main functions. Users have the option to create a picture, edit a portion of it, and generate various revisions of that same image.
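In practice, those three functions map onto three endpoints in OpenAI’s client libraries. The sketch below uses the image methods of the openai Python package as they existed at the public-beta launch; the file names and prompts are placeholders, and the interface may evolve.

```python
import openai

openai.api_key = "sk-..."  # your private API key from the OpenAI dashboard

# 1. Generate an image from a text prompt.
gen = openai.Image.create(
    prompt="a retro-futuristic comic panel of a neon-lit dystopian city",
    n=1,
    size="1024x1024",
)
print(gen["data"][0]["url"])

# 2. Edit a portion of an existing image; the transparent region of the
#    mask tells the model where to paint.
edit = openai.Image.create_edit(
    image=open("panel.png", "rb"),
    mask=open("panel_mask.png", "rb"),
    prompt="add a glowing airship in the sky",
    n=1,
    size="512x512",
)

# 3. Produce variations of the same image.
var = openai.Image.create_variation(
    image=open("panel.png", "rb"),
    n=2,
    size="256x256",
)
```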

Registration with OpenAI is required to use the API, and you will need a private API key to access the DALL-E generator. OpenAI charges per generated image based on resolution: 1024×1024, 512×512, and 256×256 images cost $0.020, $0.018, and $0.016 per image, respectively.

While the DALL-E API is now in public beta, OpenAI will keep tweaking and enhancing it through the end of the year. According to Luke, the company is enthusiastic about all the ways developers will adapt the technology to unique requirements, applications, and communities.

One major change from earlier DALL-E output is that images created through the API will no longer require a watermark. OpenAI introduced watermarking during the DALL-E 2 beta as a means of identifying which images came from the system; with the release of the API, it has made watermarking optional.

Read More: How AI Image Generators are Compounding existing Gender and Cultural Bias woes?

However, with the availability of the API, minimal changes have been made to the usage policy. This may disappoint those hoping OpenAI would address the ethical implications of generative AI tools, both on legal grounds and around bias in results. On the plus side, using the DALL-E 2 API to generate violent, pornographic, or hateful content is still banned, and the company continues to use a combination of automated and human monitoring to keep users from sharing or uploading pictures of people without their permission, or images they do not own or have rights to.


Tesla to start mass production of Cybertruck at the end of 2023


Tesla announced that it will begin the mass production of Cybertruck by the end of the year 2023, two years after the initial target for the long-anticipated pickup truck that Elon Musk unveiled in 2019.

According to Tesla, it is preparing its Austin, Texas, plant to build the new model, with early production set to start in mid-2023. Musk said on a conference call with financial analysts, “We’re in the final lap for Cybertruck.”

A gradual ramp-up in the second half of next year to full output of the sharp-angled electric truck means Tesla would not record a full quarter of production revenue on the new model, seen as key to its growth, until early 2024.

Read More: Meta’s India Head Ajit Mohan Quits To Join Rival Snap

It would also mean another year’s wait for hundreds of thousands of potential buyers who paid $100 to book a Cybertruck in one of the most closely tracked and highly anticipated electric vehicle launches ever.

Musk introduced the Cybertruck in 2019 at an unveiling where its designer cracked the vehicle’s supposedly unbreakable armor glass windows. The company has pushed back production roughly three times: from late 2021 to 2022, then to early 2023, and now, most recently, to a mid-2023 target for initial production.

In 2019, Tesla announced an initial price of $40,000, but prices for the new vehicle will be higher, as Tesla has also raised prices across its lineup.


Google is Developing an AI App that Creates Images from Text


Tech giant Google announced that it is developing an AI application that creates images from input text. Users can generate images by typing only a few words. The app offers a “City Dreamer” function, with which users can construct buildings, and a “Wobble” feature for interacting with cartoon monsters generated by the app.

The AI app was announced at Google’s AI event in New York and will be accessible through the AI Test Kitchen app, which lets users test prototype AI-powered systems from the company’s labs before they are put into production.

Read More: Meta’s India head Ajit Mohan quits to join rival Snap.

The app will roll out shortly; Google has not revealed a date and plans to take it “slow.” Douglas Eck, Google’s principal scientist in New York, said, “We also have to consider the risks that generative AI can pose if we do not take good care, which is why we have been slow to release them.”

Additionally, Google demonstrated advancements in AI tools that assist with coding and audio-production jobs, as well as Phenaki, a text-to-video model that can turn a sequence of written prompts into short video stories.


Meta’s India head Ajit Mohan quits to join rival Snap


Facebook-parent Meta Platforms said on Thursday that its India head, Ajit Mohan, has stepped down after four years, while a media report said he would join rival Snap. According to sources, Mohan will serve as Snap’s President of the Asia-Pacific business. 

According to a spokesperson, Manish Chopra, head and director of partnerships at Meta India, will take over in an interim capacity. Nicola Mendelsohn, Vice President of Meta’s Global Business Group, said that Mohan has played a significant role in shaping and scaling the company’s India operations over the last four years.

Mendelsohn said, “We remain deeply committed to India and have a strong leadership team to carry on all our work and partnerships.”

Read More: NVIDIA Collaborates With Mozilla Common Voice For Speech AI

Mohan joined Meta in January 2019 as vice-president and managing director of its India business. During his tenure, Facebook’s family of apps, including Instagram and WhatsApp, added over 300 million users in India, and the firm made a series of ambitious investments in the country, including cutting a $5.7 billion check to Indian telecom giant Jio Platforms and ramping up WhatsApp’s commerce engine.

Snap has ramped up efforts in India and several other Asian markets recently and pushed to introduce a series of localized features. The app has quadrupled its user base to 100 million in India in the past three years. 


Google Announces a New Project to Develop a Single AI-Language Model Supporting 1000 Spoken Languages


Google has announced a new project to develop a single AI language model supporting over 1,000 spoken languages. The company is actively researching the domain because, of the more than 7,000 languages spoken around the world, only a few are well represented online.

Google has been steadily developing and enhancing language models, starting with MUM and LaMDA. That work was followed by Pathways, a neural architecture designed to work more like the brain, which Google used to build PaLM, a 540-billion-parameter language model. As part of the same ambition, its Universal Speech Model supports over 400 languages.

Pathways-driven language models aid in breaking down language barriers by supporting more languages. Numerous languages have been added to Google Translate due to the development of such language models. 

Read More: ALaaS: A Data-Centric MLOps Approach to AI Using Server-Client Architecture

The proposed AI model also aims to bring greater inclusion of marginalized communities. Google is partnering with several marginalized groups to source speech data and extend its language recognition and translation services.

Other tech companies like Meta are also working in the domain to include marginalized languages and give them a fair representation with models like No Language Left Behind.


How AI Image Generators are Compounding existing Gender and Cultural Bias woes?


AI image generators are one of the hottest trends this year, gaining traction with the release of OpenAI’s DALL-E 2 in April and the self-titled product from independent research lab Midjourney in July. Additionally, Google announced Imagen in May, and Stability AI rolled out the text-to-image generator Stable Diffusion. We were also blessed with knock-offs like DALL-E Mini (later renamed Craiyon), hosted on Hugging Face. While these innovations are already sparking controversies about ownership and copyright violation, the results also display another challenge: perpetuating gender and cultural bias.

Hugging Face AI researcher Sasha Luccioni developed a tool, the Stable Diffusion Explorer, to demonstrate how bias plays out in text-to-image generators. Entering “ambitious CEO” produced images of men in various black and blue suits, whereas “supporting CEO” returned an equal number of women and men.

Luccioni explained to Gizmodo that Stable Diffusion is built on the LAION image set, which comprises billions of pictures, photos, and other images scraped from the internet, including image-hosting and art sites. The way images are categorized in this data gives rise to the model’s gender, as well as racial and cultural, bias. According to Luccioni, if the training dataset for a prompt comprises 90% male and 10% female images, the system is trained to reproduce that 90%. That is the most extreme case, but the greater the diversity of visual data in the LAION dataset, the more likely the Stable Diffusion Explorer is to generate bias-free results.
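The mechanism Luccioni describes is easy to simulate: a model that merely mirrors its training distribution reproduces whatever imbalance that distribution contains. The toy numbers below are illustrative only, not measurements of LAION or Stable Diffusion.

```python
import random
from collections import Counter

# Hypothetical training distribution for images tagged with a given prompt.
training_images_for_prompt = ["man"] * 900 + ["woman"] * 100  # 90% / 10% split

def naive_generator(n_samples: int):
    """A model that simply mirrors its training distribution."""
    return [random.choice(training_images_for_prompt) for _ in range(n_samples)]

print(Counter(naive_generator(1000)))
# Roughly Counter({'man': ~900, 'woman': ~100}): bias in, bias out.
```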

In a Risks and Limitations document released in April, OpenAI acknowledged that its technology can reinforce prejudices: it generated photos that disproportionately depicted white individuals and frequently portrayed Western settings, such as Western-style weddings. Similarly, DALL-E 2 skewed toward men for the prompt “builder” and toward women for “flight attendant,” despite the fact that both men and women work in both roles.

Large tech companies are generally hesitant to grant unrestricted access to their AI products over concerns about misuse, in this case, the generation of ethically questionable content. But in the last few years, several AI researchers have begun releasing AI products to the public, prompting experts to question how the technology might be exploited. Even though Stability AI touts Stable Diffusion as a far more “open” platform than its competitors, it is actually quite unregulated. OpenAI, in comparison, has been upfront about the biases in its AI image generators.

Even Google mentioned that Imagen encodes social prejudices, as its algorithms also produce content that is frequently racist, sexist, or harmful in other creative ways.

The core inspiration behind Stable Diffusion is to make AI-based image generation more accessible; however, that openness has aged poorly. Though the official version of Stable Diffusion contains safeguards to prevent the production of nudity or gore, the entire code of the model has been made available, so others have been able to remove such restrictions, allowing the public to generate explicit content and thereby feeding more bias into the ecosystem. Another gray area is that AI image generators cannot state or show how well they understood a prompt while generating results. Beyond hazy or disfigured output like Craiyon’s, plainly incorrect results can be frustrating, for example, images of individuals dressed as judges for a “lawyer” prompt.

Read More: Study Finds Humans Fail to Distinguish Real Image from Deepfakes: An Impending Concern

As AI image generators have improved so much over the past year, people have been motivated to quit their jobs, pursue AI art, and fantasize about a future in which artificial intelligence lets them experiment with creative material. The earlier generation of generative AI was built on GANs, or generative adversarial networks, notorious for their ability to create deepfakes of humans by pitting two AI models against one another to see which can better produce a picture serving a certain purpose. Today’s AI image generators instead rely on transformers, first introduced in a 2017 Google paper. Transformers are trained on larger datasets, generally web-scraped online data, including user content from social networking sites. And every social media user knows that social networking sites can be breeding grounds for ethnic and racial hate, along with sexist or misogynistic content. Training transformer models on online (visual) data that already leans toward human socio-ethnic bias is therefore something that needs to be addressed.
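For readers unfamiliar with the adversarial setup, the minimal PyTorch sketch below pits a generator against a discriminator on a toy one-dimensional distribution. It illustrates the GAN training loop in principle and is nowhere near a deepfake system; all sizes and hyperparameters are arbitrary.

```python
import torch
from torch import nn

# Toy GAN: the generator learns to mimic samples from N(4, 1.5).
real_dist = lambda n: 4 + 1.5 * torch.randn(n, 1)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real, noise = real_dist(64), torch.randn(64, 8)
    fake = G(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator into outputting 1 for fakes.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# The mean of generated samples should drift toward ~4 as training proceeds.
print(G(torch.randn(1000, 8)).mean().item())
```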

AI image generators are not the first AI models to be criticized for biased results. A few years ago, Microsoft created an AI chatbot to learn how to comprehend conversations; while engaging with thousands of Twitter users, the software began to behave abnormally and, in a surprising turn of events, professed hatred for Jewish people and voiced allegiance to Adolf Hitler. Around the same period, Amazon tested a hiring tool that used artificial intelligence to give job seekers ratings of one to five stars, akin to how customers review purchases on Amazon. Amazon’s models were trained to screen candidates by studying patterns in resumes submitted to the company over a 10-year period, the majority of which came from men, echoing the dominance of men in the tech world. As a result, the algorithm began favoring male applicants and penalizing resumes that contained the word “women’s.”

There are plenty of stories in the news about how AI has failed to stop, or has even contributed to, sexist advertising, misogynistic employment practices, racist criminal justice practices, and the dissemination of misleading information. As with any new technology, AI image generators are attracting a lot of hype, which is reasonable given that they are truly remarkable, ground-breaking tools. But before they become mass-market services, we need to grasp what is happening behind the scenes. Even if these research findings are disheartening, at least we have started paying attention to how biases seep into models. Some companies, such as OpenAI, have addressed the flaws and hazards of their AI image generators in documentation and research, stating plainly that the systems are prone to gender and racial prejudice.

Developers and open-source model users can use this transparency to identify problems and nip them in the bud before they get worse. In other words, the question is what the scientific and tech communities can do to improve things.

For now, it may be for the best that some AI image generators are not yet open-sourced, so that their capacity to automate image-making, and to spread racial and gender bias at scale, can be reviewed before they are released to the public.
