K Allado-McDowell set out to write the most cringe-worthy novel possible with AI but instead crafted a strange and humorous book. Amor Cringe is an ode to cringe maximalism! It is fast, lively, and a little sleazy, and reading it feels like frolicking around with glee while everyone else is having none.
When Allado-McDowell was asked about the implications of the word "cringe," she answered (with GPT-3): "Cringe is a phrase that organically came from the internet. It refers to the behavior that makes a witness squirm with posthumous embarrassment, online and in real life. My broad hypothesis of 'cringe' is that it reflects our social nature as humans."
The novel results from a one-of-a-kind writing process in collaboration with GPT-3, a text-generating AI. Think of it as an autocomplete that finishes your sentences. It is built on a large language model (LLM), a class of AI models trained on massive datasets that is central to the next generation of AI.
This isn’t Allado-McDowell’s first collaboration with AI, but Amor Cringe is a significant step forward in the process: rather than portraying the collaboration as a duet, Allado-McDowell has blended her words with GPT-3’s writing, resulting in a single, distinct narrative voice.
Allado-McDowell established the Artists + Machine Intelligence program at Google AI in 2016 and has a deep understanding of AI and related fields. Before Amor Cringe, she published Pharmako-AI, which depicts a conversation between the author and GPT-3 (Generative Pre-trained Transformer 3).
With further developments in AI-based text generation, many more novels will have an artificially intelligent co-author. You never know; someday you may end up binge-reading a book written entirely with the help of AI.
Devo, a provider of cloud-native logging and security analytics services, has raised $100 million in its Series F funding round, led by Eurazeo.
Several other investors, such as Insight Partners, General Atlantic, Bessemer Venture Partners, TCV, and others, also participated in Devo’s latest funding round.
Devo plans to use the newly raised capital to drive development in new regions and verticals, to speed Devo’s delivery of the “autonomous SOC,” and to support possible future M&A expansion. This new funding has now increased Devo’s market valuation to over $2 billion.
CEO of Devo, Marc van Zadelhoff, said, “Security teams are facing more threats than ever—regardless of industry or geography—and that challenge is compounded by the difficulty of hiring and retaining talent, a lack of visibility into the full attack surface, and the speed and scale necessary to keep up with not just growing threats, but the growth of their organizations.”
He further added that this round of investment enables the company to deliver on the autonomous SOC through continued technological innovation, to expand into new areas to serve more clients, and to examine more M&A opportunities.
Moreover, he said he is delighted by the trust the company’s investors have placed in it, as they continue to support its innovation and the value it provides to customers.
United States-based cybersecurity startup Devo was founded by Pedro Castillo in 2011. The company specializes in offering a cloud-native logging and security analytics platform that unlocks data’s full potential to enable bold, confident action. To date, Devo has raised nearly $500 million from multiple investors over seven funding rounds.
According to Devo’s plan, it will continue to expand into new verticals and locations, focusing on the public sector and the Asia-Pacific (APAC) region.
To enhance efficiency, Google has been incorporating machine learning into its cloud-based Google Workspace infrastructure. These latest AI developments in Google Workspace are intended to help employees stay on the right track, collaborate securely, and improve work relationships in various environments.
Users of the platform have told Google that the post-Covid shift to hybrid work has increased the volume of emails, meetings, and chats. Google’s most recent AI developments in Workspace address these issues along with other scheduled enhancements.
Machine learning technologies now enhance Google Meet and make the transfer of information more convenient and secure. Other AI-based enhancements include ‘live sharing,’ which makes meetings more interactive by synchronizing content among participants.
Owing to these developments, Google Docs now has an automated summary feature that lets users catch up on a missed update.
This year, Google has also planned to enhance its image, music, and content sharing services. Google’s artificial intelligence will offer ‘portrait restoration’ to help improve video quality for low-quality video, low-light settings, and areas without stable network connectivity. This enhancement will improve video quality automatically without slowing down the device.
As the future of hybrid working arrangements seems promising, Google’s AI aims to enhance its clientele’s work experience by incorporating intelligent capabilities across the platform.
Ranjani Mani, Senior Manager of Data Sciences and Advanced Analytics at VMware, has left the company to join global software firm Atlassian after serving for roughly seven years.
She announced this news through a post on her LinkedIn profile. Mani has joined Atlassian as the Analytics Leader of Atlassian’s EMEA CSS Data Sciences team.
Mani has over a decade of expertise in analytics and data science and played a crucial role at her previous company.
At VMware, she headed a team of global data scientists that collaborated with VMware Customer Experience [CXS] leadership, product, engineering, and IT to drive data science and business analytics initiatives aimed at providing a great customer experience.
She is passionate about delivering corporate value and providing an excellent customer experience via analytics, strategy, and leadership. Mani has been recognized several times at reputed events and in reports, including India AI 21 Women ’21, the Women in AI Leadership Award 2021, Women in Big Data 2021, and many more.
Apart from VMware, she has worked with other industry-leading companies like Oracle and Dell. She has a Bachelor’s degree in Engineering, Electronics and Communication, and a Master’s in Business Administration in Brand Management and Analytics.
“I have worked across analytics & data sciences – specifically in Product, Pricing and Customer Experience Analytics, Product Management and Strategy across companies such as Oracle, Dell Global Analytics and Tata Docomo,” Mani mentioned in an interview.
The world is not new to the notion of being watched by facial recognition systems. While facial recognition has already revolutionized the field of biometric verification and shot to widespread use amid Covid restrictions, the biometrics field is on the verge of witnessing new trends, such as gait recognition using motion sensors. Like a fingerprint, an individual’s gait is unique. It is believed that gait recognition will soon supplement existing biometric technologies like facial recognition, voice recognition, iris recognition, and fingerprint verification for enhanced security.
Consider the movie Mission Impossible: Rogue Nation, where the protagonist’s sidekick Benji has to walk across a room under gait analysis before retrieving a secret Syndicate file. In the movie, the gait-analysis system observes how an agent talks and walks, and even their facial tics; any mismatch results in getting tased. In the real world, gait recognition is touted to work similarly.
Machine learning technologies formed the foundation for developing gait recognition tools. Even if a person’s face is obscured, turned away from the camera, or covered by a mask, ML-based computer vision systems can recognize them from video.
The software looks at a person’s silhouette, height, pace, and walking patterns and matches them against a database. Some algorithms are designed to interpret visual information, while others work from sensor data. Irrespective of the data source, the process involves capturing data, silhouette segmentation, contour detection, and feature extraction with classification. In public locations, this technology is more convenient than retinal scanners or fingerprint readers since it is less intrusive. Further, as mentioned earlier, gait recognition is hard to defeat since no two people’s gaits are the same.
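The pipeline above (capture, silhouette segmentation, feature extraction, classification) can be sketched in miniature. The toy Python example below is purely illustrative, with hypothetical function names and thresholds: it segments silhouettes by naive background subtraction, uses a per-row width profile as a crude stand-in for real contour features, and matches against a database with nearest-neighbour search.

```python
def segment_silhouette(frame, background, threshold=30):
    """Background subtraction: pixels differing from the background
    by more than `threshold` are marked as foreground (1)."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(row, brow)]
            for row, brow in zip(frame, background)]

def width_profile(silhouette):
    """Feature extraction: silhouette width at each row, a simple
    stand-in for contour/shape descriptors."""
    return [sum(row) for row in silhouette]

def gait_signature(frames, background):
    """Average the per-frame width profiles over a walking sequence."""
    profiles = [width_profile(segment_silhouette(f, background))
                for f in frames]
    n = len(profiles)
    return [sum(col) / n for col in zip(*profiles)]

def identify(signature, database):
    """Classification: nearest neighbour over stored signatures."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda name: dist(signature, database[name]))
```

A real system would replace each stage with far richer models (deep segmentation networks, spatio-temporal features, learned classifiers), but the stages themselves are the same.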
Scientists are now striving to enhance recognition systems using the data and models gathered so far. Because each footfall is distinct, identification algorithms are constantly confronted with new data; the more gait variations an algorithm observes, the better it assesses future data.
Because no algorithmic method is perfect and there is always room for improvement, new capture methods and algorithms are constantly being tested to address the shortcomings of present gait recognition systems. While video camera data is already used for gait analysis, researchers are also working on gait recognition based on body sensors, sensors in mobile phones and smart wearables, radar input, and so on.
Gait recognition biometrics is still in its infancy; however, it has already found an interesting application: detecting human emotions from how people walk. Gait recognition combines spatial-temporal characteristics such as step length, step width, walking speed, and cycle time with kinematic parameters such as ankle, knee, and hip joint rotation; mean ankle, knee, hip, and thigh joint angles; and trunk and foot angles. The relationship between step length and a person’s height is also taken into account. All these factors change with an individual’s mood: a person who just learned they won a lottery will have a different stride from someone bereaved or pacing nervously.
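As a rough illustration of how such spatial-temporal characteristics might be computed, the sketch below derives step length, step width, cycle time, and walking speed from a list of timestamped heel-strike positions. The function name and input format are assumptions for illustration, not taken from any particular system.

```python
def gait_features(footfalls):
    """Compute spatial-temporal gait features.

    footfalls: list of (t, x, y) heel-strike events for alternating feet,
    with x the forward direction and y the lateral direction.
    """
    steps = list(zip(footfalls, footfalls[1:]))
    step_lengths = [abs(b[1] - a[1]) for a, b in steps]  # forward distance
    step_widths  = [abs(b[2] - a[2]) for a, b in steps]  # lateral distance
    durations    = [b[0] - a[0] for a, b in steps]
    total_time = footfalls[-1][0] - footfalls[0][0]
    total_dist = abs(footfalls[-1][1] - footfalls[0][1])
    return {
        "step_length": sum(step_lengths) / len(step_lengths),
        "step_width":  sum(step_widths) / len(step_widths),
        # A full gait cycle spans two consecutive steps.
        "cycle_time":  2 * sum(durations) / len(durations),
        "speed":       total_dist / total_time,
    }
```

An emotion classifier would then take such a feature vector, alongside the kinematic joint angles mentioned above, as its input.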
Analyzing these cues can help identify nonverbal behavior and support social interaction. Researchers at the University of Maryland created an algorithm called ProxEmo a few years ago, which allowed a small wheeled robot (Clearpath’s Jackal) to assess your gait in real time and anticipate how you might be feeling. The robot could choose its course to give you more or less space based on your detected emotion. Though this may appear a minor milestone in robot-human relations, scientists believe future machines will be clever enough to interpret an unhappy person’s walk and attempt to comfort them.
The team presented a multi-view, 16-joint skeleton graph convolution-based model for classifying emotions that works with a common camera installed on the Jackal robot. The emotion-identification system was integrated into a mapless navigation scheme, and deep learning techniques were used to train it to correlate skeleton gaits with the emotions that human volunteers attributed to those walking figures. On the Emotion-Gait benchmark dataset, it achieved a mean average emotion-prediction precision of 82.47%.
Last year, researchers from the University of Plymouth presented an experimental biometrics smartphone security system that used gait authentication. The solution uses smartphones’ motion sensors to record data on users’ stride patterns, with preliminary results indicating that the biometric system is between 85 and 90% accurate in detecting an individual’s gait, based on 44 participants.
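A minimal sketch of how such motion-sensor gait authentication could work, assuming a simple template-matching design (the Plymouth system’s actual method is not described here): a new accelerometer trace is accepted if it correlates strongly with an enrolled template of the owner’s walk. The similarity measure and acceptance threshold are illustrative assumptions.

```python
def normalize(signal):
    """Zero-mean, unit-variance normalization of an accelerometer trace."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    std = var ** 0.5 or 1.0  # avoid division by zero for flat signals
    return [(x - mean) / std for x in signal]

def similarity(a, b):
    """Normalized cross-correlation between two equal-length traces:
    1.0 for identical shape, near 0 for unrelated signals."""
    a, b = normalize(a), normalize(b)
    return sum(x * y for x, y in zip(a, b)) / len(a)

def authenticate(recording, template, threshold=0.8):
    """Accept if the new recording correlates strongly with the template."""
    return similarity(recording, template) >= threshold
```

A deployed system would also need to align the traces in time and tolerate day-to-day variation, which is exactly where reported accuracies in the 85-90% range come from.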
So far, gait recognition promises new avenues of biometric identification; however, its success will depend on how well it addresses public privacy concerns.
Electric utility company Tata Power announces the launch of its new artificial intelligence (AI)-powered smart home automation energy solutions.
Tata Power EZ Home, a newly launched product under the company suite of Home Automation Range, comes with a new AI-enabled PIR (Passive Infrared) Motion Sensor to deliver smart energy solutions to assist clients in conserving energy and lowering their power bills.
This newly launched product further advances Tata Power’s #DoGreen mission. According to the company, Tata Power EZ Home can recognize human motion within a 5-meter radius, causing the PIR Motion Sensor to activate the connected appliance and resulting in substantial power savings.
Reports suggest that the Tata Power EZ Home PIR Motion Sensor has the ability to reduce the consumption of linked home appliances by up to 40%.
A Tata Power spokesperson said, “We’re excited to introduce our AI-Powered PIR Motion Sensor to mark the World Environment Day. Through the use of these AI sensors, our discerning customers will be able to make informed decisions about energy conservation and its optimization and join us in our #DoGreen mission.”
Moreover, the sensor, along with other existing EZ Home solutions, complies with recent government guidelines requiring energy-efficiency code compliance for residential buildings.
Apart from being energy-efficient, Tata Power EZ Home also provides a range of home automation and IoT-based devices such as modular switches, converters, and controllers. These smart devices give consumers the convenience of managing and controlling their home appliances from anywhere.
Customers can install the Tata Power EZ Home app or use voice commands with Alexa or Google Assistant to control the devices.
Visual bookmarking tool Pinterest announces that it plans to acquire an artificial intelligence-powered shopping platform for the fashion industry, THE YES. However, no information has been provided yet regarding the valuation of this acquisition deal.
According to Pinterest, THE YES’s leadership, innovative technology, and skilled staff, which combine shopping expertise with credibility in the fashion industry, will accelerate its mission of being the home of taste-driven shopping.
THE YES co-founder and CEO Julie Bornstein will report to Pinterest CEO Ben Silbermann and will oversee Pinterest’s retail vision and strategy, establishing a new strategic organization dedicated to Pinterest’s taste-driven shopping operations, which will help evolve features for Pinners and merchants on Pinterest.
Co-founder and CEO of Pinterest, Ben Silbermann, said, “THE YES team are experts in building an end-to-end shopping experience. They share our vision of making it simple to find the right personalized products for you based on your taste and style.”
He further added that Pinterest is ecstatic to have THE YES’s brilliant staff and technology as it builds specialized shopping experiences on Pinterest.
Pinterest describes itself as a shopping platform that combines its audience’s distinct commercial intent with the opportunity to explore products visually, as one would in a magazine or catalog.
THE YES was founded in 2018 by Julie Bornstein and Amit Aggarwal, who currently serve as the company’s CEO and CTO, respectively. THE YES has grown to offer a customized daily shopping feed that learns a user’s style as they shop across hundreds of merchants throughout the fashion spectrum, including big names and discovery brands.
CEO of THE YES, Julie, said, “I’ve spent my career at the intersection of shopping, fashion, and technology and have seen first-hand the valuable impact of building technology that enables brands to join a platform with ease while enabling customers to share their preferences.”
Julie also said that joining Pinterest to expand their reach through such an inspiring platform is an exciting and natural next step for her team and technology.
The Ministry of Electronics and Information Technology (MeitY) has issued a fresh draft of the National Data Governance Framework Policy (NDGFP). The new draft places greater emphasis on sharing non-personal data to build an extensive repository of India-specific datasets. MeitY has invited feedback and inputs on the draft; the last date for submission is June 11, 2022.
A platform will be developed to provide access to anonymized and non-personal datasets. An earlier version of the policy was issued but retracted after widespread criticism over the monetization of data sharing. The current draft makes no provision for data monetization.
The Indian government will also launch an India Datasets program in which the data collected by government entities will be made available. The main sources of datasets will be government ministries, departments, and organizations, which will have to identify and classify available datasets. Private companies will also be encouraged to create datasets and contribute to the India Datasets program.
The India Data Management Office (IDMO) will be responsible for developing standards, guidelines, and rules for this data framework. IDMO will also be responsible for designing a platform for processing startup and research organizations’ access requests. To ensure proper implementation of the new policy, data management units (DMU) will work closely with IDMO.
Data requests will be vetted to determine whether the requesting organization should be allowed full or partial access for its use case. There will also be government-to-government data access, where each entity is expected to create a searchable inventory with detailed and clear metadata.
The IDMO will notify the protocols for sharing these datasets while ensuring security and privacy. There will be rules on exclusivity for Indians or India-based organizations. The policy draft also mentions that the IDMO might charge users a maintenance fee. Feedback and inputs on the policy draft can be sent to Ms. Kavita Bhatia at kbhatia@gov.in or pmu.etech@meity.gov.in.
Meta, formerly known as Facebook, announces its plans to restructure its artificial intelligence research and development (R&D).
Our new structure for Meta AI will help us not only better pursue open, ground-breaking research, but also improve how we leverage AI in our products. Learn more about how the team at Meta AI is evolving: https://t.co/fXHYSRc571 pic.twitter.com/HDwEfImkR7
According to Meta, its AI Platform, AI for Product, and, more recently, AI4AR teams have developed cutting-edge techniques that leverage AI to improve its products, better protect the people who use them, and build innovative new applications, drawing inspiration from colleagues at FAIR and counterparts across the industry.
This restructuring is therefore a step toward further expanding the company’s AI research and development.
Chief AI Scientist at Meta, Yann LeCun, mentioned the significant changes in the organization in a Twitter thread.
Big changes for AI R&D at Meta!
– FAIR remains FAIR: No change in the modus operandi and mission.
– FAIR is now part of Reality Labs – Research (RLR) under Michael Abrash. RLR is a wide-ranging research organization that now includes AI. 1/N https://t.co/SiVMySz7wl
Meta announced a new decentralized organizational structure for Meta AI to serve the company’s goal of advancing AI. Meta’s AI teams have incubated several innovative projects, such as Responsible AI, and have established Meta as a leader in the artificial intelligence industry worldwide.
“Jerome Pesenti has been our fearless leader in this work, but over the last several months, he has put in place a plan to change the status quo. Jerome identified that while the centralized nature of the organization gave us leverage in some areas, it also made it a challenge to integrate as deeply as we would hope,” mentioned Andrew Bosworth, Chief Technology Officer at Meta.
He announced the following changes in the organization –
The Responsible AI organization will join the Social Impact team.
The AI4AR team will join Meta’s XR team in Reality Labs.
Meta’s product engineering team will take over the AI for Product teams, which work to protect the people who use the company’s platforms.
FAIR, Meta’s AI research team, will become a new pillar of Reality Labs Research.
According to Meta, FAIR’s leadership will remain with Joelle Pineau, Antoine Bordes, and Yann LeCun. Additionally, Meta will also form a new cross-functional AI leadership team, which will be managed by Joelle.
Advances in AI research have led to a plethora of ML and deep learning models capable of generating images from text prompts. Text-to-image tools represent a tremendous cultural shift because this new and democratized form of expression can amplify the volume of imagery produced by humans. Researchers at Google and OpenAI have developed text-to-image models that haven’t been released to the public. However, like other automated systems, they come with ethical and misuse risks that these companies have yet to solve. DALL-E by OpenAI and Imagen by Google are two popular models in this space.
Today, we will discuss which text-to-image model is better and more accurate at creating images.
OpenAI launched DALL-E 2, an AI tool that can create realistic images and art from text. You can test variations of a prompt to see DALL-E 2’s results. For example, given the prompt ‘An astronaut playing basketball with cats in space, as a children’s book illustration,’ DALL-E 2 produces a matching image.
You can vary a similar prompt, for instance ‘in watercolors’ or ‘in a minimalistic style.’ OpenAI launched this new version with renewed capabilities and restrictions to prevent abuse. It can turn a text prompt into an accurate image within seconds. The new version is better at its job: images are bigger and more detailed, generation is faster, and it can spin out more variations than its predecessor. DALL-E 2 has an invite-only test environment where developers can try it out in a controlled way, and all prompts are evaluated for violations.
Days after OpenAI launched DALL-E 2, Google introduced Imagen, a competitor that creates images and artwork using a similar method. Given a description, Imagen generates images based on how it interprets the text, combining different attributes, concepts, and styles. For example, given ‘a photo of a dog,’ Google’s system creates an image that looks like a photograph of a dog; altering the description to ‘an oil painting of a dog’ yields an image that looks more like an oil painting. Like DALL-E 2, Imagen has not been released for public use because of the bias risks associated with large language models.
The original DALL-E was a 12-billion-parameter model, and DALL-E 2 works on a 3.5-billion-parameter model, with another 1.5-billion-parameter model to enhance the resolution of its images. Imagen, however, has surpassed text-to-image models like DALL-E 2 thanks to T5-XXL, Google AI’s largest text encoder, which has 4.6 billion parameters.
Imagen can create better images because of its larger text encoder. Scaling the size of the text encoder has been shown to improve text-image alignment considerably; scaling the size of the diffusion model improves sample quality, but a bigger text encoder has the greatest overall impact. Imagen also uses a diffusion technique called noise conditioning augmentation, which helps it achieve better FID and CLIP scores.
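As a rough sketch of the idea behind noise conditioning augmentation, the function below corrupts a low-resolution conditioning image with Gaussian noise and returns the noise level so it can be fed to the super-resolution model as an extra input. The interface and the training/sampling split are assumptions for illustration, not Imagen’s actual implementation.

```python
import random

def noise_condition(lowres, training=True, sample_level=0.1, max_level=1.0):
    """Corrupt the low-res conditioning image and report the noise level.

    At training time the level is sampled at random, so the
    super-resolution model learns to cope with imperfect conditioning
    images; at sampling time a fixed (usually small) level is used.
    `lowres` is a flat list of pixel values in this toy version.
    """
    level = random.uniform(0.0, max_level) if training else sample_level
    noisy = [p + random.gauss(0.0, level) for p in lowres]
    return noisy, level
```

The key point is that the model is told how corrupted its conditioning input is, which makes the cascade robust to artifacts produced by the earlier, lower-resolution stage.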
Additionally, images created by DALL-E 2 can lack realism. Here is what a Google research scientist had to say about it:
From my experience, I feel #Imagen tries very hard to render exactly what is written in the prompt. #Dalle, on the other hand, is like a rebellious assistant who’s a bit disobedient, and sometimes a little high 🌿 pic.twitter.com/ebrIHwxVKo
Thomas Wolf, co-founder of Hugging Face, has also written in favor of Google’s text-to-image model. However, he noted that not releasing such models for public use has hindered research in the field, and he wants the datasets made public so there can be a collective effort to improve the models.
Ethical challenges
Jeff Dean, Senior Vice President of Google AI, says he “sees AI as having the potential to foster creativity in human-computer collaboration.” However, because of the ethical challenge of preventing misuse, neither Google nor OpenAI has released its model for public use. It remains unclear how they can safeguard this technology against unethical use.