
Intertwined Intelligences: Introducing India’s First AI NFT Art Exhibition

Humanity (Fall of the Damned), Second State by Scott Eaton (Source: PR Handout)

Recently, Terrain.art launched ‘Intertwined Intelligences,’ India’s first artificial intelligence non-fungible token (NFT) art exhibition, showcasing the relationship between artificial intelligence (AI) and human creativity. The exhibition is on view until August 20 and features six global artists pioneering AI art — Pindar Van Arman, David Young, Scott Eaton, Harshit Agrawal, Sofia Crespo, and Feileacan McCormick of Entangled Others Studio.

Terrain.art is a blockchain-powered online platform that focuses on art from South Asia. Intertwined Intelligences, curated by Harshit Agrawal, is the first step in Terrain.art’s commitment to creating an environment for artists working in emerging art forms, such as generative art, neural art, machine learning, and AI-assisted art, and to stimulating critical thinking about the kind of future the art world should create with such technologies.

Harshit came into the limelight with his project “The Anatomy of Dr. Algorithm,” in which he fed photos of surgeries into an algorithm and used artificial intelligence to produce Rembrandt-inspired art from images of everything from organs to fibroids. For the Terrain.art exhibition, he curated 3,000 landscape paintings and trained a model on the visual patterns within them so that it could generate its own set of landscape paintings.

He explains that the artist can reconfigure their artwork by feeding raw creative inputs into a GAN (generative adversarial network), a pair of machine learning models trained against each other to translate the artist’s inputs into visual media. The process involves a lot of back-and-forth, but the human has the final word. Through this method, artificial intelligence offers artists a fresh set of frontiers to discover and cross.
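For readers curious about the mechanics, below is a minimal GAN training loop in Python (PyTorch). It is an illustrative sketch only: the network sizes, image resolution, and data source are assumptions, not Agrawal’s or Terrain.art’s actual pipeline.

    # Minimal, illustrative GAN training step (PyTorch); all sizes are assumptions.
    import torch
    import torch.nn as nn

    latent_dim = 100
    generator = nn.Sequential(            # noise -> flattened 64x64 RGB image
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, 64 * 64 * 3), nn.Tanh())
    discriminator = nn.Sequential(        # image -> probability it is a real training image
        nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    def train_step(real_images):          # real_images: (batch, 64*64*3) artist-curated inputs
        batch = real_images.size(0)
        fake_images = generator(torch.randn(batch, latent_dim))

        # Discriminator learns to separate real images from generated ones.
        opt_d.zero_grad()
        loss_d = (bce(discriminator(real_images), torch.ones(batch, 1)) +
                  bce(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
        loss_d.backward()
        opt_d.step()

        # Generator learns to fool the discriminator.
        opt_g.zero_grad()
        loss_g = bce(discriminator(fake_images), torch.ones(batch, 1))
        loss_g.backward()
        opt_g.step()
        return loss_d.item(), loss_g.item()

The generator gradually learns to produce images the discriminator cannot tell apart from the curated inputs, which is the back-and-forth Agrawal describes.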

This is not the first time India has hosted an exhibition of art influenced by artificial intelligence. Aparajita Jain, Founder of Terrain.art and Co-Director of Nature Morte, organized India’s first AI exhibition, titled “Gradient Descent,” at Nature Morte in Delhi in August 2018, with all of the artworks created by AI in cooperation with artists. Today, Intertwined Intelligences on Terrain.art adds a new dimension by displaying works that have both physical and digital equivalents and are protected on the blockchain using NFTs.

Aparajita says, “Human existence and digital technologies are no longer separable. Our lives are deeply intertwined within a web of technology, which itself is rapidly cultivating an intelligence of its own, seeded by human intelligence and mined data.”

The artwork on display includes 3D creatures that resemble living forms but were created using algorithms, as well as portraits painted by trained robots. This demonstrates how unlimited artistic ingenuity can be. Aparajita also predicts that gaming art will take over in the future when humans will be able to explore AI-generated augmented realities creatively.

Read More: IMD to leverage Artificial Intelligence for Better Weather Forecasting

The world of art is experiencing a quiet revolution, with non-fungible tokens (NFTs) allowing aesthetically talented artists to command top prices for their work. Meanwhile, the combination of Covid-19 isolation and cryptocurrency earnings gave digital-savvy collectors a strong incentive to compete for these NFTs, and some creators are profiting handsomely. NFTs also save the time and money spent in buying or selling art and avoid the extra costs of artwork damaged in transit. Additionally, since blockchain and cryptocurrencies operate in a decentralized marketplace, buyers of digital artwork and NFTs are largely unaffected by the traditional art and craft market.


GPT-3, in combination with other AI models, writes better Phishing Emails than Humans


A recent test conducted by Singapore’s Government Technology Agency found that an artificial intelligence (AI) system wrote more convincing phishing emails than humans did. The study was presented at the Black Hat and Defcon security conferences held in Las Vegas earlier this month.

Two hundred employees were sent phishing emails, some generated by the artificial intelligence system and some written by humans. Surprisingly, more of the employees fell for the phishing emails generated by artificial intelligence.

Researchers used OpenAI’s deep learning model GPT-3 and other artificial intelligence technologies to build this AI program. Eugene Lim, a government cybersecurity specialist, said the researchers found that training such artificial intelligence models to high levels of accuracy costs millions of dollars.

Read More: NSA Partners with DoD for joint evaluation of Federal Artificial Intelligence use

“But once you put it on AI-as-a-service, it costs a couple of cents, and it’s really easy to use—just text in, text out. You don’t even have to run code, you just give it a prompt, and it will give you output. So that lowers the barrier of entry to a much bigger audience and increases the potential targets for spearphishing.” 

The pipeline focuses on personality analysis to generate phishing emails: OpenAI’s GPT-3 platform, combined with the other AI services, analyzes an individual’s tendencies to react to messages and tailors the output accordingly.

Leveraging this capability, the researchers developed advanced tools for creating phishing emails that, to a certain extent, outperform those written by humans.

OpenAI officials said, “We grant access to GPT-3 through our API, and we review every production use of GPT-3 before it goes live. We impose technical measures, such as rate limits, to reduce the likelihood and impact of malicious use by API users.” 

They further mentioned that the misuse of language models is an industry-wide issue and that they are working diligently towards the deployment of safe and responsible artificial intelligence technologies.


NSA Partners with DoD for joint evaluation of Federal Artificial Intelligence use


The National Security Agency (NSA) of the United States has partnered with the Department of Defense (DoD) for a joint evaluation of federal artificial intelligence use.

The assessment will focus on the integration of artificial intelligence in strategic operations, such as gathering insights from foreign communications and information related to weapons systems.

Both bodies will also scrutinize the country’s artificial intelligence framework and the ethical use of AI. A recent announcement on the DoD’s official website confirms these developments.

Read More: Edinburgh Researchers Create New Weather Dataset For Next-Gen Autonomous Vehicles

DoD officials said, “We may revise the objective as the evaluation proceeds, and we will also consider suggestions from DOD and National Security Agency management on additional or revised objectives.” 

Last year, the DoD started a similar review program, but it says that review has now been terminated in favor of this new evaluation program conducted in collaboration with the NSA.

An official said, “In this case, given the objective as stated in our announcement memo, we determined that it is a better use of taxpayer resources to conduct our oversight jointly with the National Security Agency.” 

He further added that the Department of Defense Office of the Inspector General considers a wide variety of factors when deciding when to conduct oversight work or to cancel previously announced programs.

Recently, the NSA also signed a secret deal with the tech giant Amazon worth $10 billion. Under this deal, codenamed ‘WildandStormy,’ Amazon Web Services will provide the NSA with cloud computing services.


Xiaomi Launches new open source Quadruped robot CyberDog


Chinese multinational electronics company Xiaomi has launched its new open-source, bio-inspired quadruped robot, CyberDog. The robot was unveiled during Xiaomi’s launch event held on 10th August.

The launch marks Xiaomi’s entry into the very different domain of robotics. Robotics enthusiasts in the worldwide open-source community can now collaborate and compete with Xiaomi engineers on the development of quadruped robots.

During the launch event, Xiaomi said, “CyberDog can analyze its surroundings in real-time, create navigational maps, plot its destination, and avoid obstacles. Coupled with human posture and face recognition tracking, CyberDog is capable of following its owner and darting around obstructions.” 

Read More: IIT Madras invites Applications for Online Data Science Program

CyberDog can carry a payload of nearly 3 kilograms and is powered by NVIDIA’s Jetson Xavier AI platform, which has 384 CUDA cores and 48 Tensor cores. The robot comes with an array of different sensors and is equipped with multiple cameras, including Intel’s RealSense D450 depth module, which enables it to move autonomously.

It also has a GPS unit installed to aid its movements. Its motors can generate a peak torque of 32 N·m, allowing it to move at a maximum speed of 11 km/h. With these technologies, CyberDog can analyze its surroundings in real-time and generate a navigational map to avoid obstacles while moving.

According to Xiaomi, the robot dog can perform many complex movements like backflips. CyberDog comes with one HDMI and three USB-C ports that allow it to connect with various tools like LiDAR sensors, panoramic cameras, and lots more. 

Xiaomi plans to launch 1,000 units of CyberDog in the initial stage and will offer it at a price of $1,544. The company will also build a ‘Xiaomi Open Source Community’ that would enable developers to share their progress with a worldwide audience.


JLL acquires Israeli Real Estate Technology platform Skyline AI


Real estate and investment management firm JLL has acquired the Israeli real estate technology platform Skyline AI. No information has been shared regarding the valuation of the deal.

Skyline AI uses machine learning and artificial intelligence tools to streamline the fragmented data available in the commercial real estate sector, helping investors analyze and find real estate opportunities.
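As a rough illustration of this kind of tabular modeling (and not Skyline AI’s proprietary system), the sketch below fits a gradient-boosted regressor to made-up property features to predict valuations; the features, prices, and model choice are all assumptions.

    # Illustrative commercial-property valuation model on synthetic data (scikit-learn).
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.uniform(1_000, 100_000, n),    # rentable square feet
        rng.uniform(0.5, 1.0, n),          # occupancy rate
        rng.integers(1960, 2020, n),       # year built
    ])
    # Synthetic "market prices" loosely tied to the features above.
    y = 200 * X[:, 0] * X[:, 1] + 10_000 * (X[:, 2] - 1960) + rng.normal(0, 1e6, n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor().fit(X_train, y_train)
    print("holdout R^2:", round(model.score(X_test, y_test), 3))

In practice, the value of such a system comes less from the model itself than from cleaning and joining the fragmented data sources the article describes.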

According to officials, the deal will close soon after the completion of a few remaining formalities. JLL plans to combine the expertise of both companies to provide the best possible insights to its consumers. 

Read More: Edinburgh Researchers Create New Weather Dataset For Next-Gen Autonomous Vehicles

JLL’s CEO of Global Capital Markets, Richard Bloxam, said, “When you combine the intelligence of the best advisors on the ground with a quantitative expert team and AI data analysis, you get insights that are beyond human and create a competitive edge for JLL and our clients.”

He further added that the acquisition would enable JLL to provide innovative and strategic advice to its clients. Integrating Skyline AI’s technology with JLL’s services will also allow the company to more accurately predict future property valuations, identify better investment opportunities, improve cost savings, and help clients make informed business decisions.

JLL traces its roots to an auctioneering business founded by Richard Winstanley in London in 1783. The company specializes in providing real estate services that help customers make better decisions about investments and future real estate trends.

Co-CEO of JLL Technologies, Yishai Lerner, said, “Our teams consist of knowledgeable real estate experts and world-class technologists who successfully bring new AI offerings like Skyline AI into the fold and provide the best insights to our clients, accelerating JLL’s leadership in CRE technology.” 

He also mentioned that this acquisition is a step forward towards JLL’s goal of accelerating its growth by investing in prop-tech enterprises. 

The CEO of Skyline, Guy Zipori, said that JLL would help them achieve their vision of revolutionizing the real estate industry using artificial intelligence tools.


Edinburgh Researchers Create New Weather Dataset For Next-Gen Autonomous Vehicles

Image Credit: Sensible 4

It has been nearly a decade since the first autonomous vehicle (AV) hit the road. Autonomous vehicles are attracting a lot of attention because of the convenience and safety benefits they promise. However, a fully autonomous vehicle has yet to progress beyond the testing stage. One of the biggest hurdles for the technology is not only the artificial intelligence algorithms themselves but also the lack of data covering fog, rain, and snow.

Today, hundreds of companies are testing self-driving cars, trucks, and other vehicles, but most of them rely on data captured in clear, sunny road conditions. While the majority of autonomous vehicles have produced outstanding results on such test data, making a vehicle navigate rapidly changing road and weather conditions, particularly in heavy snowfall, fog, or rain, poses a tremendous challenge.

Read More: Microsoft Launched An Autonomous Beach Cleaning Robot BeachBot

Driverless vehicles rely on sensors to view street signs and lane dividers, but inclement weather can make it harder for them to “see” the road and make correct decisions when cruising at high speeds. An autonomous car uses three sensor types — camera, radar, and lidar — to view and perceive everything around it. The cameras help it obtain a 360-degree view of its surroundings, recognize objects and people, and determine their distance. Radar aids lane keeping and parking by detecting moving objects and calculating their distance and speed in real-time. LIDAR uses lasers instead of radio waves to create 3D images of the surroundings and map them, giving a 360-degree view around the car.
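To make that division of labor concrete, here is a toy Python sketch, not any vendor’s actual stack, that fuses a camera distance estimate with a radar range and speed reading into a single track for one object; the sensor variances and numbers are invented for illustration.

    # Toy camera/radar fusion for one tracked object; all numbers are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Track:
        distance_m: float    # fused distance to the object
        speed_mps: float     # radial speed, taken from radar (cameras measure this poorly)

    def fuse(camera_distance_m, radar_range_m, radar_speed_mps,
             camera_var=4.0, radar_var=1.0):
        # Inverse-variance weighting: the radar range (smaller variance) counts for more,
        # but the camera estimate still refines the result.
        w_cam, w_rad = 1.0 / camera_var, 1.0 / radar_var
        distance = (w_cam * camera_distance_m + w_rad * radar_range_m) / (w_cam + w_rad)
        return Track(distance_m=distance, speed_mps=radar_speed_mps)

    print(fuse(camera_distance_m=42.0, radar_range_m=40.5, radar_speed_mps=-3.2))

Real perception stacks do this across hundreds of objects with filters such as Kalman trackers, but the principle of weighting each sensor by how much it can be trusted is the same.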

However, light showers or snowflakes might cause LIDAR sensor systems to malfunction and lose accuracy. The vehicles also depend heavily on data gathered from optical sensors, which are less reliable in bad weather.

Therefore, leveraging bad weather data will not only play a crucial role in safety-critical AV choices like disengagement and operational domain designation but also help in more basic tasks like lane management.

Despite the pressing need to include bad-weather data in training datasets, little such data was publicly available. As a result, the RADIATE (RAdar Dataset In Adverse weaThEr) project, directed by Heriot-Watt University, has released a new dataset that will aid the creation of autonomous vehicles that can drive safely in adverse conditions. The team drew inspiration from sensors that have already proven themselves during rain, snow, and fog in Scotland. Their goal is to make radar-sensing research on object identification, tracking, SLAM (Simultaneous Localization and Mapping), and scene comprehension in harsh weather easier.

This dataset comprises three hours of annotated radar images, multi-modal sensor data (radar, camera, 3D LiDAR, and GPS/IMU), and more. Professor Andrew Wallace and Dr. Sen Wang of Heriot-Watt University have been gathering data since 2019. First, they outfitted a van with LiDAR, radar, stereo cameras, and geopositioning devices. Then, they deliberately drove the vehicle across Edinburgh and the Scottish Highlands at all hours of the day and night, capturing urban and country road conditions in bad weather.
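A dataset like this is typically consumed as per-frame sensor files plus an annotation index. The sketch below shows what a loader might look like in Python; the folder layout, file names, and JSON schema are hypothetical assumptions for illustration, not the actual RADIATE release or its SDK.

    # Hypothetical loader for a RADIATE-style sequence folder; paths and schema are assumed.
    import json
    from pathlib import Path

    def load_sequence(seq_dir):
        seq = Path(seq_dir)
        index = json.loads((seq / "annotations.json").read_text())   # assumed file name
        frames = []
        for entry in index["frames"]:                                # assumed schema
            frames.append({
                "radar": seq / "radar" / entry["radar_png"],     # radar image for the frame
                "lidar": seq / "lidar" / entry["lidar_bin"],     # 3D point cloud
                "camera": seq / "camera" / entry["image_png"],   # stereo camera frame
                "boxes": entry["objects"],                       # labelled road actors (car, bicycle, ...)
            })
        return frames

    frames = load_sequence("radiate/snowy_city_1")   # assumed sequence name
    print(len(frames), "annotated frames")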

Wallace explains that such datasets are critical for developing and benchmarking autonomous vehicle perception systems. Though we are still a long way from having driverless cars on the roads, autonomous vehicles are already being tested in controlled environments and piloting zones.

Located on the outskirts of Edinburgh, Heriot-Watt houses the famous National Robotarium, a £3 million initiative that brings together robotics, cognitive science, and psychology experts with colleagues from Imperial College London and the University of Manchester. Wallace’s team is based at Heriot-Watt University’s Institute of Sensors, Signals, and Systems, which has previously pioneered conventional and deep learning techniques for sensory data interpretation.

According to Wallace, the duo successfully demonstrated how radar can help autonomous cars navigate, map, and interpret their environment in bad weather, when vision and LiDAR are rendered useless. In addition, the team labeled about 200,000 road actors in the dataset, including bicycles, cars, pedestrians, and traffic signs, which could help researchers and manufacturers develop safe navigation for the autonomous vehicles of the future.

Dr. Wang says, “When a car pulls out in front of you, you try to predict what it will do – will it swerve, will it take off? That’s what autonomous vehicles will have to do, and now we have a database that can put them on that path, even in bad weather.”

Wallace claims that they need to improve the resolution of the radar, which is naturally fuzzy. Combining high-resolution optical images with improved weather-penetrating capabilities of radar would help autonomous vehicles see and map better and, ultimately, travel more safely.


IIT Madras invites Applications for Online Data Science Program


The Indian Institute of Technology (IIT) Madras has announced that it has started accepting applications for its online data science program. The Institute clarified that JEE scores are not required to apply for this program.

This unique course was first launched in 2020 for students who have completed senior secondary education. IIT Madras will conduct a qualifying process in which applicants are trained through online video lectures and assignments for four weeks.

Applicants will also get the opportunity to interact with the course lecturers. Admission will be offered to learners who successfully clear the qualifying process. Any student who has passed the Class 12 examination and studied English and mathematics up to Class 10 is eligible to apply for this data science program.

Read More: Cognistx announces Partnership with SAE International on Artificial Intelligence tool

The professor in charge of IIT Madras’s data science program, Prof Andrew Thangaraj, said, “With this program, IIT Madras is delivering the highest quality learning opportunity to a very large number of learners, without compromising the rigor of the process. The combination of online classes and in-person invigilated exams accomplishes this. At each stage, students will have the freedom to exit from the program and receive a Certificate, Diploma or a Degree, from IIT Madras.” 

He further mentioned that applicants would be able to build a career in programming and data science after pursuing this course offered by IIT Madras. With this diploma program, students will be able to learn from the best faculties of IIT Madras without the need to appear for the Joint Entrance Exam. 


IIT Madras will provide a scholarship of up to 75% to students from underprivileged sections of society. Interested candidates can apply on the official website of IIT Madras. The last date to apply for this program is 30th August 2021.


Cognistx announces Partnership with SAE International on Artificial Intelligence tool


Cognitive computing company Cognistx has announced its partnership with SAE International, the world’s leading association for aerospace industry engineers, to improve access to global engineering standards using artificial intelligence and machine learning.

Through this partnership, the two organizations plan to expand SAE International’s OnQue Digital Standards System. The system enables engineers in the aerospace sector to speed up product development, reduce the complexity of projects, and digitally connect their products.

Cognistx will use natural language processing and text intelligence technologies to increase the platform’s storage capacity by up to 66%, helping the SAE community search for and apply the standards relevant to their fields.
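As a generic illustration of the kind of standards search such text intelligence enables (not the OnQue system itself), the Python sketch below ranks a handful of stand-in standard descriptions against a free-text query using TF-IDF similarity; the snippets and results are illustrative only.

    # Generic text search over standards descriptions; illustrative, not the OnQue system.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = {                                   # stand-in snippets, not official SAE text
        "AS9100D": "Quality management system requirements for aviation, space and defense organizations.",
        "AMS2750": "Pyrometry requirements for thermal processing equipment.",
        "ARP4761": "Guidelines for the safety assessment process on civil airborne systems.",
    }

    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(documents.values())

    def search(query, top_k=2):
        scores = cosine_similarity(vectorizer.transform([query]), matrix).ravel()
        ranked = sorted(zip(documents.keys(), scores), key=lambda p: p[1], reverse=True)
        return ranked[:top_k]

    print(search("heat treatment temperature measurement requirements"))

Production systems typically swap the TF-IDF step for learned text embeddings, but the ranking idea is the same: surface relevant standards the engineer might not have known to look for.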

Read More: Deepen AI launches Artificial Intelligence-Powered Annotation Tool

Sanjay Chopra, CEO of Cognistx, said, “The OnQue Digital Standards System enables decisions to be made faster. AI reduces the grunt work of searching through mountains of data so aerospace companies can work more efficiently and quickly.” 

He further mentioned that this tool would also allow users to search for exactly the information they need without missing standards they did not know existed. Earlier this year, Cognistx processed more than 3,000 SAE-approved documents related to aerospace parts and material standards.

Sanjay Chopra founded the North Carolina-based company Cognistx in the year 2015. The firm specializes in developing computing solutions for real-world challenges using data science tools. 

The Chief Growth Officer of SAE International, Frank Menchaca, said, “Our collaboration with Cognistx allows us to use the power of artificial intelligence to better integrate standards into engineering workflows for product development, product performance, and quality management.” 

He also added that, with security ensured, groundbreaking products developed more efficiently by engineers could transform the aerospace industry.


Deepen AI launches Artificial Intelligence-Powered Annotation Tool


Data labeling company Deepen AI has launched its new artificial intelligence-powered annotation tool to boost computer vision training for autonomous vehicles and robotics.

The platform provides highly accurate annotations for images and videos in a short time frame. Initially, the company will offer the platform on a 60-day free trial basis.

Data annotation is one of the most vital operations for training artificial intelligence and machine learning models. Deepen AI’s new platform is an all-in-one solution for collecting data, uploading it to the annotation tool, and generating accurate labels in an optimized manner.

Read More: Researchers Use Neural Network To Gain Insight Into Autism Spectrum Disorder

Founder and Chief Executive Officer of Deepen AI, Mohammad Musa, said, “The demand for high-quality annotated data is increasing rapidly, and with our AI-powered easy to use annotation tools, enterprises and individuals can reduce annotation time and effort significantly – while maintaining the highest quality.” 

The platform also has quality control and task management features that allow businesses to quickly rectify quality concerns and seamlessly monitor the entire workflow. It is also loaded with numerous advanced, user-friendly tools to provide a better user experience.

Below are some of the platform’s highlighted features, followed by a short illustrative example of annotation output –

Super Pixel – pixel-accurate, machine learning-assisted segmentation.

Bounding Box Segmentation – pixel-wise object labeling by drawing bounding boxes.

Frames Classification – automatic pre-labeling of up to 80 common classes, increasing productivity up to 7 times.
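To give a sense of what such bounding-box annotations look like downstream, here is a minimal, generic example that writes labels in a COCO-style JSON layout; the schema and values are illustrative and are not Deepen AI’s actual export format.

    # Generic COCO-style bounding-box export; illustrative, not Deepen AI's schema.
    import json

    labels = {
        "images": [{"id": 1, "file_name": "frame_0001.png", "width": 1920, "height": 1080}],
        "categories": [{"id": 1, "name": "car"}, {"id": 2, "name": "pedestrian"}],
        "annotations": [
            # bbox is [x, y, width, height] in pixels
            {"id": 1, "image_id": 1, "category_id": 1, "bbox": [412, 530, 180, 95]},
            {"id": 2, "image_id": 1, "category_id": 2, "bbox": [901, 498, 42, 110]},
        ],
    }

    with open("labels.json", "w") as f:
        json.dump(labels, f, indent=2)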

Carter Tiernan, an engineer at Deepen AI, said, “The Deepen tool has allowed our team to create sizable bounding box and segmentation datasets. The tool itself has matured quite a bit, and the support team has been proactive in debugging and helping us.”

Deepen AI is a California-based autonomous development tooling and data labeling company founded by Anil Muthineni, Mohammad Musa, and Cheuksan Wang in 2017. The firm specializes in developing automated data labeling solutions for LiDAR, camera, and radar data.


Researchers Use Neural Network To Gain Insight Into Autism Spectrum Disorder

Image Source: LinkedIn

Recently, researchers from Tohoku University have unraveled why people with autism read facial expressions differently using a neural network model. The results of this study were published in the journal Scientific Reports on July 26, 2021.

In individuals with autism spectrum disorder (ASD), problems with facial emotion recognition (FER) are prevalent. The problem isn’t with how information is encoded in the brain signal, but rather with how it is interpreted. In other words, individuals with autism spectrum disorder interpret facial expressions differently than neurotypical people do. As patients get older, the characteristics that accompany autism, such as sensory and emotional difficulties, repetitive behaviors, and a lack of social subtlety, can also become harder to manage.

According to Yuta Takahashi, one of the paper’s co-authors, by looking at facial expressions, humans can detect distinct emotions such as sadness and anger. However, little is known about how humans learn to distinguish distinct emotions based on facial expressions’ visual information. Yuta also mentioned that earlier, scientists were not sure what happens in this process that causes individuals with an autism spectrum disorder to have trouble reading facial expressions.

To understand this better, the researchers turned to predictive processing theory. According to this hypothesis, the brain constantly predicts the next sensory input and adapts when its prediction is wrong. Sensory data, such as facial expressions, help reduce prediction error.

Based on predictive processing theory, the team developed an artificial neural network (a hierarchical recurrent neural network) that could mimic the developmental process. It achieved this by training itself to predict how different regions of the face will move in facial expression videos. The main goal was to use a developmental learning method to train the model to predict the dynamic changes in facial expression videos for six fundamental emotions, without explicit emotion labels.
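In the same spirit, the minimal PyTorch sketch below trains a recurrent network to predict the next frame of facial-landmark positions from the frames seen so far, with the prediction error driving learning; the architecture, layer sizes, and stand-in data are assumptions for illustration, not the published Tohoku model.

    # Minimal next-frame predictor for facial-landmark sequences (PyTorch).
    # Architecture and data are illustrative assumptions, not the Tohoku model.
    import torch
    import torch.nn as nn

    n_landmarks = 68                      # e.g., 68 (x, y) facial keypoints per frame
    feat = n_landmarks * 2

    class NextFramePredictor(nn.Module):
        def __init__(self, hidden=128):
            super().__init__()
            # Two stacked layers stand in for "lower" fast and "higher" slow dynamics.
            self.rnn = nn.GRU(feat, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, feat)

        def forward(self, frames):        # frames: (batch, time, feat)
            h, _ = self.rnn(frames)
            return self.head(h)           # prediction of the next frame at each step

    model = NextFramePredictor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    seq = torch.randn(8, 30, feat)        # stand-in for real expression videos
    pred = model(seq[:, :-1])             # predict frame t+1 from frames up to t
    loss = nn.functional.mse_loss(pred, seq[:, 1:])   # prediction error drives learning
    loss.backward()
    optimizer.step()
    print(float(loss))

Note that no emotion labels appear anywhere: as in the study, any emotion structure has to emerge from the prediction task itself.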

In the next stage, emotion clusters self-organized in the higher-level neuron space of the model, even though the model was never told which emotion a video’s facial expression represented. The neural network model was also able to generalize to unknown facial expressions that were not included in the training stage and to recreate facial part movements with minimal prediction error.


During the tests, the researchers introduced anomalies into the neurons’ activity, which provided insight into the effect on learning development and cognitive characteristics. The experiments showed that the model’s generalization ability dropped when the heterogeneity of activity in the neuronal population was lowered. This suppressed the development of emotion clusters in the higher-level neurons, leaving the model unable to identify the emotion of unfamiliar facial expressions, a pattern reminiscent of autism spectrum disorder.

Read More: Latest AI Model From IBM, Uncovers How Parkinson’s Disease Spreads in an Individual

“Using a neural network model, the study demonstrated that predictive processing theory can explain emotion detection from facial expressions,” says Yuta. The findings also support previous studies suggesting that impaired facial emotion recognition in autism spectrum disorder can be explained by altered predictive processing, and they offer a possible route for investigating the neurophysiological basis of affective contact.

“We hope to further our understanding of the process by which humans learn to recognize emotions and the cognitive characteristics of people with autism spectrum disorder,” added Yuta.
