
Expert.ai Announces The Release Of Hybrid Natural Language Platform


Expert.ai announced on Tuesday the general availability of the hybrid natural language platform it unveiled in March, which is used to design, develop, test, deploy, and monitor natural language solutions.

The cloud-based platform helps enterprises accelerate, augment, and expand any activity that involves language by turning text-based documents into structured data. It supports knowledge discovery, process automation, and decision-making, along with flexible design of language models.

By shifting from tactical to strategic use of natural language when designing artificial intelligence models, enterprises gain greater portability of their language assets and models. Taken together, natural language solutions also help solve business challenges and move organizations toward becoming natural-language-enabled enterprises.

The Expert.ai Platform uses an exclusive hybrid AI approach honed from hundreds of real-world implementations. Comprehensive and easy to use, it combines symbolic AI and machine learning techniques to ensure the best possible accuracy for each use case with explainable AI.
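Expert.ai has not published how the two techniques are combined internally. As a rough, hypothetical sketch of the general hybrid pattern (explicit symbolic rules backed by a statistical classifier), the Python example below pairs a keyword rule layer with a small scikit-learn model; every category, rule, and training text is invented for illustration and none of it comes from the expert.ai product.

# Hypothetical sketch of a hybrid NLP classifier: explainable symbolic rules
# first, a statistical model as fallback. Not expert.ai's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Symbolic layer: explicit keyword rules whose decisions are fully traceable.
RULES = {
    "invoice": "finance",
    "diagnosis": "healthcare",
    "warranty": "legal",
}

# Machine learning layer: a small supervised text classifier on toy data.
train_texts = ["quarterly invoice attached", "patient diagnosis report",
               "warranty claim form", "shipping schedule update"]
train_labels = ["finance", "healthcare", "legal", "logistics"]
ml_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
ml_model.fit(train_texts, train_labels)

def classify(text):
    """Return (label, source): a rule wins when it fires, the ML model otherwise."""
    lowered = text.lower()
    for keyword, label in RULES.items():
        if keyword in lowered:
            return label, "symbolic rule"
    return ml_model.predict([text])[0], "ml model"

print(classify("Please review the attached invoice"))      # matched by a rule
print(classify("Updated shipping schedule for next week"))  # falls through to ML

The appeal of the pattern is that rule-based decisions stay fully explainable, while the learned model covers inputs the rules do not anticipate.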

Read more: Alphabet’s Waymo Raised $2.5 Billion In Second Funding Round

“Language powers business, so unlocking the value of data embedded in your everyday language is critical to success. With the launch of our platform, we enable, for the first time, the combination of different AI techniques to design and deploy practical applications,” said Luca Scagliarini, expert.ai CPO.

The combination of symbolic AI and machine learning also keeps results explainable, giving each use case the best possible accuracy while remaining transparent about how outputs are produced. A live demo of the platform is available on the company’s website.


NVIDIA Canvas Uses Artificial Intelligence To Turn Your Doodles Into Images


NVIDIA recently announced the launch of Canvas, its artificial intelligence-powered application, available to every NVIDIA RTX GPU user at no additional cost. The application uses artificial intelligence to convert simple doodles into stunning, photorealistic images.

NVIDIA first surprised audiences in 2019 by using deep learning models to turn rudimentary sketches into photorealistic scenes, and Canvas appears to be the publicly available version of that earlier demonstration. The app is part of NVIDIA Studio, a platform that provides artists and creators with the hardware and software tools to bring their creative visions to life.

Artists can sketch shapes and lines using a palette of 15 material tools, such as grass, mountains, clouds, and weather effects, and the artificial intelligence-powered software converts them into a photorealistic scene in real time.

Read More: NVIDIA Partners With Equinix To Launch Its Artificial Intelligence Launchpad

NVIDIA officials said, “The tool allows artists to use style filters, changing a generated image to adopt the style of a particular painter.” They also mentioned that the app doesn’t just stitch pieces of images together; it creates new images the way artists do. The creative applications of the tool are endless, and it can help artists create paintings faster than before.

The application uses a type of artificial intelligence called a generative adversarial network (GAN). A GAN consists of a generator and a discriminator trained together: the generator converts material maps into images, while the discriminator judges how realistic the generated output looks. The GAN behind Canvas was trained on an NVIDIA DGX system using over five million images.
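NVIDIA has not released the Canvas model itself, so the skeleton below is only a generic, toy-sized PyTorch illustration of the generator/discriminator pairing described above; a flat random vector stands in for a material map, and the layer sizes are arbitrary.

# Minimal, generic GAN skeleton (PyTorch) showing the generator vs. discriminator
# roles described above. Toy dimensions; not NVIDIA's Canvas network.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps an input code (standing in for a material map) to a fake image."""
    def __init__(self, in_dim=64, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how realistic a (flattened) image looks, from 0 to 1."""
    def __init__(self, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, img):
        return self.net(img)

G, D = Generator(), Discriminator()
z = torch.randn(8, 64)           # a batch of 8 input codes
fake_images = G(z)               # the generator produces candidate images
realism_scores = D(fake_images)  # the discriminator judges them
print(fake_images.shape, realism_scores.shape)

During training the two networks push against each other: the discriminator learns to tell real images from generated ones, and the generator learns to produce images the discriminator can no longer reject.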

The beta version of the application is available for download on NVIDIA’s official website. The minimum system requirements are an NVIDIA RTX-series GPU and driver version 460.89 or newer.


Tesla Unveils Computer Vision-Based Autopilot


On June 20, 2021, at the CVPR 2021 Workshop on Autonomous Driving, Tesla unveiled its brand-new supercomputer, part of an Autopilot approach that relies entirely on computer vision and leaves radar and lidar sensors behind. Earlier this year, on April 10, Elon Musk tweeted: “When radar and vision disagree, which one do you believe? Vision has much more precision, so better to double down on vision than do sensor fusion.” Since then, it has been evident that Tesla is moving to purely camera-based systems to produce safe autonomous vehicles.

The newly unveiled supercomputer is a predecessor of Tesla’s upcoming “Dojo” supercomputer, packing 10 petabytes of “hot tier” NVMe storage, roughly 1.8 EFLOPS of compute, and 720 nodes of 8x NVIDIA A100 80GB GPUs, as stated by Tesla’s head of AI, Andrej Karpathy, during the CVPR 2021 event. Karpathy also claimed it might be roughly the fifth most powerful supercomputer in the world, after systems such as Sunway TaihuLight, but conceded that his team has not yet run the benchmarks required for the TOP500 supercomputer ranking.
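Those published figures are roughly self-consistent. Assuming the commonly quoted figure of about 312 TFLOPS of dense FP16 throughput per A100 (an assumption, since Tesla did not say which precision the 1.8 EFLOPS refers to), the node count works out to roughly that total:

# Back-of-the-envelope check of the quoted 1.8 EFLOPS figure, assuming
# ~312 TFLOPS of dense FP16 throughput per A100 GPU.
nodes = 720
gpus_per_node = 8
tflops_per_a100_fp16 = 312                        # dense FP16, no sparsity

total_gpus = nodes * gpus_per_node                # 5,760 GPUs
total_eflops = total_gpus * tflops_per_a100_fp16 / 1e6
print(f"{total_gpus} GPUs, ~{total_eflops:.2f} EFLOPS")   # ~1.80 EFLOPS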

Read more: Tesla Sues Rivian And Accuses Of Theft

The system ingests footage from eight high-definition cameras on each vehicle, capturing 36 frames per second of its surroundings, and the $10,000 FSD package uses them to detect highway ramps and lanes and respond to traffic signals. Tesla’s artificial intelligence team employs supervised machine learning to train the neural networks on large, clean, diverse datasets created with Tesla’s auto-labelling feature. The trained models are then deployed on the FSD (Full Self-Driving) computer, Tesla’s in-house chip comprising 12 CPU cores, a GPU delivering roughly 600 GFLOPS, and two custom NPUs (Neural Processing Units).
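Tesla has not released its training code; the bare-bones PyTorch loop below only illustrates the general supervised pattern described above, with random tensors standing in for auto-labelled camera frames and an invented four-class labelling scheme.

# Bare-bones supervised training loop (PyTorch). Random tensors stand in for
# auto-labelled camera frames; this shows the pattern, not Tesla's pipeline.
import torch
import torch.nn as nn

# Stand-in dataset: 64 "frames" of 3x64x64 pixels with 4 toy classes
# (e.g. lane / ramp / traffic light / other).
frames = torch.randn(64, 3, 64, 64)
labels = torch.randint(0, 4, (64,))

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                      # a few toy epochs
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)   # compare predictions with labels
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

# In production the trained weights would be exported and run on the
# in-vehicle inference computer rather than on the training cluster.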

By combining emerging technologies such as computer vision, GPU-clustered supercomputers, and NPUs, the new Tesla Autopilot is expected to provide a safer driving environment and help avoid accidents.


Upside AI Raised $1.2 Million Led By Endiya Partners


Upside AI has raised $1.2 million in a funding round led by venture capital fund Endiya Partners, with participation from Vijay Kedia, Ajay Nanavati, chairman of Quantum Advisors, and Gopichand Katragadda, CEO of Myelin Foundry.

The company plans to use the funds to expand its products and distribution, and to scale up the tech teams focused on growing assets under management (AUM) from high-net-worth individuals (HNIs), institutional investors, and family offices.

In 2019, the company showcased its first set of products, focused on equity investing and using machine learning algorithms to understand, identify, and buy into relevant companies.

Read More: Yellow Messenger Renamed To Yellow.ai To Launch Artificial Intelligence Powered Voice Bots

Upside AI is a SEBI (Securities and Exchange Board of India) recognized portfolio management service provider founded in 2017 by Nikhil Hooda, Kanika Agarwal, and Atanuu Agarwal. 

Since July 2019, the company has delivered 71% cumulative returns. Its assets have grown more than tenfold in the last year to over ₹55 crore, with funds from various high-net-worth individuals and family offices.

The co-founder of Upside AI, Atanuu Agarwal, said, “We believe the Indian asset management industry is in its early days with a single-digit penetration of household wealth.” He further added that the company aims to add more than a thousand HNIs, institutional clients, and family offices, and grow to ₹1,000 crore in AUM.

The co-founder also predicted that technology and rules-based investing would dominate the Indian stock market in the coming decade. 

“The funding will help Upside AI build a robust pipeline of differentiated tech products and a network of large distributors, wealth managers, and stockbrokers,” said the managing director of Endiya Partners, Sateesh Andra.

The company has already acquired numerous clients in India as well as in the United States, including prominent venture capitalists and CEOs of many multinational corporations.


Anduril Raises $450M In Series D Funding


Palmer Luckey, founder of the defense AI start-up Anduril, announced on Twitter that his company has raised an additional $450M, lifting its valuation to $4.6 billion. On Thursday, Luckey tweeted: “We just raised $450M in Series D funding for Anduril. It will be used to turn American and allied warfighters into invincible technomancers who wield the power of autonomous systems to safely accomplish their mission. Our future roadmap is going to blow you away, stay tuned!”

Anduril has been quietly growing its defense business since 2017, with Marine Corps and Customs and Border Protection contracts, and it is currently set to build an intelligent warfare platform for the Air Force’s Joint All-Domain Command and Control (JADC2) project.

Anduril produces border-control technologies such as surveillance towers, long-endurance drones, high-resolution cameras, and infrared sensors for tracking intruders, all of which can be deployed through the company’s artificial intelligence software, Lattice.

The defense tech company is now extending its technology to secure military bases and borders by creating virtual walls, knocking enemy drones out of the sky, and more.

Read more: Google Announces Wider Skin Tone Range For Better Artificial Intelligence Recognition

Commenting on the Series D round, Elad Gil, the global investor and former Twitter VP who led it, said that society is struggling to cope with growing crime and emerging threats, and therefore needs organizations like Anduril to extend their technologies across many fields, from natural disasters to cyber-attacks.

He added that Anduril’s use of advanced technologies such as machine learning, computer vision, and sensor fusion would help prevent human trafficking, combat wildfires, fight drug cartels, and defend energy infrastructure.

With this support and encouragement from tech investors, Anduril is expected to put the additional funding to work on these fronts.


Google Announces Wider Skin Tone Range For Better Artificial Intelligence Recognition


Google announced that it is developing new methods to produce a wider skin tone range, leaving the decades-old scale behind to ensure its artificial intelligence products are not biased against any skin tone.

The current standard used by Google, and by skin tone detection systems generally, is the Fitzpatrick skin type (FST) scale. The scale has six categories ranging from pale white to dark brown/black skin. It was introduced in the 1970s for dermatology, and tech companies widely rely on it for facial recognition and smartwatch pulse detection.
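For reference, the six Fitzpatrick categories are conventionally summarized roughly as in the snippet below; the exact wording varies between sources, so these descriptions are approximate.

# Approximate summary of the six Fitzpatrick skin types (phrasing varies by
# source); included only to show how coarse a six-bucket scale is for the
# computer vision use cases discussed here.
FITZPATRICK_SCALE = {
    1: "pale white skin; always burns, never tans",
    2: "white skin; usually burns, tans minimally",
    3: "light brown skin; sometimes burns, tans gradually",
    4: "moderate brown skin; burns minimally, tans easily",
    5: "dark brown skin; rarely burns, tans darkly",
    6: "deeply pigmented dark brown to black skin; never burns",
}

for skin_type, description in FITZPATRICK_SCALE.items():
    print(f"Type {skin_type}: {description}")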

Last October, at a federal technology conference, it was recommended that FST be abandoned for facial recognition because of its poor representation of the range of skin colors. In response, Google stepped forward to develop a new method for measuring skin tone.

Read more: Facebook’s Artificial Intelligence Can Now Detect Deepfake

Google recognized that its forthcoming products will rely on more robust artificial intelligence and will therefore be much more sensitive to skin tone, and that keeping the FST scale would lead to poor product performance for darker and yellow skin tones. To curb racial bias and make its products highly accessible, the company has leaped ahead of its competitive peers.

Artificial intelligence systems are sensitive to skin tone, and a scale with as limited a range as FST is not up to the job. This became evident in April when Facebook was testing its AI systems for deepfake detection: the researchers reported that FST does not capture the diversity of skin tones and failed to represent several tones between white and brown.

A study by University of California San Diego clinicians found that FST often fuels false assurances about smartwatch heart-rate readings for people with darker skin tones. For advanced artificial intelligence not to discriminate against anyone, a scale needs at least 12 to 18 tones rather than six shades, says colour expert Victor Casale.


Candy Shop Slaughter: A Video Game Created By GPT-3


Candy Shop Slaughter is a video game developed using GPT-3 by the Fractl gaming group for OnlineRoulette.com. It is a full-fledged game concept containing all the elements required for a successful mobile game.

Artificial intelligence is expanding into every domain, including the gaming industry. While there are many AI and VR games, this is the first game designed entirely by artificial intelligence. The game follows a main character named Freddy Skittle and has two modes: in story mode, the player accumulates points by performing various activities, while in arcade mode, it becomes a 3D fighter in which blood and guts are transmuted into candies and treats. The game also includes 12 additional AI-created characters that serve as bosses and companion players.

The game was fabricated with GPT-3, an OpenAI language model that can generate not only human-like text but also code. The model is designed to produce anything with a language structure, from answering questions to writing essays, translating, and summarizing text. In fact, Fractl had previously built a complete website whose entire blog content was written with GPT-2 and GPT-3.
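Fractl has not shared its prompts, so the snippet below is only a minimal sketch of this kind of prompt-driven generation, using the OpenAI Python client as it existed around the time of this article (the Completion endpoint, which has since been replaced); the prompt and parameters are invented for illustration.

# Minimal sketch of prompt-driven generation with the era's OpenAI Python
# client (Completion endpoint). Prompt and parameters are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",            # a GPT-3 model available at the time
    prompt="Write a short backstory for a candy-themed video game hero "
           "named Freddy Skittle.",
    max_tokens=150,
    temperature=0.8,
)
print(response["choices"][0]["text"])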

Read more: Microsoft’s Power Apps Will Allow You To Generate Code With GPT-3

Using the same technique, the team created all of the game’s characters, art, and gameplay with GPT-3. Joe Mercurio, Fractl’s creative strategy lead, came up with the idea and led the project, while Tynski, the cofounder, developed the AI outputs.

OnlineRoulette surveyed 1,000 gamers on what they thought of the game and its various aspects, and on whether they would pay for an AI-generated title. The survey found that 10% of gamers considered it unoriginal, 54% found Candy Shop Slaughter original, and 20% considered it very authentic. In addition, 67% ranked it as high quality, and 65% said they would be willing to pay for it.


The Pentagon Will Launch An Artificial Intelligence And Data Accelerator Initiative


The Pentagon, on Tuesday, announced that it aims to develop combatant command networks for the data-heavy, artificial intelligence-powered reality of the future battlefield. 

Kathleen Hicks, Deputy Secretary of Defense, said the Artificial Intelligence and Data Accelerator (AIDA) initiative will deploy technical teams to combatant commands to prepare military networks for Joint All-Domain Command and Control.

The Pentagon plans to quickly pass the best data from artificial intelligence-backed systems to military personnel. Hicks also said, “Its goal is to rapidly advance data and AI-dependent concepts like Joint All-Domain Command Control.”

Read More: FDL To Use Artificial Intelligence In Space Science Explorations

This initiative will depend upon the combatant commands’ experimentation events and exercises to test artificial intelligence capabilities. The implementation of this concept would require an advanced artificial intelligence system to process data on a battlefield. 

The AIDA initiative will create foundational capabilities through numerous exercises designed to generate continuous learning, said Hicks. The Department of Defense is creating operational data teams that will be dispatched to the eleven combatant commands.

The teams will manage, catalog, and automate the data feeds that inform decision-making, ensuring that data is captured and remains usable until it can be turned into decision advantages on the ground. The DOD will also strengthen these data relationships with additional “flyaway teams of technical experts” who will help soldiers streamline and automate workflows by integrating artificial intelligence.

The department will also use the gathered data to update its network infrastructure and improve the effectiveness of its global warfighting capabilities. Through successive experiments, the DOD plans to understand and analyze the issues that impair its current artificial intelligence capabilities.

Hicks also said, “This will produce data and operational platforms designed for real-time sensor-data fusion, automated command and control tasking, and autonomous system integration. It will allow data to flow across both geographic and functional commands.” 

At a different event, Lt. Gen. Dennis Crall said that the Pentagon had started an analysis to understand the technology gaps it needs to overcome to prepare for wars in the future. 


HPE Acquired Artificial Intelligence Startup Determined AI


Hewlett Packard Enterprise (HPE) recently announced that it has acquired the artificial intelligence startup Determined AI. The company said the acquisition will help customers train artificial intelligence models faster using Determined AI’s open-source machine learning (ML) platform.

The company plans to combine the startup’s artificial intelligence platform with its own high-performance computing platform to reduce the production time of artificial intelligence-powered products for better and quicker customer service. 

Determined AI was founded in 2017 by Evan Sparks, Neil Conway, and Ameet Talwalkar in San Francisco. The company quickly became popular in the industry after the launch of its open-source artificial intelligence platform in 2020. Their platform has been used in several different industries like autonomous vehicles, defense contracting, biopharmaceuticals, and manufacturing. 

Read More: Deep Learning For AI: A Paper By The Experts

Senior Vice President of HPE, Justin Hotard, said, “Determined AI’s unique open-source platform allows ML engineers to build models faster and deliver business value sooner without having to worry about the underlying infrastructure.” He further mentioned that as the world enters the age of insights, the customers have highlighted the requirement of machine learning to provide faster answers from their data. 

The company’s primary motive is to apply artificial intelligence training to projects that require specialized computing. HPE and the Determined AI team plan to make high-performance computing more accessible through HPE’s GreenLake edge-to-cloud service.

Determined AI’s platform enables researchers to innovate and speed up delivery by removing the complexity and cost associated with machine learning development.

Determined AI’s CEO, Evan Sparks, said, “Over the last several years, building AI applications has become extremely compute, data, and communication intensive.” He also added that the acquisition would accelerate the pace at which they can build artificial intelligence applications and expand their customer reach.


NVIDIA Partners With Equinix To Launch Its Artificial Intelligence Launchpad


NVIDIA recently announced a partnership with Equinix to launch its artificial intelligence launchpad, expanding its artificial intelligence hardware beyond the public clouds, where hyperscalers control the hardware designs. The offering uses co-location data centers that feel cloud-like but let customers buy and install NVIDIA DGX servers and have them hosted and managed by a third party, with cloud-style pricing on bare-metal instances if needed.

The company has also released its Base Command software as part of its complete artificial intelligence stack; NVIDIA uses it to run machine learning training on its own supercomputers.

Companies worldwide are planning to develop in the cloud and deploy within data centers for lower cost, data security, and workload isolation. Building that infrastructure themselves is a significant and time-consuming concern for enterprises.

Read More: NVIDIA Will Acquire DeepMap For Advancing The Autonomous Vehicle Industry

Original equipment manufacturers (OEMs) worldwide are trying to make their hardware consumable like the cloud while keeping it a physical asset that customers can control either on-premises or in co-location facilities.

General Manager of NVIDIA, Justin Boitano, said, “So instead of in our engagements where enterprises say, ‘I need to go buy servers so I will come back in two or three months to get started,’ they can get started instantly.” He added that this lets companies stand up their infrastructure quickly rather than building their own setup from scratch.

“This is going to aid customers to get started on this journey faster and show value to internal stakeholders before making bigger CAPEX investments – and help accelerate the entire cycle for us,” said Boitano. 

Equinix will start providing the artificial intelligence launchpad in the United States in the summer of 2021, focusing first on locations with enterprises ready to work on artificial intelligence before rolling out globally.
