
Run:ai Teams up with NVIDIA to enable Simplified Model Deployment in AI Production


Run:ai, an artificial intelligence (AI) compute orchestration company that raised US$75 million in March, is collaborating with NVIDIA in an effort to make life simpler for data scientists. To help businesses streamline their AI deployment, Run:ai recently released advanced model-serving functionality, unveiling updates to its Atlas Platform such as two-step model deployment, which makes deploying machine learning models simpler and faster.

Over the past several years, Run:ai has built a solid reputation by helping its users get the most out of their GPU resources, both on-premises and in the cloud, for model training. Developing models and putting them into production, however, are two distinct activities, and the latter is where many AI initiatives still fall short. Major obstacles to using AI in production include configuring a model, integrating it with data and containers, and allocating only the necessary amount of compute. Typically, deploying a model involves manually editing and loading time-consuming YAML configuration files.

It should therefore come as no surprise that Run:ai, which views itself as an end-to-end platform, is now going beyond training to let its customers run their inference workloads as effectively as possible, whether in a private or public cloud or at the edge. With Run:ai's new two-step deployment method, companies can easily switch between models, optimize for cost-effective GPU use, and make sure that models perform well in real-world settings.

In its official statement, Run:ai notes that running inference workloads in production takes fewer resources than training, which consumes a significant amount of GPU compute and memory. Companies occasionally run inference workloads on CPUs rather than GPUs, although this can increase latency. In many AI use cases, such as identifying a stop sign, using face recognition on a phone, or taking voice dictation, the end user needs a real-time response, and CPU-based inference may be too slow for these applications to be reliable.

Using GPUs for inference can reduce latency and improve accuracy, but it can be expensive and inefficient if the GPUs are not fully utilized. Run:ai's model-centric methodology automatically adapts to varying workload needs. With Run:ai, it is no longer necessary to dedicate an entire GPU to a single light application, which saves money while keeping latency low.
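
For a sense of what fractional GPU allocation can look like in practice, here is a minimal sketch using the official Kubernetes Python client. The "gpu-fraction" annotation key, the pod name, and the container image are illustrative assumptions, not Run:ai's documented API; the platform's own documentation describes the real mechanism.

```python
# Minimal sketch: request a fraction of a GPU for a lightweight inference pod.
# The "gpu-fraction" annotation is a hypothetical stand-in for the platform's
# real fractional-GPU mechanism; everything here is illustrative.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="light-inference",                # placeholder name
        annotations={"gpu-fraction": "0.25"},  # hypothetical: a quarter GPU
    ),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="model-server",
                image="my-registry/model-server:latest",  # placeholder image
            )
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```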

Another new feature of Run:ai Atlas for inference workloads is a set of inference-focused metrics and dashboards that report on the performance and overall health of the AI models currently in use. When feasible, the platform can even scale deployments down to zero automatically, freeing up precious resources for other workloads and cutting costs.

Read More: NVIDIA unveils Free version of Metaverse Development tool Omniverse at CES 2022

As a result of close cooperation between the two businesses, the company's platform now also integrates with NVIDIA's Triton Inference Server software. Businesses can therefore deploy several models, or multiple instances of the same model, and execute them simultaneously within a single container. NVIDIA Triton Inference Server is a component of the NVIDIA AI Enterprise software suite, which is fully supported and designed with AI deployment in mind. These features are mainly geared toward helping enterprises set up and run AI models for inference workloads on NVIDIA-accelerated computing so they can deliver precise, real-time responses.
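
To make the Triton integration concrete, here is a minimal client-side sketch using NVIDIA's tritonclient library. The model name and the input and output tensor names are assumptions for illustration; they must match the configuration of whatever model is actually deployed.

```python
# Minimal sketch: send one inference request to a running Triton server.
# "resnet50", "input__0", and "output__0" are illustrative names that must
# match the deployed model's config.pbtxt.
import numpy as np
import tritonclient.http as httpclient

triton = httpclient.InferenceServerClient(url="localhost:8000")

inputs = httpclient.InferInput("input__0", [1, 3, 224, 224], "FP32")
inputs.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))

result = triton.infer(model_name="resnet50", inputs=[inputs])
print(result.as_numpy("output__0").shape)  # e.g. (1, 1000) class scores
```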


The AI Image Generator DALL-E is now available in beta.


DALL-E is a "generative model," a type of machine learning system that produces creative output rather than predicting or classifying input data. Built by OpenAI, an AI research company, it draws its name from a portmanteau of the Pixar film WALL-E and the Surrealist painter Salvador Dalí. On Wednesday, OpenAI announced that DALL-E would open in beta to one million people on its waitlist.

If you're accepted, you'll receive 50 free image credits for your first month and 15 for each subsequent month. Each credit yields four images for an original text prompt, or three images for an edit or variation prompt. If the free credits are insufficient, a bundle of 115 credits is available for US$15.
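
A quick back-of-the-envelope calculation based on the figures above shows what the paid bundle works out to per image, assuming every credit is spent on an original text prompt:

```python
# Cost per image for the paid bundle, using the numbers quoted above.
credits = 115
price_usd = 15.0
images_per_credit = 4  # original text prompts return four images each

cost_per_credit = price_usd / credits                 # ~$0.13 per credit
cost_per_image = cost_per_credit / images_per_credit  # ~$0.033 per image
print(f"~${cost_per_image:.3f} per generated image")
```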

Additionally, the beta extends user permissions to cover commercial ventures; the pictures can be printed, for instance, on t-shirts or in children's books. With more people accessing the technology, privacy concerns have grown, but OpenAI offers safeguards. To protect people's safety, it has made clear that it will not allow the upload of real faces or the creation of "likenesses" of famous individuals. According to OpenAI, its improved filtering mechanism also means people can no longer "produce violent, pornographic, or political content, among other categories." The company worries that individuals would misuse the technology for undesirable purposes such as spreading misinformation and deepfakes.

To combat racial bias, the company has developed a new method that generates pictures of people that more accurately reflect the diversity of the world's population. Human oversight is also in place to prevent abuse of the technology.

OpenAI did not provide a timeframe for when it will begin emailing invites, although it is expected to start small and grow until it reaches one million users. It is also unclear what happens after that. The creators underlined that this is still the beta stage and that they are eager for customer feedback. In light of that, OpenAI will likely keep adjusting DALL-E until it can make the tool accessible to everyone.

In April, OpenAI unveiled DALL-E 2, an upgrade to its text-to-image generator DALL-E. Using cutting-edge deep learning algorithms, it builds on the success of its predecessor DALL-E and enhances the quality and resolution of the output images.

Read More: OpenAI unveils DALL-E 2, an updated version of its text-to-image generator

Last month, DALL-E Mini, a freely available text-to-image generator, received much attention online. Developed by machine learning engineer Boris Dayma, the tool was influenced by OpenAI's technology. Although far less remarkable, its outputs helped several people turn AI image generation into a hobby. Since it has no connection with OpenAI, DALL-E Mini recently changed its name to Craiyon to minimize confusion.


Google fires Blake Lemoine over claims of AI sentience


Google has fired software engineer Blake Lemoine for claiming that an artificial intelligence chatbot the company developed, Language Model for Dialogue Applications (LaMDA), had become sentient. The company dismissed Lemoine's claims, citing a lack of substantial evidence.

Google said that Lemoine's claims were wholly unfounded and that he had violated company policies by sharing confidential company information with third parties. The company added that it had looked into the matter thoroughly, conducting 11 reviews of LaMDA.

In an interview with The Washington Post in June, Lemoine claimed that LaMDA, the artificial intelligence he interacted with, was a real person with feelings. Not long after, Google suspended him over the claims.

Read More: GPT-3 Writes An Academic Thesis About Itself In 2 Hours

Lemoine asserted that the chatbot had consistently communicated to him its rights as a person; Google, in turn, informed him that he had violated the company's confidentiality policy by making such claims.

In a statement to the company, he affirmed his belief that LaMDA is a person with rights, such as being asked for consent before experiments are performed on it, and that it might even have a soul.

Lemoine even hired a lawyer for the AI chatbot last month. According to Lemoine, LaMDA chose to retain the lawyer's services after having a conversation with him, and the lawyer has since filed statements on behalf of Google's controversial AI system.

The company ultimately fired Lemoine for continuing to violate clear data security and employment policies that require safeguarding product information. The firing was first reported on Friday by Big Technology.


Does AI have potential to transform the Indian Education System?


The face of the education system in India has drastically changed in recent years, mainly because of the grueling years of the pandemic. The present-day education system is challenging, competitive, and demanding in terms of meeting international benchmarks. 

To meet today's educational standards, emerging technologies such as artificial intelligence (AI) are making long strides in the academic world, turning traditional teaching methods into a comprehensive learning system with augmented reality tools and simulation. But considering the unique challenges the Indian education system faces, how well can AI help transform it?

How can AI help in enhancing the Indian Education System? 

Artificial intelligence can automate administrative work, leaving teachers plenty of time to engage with students and guide them efficiently through academic challenges. AI can also help with school admissions by automating paperwork processing and categorization, and it can assist in grading tests by assessing both objective and subjective answer sheets.

Read More: Top BTech In Artificial Intelligence Institutes In India

AI automation can make quality education accessible to a larger population, both urban and rural, in the form of smart content. Educators can customize study materials according to the needs of the students in different areas with the help of AI applications. Besides, the learning material can be shared in diverse formats, including virtual forms such as lectures and video conferences. 

With AI, educators can understand the strengths and weaknesses of each student and work on them accordingly. AI can empower teachers to track students' progress and tailor an interdisciplinary, customized curriculum around what interests each student most. AI can also help identify and streamline students' career choices.

Moreover, incorporating artificial intelligence into the education system can help make it more inclusive. Easy access to the internet has brought school into every home. Even if students fail to secure admission to a school or cannot afford one, they can continue studying without interruption with the help of smart devices.

Scope of AI in the Indian Education System

The role of teachers in any education system is irreplaceable. While AI cannot replace the need for teachers in India, it can certainly aid them and improve how they work. Considering the state of most schools in rural India, the education system could use a boost from AI.

AI has the dynamic potential to enhance online education in India, a market expected to reach US$1.96 billion by the end of 2022. According to Business Today, 47% of learning management tools will be AI-enabled by 2024, and artificial intelligence in education is anticipated to grow at a compound annual growth rate of 40.3% between 2019 and 2025.

Many EdTech companies, such as the Indian startup SpeEdLabs, have already started developing and deploying AI-enabled intelligent instruction platforms to various schools in India to provide learning, testing, and tutoring to students.

Challenges 

The lack of access to new technology is a serious issue in India, and deployment will be a lengthy process. India is home to thousands of villages, some of which still lack reliable electricity, let alone education and the technology to enhance it. The cost of deploying AI can therefore be massive.

Besides, training teachers and educators across India will be time-consuming and will require stringent, properly executed plans for AI to thrive. Security risk is another concern: protecting the personal information of children, instructors, and parents can be an issue, and cyberattacks are a serious problem in online learning that can limit large-scale AI implementation.

Conclusion

AI is perfectly poised to reinvent and redesign the education sector in India. If implemented strategically, the combination of teachers' expertise and the best of artificial intelligence has the potential to shape the future of education and the whole concept of learning in India. Exposing students to AI technology at an early stage will also spark innovation and curiosity in their young minds.


Sklip AI Dermatoscope identifies skin cancer using smartphone


Sklip Inc. has announced that the US Food and Drug Administration (FDA) has granted its AI dermatoscope tool Breakthrough Device Designation. Sklip System AI is a tool that uses AI technology to detect early signs of skin cancer.

The Sklip dermatoscope uses patent-pending technology to attach directly to a smartphone or tablet without an adapter. When aligned with the phone's camera, the device lets the user take HD images of their moles. Sklip's AI software then automatically triages skin lesions, using an algorithm to determine whether a lesion shows signs of skin cancer.
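
Sklip has not published its model, but for a rough sense of what automated lesion triage involves, here is a generic image-classification sketch in PyTorch. The ResNet backbone, the two-class head, and the file name are illustrative assumptions, not Sklip's actual pipeline.

```python
# Illustrative only: a generic binary image classifier standing in for
# lesion triage. The backbone, labels, and preprocessing are assumptions.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # benign vs. suspicious
model.eval()  # a real system would load weights trained on lesion data

image = Image.open("mole.jpg").convert("RGB")  # placeholder file name
with torch.no_grad():
    probs = torch.softmax(model(preprocess(image).unsqueeze(0)), dim=1)
print(f"suspicious probability: {probs[0, 1]:.2f}")
```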

Sklip can be used for in-office skin exams by licensed medical professionals to improve virtual dermatology care. It can also be used to allow health-conscious individuals to identify conditions accurately from the convenience of home.

Read More: Mumbai Youngsters Win Award For Developing Artificial Intelligence-Powered Platform To Detect Skin Cancer

The FDA's Breakthrough Devices Program reviews innovative technologies through an expedited process. These are technologies that could enable more effective treatment of life-threatening and irreversibly debilitating human conditions.

The company will begin clinical trials at academic health centers in the US to further test this technology in real-world settings.

Sklip Inc. was founded by dermatologists and skin cancer experts Alexander Witkowski and Joanna Ludzik to improve healthcare access at lower cost through innovative tools and technology.


Will AI play an important role in the Metaverse?


The tech giant Facebook changed its name to Meta last year, demonstrating that the Metaverse is all set to become a dominant mainstream technology. With the simultaneous advancements in artificial intelligence and its increasing prevalence, the intersection of the Metaverse and artificial intelligence is inevitable. 

The Metaverse is a massively scaled, interoperable, and interactive real-time platform composed of interconnected virtual worlds where people can perform real-life activities. In its most complete form, the Metaverse will be a series of decentralized, interconnected virtual worlds with a fully functioning economy, much like the physical world.

AI in Metaverse

There are several ways in which artificial intelligence is being employed in the Metaverse, including supervised speech processing, content analysis, computer vision, and much more.

Read More: UK To Publish New Plans To Regulate Use Of AI

One of the most talked-about concepts of the Metaverse is the use of avatars, virtual replicas of people in the virtual world. Artificial intelligence can analyze 2D user images or 3D scans to create realistic and accurate avatars. People can change their avatar's hair color, style of clothing, and so on according to their preference. Companies like Ready Player Me have been using artificial intelligence to help build avatars for the Metaverse.

In the Metaverse, avatars can see and listen to other users to understand them. They can also use speech and body language to create human-like interactions. Digital humans are 3D chatbots that react and respond to others' actions just as an actual human would. These digital humans, or avatars, are built using artificial intelligence technology and are an essential element in the construction of the Metaverse.

With artificial intelligence, users across the globe will be able to interact freely in the Metaverse. Natural Language Processing (NLP), a subfield of AI, can help achieve this. NLP helps machines process and understand human language so that repetitive tasks can be performed automatically. AI can break down natural languages such as English or Hindi, transform them into a machine-readable representation, analyze them, and produce a response that is converted back into the user's language, creating a real-life effect.
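
As a small illustration of the language step described above, the sketch below uses the Hugging Face transformers library with a publicly available English-to-Hindi translation model. The model choice is ours for illustration, not something any metaverse platform has announced.

```python
# Minimal sketch: translate an English utterance into Hindi so that speakers
# of different languages could interact. Requires transformers and sentencepiece.
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-hi")
result = translate("Welcome to the metaverse!")
print(result[0]["translation_text"])
```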

One of the significant elements of AI is machine learning, whose model training requires training data. When an AI model is fed historical data, it analyzes previous outputs and suggests new outputs based on them. Such models are eventually expected to perform tasks and produce correct outputs much as human beings do.
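
A toy scikit-learn example makes this train-on-historical-data loop concrete; the "historical" data here is synthetic, generated purely for illustration.

```python
# Toy example: fit a model on historical inputs and outcomes, then check how
# well it predicts unseen cases. The data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))              # historical inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # historical outcomes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)  # learn from the past
print(f"accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```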

Considering the ways described above in which artificial intelligence is being employed in the Metaverse, it is evident that AI does, and will continue to, play an important role there. However, there are still open questions about how artificial intelligence will be employed and governed in the virtual world.

Challenges

The concept of the Metaverse is still very new, although a lot of research and engineering has already gone into it. Researchers are working on making the Metaverse a full-fledged virtual world, and while this is happening, several questions about its use need to be asked and answered. How will the Metaverse distinguish AI-generated virtual avatars from actual humans? How will it identify deepfakes?

Furthermore, as of now, there are very few safety and fraud protocols in the virtual world, let alone anyone responsible for enforcing them. The question therefore arises: will the Metaverse allow users to exploit artificial intelligence and code for illegal gain in the virtual world? The questions are many, and the answers are few.

Conclusion

According to experts, as research advances, the Metaverse is expected to thrive with the assistance of artificial intelligence, and the unanswered questions about the virtual world's functionality and credibility are expected to be answered along the way.

Until the Metaverse becomes popular and attracts large numbers of users, its models will continue to lack data to learn from. As a result, the teams dedicated to solving the Metaverse's problems will be unable to fix much in advance; they will have to wait for issues to arise and respond from there. For that reason, there is still a lot of uncertainty around the Metaverse and how it will function. One thing is certain, however: artificial intelligence will play an integral role in it.


Raspberry Pi recreates T-800 Terminator Skull using Machine Learning


Michael Darby, the maker behind 314Reactor, has recreated the T-800 Terminator head with the help of a Raspberry Pi 4, a small single-board computer, and a slew of accessories.

The project is housed inside a replica prop of a T-800 Terminator skull that is big enough to accommodate a full-sized Pi. Along with the Pi, Darby installed a speaker in the skull for audio output and a camera in the eye. The skull uses machine learning to synthesize speech and recognize objects detected via the camera module.

The full project details were shared at Hackster. According to the write-up, the parts include a Raspberry Pi 4 (8 GB), LEDs, an Adafruit BrainCraft HAT, a camera module, and a speaker from Seeed Studio.
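
Darby's actual source code is on his GitHub profile, mentioned below; as a rough sketch of the two behaviors involved, the snippet here grabs a camera frame, runs a stock OpenCV face detector standing in for the project's object-recognition model, and speaks a line with the offline pyttsx3 engine.

```python
# Rough sketch of the skull's two behaviors: see something, say something.
# This is not Darby's code; the face detector stands in for his object model.
import cv2
import pyttsx3

camera = cv2.VideoCapture(0)   # Pi camera exposed as a V4L2 device
ok, frame = camera.read()
camera.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        engine = pyttsx3.init()   # offline text-to-speech
        engine.say("I'll be back.")
        engine.runAndWait()
```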

Read More: Tesla Plans To Launch Optimus Humanoid Robot Within The Next Few Months

Darby found the T-800 skull prop at a UK-based online shop called The Cave. He said this was his third attempt at creating a working T-800 skull; previous attempts involved methods like 3D printing to replicate the prop.

This is an open-source project, which means the code used to drive the T-800 skull is freely available for anyone to use. The source code can be found on Michael Darby's GitHub profile.

To learn how to recreate this Raspberry Pi project or get a closer look at how it works, visit the complete build guide at Hackster, and check out the demo video shared on YouTube.


IBM announces Db2 operator on AWS for Kubernetes


IBM has announced the launch of the Db2 operator for Kubernetes on AWS Elastic Kubernetes Service (EKS). The tech corporation has promised the same will be available on Azure AKS during the second half of 2022.

The containerized Db2 offering was first launched as a way to get a cloud-native version of the database into the cloud; however, it was only available on Red Hat OpenShift.

IBM has now ported the Db2u universal container services to Amazon EKS, and the same is expected for Azure AKS. Db2 database administrator Ember Crooks has written about the backstory of Db2u, and IBM has released a tutorial introducing the product.
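
For readers wondering what deploying an operator-managed database instance can look like, here is a hedged sketch using the Kubernetes Python client. The custom resource's group, version, kind, and spec fields below are illustrative assumptions; IBM's Db2u documentation defines the actual schema.

```python
# Hedged sketch: create an operator-managed Db2 custom resource on a cluster.
# The apiVersion, kind, and spec fields are assumptions, not IBM's real schema.
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl already points at the EKS cluster

db2_cluster = {
    "apiVersion": "db2u.databases.ibm.com/v1",   # assumed group/version
    "kind": "Db2uCluster",                       # assumed kind
    "metadata": {"name": "db2-demo", "namespace": "db2"},
    "spec": {"size": 1, "version": "11.5.7.0"},  # illustrative fields
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="db2u.databases.ibm.com",
    version="v1",
    namespace="db2",
    plural="db2uclusters",
    body=db2_cluster,
)
```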

Read More: AWS Announces A Machine Learning Scholarship With Udacity

According to Scott Konash, Db2 director at Datavail, a database support and services company, IBM’s support for Db2 in Kubernetes on AWS and Azure was well received by users looking for a cloud migration path for their on-prem databases.

In the earlier days of the development of cloud services, IBM struggled to persuade users that the optimal way to transfer Db2 to the new infrastructure was to use its cloud services.

With no fully managed relational database service for Db2 on AWS, there was previously no feasible way to get Db2 onto Amazon's platform, which still dominates the market. IBM is now becoming more open with Db2u containerization, a move that could be a game changer for keeping existing customers on the database while offering them a route to the cloud.


AI system helps assess severity of Psoriasis


An artificial intelligence (AI) system developed at the University of Yamanashi, Japan, can now help medical professionals assess the severity of psoriasis. The study was published online as a short report in the Journal of the European Academy of Dermatology and Venereology.

Takashi Okamoto and colleagues from the University of Yamanashi have created a simplified Psoriasis Area and Severity Index (PASI) system, called Single-Shot PASI, built around AI models capable of assessing psoriasis severity.

During model development, the team used 705 psoriasis images of the front and back of patients; ten images were used to validate the deep learning system.

Read More: AIIMS Launches AI App DermaAid To Help Diagnose Skin Diseases

Thirteen board-certified dermatologists and nine medical students scored the test sets without any assistance from AI. They then referred to the AI's scores and reevaluated their own previous scores.

With the assistance of artificial intelligence, the standard deviation among evaluators and the mean absolute differences from the AI scores were significantly reduced.
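
The two metrics in question are straightforward to compute; the numpy sketch below does so on made-up PASI scores for a single patient.

```python
# Compute the two reported metrics on hypothetical scores for one patient:
# the spread among human evaluators, and their mean absolute gap from the AI.
import numpy as np

evaluator_scores = np.array([12.4, 15.1, 10.8, 13.6])  # hypothetical PASI scores
ai_score = 12.9                                        # hypothetical AI score

spread = evaluator_scores.std()
mean_abs_diff = np.abs(evaluator_scores - ai_score).mean()
print(f"std dev among evaluators: {spread:.2f}")
print(f"mean |evaluator - AI|:    {mean_abs_diff:.2f}")
```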

The Single-Shot PASI system reduces the burden of scoring psoriasis severity for dermatologists. Even when the AI scores are not used directly, referencing them afterward reduces deviations between dermatologists' evaluations.

Moreover, the AI application and Single-Shot PASI system can serve as an instant, objective tool for checking psoriasis severity. The hope is that many dermatologists and patients will use the system to address psoriasis.


MeetKai Unveils World’s First Publicly Available AI-Powered Metaverse at Times Square


Recently, MeetKai announced the opening of the first publicly accessible AI-powered metaverse, which can be reached through VR headsets and any smart device.

An authentic metaverse is a digitally replicated environment where users may engage in various social, recreational, and artistic activities. It can be considered a virtual parallel universe to our analog reality. While we are chasing the idea of having an interactive virtual twin, we can expect real-world elements to populate the virtual environments too. Expanding on this concept is MeetKai, a conversational AI and metaverse startup with more than 20 million users and deployments in more than 30 countries. 

An actual billboard at the intersection of 7th Avenue and 47th Street has been transformed into an anamorphic gateway to MeetKai's metaverse, a virtual version of New York City's Times Square that is free and open to the public. After scanning a QR code, users can explore the same place physically and digitally at the same time, interact with other players, and travel to different realms inside MeetKai's metaverse.

James Kaplan, the CEO, and Weili Dai, the former president and co-founder of Marvell Technology Group, are the architects behind MeetKai. They built the company's fundamental AI technology during its formative years and are now fusing it with the metaverse. With over 83 percent of the world's population owning a smartphone, MeetKai's metaverse not only offers VR experiences for headsets but is also open, simple to use, and ultimately intended to be available on all browser-compatible devices, including tablets and PCs, with no installation required. Such initiatives can go a long way toward reducing the digital divide as the metaverse goes mainstream.

Additionally, MeetKai’s AI ensures that communication replicates actual human conversation rather than forcing users to modify their dialogue to fit how computers absorb information.

Read More: Top 13 Metaverse Companies in the World 2022

This "Phase 1 Beta" activation is the first move in MeetKai's larger vision: to map the entire world, improve reality rather than attempt to replace it, and use AI and VR to make activities like shopping, collaborating, and learning simpler and more pleasant.

Users will have the chance to collect rare “Key to the City – NYC Edition” non-fungible tokens (NFTs), which can authenticate exclusive digital assets. NFTs will enable users to access gift cards and exclusive features on MeetKai. Additionally, the owner of one of the keys will have the opportunity to have a metaverse street named after them based on their hobbies. The company also intends to conduct a scavenger hunt in New York using NFTs.

The icing on the cake is that all users, regardless of location, will be able to claim MeetKai Metaverse citizenship and visit an alternative Louvre metaverse, also created by MeetKai, where they can explore the museum, see the Mona Lisa without the crowds, and even have a quick chat with a non-player-character version of Leonardo da Vinci. The gateway and billboard in Times Square will be open until August 11, 2022.
