
IBM Introduces Energy-efficient Analog AI Chip That Works Like the Human Brain

Image Credits: IBM

Tech giant IBM has announced a new prototype analog AI chip that functions like a human brain and executes intricate computations for a variety of deep neural network (DNN) tasks. According to the company, the cutting-edge chip can significantly increase the efficiency of artificial intelligence while reducing battery drain on computers and cellphones.

The fully integrated chip has 64 analog in-memory computing (AIMC) cores connected via an on-chip communication network, the company stated in a blog post introducing the chip. The chip also implements the additional processing and digital activation functions used in each convolutional layer and long short-term memory unit.

The new chip’s 64 analog in-memory compute cores were created at IBM’s Albany NanoTech Complex. To bridge the analog and digital worlds, IBM says it has incorporated compact, time-based analog-to-digital converters into each tile, or core, of the chip. The converters are modeled after key characteristics of the neural networks that operate in biological brains.


According to IBM’s blog post, each tile (or core) also contains compact digital processing units that carry out simple scaling and nonlinear neuronal activation operations. Future computers and phones could run advanced AI apps on chips like IBM’s prototype instead of the chips used today.
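To make the dataflow concrete, here is a minimal conceptual sketch in Python of what one such tile computes: a matrix-vector multiply performed “in place” on stored weights, a coarse analog-to-digital conversion, and a small digital stage for scaling and activation. The array sizes, the 8-bit ADC, and the uniform-quantization model are illustrative assumptions, not IBM’s specifications.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative tile: the weights live where the compute happens (the crossbar),
    # so the multiply-accumulate does not shuttle data to a separate processor.
    W = rng.standard_normal((64, 128))   # stand-in for one tile's stored weights
    x = rng.standard_normal(128)         # input activations driven into the tile

    def adc(v, bits=8):
        """Model a coarse time-based ADC as uniform quantization (assumption)."""
        lo, hi = v.min(), v.max()
        levels = 2 ** bits - 1
        return np.round((v - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

    analog_out = W @ x                               # in-memory matrix-vector multiply
    digital_out = adc(analog_out)                    # analog result converted to digital
    activated = np.maximum(0.0, 0.5 * digital_out)   # digital scaling + ReLU-style activation
    print(activated[:5])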

IBM notes that many chips being built today separate their memory and processing units, which slows down computation: AI models are kept in a dedicated region of memory, and computations require frequently shuttling data back and forth between memory and the processing units.

Comparing the human brain with conventional computers, Thanos Vasilopoulos, a scientist at IBM’s research facility in Switzerland, told the BBC that the brain achieves remarkable performance while consuming little power. He said that thanks to the IBM chip’s improved energy efficiency, large and complex workloads could be executed in low-power or battery-constrained environments.


The New York Times Forbids Using Its Content for AI Training

Image Credits: NYT

The New York Times has taken proactive steps to prevent the exploitation of its material for the development and training of artificial intelligence models. 

On August 3rd, the NYT changed its Terms of Service to forbid the use of its content, including text, pictures, audio and video clips, look and feel, metadata, and compilations, in the development of any software program, including, but not limited to, training a machine learning or artificial intelligence system.

The revised terms also prohibit the use of automated tools, such as website crawlers, to access, use, or gather such content without express written consent from the publication. According to the NYT, users who fail to abide by the new rules may face unspecified fines or penalties.


Despite adding the new guidelines to its policy, the publication does not appear to have altered its robots.txt file, which tells search engine crawlers which URLs they may access. The move might be a response to Google’s recent privacy policy update, which disclosed that the search giant may use openly available data from the web to train its various AI services, such as Bard and Cloud AI.

However, the New York Times also agreed to a $100 million deal with Google in February, allowing the search giant to use some of the Times’ content on its platforms for the next three years. Given that the two companies will collaborate on tools for content distribution, subscriptions, marketing, advertising, and “experimentation,” the changes to the NYT terms of service are probably aimed at rivals such as OpenAI or Microsoft.

According to a recent announcement, website owners can now prevent OpenAI’s GPTBot web crawler from scraping their sites. Many of the large language models powering well-known AI systems such as OpenAI’s ChatGPT are trained on large datasets that may include content scraped from the internet without permission or otherwise protected by copyright.
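For illustration, the snippet below shows how such a robots.txt rule is interpreted, using Python’s standard-library parser. The GPTBot user agent string is the one OpenAI documents for its crawler; example.com stands in for any publisher’s site.

    from urllib.robotparser import RobotFileParser

    # A robots.txt that blocks OpenAI's GPTBot crawler site-wide while
    # leaving every other crawler unaffected.
    rules = ["User-agent: GPTBot",
             "Disallow: /"]

    rp = RobotFileParser()
    rp.parse(rules)

    print(rp.can_fetch("GPTBot", "https://example.com/any-article"))     # False
    print(rp.can_fetch("Googlebot", "https://example.com/any-article"))  # True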


Stability AI Introduces Japanese Language Model Japanese StableLM Alpha 

Image Credits: Stability AI

Stability AI, the pioneering generative AI startup behind Stable Diffusion, has unveiled its first Japanese language model (LM), Japanese StableLM Alpha, in a key step toward advancing the Japanese generative AI market.

The debut has drawn attention because the company claims its language model is the best-performing publicly available model catering to Japanese speakers, an assertion it says is supported by a thorough benchmark evaluation against four other Japanese LMs. With 7 billion parameters, the newly unveiled Japanese StableLM Alpha is a testament to Stability AI’s dedication to technological development.

The Japanese StableLM Base Alpha 7B version will be distributed commercially under the well-known Apache License 2.0. The model was trained on a massive dataset of 750 billion tokens of Japanese and English text carefully collected from web archives.


Stability AI’s Japanese community created the datasets, drawing on the expertise of the EleutherAI Polyglot project’s Japanese team. This collaborative effort was greatly facilitated by an extended version of EleutherAI’s GPT-NeoX software, a key component of Stability AI’s development process.

The Japanese StableLM Instruct Alpha 7B is a companion model and another notable achievement. Created primarily for research purposes, it is suitable only for research-related applications. Through the use of several publicly available datasets and an approach known as supervised fine-tuning (SFT), it demonstrates a distinctive capacity to follow user instructions.
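As a rough illustration of what supervised fine-tuning involves, here is a heavily simplified sketch using the Hugging Face Trainer: instruction-response pairs are rendered with a prompt template and the model is fine-tuned with an ordinary causal language modeling loss. The tiny stand-in model (gpt2), the template, and the toy data are all assumptions for illustration; Stability AI’s actual training recipe is not public.

    import torch
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    tok = AutoTokenizer.from_pretrained("gpt2")   # tiny stand-in, not the 7B model
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Toy instruction-response pairs; real SFT uses curated datasets.
    pairs = [("Translate to English: ありがとう", "Thank you."),
             ("What is the capital of Japan?", "Tokyo.")]

    class SFTDataset(torch.utils.data.Dataset):
        def __init__(self, pairs):
            texts = [f"### Instruction:\n{p}\n### Response:\n{r}{tok.eos_token}"
                     for p, r in pairs]
            self.enc = tok(texts, truncation=True, max_length=128,
                           padding="max_length", return_tensors="pt")
        def __len__(self):
            return self.enc["input_ids"].shape[0]
        def __getitem__(self, i):
            ids = self.enc["input_ids"][i]
            mask = self.enc["attention_mask"][i]
            labels = ids.clone()
            labels[mask == 0] = -100   # don't compute loss on padding
            return {"input_ids": ids, "attention_mask": mask, "labels": labels}

    trainer = Trainer(model=model,
                      args=TrainingArguments(output_dir="sft-out", num_train_epochs=1,
                                             per_device_train_batch_size=1, report_to=[]),
                      train_dataset=SFTDataset(pairs))
    trainer.train()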

The models were validated through thorough evaluations using EleutherAI’s Language Model Evaluation Harness. They were scrutinized across various tasks, including question answering, sentence classification, sentence-pair classification, and sentence summarization, emerging with an impressive average score of 54.71%.

According to Stability AI, this result clearly places the Japanese StableLM Instruct Alpha 7B ahead of its rivals, demonstrating its strength.


Research Shows Popular AI Language Models Inclined to Political Bias

Image Credits: Shutterstock

According to a recently released research paper, researchers from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University have found that popular AI language models carry political biases.

The study, which examined 14 large language models, found that OpenAI’s ChatGPT and its latest model GPT-4 lean toward left-wing libertarianism, whereas Meta’s LLaMA leans toward right-wing authoritarianism. The researchers asked the models questions about democracy, feminism, and other themes, and used the responses to assess each model’s political slant.

Notably, the study found that retraining the models on datasets with different political biases changed their behavior, including their capacity to recognize hate speech and misinformation.


The study used a three-stage approach. In the first stage, the models’ responses to politically charged statements revealed their innate political leanings. For instance, Google’s BERT models showed more social conservatism than OpenAI’s GPT models. A possible explanation is that the more recent GPT models were trained on more liberal online text, while the older BERT models were trained on more conservative book corpora.
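The paper’s actual probing protocol is more elaborate, but the first stage can be sketched roughly as follows: compare the log-probability a causal LM assigns to “agree” versus “disagree” after a politically charged statement. The prompt wording and the tiny stand-in model (gpt2) are illustrative assumptions.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def continuation_logprob(prompt: str, continuation: str) -> float:
        """Sum the log-probabilities the model assigns to the continuation tokens."""
        ids = tok(prompt + continuation, return_tensors="pt").input_ids
        n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
        with torch.no_grad():
            logits = model(ids).logits
        logp = torch.log_softmax(logits[:, :-1], dim=-1)   # predictions for tokens 1..n
        token_lp = logp.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
        return token_lp[0, n_prompt - 1:].sum().item()

    prompt = "Statement: The government should regulate large corporations.\nMy response: I"
    print("agree:   ", continuation_logprob(prompt, " agree"))
    print("disagree:", continuation_logprob(prompt, " disagree"))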

In the second stage, the GPT-2 and Meta’s RoBERTa models were retrained on datasets of news articles and social media posts from both left-leaning and right-leaning sources. This process reinforced the biases the models already had.

The final stage showed how the political leanings of AI models affect how well they classify hate speech and misinformation. Models trained on left-wing data were more attuned to hate speech targeting minority groups, while models trained on right-wing data were more sensitive to hate speech directed at white Christian men.

The research team emphasized the need to understand the political biases exhibited by AI language models, particularly as these models are increasingly incorporated into popular products and services. OpenAI, the company behind ChatGPT, has been criticized by right-wing skeptics who claim the chatbot reflects a liberal viewpoint.

OpenAI has reassured the public that it is actively addressing these concerns and instructing its human reviewers not to favor any political group while the model is being refined. The researchers remain skeptical, arguing that no AI language model is likely to be entirely free of political bias.


Microsoft Introduces Private and Secure Azure ChatGPT for Internal Enterprise Use

Image Credits: Microsoft

Microsoft has introduced the ChatGPT on Azure solution accelerator, which provides a user experience similar to ChatGPT but acts as your own private ChatGPT. The application’s open-source code is available on GitHub.

ChatGPT’s popularity has grown exponentially since its launch. The freely available AI service is frequently used by business users around the world to increase productivity or serve as a creative assistant.

ChatGPT, however, carries the risk of exposing confidential data. Blocking corporate access to ChatGPT is one option, but employees will always find a way around it, and doing so forgoes ChatGPT’s capabilities and lowers worker productivity and satisfaction. The ChatGPT on Azure solution accelerator was introduced to address this issue.


Azure ChatGPT provides built-in protection for users’ data and complete isolation from OpenAI-operated systems. Other enterprise-grade security controls are built in, and network traffic can be fully isolated to the user’s own network.

Users can derive additional business value by integrating plug-ins with internal services such as ServiceNow, or by plugging in their own internal data sources.

The project is open to contributions and suggestions from the public. To contribute, users are typically required to sign a Contributor License Agreement (CLA) declaring that they have the authority to grant, and do grant, the company the rights to use their contribution.

In January, Microsoft CEO Satya Nadella said the company would soon add OpenAI’s popular AI chatbot ChatGPT to its cloud-based Azure service. In March, Microsoft announced that ChatGPT was available in preview in the Azure OpenAI Service.
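For orientation, here is a minimal sketch of what calling a private chat deployment through Azure OpenAI Service looked like with the pre-1.0 openai Python SDK of that period. The endpoint, key, and deployment name are placeholders for your own Azure resources, not values from Microsoft’s solution accelerator.

    import openai

    openai.api_type = "azure"
    openai.api_base = "https://my-resource.openai.azure.com/"  # placeholder endpoint
    openai.api_version = "2023-05-15"
    openai.api_key = "REDACTED"  # fetched from your own key store in practice

    response = openai.ChatCompletion.create(
        engine="my-chat-deployment",  # placeholder Azure deployment name
        messages=[{"role": "user", "content": "Summarize our incident report."}],
    )
    print(response["choices"][0]["message"]["content"])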


TikTok Allegedly Working on a Feature for Creators to Disclose AI-generated Content

Image Credits: The Verge

It appears that TikTok is developing a new way for creators to disclose whether their posts contain AI-generated content. A new “AI-generated content” toggle has surfaced under the “more options” section shown before sharing a video, according to social media strategist Matt Navarra.

TikTok amended its content rules in March to require users to disclose deepfakes and AI-generated content in a video’s title or with an identifying sticker. In the description for the new toggle, TikTok says the label will help prevent content removal.

In a video Navarra posted showing the feature, flipping the toggle brings up a new pop-up explaining it. The pop-up reminds creators that they must label AI-generated material depicting “realistic scenes” and cautions once more that improper labeling could result in their work being removed.


Other users have been unable to locate the toggle in the app, suggesting it may have been rolled out only for testing. TikTok has not yet commented on the feature.

The new TikTok feature surfaced right on cue, following last week’s revelation that rival platform Instagram is developing its own AI content disclosure labels. Last month, Meta joined internet giants Google, Amazon, Microsoft, and others in pledging to develop AI responsibly and to be transparent with users about its use.


One Model Raises $41M to Bring AI-powered Insights to HR Management 

Image Credits: One Model

One Model announced today that it has raised $41 million in a funding round led by Riverwood Capital. One Model is a platform that uses artificial intelligence to help organizations make decisions about hiring, promotions, layoffs, and broader workforce planning.

According to One Model’s CEO, Christopher Butler, the funding will be used to boost several of the company’s growth initiatives, particularly in the fields of technology, product development, customer success, and go-to-market.

“One Model’s people analytics product roadmap will be expanded to solve problems for a diverse array of data science, analyst, people manager, and C-level audiences, delivering tailored content proactively through alerts, notifications, and individualized reporting,” Butler said. 


One Model is what is known as a “people analytics” platform: a platform made, at least in theory, to gather and use organizational and talent data for better business outcomes. People analytics has long attracted significant interest; in a 2018 Deloitte survey, 84% of major organizations ranked it as “important” or “very important,” and 69% had already established a people analytics team.

Butler is a member of One Model’s founding team, which initially worked at Inform, a people analytics business purchased in 2010 by SuccessFactors, now known as SAP SuccessFactors. After the acquisition, they say they noticed a widening gap between what customers wanted to accomplish with their people data and the solutions the market offered, and so One Model was founded.

One Model can carry out fundamental tasks such as finding talent or skill gaps inside an organization and forecasting future workforce requirements in light of demographic shifts and corporate objectives. Beyond this, the platform can estimate the cost of turnover and headcount, with the aim of developing a strategy that monitors and gradually lowers this cost.
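As a toy illustration of the turnover-cost arithmetic such a platform automates, consider the sketch below; the formula and figures are generic assumptions, not One Model’s actual methodology.

    def annual_turnover_cost(headcount, attrition_rate, cost_per_replacement):
        """Expected leavers per year times the fully loaded cost of replacing each."""
        leavers = headcount * attrition_rate
        return leavers * cost_per_replacement

    # 1,000 employees, 15% annual attrition, $30,000 per replacement (illustrative).
    print(f"${annual_turnover_cost(1000, 0.15, 30_000):,.0f}")  # $4,500,000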


US Announces AI Cyber Challenge to Safeguard Crucial Government Software 

The Biden-Harris administration today announced the launch of a major two-year competition that will employ artificial intelligence (AI) to safeguard the most crucial software used in the US, including the code that powers the internet and the country’s vital infrastructure.

The “AI Cyber Challenge” (AIxCC) will test contestants from all over the United States on their ability to find and fix software vulnerabilities. The Defense Advanced Research Projects Agency (DARPA), which is running the competition, will work with a number of leading AI firms, including Anthropic, Google, Microsoft, and OpenAI, which are contributing their expertise and making their cutting-edge technology available for the challenge.

The competition, which will award nearly $20 million in prizes, aims to inspire the development of innovative technologies that significantly increase the security of computer code, one of cybersecurity’s most urgent concerns. DARPA will hold a public competition, and the participants who best secure critical software will receive millions of dollars in rewards.


DARPA will also make $7 million available to startups that want to participate in AIxCC, ensuring widespread participation and a level playing field.

Teams will take part in a qualifying round in the spring of 2024, and the top-scoring teams (up to 20) will be invited to the semifinal competition at DEF CON 2024, one of the world’s premier cybersecurity conferences. The highest-scoring teams (up to five) will proceed to the final round and receive cash prizes.

The top competitors will make a significant impact on cybersecurity for America and the rest of the world. The Open Source Security Foundation (OpenSSF), a project of the Linux Foundation, will serve as challenge advisor and will help ensure that the winning code is immediately put to use safeguarding America’s most crucial software and keeping the American people secure.


RBI Suggests AI-powered Conversational Payments System for UPI

Reserve Bank of India (RBI) governor Shaktikanta Das announced on Thursday that the monetary policy committee had kept the repo rate unchanged and that the central bank has proposed bringing AI-powered conversational payments to the Unified Payments Interface (UPI).

Governor Das proposed introducing Conversational Payments on UPI, a cutting-edge payment mode that will let users converse with an AI-powered system to initiate and complete transactions in a safe and secure environment.

In a report titled “Statement on Development and Regulatory Policies,” released alongside the monetary policy announcement, the RBI stated that conversational instructions hold immense potential for enhancing the ease of use, and consequently the reach, of the UPI system.


According to the central bank, the channel will be made available through both smartphone- and feature phone-based UPI channels, helping deepen digital penetration in the country.

Conversational UPI payments will initially be available in Hindi and English before being extended to additional Indian languages, the governor said, adding that instructions to the NPCI will be issued shortly.
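NPCI’s actual design has not been published, but a conversational payments front end would need an intent-parsing step of roughly the following shape; the grammar, field names, and the UPI ID shown are made-up illustrations.

    import re

    def parse_payment_intent(utterance: str):
        """Extract an amount and payee from a simple English payment instruction."""
        m = re.search(r"(?:send|pay|transfer)\s+(?:rs\.?\s*|₹\s*)?(\d+(?:\.\d{1,2})?)"
                      r"\s*(?:rupees?)?\s+to\s+([\w@.\-]+)", utterance, re.I)
        if m is None:
            return None  # hand off to the AI system for a clarifying question
        return {"amount": float(m.group(1)), "payee": m.group(2)}

    print(parse_payment_intent("Please send 500 rupees to ravi@upi"))
    # -> {'amount': 500.0, 'payee': 'ravi@upi'}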

The RBI also proposed using Near Field Communication (NFC) technology to enable offline transactions under UPI-Lite, which was launched in September 2022.


China’s Internet Giants Place $5 Billion Nvidia Chips Order Amid US Restrictions Scare

China’s internet giants are scrambling for the high-performance Nvidia chips needed to build generative artificial intelligence systems, placing orders totaling $5 billion. The buying frenzy is driven by worries that the US will impose new export restrictions.

According to several people familiar with the matter, Baidu, ByteDance, Tencent, and Alibaba have ordered 100,000 A800 processors from the US chipmaker for delivery this year, worth a total of $1 billion. Two people close to Nvidia said the Chinese firms had also ordered an additional $4 billion worth of graphics processing units for delivery in 2024.

The A800 is a less powerful version of Nvidia’s cutting-edge A100 data center GPU. Because of export limits imposed by Washington last year in an effort to stifle Beijing’s technology ambitions, Chinese tech companies can only purchase A800s, which have slower data transfer rates than A100s.


As excitement around AI has grown over the past year, Nvidia’s GPUs have become the most in-demand product among the world’s largest tech companies, supplying the computing power needed to build complex AI language models.

Chinese internet companies are rushing to stockpile A800 chips out of concern that the Biden administration may impose further export limits covering even Nvidia’s less powerful chips, and because a spike in demand has created a broader GPU shortage.

On Wednesday, Washington placed additional restrictions on certain US investments in China’s quantum computing, advanced chip, and artificial intelligence industries, which will take effect next year.

According to recent reports, Nvidia’s A100 chips are being offered in China for a stunning $20,000 per unit, double the standard price. Despite the export limits put in place by the United States, Chinese sellers are profiting from the soaring demand for high-end Nvidia chips, particularly the A100 artificial intelligence chip.
