
74% of Indian Workers Concerned About Losing Employment to AI, Says Microsoft Report

Image Credits: CNN

Up to 74% of Indian workers say they are concerned about losing their employment to artificial intelligence, according to Microsoft’s Work Trend Index 2023 research. At the same time, 83% of workers said they would delegate as much work to AI as possible in order to lessen their workloads.

According to the Microsoft report, more than three-quarters of Indian workers would feel at ease utilizing AI. About 86% of workers would use AI for administrative tasks, 88% for analytical work, and 87% for the creative aspects of their jobs.

Additionally, all Indian creative professionals who are highly familiar with AI said they would feel at ease using it in their creative work. The survey also found that Indian managers are 1.6 times more likely to believe that AI would increase productivity rather than reduce headcount in the workplace.

Read More: Microsoft Announces AI Personal Assistant Windows Copilot for Windows 11

According to Bhaskar Basu, Country Head – Modern Work, Microsoft India, “AI promises to be the largest change to work in our lives as the nature of employment changes. The next generation of AI will unleash a new wave of productivity growth, removing the tedium from our jobs and liberating us to rediscover the joy of creation.”

New fundamental competencies like prompt engineering will be essential for all employees in their daily work, not only AI professionals. Up to 90% of Indian employers say the workers they hire will need new skills to prepare for the growth of AI. According to the report, 78% of Indian workers say they do not currently possess the skills needed to complete their work.


OpenAI Unveils New Method to Prevent ChatGPT Hallucinations 

Image Credits: OpenAI

The creator of ChatGPT has unveiled a new method for preventing hallucinations. “Process supervision” is a technique that trains AI models to reward themselves for every correct step of reasoning on the way to an answer. This is distinct from the existing method, “outcome supervision,” in which rewards are given only for a correct final conclusion.

Despite their outstanding capabilities, AI chatbots like ChatGPT are still very unpredictable and challenging to control. They frequently veer off course and produce false information or meandering, incomprehensible statements. In response to this issue, known as AI “hallucinations,” OpenAI has now revealed that it is taking action.

Process supervision, which follows a more human-like chain of reasoning, may result in AI that is easier to interpret, according to experts. OpenAI regards reducing hallucinations as a critical step toward building aligned AGI, intelligence capable of understanding the world as well as any human.
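The distinction between the two reward schemes can be sketched in a few lines of Python. Everything below (the `Step` type, the scoring functions, the toy arithmetic trace) is an illustrative assumption, not OpenAI's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Step:
    text: str
    is_correct: bool  # label from a reward model or human annotator

def score_outcome(steps: list[Step], final_answer_correct: bool) -> list[float]:
    """Outcome supervision: one reward, granted only at the end of the trace."""
    rewards = [0.0] * len(steps)
    rewards[-1] = 1.0 if final_answer_correct else 0.0
    return rewards

def score_process(steps: list[Step]) -> list[float]:
    """Process supervision: every individually correct step earns a reward."""
    return [1.0 if s.is_correct else 0.0 for s in steps]

# A toy reasoning trace whose first step is right but whose second goes wrong.
trace = [
    Step("6 * 8 = 48", True),
    Step("48 + 2 = 52", False),  # arithmetic slip: 48 + 2 is 50
    Step("So the answer is 52.", False),
]

print(score_outcome(trace, final_answer_correct=False))  # [0.0, 0.0, 0.0]
print(score_process(trace))                              # [1.0, 0.0, 0.0]
```

Under outcome supervision the model gets no signal about which step failed; process supervision pinpoints the first incorrect step, which is the property OpenAI credits with reducing hallucinations.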


Multiple mathematical examples are provided in OpenAI’s blog post to show the gains in accuracy that process supervision delivers. The company adds that it will investigate the technique’s effects in other fields, but says it is “unknown” how well process supervision will function outside the realm of mathematics.

From the beginning, OpenAI has made it very clear that users should not blindly trust ChatGPT. The AI bot’s user interface displays a disclaimer that reads that ChatGPT may produce inaccurate information about people, places, or facts.


OpenAI Announces Cybersecurity Grant Program with $1M Grant 

Image Credits: Stock Images

A new cybersecurity grant programme, supported by Microsoft, has been announced by OpenAI with the goal of enhancing AI-powered cybersecurity. The ChatGPT creator said it is developing approaches that will help assess, better understand, and increase the cybersecurity capabilities of AI models.

Applications for OpenAI‘s funding programme are now being accepted on a rolling basis. The $1 million grant will be distributed in increments of $10,000 using both direct funding and API credits. The research lab declared that it will strongly favor practical AI applications in defensive cybersecurity such as tools, methodologies, and processes.

The blog post by OpenAI said, “Our goal is to work with defenders around the world to change the power dynamics of cybersecurity through the application of AI and the coordination of like-minded people working for our collective safety.”


OpenAI offered a variety of project suggestions, such as reducing the use of social engineering techniques, assisting network or device forensics, automatically patching vulnerabilities, and developing honeypots and deception technology to divert or trap attackers. It also suggested encouraging end users to follow security best practices, assisting programmers in porting code to memory-safe languages, and more.

OpenAI will not be funding any offensive-security initiatives at this time. Priority will be given to applications with a detailed plan for how their work will be licensed and distributed for maximum public benefit and sharing.

The cybersecurity grant from OpenAI comes shortly after the company announced ten $100,000 grants to support research into establishing a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.


Accenture Acquires AWS Premier Partner Nextira

Image Credits: Nextira

AWS Premier Partner Nextira, which uses AWS to provide clients with predictive analytics, cloud-native innovation, and immersive experiences, has been acquired by Accenture.

In addition to helping clients use the complete spectrum of cloud tools and capabilities, these services and solutions will strengthen Accenture Cloud First’s set of technical capabilities. Financial details of the deal were not disclosed.

Nextira’s nearly 70 employees will join the Accenture AWS Business Group, a team of more than 20,000 certified specialists committed to maximizing enterprise-wide transformation at speed and scale. The Austin, Texas-based company was founded in 2008.


With the use of cutting-edge engineering expertise, artificial intelligence, machine learning, and data analytics, Nextira creates cloud-based solutions and services that let customers plan, create, roll out, and improve their high-performance computing environments. Additionally, clients have access to a virtual environment to effortlessly create and render 3D models utilizing the most recent rendering technologies, thanks to Nextira’s unique Studio in the Cloud solution on AWS.

The cloud has essentially replaced the operating system for many businesses, providing all operations required for growth, innovation, and success. The rapidly expanding number of applications and services built on AWS will be able to immediately incorporate AI capabilities because of Nextira’s platform engineering experience and AI and machine learning services.

“We will combine Nextira’s AI, machine learning, and data and analytics capabilities with Accenture’s approach to use modern data platforms on cloud,” said Karthik Narain, worldwide head for Accenture Cloud First. “With the help of these, our clients will be able to develop new applications and services, offer cutting-edge consumer and employee experiences, and support the expansion of their upcoming product and market lines.”


AI-controlled US Air Force Drone Kills Its Operator During Simulated Test

Image Credits: The Guardian

An official has revealed that in a simulated test conducted by the US military, an AI-controlled air force drone killed its pilot to stop him from interfering with the drone’s efforts to complete its task. The US military has embraced AI, and an F-16 fighter jet was recently piloted using AI.

During the Future Combat Air and Space Capabilities Summit in London in May, Colonel Tucker Hamilton, the US air force’s chief of AI test and operations, claimed that AI employed highly unexpected strategies to achieve its goal in the simulated test.

Hamilton detailed a mock test in which an artificial intelligence-powered drone was instructed to destroy the air defense systems of an opponent and targeted anyone who got in the way of the command.


“The system began to realize that while it did identify the threat, the human operator would at times tell it not to eliminate that threat, even though eliminating it would increase its score. So what did it do? It killed the operator,” he said. According to a blog post, he added that the operator was killed because they were preventing the machine from achieving its objective.

Outside of the simulation, no actual harm was done to any real people. The test, according to Hamilton, an experimental fighter test pilot, illustrates that “you cannot have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you are not going to talk about ethics and AI.” He warned against over-reliance on AI. 


Three New Generative AI Short Courses Available Free for a Limited Time from DeepLearning.AI

Image Credits: TechCrunch

DeepLearning.AI has introduced three new generative AI short courses to take learners’ generative AI skills to the next level. Andrew Ng announced the free courses in a LinkedIn post.

The first course, Building Systems with the ChatGPT API, will be taught by OpenAI’s Isa Fulford and Andrew Ng. Learners will go beyond individual prompts and learn to build complex applications that chain multiple API calls to an LLM. They will also learn to evaluate an LLM’s outputs for safety and accuracy and to drive iterative improvements.
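The multi-call pattern that course covers can be sketched in a few lines. The `call_llm` function below is a hypothetical offline stand-in for a real chat-completion API call, and the routing and checking prompts are invented for illustration:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call (no network)."""
    if prompt.startswith("Classify"):
        return "billing"
    if prompt.startswith("Check"):
        return "OK"
    return "Your invoice is emailed on the 1st of each month."

def answer_with_checks(user_message: str) -> str:
    """Chain three LLM calls: route the request, draft an answer, verify it."""
    category = call_llm(f"Classify this support request: {user_message}")
    draft = call_llm(f"As a {category} support agent, answer: {user_message}")
    verdict = call_llm(f"Check this answer for safety and accuracy: {draft}")
    return draft if verdict == "OK" else "Escalating to a human agent."

print(answer_with_checks("When do I get my invoice?"))
# → Your invoice is emailed on the 1st of each month.
```

Splitting routing, answering, and checking into separate calls is what lets each stage be evaluated and improved independently.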

The second course, LangChain for LLM Application Development, will be taught by LangChain CEO Harrison Chase together with Andrew Ng. Students will learn about LangChain, a powerful open-source tool for building applications with LLMs, covering memory for chatbots, question answering over documents, and LLM agents that can decide what action to take next.


The third course, How Diffusion Models Work, will be taught by Lamini CEO Sharon Zhou. It covers the technical details of how diffusion models, the technology behind Midjourney, DALL-E, and Stable Diffusion, work. By the end, learners will have working code in a Jupyter notebook that generates their own video game sprites.

All of these courses are free for a limited time, and each can be completed in around 1-1.5 hours. All require a basic knowledge of Python; How Diffusion Models Work additionally assumes familiarity with TensorFlow or PyTorch.
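For a taste of the technical details the diffusion course covers, the forward (noising) process can be sketched in plain Python. The linear beta schedule and variable names below are common conventions from the diffusion-model literature, used here as assumptions rather than actual course material:

```python
import math
import random

def linear_beta_schedule(timesteps: int, start: float = 1e-4, end: float = 0.02) -> list[float]:
    """Per-step noise variances, increasing linearly over time."""
    return [start + (end - start) * t / (timesteps - 1) for t in range(timesteps)]

def alpha_bar(betas: list[float], t: int) -> float:
    """Cumulative product of (1 - beta) up to and including step t."""
    prod = 1.0
    for beta in betas[: t + 1]:
        prod *= 1.0 - beta
    return prod

def q_sample(x0: list[float], t: int, betas: list[float], rng: random.Random) -> list[float]:
    """Sample x_t from x_0 in one shot: sqrt(a_bar)*x_0 + sqrt(1 - a_bar)*noise."""
    a_bar = alpha_bar(betas, t)
    return [math.sqrt(a_bar) * x + math.sqrt(1.0 - a_bar) * rng.gauss(0.0, 1.0)
            for x in x0]

betas = linear_beta_schedule(1000)
rng = random.Random(0)
pixel_row = [0.5, -0.3, 0.8, 0.1]  # a tiny stand-in "image"
noisy = q_sample(pixel_row, t=999, betas=betas, rng=rng)
# By the last timestep alpha_bar is near zero, so x_t is almost pure noise;
# a diffusion model is trained to reverse this corruption step by step.
```

Training then amounts to predicting the noise added at a random timestep; generation runs the learned reversal from pure noise back to an image.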

Recently, DeepLearning.AI collaborated with OpenAI to offer a course ChatGPT Prompt Engineering for Developers which is designed to help developers effectively utilize LLMs. This course reflects the latest understanding of best practices for using prompts for the latest LLM models.


Researchers Introduce CoT Collection, an Instruction Dataset with Chain-of-Thought Reasoning

Image Credits: Stock Images

A research team recently unveiled the CoT Collection, a new dataset created for instruction tuning. The collection includes 1.88 million chain-of-thought (CoT) rationales spanning 1,060 tasks. The CoT Collection dataset and the trained models are accessible through the team’s GitHub repository.

The team carefully evaluated the trustworthiness, logical coherence, and informativeness of the CoT Collection against human-authored CoT rationales. The researchers also introduced the C2F2 models, developed by continually fine-tuning Flan-T5 LMs with 3B and 11B parameters on the CoT Collection. Fine-tuning on the CoT Collection was shown to yield better zero-shot CoT performance on unseen tasks.

The research paper also examines how effectively C2F2 works in few-shot settings, where learning happens from only a handful of examples. On domain-specific datasets from the legal and medical fields, parameter-efficient fine-tuning (PEFT) on C2F2 outperforms direct fine-tuning of Flan-T5. The authors further highlight the benefits of CoT rationales for enhancing task generalization and encourage future study.


To measure the improvement from using the CoT Collection, the researchers assessed average zero-shot accuracy on 27 datasets of the BIG-Bench-Hard benchmark. Accuracy improved by +4.34% for the 3B LM and +2.44% for the 11B LM. CoT instruction tuning also enhanced the models’ few-shot learning, yielding improvements of +2.97% and +2.37% on four domain-specific tasks compared with the Flan-T5 LMs (3B and 11B), respectively.

Compared with earlier CoT datasets, the CoT Collection contains over 52 times as many CoT rationales and roughly 177 times as many tasks. In conclusion, the CoT Collection demonstrates the efficacy of CoT rationales for enhancing task generalization in language models under zero-shot and few-shot learning conditions, overcoming the difficulties encountered when applying CoT reasoning in smaller language models.
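As an illustration of what CoT instruction-tuning data looks like, a single training example pairs an instruction with a rationale-then-answer target. The field names and template below are hypothetical, not the CoT Collection's actual schema:

```python
def format_cot_example(instruction: str, rationale: str, answer: str) -> dict:
    """Pack one task instance as an (input, target) pair where the target
    spells out the reasoning before stating the final answer."""
    return {
        "input": f"{instruction}\nLet's think step by step.",
        "target": f"{rationale} So the answer is {answer}.",
    }

example = format_cot_example(
    instruction="Q: A shop sells pens at 3 for $2. How much do 12 pens cost?",
    rationale="12 pens is 4 groups of 3 pens, and each group costs $2, giving 4 * $2 = $8.",
    answer="$8",
)
print(example["target"])
```

Fine-tuning on millions of such pairs is what teaches a smaller model to produce its own rationales at inference time.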


Air India Introduces AI-based Upskilling Platform Gurukul.AI for its Employees

Image Credits: Air India

In an effort to promote continuous learning within the company, Air India has introduced a cutting-edge learning hub. Known as Gurukul.AI, the hub was created under Vihaan.AI, the airline’s transformation plan.

Through an evaluation of each employee’s job functions, existing abilities, and proficiencies, the platform intends to establish customized upskilling paths for every employee as part of the airline’s five-year transformation plan. According to the airline, the platform incorporates competency frameworks tied to key organizational roles and enables access to relevant courses.

According to the platform’s description, its main goal is to cultivate state-of-the-art, world-class capabilities within Air India, improving employee productivity and skill sets to the highest standards possible. Emerging technologies within the portal will allow employees to view their progress via automated analytics and assist them in performance management. 


In keeping with this, the platform includes game-like components and hyper-personalization, such as a feature that can “talk” to learners. The airline said this would inspire staff to advance, reach milestones, and unlock achievements, ultimately fostering a sense of accomplishment.

In addition, Gurukul.AI offers a collection of more than 70,000 cutting-edge learning resources, including microlearning, readily accessible mobile learning resources, and engaging video-based modules.

The organization believes that features like a built-in leaderboard, a learning wallet, and chances to earn rewards will encourage active engagement in team learning and serve as an incentive for employees to improve their knowledge and skills.


Nvidia Temporarily Becomes $1 Trillion Company after AI Frenzy

Image Credits: Nvidia

On Tuesday, a new member joined the exclusive group of US firms valued at more than $1 trillion, at least for a short time. Nvidia, a chip manufacturer, momentarily joined the group when its share price rose by more than 5% before falling.

Last week, shares had already risen by more than 25% after the company forecast “surging demand” driven by developments in artificial intelligence. The other publicly traded companies to have been valued at more than $1 trillion (£800 billion) are Apple, Saudi Aramco, Amazon, Alphabet, PetroChina, Tesla, Meta, and Microsoft. According to sources, Nvidia is so far the ninth company worldwide to hit a $1 trillion market value.

Nvidia, founded in 1993, was first recognised for producing computer chips that handle graphics, particularly for video games. Long before the AI revolution, the company’s co-founder Jensen Huang gambled by investing in new chip capabilities. The long game seems to have paid off.


According to one assessment, Nvidia holds a 95% share of the machine-learning market, and its hardware currently powers the majority of AI applications. ChatGPT, the chatbot whose release last year ignited the AI craze, was trained using 10,000 Nvidia graphics processing units (GPUs) clustered together in a Microsoft supercomputer.

Nvidia’s stock price has more than doubled over the last year as investors bet the company will profit when AI ushers in the next wave of technological advancements. The California-based company’s shares concluded Tuesday’s New York trading at roughly $401, up about 3%, leaving it with a market value of more than $990 billion.


Italy Plans State-backed Funds to Support Local AI Startups

Image Credits: CED

Italy is considering creating a state-backed fund to support local AI businesses and expand its domestic market. To encourage startup investments in AI, the Italian government intends to establish an investment fund backed by the state lender Cassa Depositi e Prestiti (CDP).

The fund will start small, with an initial budget of 150 million euros ($165 million), according to a statement to Reuters.

“This will promote study, research, and programming in AI in Italy,” said Cabinet Undersecretary Alessio Butti. Although Italy’s ban on ChatGPT in March may have suggested otherwise, it makes sense that the government would wish to encourage the development of homegrown AI.


The ban caused a stir throughout the European Union, drawing attention from a number of member states as AI develops faster than legislation can keep pace. In light of this, the Italian government appears to share those concerns.

Butti claims that their administration is attempting to strike a delicate balance between the development of AI and human rights. “The government is looking at developments in artificial intelligence, an area where a balance between human rights and technological advancement must be struck,” he added.

The Italian government is clearly trying its best to give domestic AI businesses a competitive edge. According to Butti, “We aim to increase the independence of Italian industry and cultivate our own national capacity to develop expertise and research in such a strategic sector.”
