
Meta Releases Clues on How AI is Used for Instagram and Facebook

Meta AI
Image Source: Analytics Drift

Meta, the parent company of Instagram and Facebook, released tools and information to help users understand how AI influences what they see on its apps.

The company has introduced nearly two dozen explainers, covering features like Instagram Stories and Facebook’s news feed, that describe how Meta selects the content it recommends to users. The move is part of Meta’s “wider ethos of openness, transparency, and accountability,” according to a blog post by the company’s President of Global Affairs, Nick Clegg.

In the blog post, Clegg also writes, “With rapid advances taking place with powerful technologies like generative AI, it’s understandable that people are both excited by the possibilities and concerned about the risks.”

Read More: Meta to stop sharing news on Facebook, Instagram in Canada to comply with Bill C-18

European lawmakers are moving swiftly on legislation that could require explanation and transparency from companies that use AI, and US lawmakers hope to begin work on similar legislation later this year.

Meta’s newly-released 22 system cards for Facebook and Instagram will provide information on how its AI systems rank content and predict what content might be most relevant to its users. They cover feeds, reels, stories, and other surfaces that people visit to find content from the people or accounts they follow. The system cards also cover AI systems that recommend “unconnected” content from accounts, people, or groups they don’t follow. 

Users can now customize their feeds on both Facebook and Instagram through a menu on individual posts. Where Meta previously only let users tell Instagram to show less of something, the new tools also let them ask for more of a certain content type.

Meta will also provide a content library and an application programming interface (API) featuring a variety of content from Facebook and Instagram for researchers to study its platforms.

Additionally, the platform is expanding the Why Am I Seeing This? feature to Facebook Reels, the Instagram Reels tab, and Explore over the coming weeks. The feature was previously available for some ads and Feed content on both Facebook and Instagram.


“Godfather of AI” Urges Governments to Safeguard Humanity

Godfather of AI Geoffrey Hinton
Image Credits: Pindula

Speaking at the Collision tech conference, AI scientist Geoffrey Hinton, widely known as the “Godfather of AI,” urged governments to intervene and ensure that machines do not seize control of society. Hinton, who recently left Google after a decade of service, emphasized the need to address the potential risks associated with AI, especially in the wake of ChatGPT’s release.

Addressing a packed audience of more than 30,000 industry professionals in Toronto, Hinton highlighted the importance of understanding how AI could potentially try to take control. He expressed his concerns by stating, “Before AI is smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might try and take control away. Right now, there are 99 very smart people trying to make AI better and one very smart person trying to figure out how to stop it taking over, and maybe you want to be more balanced.”

Hinton’s concerns regarding AI were not without substance, as he emphasized the need to address the risks seriously. He stated, “I think it’s important that people understand this is not science fiction; this is not just fear-mongering. It is a real risk that we must think about, and we need to figure out in advance how to deal with it.”

Read More: OpenAI Sued for Stealing Massive Amounts of Personal Data

In addition to potential control issues, Hinton also expressed concerns about how AI could worsen inequality. He expressed his fear that the benefits of AI adoption would primarily flow to the wealthy, leaving workers behind. “The wealth isn’t going to the people doing the work. It will go into making the rich richer and not the poorer, and that’s very bad for society,” he added.

Hinton also addressed the dangers of fake news generated by AI-powered bots like ChatGPT. He expressed hope that AI-generated information could be watermarked, similar to how central banks watermark actual money. The European Union is currently exploring such a technique as part of its AI Act, which is being negotiated by legislators to establish AI standards in Europe.


IIT Jammu Introduces B.Tech in Mathematics and Computing to Signify Their Interdependence 

IIT Jammu B.Tech Mathematics Computing
Image Credits: IIT Jammu

The Indian Institute of Technology (IIT) Jammu is launching a new undergraduate programme, a B.Tech in Mathematics & Computing, in response to rising demand for workers with both mathematics and computer science skills.

According to an IIT Jammu spokesperson, the course aims to explore the deep relationship between mathematics and the booming fields of artificial intelligence, machine learning, and data science, and its primary goal is to empower young minds to contribute to groundbreaking developments in these areas.

The four-year B.Tech in Mathematics & Computing programme is designed to give students a thorough understanding of how mathematics and computer science are interrelated. Topics covered include statistics, real and complex analysis, and optimisation methods, alongside a solid grounding in the foundational computer science concepts necessary for the field.

Read More: Vimeo Introduces AI-Powered Script Generator And Text-Based Video Editor

Selection will be based on rank in the JEE (Advanced) national entrance exam. Candidates must have passed their Class XII or equivalent examination from a recognised board with Physics, Chemistry, Mathematics, a language, and one additional subject.

The spokesperson added that the programme serves as a launching pad for young minds to build the future of AI and contribute to its revolutionary potential across numerous industries, giving graduates broad and exciting job prospects.


Google and Startup India Launch Startup School 2023

Google and Startup India to launch Startup School
Image Credits: Google for Startups

Google has launched the second edition of its virtual Startup School, supported by Startup India and the Department for Promotion of Industry and Internal Trade (DPIIT). The second edition of the program will commence on July 11, 2023.

The program aims to support the growth of startups, especially in India’s non-metropolitan areas. Over eight weeks, Startup School 2023 plans to reach 30,000 Indian startups with tailored knowledge and mentorship.

The comprehensive program designed by Google covers a wide range of essential topics, including AI, funding, leadership, global growth, government support for entrepreneurs, and product and tech strategy.

Read More: AI Startup Cohere Raises $270 Million in Series C Funding, Values at $2.2 Billion

Startup School 2023 will feature over 30 Google and industry experts, with fireside chats in which transformative entrepreneurs and VCs share their insights and lead informative sessions to promote and support startups in India.

Some of the distinguished speakers leading these programs include Bikram Bedi, Managing Director of Google Cloud India; Nithin Kamath, Founder & CEO of Zerodha; Shuvi Shrivastava, Partner at Lightspeed, and Aastha Grover, Vice President of Invest India and Head of Startup India. Other speakers include Gayatri Yadav, Chief Marketing Officer of Peak XV Partners; Ashish Kashyap, Founder & CEO of Indmoney; Afsar Ahmad, Co-founder of Gameberry Labs; Vani Kola, Managing Director of Kalaari Capital, and Sanjeev Barnwal, Founder & CTO of Meesho.

Manmeet K Nanda, Joint Secretary at the DPIIT, Ministry of Commerce and Industry, expressed her gratitude and pleasure at the collaboration, emphasizing its objective of scaling up startups through the program.

Startup School 2023 is open to all DPIIT-recognised startups. If you’re a leading startup team interested in learning from experienced entrepreneurs, you can quickly register through Google’s registration process.

Overall, the collaboration presents an exciting opportunity for startups in India to gain valuable mentorship and guidance on the resources required to thrive in the dynamic digital startup ecosystem.


OpenAI Sued for Stealing Massive Amounts of Personal Data

OpenAI Sued Stealing Personal Data
Image Credits: Analytics Drift

OpenAI, the company behind the widely-used ChatGPT tool, is facing a lawsuit claiming it unlawfully collected and utilized large quantities of personal data from the internet to train its AI models. 

According to the 160-page complaint filed in a California federal court, OpenAI allegedly scraped “massive amounts of personal data from the internet,” obtaining nearly every piece of exchanged information without authorization. The scale of the alleged scraping is described as unprecedented. OpenAI and its major investor, Microsoft, did not immediately comment on the lawsuit.

The lawsuit further alleges that OpenAI’s products utilized stolen private information, including personally identifiable data, from millions of internet users, including children, without their knowledge or informed consent. Such uninformed data usage raises ethical concerns and highlights the potential risks of exploiting personal data.

Read More: Microsoft and Nvidia Invest in Inflection AI’s $1.3 Billion Funding

The legal action seeks injunctive relief, calling for a temporary halt on the commercial use of OpenAI’s products. It also demands the payment of “data dividends” to compensate individuals whose information was used to develop and train OpenAI’s AI tools. These demands reflect the increasing importance of safeguarding individuals’ data rights and promoting responsible data practices within AI development.

Timothy K. Giordano, a partner at Clarkson, the law firm behind the suit, expressed his concerns, stating, “By collecting previously obscure personal data of millions and misappropriating it to develop a volatile, untested technology, OpenAI put everyone at an unacceptable level of risk in terms of responsible data protection and use.”

OpenAI gained significant attention following the launch of ChatGPT, a tool known for generating human-like responses to user prompts. Its success ignited an AI arms race, with tech companies large and small scrambling to integrate AI tools into their products. The outcome of this legal battle will have significant implications not only for OpenAI but also for the broader AI community, and the lawsuit highlights the need for reliable data protection and usage practices to ensure an ethical AI ecosystem.


Europe to Launch “Crash Test” Facilities for AI to Ensure Safety

Europe launches Crash Test Center for AI
Image Source: Reuters

New technologies emerge every day. But who will oversee these innovations, and what guarantees that content produced by AI tools is safe to use? To address AI safety, misuse, and the spread of misinformation, the European Union (EU) plans to establish AI testing facilities across Europe.

The project launched four testing facilities across Europe on Tuesday, backed by an investment of $240 million. These crash test centers, both physical and virtual, will start operating next year. They will also help inform public policy on AI and support the growth of the AI industry while ensuring that these technological innovations are trustworthy and safe.

The crash centers will give technology providers space to test artificial intelligence (AI) and robotics in real-world settings within the manufacturing, food, agriculture, and healthcare sectors.

Read More: EU Launch of Google’s Bard Delayed Due to Privacy Concerns

On 27th May 2023, Ms. Lucilla Sioli, Director for AI and Digital Industry at the European Commission, highlighted the misinformation, risks, and impacts associated with AI. She explained why technology providers are expected to bring “trustworthy AI” to market and how the crash centers’ capabilities will help them test and validate their applications.

Meanwhile, the Technical University of Denmark has been selected to lead one of the crash test centers, which will act as a safety filter between technology providers and users in Europe, complementing regulations such as the EU’s AI Act. The EU recently took a crucial step toward regulating AI by imposing new restrictions on risky uses of the technology. The bloc has long been privacy-conscious when it comes to technology, and the crash test facilities are a further step toward ensuring that AI innovations are safe and trustworthy.


Microsoft Unveils Free Generative AI Skill Training Course 

Microsoft unveils free generative AI skill training course
Image Credits: Stock Images

Microsoft has unveiled a new AI skills initiative to help individuals and organizations worldwide better understand artificial intelligence. The effort, part of Microsoft’s Skills for Jobs programme, includes new free coursework created with LinkedIn.

This new course launched by Microsoft and LinkedIn offers free introductory generative AI learning materials. Through the programme, individuals will master fundamental AI principles, including an examination of responsible AI frameworks, and will graduate with a Career Essentials certificate. This should benefit anyone seeking a professional qualification in generative AI.

According to a recent Nasscom analysis, India has the second-largest AI talent pool in the world and is at the top of the list in terms of AI talent concentration and related skills. The demand for AI/ML big data analytics tech expertise in India still outpaces supply by 51%, even with the country’s present skill pool of about 420,000 professionals.

Read More: Vimeo Introduces AI-Powered Script Generator And Text-Based Video Editor

Microsoft says AI is poised to establish a whole new way of working at a time when the velocity of information work is overwhelming human capacity to keep up, and its new effort is meant to help individuals improve their AI skills. The tech giant says it has trained approximately 70,000 female students from Tier II and III towns in AI skills over the past two years.

“AI skills represent the third highest priority for companies’ training strategies, alongside analytical and creative thinking,” said Gunjan Patel, director and head of Philanthropies at Microsoft India. “The potential for AI to empower workers is enormous. But we must make sure that everyone is equipped with the necessary abilities. The new AI Skills Initiative is a fresh start that will build on an upcoming wave of technological innovation.”


ChatGPT Developer OpenAI Picks London for First Office Outside the US

OpenAI London Office Announcement
Image Source: Getty Images

OpenAI, the US company behind ChatGPT, has selected London for its first international office outside of the United States. Having received multibillion-dollar backing from Microsoft, OpenAI said the focus of its London office would be on research and engineering.

The announcement comes days after UK Prime Minister Rishi Sunak’s speech at London Tech Week, stating, “If our goal is to make this country the best place in the world for tech, AI is surely one of the greatest opportunities before us.” Mr. Sunak wants the UK to be an AI technology superpower, with London as home to a global body tasked with legislating AI.

The UK boasts a number of AI companies, including Google’s DeepMind, data analytics giant Palantir, Thought Machine, Infogrid, Builder.ai, Quantexa, OneTrust, Darktrace, and Wayve.

Read More: OpenAI Introduces Bing Search to ChatGPT on iOS

Sam Altman, CEO of OpenAI, stated, “We see this expansion as an opportunity to attract world-class talent and drive innovation in AGI (artificial general intelligence) development and policy.”

According to Bloomberg, OpenAI had been considering Poland or France for its first international office. However, its decision to go with London is seen as “another vote of confidence for Britain as an AI powerhouse,” as per Chloe Smith, UK Secretary for Science, Innovation, and Technology.

“We’re thrilled to extend our research and development footprint into London, a city globally renowned for its rich culture and exceptional talent pool,” said Diane Yoon, OpenAI’s VP of People.

While no details have been provided on when the office will open or how many people it will employ, OpenAI has posted four open roles, including solutions architect and security engineer.


First AI-Generated Drug Enters Phase II Trials

Insilico Medicine AI-generated drug
Image Source: Insilico Medicine

Insilico Medicine, the Hong Kong-based biotech startup backed by private equity giant Warburg Pincus and Chinese conglomerate Fosun Group, has begun human trials of its AI-designed drug.

Known as INS018_055, the fully generative-AI-designed drug targets idiopathic pulmonary fibrosis (IPF), a chronic disease that causes scarring in the lungs. The condition affects about 100,000 people in the United States and, if left untreated, can lead to death within two to five years, according to the National Institutes of Health.

Currently, the treatments for IPF are pirfenidone and nintedanib, which may provide some relief or slow the progression of symptoms but don’t halt or reverse the damage. Unpleasant side effects of these drugs include diarrhea, nausea, loss of appetite, and weight loss.

Read More: Harvard University Employs AI Chatbot as CS Instructor

“There are very few options for people with this terrible condition, and the prognosis is poor,” said Insilico Medicine’s CEO Alex Zhavoronkov. Initial studies indicate that INS018_055 has the potential to address some of the limitations of existing therapies.

The drug produced positive topline data in Phase I in early 2023, with international multi-site Phase I studies demonstrating consistent results that indicated favorable tolerability and safety of the AI-generated INS018_055. Because the drug proved generally safe and well tolerated by the healthy volunteers in the study, the supportive data led to the initiation of the Phase II study.

A 12-week trial will include subjects diagnosed with IPF, and the Phase II study will assess the tolerability, safety, preliminary efficacy, and pharmacokinetics of the drug. Insilico also has plans to expand the testing population by recruiting 60 subjects with IPF at about 40 sites in China and the U.S.

Insilico’s chief medical officer, Sujata Rao, M.D., stated, “If our Phase IIa study is successful, the drug will then go to Phase IIb with a larger cohort of participants.” If there is a significant response to the drug, then it will be evaluated on hundreds of patients to confirm its safety and effectiveness before it can be a new FDA-approved treatment for patients with that condition.


US Government Approves World’s First Flying Car

US government approves world's first flying car
Image Credits: Alef Aeronautics

The US government has cleared Alef Aeronautics’ flying car to take to the skies. According to aviation law firm Aero Law Centre, the carmaker announced that it has received a Special Airworthiness Certificate from the US Federal Aviation Administration (FAA). No vehicle of this kind has ever before been certified in the US.

“The FAA is actively working on its policies for electric vertical takeoff and landing (eVTOL) vehicles, as well as governing interactions between eVTOLs and ground infrastructure,” Alef Aeronautics said in a statement. It added that Alef’s Special Airworthiness Certificate therefore limits the areas and purposes for which the vehicle is permitted to fly.

Alef, based in San Mateo, California, says the flying car is entirely electric and seats one or two people. According to Fox News, the roughly $300,000 vehicle can take off to avoid road accidents and fly over stalled traffic.

Read More: Vimeo Introduces AI-Powered Script Generator And Text-Based Video Editor

In October 2022, the company unveiled two working full-size technology demonstrators as well as a full-sized sports car. As of January, it had received over 440 pre-orders from both individual and corporate customers.

The company reportedly aims to start delivering flying cars to customers by the end of 2025. The vehicle is being designed for use on regular urban and rural roads, and it fits into a standard-sized garage or parking spot.

As a low-speed vehicle, the car is limited to a top speed of 25 mph on paved surfaces; according to Alef’s website, drivers who need a faster route are expected to use the car’s flight capabilities.
