Google has unveiled a new tool called “Google-Extended” that lets website publishers opt out of having their data used to train Google’s AI models while still remaining accessible through Google Search. The initiative aims to strike a balance between data accessibility and privacy concerns.
With Google-Extended, websites can continue to be crawled and indexed by web crawlers like Googlebot while keeping their data out of the ongoing development of AI models. The tool gives publishers control over how their content contributes to Google’s AI capabilities.
Google emphasizes that Google-Extended enables publishers to “manage whether their sites help improve Bard and Vertex AI generative APIs.” Publishers can use a simple toggle to control access to their site’s content.
Google had previously confirmed its practice of training its AI chatbot, Bard, using publicly available data scraped from the web. The introduction of Google-Extended aligns with the company’s commitment to balancing data usage for AI development and respecting publishers’ preferences.
Google-Extended operates through robots.txt, the file that informs web crawlers about site access permissions. Google also indicates its intention to explore additional machine-readable methods for granting choice and control to web publishers as AI applications continue to expand. Further details on these approaches will be shared in the near future.
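For illustration, blocking Google-Extended is a two-line robots.txt rule using the crawler token Google has documented; Googlebot’s access for Search is unaffected:

```text
# Keep normal Search crawling; only opt out of AI training
User-agent: Google-Extended
Disallow: /
```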
Content moderation is essential to keeping digital platforms functional. OpenAI claims to have developed a method for using its flagship GPT-4 generative AI model for content moderation, relieving the workload on human teams.
Content moderation is time-consuming and difficult: it requires careful work, sensitivity, a deep grasp of context, and quick adaptation to new use cases. Traditionally, toxic and harmful content has been filtered out by human moderators trawling through vast amounts of material, assisted by simpler, vertically specific machine learning models. The process is inherently slow and takes a mental toll on human moderators. Let’s take a look at OpenAI’s proposed approach and how LLMs can support traditional methods of content moderation.
Content Moderation with GPT-4
To overcome these challenges, OpenAI is exploring the use of LLMs. Large language models like GPT-4 are well suited to content moderation because they can comprehend and produce natural language: given a set of policy rules, they can make moderation judgments. This approach cuts the time needed to create and modify content policies from months to hours.
After formulating a policy guideline, policy experts can compile a small evaluation dataset by selecting a handful of examples and labeling them in accordance with the policy. GPT-4 then reads the policy and labels the same dataset without seeing the experts’ answers. By comparing GPT-4’s judgments with the human ones, the experts can ask GPT-4 to explain its labels, analyze policy definitions for ambiguity, clear up confusion, and add clarification to the policy as needed. The labeling and refinement steps can be repeated until the policy’s quality is satisfactory.
As a result of this iterative process, more refined content policies are produced, which are then converted into classifiers to allow for policy deployment and content moderation at scale. According to OpenAI, users can also use GPT-4’s predictions to fine-tune a much smaller model capable of handling massive volumes of data at scale.
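As a rough sketch of what the labeling step can look like in practice, the snippet below asks GPT-4 to judge a piece of content against a policy, using the OpenAI Python library as it existed at the time of writing. The policy text, label set, and example comment are illustrative placeholders, not OpenAI’s actual moderation policies:

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"

# Placeholder policy; a real deployment would use the full policy document
# refined through the expert review loop described above.
POLICY = """You are a content moderator. Label the user-submitted text
as ALLOW or FLAG. FLAG content containing threats of violence or targeted
harassment. Reply with the label on the first line, then a one-sentence
rationale."""

def moderate(text: str) -> str:
    """Ask GPT-4 for a moderation judgment against the policy."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,  # deterministic output helps keep labels consistent
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": text},
        ],
    )
    return response["choices"][0]["message"]["content"]

# Comparing these outputs with expert labels drives the refinement loop.
print(moderate("Example user comment to classify."))
```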
Advantages
This straightforward but effective concept has several advantages over conventional approaches to content moderation: more consistent labels, a faster feedback loop, and a reduced mental burden.
Content policies are frequently highly specific and constantly changing. Inconsistent labeling can result from people interpreting policies differently or from certain moderators taking longer to absorb new policy updates. LLMs, in contrast, are sensitive to minute variations in phrasing and adjust quickly to policy changes, providing users with a consistent content experience.
The policy-update cycle, which involves drafting a new policy, labeling content against it, and gathering human feedback, is frequently a drawn-out procedure. GPT-4 can shorten it to a few hours, allowing faster responses to new threats.
Human moderators can become emotionally exhausted and stressed when constantly exposed to unpleasant or harmful content. Automating this kind of work benefits the well-being of the people involved.
Shortcomings
Despite the advantages above, GPT-4’s judgments are susceptible to biases that may have been introduced into the model during training. As with any AI application, results and output need to be carefully monitored, validated, and refined by keeping humans in the loop. By reducing human involvement in the parts of the moderation process that language models can handle, human resources can be directed toward the complex edge cases that matter most for policy improvement.
Conclusion
OpenAI takes a different approach to platform-specific content policy iteration than Constitutional AI, which primarily depends on the model’s internalized judgment of what is safe vs. what is not. Since anyone with access to the OpenAI API can currently carry out the same tests, the company has invited Trust & Safety practitioners to test out this method for content moderation.
With GPT-4 content moderation, policy changes can be deployed considerably more quickly, cutting the cycle from months to hours. GPT-4 can also quickly adapt to policy changes and interpret the subtleties of extensive content-policy documentation, resulting in more consistent labeling. This presents a more optimistic vision of the future of digital platforms, in which AI can help regulate online traffic in accordance with platform-specific policies and relieve the mental burden on a large number of human content moderators.
Google’s management reportedly issued a “code red” in December 2022 in response to OpenAI’s ChatGPT, an AI-powered chatbot whose ability to answer queries directly in conversation posed a threat to Google. Since then, GPT-4, the latest in OpenAI’s line of AI language models powering programs like ChatGPT and the new Bing, has been officially released after months of speculation and debate, and Microsoft has announced a new version of its Bing search engine powered by an updated version of the AI technology behind ChatGPT. With many leading companies making strides in AI, many are wondering whether Google has lost the AI race to these tech giants. Let’s explore.
Google’s AI Initiative
Sundar Pichai, the CEO of Google, has been involved in a number of meetings to determine Google’s AI strategy. In an effort to counter the threat that ChatGPT poses, Pichai has since altered the operations of numerous groups within the company. By May 2023, teams from Google Research, trust and safety, and other divisions will have developed and released new AI prototypes and products, according to the CEO. Pichai has also given employees the responsibility of developing new AI products that, like OpenAI’s DALL-E technology, can produce artwork and photos.
Companies such as Google, OpenAI, and Microsoft are at the forefront of the AI field, each vying to become the leader in AI research and development. While all three companies have made significant strides, Google has not lost the AI race to its competitors. In fact, its recent announcements, including its own AI chatbot, Bard, and generative AI features in Google Workspace, are proof of its progress. Let’s take a look at some of Google’s significant AI developments.
MusicLM
In January, Google unveiled a new AI system called “MusicLM” that can create high-fidelity music in any genre from just a text description, according to a research paper. MusicLM was trained on a dataset of about 280,000 hours of music to learn to generate coherent songs from descriptions of significant complexity. Its capabilities extend beyond creating short clips of songs: Google researchers showed that the system could build on existing melodies, whether played on an instrument, hummed, sung, or whistled.
Bard
In February, Google introduced Bard, an experimental conversational AI service powered by LaMDA, and opened it up to trusted testers ahead of a wider public release. The chatbot is built on the Transformer neural network architecture and LaMDA, Google’s language model. Google claims that Bard uses web resources to provide insightful, up-to-date responses. In light of ChatGPT’s success and the hype around Microsoft’s Bing, a lot is expected of Bard.
“Now, our newest AI technologies — like LaMDA, PaLM, Imagen and MusicLM — are building on this, creating entirely new ways to engage with information, from language and images to video and audio. We’re working to bring these latest AI advancements into our products, starting with Search,” Pichai said in a blog post.
Generative AI capabilities in Google Cloud
Developers can access Google’s AI models, notably PaLM, on Google Cloud to create and modify their own models and applications using generative AI. In order to give developers and businesses access to enterprise-level safety, security, and privacy while also allowing for seamless integration with their current Cloud solutions, Google has added new generative AI capabilities to the Google Cloud AI portfolio.
Apart from letting users build and deploy AI applications and machine learning models at scale, Google Cloud’s Vertex AI platform now provides foundation models, initially for generating text and images, with video and audio to follow over time. Google also introduced Generative AI App Builder, which connects conversational AI flows with out-of-the-box search experiences and foundation models, empowering companies to build generative AI applications in minutes or hours.
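As a minimal sketch (assuming the Vertex AI Python SDK and its text-bison foundation model; the project ID and prompt are placeholders), calling a foundation model on Vertex AI looks roughly like this:

```python
# pip install google-cloud-aiplatform
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project and region; substitute your own Cloud settings.
vertexai.init(project="my-gcp-project", location="us-central1")

# text-bison is the PaLM-family text model served through Vertex AI.
model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Draft a one-paragraph description of a reusable water bottle.",
    temperature=0.2,        # low temperature for more predictable copy
    max_output_tokens=128,
)
print(response.text)
```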
PaLM API & MakerSuite
For developers exploring AI, Google released the PaLM API, an accessible and secure way to build on top of its best language models. On Tuesday, Google made available a model that is efficient in terms of size and capability; further sizes will be added soon. The API also includes the user-friendly MakerSuite tool, which enables users to prototype concepts swiftly and will eventually offer capabilities for prompt engineering, synthetic data creation, and tuning custom models, all supported by strong safety tools.
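For a sense of the developer experience, here is a minimal sketch using the google-generativeai Python package that accompanied the PaLM API at launch; the model name and prompt are illustrative, and an API key from MakerSuite is assumed:

```python
# pip install google-generativeai
import google.generativeai as palm

palm.configure(api_key="YOUR_PALM_API_KEY")  # key issued via MakerSuite

# text-bison-001 was the initial size-efficient text model in the PaLM API.
completion = palm.generate_text(
    model="models/text-bison-001",
    prompt="Suggest three names for a newsletter about generative AI.",
    temperature=0.7,
    max_output_tokens=64,
)
print(completion.result)
```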
“In addition to announcing new Google Cloud AI products, we’re also committed to being the most open cloud provider. We’re expanding our AI ecosystem and specialized programs for technology partners, AI-focused software providers and startups,” said Thomas Kurian, CEO of Google Cloud.
Generative AI features in Workspace
Google announced a number of generative AI features for its Workspace apps, including Google Docs, Gmail, Sheets, and Slides. The features include new ways for Google Docs’ AI to brainstorm, summarize, and generate text; the ability for Gmail to compose complete emails from users’ brief bullet points; and AI-generated imagery, audio, and video to illustrate presentations in Google Slides. In Sheets, the AI features will let users go from raw data to insights and analysis through auto-completion, formula generation, and contextual categorization. Users can also generate new backgrounds and capture notes in Meet, while enabling workflows for getting things done in Chat.
Conclusion
While OpenAI and Microsoft are certainly formidable competitors in the AI space, Google has not lost the AI race. The company’s significant investment in AI, access to vast amounts of data, track record of innovation, and commitment to accessibility demonstrate that it is still at the forefront of this exciting field. With the latest release of its new AI tools for Google Workspace, Google is continuing to push the boundaries of what is possible with AI and demonstrating its commitment to making AI more accessible to a wider audience.
Amazon Web Services (AWS) has announced a suite of new offerings aimed at accelerating generative AI innovation. Among these releases, AWS highlights the general availability of “Amazon Bedrock,” a fully managed service that consolidates foundation models (FMs) from leading AI companies into a single API.
To provide customers with more options for FMs, AWS also disclosed the general availability of the “Amazon Titan Embeddings” model. Furthermore, “Llama 2” will soon be accessible as a new model through Amazon Bedrock, marking the first fully managed service to offer Meta’s Llama 2 via an API.
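To give a sense of the “single API,” here is a minimal sketch of invoking Anthropic’s Claude 2 through Amazon Bedrock with boto3; it assumes AWS credentials are configured and the account has been granted access to the model, with the region, model ID, and prompt format following Bedrock’s launch documentation:

```python
import json

import boto3  # pip install boto3

# Bedrock puts many providers' foundation models behind one runtime client.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude 2 expects the Human/Assistant prompt format shown below.
body = {
    "prompt": "\n\nHuman: Summarize what Amazon Bedrock offers.\n\nAssistant:",
    "max_tokens_to_sample": 300,
}
response = client.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["completion"])
```

Swapping providers, say from Claude to Amazon Titan or Llama 2, means changing the modelId and the JSON body, while the invoke_model call itself stays the same.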
In addition, AWS revealed a forthcoming feature for Amazon CodeWhisperer. This AI-powered coding companion will securely customize code suggestions based on an organization’s internal codebase. To enhance the productivity of business analysts, AWS is introducing a preview of “Generative Business Intelligence (BI) authoring capabilities” for Amazon QuickSight.
This unified BI service, designed for the cloud, enables customers to create compelling visuals, format charts, perform calculations, and more by simply describing their needs in natural language. This array of innovations contributes to AWS’s efforts to provide customers with comprehensive generative AI solutions across various layers of the AI stack.
Swami Sivasubramanian, Vice President of Data and AI at AWS, noted, “The proliferation of data, access to scalable compute, and advancements in machine learning have led to a surge of interest in generative AI. This is sparking new ideas that could transform entire industries and reimagine how work gets done.”
AWS’s ongoing commitment to advancing generative AI aims to provide businesses, from startups to enterprises, and professionals, from developers to data analysts, with secure, flexible, and high-performance solutions to harness the transformative potential of generative AI.
Recently, Minus Zero, a Bengaluru-based autonomous driving start-up, unveiled the zPod, which it claims is India’s first autonomous vehicle. However, the claims are misleading as Minus Zero is not the first company in India to introduce autonomous vehicle technology. Minus Zero has not yet responded to our email concerning their claims.
Minus Zero’s official website states, ‘The history was made 04.06.2023.’ It goes on to falsely claim that “While India experienced its first autonomous vehicle ride, the world witnessed True Vision Autonomy for the first time.”
Leading news outlets such as Economic Times, Business Today, and Hindustan Times have published reports with headlines that state outright Minus Zero’s claim of the zPod being India’s first autonomous vehicle, whereas Financial Times, The Hindu, and Wion merely attribute the claim to the startup.
A funding news report published by Business Insider blatantly described Minus Zero as ‘India’s first self-driving car startup’. The report said, “India’s first startup building affordable fully self-driving cars in India has raised $1.7 million in seed round led by Chiratae Ventures.”
Minus Zero claims the zPod is an autonomous vehicle that uses its camera system to operate in any location and circumstance. The zPod is a four-seater electric vehicle with none of the controls or dashboard of a typical car. Unlike many autonomous cars, the zPod uses a system of six cameras instead of LIDAR (light detection and ranging), with two in the front and back and four on the sides.
However, Swaayatt Robots has already been using this technology for the past few years.
Founded in 2015, Swaayatt Robots has focused on fundamental research and application in reinforcement learning, motion planning, and algorithmic frameworks for autonomous vehicles that can navigate highly stochastic traffic dynamics and function even in unstructured Indian environments.
To enable autonomous vehicles to handle both stochastic and hostile traffic circumstances, the team has been using reinforcement learning (RL) to learn different navigation policies. For instance, their multi-RL agent system for intent analysis and negotiation enables autonomous vehicles to navigate congested, dynamic highways at both low and high speeds.
Swaayatt Robots has already created algorithmic frameworks for vision that enable autonomous vehicles to see both during the day and at night without the use of RADARs and LiDARs. In comparison to current state-of-the-art deep learning systems, its perception algorithms, based on end-to-end deep learning, are extremely computationally efficient.
For years, about 2% of Swaayatt Robots’ research has focused on making autonomous driving possible without high-fidelity maps. To do away with such maps, founder Sanjeev Sharma and his colleagues at Swaayatt Robots have created new algorithms in both perception and planning. Businesses typically employ high-fidelity maps to project and generate delimiters because they are unable to create algorithms reliable enough to do so in real time. Minus Zero is therefore not the first to introduce this technology in its zPod, as Swaayatt’s videos demonstrate.
The Swaayatt Robots official website and YouTube channel feature many videos of the company testing its autonomous vehicle technology under various conditions. Some of these videos are as much as six years old, serving as proof of its research and testing. The company has also received accolades and funding for autonomous vehicle research and development.
In conclusion, Minus Zero’s claim that the zPod is India’s first autonomous vehicle undermines Swaayatt Robots’ years of research and development. Considering all the facts, there is little doubt that Swaayatt Robots is the startup that deserves the honor.
Indian ministries responsible for education and skill development, in a significant move, have entered into eight agreements with IBM. These agreements aim to provide curated courses that empower India’s youth with skills essential for the future job market.
The collaboration focuses on co-creating curricula to enhance the skills of learners across various education levels, including school education, higher education, and vocational skills. The emphasis is on emerging technologies such as AI (including generative AI), cyber-security, cloud computing, and professional development skills.
The collaboration extends to three core levels of education. First, IBM will provide access to digital content from IBM SkillsBuild to high school students, teachers, and trainers in schools identified by key educational bodies such as Navodaya Vidyalaya Samiti (NVS), National Council for Teacher Education (NCTE), and Kendriya Vidyalaya Sangathan (KVS). This program will be delivered through online channels, webinars, and in-person workshops conducted by IBM’s CSR implementation partners.
Second, IBM will refresh the AI curriculum for Grades 11 and 12 of the Central Board of Secondary Education (CBSE). Additionally, IBM will develop cyber-skilling and blockchain curricula for high school students, which will be hosted on IBM SkillsBuild.
IBM will continue its partnership with the Ministry of Skill Development and Entrepreneurship, collaborating with the Directorate General of Training and state vocational education and skilling departments. Their objective is to onboard job seekers, including long-term unemployed individuals and school dropouts, onto the IBM SkillsBuild platform. This initiative aims to equip them with the technical and professional skills needed to reintegrate into the workforce.
Amazon has announced a significant investment of up to $4 billion in Anthropic, an artificial intelligence startup, as part of its strategic partnership. This move underscores the growing trend of major tech companies pouring substantial funds into AI technology to capitalize on its transformative potential.
The collaboration between Amazon and Anthropic aims to focus on the development of “foundation models,” which serve as the backbone for generative AI systems that have garnered global attention. Under this agreement, Anthropic will adopt Amazon as its primary cloud computing service provider and leverage Amazon’s custom chips for training and deploying its generative AI systems.
Anthropic, based in San Francisco, was established by former OpenAI staff; OpenAI is the creator of ChatGPT, the AI chatbot known for its human-like responses. Anthropic has introduced its own ChatGPT rival called “Claude,” capable of advanced dialogue, creative content generation, complex reasoning, and following detailed instructions.
Amazon’s investment in Anthropic reflects its effort to catch up with competitors such as Microsoft, which invested $1 billion in OpenAI in 2019 and followed up earlier this year with a further investment of about $10 billion.
Amazon is actively expanding its AI offerings, including enhancing its popular assistant, Alexa, to engage in more human-like conversations and providing AI-generated summaries of product reviews, all in response to the ongoing AI competition among tech giants.
Recently, the New York Times (NYT) took a preemptive measure to prevent its content from being used to create and train artificial intelligence models. The NYT banned the use of its content, including text, images, audio and video clips, and metadata, in the development of any software program. The Terms of Service, updated on August 3, specifically forbid training a machine learning or artificial intelligence system on its data. Unsurprisingly, the NYT’s decision gained traction after OpenAI released a new web-crawling bot called GPTBot to expand the dataset for training its forthcoming generation of AI systems. According to OpenAI, the web crawler collects data from publicly accessible websites while avoiding paywalled, sensitive, or illegal content.
However, the system is opt-out, meaning website owners must actively block the web crawler from accessing their content, as shown below. Like crawlers for search engines such as Google, Bing, and Yandex, GPTBot assumes that any publicly available information is, by default, available for use. The question remains: why has NYT banned OpenAI’s GPTBot despite the growing popularity of its AI chatbot ChatGPT? There might be several reasons behind this crackdown; let’s take a quick look at what they might be.
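OpenAI documents the opt-out as a standard robots.txt rule. A site that wanted to block GPTBot entirely, or only shield part of the site, would add something like:

```text
# Block GPTBot from the entire site
User-agent: GPTBot
Disallow: /

# Or allow some directories while blocking others
# User-agent: GPTBot
# Allow: /public/
# Disallow: /private/
```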
Copyright Violation
OpenAI’s artificial intelligence chatbot ChatGPT is based on the GPT large language models, which are trained on a vast dataset gathered from the internet. A web crawler used for data collection copies content verbatim from websites to feed into the LLMs for training, and the responses ChatGPT generates are based on that training data. The opt-out option for OpenAI’s web crawler came only after a series of copyright-infringement lawsuits against the company, as the training data gathered for earlier versions of GPT was used without the consent of the respective owners. The content created by New York Times authors and writers is protected by copyright, and since ChatGPT generates its responses without any attribution or credit to the original source, it clearly violates copyright law. Nor is there any compensation for the original creators for this unconsented use. Considering this, it seems only fair for NYT to prevent OpenAI from using its copyrighted content.
Identity Theft
Apart from the copyright issue posed by OpenAI’s web crawler, NYT might also be concerned about ChatGPT stealing its thunder. The chatbot has a remarkable ability to produce all sorts of textual content from detailed user prompts, thanks to the vast and eclectic corpus it has been trained on. NYT is known for the quality of its thoroughly researched written content, which exhibits a unique writing style. Once the GPT models are trained on NYT’s content, it is not much of a stretch to say that ChatGPT may be able to imitate that style. Malicious actors could use this to create content under the name of the prestigious news organization, seriously damaging its reputation and credibility. There have been several similar incidents since the advent of ChatGPT: recently, author Jane Friedman protested that five books listed on Amazon under her name were not written by her. According to the author, the books are poorly written and were probably created using ChatGPT. Amazon later pulled the titles from sale.
$100 million Google Partnership
In May, the New York Times signed a deal with Google that enables Alphabet to feature NYT content on several of its platforms, including Google News Showcase, a product that pays publishers to feature their content on Google News and other Google surfaces. Google will pay the New York Times about $100 million over three years as part of the deal. ChatGPT, meanwhile, is seen as a potential rival to Google, threatening to become the future of search. With that in mind, NYT’s decision to ban the OpenAI web crawler may be a calculated move connected to the Google deal, with the purpose of putting OpenAI at a disadvantage. This reading is supported by recent talk of NYT considering legal action against OpenAI over copyright infringement, which could easily turn into a high-profile legal tussle, since it would also bring intellectual property rights into consideration. There is speculation that if such a lawsuit went ahead and NYT won, OpenAI could be forced to erase ChatGPT’s dataset entirely and start again using only authorized content, an outcome that would serve Google very well.
Repercussions for NYT
Despite the several valid reasons supporting NYT’s decision, there may be consequences for the news organization in the future. The advent of LLMs and applications such as ChatGPT and Bing Chat is changing the way people search for information. Instead of visiting links, people increasingly want a prompt answer to their queries, which AI chatbots are remarkably good at providing. Bing Chat can already access the internet and provide up-to-date information such as current events and news. It is only a matter of time before ChatGPT joins the race too, given OpenAI’s deliberate efforts to partner with news organizations such as the Associated Press and the American Journalism Project for training data. It can fairly be said that AI chatbots such as ChatGPT could become the future of search engines.
Websites such as NYT that deny web crawlers access to their content might be sabotaging their own future. Naturally, Bing Chat and ChatGPT, both based on the GPT large language models, can only surface content they have been trained on or can access. If these chatbots do become the future of search and NYT continues to prohibit the use of its content for AI training, the news organization might eventually lose domain authority, directly impacting its readership. It may even hurt the perceived credibility of the organization’s content, since a training dataset devoid of NYT material would not prioritize it. Moreover, NYT competitors that allow their content to be used for AI training are bound to gain an edge. Many companies also use datasets like Common Crawl to build lists of websites to target with advertising, and since NYT won’t appear in those datasets, its ad revenue may suffer as well.
Conclusion
Considering all the points mentioned in this article, it is evident that there could be several reasons why NYT has banned OpenAI from using its content to train AI models. While some of those reasons seem valid, the NYT’s bold decision may also carry serious repercussions as AI chatbots gain more traction every day.
Now, the question remains: should you allow GPTBot to crawl your websites? The answer depends on several factors. If you intend to maintain or grow website traffic, want to protect copyrighted content, are concerned about being taken out of context, or have any other valid reason, you may consider blocking the web crawlers for your own good. If, however, those concerns matter less to you and your sole purpose is to stay on top of the rapidly changing search landscape, then allowing your data to be used can be seen as a wise decision.
The field of artificial intelligence is thriving more than ever as technological advances continue to emerge. Researchers and experts are striving harder than ever before to make the unprecedented happen. Throughout the process, experts constantly keep vocalizing their opinions on AI in various contexts. Here are some of the latest and most interesting quotes on artificial intelligence.
“We have seen AI providing conversation and comfort to the lonely; we have also seen AI engaging in racial discrimination. Yet the biggest harm that AI is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with AI is vastly larger than before. As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive.”
– Andrew Ng, American computer scientist and technology entrepreneur
“The playing field is poised to become a lot more competitive, and businesses that don’t deploy AI and data to help them innovate in everything they do will be at a disadvantage.”
– Paul Daugherty, chief technology and innovation officer, Accenture
“Fairness is a big issue. Human behavior is already discriminatory in many respects. The data we’ve accumulated is discriminatory. How can we use technology and AI to reduce discrimination and increase fairness? There are interesting works around adversarial neural networks and different technologies that we can use to bias toward fairness rather than perpetuate the discrimination. I think we’re in an era where responsibility is something you need to design and think about as we’re putting these new systems out there, so we don’t have these adverse outcomes.”
– Paul Daugherty, chief technology and innovation officer, Accenture
“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.”
– Alan Kay, American computer scientist
“I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.”
– Claude Shannon, American mathematician, electrical engineer, and cryptographer
“It’s going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool.”
– Colin Angle, chairman of the board, chief executive officer, and co-founder of iRobot
“Artificial intelligence is growing up fast, as are robots whose facial expressions can elicit empathy and make your mirror neurons quiver.”
– Diane Ackerman, American essayist and author
“A lot of times, the failings are not in AI. They’re human failings, and we’re not willing to address the fact that there isn’t a lot of diversity in the teams building the systems in the first place. And somewhat innocently, they aren’t as thoughtful about balancing training sets to get the thing to work correctly. But then teams let that occur again and again. And you realize, if you’re not thinking about the human problem, then AI isn’t going to solve it for you.”
– Vivienne Ming, executive chair and co-founder of Socos Labs
“Change is hard within organizations. It’s unclear to me whether or not AI, just as a technology, is going to radically change all of the challenges that we have within an organization. Things like getting people to change, change their practices and processes, and using this set of technologies. There is a huge gap in terms of what we can do now with AI. There’s improved lead generation that machine learning can do better than humans.”
– Michael Chiu, partner, McKinsey Global Institute (MGI)
“To be human is to be ‘a’ human, a specific person with a life history and idiosyncrasy and point of view; artificial intelligence suggests that the line between intelligent machines and people blurs most when a puree is made of that identity.”
– Brian Christian, American non-fiction author, poet, programmer, and researcher.
“I believe that at the end of the century, the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
– Alan Turing, English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist.
“AI is a complex field, and I am the first to say that we computer scientists have not progressed as far as many people believe. For instance, we currently have no credible research path to any kind of conscious AI algorithm, and there are no robots that are truly autonomous or able to make their own decisions — so don’t worry about walking terminators.”
– Richard Socher, former chief scientist, Salesforce
“There’s a real danger of systematizing the discrimination we have in society [through AI technologies]. What I think we need to do — as we’re moving into this world full of invisible algorithms everywhere — is that we have to be very explicit, or have a disclaimer, about what our error rates are like.”
– Timnit Gebru, research scientist, Google AI
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
– Edsger W. Dijkstra, Dutch computer scientist, programmer, software engineer, systems scientist, and science essayist.
“As data and science become more accessible and more a part of the production of software and AI, human creativity is becoming a more valuable commodity.”
– Hendrith Vanlon Smith Jr., American Banker.
“If an AI possessed any one of these skills—social abilities, technological development, economic ability—at a superhuman level, it is quite likely that it would quickly come to dominate our world in one way or another. And as we’ve seen, if it ever developed these abilities to the human level, then it would likely soon develop them to a superhuman level. So we can assume that if even one of these skills gets programmed into a computer, then our world will come to be dominated by AIs or AI-empowered humans.”
– Stuart Armstrong, James Martin Research Fellow at the Future of Humanity Institute at Oxford University.
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, and we will have multiplied the intelligence – the human biological machine intelligence of our civilization – a billion-fold.”
– Ray Kurzweil, American computer scientist, author, inventor, and futurist.
“Machine intelligence is the last invention that humanity will ever need to make.”
– Nick Bostrom, Swedish-born philosopher at the University of Oxford
“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid, and they’ve already taken over the world.”
– Pedro Domingos, Professor Emeritus of computer science and engineering at the University of Washington.
“Maybe the only significant difference between a really smart simulation and a human being was the noise they made when you punched them.”
– Terry Pratchett, English humorist, satirist, and author of fantasy novels.
“As more and more artificial intelligence is entering into the world, more and more emotional intelligence must enter into leadership.”
– Amit Ray, Indian author and pioneer in proposing compassionate artificial intelligence.
“A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to a life support and ceaselessly stimulate your brain’s pleasure centers. If you don’t provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you’ll be stuck with whatever it comes up with. And since it’s a highly complex system, you may never understand it well enough to make sure you’ve got it right.”
– James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era.
“If we do it right, we might be able to evolve a form of work that taps into our uniquely human capabilities and restores our humanity. The ultimate paradox is that this technology may become a powerful catalyst that we need to reclaim our humanity.”
– John Hagel, a Silicon Valley-based consultant and author
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
– Eliezer Yudkowsky, American decision theory and artificial intelligence (AI) researcher and writer
“The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?”
– Gray Scott, futurist, techno-philosopher, and expert in the field of emerging technology.
“If the government regulates against the use of drones or stem cells or artificial intelligence, all that means is that the work and the research leave the borders of that country and go someplace else.”
– Peter Diamandis, Greek-American engineer, physician, and entrepreneur
“If people trust artificial intelligence (AI) to drive a car, people will most likely trust AI to do your job.”
– Dave Waters, professor at the University of Oxford
“Anything that could give rise to smarter-than-human intelligence—in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement – wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.”
– Eliezer Yudkowsky, American decision theory and artificial intelligence (AI) researcher and writer
“Why give a robot an order to obey orders—why aren’t the original orders enough? Why command a robot not to do harm—wouldn’t it be easier never to command it to do harm in the first place?”
– Steven Pinker, Canadian-American cognitive psychologist, psycholinguist, popular science author, and public intellectual.
“The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”
– Nick Bilton, technology, business, and culture contributor at CNBC
“I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of ‘bug out’ houses, to which they could flee if it all hits the fan.”
– James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era.
“We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction.”
– Klaus Schwab, German engineer, economist, and founder of the World Economic Forum
In a significant rebranding move, Meta, formerly known as Facebook, has officially dropped the name “Stories” from its smart glasses lineup, now simply referring to them as “smart glasses.”
The latest addition to this revamped series, unveiled at Meta’s Connect launch event, is the Ray-Ban Meta Smart Glasses, set to be available for preorder immediately and hitting the market on October 17th, with prices starting at $299.
The Ray-Ban Meta Smart Glasses come equipped with two primary functions, marking an evolution in wearable technology. First and foremost, they aim to replace traditional headphones by offering a personal audio system similar to Amazon’s Echo Frames and the Bose Tempo series, ensuring a private listening experience. The new generation boasts an improved microphone system with five microphones, including one located in the nose bridge, promising enhanced call quality and voice commands.
Secondly, the glasses function as a camera, with small camera lenses built into the temples. These cameras can capture 12-megapixel photos and 1080p videos, a significant upgrade from their predecessors. With 32GB of internal storage, users can store approximately 500 photos and one hundred 30-second videos, all of which sync through the Meta View app. The app also facilitates seamless sharing across Meta’s various platforms.
In a remarkable addition, users can initiate live streams to Facebook or Instagram with a few taps on the glasses’ stem while recording, indicated by a pulsing white light around the lens.
Powering these smart glasses is Qualcomm’s Snapdragon AR1 Gen 1 processor, featuring “on-glass AI” in a compact package. The glasses boast a battery life of four to six hours during active use, with the included case capable of providing an additional eight charges.
Meta’s rebranded smart glasses offer a convergence of audio and visual capabilities, positioning them as versatile and user-friendly devices for everyday use.