Ranjani Mani has been appointed as the new AI director at Microsoft. She previously led Atlassian’s CSS Analytics team for Enterprise, collaborating closely with Engineering and Product leaders.
At Microsoft, she will lead the GenAI tech specialist team. In her official LinkedIn post, she wrote, “Bringing together my passion for Gen AI and driving end-to-end business outcomes for end customers with technical leadership, I am super stoked to be working with technologists working at the cutting edge of Gen AI.”
She further added, “This is also a step towards my personal mission to bring more women into AI Leadership.”
Ranjani Mani is a well-known name in the field of AI, having received multiple recognitions at esteemed events, including India AI 21 Women 21, the Women in AI Leadership Award 2021, and Women in Big Data 2021.
Sam Altman, the CEO of OpenAI, the artificial intelligence research company behind the popular ChatGPT model, was fired by the board of directors on Friday, November 17, 2023. The board cited “irreconcilable differences” and a “loss of confidence” in Altman’s leadership as the reasons for his dismissal.
Altman co-founded OpenAI and became its CEO in 2019 after stepping down as the president of Y Combinator, the influential startup accelerator.
However, Altman also faced criticism and controversy for some of his decisions and actions, such as the partnership with Microsoft, which gave the tech giant exclusive access to some of OpenAI’s technologies.
Altman’s ouster triggered a wave of resignations and protests from some of OpenAI’s senior researchers and employees, who expressed their solidarity with and support for Altman. Among them were Greg Brockman, the co-founder and president of OpenAI, along with three of the company’s leading researchers: Jakub Pachocki, Aleksander Madry, and Szymon Sidor.
The board of OpenAI appointed Mira Murati, the company’s chief technology officer, as interim CEO, effective immediately, and announced that it will conduct a thorough and transparent search for a permanent replacement. The board also stated that it remains committed to OpenAI’s vision and mission, and thanked Altman for his contributions and achievements.
The first-ever global summit on artificial intelligence (AI) safety concluded on Thursday with a landmark declaration signed by 28 countries, including the US and China, as well as the EU. The declaration, known as the Bletchley Declaration, calls for international cooperation to manage the challenges and risks of AI, especially the latest and most powerful AI systems, dubbed “Frontier AI”.
The summit, hosted by the UK at Bletchley Park, the historic site of World War II codebreaking, brought together representatives from governments, technology companies, researchers, and civil society groups. The summit aimed to establish some ground rules and foster international collaboration on the safe and responsible development of AI around the world.
Prime Minister Rishi Sunak, who made the summit one of his priorities, said: “To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead. With the combined strength of our international partners, thriving AI industry and expert academic community, we can secure the rapid international action we need for the safe and responsible development of AI around the world.”
Some of the concerns that were raised at the summit include the potential use of AI for terrorism, criminal activity, and warfare, as well as existential risk posed to humanity as a whole. The Bletchley Declaration outlines some principles to guide the development and use of AI, such as ensuring human oversight, transparency, accountability, fairness, and respect for human rights. The declaration also commits to supporting national and international frameworks for AI governance, as well as fostering collaboration on AI safety research and standards.
One of the highlights of the summit was a live interview between tech entrepreneur Elon Musk and Prime Minister Sunak on AI safety on Wednesday. Musk, who has been vocal about his fears of AI surpassing human intelligence and posing a threat to humanity, praised the UK for hosting the summit and urged other countries to join the effort. He also said that his own AI company, xAI, would share its safety results with the public and collaborate with other researchers.
President Joe Biden also welcomed the summit and signed an executive order requiring AI developers to share safety results with the US government. He also announced the creation of an American AI Safety Institute, as part of the National Institute of Standards and Technology (NIST), to conduct research and testing on AI safety.
South Korea is set to host the next AI Safety Summit in mid-2024, followed by France in late 2024. The UK government said it hopes the summit will be a catalyst for further action and dialogue on AI safety among all stakeholders.
Datasaur, an AI startup specializing in text and audio labeling for AI projects, has announced the launch of LLM Lab, a comprehensive platform that serves as a one-stop solution for teams building and training customized large language model applications similar to ChatGPT.
Deployable in both cloud and on-premise environments, LLM Lab offers enterprises a foundation for developing their own in-house generative AI applications.
This approach alleviates concerns related to business and data privacy risks commonly associated with third-party services while affording teams more project autonomy.
LLM Lab offers a diverse set of features that let users experiment with various base models, link to their internal documents, manage server expenses, and access various other functionalities.
Collaborating with industry leaders such as Google and Blackbird, Datasaur has significantly accelerated the data labeling process, achieving a 5.9-fold increase in speed compared to manual labeling. Over the last few years, the company has dedicated its efforts to crafting a robust NLP solution encompassing various functionalities, including entity recognition, text classification, speaker diarization, and more.
Datasaur’s platform is expanding its support to accommodate data scientists working across NLP and LLM methodologies, enabling them to combine the two approaches and use LLMs to automate data labeling for conventional models.
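To illustrate the general idea behind LLM-assisted labeling (a minimal sketch only, not Datasaur’s actual API; the model choice and label set below are placeholders), an LLM can pre-label raw text that a conventional classifier is later trained on:

```python
# Hypothetical sketch of LLM-assisted labeling (not Datasaur's API):
# an LLM pre-labels raw text so a conventional classifier can train on it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["positive", "negative", "neutral"]  # placeholder label set

def auto_label(text: str) -> str:
    """Ask the LLM to assign exactly one label to a piece of text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Classify the user's text with exactly one of: {LABELS}."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

# Auto-labeled pairs like these would then feed a conventional NLP model.
examples = ["The onboarding flow was smooth.", "Support never replied."]
labeled = [(text, auto_label(text)) for text in examples]
print(labeled)
```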
Udacity has partnered with Bertelsmann to offer scholarships that help participants acquire in-demand technology skills. The two organizations have collaborated on skills scholarships before, but this time the focus is on artificial intelligence, cybersecurity, and project management.
The scholarship program, called Next Generation Tech Booster, will award thousands of online scholarships to students worldwide. It offers three specialization courses: Agile Software Developer, Enterprise Security, and AI Programming with Python; students can pick whichever suits their goals.
For interested candidates, the application window opened on October 23, 2023, and will remain open until November 30, 2023. Check out the program details here.
To participate in this program, you need basic knowledge of your chosen field and must fill in the application form. According to the official Udacity website, “If you’re 18 years or older, have English comprehension, and are looking to develop job-ready skills in project management, artificial intelligence, and cybersecurity, this scholarship program is for you.”
Next Generation Tech Booster is 100% online, and as a learner, you will have access to learning material in a virtual classroom. To graduate, you must complete specified projects within set time frames.
This collaboration between Udacity and Bertelsmann to provide scholarships for professionals represents a significant opportunity for the careers of many aspiring tech professionals.
OpenAI has announced the establishment of a new team dedicated to addressing and minimizing the “catastrophic risks” associated with artificial intelligence.
In its recent update on Thursday, OpenAI revealed that the preparedness team’s mission is to monitor, assess, predict, and safeguard against significant challenges arising from AI, including threats such as those related to nuclear technology.
Additionally, the team will focus on reducing the potential dangers stemming from “chemical, biological, and radiological threats,” as well as preventing “autonomous replication,” in which an AI independently creates copies of itself. The team will also address risks related to AI’s capacity to deceive humans, along with cybersecurity threats.
Aleksander Madry, currently on leave as the director of MIT’s Center for Deployable Machine Learning, has been appointed to lead the preparedness team.
OpenAI has underscored that the team will also be responsible for crafting and maintaining a “risk-informed development policy” delineating OpenAI’s approach to assessing and overseeing AI models.
Ben Zhao, a computer science professor at the University of Chicago, has developed “Nightshade,” an online tool that could mark a watershed moment in the AI-art landscape.
The ongoing standoff between artists and AI companies is well known. Whether it is the Hollywood writers’ strike or the mounting lawsuits filed against generative AI companies by painters, musicians, and others for unlawfully scraping their material from the internet to train AI models, these disputes are shedding light on the subject’s intricacies.
The question is, how would Nightshade prevent these AI giants from scraping data from the internet? The answer lies not in preventing but in poisoning. Nightshade comes into play when an artist uploads their work online but doesn’t want it scraped by AI companies without permission or royalties.
But what does Nightshade actually do? The tool alters the pixel-level information of an image so that image-generating AI models trained on it learn the wrong names for the things they are looking at. This concept is called “data poisoning.”
As Zhao summarized it on X: “What is Nightshade? It’s a tool that performs a data poisoning attack against generative AI image models. Poisoning is not new. Poisoning genAI models at scale is new. You can read the MIT TR article for the high level story. For details, here’s the paper: https://t.co/0mIZgOl1Fp”
Ben Zhao and his team poisoned images of dogs by embedding information in the pixels to deceive an AI model into perceiving them as cats. In their experiment, the researchers tested this attack on Stable Diffusion’s latest models. After poisoning just 50 images of dogs and then instructing Stable Diffusion to generate dog images, the results turned unusual, producing creatures with excessive limbs and cartoonish features. When the number of poisoned samples increased to 300, an attacker could successfully coerce Stable Diffusion into generating dog images that resembled cats.
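Nightshade’s real perturbations are optimized against a model’s feature space; as a rough, hypothetical illustration of the data-poisoning idea only (random noise stands in for that optimization step, and the file names are invented), a sketch might look like this:

```python
# Rough illustration of data poisoning, NOT Nightshade's actual algorithm:
# Nightshade optimizes perturbations against a model's feature space, while
# random noise merely stands in for that step here.
import numpy as np
from PIL import Image

def perturb_image(path_in: str, path_out: str, epsilon: int = 4) -> None:
    """Add a small, visually inconspicuous perturbation to every pixel."""
    pixels = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=pixels.shape)
    poisoned = np.clip(pixels + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# The poisoned file keeps its original caption ("dog"), so a scraper that
# ingests it trains a text-to-image model on a corrupted (image, text) pair.
perturb_image("dog.jpg", "dog_poisoned.png")  # hypothetical file names
training_pair = {"image": "dog_poisoned.png", "caption": "a photo of a dog"}
```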
Furthermore, Nightshade’s method of poisoning data poses a formidable defensive challenge, because it requires AI developers to meticulously identify and remove images containing altered pixels. These pixels are intentionally designed to be inconspicuous to the human eye and may prove difficult even for automated data-scraping tools to detect.
The team plans to release Nightshade as an open-source tool, enabling others to experiment and create their own variations. Zhao explains that the tool’s potency increases as more people use it: large AI models rely on datasets that can encompass billions of images, so the more poisoned images that find their way into a model’s training data, the greater the potential impact of the technique.
So the next time an artist uploads an image online, all they need to do is run it through Nightshade first, poisoning the material so that AI models like DALL-E and Midjourney cannot identify the true nature of the picture, making the data deceptive and problematic for the system.
British Deputy Prime Minister Oliver Dowden has confirmed that China will take part in the UK’s global AI summit scheduled for next week.
However, Chinese officials have received invitations for only the first day of the summit, which will be held on November 1-2, and not for the second-day meetings, which focus on AI safety and security risks.
The invitation to China has triggered a strong negative response, with figures like former British Prime Minister Liz Truss advocating for China’s exclusion. Concerns have been raised regarding Beijing’s use of AI for state control and the national security risks that follow.
In a letter addressed to the current Prime Minister, Rishi Sunak, Truss stated, “No reasonable person expects China to abide by anything agreed to at this kind of summit.”
However, Sunak, who extended the invitation to China, has defended this decision. He asserts that involving all major AI powers is indispensable for developing a robust AI strategy.
Sunak’s decision to include China in the UK global AI summit was embraced by Arati Prabhakar, U.S. President Joe Biden’s top science adviser. She underscored the importance of fostering a global conversation on the matter.
SurveyMonkey has launched a feature named “Build with AI,” which employs AI technology to auto-generate surveys once users outline their objectives and target audience.
The main objective of Build with AI is to significantly reduce the time needed to create tailored polls and questionnaires.
Using this feature, SurveyMonkey users can craft surveys with the help of generative AI built on OpenAI’s GPT-3.5-turbo model.
SurveyMonkey’s official account thoroughly describes how to build a survey. First, a user clicks “Create Survey” and selects “Build with AI.” Next, they input a prompt and click the paper airplane icon.
Following this, the user is prompted to “Create Survey” once again. At this stage, they can take a moment to review the survey; if satisfied with the result, they click “Use Survey,” which lets them customize the survey according to their preferences.
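Conceptually, the generation step behind the scenes resembles a single prompt to GPT-3.5-turbo. The sketch below is a hypothetical illustration of that idea, not SurveyMonkey’s implementation, and the prompt wording and function name are invented:

```python
# Hypothetical sketch of AI survey generation (not SurveyMonkey's code):
# a single GPT-3.5-turbo call drafts questions from an objective and audience.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_survey(objective: str, audience: str) -> str:
    """Return a short survey drafted by the model."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You write concise surveys as numbered questions."},
            {"role": "user",
             "content": (f"Objective: {objective}\nAudience: {audience}\n"
                         "Draft a five-question survey.")},
        ],
    )
    return response.choices[0].message.content

print(draft_survey("measure onboarding satisfaction", "new enterprise users"))
```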
Additional features include assistance in survey creation, ensuring that users can pose pertinent questions effectively regardless of expertise; safety measures to safeguard data when using the underlying GPT models; support for over 50 languages; and seamless integration with SurveyMonkey’s existing AI features, facilitating comprehensive data collection and analysis through AI.
Motorola’s new product is straight from the future. Keeping up with the technological innovation madness of the 21st century, the Lenovo-owned company showcased its latest smartphone that can wrap around your wrist like a slap bracelet.
Motorola unveiled the avant-garde concept during the Lenovo Tech World forum, showcasing the smartphone’s flexible design. The innovative conceptual device employs an FHD+ pOLED display that can be flexed and molded into various shapes based on users’ requirements.
For instance, it can serve as a regular Android phone, stand independently, or transform into a wrist-worn watch accessory.
Besides the bendable smartphone, Motorola also unveiled the latest AI features it has incorporated into its smartphones. One of them is a generative AI model running locally on the device, enabling users to personalize their phones with AI-generated wallpapers that match their outfit selection.
MotoAI, a personal voice and text assistant that harnesses the capabilities of a large language model (LLM) and runs on smartphones and PCs, was also unveiled at the forum. The assistant can manage schedules, respond to queries, and compose messages. Motorola also showed off a mobile Doc Scanner, which produces crisp, clear images by minimizing wrinkles and shadows.
Finally, Motorola introduced AI Text Summarization, which extracts the essential information from long-form chats, emails, or reports and condenses it for better understanding, and Privacy Content Obfuscation, which blurs profile pictures and names in social posts to safeguard users’ information and privacy.