
Why is the Singapore government’s workplace AI therapy platform Mindline at Work problematic? 


The Singapore Ministry of Health and Ministry of Education launched the Mindline at Work tool in August 2022 for public-sector teachers as part of a mental health initiative. A chatbot component was trialed during development. However, anxious and burned-out teachers coming to the portal for help were met with comments like: “Remember, our thoughts are not always helpful. If a family member or friend was in your place, would they also see it the same way?” Unsettling screenshots of the chatbot’s interactions went viral on social media platforms. 

Wysa

As part of the mental health initiative, the Singaporean government partnered with Wysa, which is a well-known name in the AI therapy app sector. Wysa is recognized for having one of the most substantial evidence bases among several such apps and is clinically recommended by expert groups such as the Organisation for the Review of Care and Health Apps.

Read More: Indian Startup Fluid AI Introduces First Book Written By AI Algorithms

Despite that, in an investigation by Rest of World, users described Mindline at Work as a one-size-fits-all software that struggles to meet the specific needs of the teachers. More generally, psychology experts warn that partnering with digital therapy or wellness apps can backfire when the leading causes of mental health issues, in this case at workplaces, remain unaddressed. 

Singapore Government’s Mental Health Initiative

Singapore’s government is the first to bring Wysa’s therapy bot into a national-level service. The original Mindline.sg initiative, launched in June 2020, aimed to help anyone in Singapore access care during the pandemic. Four months later, the Ministry of Health’s Office for Healthcare Transformation integrated Wysa’s chatbot into the platform as an “emotionally intelligent” listener. Later, when teacher burnout during the pandemic became a prominent news topic, an extension, Mindline at Work, was rolled out for public education professionals as a more tailored version.

Mindline at Work’s Generic Advice

Complaints began to emerge only a few days after the launch of the extension. Upset users were unsatisfied with the chatbot’s generic advice. They said it was not the right tool to address the root causes of teachers’ stress, including uncapped working hours, demanding performance appraisal systems, and large class sizes. 

“It’s pretty useless from the teacher’s point of view,” said a public school teacher in his late 20s. He said that no one he knows takes the Mindline bot seriously. “It’s a joke. It is trying to gaslight the teachers into saying, ‘Oh, this kind of workload is normal. Let’s see how you can reframe your perspective on this,'” he said.

Experts’ Take

In response to the backlash, Sarah Baldry, Wysa’s vice president of marketing, said that the app helps its users to build emotional resilience. “It is important to understand that the chatbot can’t change the behavior of others. Wysa can only help users change the way they feel about things themselves.” 

Some research shows that AI apps do show promise in alleviating symptoms of anxiety and depression, although a significant issue is that most of them are not evidence-based. Other barriers that make AI bot therapy unpopular are concerns about data privacy and low engagement. Reetaza Chatterjee, founder of the mental health collective Your Head Lah, works in the nonprofit sector. “I don’t trust that these apps wouldn’t share my confidential information and data with my employers,” she said. 

Overall, privacy remains a weak point. The Mozilla Foundation, a digital rights nonprofit, found that many mental health apps fail badly at protecting it. Mozilla noted, however, that Wysa was a strong exception in this regard, though the users who commented on the matter had not used the platform enough to form a view on it. 

Conclusion

Although artificial intelligence is drastically transforming the healthcare industry, particularly the mental health sector, the example of Mindline at Work makes it evident that it cannot be relied upon completely, and doing so can prove disadvantageous. AI still has a long way to go before it can genuinely achieve human intelligence. Until then, we can make the most of what the technology has to offer to improve mental health care, while keeping an eye out for any irregularities and abnormalities. 


Deepfake Detection Technology: Where do we stand?

Credit: Mihai Surdu/Shutterstock

A team of researchers from the Johannes Kepler Gymnasium and the University of California, Berkeley, created an artificial intelligence (AI) application that can determine whether a video clip of a famous person is real or a deepfake. 

In their study published in Proceedings of the National Academy of Sciences, researchers Matyáš Boháček and Hany Farid describe training their AI system to recognize certain persons’ distinctive physical movements in order to determine whether a video is real. Their research expands on previous work in which a system was trained to recognize deepfake features and head movements of prominent political figures, including former President Barack Obama.

Deepfakes, a portmanteau of the terms “deep learning” and “fake,” initially appeared on the Internet in late 2017, powered by generative adversarial networks (GANs), an exciting new deep learning technology. Today, deepfakes have plagued the internet with their presence. 

Consider the following scenario: you receive a video of a celebrity from a friend. You see the celebrity making an absurd statement, having a dance-off, or engaging in ethically questionable activity. Whether intrigued or shocked, you forward the video to your other friends, only to discover later that it is fake. Now think back to when you first watched the video. Perhaps you assumed it was real because it looked completely authentic. Unlike earlier deepfake videos, which were quickly debunked in the previous decade, today’s GANs are powerful enough to create deepfake content that the human eye cannot discern as manipulated media.

In February, a study published in the Proceedings of the National Academy of Sciences USA found that humans judge deepfake images to be more realistic than actual ones. Researchers at the Stanford Internet Observatory reported in March that they had found over 1,000 LinkedIn accounts with profile photos that appeared to be generated by AI. Such instances highlight the importance of developing tools that identify deepfake content online.

Last month, Intel introduced new software that it claims is capable of instantly recognizing deepfake videos. With a 96% accuracy rate and a millisecond response time, the company asserts that its “FakeCatcher” real-time deepfake detector is the first of its type in the world.

In their current research, Boháček and Farid trained a computer model on more than eight hours of authentic video footage of Ukrainian President Volodymyr Zelenskyy. Their test case was a deepfake that showed Zelenskyy saying things he did not say; according to reports, that video was produced to support the Russian government in persuading the public to believe state propaganda about the invasion of Ukraine.

Inspired by a previous research study in which AI could identify deepfakes by analyzing the jagged edges of the pupils of the human eye, Boháček and Farid noted at the outset that people have other distinctive qualities aside from physical markings or facial features, one of which is body movement. For instance, they discovered Zelenskyy’s tendency to raise his right eyebrow whenever he lifts his left hand. They used this information to create a deep-learning AI system that analyzes a subject’s physical gestures and movements by reviewing video footage of Zelenskyy. Over time, the system became more adept at identifying actions that people are unlikely to notice, actions that collectively were exclusive to the video’s subject. It can recognize when something doesn’t match a person’s regular patterns.

The detection system analyzes up to 780 behavioral characteristics as it examines many 10-second segments taken from a single video. If it flags many segments from the same video as fake, it alerts human experts to take a closer look. 
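As a rough illustration of that segment-to-video flagging logic (this is not the researchers’ actual code), the video-level decision can be sketched as aggregating per-segment scores, assuming a hypothetical upstream classifier has already scored each 10-second segment on its behavioral features:

```python
# Illustrative sketch only: a hypothetical upstream model is assumed to have
# scored each 10-second segment (based on its behavioral features) and
# returned a probability that the segment is fake.

def flag_video(segment_fake_probs, segment_threshold=0.5, video_threshold=0.6):
    """Flag a video for human review when a large fraction of its
    10-second segments look fake to the behavioral classifier."""
    if not segment_fake_probs:
        return False
    flagged = sum(p > segment_threshold for p in segment_fake_probs)
    return flagged / len(segment_fake_probs) >= video_threshold

# A mostly-authentic video is not flagged...
print(flag_video([0.1, 0.2, 0.15, 0.6, 0.3]))   # False
# ...while a video whose segments consistently deviate from the subject's
# regular patterns would be routed to human experts.
print(flag_video([0.9, 0.85, 0.7, 0.95, 0.8]))  # True
```

Requiring many flagged segments, rather than any single suspicious one, is what keeps occasional odd gestures from triggering false alarms.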

The researchers then tested their system by evaluating multiple deepfake videos together with genuine videos of different people. They obtained true positive rates of 95.0%, 99.0%, and 99.9% when comparing an individual’s facial, gestural, or vocal characteristics, separately or in combination, against various datasets. This suggests that their system successfully distinguishes between actual and deepfake videos. It was also successful in recognizing the fabricated Zelenskyy video.

Though this is exciting and comforting news, there is a catch. While the success rates of deepfake detection tools are encouraging, misinformation and misleading content will not disappear. As AI becomes adept at recognizing deepfakes, the same technology is also helping create more powerful deepfakes that can evade existing detectors. Hence, these detection solutions offer only a partial answer to the threat. However, they do present a fighting chance to minimize the harm caused by deepfake content.


Uber Launches Self-Driving Robotaxis in Las Vegas

Image Credits: Motional

Uber is collaborating with driverless technology startup Motional to provide autonomous car rides in Las Vegas, with hopes to extend to other major cities such as Los Angeles.

Starting on Wednesday, Uber users in Las Vegas may request an autonomous version of the Hyundai-designed IONIQ 5 mid-sized SUV. The Motional-modified IONIQ 5 has 30 exterior sensors, including cameras, radar, and lidar systems, which enable Level 4 autonomy by identifying hazards at ultra-long range. In other words, the cars can operate autonomously, although only in specific situations and in pre-approved (geofenced) areas.

The other features of the IONIQ 5 remain the same, including a range of 238 to 315 miles and 220 kW fast charging, which takes the battery from 10% to 80% in 18 minutes.

The self-driving cars cannot be requested directly. Instead, users must choose “UberX” or “Uber Comfort Electric” to be paired with a Motional vehicle. Customers must opt in before the trip is confirmed, and a self-driving car will be sent to pick them up if one is available. Two “vehicle operators” ride in the robotaxis to keep an eye on the technology and offer further assistance to passengers. According to Motional, if a robotaxi encounters challenging circumstances (such as road construction or flooding), an operator can remotely manage it and steer it to safety. Once the robotaxi arrives at the agreed-upon pickup location, the Uber app will prompt riders who opted in to unlock the doors. 

Read More: Motional and Lyft team up to Launch Robotaxi Service in Los Angeles

The companies aren’t charging for driverless rides, but they plan to do so once the service is available to the general public.

The debut of the robotaxi service comes after the introduction of self-driving Uber Eats deliveries in Los Angeles in May as part of a 10-year business agreement between Uber and Motional.

Exactly two years after selling its self-driving division to Aurora Innovation, the launch ushers Uber into a new age of autonomous vehicles. Automation has long been considered a means of reducing costs and accelerating service by rideshare companies like Uber and Lyft.

Image Credits: Uber

With the potential to serve millions of customers, Uber claims that its recent cooperation with Motional would result in one of the largest deployments of autonomous vehicles on a major ride-hailing network.

Since 2018, Motional has been providing robotaxi services in Las Vegas through Lyft, a competitor of Uber; until 2020, however, trips were provided under its parent company, Aptiv.


NeuReality Receives Series A Funding of US$35 Million


NeuReality, an Israeli AI chip startup, has announced a US$35 million Series A funding round to commercialize its NR1 processor, which is designed to speed up artificial intelligence applications. The round was led by Samsung Ventures, Cardumen Capital, Varana Capital, OurCrowd, and XT Hi-Tech, with participation from SK Hynix, Cleveland Avenue, Korean Investment Partners, StoneBridge, and Glory Ventures. With this round, NeuReality’s total funding now stands at US$48 million.

The NR1, a network-attached “server on a chip,” employs a new class of Network Addressable Processing Units (NAPUs) designed specifically for deep learning inference applications such as computer vision, natural language processing, and recommendation engines. Thanks to the NAPU, large-scale users like hyperscalers and next-wave data center clients will be able to accommodate the expanding spectrum of their AI usage. 

The latest funding will bolster NeuReality’s plans to begin shipping its inference products in 2023. The term “inference” refers to the process of executing trained neural networks in production. In contrast to existing technologies, NeuReality’s solution is designed for optimal deployment in data centers and near-edge on-premises sites, locations that require better performance, reduced latency, and significantly higher efficiency. Large-scale AI-infrastructure settings generally struggle to maintain hardware efficiency as demand scales, because meeting that demand means adding more chips to the infrastructure, which in turn requires substantial power to manage. The NR1 chip addresses the problem via linear scaling: more chips can be added to the server cluster without compromising hardware efficiency. At the same time, the latency of AI operations drops, and system costs and power usage are reduced. These factors are crucial for improving the total cost of ownership (TCO) of data centers and on-premises large-scale compute systems, which is essential for the business models of many applications.
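The difference between linear and sublinear scaling can be made concrete with some illustrative arithmetic (the figures below are hypothetical and are not NeuReality’s numbers):

```python
# Illustrative arithmetic only (hypothetical figures): with sublinear
# scaling, each added accelerator contributes less usable throughput
# because coordination overhead grows with the cluster; with linear
# scaling, throughput grows in direct proportion to the chips added.

def effective_throughput(chips, per_chip=100.0, efficiency_loss=0.0):
    """Total inferences/sec for a cluster where each successive chip is
    `efficiency_loss` less effective than the one before it."""
    return sum(per_chip * (1 - efficiency_loss) ** i for i in range(chips))

# Hypothetical 16-chip cluster:
print(effective_throughput(16, efficiency_loss=0.0))    # linear: 1600.0
print(effective_throughput(16, efficiency_loss=0.05))   # sublinear: well under 1600
```

Under these made-up numbers, the sublinearly scaling cluster loses a large share of its nominal capacity at 16 chips, which is the kind of gap that drives up TCO as deployments grow.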

NeuReality provides the NR1 as part of an appliance called the NR1-S Inference Server, which features several NR1 chips. When compared to rival hardware, NeuReality claims that the NR1-S Inference Server can reduce prices and power needs by a factor of 50. The company also features the NR1 as part of the NR1-M accelerator card, which can be connected to a server via a PCIe port. With the use of the accelerator card, companies can incorporate NeuReality’s technology with their current server infrastructure in their data centers.

In addition to the NR1, NeuReality offers a collection of software tools to make deploying AI applications in production easier. These solutions from the company also promise to make managing applications easier. Among the software components in NeuReality’s portfolio is an AI hypervisor, which assists customers in managing machine learning applications deployed on NR1 chips.

Read More: Elon Musk Said Neuralink Brain Chip To Begin Human Trials in The Next Six Months

Dr. Mingu Lee, Managing Partner at Cleveland Avenue Technology Investments, said, “NeuReality is bringing ease of use and scalability into the deployment of AI inference solutions, and we see great synergy between their promising technology and the Fortune 500 enterprise companies we communicate with. We feel that investing in companies such as NeuReality is vital, not only to ensure the future of technology, but also in terms of sustainability.”

Since last May, NeuReality claims it has been distributing NR1 prototype implementations to partners. Using its latest US$35 million funding round, the company hopes to roll out its technologies extensively. In order to help with the endeavor, NeuReality will hire 20 additional staff over the next six months.


GitHub Announces GitHub Copilot for Business Plan


Last year in June, GitHub and OpenAI debuted a technical preview of a new AI tool called Copilot, which runs within the Visual Studio Code editor and autocompletes code snippets. Now, months after debuting for individual users and schools, GitHub Copilot is available in a plan for enterprises.

The new subscription, known as GitHub Copilot for Business, costs $19 per user per month and includes all the capabilities of the Copilot single-license tier, in addition to flexible license management, organization-wide policy controls, and industry-leading privacy. With this plan, businesses can easily establish policy controls that enforce user preferences for public code matching on behalf of their company.

The emergence of AI-assisted coding, according to GitHub’s blog post, will transform how we create software, much like the rise of compilers and open source. Because of this, GitHub has faith in the ability of AI to enhance the developer experience, boost productivity and happiness, and speed up innovation by offering GitHub Copilot to businesses of all sizes with enhanced admin controls.

GitHub wants to offer more tools in 2023 to help developers make informed judgments about whether to adopt Copilot’s suggestions, such as the ability to recognize strings matching public code and link to those repositories. Additionally, GitHub asserts that it will not save or distribute code samples for training purposes with GitHub Copilot for Business users, regardless of whether the data originates from public repositories, private repositories, non-GitHub repositories, or local files.

Read More: GitHub creates private vulnerability reports for public repositories

This development follows the filing of a class-action lawsuit in a federal court in the US challenging the legality of GitHub Copilot and OpenAI Codex. The lawsuit filed against GitHub, Microsoft, and OpenAI alleges a breach of open-source licensing and has the potential to significantly influence the artificial intelligence community. The lawsuit was filed by Matthew Butterick, a programmer and lawyer, and the Joseph Saveri Law Firm, which specializes in antitrust and class lawsuits.


Google announces Simple ML machine learning add-on for Google Sheets


This week, Google LLC announced a brand-new tool that will let users create machine learning models in Google Sheets. Currently available as a beta, the tool is dubbed Simple ML. Users are able to get it for free as a Google Sheets add-on. 

Simple ML was developed by one of the Google teams responsible for TensorFlow, a prominent open-source AI technology published by the search giant in 2015. Due to the extensible nature of Google Sheets, users can take advantage of add-ons that enhance the application’s built-in features. Users won’t need to use a specialized TensorFlow service in this instance because Google has designed Simple ML for Sheets to be as user-friendly as possible.

With just a few clicks, anyone, including those without programming or ML expertise, can experiment and apply some of the power of machine learning to their data in Google Sheets. Note that to train AI models using Simple ML, users must first create a Google Sheets spreadsheet with a collection of data points arranged in rows and columns. 

After installing Simple ML for Sheets from the Google Workspace Marketplace, open the spreadsheet and run the add-on from the Extensions menu. Simple ML, which launches in a side panel, currently offers two AI use cases: “Predict missing values” and “Spot abnormal values.” In the first case, you select the “Column with empty cells” and then hit the blue “Predict” button.

Source: Google

After a few seconds, the predictions and confidence percentages load into your spreadsheet. Google warns that these statistical predictions may be inaccurate, though this may improve in future versions.

In the second use case, Simple ML can detect abnormal data items. This is possible because Simple ML creates no fewer than 10 AI models that automatically assess the accuracy of the data in a spreadsheet to identify anomalies. For instance, the add-on can detect when a text string has been unintentionally entered into a spreadsheet field that is supposed to hold a numeric value. 
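Simple ML itself is point-and-click, but conceptually “Predict missing values” amounts to training a model on the complete rows and predicting the empty cells. A minimal sketch of that idea, using a trivial group-mean model over a hypothetical spreadsheet (the real add-on trains TensorFlow Decision Forests models, not this):

```python
# Conceptual sketch only, with hypothetical data: fill each empty target
# cell by learning from the rows that do have values.

def predict_missing(rows, feature_key, target_key):
    """Fill empty target cells with the mean of rows sharing the feature."""
    # Learn per-group means from the complete rows.
    groups = {}
    for r in rows:
        if r[target_key] is not None:
            groups.setdefault(r[feature_key], []).append(r[target_key])
    means = {k: sum(v) / len(v) for k, v in groups.items()}
    # Predict the empty cells.
    return [r[target_key] if r[target_key] is not None
            else means.get(r[feature_key]) for r in rows]

sheet = [  # hypothetical spreadsheet: fruit weights with one empty cell
    {"fruit": "apple", "grams": 180}, {"fruit": "apple", "grams": 200},
    {"fruit": "pear", "grams": 150}, {"fruit": "apple", "grams": None},
]
print(predict_missing(sheet, "fruit", "grams"))  # empty cell -> 190.0
```

A decision forest generalizes this idea across many feature columns at once, which is why the add-on can also attach confidence percentages to its predictions.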

A handful of advanced user features are also included in Simple ML. Google claims that the tool allows users to assess the quality of the AI models it creates and offers technical information about them. Additionally, Simple ML lets users transfer AI models to Colab, a Google cloud-based code editor that is used for projects involving machine learning and data science.

Read More: Google shuts down its Duplex on the Web service

Simple ML can be used in conjunction with Connected Sheets, another powerful data processing tool from Google, which gives users the Google Sheets interface for analyzing data hosted in the search giant’s BigQuery cloud data warehouse. With it, users can handle billions of rows of spreadsheet data without having to write SQL queries.

Google also points out that because Simple ML runs in your browser, your data remains secure in your Google Sheets spreadsheet. For convenient sharing with the rest of your team, the models are instantly saved to Google Drive. Additionally, you can export models trained in Simple ML to the TensorFlow ecosystem because Simple ML is built on top of TensorFlow Decision Forests.


Meta to employ AI to verify user age on Facebook Dating


In the past, Meta has employed AI facial-scanning tools to confirm Instagram users’ ages. The company has now announced that it is testing the technology on Facebook Dating to make the product safer.

Recently, Meta stated in a blog post that it will begin asking Facebook Dating users to verify their age if the platform suspects a user is underage. Users may upload a video selfie by following a few simple guidelines. Meta then shares a still picture from the video with Yoti, a third-party age-verification company, which calculates an age estimate based on facial features. The image is removed once Meta receives the result.
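The described selfie flow can be sketched as follows. Note that Yoti’s real API and Meta’s internals are not public here, so `yoti_estimate_age` is a hypothetical stand-in passed in by the caller:

```python
# Hypothetical sketch of the flow the article describes: one still frame
# is shared with a third-party estimator, the estimate is used, and the
# image is then discarded. `yoti_estimate_age` is a stand-in, not a real API.

MIN_AGE = 18

def verify_via_selfie(video_selfie_frames, yoti_estimate_age):
    """Return True if the estimated age meets the platform's minimum."""
    still = video_selfie_frames[0]            # one still picture from the video
    estimated_age = yoti_estimate_age(still)  # third-party estimate
    del still                                 # image discarded after the result
    return estimated_age >= MIN_AGE

# Stand-in estimators for illustration:
print(verify_via_selfie(["frame0", "frame1"], lambda img: 22))  # True
print(verify_via_selfie(["frame0"], lambda img: 15))            # False
```

The key privacy property in the described flow is that only a single frame leaves the platform and nothing is retained once the numeric estimate comes back.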

Source: Meta

You can also upload an ID that shows your age, like a driver’s license. While your age is being confirmed, the ID is encrypted and stored. The ID is never made public on your Facebook page, and Meta lets users choose how long it should be kept. According to Meta, the verification procedure takes a few days, but its own help documentation reveals that it can store IDs for up to one year.

Read More: Meta Introduces CICERO, the First AI That Plays Diplomacy at a Human Level

Facebook Dating debuted in 2019, following the company’s Cambridge Analytica data privacy breach; however, it has fallen behind rivals in the dating industry. Meta has been experimenting with various artificial intelligence-based age-identification systems as governments put more pressure on online companies to provide minimal safeguards for minors. Although Meta hasn’t fully described the markers it looks for to determine a person’s age, it has previously stated that it analyzes things like a user’s birthday posts, since friends frequently respond with the user’s genuine age in their comments. With this most recent action, Meta is ahead of legislative frameworks that are increasingly centered on the online safety of kids and teenagers, with legislation underway both nationally and internationally.

According to Meta, testing since June has shown that the service was able to deter 96% of teenagers who attempted to modify their birthdays from doing so. The company said that the new age verification technologies will help prevent youngsters from using adult-only services. The age verification test will be implemented across all other Meta products that require users to be at least 18 years old, as well as in other nations where Facebook Dating is active.

For adults using Facebook Dating, Meta did not list any requirements (such as ensuring a 45-year-old is not pretending to be 18). The Verge claims that the technology is not equally accurate for all users: Yoti’s data reveals that the algorithm’s accuracy is worse for “feminine” faces and individuals with darker skin tones.


Howie Mandel Gets a Digital Twin from DeepBrain AI


Howie Mandel enters the metaverse as he gets a hyperrealistic digital twin from DeepBrain AI. Calling it “AI Howie,” DeepBrain AI developed it to be an interactive virtual human. 

DeepBrain AI is a South Korea- and California-based company providing virtual human services, such as recreating late family members’ personas in virtual worlds. With the same appearance, voice, gestures, and subtle mannerisms as the real person, these virtual characters are their digital twins. 

The DeepBrain AI team also produced digital twins of South Korean President Yoon Suk-yeol, several news anchors from Asia, and Premier League soccer star Son Heung-min before working with Howie Mandel.

Read More: Wega Labs, a web3 Gaming Startup developing Cricinshots — a unique cricket strategy game

Joe Murphy, business development manager at DeepBrain AI, said that the company also develops completely synthetic people, called “digital people,” and Roblox avatars. However, those technologies are not very advanced, and many companies can build them. When it comes to digital twins of real-life people, though, DeepBrain AI goes through an extensive deep-learning process to clone the person’s voice, face, mannerisms, and even the way their eyes move.

The famous comedian, actor, host, and computer enthusiast Howie Mandel worked with DeepBrain AI to develop a virtual human AI digital twin. Mandel said in a statement, “I am equally thrilled, excited and terrified to finally have the ability of showing up and doing things without going anywhere or doing anything.”


AWS Machine Learning University to offer a free AI Educator enablement program


Amazon Web Services (AWS) Machine Learning University will offer a free AI educator enablement program starting in January 2023 to help institutions prioritize teaching database, machine learning, and artificial intelligence concepts to historically underserved students. 

AWS will facilitate a free hands-on experience for students in a cloud-based sandbox to apply ML concepts and experiment with a variety of AWS services, ML cloud computing, and data analytics tools.

This program will prioritize minority-serving institutions (MSIs), community colleges, and historically Black colleges and universities (HBCUs) in the US. The program will include six educator boot camps in 2023, using the same content Amazon uses to train its own developers and data scientists. 

Read More: ChatGPT Fails That Prove Why OpenAI Is Far From Expositing Ethical Concerns In Language Models

The boot camps will introduce learners to the program and will consist of lecture slides, exams, instructor handbooks, and hands-on coding exercises based on feedback from school systems that helped pilot the early program. 

Educators who finish the program can get an AWS stipend and continuing education credits. They will also have year-round professional development through tech talks, Slack study groups, regional events, and virtual study sessions moderated by AWS instructors. 


FTC tries to block Microsoft’s plan to buy Activision Blizzard


The Federal Trade Commission (FTC) has filed a legal case against Microsoft to try and block the company’s plan to buy Activision Blizzard for about $68.7 billion. 

The lawsuit was filed today, according to a press release from the regulator, after weeks of back and forth between Sony, Microsoft, and other regulators over concerns about competition and the future of the game Call of Duty. 

The FTC believes that the acquisition would allow Microsoft to suppress competitors to the company’s Xbox gaming consoles and its rapidly growing cloud-gaming and subscription content businesses. 

Read More: ChatGPT Fails That Prove Why OpenAI Is Far From Expositing Ethical Concerns In Language Models

Today’s vote from the FTC commissioners means Microsoft will face significant hurdles in securing its Activision Blizzard deal. Even after Microsoft’s repeated attempts to appease authorities, regulators in the EU and UK continue to scrutinize the agreement closely.

“We continue to believe that the deal will expand competition and generate more opportunities for game developers and gamers,” said Brad Smith, Microsoft’s vice chair and president. In a letter to Activision Blizzard employees, CEO Bobby Kotick reinforced his confidence that the acquisition would close. 
