Following a warning to doctors working in hospitals in Perth about using ChatGPT to create medical notes, Australia’s top medical group has called for strict regulations and transparency surrounding the use of artificial intelligence in the healthcare sector.
In a submission responding to the federal government’s discussion paper on safe and responsible AI, the Australian Medical Association said Australia lags behind comparable countries in regulating AI and that stronger rules are needed to protect patients and medical professionals and to build trust.
In May, five hospitals in Perth’s South Metropolitan Health Service were instructed to stop using ChatGPT to write medical notes for patients, after it emerged that some staff had been using the large language model for the task. According to the ABC, the service’s chief executive, Paul Forden, told staff in an email that patient confidentiality could not be guaranteed when using such tools and that the practice had to stop.
The AMA also said that patient data must be protected and that proper ethical oversight is needed to prevent AI systems from deepening existing health inequities.
The AMA said Australia should consider the EU’s recent Artificial Intelligence Act, which would classify the risks posed by different AI technologies and establish an oversight board. It also recommended looking to Canada’s approach, which requires points of human intervention in AI-assisted decision-making.
The AMA president, Prof Steve Robson, said: “We need to address the AI regulation gap in Australia, especially in healthcare where there is the potential for patient injury from system errors, systemic bias embedded in algorithms and serious risk to patient privacy.”