gliff.ai is a Durham-based company leading the world in the development of medical Artificial Intelligence (AI). The company was spun out from Durham University and began building a set of tools during lockdown that help people, particularly Doctors, to build their own AI using their own images.
gliff.ai understood that the opportunities are no longer just in the creation of AIs. “We no longer need to be convinced that a computer is capable of copying the actions of a Doctor in a laboratory – we know computers can copy Doctors to make the same decisions,” said Bill Shepherd, CEO of gliff.ai. “The opportunity is actually to take these AIs and make them trustworthy for use in hospitals and clinics.”
Some of the world’s most influential governments in medical regulation have released guidelines on practices for developing medical Artificial Intelligence. The legal landscape around medical AI is still evolving to catch up with the technological advances in this area. However, the new guidelines, published by UK, US and Canadian government bodies, are a significant leap forward for building trustworthy AI for medical use.
AI will revolutionise healthcare, but it is important that AI applications are developed to the highest standards to ensure effectiveness and safety for Doctors and their patients.
Last week, the UK Government’s Medicines and Healthcare products Regulatory Agency (MHRA) published a joint document with the U.S. Food and Drug Administration (FDA) and Health Canada, titled “Good Machine Learning Practice for Medical Device Development: Guiding Principles.” The partnership between these governmental organisations reflects international efforts to create global standards for medical AI. The document outlines 10 principles that will help promote safe and effective medical AI.
Technology start-up gliff.ai has strongly welcomed the release of these guidelines. gliff.ai’s software is specifically designed to assist the development of trustworthy AI and already includes support for the newly developed standards. “It is great to be leading the world in software development for AI,” continued Bill Shepherd.
“We’re really excited to see that these guidelines emphasise the role of high-quality data in AI development, as well as the need to involve experts from different disciplines,” says Chas Nelson, CTO of gliff.ai. “At gliff.ai, we’ve already been focusing on addressing the issues presented in these guidelines.”
Lucille Valentine, gliff.ai’s Head of Regulation and Compliance, closely monitors global developments in standards and regulation pertaining to AI and machine learning. “These international guidelines represent a huge milestone for medical AI development. What’s more, they underline the case that AI developers must use a first-rate MLOps system to develop their datasets, especially if they wish to see their products put into use in the real world,” says Lucille.[1]
Whilst demonstrating a leap forward, these guidelines are only part of the foundations required for a global set of standards. The intergovernmental partnership expects that its 10 principles will help inform further international engagement with public health bodies.
[1] Machine Learning Operations (MLOps) is the emerging part of the technology world that makes it easier for domain experts, such as Doctors and Engineers, to develop their own AI outside of the laboratory.