Integration of algorithm compliance in medical decision-making
Working in the biomarkers business, I have been in contact with a fair number of clinicians whose appreciation of compliance with validated algorithms used in clinical decisions, whether in guidelines or publications, varied widely. The impact of compliance seems only lightly studied, and what has been studied shows a clear weakness of human decision-making in cases where compliance tends to be statistically lower than expected. What would be the means and impact of having Artificial Intelligence work in collaboration with the clinician to document and improve compliance?
The recording of compliance lends itself extremely well to a blockchain solution: 1) a validated algorithm can act as the genesis block; 2) a smart contract can standardize the compliance requirements (i.e., "I have done X, Y, Z, etc."); 3) the genesis block will not chain unless the smart contract is complete and digitally signed by whoever performed the algorithm; and 4) the result is a granular, secure 'golden record' of how many times the algorithm has been performed, and when.
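The four steps above can be sketched in a few lines. This is a minimal illustration, not a real blockchain or smart-contract implementation: the algorithm name, required steps, and clinician key are invented, and the "digital signature" is an HMAC stand-in for what would in practice be a proper public-key signature (e.g., ECDSA).

```python
# Minimal sketch of the compliance chain described above. All names are
# illustrative; a real deployment would use true digital signatures and a
# distributed ledger rather than this in-memory list.
import hashlib
import hmac
import json

REQUIRED_STEPS = {"X", "Y", "Z"}  # the smart-contract compliance requirements

def block_hash(block: dict) -> str:
    # deterministic hash over the block's contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_genesis(algorithm_id: str) -> dict:
    # 1) the validated algorithm acts as the genesis block
    block = {"algorithm": algorithm_id, "prev": None}
    block["hash"] = block_hash(block)
    return block

def record_compliance(chain: list, steps_done: set, clinician_key: bytes) -> None:
    # 2)+3) the chain only extends if every required step is attested and signed
    if not REQUIRED_STEPS <= steps_done:
        raise ValueError(f"incomplete: missing {REQUIRED_STEPS - steps_done}")
    payload = {"steps": sorted(steps_done), "prev": chain[-1]["hash"]}
    # simplified "digital signature" over the payload (HMAC stand-in)
    payload["signature"] = hmac.new(
        clinician_key,
        json.dumps(payload, sort_keys=True).encode(),
        hashlib.sha256,
    ).hexdigest()
    payload["hash"] = block_hash(payload)
    chain.append(payload)

chain = [make_genesis("sepsis-triage-v2")]          # hypothetical algorithm name
record_compliance(chain, {"X", "Y", "Z"}, b"demo-key")
uses = len(chain) - 1  # 4) the 'golden record': how often the algorithm ran
```

An incomplete attestation (say, only step X done) raises `ValueError` instead of chaining, which is the enforcement property the smart contract provides.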
There are obvious benefits to the 'golden record'. Anyone viewing it knows immediately how often the algorithm has been used, when, and potentially where geographically (subject to agreement) it was used. If the algorithm relates to something contagious (Ebola, SARS, etc.), the benefits of this global real-time reporting are immense. To the community as a whole, an algorithm that is visibly used often (i.e., one with a long blockchain) is far more valuable than one that isn't.
While the technology for compliance exists, we can't make people record compliance. Hence, the further you are from an automated algorithm, the less compliance you are going to have; and the less developed the user community, the worse it is going to be. Technology doesn't trump human nature.
Our member-based Industry Benchmarking Consortium focuses on Governance, Risk and Compliance (GRC) topics. We leverage four or five low-cost tools for analytics, compliance, and similar AI-related or decision-related applications. So yes, we see the combined use of tools and humans-in-the-loop as a way to address the areas you are researching.
The GRC Sphere
The coding process for a new app starts with an idea of what the AI should be capable of. Based on this vision, the programmers create the software from mathematical formulas. In an ideal world the AI works as predicted, but of course this never happens. A program is complex, and often different parts contradict each other and lead to malfunction. A certain part of the programmers' task is old-fashioned trial and error: by changing variables, the software finally might do what it should. Yet despite the positive output, the solution is not explainable. In normal day-to-day operation this is no problem, but if the software faces a difficult situation, its behavior is not predictable and may violate applicable law. It is therefore imperative that the software be explainable, that this is documented by the responsible programmers, and that it is audited by the AI Compliance Officer.
It would be beneficial if the AICO (Artificial Intelligence Compliance Officer) mastered coding and could "read" software. Furthermore, case studies may be used to test the software and analyze how it reacts in special situations that are out of the ordinary.
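The kind of case-study testing described here can be sketched as assertion checks against out-of-the-ordinary inputs. The decision function below is purely hypothetical (its name, inputs, and scoring rule are invented for illustration); the point is the shape of the audit, not the rule itself.

```python
# Illustrative only: a hypothetical triage-scoring function and the kind of
# out-of-the-ordinary case studies an AICO might run against it.
def triage_score(age: int, lactate: float) -> float:
    # hypothetical scoring rule, capped at 1.0
    if age < 0 or lactate < 0:
        raise ValueError("implausible input")
    return min(1.0, 0.01 * age + 0.1 * lactate)

# ordinary case: score stays within the documented range
assert 0.0 <= triage_score(60, 2.0) <= 1.0

# special situation out of the ordinary: extreme but possible values
# must not push the score past its cap
assert triage_score(105, 15.0) == 1.0

# implausible input must be rejected, not silently scored
try:
    triage_score(-1, 2.0)
    raise AssertionError("negative age was accepted")
except ValueError:
    pass
```

Each case study documents, in executable form, how the software behaves in a situation its programmers may not have anticipated, which is exactly the evidence an audit needs.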