AI-Human Collaborative Model Gets Refined

In improved hybrid human-AI systems, the AI decides the boundaries of the partnership

Common knowledge might lead you to believe that an artificial intelligence system and a human working together will always produce the best results. According to recent studies on artificial intelligence, that isn’t always the case. Sometimes the AI ‘thinks’ it should make the call over the judgement of its human counterpart, and of course the inverse can be true as well. In 2018, Amazon reportedly scrapped a ‘secret’ AI hiring tool that primarily favoured male candidates over female ones, unwittingly exacerbating pre-existing biases. AI-driven hiring has a long recorded history of unfairness with regard to race, ethnicity and gender, and human override has therefore often been deemed necessary.

However, recent reports suggest this gap might soon be bridged. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an AI system that optimizes the division of labour based on the strengths and weaknesses of its human collaborator.

Several studies have revealed that when AI and humans come together to perform designated tasks, either of them might do the task better alone than in collaboration. So much for teamwork! This is often seen in AI-based diagnostic methods, where automated systems triage the findings before these are turned over to human specialists for the final check. There is a fine line where machine scrutiny must stop and human doctors take over, and it is at this borderline that automated systems falter. It is difficult to create algorithms that can identify the perfect point at which to stop, thereby optimizing the AI-to-human handover. On the one hand, overinvolvement of the AI system can prevent or postpone human decision-making, which rests on experience and the complex situational judgement so crucial in medical settings; on the other hand, allowing the algorithm to continue with the task might lead to a quicker and more accurate decision, thanks to the machine’s data-crunching abilities and computing speed. A Catch-22 situation indeed.
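One way to picture the problem: the simplest handover rule defers to a human whenever the model’s confidence drops below a fixed cutoff. Below is a minimal, hypothetical Python sketch of that naive rule; the function name and the 0.9 threshold are illustrative assumptions, and picking that threshold by hand is exactly the ‘perfect point’ the article says is so hard to find.

```python
# A minimal sketch of the naive handover rule described above: defer
# to a human whenever the model's confidence falls below a fixed
# threshold. All names here are illustrative, not from the MIT system.
import numpy as np

def fixed_threshold_handover(probs: np.ndarray, threshold: float = 0.9):
    """Return per-case decisions: the model's label, or 'defer' to a human.

    probs: array of shape (n_cases, n_classes) holding the predicted
    class probabilities for each case.
    """
    confidence = probs.max(axis=1)   # model's top-class probability
    labels = probs.argmax(axis=1)    # model's preferred label
    # Below the threshold, hand the case to the human specialist.
    return [int(label) if conf >= threshold else "defer"
            for label, conf in zip(labels, confidence)]
```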

Researchers used two separate machine-learning models for this study. One was entrusted with actually making the decision, while the other predicted whether the better decision would come from the algorithm or from the human involved in the process. This second model is called the “rejector,” and it keeps refining its predictive abilities with every decision it analyses. It also factors in context beyond the decision point in question, such as how much time the human had to make the decision, or the level of access the algorithm had to patient information.
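The article does not specify the implementation, but the two-model setup it describes can be sketched roughly as follows. This is a hedged illustration assuming scikit-learn logistic regression for both models; the class name DeferralSystem and all feature names are hypothetical, not details of the CSAIL system.

```python
# A hedged sketch of the two-model setup described above: a classifier
# that makes the decision, and a "rejector" trained to predict whether
# the algorithm or the human would decide better. Model choice and
# feature names are assumptions made for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

class DeferralSystem:
    def __init__(self):
        self.classifier = LogisticRegression()  # makes the actual decision
        self.rejector = LogisticRegression()    # decides who should decide

    def fit(self, X, y, human_preds, context):
        """X: case features; y: ground truth; human_preds: the human's
        past answers; context: extra features such as time available or
        the algorithm's access to patient data (illustrative)."""
        self.classifier.fit(X, y)
        machine_correct = self.classifier.predict(X) == y
        human_correct = human_preds == y
        # Rejector target: 1 where the machine beat (or matched) the human.
        target = (machine_correct >= human_correct).astype(int)
        # The rejector sees both the case and its surrounding context.
        self.rejector.fit(np.hstack([X, context]), target)

    def decide(self, x, ctx):
        """Return the machine's label, or 'defer' to the human."""
        if self.rejector.predict(np.hstack([x, ctx]).reshape(1, -1))[0]:
            return int(self.classifier.predict(x.reshape(1, -1))[0])
        return "defer"
```

The design point is that the rejector is trained on who was actually right, case by case, so it can learn the human collaborator’s strengths and weaknesses rather than relying on a fixed confidence cutoff.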

The human-AI system was put to the test in diverse situations of varying complexity, including image recognition and the detection of hate speech. The algorithm was observed to adapt successfully to the behaviour patterns of its human collaborators and to hand tasks over to them as the situation demanded. This led to a much greater degree of combined accuracy than could be achieved in earlier hybrid human-AI systems. The applications could be crucial: an algorithm-driven system could, for example, suggest the right antibiotic after analysing a case history, so that treatment is effective and there is no chance of fostering antibiotic resistance through misuse. In the ideal scenario, the system is expected to learn the different biases individual doctors have while diagnosing and prescribing, and to correct any tendencies that might not offer optimum benefit to the patient.
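To make that concrete, here is a hypothetical usage sketch of the DeferralSystem above on synthetic data; in the ideal scenario the article describes, the context features would encode each doctor’s prescribing tendencies. All data here is randomly generated for illustration, not drawn from the study.

```python
# Illustrative use of the hypothetical DeferralSystem sketched earlier.
# The features and data are made up; "context" stands in for signals
# like time pressure or a given doctor's prescribing tendencies.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))         # case features (e.g. labs, vitals)
y = rng.integers(0, 2, size=200)      # correct antibiotic choice (binary)
human = rng.integers(0, 2, size=200)  # the doctor's past prescriptions
ctx = rng.normal(size=(200, 2))       # per-case context features

system = DeferralSystem()
system.fit(X, y, human, ctx)
print(system.decide(X[0], ctx[0]))    # a label, or "defer" to the doctor
```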

The work is still at a purely experimental stage, and the data used in the research is nowhere near real-life levels of complexity. However, the researchers are confident that the outcomes are promising, and that the model could be refined to the point where it works just as accurately in complex real-life decision-making scenarios.
