Since the 1980s, human-machine interactions, and human-in-the-loop (HITL) scenarios in particular, have been systematically studied. It was often predicted that as automation increased, less human-machine interaction would be needed over time. Advances in AI/ML have reduced human-machine interaction to some extent, but in some cases the need for it has dramatically increased. Most common forms of AI/ML training still rely on human input, and often demand more human insight than ever before. This brings us to a question:
As AI/ML technology continues to progress, what will the trajectory of human-machine interaction be over time and how might it differ from the status quo?
Shifting Human Interaction: Human Review of the Worst-performing ML Predictions
As AI/ML evolves and the baseline accuracy of models improves, the type of human interaction required will shift from creating generalized ground truth from scratch to reviewing the worst-performing ML predictions, in order to improve and fine-tune models iteratively and cost-effectively. Deep learning algorithms thrive on labeled data and improve progressively as more training data is added over time. For example, a common use case is annotating the boundaries of buildings in satellite images of cities to create models that generate accurate street maps for navigation applications. Incorrect, biased, or subjective labels are prone to generating inconsistencies in the maps. Human review of every element of such ML-generated maps would be a painstaking if not impossible task, so the best approach is to analyze ML predictions programmatically, focus on the self-reported regions of low confidence, prioritize those for human review and editing, then reintroduce the results as new training data. The iterative process still relies on human input, but this work will increasingly demand subject matter expertise and consensus on which answer is considered “most correct.”
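The prioritization step described above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's pipeline: the `Prediction` record, field names, and the 0.6 confidence threshold are all hypothetical, standing in for whatever a real model reports about its own uncertainty.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    item_id: str       # e.g. one tile of a satellite image (hypothetical ID scheme)
    label: str         # the model's predicted label
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0

def select_for_review(predictions, threshold=0.6, budget=100):
    """Return the lowest-confidence predictions, capped at a review budget.

    Items above the threshold are trusted as-is; the rest are queued for
    human review, least confident first, then fed back as training data.
    """
    low_conf = [p for p in predictions if p.confidence < threshold]
    low_conf.sort(key=lambda p: p.confidence)
    return low_conf[:budget]

# Example: only the uncertain tiles reach a human reviewer.
preds = [
    Prediction("tile-001", "building", 0.98),
    Prediction("tile-002", "building", 0.41),
    Prediction("tile-003", "road", 0.55),
]
for p in select_for_review(preds):
    print(p.item_id, p.confidence)
```

The review budget matters as much as the threshold: it is what keeps the loop cost-effective, since expert time, not model compute, is the scarce resource.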
The Future of Human-in-the-Loop: Deepening the Connection with Data Annotation
A recent report by Cognilytica noted that data preparation tasks such as aggregating, labeling, and cleansing consume over 80% of the time in most AI/ML projects. The market for third-party data labeling solutions was estimated at $150M in 2018 and is projected to exceed $1B by 2023. Labeling accuracy is increasingly a primary concern: the industry has shifted from simple bounding boxes and speech transcription to pixel-perfect image segmentation and millisecond-level time slices in audio analysis.
In pathology, for example, detecting diseased cells in a tissue slide requires extreme accuracy, as the diagnosis of disease, and thereby a patient's plan of care, depends on deriving the correct answer. The stakes are obviously high, so the boundaries of diseased cells need to be labeled as precisely as possible. In the case of autonomous vehicles, identifying objects and activity in millisecond-level time slices is now the norm. When a car from a neighboring lane moves into the same lane as an autonomous vehicle, the reaction must be immediate while taking other factors into account, such as the location and speed of every other vehicle in the immediate vicinity. Human input on situations that require judgment in the face of potentially disastrous outcomes is no longer theoretical; it has become the next frontier in data annotation.
As ML models approach 100% accuracy, establishing ground truth intrinsically becomes more subjective, requiring ever-higher levels of subject matter expertise and labeling precision. Voting mechanisms that distill the collective wisdom of expert-level human annotators are now used routinely. In 2019, a study presented at the Open Data Science Conference (ODSC) compared the performance of full-time data labelers to crowdsourced workers on a simple transcription task: the crowdsourced workers made 10 times more errors than the professional annotators. A similar trend was observed for tasks such as sentiment analysis and extracting information from unstructured text. The study highlights that a professionally managed workforce is often the optimal overall solution when both accuracy and cost are taken into account. We anticipate that the “commodity” data labeling currently offered by crowdsourcing and business process outsourcing organizations around the world will soon be displaced by smaller teams of annotation specialists with deep subject matter expertise. By extension, this shift will require more expensive labor, strict quality controls, specialized toolsets, and workflow automation to optimize the process, rather than huge teams of low-cost labor.
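A voting mechanism of the kind described above can be sketched simply: take each annotator's label for an item, accept the majority answer if agreement is high enough, and otherwise flag the item for escalation to a senior reviewer. The agreement threshold and the escalation behavior here are illustrative assumptions, not a description of any specific platform's workflow.

```python
from collections import Counter

def consensus_label(votes, min_agreement=0.5):
    """Majority vote over expert annotators' labels for one item.

    Returns (label, agreement) when the top label clears the agreement
    threshold, otherwise (None, agreement) so the item can be escalated
    to a more senior subject matter expert.
    """
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    agreement = n / len(votes)
    if agreement >= min_agreement:
        return label, agreement
    return None, agreement

# Two of three pathologists agree: the label is accepted.
print(consensus_label(["diseased", "diseased", "healthy"]))
# Three-way disagreement: no label clears the bar, so the item escalates.
print(consensus_label(["diseased", "healthy", "artifact"]))
```

In practice the threshold is a quality-versus-cost dial: raising it sends more borderline items to expensive experts, which is exactly the trade-off driving the shift toward smaller, specialist annotation teams.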
BasicAI: Illuminating Human-AI Collaboration
Clearly, the landscape of AI/ML has undergone remarkable transformations since the 1980s. Despite significant progress, it's important to acknowledge that AI/ML remains a field in dynamic evolution. In this context, the interaction between humans and machines, which plays a pivotal role in model training, remains indispensable for the foreseeable future.
This collaboration between humans and AI is where BasicAI shines, offering not only cutting-edge solutions but also a robust annotation platform and professional outsourcing service. In a world where the intricacies of data labeling are paramount, BasicAI steps up to the plate, providing a seamless platform that bridges AI capabilities and human expertise.
As annotation tasks become more intricate – from medical image analysis to sentiment classification – the need for adaptable human involvement grows. BasicAI leads this transformation, aligning services with the evolving AI/ML landscape. Its ability to harmonize advanced algorithms with human insights creates unmatched synergy, driving AI models toward unparalleled accuracy and adaptability.
In a future where AI's potential knows no bounds, BasicAI remains dedicated to refining the intricate dance between humans and machines. This dedication positions the company as a trailblazer on the journey toward AI excellence: both a testament to the progress made thus far and a harbinger of the remarkable possibilities that lie ahead.