Human-powered data solutions are the future of AI

Machines can’t teach themselves, at least not yet. With the right “human-in-the-loop” (HITL), AI and ML models can be made more accurate.

Thursday December 09, 2021, 6 min read

Artificial Intelligence (AI) is drastically altering how work is done. The greater impact, however, is its ability to enhance human capabilities.


Harvard Business Review research, covering 1,500 companies, found that firms achieve the most significant performance improvements when humans and machines work together.


Through such collaborative intelligence, humans and AI actively enhance each other’s complementary strengths: leadership, teamwork, creativity and social skills of the former, and the speed, scalability and quantitative capabilities of the latter.


An IBM study states that 80 percent of a data scientist's time is spent finding, cleansing, and organising data, leaving very little time for actual analysis. We have also observed that about half of machine learning (ML) projects, 47 percent to be precise, never make it to production; that is, they never pass the proof-of-concept or trial stage.


These observations underscore a tension: organisations across industries are investing heavily in AI and ML to gain a competitive edge, yet most projects stall without the right human support.

How humans are collaborating with machines

Humans need to perform three crucial roles within AI: train machines to perform certain tasks; explain the outcomes of those tasks, especially when the results are counter-intuitive or controversial; and sustain the responsible use of machines.


Once designed, ML algorithms need humans to teach them how to perform a certain task: for example, training a translation algorithm to understand colloquial expressions, a healthcare app to detect variations in health graphs, or a recommendation engine to suggest the right product or service to users.


AI reaches conclusions by processing data, and those conclusions need to be explained to the humans whose decisions depend on them.


For example, a doctor needs to understand how AI interprets certain inputs, say, why it recommends a specific prescription for a given set of symptoms. These explainers are important, especially for answering 'why' questions, say, why an autonomous vehicle failed or caused a mishap.
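To make the idea concrete, here is a toy sketch of the kind of explanation a doctor might ask for: which inputs pushed a simple model toward its prediction. This is not a real clinical system; the symptom names, data, and model are all illustrative assumptions, and the technique shown (inspecting a linear model's coefficient-times-input contributions) is just one simple form of explanation.

```python
# A toy sketch, not a real clinical system: which symptoms pushed a
# simple linear model toward its prediction. All names and data are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

symptoms = ["fever", "cough", "fatigue", "chest_pain"]
X = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 1, 1], [0, 0, 0, 1]])
y = np.array([1, 0, 1, 0])  # 1 = condition present in the training data

model = LogisticRegression().fit(X, y)
patient = np.array([[1, 0, 1, 0]])  # presents with fever and fatigue
print("prediction:", model.predict(patient)[0])

# Per-symptom contribution to the decision: coefficient * input value.
for name, contribution in zip(symptoms, model.coef_[0] * patient[0]):
    print(f"{name:>10}: {contribution:+.2f}")
```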

Beyond training and explanations, AI also needs sustained and consistent monitoring of processes by humans to track errors or obstructions and to ensure efficiency.

For example, with autonomous vehicles, safety engineers need to track the movement of the cars and trigger alerts if the vehicles endanger nearby people or property.
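As an illustration of what such monitoring might look like in code, here is a minimal sketch in which perception events the model is unsure about are escalated to a human operator. PerceptionEvent, notify_operator, and the thresholds are hypothetical names chosen for the example, not part of any real vehicle stack.

```python
# An illustrative human-in-the-loop monitoring sketch: perception events
# below a confidence threshold are escalated to a human operator.
# All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class PerceptionEvent:
    object_type: str   # e.g. "pedestrian", "reflection", "sign"
    confidence: float  # model's confidence in the classification
    distance_m: float  # distance from the vehicle in metres

def notify_operator(event: PerceptionEvent) -> None:
    """Placeholder: page a safety engineer for review."""
    print(f"ALERT: {event.object_type} at {event.distance_m} m "
          f"(confidence {event.confidence:.2f}) needs human review")

def monitor(events, confidence_floor=0.6, danger_radius_m=10.0):
    for event in events:
        # Escalate anything the model is unsure about near the vehicle.
        if event.confidence < confidence_floor and event.distance_m < danger_radius_m:
            notify_operator(event)
```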

Dealing with edge cases

AI systems are complex, and with widening adoption, the complexities multiply. AI algorithms often encounter situations in which they do not perform as expected or programmed. These edge cases, such as malfunctions in autonomous vehicles leading to minor crashes, can produce negative outcomes.


A self-driving car may not register its surroundings the way a human driver does. It may misread the behaviour of a pedestrian or fail to identify an object on the road. Such gaps make it difficult for machines and AI-led systems to perform flawlessly, or even as programmed.


For example, construction workers holding street signs in unexpected locations, stop signs that swing out from moving school buses, and telling people or objects apart from their reflections are nuances a human driver weighs instinctively while driving. For an autonomous vehicle, reading these situations can be complex.

In such cases, ML- and AI-driven systems need constant learning and development to understand human behaviour and surroundings. They also need human intervention at crucial points to avoid negative outcomes, or to deal correctively with unwanted ones.

Employment of human-in-the-loop (HITL)

When artificial intelligence leverages both human and machine intelligence to create machine learning models, the approach is called human-in-the-loop (HITL). A traditional HITL approach involves people in a consistent process in which they train, tune, and test an algorithm.


In more evolved HITL systems, the model itself selects the examples it is least certain about and routes them back to a human annotator for labeling, a pattern often called active learning. This is more effective than the traditional HITL model because human effort is concentrated exactly where the model needs it most. Consequently, the AI model is actively learning and continuously improving its results.
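As a concrete sketch of this evolved HITL loop, the snippet below implements uncertainty-based active learning with scikit-learn. The ask_human function is a hypothetical stand-in for whatever annotation tool or workforce actually supplies the labels; the model and batch sizes are illustrative choices, not a prescribed setup.

```python
# A minimal uncertainty-sampling HITL loop, assuming scikit-learn.
# `ask_human` is a hypothetical stand-in for a real annotation tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ask_human(samples):
    """Placeholder: route the model's most uncertain samples to annotators."""
    raise NotImplementedError("Connect your labeling tool or workforce here")

def hitl_loop(X_labeled, y_labeled, X_pool, rounds=5, batch_size=20):
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)

        # The model selects what it needs to learn next: the pool
        # samples whose top predicted probability is lowest.
        probs = model.predict_proba(X_pool)
        uncertainty = 1.0 - probs.max(axis=1)
        query = np.argsort(uncertainty)[-min(batch_size, len(X_pool)):]

        # Send those samples back to a human for labeling, then fold
        # the newly labeled data into the training set.
        y_new = ask_human(X_pool[query])
        X_labeled = np.vstack([X_labeled, X_pool[query]])
        y_labeled = np.concatenate([y_labeled, y_new])
        X_pool = np.delete(X_pool, query, axis=0)
    return model
```

The design choice worth noting is the query strategy: labeling the lowest-confidence samples first means each round of human effort targets the model's current blind spots rather than data it already handles well.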


A HITL approach is used for various AI projects, including natural language processing (NLP), computer vision, sentiment analysis, and transcription.


In all these cases, human intelligence integrated into the machine’s ability to learn and deliver is beneficial. The lack of human intervention may lead to undesirable results and failed AI systems.


For AI systems to work efficiently, you need large volumes of data, which can be expensive and even impractical to source. Integrating human knowledge and intelligence can optimise the data being leveraged, reduce costs, and increase the reliability of ML systems.

The right HITL approach

With increasingly complex requirements, the time taken to collect, label, and annotate data has increased. Pixel-level semantic segmentation was already hard to achieve several years ago, and since then technologies like 3D LiDAR and multi-sensor 3D point cloud fusion have further added to the complexity. Expertise in this type of work is required to develop data-driven technologies and deliver high-quality results.


To develop such complex algorithms faster, organisations often employ crowdsourced workforces for data annotation. However, data science teams may have limited knowledge of, and control over, the people fulfilling the service, who in turn may lack expertise in the algorithms and AI/ML systems being built. This results in more time and effort spent analysing outputs and making decisions.


While some simple tasks, such as labeling cats or cars in a still image, can be executed by inexperienced annotators, complex data labeling requires vetted experts. Crowdsourcing may also lack appropriate security or privacy standards, which can put data at risk.


Expert annotators, with their domain expertise and knowledge, provide high-quality results that further strengthen AI/ML systems. These expert teams can analyse data needs, determine delivery volumes and time requirements, and support AI teams in developing good practices and delivering consistently high-quality results.


The reality is that machines cannot teach themselves. They need humans to be an integral part of the training, learning, and sustaining process. For the foreseeable future, data solutions for AI systems will be human-powered: machine learning is already growing into newer domains, each requiring domain knowledge to build advanced, self-learning systems. This will require an army of human experts to make AI work and deliver the desired results.


Edited by Affirunisa Kankudti

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)