Annotation & Disagreement


Tools

  1. Snorkel - using weak supervision to create less noisy labelled datasets

  2. Snorkel MeTaL - weak supervision for multi-task learning (see the post-hoc probability-combination sketch after this list).

    1. Yes, the Snorkel project has included work before on hierarchical labeling scenarios. The main papers detailing our results include the DEEM workshop paper you referenced (https://dl.acm.org/doi/abs/10.1145/3209889.3209898) and the more complete paper presented at AAAI (https://arxiv.org/abs/1810.02840). Before the Snorkel and Snorkel MeTaL projects were merged in Snorkel v0.9, the Snorkel MeTaL project included an interface for explicitly specifying hierarchies between tasks, which was utilized by the label model and could be used to automatically compile a multi-task end model as well (demo here: https://github.com/HazyResearch/metal/blob/master/tutorials/Multitask.ipynb). That interface is not currently available in Snorkel v0.9 (no fundamental blockers; it just hasn't been ported over yet).

    2. There are, however, still a number of ways to model such situations. One way is to treat each node in the hierarchy as a separate task and combine their probabilities post-hoc (e.g., P(credit-request) = P(billing) * P(credit-request | billing)). Another is to treat them as separate tasks and use a multi-task end model to implicitly learn how the predictions of some tasks should affect the predictions of others (e.g., the end model we use in the AAAI paper). A third option is to create a single task with all the leaf categories and modify the output space of the LFs you were considering for the higher nodes (the deeper your hierarchy is or the larger the number of classes, the less appealing this is relative to approaches 1 and 2).

    1. Figure Eight - pricing, definite guide

  3. Mechanical Turk calculator, Mturk alternatives (Workforce/OneSpace, Jobby, ShortTask, Samasource)

  4. Doccano - a Prodigy open-source alternative, but with user management & statistics out of the box

  5. Loopr.ai - an AI-powered semi-automated and automated annotation process for high-quality data: object detection, analytics, NLP, active learning.

  6. Annotating Twitter sentiment using humans (3 classes, 55% accuracy using SVMs) - they talk about inter-agreement etc., and their dataset is partially publicly available. Their annotation setup:

    1. Annotators must pass an English exam

    2. They get control questions to establish their reliability

    3. They get a few sentences over and over again to establish intra-annotator (self-)agreement

    4. Two or more people get overlapping sentences to establish inter-annotator (dis)agreement

    5. 5 judges for each sentence (makes 4 useless)

    6. They don't know each other

    7. Simple rules to follow

    8. Random selection of sentences

    9. Even classes

    10. No experts

    11. Measuring reliability with kappa / the other kappa
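The post-hoc combination described in the Snorkel MeTaL answer above can be sketched in a few lines. This is a minimal illustration only, not Snorkel code; the task names (billing, credit-request, shipping) and probabilities are hypothetical.

```python
# Post-hoc combination of hierarchical task probabilities:
# P(leaf) = P(parent) * P(leaf | parent), multiplied along the path from the root.

# Hypothetical outputs of two separate classifiers (names and numbers invented):
top_level = {"billing": 0.7, "shipping": 0.3}              # P(parent)
billing_children = {"credit-request": 0.6, "refund": 0.4}  # P(child | billing)

leaf_probs = {child: top_level["billing"] * p for child, p in billing_children.items()}
leaf_probs["shipping"] = top_level["shipping"]             # a parent with no children stays as-is

print(leaf_probs)  # {'credit-request': 0.42, 'refund': 0.28, 'shipping': 0.3}
```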

Ideas:

  1. Active learning for a group (or a single) of annotators: we have to wait for all annotations of each big batch to finish in order to retrain the model.

  2. Annotate a small set manually, then label the rest automatically using KNN (see the sketch after this list).

  3. Find the nearest neighbors for our optimal set of keywords per category.

  4. For a group of keywords, find their KNN neighbors in word2vec space; alternatively, find k clusters in word2vec space that contain those keywords. For a new word / mean sentence vector, find the minimal distance to a category's cluster (with either approach) and use that as the new annotation.
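A minimal sketch of ideas 2 and 4 under stated assumptions: a small human-annotated seed set is embedded (e.g., averaged word2vec or sentence vectors), and labels are propagated to the rest of the corpus with KNN, keeping only confident pseudo-labels. The embeddings below are random placeholders standing in for real vectors.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Placeholder embeddings standing in for w2v / mean sentence vectors.
X_seed = rng.normal(size=(30, 50))     # small, human-annotated seed set
y_seed = rng.integers(0, 3, size=30)   # labels assigned by the annotators
X_rest = rng.normal(size=(500, 50))    # the still-unlabelled corpus

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_seed, y_seed)

proba = knn.predict_proba(X_rest)
pseudo_labels = knn.predict(X_rest)
confident = proba.max(axis=1) >= 0.8   # keep only high-confidence pseudo-labels

print(f"auto-labelled {confident.sum()} of {len(X_rest)} items")
```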

Myths

    1. Myth One: One Truth - most data collection efforts assume that there is one correct interpretation for every input example.

    2. Myth Two: Disagreement Is Bad - to increase the quality of annotation data, disagreement among the annotators should be avoided or reduced.

    3. Myth Three: Detailed Guidelines Help - when specific cases continuously cause disagreement, more instructions are added to limit interpretations.

    4. Myth Four: One Is Enough - most annotated examples are evaluated by one person.

    5. Myth Five: Experts Are Better - human annotators with domain knowledge provide better annotated data.

    6. Myth Six: All Examples Are Created Equal - the mathematics of using ground truth treats every example the same; either you match the correct result or not.

    7. Myth Seven: Once Done, Forever Valid - once human-annotated data is collected for a task, it is used over and over with no update; new annotated data is not aligned with previous data.

Crowd Sourcing

  • Conclusions:

    • Experts perform about the same as the crowd

    • and the crowd costs a lot less $$$.

Disagreement

Inter agreement

  1. Cohen's kappa (two raters), but you can use it for a group by calculating the agreement for each pair.

The kappa statistic ranges from -1 to 1 (values at or below 0 mean no agreement beyond chance); a common interpretation is:

  • ≤ 0 = agreement equivalent to chance.

  • 0.01 – 0.20 = slight agreement.

  • 0.21 – 0.40 = fair agreement.

  • 0.41 – 0.60 = moderate agreement.

  • 0.61 – 0.80 = substantial agreement.

  • 0.81 – 0.99 = near perfect agreement.

  • 1 = perfect agreement.
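A minimal sketch of the pairwise usage described above, assuming scikit-learn is available; the three annotators and their labels are toy data.

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Toy labels from three annotators on the same 8 items.
ratings = {
    "ann_a": ["pos", "neg", "pos", "pos", "neg", "neu", "pos", "neg"],
    "ann_b": ["pos", "neg", "neu", "pos", "neg", "neu", "pos", "pos"],
    "ann_c": ["pos", "pos", "pos", "pos", "neg", "neu", "neg", "neg"],
}

# Cohen's kappa is defined for two raters; for a group, compute it for every pair.
for a, b in combinations(ratings, 2):
    print(a, b, round(cohen_kappa_score(ratings[a], ratings[b]), 3))
```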

  2. Fleiss' kappa, for 3 raters and above.

Fleiss' kappa is at most 1, where:

  • 0 is no agreement beyond what you would expect by chance (negative values indicate systematic disagreement),

  • 1 is perfect agreement.

  • Fleiss' kappa is an extension of Cohen's kappa for three raters or more. In addition, the assumption with Cohen's kappa is that your raters are deliberately chosen and fixed. With Fleiss' kappa, the assumption is that your raters were chosen at random from a larger population.

  • Krippendorff's alpha is useful when you have multiple raters and multiple possible ratings.
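A minimal sketch assuming the statsmodels package: rows are items, columns are raters, and aggregate_raters converts the raw labels into the per-category count table that fleiss_kappa expects. The labels are toy data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy data: 6 items, each rated by 4 raters; categories encoded as 0/1/2.
labels = np.array([
    [0, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 1, 1],
    [2, 2, 2, 2],
    [0, 0, 1, 2],
    [0, 0, 0, 0],
])

counts, _ = aggregate_raters(labels)  # items x categories count table
print(round(fleiss_kappa(counts, method="fleiss"), 3))
```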

  3. Krippendorff's alpha

  • Can handle various sample sizes, categories, and numbers of raters.

  • Ignores missing data entirely.

  • Applies to any measurement level (i.e., nominal, ordinal, interval, ratio).

  • Ranges up to 1, where 1 is perfect agreement, 0 is the agreement expected by chance, and negative values indicate systematic disagreement. Krippendorff suggests: “[I]t is customary to require α ≥ .800. Where tentative conclusions are still acceptable, α ≥ .667 is the lowest conceivable limit (2004, p. 241).”
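A minimal sketch assuming the third-party krippendorff package (pip install krippendorff): it takes a raters-by-items matrix, treats NaN as missing ratings, and supports nominal/ordinal/interval/ratio measurement levels.

```python
import numpy as np
import krippendorff  # third-party package: pip install krippendorff

# Rows = raters, columns = items; np.nan marks a missing rating.
reliability_data = np.array([
    [1, 2, 3, 3, 2, 1, 4, 1, 2, np.nan],
    [1, 2, 3, 3, 2, 2, 4, 1, 2, 5],
    [np.nan, 3, 3, 3, 2, 3, 4, 2, 2, 5],
])

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(round(alpha, 3))
```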

  4. MACE - the new kid on the block. It learns in an unsupervised fashion to

  1. a) identify which annotators are trustworthy, and

  2. b) predict the correct underlying labels. It matches the performance of more complex state-of-the-art systems and performs well even under adversarial conditions.

When evaluating redundant annotations (like those from Amazon's Mechanical Turk), we usually want to:

  1. aggregate annotations to recover the most likely answer

  2. find out which annotators are trustworthy

  3. evaluate item and task difficulty

MACE solves all of these problems by learning competence estimates for each annotator and computing the most likely answer based on those competences.
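MACE itself ships as its own tool, but the core idea - jointly estimating annotator competence and the most likely label - can be illustrated with a simple iterative scheme. This is a toy sketch of competence-weighted aggregation, not the actual MACE model; the annotations and annotator names are invented.

```python
from collections import defaultdict

# Toy redundant annotations: item -> {annotator: label}.
annotations = {
    "s1": {"a1": "pos", "a2": "pos", "a3": "neg"},
    "s2": {"a1": "neg", "a2": "neg", "a3": "neg"},
    "s3": {"a1": "pos", "a2": "neg", "a3": "neg"},
    "s4": {"a1": "pos", "a2": "pos", "a3": "pos"},
}
label_set = sorted({lab for votes in annotations.values() for lab in votes.values()})
competence = {a: 1.0 for votes in annotations.values() for a in votes}  # trust everyone equally at first

for _ in range(10):
    # 1) competence-weighted vote per item.
    consensus = {}
    for item, votes in annotations.items():
        score = {lab: sum(competence[a] for a, v in votes.items() if v == lab) for lab in label_set}
        consensus[item] = max(score, key=score.get)
    # 2) re-estimate competence as how often each annotator matches the consensus.
    agree, total = defaultdict(float), defaultdict(float)
    for item, votes in annotations.items():
        for a, v in votes.items():
            total[a] += 1.0
            agree[a] += float(v == consensus[item])
    competence = {a: agree[a] / total[a] for a in total}

print(consensus)   # aggregated "most likely" labels per item
print(competence)  # per-annotator trust estimates
```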

Calculating agreement

  1. Compare against researcher-ground-truth

  2. Self-agreement

  3. Inter-agreement

Troubleshooting agreement metrics

  • Imbalanced data sets, i.e., why is reliability so low when the percentage of agreement is high?

  • Interpretation of kappa values

  • Interpreting agreement

Machine Vision annotation

  • CVAT

More agreement & kappa resources:

  • The best tutorial on agreements: cohen, david, kappa, krip, etc.

  • Why Cohen's kappa should be avoided as a performance measure in classification, and why it should be used as a measure of classification

  • Kappa in plain English

  • Kappa (redundant, % above chance; should not be used due to other reasons researched here)

  • Kappa and the relation with accuracy; accuracy, precision, kappa

  • Multilabel using kappa; supposedly multi-label

  • Multi-annotator with kappa (which isn't) - is this okay? MACE does exactly that: it tries to find out which annotators are more trustworthy and upweighs their answers.

  • Kendall's Tau - used when you have ranked data, like two people ordering 10 candidates from most preferred to least preferred

  • Fleiss kappa example; a GitHub tool to compute Fleiss kappa

  • GWET AC1 (paper) - as an alternative to kappa, and why

  • Website: Krippendorff vs. Fleiss calculator

More annotation tools & further reading:

  • Brat NLP annotation tool

  • Prodigy by spaCy - seed-small-sample / many-sample tutorial on YouTube by Ines; how to use Prodigy, tutorial on Medium plus notebook code inside

  • Lighttag - has some cool annotation metrics/tests

  • Label studio

  • Assessing annotator disagreement; a great Python package for measuring disagreement on GitHub

  • Reliability is key, and not just Mechanical Turk

  • 7 myths about annotation

  • Exploiting disagreement

  • Vader annotation