medkit.text.metrics.ner#

This package requires extra dependencies that are not installed as core dependencies of medkit. To install them, use pip install medkit-lib[metrics-ner].

Classes:

SeqEvalEvaluator([tagging_scheme, ...])

Evaluator to compute the performance of labeling tasks such as named entity recognition.

SeqEvalMetricsComputer(id_to_label[, ...])

An implementation of MetricsComputer using seqeval to compute metrics in the training of named-entity recognition components.

class SeqEvalEvaluator(tagging_scheme='bilou', return_metrics_by_label=True, average='macro', tokenizer=None, labels_remapping=None)[source]#

Evaluator to compute the performance of labeling tasks such as named entity recognition. This evaluator compares reference TextDocuments with their predicted annotations and returns a dictionary of metrics.

The evaluator converts the entities and documents to tags before computing the metric. It supports two schemes, IOB2 (a BIO scheme) and BILOU. The IOB2 scheme tags the Beginning, the Inside and the Outside text of an entity. The BILOU scheme tags the Beginning, the Inside and the Last tokens of a multi-token entity, as well as Unit-length entities.

For more information about IOB schemes, refer to the Wikipedia page.

Hint

If tokenizer is not defined, the evaluator tokenizes the text character by character. This may generate a lot of tokens for large documents and may affect execution time. You can use a fast tokenizer from Hugging Face instead, e.g. a BERT tokenizer:

>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
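
The resulting tokenizer can then be passed to the evaluator; a minimal sketch, using only the constructor parameters documented below:

>>> evaluator = SeqEvalEvaluator(tagging_scheme="iob2", tokenizer=tokenizer)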
Parameters
  • tagging_scheme (Literal['bilou', 'iob2']) – Scheme for tagging the tokens; it can be bilou or iob2

  • return_metrics_by_label (bool) – If True, return the metrics by label in the output dictionary. If False, only global metrics are returned

  • average (Literal['macro', 'weighted']) – Type of average to perform in metrics: macro (unweighted mean, default) or weighted (average weighted by support, i.e. the number of true instances per label)

  • tokenizer (Optional[Any]) – Optional Fast Tokenizer to convert text into tokens. If not provided, the text is tokenized by character.

  • labels_remapping (Optional[Dict[str, str]]) – Optional remapping of labels, useful when there is a mismatch between the predicted labels and the reference labels to evaluate against. If a label (of a reference or predicted entity) is found in this dict, the corresponding value will be used as the label instead.

Methods:

compute(documents, predicted_entities)

Compute metrics of entity matching given predictions.

compute(documents, predicted_entities)[source]#

Compute metrics of entity matching given predictions.

Parameters
  • documents (List[TextDocument]) – Text documents containing the reference entities

  • predicted_entities (List[List[Entity]]) – Lists of predicted entities, one list per document

Return type

Dict[str, float]

Returns

Dict[str, float] – A dictionary with average metrics and, if requested, per-label metrics. The metrics included are accuracy, precision, recall and F1 score.
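
As an illustration, here is a minimal sketch of an end-to-end evaluation. It assumes the medkit core API for building documents and entities (TextDocument, Entity, Span and the anns container); the exact keys of the returned dictionary may vary with the medkit version.

>>> from medkit.core.text import Entity, Span, TextDocument
>>> from medkit.text.metrics.ner import SeqEvalEvaluator
>>> # reference document with one ground-truth entity
>>> doc = TextDocument(text="The patient has asthma.")
>>> doc.anns.add(Entity(label="disease", spans=[Span(16, 22)], text="asthma"))
>>> # entities predicted by an NER component for the same document
>>> predicted = [Entity(label="disease", spans=[Span(16, 22)], text="asthma")]
>>> evaluator = SeqEvalEvaluator(tagging_scheme="iob2", return_metrics_by_label=True)
>>> metrics = evaluator.compute(documents=[doc], predicted_entities=[predicted])
>>> # metrics is a Dict[str, float] with accuracy, precision, recall and F1 score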

class SeqEvalMetricsComputer(id_to_label, tagging_scheme='bilou', return_metrics_by_label=True, average='macro')[source]#

An implementation of MetricsComputer using seqeval to compute metrics in the training of named-entity recognition components.

The metrics computer can be used with a Trainer.

id_to_label:

Mapping from integer value to label; it should be the same mapping used in preprocessing

tagging_scheme:

Scheme used for tagging the tokens; it can be bilou or iob2

return_metrics_by_label:

If True, return the metrics by label in the output dictionary. If False, only return average metrics

average:

Type of average to perform in metrics: macro (unweighted mean, default) or weighted (average weighted by support, i.e. the number of true instances per label)
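
A minimal sketch of how the metrics computer might be instantiated; the id_to_label mapping below is hypothetical and must match the one used during preprocessing, and the Trainer integration is only indicative.

>>> from medkit.text.metrics.ner import SeqEvalMetricsComputer
>>> # hypothetical mapping: one integer id per IOB2 tag, as produced by preprocessing
>>> id_to_label = {0: "O", 1: "B-disease", 2: "I-disease"}
>>> metrics_computer = SeqEvalMetricsComputer(
...     id_to_label=id_to_label,
...     tagging_scheme="iob2",
...     return_metrics_by_label=True,
...     average="macro",
... )
>>> # the computer is then passed to a medkit Trainer, which is expected to call
>>> # prepare_batch() on each batch and compute() at the end of the evaluation loop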

Methods:

compute(all_data)

Compute metrics using the tag representation collected by batches during the training/evaluation loop.

prepare_batch(model_output, input_batch)

Prepare a batch of tensors to compute the metric.

prepare_batch(model_output, input_batch)[source]#

Prepare a batch of tensors to compute the metric.

Parameters
  • model_output (BatchData) – Batch data including the logits predicted by the model

  • input_batch (BatchData) – Batch data including the reference labels

Return type

Dict[str, List[List[str]]]

Returns

Dict[str, List[List[str]]] – A dictionary with the true and predicted tag representations of a batch of data

compute(all_data)[source]#

Compute metrics using the tag representation collected by batches during the training/evaluation loop.

Parameters

all_data (Dict[str, List[Any]]) – A dictionary with the true and predicted tags collected by batches

Return type

Dict[str, float]

Returns

Dict[str, float] – A dictionary with average metrics and, if requested, per-label metrics. The metrics included are accuracy, precision, recall and F1 score.