yaset.nn

yaset.nn.cnn

class yaset.nn.cnn.CharCNN(char_embedding: torch.nn.modules.sparse.Embedding = None, filters: List[Tuple[int, int]] = None)

Bases: torch.nn.modules.module.Module

forward(input_matrix: torch.FloatTensor = None)
static init_weights(module)
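
Example (a minimal sketch; the filter semantics and the character-index input are assumptions, not confirmed by yaset's documentation):

    import torch
    from torch import nn
    from yaset.nn.cnn import CharCNN

    # Hypothetical character vocabulary of 64 symbols, embedded in 16 dimensions.
    char_embedding = nn.Embedding(num_embeddings=64, embedding_dim=16)

    # Assumption: each (kernel_size, num_filters) pair defines one convolution.
    char_cnn = CharCNN(char_embedding=char_embedding, filters=[(3, 32), (5, 32)])

    # Assumption: input_matrix holds character indices of shape
    # (batch_size, max_word_length) that are looked up in char_embedding.
    char_indices = torch.randint(0, 64, (8, 12))
    word_features = char_cnn(char_indices)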

yaset.nn.crf

Conditional random field

class yaset.nn.crf.ConditionalRandomField(num_tags: int, constraints: List[Tuple[int, int]] = None, include_start_end_transitions: bool = True)

Bases: torch.nn.modules.module.Module

This module uses the “forward-backward” algorithm to compute the log-likelihood of its inputs assuming a conditional random field model.

See, e.g. http://www.cs.columbia.edu/~mcollins/fb.pdf

num_tags : int, required
The number of tags.
constraints : List[Tuple[int, int]], optional (default: None)
An optional list of allowed transitions (from_tag_id, to_tag_id). These are applied to viterbi_tags() but do not affect forward(). These should be derived from allowed_transitions so that the start and end transitions are handled correctly for your tag type.
include_start_end_transitions : bool, optional (default: True)
Whether to include the start and end transition parameters.
forward(inputs: torch.Tensor, tags: torch.Tensor, mask: torch.ByteTensor = None) → torch.Tensor

Computes the log likelihood.

reset_parameters()
viterbi_tags(logits: torch.Tensor, mask: torch.Tensor) → List[Tuple[List[int], float]]

Uses the Viterbi algorithm to find the most likely tags for the given inputs. If constraints are applied, all other transitions are disallowed.
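
Example (a minimal sketch, assuming logits of shape (batch_size, sequence_length, num_tags) and a byte mask over the same leading dimensions):

    import torch
    from yaset.nn.crf import ConditionalRandomField, allowed_transitions

    labels = {0: "O", 1: "B-PER", 2: "I-PER", 3: "B-LOC", 4: "I-LOC"}
    constraints = allowed_transitions("BIO", labels)
    crf = ConditionalRandomField(num_tags=len(labels), constraints=constraints)

    batch_size, seq_len = 2, 7
    logits = torch.randn(batch_size, seq_len, len(labels))
    tags = torch.randint(0, len(labels), (batch_size, seq_len))
    mask = torch.ones(batch_size, seq_len, dtype=torch.uint8)

    log_likelihood = crf(logits, tags, mask)     # training: maximize this
    best_paths = crf.viterbi_tags(logits, mask)  # decoding: [(tag_ids, score), ...]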

yaset.nn.crf.allowed_transitions(constraint_type: str, labels: Dict[int, str]) → List[Tuple[int, int]]

Given labels and a constraint type, returns the allowed transitions. It will additionally include transitions for the start and end states, which are used by the conditional random field.

constraint_type : str, required
Indicates which constraint to apply. Current choices are “BIO”, “IOB1”, “BIOUL”, and “BMES”.
labels : Dict[int, str], required
A mapping {label_id -> label}. Most commonly this would be the value from Vocabulary.get_index_to_token_vocabulary().
List[Tuple[int, int]]
The allowed transitions (from_label_id, to_label_id).
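
Example (under BIO, O -> I-PER is excluded while B-PER -> I-PER is kept; the start/end index convention noted below is an assumption):

    from yaset.nn.crf import allowed_transitions

    labels = {0: "O", 1: "B-PER", 2: "I-PER"}
    transitions = allowed_transitions("BIO", labels)
    # Contains pairs such as (1, 2) for B-PER -> I-PER, but not (0, 2)
    # for O -> I-PER; start/end transitions use indices past the label ids.
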
yaset.nn.crf.is_transition_allowed(constraint_type: str, from_tag: str, from_entity: str, to_tag: str, to_entity: str)

Given a constraint type and strings from_tag and to_tag that represent the origin and destination of the transition, return whether the transition is allowed under the given constraint type.

constraint_type : str, required
Indicates which constraint to apply. Current choices are “BIO”, “IOB1”, “BIOUL”, and “BMES”.
from_tag : str, required
The tag that the transition originates from. For example, if the label is I-PER, the from_tag is I.
from_entity : str, required
The entity corresponding to the from_tag. For example, if the label is I-PER, the from_entity is PER.
to_tag : str, required
The tag that the transition leads to. For example, if the label is I-PER, the to_tag is I.
to_entity : str, required
The entity corresponding to the to_tag. For example, if the label is I-PER, the to_entity is PER.
bool
Whether the transition is allowed under the given constraint_type.
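
Example (assuming the label "O" carries an empty entity string):

    from yaset.nn.crf import is_transition_allowed

    is_transition_allowed("BIO", "B", "PER", "I", "PER")  # expected: True
    is_transition_allowed("BIO", "O", "", "I", "PER")     # expected: False
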
yaset.nn.crf.logsumexp(tensor: torch.Tensor, dim: int = -1, keepdim: bool = False) → torch.Tensor

A numerically stable computation of logsumexp. This is mathematically equivalent to tensor.exp().sum(dim, keepdim=keepdim).log(). This function is typically used for summing log probabilities.

tensor : torch.FloatTensor, required
A tensor of arbitrary size.
dim : int, optional (default = -1)
The dimension of the tensor to apply the logsumexp to.
keepdim : bool, optional (default = False)
Whether to retain a dimension of size one at the dimension we reduce over.
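
Example (the stability matters for large log-probabilities: the naive formula overflows where the shifted computation does not):

    import torch
    from yaset.nn.crf import logsumexp

    x = torch.tensor([1000.0, 1000.0])
    naive = x.exp().sum(dim=-1).log()  # inf: exp(1000) overflows
    stable = logsumexp(x, dim=-1)      # 1000.0 + log(2), about 1000.6931
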
yaset.nn.crf.viterbi_decode(tag_sequence: torch.Tensor, transition_matrix: torch.Tensor, tag_observations: Optional[List[int]] = None)

Perform Viterbi decoding in log space over a sequence given a transition matrix specifying pairwise (transition) potentials between tags and a matrix of shape (sequence_length, num_tags) specifying unary potentials for possible tags per timestep.

tag_sequence : torch.Tensor, required
A tensor of shape (sequence_length, num_tags) representing scores for a set of tags over a given sequence.
transition_matrix : torch.Tensor, required
A tensor of shape (num_tags, num_tags) representing the binary potentials for transitioning between a given pair of tags.
tag_observations : Optional[List[int]], optional, (default = None)
A list of length sequence_length containing the class ids of observed elements in the sequence, with unobserved elements being set to -1. Note that it is possible to provide evidence which results in degenerate labelings if the sequences of tags you provide as evidence cannot transition between each other, or those transitions are extremely unlikely. In this situation we log a warning, but the responsibility for providing self-consistent evidence ultimately lies with the user.
viterbi_path : List[int]
The tag indices of the maximum likelihood tag sequence.
viterbi_score : torch.Tensor
The score of the Viterbi path.
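
Example (a minimal sketch with random potentials, including one pinned observation):

    import torch
    from yaset.nn.crf import viterbi_decode

    seq_len, num_tags = 4, 3
    tag_sequence = torch.randn(seq_len, num_tags)        # unary potentials
    transition_matrix = torch.randn(num_tags, num_tags)  # pairwise potentials

    path, score = viterbi_decode(tag_sequence, transition_matrix)

    # Pin timestep 0 to tag 2; -1 marks unobserved positions.
    path, score = viterbi_decode(
        tag_sequence, transition_matrix, tag_observations=[2, -1, -1, -1]
    )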

yaset.nn.embedding

class yaset.nn.embedding.BertEmbeddings(model_config_file: str = None, model_file: str = None, model_type: str = None, do_lower_case: bool = None, vocab_dir: str = None, fine_tune: bool = False, only_final_layer: bool = False)

Bases: torch.nn.modules.module.Module

compute_embeddings(batch, cuda)
forward(batch, cuda)
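
Example (a construction sketch; every path and the model_type value are placeholders for a locally stored BERT checkpoint, and the batch format expected by forward() is internal to yaset):

    from yaset.nn.embedding import BertEmbeddings

    # Placeholder values throughout; substitute your local checkpoint files.
    bert = BertEmbeddings(
        model_config_file="/path/to/bert_config.json",
        model_file="/path/to/pytorch_model.bin",
        model_type="bert-base-cased",
        do_lower_case=False,
        vocab_dir="/path/to/vocab",
        fine_tune=False,
        only_final_layer=True,
    )
    # embeddings = bert(batch, cuda=False)  # batch comes from yaset's data pipeline
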
class yaset.nn.embedding.Embedder(embeddings_options: dict = None, pretrained_matrix: numpy.ndarray = None, pretrained_matrix_size: (int, int) = None, mappings: dict = None, embedding_root_dir: str = None)

Bases: torch.nn.modules.module.Module

forward(batch, cuda)

yaset.nn.ensemble

yaset.nn.lstm

class yaset.nn.lstm.LSTMAugmented(lstm_hidden_size: int = None, input_dropout_rate: float = None, input_size: int = None, use_highway: bool = False)

Bases: torch.nn.modules.module.Module

forward(batch_packed)
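
Example (a sketch assuming forward() expects a PackedSequence, as suggested by the batch_packed parameter name, and batch-first input):

    import torch
    from torch.nn.utils.rnn import pack_padded_sequence
    from yaset.nn.lstm import LSTMAugmented

    lstm = LSTMAugmented(
        lstm_hidden_size=128,
        input_dropout_rate=0.5,
        input_size=100,
        use_highway=True,
    )

    # Two sequences of lengths 7 and 5, padded to 7 and packed.
    inputs = torch.randn(2, 7, 100)  # (batch, seq_len, input_size)
    packed = pack_padded_sequence(inputs, torch.tensor([7, 5]), batch_first=True)
    output = lstm(packed)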

yaset.nn.lstmcrf

class yaset.nn.lstmcrf.AugmentedLSTMCRF(constraints: list = None, embedder: yaset.nn.embedding.Embedder = None, ffnn_hidden_layer_use: bool = None, ffnn_hidden_layer_size: int = None, ffnn_activation_function: str = None, ffnn_input_dropout_rate: float = None, embedding_input_size: int = None, lstm_hidden_size: int = None, lstm_input_dropout_rate: float = None, lstm_layer_dropout_rate: int = None, mappings: dict = None, lstm_nb_layers: int = None, num_labels: int = None, lstm_use_highway: bool = False)

Bases: torch.nn.modules.module.Module

create_final_layer()
create_lstm_stack()
forward(*args, **kwargs)
forward_ensemble_lstm_attention(batch, cuda)
get_labels(batch, cuda)
get_loss(batch, cuda: bool = False)
get_loss_ensemble(batch, cuda)
infer_labels(batch, cuda)
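
Example (a constructor wiring sketch; embedder, mappings, and batch stand in for objects produced by yaset's data pipeline and are not constructed here, and the activation-function string is an assumption):

    from yaset.nn.crf import allowed_transitions
    from yaset.nn.lstmcrf import AugmentedLSTMCRF

    labels = {0: "O", 1: "B-PER", 2: "I-PER"}

    model = AugmentedLSTMCRF(
        constraints=allowed_transitions("BIO", labels),
        embedder=embedder,  # a yaset.nn.embedding.Embedder (placeholder)
        ffnn_hidden_layer_use=True,
        ffnn_hidden_layer_size=128,
        ffnn_activation_function="relu",
        ffnn_input_dropout_rate=0.5,
        embedding_input_size=300,
        lstm_hidden_size=256,
        lstm_input_dropout_rate=0.5,
        lstm_layer_dropout_rate=0.5,
        mappings=mappings,  # label/character mappings (placeholder)
        lstm_nb_layers=2,
        num_labels=len(labels),
        lstm_use_highway=True,
    )

    loss = model.get_loss(batch, cuda=False)  # batch from yaset's loaders (placeholder)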