Slovenian RoBERTa contextual embeddings model: SloBERTa 1.0

The monolingual Slovene RoBERTa (A Robustly Optimized BERT Pretraining Approach) model is a state-of-the-art model that represents words/tokens as contextually dependent word embeddings, used for various NLP tasks. Word embeddings can be extracted for every word occurrence and then used to train a model for an end task, but typically the whole RoBERTa model is fine-tuned end-to-end.
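
As an illustration only (not part of the released resource), here is a minimal sketch of extracting contextual embeddings with the transformers library; the local directory name "./sloberta" and the CamemBERT-style model classes are assumptions, based on the model's relation to CamemBERT described below:

# Minimal sketch: extract contextual token embeddings with SloBERTa.
# Assumes the released model files are unpacked in the local directory "./sloberta".
import torch
from transformers import CamembertTokenizer, CamembertModel

tokenizer = CamembertTokenizer.from_pretrained("./sloberta")
model = CamembertModel.from_pretrained("./sloberta")

sentence = "Ljubljana je glavno mesto Slovenije."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding vector per subword token
# (shape: 1 x sequence_length x hidden_size).
token_embeddings = outputs.last_hidden_state
print(token_embeddings.shape)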

The SloBERTa model is closely related to the French CamemBERT model (https://camembert-model.fr/). The corpora used for training the model contain 3.47 billion tokens in total, and the subword vocabulary contains 32,000 tokens. The scripts and programs used for data preparation and for training the model are available at https://github.com/clarinsi/Slovene-BERT-Tool.

The model released here is a PyTorch neural network model, intended for use with the Hugging Face transformers library (https://github.com/huggingface/transformers).
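
For the typical end-to-end fine-tuning mentioned above, a hedged sketch follows; the two-class task, the example sentences, and the directory name "./sloberta" are illustrative assumptions, not part of the release:

# Sketch of end-to-end fine-tuning on a hypothetical two-class task.
import torch
from transformers import CamembertTokenizer, CamembertForSequenceClassification

tokenizer = CamembertTokenizer.from_pretrained("./sloberta")
model = CamembertForSequenceClassification.from_pretrained("./sloberta", num_labels=2)

texts = ["Film je odličen.", "Film je dolgočasen."]  # assumed toy examples
labels = torch.tensor([1, 0])

inputs = tokenizer(texts, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**inputs, labels=labels)  # loss is computed internally from the labels
outputs.loss.backward()
optimizer.step()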

Identifier
PID http://hdl.handle.net/11356/1387
Related Identifier http://hdl.handle.net/11356/1397
Related Identifier https://rsdo.slovenscina.eu/en/semantic-resources-and-technologies
Metadata Access http://www.clarin.si/repository/oai/request?verb=GetRecord&metadataPrefix=oai_dc&identifier=oai:www.clarin.si:11356/1387
Provenance
Creator Ulčar, Matej; Robnik-Šikonja, Marko
Publisher Faculty of Computer and Information Science, University of Ljubljana
Publication Year 2020
Funding Reference info:eu-repo/grantAgreement/EC/H2020/825153
Rights The MIT License (MIT); https://opensource.org/licenses/mit-license.php; PUB
OpenAccess true
Contact info(at)clarin.si
Representation
Language Slovenian; Slovene
Resource Type toolService
Format text/plain; charset=utf-8; application/octet-stream; text/plain; downloadable_files_count: 4
Discipline Linguistics