BART Pre-Trained Model for Indonesian Question-Answering Task

The process of obtaining information from a context tends to be time-consuming. To reduce this time, a pre-trained language model (PLM) based on the transformer architecture comes in handy. A PLM can be fine-tuned for specific tasks, one of which is the question-answering (QA) task. QA tasks have generally been addressed by fine-tuning encoder-based PLMs such as the Bidirectional Encoder Representations from Transformers (BERT), which are extractive: the answer is a span extracted from the context. However, to return an abstractive answer, a PLM capable of natural language generation (NLG), such as the Bidirectional and Auto-Regressive Transformer (BART), is needed. Based on these facts, this study aims to fine-tune an NLG PLM for the abstractive, or generative, QA task. As a result, the fine-tuned BART model achieves an F1 score of 85.84 and an exact match (EM) score of 59.42.
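As a rough illustration of the fine-tuning setup described above, the sketch below formats each question-context pair as encoder input to a BART checkpoint and trains the decoder to generate a free-form answer using the Hugging Face transformers library. The checkpoint name, hyperparameters, and dataset handling are illustrative assumptions, not the exact configuration used in this study (which targets Indonesian and would likely use an Indonesian-pretrained BART variant).

```python
# Minimal sketch of fine-tuning a BART-style model for generative QA.
# Checkpoint, hyperparameters, and dataset wiring are assumptions for illustration.
from transformers import (
    BartTokenizerFast,
    BartForConditionalGeneration,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

checkpoint = "facebook/bart-base"  # assumed; swap in an Indonesian-pretrained variant as appropriate
tokenizer = BartTokenizerFast.from_pretrained(checkpoint)
model = BartForConditionalGeneration.from_pretrained(checkpoint)

def preprocess(example):
    # Encoder input: question paired with its context.
    inputs = tokenizer(
        example["question"],
        example["context"],
        truncation=True,
        max_length=512,
    )
    # Decoder target: the free-form (abstractive) answer text.
    labels = tokenizer(text_target=example["answer"], truncation=True, max_length=64)
    inputs["labels"] = labels["input_ids"]
    return inputs

args = Seq2SeqTrainingArguments(
    output_dir="bart-generative-qa",
    learning_rate=3e-5,
    num_train_epochs=3,
    per_device_train_batch_size=8,
    predict_with_generate=True,  # generate answers during evaluation
)

# `train_dataset` / `eval_dataset` are assumed to be QA datasets already mapped
# through `preprocess` (e.g., with `datasets.Dataset.map`):
# trainer = Seq2SeqTrainer(model=model, args=args,
#                          train_dataset=train_dataset, eval_dataset=eval_dataset,
#                          tokenizer=tokenizer)
# trainer.train()
```

Unlike extractive QA, where the model predicts start and end positions of a span, this sequence-to-sequence setup lets the decoder produce answers that need not appear verbatim in the context, which is what enables abstractive answers.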