Alineació de paraules i mecanismes d'atenció en sistemes de traducció automàtica neuronal

Safont Gascón, Pol

2022

Full text available

  • Title:
    Alineació de paraules i mecanismes d'atenció en sistemes de traducció automàtica neuronal
  • Author: Safont Gascón, Pol
  • Subjects: Bachelor's theses ; Computer software ; Machine learning ; Machine translating ; Natural language processing (Computer science) ; Neural networks (Computer science)
  • Notes: Bachelor's Final Project (TFG) - Computer Engineering
    http://hdl.handle.net/2445/187820
  • Description: Bachelor's Final Project in Computer Engineering (Enginyeria Informàtica), Facultat de Matemàtiques, Universitat de Barcelona, Year: 2022, Advisor: Daniel Ortiz Martínez [en] Deep Neural Networks have become the state of the art in many complex computational tasks. While they achieve great improvements over several benchmark tasks year after year, they seem to operate as black boxes, making it hard for both data scientists and end users to assess their inner decision mechanisms and trust their results. Although statistical and interpretable methods are widely used to analyze them, these methods do not fully capture the networks' internal mechanisms and are prone to misleading results, creating a need for better tools. As a result, self-explaining methods embedded inside the architecture of the neural networks have become a possible alternative, with attention mechanisms as one of the main new techniques. The project's main focus is the word alignment task: finding the most relevant translation relationships between source and target words in a pair of parallel sentences in different languages. This is a complex task in the Natural Language Processing and machine translation fields, and we analyze the use of attention mechanisms embedded in different encoder-decoder neural networks in order to extract word-to-word alignments between source and target translations as a byproduct of the translation task. In the first part we review the background of the machine translation field: the main traditional statistical methods, the neural machine translation approach to the sequence-to-sequence problem, and finally the word alignment task and the attention mechanism. In the second part, we implement a machine translation deep neural network model, a recurrent neural network with an encoder-decoder architecture with attention, and we propose an alignment generation mechanism that uses the attention layer to extract and predict source-to-target word-to-word alignments (a toy sketch of this attention-based extraction idea appears after this record). Finally, we train the networks on an English-French bilingual parallel sentence corpus, analyze the experimental results of the model for the translation and word alignment tasks using a variety of metrics, and suggest improvements and alternatives.
  • Date of creation/publication: 2022
  • Language: Catalan
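
The attention-based alignment extraction described in the abstract can be illustrated with a minimal sketch. Assumptions (not taken from the thesis): the attention weights for a translated sentence pair are already available as a (target length x source length) matrix, and each target word is linked to the single most-attended source word.

    import numpy as np

    def alignments_from_attention(attn, src_tokens, tgt_tokens):
        """Return (source_index, target_index) links via per-row argmax."""
        assert attn.shape == (len(tgt_tokens), len(src_tokens))
        links = []
        for t, row in enumerate(attn):
            s = int(np.argmax(row))  # source position with the highest attention weight
            links.append((s, t))
        return links

    # Toy English-French example; the weights below are illustrative only.
    src = ["the", "black", "cat"]
    tgt = ["le", "chat", "noir"]
    attn = np.array([
        [0.80, 0.10, 0.10],  # "le"   attends mostly to "the"
        [0.05, 0.15, 0.80],  # "chat" attends mostly to "cat"
        [0.10, 0.75, 0.15],  # "noir" attends mostly to "black"
    ])

    for s, t in alignments_from_attention(attn, src, tgt):
        print(f"{src[s]} -> {tgt[t]}")

In a real encoder-decoder model the attention matrix would come from the decoder's attention layer at inference time; richer linking rules (thresholding, or symmetrization with a reverse-direction model) are common alternatives to the simple argmax used here.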
