Attention in Natural Language Processing
Galassi, Andrea; Lippi, Marco; Torroni, Paolo
IEEE Transactions on Neural Networks and Learning Systems, 2021-10, Vol. 32 (10), p. 4291-4308
Piscataway: IEEE
Full text available
Title: Attention in Natural Language Processing
Author: Galassi, Andrea; Lippi, Marco; Torroni, Paolo
Subjects: Computational modeling; Computer architecture; Distribution functions; Domains; Language; Natural language processing; Natural language processing (NLP); neural attention; Neural networks; Representations; review; survey; Task analysis; Taxonomy; Visualization
Is part of: IEEE Transactions on Neural Networks and Learning Systems, 2021-10, Vol. 32 (10), p. 4291-4308
Description:
Attention is an increasingly popular mechanism used in a wide range of neural architectures. The mechanism itself has been realized in a variety of formats. However, because of the fast-paced advances in this domain, a systematic overview of attention is still missing. In this article, we define a unified model for attention architectures in natural language processing, with a focus on those designed to work with vector representations of the textual data. We propose a taxonomy of attention models according to four dimensions: the representation of the input, the compatibility function, the distribution function, and the multiplicity of the input and/or output. We present examples of how prior information can be exploited in attention models and discuss ongoing research efforts and open challenges in the area, providing the first extensive categorization of the vast body of literature in this exciting domain.
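
To make the taxonomy in the abstract concrete, below is a minimal NumPy sketch of the core attention computation the survey unifies: a compatibility function scores a query against key vectors, a distribution function turns the scores into weights, and the output is the weighted sum of value vectors. The scaled dot-product compatibility and softmax distribution used here are illustrative assumptions (one point in the design space the paper categorizes, not the paper's single prescribed method), and the function names are hypothetical.

import numpy as np

def softmax(x):
    # Distribution function: map compatibility scores to weights summing to 1.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention(query, keys, values):
    """Generic single-query attention (illustrative sketch).

    query:  (d,)    vector we attend with
    keys:   (n, d)  one vector per input element (the input representation)
    values: (n, m)  vectors combined into the output
    """
    # Compatibility function (assumed here): scaled dot product of query and keys.
    scores = keys @ query / np.sqrt(query.shape[0])
    # Distribution function (assumed here): softmax over the n input elements.
    weights = softmax(scores)
    # Output: convex combination of the values.
    return weights @ values, weights

# Toy usage: 4 input elements with 8-dimensional keys and values.
rng = np.random.default_rng(0)
q = rng.normal(size=8)
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = attention(q, K, V)
print(w.round(3), out.shape)  # weights sum to 1; out has shape (8,)

Swapping the score computation (e.g., additive compatibility) or the normalization (e.g., sparsemax) yields other cells of the paper's taxonomy, as does attending with multiple queries at once (the multiplicity dimension).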
Publisher: Piscataway: IEEE
Language: English