Generic Multimodal Gradient-based Meta Learner Framework

  • Title:
    Generic Multimodal Gradient-based Meta Learner Framework
  • Author: Enamoto, Liriam M. ; Weigang, Li ; Filho, Geraldo P. Rocha ; Costa, Paulo C.
  • Subjects: Adaptation models ; Biological system modeling ; Computational modeling ; cross-modal ; data fusion ; Degradation ; few-shot learning ; Machine learning ; meta-learning ; multimodal ; Natural language processing ; Transformers
  • Is part of: 2023 26th International Conference on Information Fusion (FUSION), 2023, p.1-8
  • Description: Research in Natural Language Processing, biomedicine, and computer vision has achieved excellent machine learning results thanks to the success of Transformer-based models. However, these results depend on high-quality, large-scale labeled datasets. If either requirement is not met, the model may lack generalization ability and its performance will be unsatisfactory. To address these issues, this research proposes the Generic Multimodal Gradient-Based Meta Framework (GeMGF), which is trained from scratch to avoid language bias, learns from only a few examples, and reduces the degradation that affects models trained on finite datasets. GeMGF was evaluated on the CUB-200-2011 benchmark dataset for text and image classification tasks. The results show that GeMGF outperforms state-of-the-art models with 93.2% accuracy. GeMGF is simple, efficient, and adaptable to other data modalities and fields. (A minimal sketch of gradient-based meta-learning follows this record.)
  • Publisher: International Society of Information Fusion
  • Language: English
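
The abstract above describes a gradient-based meta-learning approach applied to multimodal (text and image) inputs. The sketch below illustrates the general idea with a minimal first-order, MAML-style inner/outer loop over a toy late-fusion classifier; the model, feature dimensions, task sampler, and hyperparameters are illustrative assumptions, not the authors' GeMGF implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionClassifier(nn.Module):
    """Toy late-fusion classifier: concatenate text and image features, then classify."""
    def __init__(self, text_dim=32, image_dim=32, n_classes=5):
        super().__init__()
        self.head = nn.Linear(text_dim + image_dim, n_classes)

    def forward(self, text_feat, image_feat):
        return self.head(torch.cat([text_feat, image_feat], dim=-1))

def sample_task(n_support=5, n_query=10, text_dim=32, image_dim=32, n_classes=5):
    """Hypothetical task sampler: random support/query episodes stand in for real few-shot tasks."""
    def episode(n):
        return (torch.randn(n, text_dim), torch.randn(n, image_dim),
                torch.randint(0, n_classes, (n,)))
    return episode(n_support), episode(n_query)

model = FusionClassifier()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.1  # assumed inner-loop step size

for _ in range(100):  # outer (meta) loop over sampled tasks
    (s_text, s_img, s_y), (q_text, q_img, q_y) = sample_task()

    # Inner loop: adapt a copy of the parameters on the support set.
    fast_weights = {name: p.clone() for name, p in model.named_parameters()}
    support_logits = torch.func.functional_call(model, fast_weights, (s_text, s_img))
    inner_loss = F.cross_entropy(support_logits, s_y)
    grads = torch.autograd.grad(inner_loss, tuple(fast_weights.values()))
    fast_weights = {name: p - inner_lr * g
                    for (name, p), g in zip(fast_weights.items(), grads)}

    # Outer step: evaluate the adapted weights on the query set and
    # update the shared initialization (first-order, since grads are detached).
    query_logits = torch.func.functional_call(model, fast_weights, (q_text, q_img))
    meta_loss = F.cross_entropy(query_logits, q_y)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()

The inner loop adapts a copy of the parameters on a small support set, and the outer step updates the shared initialization so that such few-shot adaptation transfers across tasks; passing create_graph=True to autograd.grad would turn this first-order variant into full second-order MAML.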
