Optimizing model-agnostic random subspace ensembles
Huynh-Thu, Vân Anh ; Geurts, Pierre
Machine learning, 2024-02, Vol.113 (2), p.993-1042
[Peer-reviewed journal]
New York: Springer US
Title:
Optimizing model-agnostic random subspace ensembles
Author:
Huynh-Thu, Vân Anh; Geurts, Pierre
Subjects:
Algorithms; Artificial Intelligence; Computer Science; Control; Engineering, computing & technology; Ensemble; Feature importances; Feature selection; Importance sampling; Ingénierie, informatique & technologie; Machine Learning; Mathematical models; Mechatronics; model-agnostic; Natural Language Processing (NLP); Optimization; Parameters; Random subspaces; Randomization; Regularization; Robotics; Sciences informatiques; Simulation and Modeling; Special Issue of the ECML PKDD 2023 Journal Track; Subspaces; Supervised learning
Is part of:
Machine learning, 2024-02, Vol.113 (2), p.993-1042
Notes:
scopus-id:2-s2.0-85176091212
Description:
This paper presents a model-agnostic ensemble approach for supervised learning. The proposed approach is based on a parametric version of Random Subspace (PRS), in which each base model is learned from a feature subset sampled according to a Bernoulli distribution. Parameter optimization is performed using gradient descent and is rendered tractable by an importance sampling approach that circumvents frequent re-training of the base models after each gradient descent step. The degree of randomization in the parametric Random Subspace is thus automatically tuned through the optimization of the feature selection probabilities. This is an advantage over the standard Random Subspace approach, where the degree of randomization is controlled by a hyper-parameter. Furthermore, the optimized feature selection probabilities can be interpreted as feature importance scores. The algorithm can also easily incorporate any differentiable regularization term to impose constraints on these importance scores. The authors show the good performance of the proposed approach, both in terms of prediction and feature ranking, on simulated and real-world datasets, and also show that PRS can be successfully used for the reconstruction of gene regulatory networks.
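The Bernoulli feature-subsampling step that the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the base learner here is an ordinary least-squares fit, the selection probabilities `p` are held fixed at 0.5 rather than optimized by the importance-sampled gradient descent that is the paper's contribution, and all names, sizes, and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: only the first 3 of 20 features are informative.
n, d = 300, 20
X = rng.normal(size=(n, d))
y = X[:, 0] + 2.0 * X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=n)

# Feature-selection probabilities: these are the parameters the paper
# tunes by gradient descent; here they are simply fixed (uniform sampling).
p = np.full(d, 0.5)

def fit_ensemble(X, y, p, n_models=25):
    """Train one linear base model per feature subset drawn ~ Bernoulli(p)."""
    models = []
    for _ in range(n_models):
        mask = rng.random(d) < p            # Bernoulli(p) feature subset
        if not mask.any():                  # guard: keep at least one feature
            mask[rng.integers(d)] = True
        coef, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
        models.append((coef, mask))
    return models

def predict(models, X):
    """Ensemble output = average of the base-model predictions."""
    return np.mean([X[:, mask] @ coef for coef, mask in models], axis=0)

models = fit_ensemble(X, y, p)
mse = np.mean((predict(models, X) - y) ** 2)
```

In the paper's full method, each entry of `p` would be updated by gradient descent on the ensemble loss, with importance sampling reweighting the already-trained base models so they need not be refit after every step; the optimized `p` then doubles as a feature-importance score.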
Publisher:
New York: Springer US
Language:
English