
Semantic Information G Theory and Logical Bayesian Inference for Machine Learning

Lu, Chenguang

Information (Basel), 2019-08, Vol.10 (8), p.261 [Peer-reviewed journal]

Basel: MDPI AG


  • Subjects: Algorithms ; Bayesian analysis ; Bayesian inference ; Classification ; confirmation measure ; Data compression ; Fuzzy sets ; Hypotheses ; Information theory ; Iterative algorithms ; Iterative methods ; Labels ; Machine learning ; Matching ; maximum mutual information classifications ; mixture models ; Multilabel learning ; Neural networks ; Optimization ; Probabilistic models ; Probability ; Random variables ; semantic information theory ; Semantics ; Statistical inference ; Statistical methods ; truth function
  • Description: An important problem in machine learning is that, when more than two labels are used, it is very difficult to construct and optimize a group of learning functions that remain useful when the prior distribution of instances changes. To resolve this problem, semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms are combined to form a systematic solution. A semantic channel in G theory consists of a group of truth functions or membership functions. Compared with the likelihood functions, Bayesian posteriors, and logistic functions typically used in popular methods, membership functions are more convenient to use, providing learning functions that do not suffer from the above problem. In LBI, every label is learned independently. For multilabel learning, we can directly obtain a group of optimized membership functions from a sufficiently large labeled sample, without preparing different samples for different labels. Furthermore, a group of Channel Matching (CM) algorithms is developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions in a two-dimensional feature space, only 2–3 iterations are required for the mutual information between the three classes and three labels to surpass 99% of the MMI for most initial partitions (an illustrative sketch of this mutual-information criterion follows the record below). For mixture models, the Expectation-Maximization (EM) algorithm is improved to form the CM-EM algorithm, which can outperform the EM algorithm when the mixture ratios are imbalanced or when local convergence exists. The CM iteration algorithm needs to be combined with neural networks for MMI classification in high-dimensional feature spaces. LBI needs further investigation toward the unification of statistics and logic.
  • Language: English
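
The abstract's 99%-of-MMI figure refers to the mutual information between the true classes and the labels assigned by a partition of the feature space. The sketch below is illustrative only and is not taken from the paper: it merely estimates that quantity for three 2-D Gaussian classes under a placeholder nearest-mean partition. The class means, sample sizes, and the partition itself are hypothetical, and the paper's CM iteration, which adjusts the partition to maximize this quantity, is not reproduced here.

```python
# Illustrative sketch (not the paper's CM algorithm): estimate the mutual
# information I(classes; labels) for a given classifier partition on data
# drawn from three hypothetical 2-D Gaussian classes.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three Gaussian classes in a two-dimensional feature space.
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
n_per_class = 2000
X = np.vstack([rng.normal(m, 1.0, size=(n_per_class, 2)) for m in means])
classes = np.repeat(np.arange(3), n_per_class)  # true class of each point

# Placeholder partition: label each point with the nearest class mean.
# Any other partition of the feature space could be plugged in here.
labels = np.argmin(((X[:, None, :] - means[None, :, :]) ** 2).sum(-1), axis=1)

def mutual_information(x, y, k=3):
    """Plug-in estimate, in bits, of I(X; Y) from the joint histogram of two
    discrete variables taking values 0..k-1."""
    joint = np.zeros((k, k))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

print(f"I(classes; labels) ≈ {mutual_information(classes, labels):.3f} bits")
```

According to the abstract, the CM iteration starts from most initial partitions and, within 2–3 iterations, raises this mutual information above 99% of the MMI; the estimator above would simply be re-evaluated on each updated partition.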
