[Deep] Learning Active Inference
David Bloomin
The Free Energy Principle posits that all persistent objects can be modeled as maximizing the evidence for their prior beliefs. They engage in Active Inference: updating their posterior from evidence, while navigating their environment to reduce uncertainty in their world-model and to occupy states that accord with their prior beliefs. Since exact Bayesian inference is computationally intractable, we can think of agents as performing an easier computation, variational inference: fitting a tractable surrogate distribution to the true posterior by minimizing variational free energy. This framing unifies intelligent behavior at any scale, from cells to nation states, but it fails to provide a recipe for actually creating an intelligent agent: the agent requires a generative model with a set of priors that must somehow be engineered or discovered. This talk proposes a concrete approach that uses Deep Reinforcement Learning to find an approximation to an Active Inference agent that can competently navigate complex environments.
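As a minimal illustration of the variational inference step described above (a sketch for exposition, not the talk's implementation), the following computes variational free energy for a discrete generative model p(o, s) = p(o | s) p(s). Free energy decomposes into complexity, KL(q(s) || p(s)), minus accuracy, E_q[ln p(o | s)]; minimizing it over the surrogate q drives q toward the true posterior, at which point F equals the negative log model-evidence.

```python
import numpy as np

def variational_free_energy(q, prior, likelihood, obs):
    """Variational free energy for a discrete generative model.

    q          -- surrogate posterior over hidden states, shape (S,)
    prior      -- prior beliefs p(s), shape (S,)
    likelihood -- p(o | s), shape (O, S)
    obs        -- index of the observed outcome
    """
    eps = 1e-12  # avoid log(0)
    # Complexity: how far the surrogate strays from the prior.
    complexity = np.sum(q * (np.log(q + eps) - np.log(prior + eps)))
    # Accuracy: expected log-likelihood of the observation under q.
    accuracy = np.sum(q * np.log(likelihood[obs] + eps))
    return complexity - accuracy

# Toy model with two hidden states and two outcomes.
prior = np.array([0.5, 0.5])
likelihood = np.array([[0.9, 0.2],   # p(o=0 | s)
                       [0.1, 0.8]])  # p(o=1 | s)
obs = 0

# The exact posterior minimizes F; there F = -ln p(o), the negative
# log evidence, so minimizing free energy maximizes model-evidence.
joint = likelihood[obs] * prior
evidence = joint.sum()
posterior = joint / evidence
F = variational_free_energy(posterior, prior, likelihood, obs)
assert np.isclose(F, -np.log(evidence))
```

Deep RL enters where this tabulation does not scale: in complex environments the surrogate q and the generative model are represented by neural networks rather than explicit distributions.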
Code: github.com/daveey
daveey.github.io/
x.com/daveey
Active Inference Institute information:
Website: activeinferenc...
Twitter: / inferenceactive
Discord: / discord
KZitem: / activeinference
Active Inference Livestreams: coda.io/@activ...
ActInf GuestStream 085.1 ~ David Bloomin: "[Deep] Learning Active Inference"