Created: Jul 12, 2020 11:49 AM
Description: Establish MAML as an approximation of Hierarchical Bayesian Inference
Materials: https://arxiv.org/pdf/1801.08930.pdf
Status: In progress
Meta-learning is formulated as the extraction of domain-general information that can act as an inductive bias to improve learning efficiency on novel tasks.
How is this inductive bias implemented?
The paper claims that MAML approximates hierarchical Bayesian inference: the meta-parameters learned during training represent a prior over tasks (a task-general prior). Given a new task, even with limited data and computation, the model can therefore quickly adapt its parameters to perform well on that task.
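To make the claim concrete, here is a minimal sketch of the correspondence as stated in the paper (the notation below is mine, and the early-stopping-as-Gaussian-prior interpretation holds exactly only in the linear case):

```latex
% MAML inner loop: adapt the meta-parameters \theta to task j with one
% gradient step on that task's loss.
\phi_j = \theta - \alpha \nabla_\theta \mathcal{L}(\theta; \mathcal{D}_j)

% Hierarchical Bayes reading: \theta parameterizes a prior over the
% task-specific parameters \phi_j, and meta-training maximizes the
% marginal likelihood of the data across tasks.
p(\mathcal{D} \mid \theta) = \prod_j \int p(\mathcal{D}_j \mid \phi_j)\, p(\phi_j \mid \theta)\, \mathrm{d}\phi_j

% The link: each integral is collapsed to a point (MAP) estimate, and the
% truncated gradient step above plays the role of that estimate, with early
% stopping acting like a Gaussian prior centered at \theta.
\hat{\phi}_j \approx \arg\max_{\phi} \left[ \log p(\mathcal{D}_j \mid \phi) + \log p(\phi \mid \theta) \right]
```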
(I have for now omitted the math used to prove the equivalence; the sketch above only states the correspondence, not the derivation.)
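A toy implementation may still help fix intuition. Below is a minimal sketch of MAML itself (not the paper's code; the task distribution, dimensions, and step sizes are my own illustrative choices). Linear-regression tasks are used because the inner-loop Hessian is available in closed form, so the exact second-order meta-gradient is easy to show.

```python
# Minimal MAML sketch on toy linear-regression tasks (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
d, n, alpha, beta = 5, 20, 0.1, 0.05  # dim, samples/task, inner & outer LRs

def sample_task():
    """Each task is linear regression with its own ground-truth weights."""
    w_true = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X[: n // 2], y[: n // 2], X[n // 2 :], y[n // 2 :]  # support/query

def loss_grad(theta, X, y):
    """Gradient of the mean squared error: (2/n) X^T (X theta - y)."""
    return (2.0 / len(y)) * X.T @ (X @ theta - y)

theta = np.zeros(d)  # meta-parameters: the learned initialization / "prior"
for step in range(2000):
    Xs, ys, Xq, yq = sample_task()
    # Inner loop: one gradient step adapts theta to the task (the MAP-like
    # point estimate in the hierarchical-Bayes reading).
    phi = theta - alpha * loss_grad(theta, Xs, ys)
    # Outer loop: differentiate the query loss through the inner step.
    # d(phi)/d(theta) = I - alpha * H, where H is the inner-loss Hessian.
    H = (2.0 / len(ys)) * Xs.T @ Xs
    meta_grad = (np.eye(d) - alpha * H) @ loss_grad(phi, Xq, yq)
    theta -= beta * meta_grad
```

The `(np.eye(d) - alpha * H)` factor is the derivative of the inner update; dropping it recovers first-order MAML.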