Note: This codebase is a reimplementation of Meta-Sim and currently includes the MNIST experiments from the paper.

Training models to high-end performance requires the availability of large labeled datasets, which are expensive to obtain. The goal of our work is to automatically synthesize labeled datasets that are relevant for a downstream task. We propose Meta-Sim, which learns a generative model of synthetic scenes and obtains images, together with their corresponding ground truth, via a graphics engine. We parametrize our dataset generator with a neural network, which learns to modify attributes of scene graphs obtained from probabilistic scene grammars, so as to minimize the distribution gap between its rendered outputs and target data. If the real dataset comes with a small labeled validation set, we additionally aim to optimize a meta-objective, i.e. downstream task performance. Experiments show that the proposed method can greatly improve content generation quality over a human-engineered probabilistic scene grammar, both qualitatively and quantitatively, as measured by performance on a downstream task.

For technical details, please refer to:

Amlan Kar, Aayush Prakash, Ming-Yu Liu, Eric Cameracci, Justin Yuan, Matt Rusiniak, David Acuna, Antonio Torralba, Sanja Fidler. Meta-Sim: Learning to Generate Synthetic Datasets.
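To make the core idea concrete, here is a minimal sketch of the generator's role: a scene graph sampled from a grammar prior has its node attributes rewritten by a small learned transformation before rendering. All names (`sample_scene_graph`, `AttributeModifier`) are illustrative assumptions, not this codebase's actual API, and the "network" is reduced to a single linear layer for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scene_graph(num_objects=3, attr_dim=4):
    """Sample a toy scene graph from a probabilistic-grammar-like prior.

    Each node carries a class label and a continuous attribute vector
    (standing in for e.g. location, rotation, scale). Hypothetical helper,
    not part of the Meta-Sim codebase."""
    return [{"cls": f"obj_{i}", "attrs": rng.normal(size=attr_dim)}
            for i in range(num_objects)]

class AttributeModifier:
    """Toy stand-in for the neural network that rewrites node attributes.

    A single linear layer with a residual connection, so the identity
    (keeping the grammar's original attributes) is easy to represent;
    the real model is a learned network trained to close the distribution
    gap between rendered outputs and target data."""
    def __init__(self, attr_dim=4):
        self.W = 0.01 * rng.normal(size=(attr_dim, attr_dim))
        self.b = np.zeros(attr_dim)

    def __call__(self, graph):
        # Rewrite each node's attributes; class labels are left unchanged.
        return [{"cls": n["cls"],
                 "attrs": n["attrs"] + n["attrs"] @ self.W + self.b}
                for n in graph]

graph = sample_scene_graph()
modified = AttributeModifier()(graph)
# In the full pipeline, `modified` would be rendered by a graphics engine,
# and the gap between renders and real images provides the training signal.
```

The residual parametrization is a common design choice in this setting: initializing the modification near zero means training starts from the human-engineered grammar's output rather than from noise.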