Abstract
Recent work has shown potential in using Mixed Integer Programming (MIP) solvers to optimize certain aspects of neural networks (NNs). However, little research has gone into training NNs with solvers. State-of-the-art methods to train NNs are typically gradient-based and require significant data, computation on GPUs, and extensive hyper-parameter tuning. In contrast, training with MIP solvers should not require GPUs or hyper-parameter tuning, but can likely not handle large amounts of data. This work therefore builds on recent advances that train binarized NNs using MIP solvers. We go beyond current work by formulating new MIP models to increase the amount of data that can be used and to train non-binary integer-valued networks. Our results show that performance comparable to gradient descent can be achieved when minimal data is available.
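The abstract's core idea, training a binarized network by solving a MIP rather than by gradient descent, can be illustrated with a minimal formulation for a single binarized neuron. This is a hypothetical sketch under standard big-M modelling assumptions, not a reproduction of the paper's actual models: each weight in {-1, +1} is encoded by a binary variable, and the objective counts margin violations on the training data.

```latex
% Minimal MIP for training one binarized neuron on data (x_i, y_i),
% with labels y_i in {-1, +1} and d-dimensional inputs x_i.
% Each weight w_j in {-1, +1} is encoded via a binary b_j: w_j = 2 b_j - 1.
% z_i = 1 marks a margin violation on example i; M is a sufficiently
% large constant (e.g. M >= 1 + max_i sum_j |x_{ij}|).
\begin{align}
\min_{b,\, z} \quad & \sum_{i=1}^{n} z_i \\
\text{s.t.} \quad & y_i \sum_{j=1}^{d} (2 b_j - 1)\, x_{ij} \;\ge\; 1 - M z_i,
  && i = 1, \dots, n, \\
& b_j \in \{0, 1\}, \quad z_i \in \{0, 1\},
  && j = 1, \dots, d,\; i = 1, \dots, n.
\end{align}
```

Replacing the binary encoding with bounded general integer variables would yield the non-binary integer-valued weights the abstract mentions; the paper's contribution lies in MIP models of this kind that scale to more data.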
| Original language | English |
|---|---|
| Number of pages | 6 |
| Publication status | Published - 2021 |
| Event | IJCAI-PRICAI'20 Workshop on Data Science Meets Optimisation - Yokohama, Japan. Duration: 7 Jan 2021 → 8 Jan 2021 |
Workshop
| Workshop | IJCAI-PRICAI'20 Workshop on Data Science Meets Optimisation |
|---|---|
| Country/Territory | Japan |
| City | Yokohama |
| Period | 7/01/21 → 8/01/21 |