Accurate Neural Network Pruning Requires Rethinking Sparse Optimization
Published in TMLR, 2024
We show that, generally speaking, training settings tuned for dense models are not optimal for sparse training on the same dataset/architecture. In particular, for computer vision, we show that extended training schedules greatly improve sparse-model accuracy; we also explore the difficulty of finding the right training recipes under sparsity.
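To make the extended-training idea concrete, here is a minimal sketch, not the paper's actual recipe: a model is pruned by global weight magnitude and then fine-tuned with the sparsity pattern held fixed, where the sparse model is given a longer training budget than the dense baseline. The model, data loader, sparsity level, and epoch counts are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's method): one-shot global
# magnitude pruning followed by fine-tuning over an extended schedule.
import torch
import torch.nn as nn

def apply_magnitude_masks(model: nn.Module, sparsity: float) -> dict:
    """Zero out the smallest-magnitude weights globally; return the masks."""
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    all_vals = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(int(sparsity * all_vals.numel()), 1)
    threshold = all_vals.kthvalue(k).values  # global magnitude cutoff
    masks = {w: (w.detach().abs() > threshold).float() for w in weights}
    with torch.no_grad():
        for w, mask in masks.items():
            w.mul_(mask)
    return masks

def finetune_sparse(model, loader, masks, epochs, lr=0.01):
    """Fine-tune while re-applying masks so pruned weights stay zero."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    # Cosine schedule stretched over the (longer) sparse training budget.
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            with torch.no_grad():
                for w, mask in masks.items():  # keep sparsity pattern fixed
                    w.mul_(mask)
        sched.step()

# Illustrative comparison: same recipe, dense-length vs. extended budget.
# masks = apply_magnitude_masks(model, sparsity=0.9)
# finetune_sparse(model, train_loader, masks, epochs=100)  # dense-length
# finetune_sparse(model, train_loader, masks, epochs=300)  # extended
```

The point of the sketch is only that the two calls at the bottom differ solely in training budget; under high sparsity, the longer schedule is the kind of setting a dense-tuned recipe would not use.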