Document Type

Article - preprint

Department

Claremont McKenna College, Mathematics (CMC)

Publication Date

8-30-2016

Abstract

We analyze a batched variant of Stochastic Gradient Descent (SGD) with a weighted sampling distribution for smooth and non-smooth objective functions. We show that by distributing the batches computationally, a significant speedup in the convergence rate is provably possible compared to either batched sampling or weighted sampling alone. We propose several computationally efficient schemes to approximate the optimal weights, and compute proposed sampling distributions explicitly for the least squares and hinge loss problems. We show both analytically and experimentally that substantial gains can be obtained.
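To make the idea concrete, the sketch below shows one plausible form of batched SGD with weighted sampling applied to the least squares problem mentioned in the abstract. It is a minimal illustration, not the authors' implementation: the choice of sampling rows with probability proportional to their squared norms, the step size, and all function and parameter names are assumptions made here for exposition.

```python
import numpy as np

def batched_weighted_sgd_least_squares(A, b, batch_size=8, n_iters=500, step=None, seed=0):
    """Illustrative batched SGD with weighted sampling for min_x (1/2n)||Ax - b||^2.

    Rows are drawn with probability proportional to their squared norms
    (one natural importance-weighting choice for least squares); each
    sampled gradient is rescaled by 1/(n * p_i) so the update stays unbiased.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    probs = row_norms_sq / row_norms_sq.sum()          # weighted sampling distribution
    if step is None:
        step = 1.0 / row_norms_sq.max()                # crude, conservative step size (assumption)
    x = np.zeros(d)
    for _ in range(n_iters):
        idx = rng.choice(n, size=batch_size, p=probs)  # draw a weighted batch, with replacement
        residual = A[idx] @ x - b[idx]
        # Importance-weighted gradient: divide each term by n * p_i to keep the estimator unbiased.
        weights = 1.0 / (n * probs[idx])
        grad = (A[idx] * (weights * residual)[:, None]).sum(axis=0) / batch_size
        x -= step * grad
    return x

if __name__ == "__main__":
    # Tiny usage example on a random consistent overdetermined system.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    x_true = rng.standard_normal(10)
    b = A @ x_true
    x_hat = batched_weighted_sgd_least_squares(A, b, batch_size=16, n_iters=2000)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In this sketch the batch gradient is an average over the sampled rows, so the per-batch work grows with the batch size while the variance of the update shrinks; the paper's point is that when those batches are computed in a distributed fashion, the combination of batching and weighting yields faster convergence than either ingredient alone.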

Rights Information

© 2016 Needell, Ward

Terms of Use & License Information

Terms of Use for work posted in Scholarship@Claremont.

Included in

Mathematics Commons
