Peter Richtárik: Randomized Dual Coordinate Ascent with Arbitrary Sampling. 1st UCL Workshop on the Theory of Big Data – London – January 2015.


1 Peter Richtárik: Randomized Dual Coordinate Ascent with Arbitrary Sampling. 1st UCL Workshop on the Theory of Big Data – London – January 2015

2 Coauthors: Zheng Qu (Edinburgh, Mathematics) and Tong Zhang (Rutgers, Statistics; Baidu, Big Data Lab). Zheng Qu, P.R. and Tong Zhang, Randomized dual coordinate ascent with arbitrary sampling, arXiv:1411.5873, 2014

3 Part 1 MACHINE LEARNING

4 Statistical nature of data. Data (e.g., image, text, measurements, …) and label: \[A_i \in \mathbb{R}^{d\times m}, \qquad y_i \in \mathbb{R}^m \]

5 Prediction of labels from data. Find a linear predictor $w$ such that when a (data, label) pair is drawn from the distribution, \[(A_i,y_i)\sim \text{Distribution},\] the predicted label is close to the true label: \[A_i^\top w \approx y_i\]

6 Measure of Success. We want the expected loss (= risk) to be small: \[\mathbf{E} \left[ loss(A_i^\top w, y_i)\right] \] (here $A_i$ is the data and $y_i$ the label).

7 Finding a Linear Predictor via Empirical Risk Minimization. Draw i.i.d. data (samples) from the distribution, \[(A_1,y_1), (A_2,y_2), \dots, (A_n,y_n)\sim \text{Distribution},\] and output the predictor which minimizes the empirical risk: \[\min_{w\in \mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n loss(A_i^\top w, y_i)\]

8 Part 2 OPTIMIZATION

9 Primal Problem \[\min_{w \in \mathbb{R}^d}\;\; \left[ P(w) \equiv \frac{1}{n} \sum_{i=1}^n \phi_i(A_i^\top w) + \lambda g(w)\right] \] where $d$ = # features (parameters), $n$ = # samples, each $\phi_i$ is a smooth & convex loss, $g$ is a 1-strongly convex regularizer, and $\lambda$ is the regularization parameter.
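
The objective can be written down in a few lines. A minimal NumPy sketch, assuming $m=1$, with the columns of A as the examples $A_i$ and a purely hypothetical instance using the quadratic loss of Example 1 and $g(w)=\tfrac12\|w\|^2$:
\begin{verbatim}
import numpy as np

def P(w, A, phi, lam, g):
    # P(w) = (1/n) * sum_i phi_i(A_i^T w) + lambda * g(w); columns of A are the examples A_i
    return phi(A.T @ w).mean() + lam * g(w)

# Hypothetical instance: quadratic loss phi_i(a) = (a - b_i)^2 / (2*gamma), g(w) = 0.5*||w||^2
rng = np.random.default_rng(0)
d, n, gamma, lam = 10, 100, 1.0, 0.01
A = rng.standard_normal((d, n))
b = rng.standard_normal(n)
phi = lambda a: (a - b) ** 2 / (2 * gamma)
g = lambda w: 0.5 * w @ w
print(P(np.zeros(d), A, phi, lam, g))
\end{verbatim}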

10 Assumption 1. Loss functions have Lipschitz gradient: \[ \|\nabla \phi_i(a)-\nabla \phi_i(a')\|\;\; \leq\;\; \frac{1}{\gamma}\;\; \|a-a'\|, \quad a,a'\in \mathbb{R}^m,\] with Lipschitz constant $1/\gamma$.

11 Assumption 2. The regularizer is 1-strongly convex: \[g(w)\geq g(w') + \left\langle g'(w'), w-w'\right\rangle + \frac{1}{2}\|w-w'\|^2, \quad w,w'\in \mathbb{R}^d,\] where $g'(w')$ is a subgradient of $g$ at $w'$.

12 Dual Problem \[\max_{\alpha=(\alpha_1,\dots,\alpha_n) \in \mathbb{R}^{N}=\mathbb{R}^{nm}} D(\alpha)\] \[D(\alpha) \equiv - \lambda g^*\left(\frac{1}{\lambda n}\sum_{i=1}^n A_i\alpha_i\right) - \frac{1}{n}\sum_{i=1}^n \phi_i^*(- \alpha_i),\] where the convex conjugates are \[g^*(w') = \max_{w\in \mathbb{R}^d} \left\{(w')^\top w - g(w)\right\}\] \[\phi_i^*(a') = \max_{a\in \mathbb{R}^m} \left\{(a')^\top a - \phi_i(a)\right\}\] ($g^*$ is 1-smooth & convex; each $\phi_i^*$ is strongly convex).

13 Part 3 ALGORITHM

14 Quartz

15 Fenchel Duality. With \[\bar{\alpha} = \frac{1}{\lambda n} \sum_{i=1}^n A_i \alpha_i,\] weak duality: \[ P(w) - D(\alpha) \;\; = \;\; \lambda \left( g(w) + g^*\left(\bar{\alpha}\right)\right) + \frac{1}{n}\sum_{i=1}^n \left[\phi_i(A_i^\top w) + \phi_i^*(-\alpha_i)\right] \] \[= \lambda \left(g(w) + g^* \left(\bar{\alpha}\right)- \left\langle w, \bar{\alpha} \right\rangle \right) + \frac{1}{n}\sum_{i=1}^n \left[\phi_i(A_i^\top w) + \phi_i^*(-\alpha_i) + \left\langle A_i^\top w, \alpha_i \right\rangle\right] \;\;\geq\;\; 0.\] Optimality conditions: \[w = \nabla g^*(\bar{\alpha})\] \[\alpha_i = -\nabla \phi_i(A_i^\top w)\]
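
For the quadratic loss of Example 1 and $g(w)=\tfrac12\|w\|^2$, both conjugates have closed forms ($g^*=\tfrac12\|\cdot\|^2$, $\phi_i^*(s)=b_i s + \tfrac{\gamma}{2}s^2$), so the gap above can be evaluated directly. A small sketch, on hypothetical data, checking numerically that the gap is nonnegative:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n, lam, gamma = 20, 50, 0.1, 1.0            # hypothetical sizes / parameters
A = rng.standard_normal((d, n))                 # columns A_i are the examples
b = rng.standard_normal(n)

def primal(w):
    # P(w) = (1/(2*gamma*n)) * ||A^T w - b||^2 + lam * 0.5 * ||w||^2
    return np.sum((A.T @ w - b) ** 2) / (2 * gamma * n) + lam * 0.5 * w @ w

def dual(alpha):
    alpha_bar = A @ alpha / (lam * n)           # \bar{alpha} = (1/(lambda*n)) sum_i A_i alpha_i
    conj_loss = -b * alpha + 0.5 * gamma * alpha ** 2   # phi_i^*(-alpha_i) in closed form
    return -lam * 0.5 * alpha_bar @ alpha_bar - conj_loss.mean()

w, alpha = rng.standard_normal(d), rng.standard_normal(n)
assert primal(w) - dual(alpha) >= 0             # weak duality: the gap is nonnegative
\end{verbatim}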

16 The Algorithm

17 Quartz: Bird’s Eye View \[(\alpha^t,w^t) \qquad \Rightarrow \qquad (\alpha^{t+1},w^{t+1})\] STEP 1: PRIMAL UPDATE \[w^{t+1} \leftarrow (1-\theta)w^t + \theta {\color{red}\nabla g^*(\bar{\alpha}^t)}\] STEP 2: DUAL UPDATE. Choose a random set ${\color{blue}S_t}$ of dual variables, with ${\color{blue}p_i = \mathbf{P}(i\in S_t)}$, and for each sampled $i \in S_t$ update \[\alpha_i^{t+1} \leftarrow \left(1-\frac{\theta}{{\color{blue}p_i}}\right) \alpha_i^{t} + \frac{\theta}{{\color{blue}p_i}}{\color{red} \left(-\nabla \phi_i(A_i^\top w^{t+1})\right)} \]
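
A minimal sketch of the two steps for the ridge-regression instance used earlier (quadratic loss, $g(w)=\tfrac12\|w\|^2$, so $\nabla g^*(\bar\alpha)=\bar\alpha$ and $-\nabla\phi_i(A_i^\top w)=(b_i-A_i^\top w)/\gamma$). The sampling is taken $\tau$-nice for concreteness, and $\theta$ is simply set to a small constant rather than the value prescribed by the theory:
\begin{verbatim}
import numpy as np

def quartz_step(alpha, w, A, b, p, lam, gamma, theta, S):
    # STEP 1 (primal update): w^{t+1} = (1-theta)*w^t + theta * grad g*(alpha_bar^t)
    n = A.shape[1]
    alpha_bar = A @ alpha / (lam * n)
    w_new = (1 - theta) * w + theta * alpha_bar        # grad g*(u) = u for g = 0.5*||.||^2
    # STEP 2 (dual update), only for the sampled coordinates i in S:
    alpha_new = alpha.copy()
    for i in S:
        target = (b[i] - A[:, i] @ w_new) / gamma      # -grad phi_i(A_i^T w^{t+1}), quadratic loss
        alpha_new[i] = (1 - theta / p[i]) * alpha[i] + (theta / p[i]) * target
    return alpha_new, w_new

rng = np.random.default_rng(0)
d, n, tau = 10, 100, 5
A, b = rng.standard_normal((d, n)), rng.standard_normal(n)
p = np.full(n, tau / n)                                # tau-nice sampling: p_i = tau/n
alpha, w = np.zeros(n), np.zeros(d)
for t in range(200):
    S = rng.choice(n, size=tau, replace=False)         # draw one tau-nice sample S_t
    alpha, w = quartz_step(alpha, w, A, b, p, lam=0.1, gamma=1.0, theta=0.01, S=S)
\end{verbatim}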

18 The Algorithm (pseudocode figure: STEP 1 = primal update, STEP 2 = dual update; $\theta$ is the convex combination constant)

19 Randomized Primal-Dual Methods
SDCA: S Shalev-Shwartz & T Zhang, 09/2012
mSDCA: M Takac, A Bijral, P.R. & N Srebro, 03/2013
ASDCA: S Shalev-Shwartz & T Zhang, 05/2013
AccProx-SDCA: S Shalev-Shwartz & T Zhang, 10/2013
DisDCA: T Yang, 2013
Iprox-SDCA: P Zhao & T Zhang, 01/2014
APCG: Q Lin, Z Lu & L Xiao, 07/2014
SPDC: Y Zhang & L Xiao, 09/2014
Quartz: Z Qu, P.R. & T Zhang, 11/2014
{\footnotesize \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Algorithm & 1-nice & 1-optimal & $\tau$-nice & arbitrary &{\begin{tabular}{c}additional\\speedup\end{tabular} } & { \begin{tabular}{c}direct\\p-d\\analysis\end{tabular}} & acceleration\\ \hline SDCA & $\bullet$ & & & & & & \\ \hline mSDCA & $\bullet$ & & $\bullet$ & & $\bullet$ & & \\ \hline ASDCA & $\bullet$ & & $\bullet$ & & & & $\bullet$ \\ \hline AccProx-SDCA &$\bullet$ & & & & & &$\bullet$\\ \hline DisDCA &$\bullet$ & &$\bullet$ & & & & \\ \hline Iprox-SDCA & $\bullet$ & $\bullet$ & & & & & \\ \hline APCG &$\bullet$ & & & & & &$\bullet$ \\ \hline SPDC &$\bullet$ & $\bullet$ &$\bullet$ & & & $\bullet$ &$\bullet$ \\ \hline \bf{Quartz} &{\color{red}$\bullet$} &{\color{red}$\bullet$} &{\color{red}$\bullet$} &{\color{red}$\bullet$} &{\color{red}$\bullet$} & {\color{red}$\bullet$} & \\ \hline \end{tabular} }

20 Part 4 MAIN RESULT

21 Assumption 3 (Expected Separable Overapproximation). Parameters ${\color{red}v_1,\dots,v_n}$ satisfy \[ \mathbf{E} \left\| \sum_{i\in \hat{S}} A_i \alpha_i\right\|^2 \;\;\leq \;\; \sum_{i=1}^n {\color{blue} p_i} {\color{red} v_i}\|\alpha_i\|^2, \qquad {\color{blue} p_i} = \mathbf{P}(i\in \hat{S}),\] and the inequality must hold for all \[\alpha_1,\dots,\alpha_n \in \mathbb{R}^m.\]

22 Complexity Theorem (QRZ’14). If \[t\;\;\geq \;\; \max_i \left( \frac{1}{{\color{blue}p_i}} + \frac{{\color{red}v_i}}{{\color{blue}p_i} \lambda \gamma n}\right) \log \left( \frac{P(w^0)-D(\alpha^0)}{\epsilon} \right),\] then \[\mathbf{E}\left[P(w^t)-D(\alpha^t)\right] \leq \epsilon.\]
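
The bound, wrapped as a small helper for experimenting with different samplings (a sketch; `p` and `v` are the sampling probabilities and ESO parameters, and the initial gap is whatever $P(w^0)-D(\alpha^0)$ happens to be):
\begin{verbatim}
import numpy as np

def quartz_iteration_bound(p, v, lam, gamma, n, initial_gap, eps):
    # t >= max_i (1/p_i + v_i/(p_i*lam*gamma*n)) * log(initial_gap/eps)
    # guarantees E[P(w^t) - D(alpha^t)] <= eps  (Theorem, QRZ'14)
    return np.max(1.0 / p + v / (p * lam * gamma * n)) * np.log(initial_gap / eps)

n = 1000
print(quartz_iteration_bound(np.full(n, 1.0 / n), np.ones(n),
                             lam=1e-3, gamma=1.0, n=n, initial_gap=1.0, eps=1e-4))
\end{verbatim}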

23 Part 5a UPDATING ONE DUAL VARIABLE AT A TIME \[A = [A_1,A_2,A_3,A_4,A_5] = \left(\begin{matrix} 0 & 0 & 6 & 4 & 9\\ 0 & 3 & 0 & 0 & 0\\ 0 & 0 & 3 & 0 & 1\\ 1 & 8 & 0 & 0 & 0\\ \end{matrix}\right)\]

24 Complexity of Quartz specialized to serial sampling: \begin{table} \begin{tabular}{|c|c|} \hline & \\ Optimal sampling & $ \displaystyle n +\frac{\tfrac{1}{n}\sum_{i=1}^n L_i}{\lambda \gamma}$ \\ & \\ \hline & \\ Uniform sampling & $\displaystyle n +\frac{\max_i L_i}{\lambda \gamma}$ \\ & \\ \hline \end{tabular} \end{table}
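
A sketch of where the two rows come from, assuming the serial case $|\hat{S}|=1$ with $v_i = L_i = \|A_i\|^2$: uniform sampling uses $p_i = 1/n$, while one importance sampling that achieves the "optimal" row is $p_i \propto \lambda\gamma n + L_i$; plugging either choice into the bound of the previous theorem reproduces the table entries.
\begin{verbatim}
import numpy as np

def serial_complexities(A, lam, gamma):
    n = A.shape[1]
    L = np.sum(A * A, axis=0)                        # L_i = ||A_i||^2 (v_i for serial sampling)
    p_uniform = np.full(n, 1.0 / n)
    p_optimal = (lam * gamma * n + L) / np.sum(lam * gamma * n + L)   # p_i prop. to lam*gamma*n + L_i
    bound = lambda p: np.max((1.0 / p) * (1 + L / (lam * gamma * n)))
    # returns (n + max_i L_i/(lam*gamma),  n + mean(L)/(lam*gamma))
    return bound(p_uniform), bound(p_optimal)

A = np.random.default_rng(0).standard_normal((10, 100))
print(serial_complexities(A, lam=0.01, gamma=1.0))
\end{verbatim}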

25 Data \begin{table} \begin{tabular}{|c|c|c|c|} \hline {\bf Dataset} & \begin{tabular}{c}{\bf \# Samples}\\ $n$\end{tabular} & \begin{tabular}{c}{\bf \# features}\\ $d$ \end{tabular}& \begin{tabular}{c}{\bf density} \\ $nnz(A)/(nd)$\end{tabular}\\ \hline astro-ph & 29,882 & 99,757 & 0.08\% \\ \hline CCAT & 781,265 & 47,236 & 0.16\% \\ \hline cov1 & 522,911 & 54 & 22.22\%\\ \hline w8a & 49,749 & 300 & 3.91\% \\ \hline ijcnn1 & 49,990 & 22 & 59.09\% \\ \hline webspam & 350,000 & 254 & 33.52\% \\\hline \end{tabular} \end{table}

26 Experiment: Quartz vs SDCA, uniform vs optimal sampling. Data = \texttt{cov1}, $n=522,911$, $\lambda = 10^{-6}$. (Plots compare the standard primal update with an “aggressive” primal update.)

27 Part 5b TAU-NICE SAMPLING (STANDARD MINIBATCHING)

28 Data sparsity. ${\color{blue}\tilde{\omega}}$ is a normalized measure of average sparsity of the data: ${\color{blue}\tilde{\omega}}=1$ for fully sparse data and ${\color{blue}\tilde{\omega}}=n$ for fully dense data.

29 Complexity of Quartz \begin{table} \begin{tabular}{|c|c|} \hline &\\ \begin{tabular}{c}Fully sparse data\\ (${\color{blue}\tilde{\omega}=1}$) \end{tabular} & $\displaystyle \frac{n}{{\color{red}\tau}}+\frac{\max_i L_i}{\lambda \gamma {\color{red}\tau}}$\\ & \\ \hline & \\ \begin{tabular}{c}Fully dense data \\(${\color{blue}\tilde{\omega}=n}$) \end{tabular} & $\displaystyle \frac{n}{{\color{red}\tau}}+\frac{\max_i L_i}{\lambda \gamma}$ \\ & \\ \hline & \\ \begin{tabular}{c}Any data\\ (${\color{blue}1\leq \tilde{\omega}\leq n}$) \end{tabular} & $\displaystyle \frac{n}{{\color{red}\tau}}+\frac{\left(1+\frac{({\color{blue}\tilde{\omega}}-1)({\color{red}{\color{red}\tau}}-1)}{n-1}\right)\max_i L_i}{\lambda \gamma {\color{red}\tau}} $ \\ & \\ \hline \end{tabular} \end{table}

30 Speedup. Assume the data is normalized. Then: \[ T({\color{red}\tau}) \;\; = \;\; \frac{ \left(1+\frac{({\color{blue}\tilde{\omega}}-1)({\color{red}\tau}-1)}{(n-1)(1+\lambda \gamma n)}\right) }{{\color{red}\tau}} \times T({\color{red}1}) \] Linear speedup up to a certain data-independent minibatch size: for \[1\leq {\color{red}\tau} \leq 2+\lambda \gamma n,\] \[ \frac{T(1)}{T({\color{red}\tau})} \geq \frac{{\color{red}\tau}}{1+\frac{\tilde{\omega}-1}{n-1}}\geq \frac{{\color{red}\tau}}{2}, \qquad \text{i.e.,} \qquad T({\color{red}\tau}) \leq \frac{2}{{\color{red}\tau}} \times T({\color{red}1}). \] Further data-dependent speedup, up to the extreme case: \[ {\color{blue}\tilde{\omega}} = O(1) \quad \Rightarrow \quad T({\color{red}\tau}) = {\cal O} \left(\frac{T({\color{red}1})}{\color{red}\tau}\right)\]
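
The speedup factor $T(1)/T(\tau)$ implied by the formula above, as a small helper (the parameter values below are purely hypothetical); it reproduces the qualitative shape of the speedup plots on the next slides, from near-linear speedup for sparse data to saturation for fully dense data:
\begin{verbatim}
import numpy as np

def theoretical_speedup(tau, n, omega_tilde, lam, gamma):
    # T(1)/T(tau), with T(tau) = (1 + (omega_tilde-1)(tau-1)/((n-1)(1+lam*gamma*n)))/tau * T(1)
    return tau / (1 + (omega_tilde - 1) * (tau - 1) / ((n - 1) * (1 + lam * gamma * n)))

taus = np.arange(1, 1025)
sparse = theoretical_speedup(taus, n=10**6, omega_tilde=10,    lam=1e-6, gamma=1.0)
dense  = theoretical_speedup(taus, n=10**6, omega_tilde=10**6, lam=1e-6, gamma=1.0)
print(sparse[-1], dense[-1])   # near-linear speedup vs. saturation around 2
\end{verbatim}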

31 Speedup: sparse data

32 Speedup: denser data

33 Speedup: fully dense data

34 astro-ph: n = 29,882, density = 0.08%

35 CCAT: n = 781,265 density = 0.16%

36 Primal-dual methods with $\tau$-nice sampling (ASDCA: S Shalev-Shwartz & T Zhang ’13; SPDC: Y Zhang & L Xiao ’14). \begin{tabular}{c|c|c} \hline Algorithm & Iteration complexity & $g$\\ \hline && \\ SDCA & $\displaystyle n + \frac{1}{\lambda \gamma}$ & $\tfrac{1}{2}\|\cdot\|^2$ \\ &&\\ \hline &&\\ ASDCA & $\displaystyle 4 \times \max\left\{\frac{n}{{\color{red}\tau}},\sqrt{\frac{n}{\lambda\gamma {\color{red}\tau}}},\frac{1}{\lambda \gamma {\color{red}\tau}},\frac{n^{\frac{1}{3}}}{(\lambda \gamma {\color{red}\tau})^{\frac{2}{3}}}\right\}$ & $\tfrac{1}{2}\|\cdot\|^2$\\ &&\\ \hline &&\\ SPDC & $\displaystyle \frac{n}{{\color{red}\tau}}+\sqrt{\frac{n}{\lambda \gamma {\color{red}\tau}}}$ & general \\ &&\\ \hline &&\\ \bf{Quartz } & $\displaystyle \frac{n}{{\color{red}\tau}}+\left(1+\frac{(\tilde \omega -1)({\color{red}\tau}-1)}{n-1}\right)\frac{1}{\lambda \gamma {\color{red}\tau}}$ & general\\ &&\\ \hline \end{tabular}

37 \begin{tabular}{c|c|c|c|c} \hline Algorithm & $\gamma\lambda n = \Theta(\tfrac{1}{\tau})$ & $\gamma\lambda n = \Theta(1)$ & $\gamma\lambda n = \Theta(\tau)$ & $\gamma\lambda n = \Theta(\sqrt{n})$\\ \hline & $\kappa = n\tau$ & $\kappa = n$ & $\kappa = n/\tau$ & $\kappa = \sqrt{n}$\\ \hline &&&&\\ SDCA & $n \tau$ & $n$ & $n$ & $n$\\ &&&&\\ \hline &&&&\\ ASDCA & $n$ & $\displaystyle \frac{n}{\sqrt{\tau}}$ & $\displaystyle \frac{n}{\tau}$ & $\displaystyle \frac{n}{\tau} + \frac{n^{3/4}}{\sqrt{\tau}}$\\ &&&&\\ \hline &&&&\\ SPDC & $n$ & $\displaystyle \frac{n}{\sqrt{\tau}}$ & $\displaystyle \frac{n}{\tau}$ & $\displaystyle \frac{n}{\tau} + \frac{n^{3/4}}{\sqrt{\tau}}$\\ &&&&\\ \hline &&&&\\ \bf{Quartz } & $\displaystyle n + \tilde{\omega}\tau$ & $\displaystyle \frac{n}{\tau} +\tilde{\omega}$ & $\displaystyle \frac{n}{\tau}$ & $\displaystyle \frac{n}{\tau} + \frac{\tilde{\omega}}{\sqrt{n}}$\\ &&&&\\ \hline \end{tabular} (ASDCA and SPDC are accelerated methods.) For sufficiently sparse data, Quartz wins even when compared against these accelerated methods.

38 Part 6 ESO. Zheng Qu and P.R., Coordinate Descent with Arbitrary Sampling II: Expected Separable Overapproximation, arXiv:1412.8063, 2014

39 Computation of ESO parameters. Lemma (QR’14b): for simplicity, assume that $m = 1$ and write $A = [A_1,A_2,\dots,A_n]$. Then \[ \mathbf{E} \left\| \sum_{i\in \hat{S}} A_i \alpha_i\right\|^2 \;\;\leq \;\; \sum_{i=1}^n {\color{blue} p_i} {\color{red} v_i}\|\alpha_i\|^2 \] \[\Updownarrow\] \[ P \circ A^\top A \preceq Diag({\color{blue}p}\circ {\color{red}v})\]
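
The identity behind the lemma is that the left-hand side of the ESO is the quadratic form $\alpha^\top (P \circ A^\top A)\alpha$, where $P_{ij} = \mathbf{P}(i\in\hat{S},\, j\in\hat{S})$; the ESO is then exactly the matrix inequality on the right. A quick numerical check on a tiny hypothetical instance, taking $\hat{S}$ $\tau$-nice (for which the pair probabilities are known in closed form) and $m=1$:
\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
d, n, tau = 5, 6, 3                                   # tiny hypothetical instance
A = rng.standard_normal((d, n))
alpha = rng.standard_normal(n)

# Pair probabilities P_ij = Prob(i in S and j in S) for the tau-nice sampling
P = np.full((n, n), tau * (tau - 1) / (n * (n - 1)))
np.fill_diagonal(P, tau / n)

# E || sum_{i in S} A_i alpha_i ||^2, computed exactly by enumerating all tau-subsets
subsets = list(combinations(range(n), tau))
lhs = np.mean([np.sum((A[:, list(S)] @ alpha[list(S)]) ** 2) for S in subsets])
rhs = alpha @ (P * (A.T @ A)) @ alpha                 # quadratic form with P o A^T A
assert np.isclose(lhs, rhs)
\end{verbatim}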

40 ESO. Theorem (QR’14b): for any sampling $\hat{S}$, ESO holds with \[{\color{red} v_i} = \underbrace{\min\{\lambda'(P(\hat{S})),\lambda'(A^\top A)\}}_{\equiv\beta(\hat{S},A)} \|A_i\|^2,\] where \[\lambda'(M) \equiv \max_{\alpha\in \mathbb{R}^n} \frac{\alpha^\top M \alpha}{\alpha^\top Diag(M) \alpha}.\] Moreover, \[\mathbf{P}(|\hat{S}|=\tau)=1 \quad \Rightarrow \quad\lambda'(P(\hat{S}))=\tau\] \[1 \leq \lambda'(A^\top A) \leq \underbrace{\max_j \|A_{j:}\|_0}_{\omega} \leq n\]
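
A sketch of how these parameters can be computed, assuming $m=1$, a $\tau$-nice sampling (so $\lambda'(P(\hat{S}))=\tau$, as stated above), and no zero columns of $A$ (so $Diag(A^\top A)$ is invertible): $\lambda'(M)$ equals the largest eigenvalue of $Diag(M)^{-1/2} M\, Diag(M)^{-1/2}$.
\begin{verbatim}
import numpy as np

def lambda_prime(M):
    # lambda'(M) = max_alpha (alpha^T M alpha) / (alpha^T Diag(M) alpha),
    # computed as the largest eigenvalue of D^{-1/2} M D^{-1/2}; assumes diag(M) > 0
    D = np.sqrt(np.diag(M))
    return np.linalg.eigvalsh(M / np.outer(D, D)).max()

def eso_parameters(A, tau):
    # v_i = min{lambda'(P(S)), lambda'(A^T A)} * ||A_i||^2, with lambda'(P(S)) = tau (tau-nice)
    beta = min(tau, lambda_prime(A.T @ A))
    return beta * np.sum(A * A, axis=0)

A = np.random.default_rng(0).standard_normal((5, 6))
print(eso_parameters(A, tau=3))
\end{verbatim}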

42 Part 7 DISTRIBUTED QUARTZ

43 Distributed Quartz: Perform the Dual Updates in a Distributed Manner (figure: Quartz STEP 2, the dual update, and the data required to compute the update)

44 Distribution of Data (figure: the data matrix; n = # dual variables)

45 Distributed sampling

46 Random set of dual variables

47 Distributed sampling & distributed coordinate descent. Previously studied (not in the primal-dual setup):
P.R. and Martin Takáč, Distributed coordinate descent method for learning with big data, arXiv:1310.2059, 2013 (strongly convex & smooth)
Olivier Fercoq, Zheng Qu, P.R. and Martin Takáč, Fast distributed coordinate descent for minimizing non-strongly convex losses, 2014 IEEE Int. Workshop on Machine Learning for Signal Processing, May 2014 (convex & smooth)
Jakub Marecek, P.R. and Martin Takáč, Fast distributed coordinate descent for minimizing partially separable functions, arXiv:1406.0238, June 2014

48 Complexity of distributed Quartz \[\frac{n}{c\tau} + \max_i\frac{\lambda_{\max}\left( \sum_{j=1}^d \left(1+\frac{(\tau-1)(\omega_j-1)}{\max\{n/c-1,1\}}+ \left(\frac{\tau c}{n} - \frac{\tau-1}{\max\{n/c-1,1\}}\right) \frac{\omega_j'- 1}{\omega_j'}\omega_j\right) A_{ji}^\top A_{ji}\right)}{\lambda\gamma c\tau} \]

49 Reallocating load: theoretical speedup (two settings: n = 1,000,000 with density = 100%, and n = 1,000,000 with density = 0.01%)

50 Reallocating load: experiment Data: webspam n = 350,000 density = 33.51%

51 Experiment. Machine: 128 nodes of the Hector supercomputer (4,096 cores). Problem: LASSO, n = 1 billion, d = 0.5 billion, 3 TB. Algorithm: Hydra with c = 512. P.R. and Martin Takáč, Distributed coordinate descent method for learning with big data, arXiv:1310.2059, 2013

52 LASSO: 3TB data + 128 nodes

53 Experiment. Machine: 128 nodes of the Archer supercomputer. Problem: LASSO, n = 5 million, d = 50 billion, 5 TB (60,000 nnz per row of A). Algorithm: Hydra 2 with c = 256. Olivier Fercoq, Zheng Qu, P.R. and Martin Takáč, Fast distributed coordinate descent for minimizing non-strongly convex losses, IEEE Int. Workshop on Machine Learning for Signal Processing, 2014

54 LASSO: 5TB data (d = 50b) + 128 nodes

55 Related Work
Zheng Qu and P.R., Coordinate descent with arbitrary sampling II: expected separable overapproximation, arXiv:1412.8063, 2014
Zheng Qu and P.R., Coordinate descent with arbitrary sampling I: algorithms and complexity, arXiv:1412.8060, 2014
P.R. and Martin Takáč, On optimal probabilities in stochastic coordinate descent methods, arXiv:1310.3438, 2013

56 END

57 Part 5.4 PRODUCT SAMPLING

58 Product sampling: non-serial importance sampling! Example: the examples are split into groups \[X_1 := \{1,2\}, \quad X_2 := \{3,4,5\}.\] In general: each row (feature) has nonzeros in a single group of columns (examples) only.

59 Complexity (complexity formula and the meaning of its notation given on the slide)

60 Part X LOSS FUNCTIONS & REGULARIZERS

61 Example 1: Quadratic Loss \[ \phi_i(a)=\tfrac{1}{2\gamma}(a-b_i)^2 \] \[\Downarrow\] \[ \frac{1}{n}\sum_{i=1}^n \phi_i(A_i^\top w) =\frac{1}{2\gamma n}\|Aw-b\|_2^2 \]

62 Example 2: Logistic Loss \[ \phi_i(a)=\tfrac{4}{\gamma}\log\left(1+e^{-b_i a}\right) \] \[\Downarrow\] \[ \frac{1}{n}\sum_{i=1}^n \phi_i(A_i^\top w) =\frac{4}{\gamma n}\sum_{i=1}^n \log\left(1+ e^{-b_i A_i^\top w}\right) \]

63 0-1 loss \[ \phi_i(a)=\left\{\begin{array}{ll} 1 & \quad a\leq 0 \\ 0 & \quad a > 0 \end{array}\right. \]

64 Hinge loss \[ \phi_i(a)=\left\{\begin{array}{ll} 1-a & \quad a\leq 1 \\ 0 & \quad a\geq 1 \end{array}\right. \]

65 Example 3: Smoothed hinge loss \[ \phi_i(a)=\left\{\begin{array}{ll} 1-a-\gamma/2 & \quad a\leq 1-\gamma \\ \displaystyle\frac{(1-a)^2}{2\gamma} & \quad 1-\gamma \leq a \leq 1 \\ 0 & \quad a\geq 1 \end{array}\right. \]

66 Example 4: Squared hinge loss \[ \phi_i(a)=\left\{\begin{array}{ll} \displaystyle \frac{(1-a)^2}{2\gamma} & \quad a\leq 1 \\ 0 & \quad a\geq 1 \end{array}\right. \]
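
The smooth losses from this part, collected as NumPy functions for reference (a sketch; the 0-1 and plain hinge losses are omitted since they are not smooth, so Assumption 1 does not cover them; the labels $b_i$ enter explicitly only where the slides write them):
\begin{verbatim}
import numpy as np

def quadratic_loss(a, b, gamma):
    # Example 1: phi_i(a) = (a - b_i)^2 / (2*gamma)
    return (a - b) ** 2 / (2 * gamma)

def logistic_loss(a, b, gamma):
    # Example 2: phi_i(a) = (4/gamma) * log(1 + exp(-b_i * a)); logaddexp for numerical stability
    return (4.0 / gamma) * np.logaddexp(0.0, -b * a)

def smoothed_hinge_loss(a, gamma):
    # Example 3: linear for a <= 1-gamma, quadratic on [1-gamma, 1], zero for a >= 1
    return np.where(a <= 1 - gamma, 1 - a - gamma / 2,
                    np.where(a <= 1, (1 - a) ** 2 / (2 * gamma), 0.0))

def squared_hinge_loss(a, gamma):
    # Example 4: (1-a)^2 / (2*gamma) for a <= 1, zero otherwise
    return np.where(a <= 1, (1 - a) ** 2 / (2 * gamma), 0.0)
\end{verbatim}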

