The SuperSCS Algorithm

# Conic Optimization and HSDE

SCS and SuperSCS solve the following problem:

\begin{eqnarray*} &&\mathrm{Minimize}\ \langle c, x \rangle\\ &&\mathrm{subject\ to}\ Ax + s = b\\ &&s\in\mathcal{K}, \end{eqnarray*}

where $$x\in\mathbb{R}^n$$, $$s\in\mathbb{R}^m$$ are the primal variables and $$\mathcal{K}\subseteq\mathbb{R}^m$$ is a nonempty, closed, convex cone.

The matrix $$A\in\mathbb{R}^{m\times n}$$ and the vector $$b\in\mathbb{R}^m$$ are the problem data.
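For concreteness, here is a toy instance in this form with $$\mathcal{K}=\mathbb{R}^3_+$$; the data are made up for illustration and this is plain NumPy, not the SuperSCS input API.

```python
import numpy as np

# Toy instance of  minimize <c, x>  s.t.  Ax + s = b,  s in K = R^3_+,
# i.e. the linear program  minimize -x1 - 2*x2  subject to
# x1 + x2 <= 4,  x1 >= 0,  x2 >= 0  (the slack s = b - Ax must lie in K).
A = np.array([[ 1.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.array([4.0, 0.0, 0.0])
c = np.array([-1.0, -2.0])   # m = 3 constraints, n = 2 variables
```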

The algorithm makes use of the homogeneous self-dual embedding (HSDE), which is the problem of finding $$u=(x,y,\tau)\in\mathbb{R}^{n+m+1}$$ so that

\begin{eqnarray*} &&u\in\mathcal{C}\\ &&Qu\in\mathcal{C}^*\\ &&\langle u, Qu \rangle = 0, \end{eqnarray*}

where

\begin{eqnarray*} Q = \begin{bmatrix} 0 & A^* & c\\ -A & 0 & b\\ -c^* & -b^* & 0 \end{bmatrix} \end{eqnarray*}

and $$\mathcal{C} = \mathbb{R}^n\times \mathcal{K}^* \times \mathbb{R}_+$$ is a cone.
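Note that, since $$Q$$ is skew-symmetric, $$\langle u, Qu\rangle = 0$$ holds for every $$u$$, so the complementarity condition comes for free once $$u\in\mathcal{C}$$ and $$Qu\in\mathcal{C}^*$$. Below is a short SciPy sketch of assembling $$Q$$; the helper name `hsde_matrix` is ours, not part of the solver.

```python
import numpy as np
import scipy.sparse as sp

def hsde_matrix(A, b, c):
    """Assemble Q = [[0, A', c], [-A, 0, b], [-c', -b', 0]] (sketch)."""
    A = sp.csc_matrix(A)
    bcol = sp.csc_matrix(b).T            # b as an m x 1 column
    ccol = sp.csc_matrix(c).T            # c as an n x 1 column
    return sp.bmat([[None,    A.T,      ccol],
                    [-A,      None,     bcol],
                    [-ccol.T, -bcol.T,  None]], format="csc")

# Q is skew-symmetric, so <u, Qu> = 0 for *every* u, not only solutions:
Q = hsde_matrix(np.eye(2), np.ones(2), np.ones(2))
u = np.arange(1.0, 6.0)                  # any vector in R^{n+m+1} = R^5
assert abs(u @ (Q @ u)) < 1e-12
```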

Equivalently, the HSDE can be written as a variational inequality:

\begin{eqnarray*} 0 \in Qu + N_{\mathcal{C}}(u), \end{eqnarray*}

where $$N_{\mathcal{C}}$$ is the normal cone to $$\mathcal{C}$$. Indeed, since $$\mathcal{C}$$ is a convex cone, $$-Qu \in N_{\mathcal{C}}(u)$$ holds if and only if $$u\in\mathcal{C}$$, $$Qu\in\mathcal{C}^*$$ and $$\langle u, Qu\rangle = 0$$, which are exactly the three HSDE conditions.

# Algorithmic Solution

## Douglas-Rachford Splitting for HSDE

The Douglas-Rachford splitting can be applied to this variational inequality.

As we show below, this yields a fixed-point iteration $$u^{\nu+1}=Tu^{\nu}$$ for a (firmly) nonexpansive operator $$T$$, to which the SuperMann algorithmic scheme can be applied. First, we verify that the operators involved are maximally monotone.

Since $$Q$$ is a skew-symmetric linear operator, it is maximally monotone.

Since $$N_{\mathcal{C}}$$ is the subdifferential of the indicator function $$\delta_{\mathcal{C}}$$ of $$\mathcal{C}$$, which is proper, convex and lower semicontinuous, it is maximally monotone.

Additionally, by Cor. 24.4(i) in [BauCom11], the sum $$Q+N_{\mathcal{C}}$$ is maximally monotone.

Therefore, we may apply the Douglas-Rachford splitting to this monotone inclusion.

The SCS algorithm [ODon16] is precisely the application of the Douglas-Rachford splitting (DRS) to $$N_{\mathcal{C}}{}+{}Q$$; this leads to the following iterations (see Sec. 7.3 in [RyuBoy16]):

\begin{eqnarray*} \tilde{u}^{\nu} &{}={}& (I+Q)^{-1}(u^{\nu}), \\ \bar{u}^{\nu} &{}={}& \Pi_{\mathcal{C}}(2\tilde{u}^{\nu} - u^{\nu}), \\ u^{\nu+1} &{}={}& u^{\nu} {}+{} \bar{u}^{\nu} {}-{} \tilde{u}^{\nu}. \end{eqnarray*}

Here, on the other hand, we consider the splitting in the opposite order, $$Q+N_{\mathcal{C}}$$, which leads to the following DRS iterations:

\begin{eqnarray*} \tilde{u}^{\nu} &{}={}& \Pi_{\mathcal{C}}(u^{\nu}), \\ \bar{u}^{\nu} &{}={}& (I+Q)^{-1}(2\tilde{u}^{\nu} - u^{\nu}), \\ u^{\nu+1} &{}={}& u^{\nu} {}+{} \bar{u}^{\nu} {}-{} \tilde{u}^{\nu}. \end{eqnarray*}
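As an illustration of these iterations, here is a minimal Python sketch under the assumption $$\mathcal{K}=\mathbb{R}^m_+$$, in which case projecting onto $$\mathcal{C}=\mathbb{R}^n\times\mathcal{K}^*\times\mathbb{R}_+$$ amounts to clipping the last $$m+1$$ coordinates; `drs_hsde` and the use of SciPy's `splu` are ours, not the SuperSCS implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def drs_hsde(Q, n, num_iters=1000):
    """Projection-first DRS on the HSDE (sketch; K = R^m_+ assumed)."""
    N = Q.shape[0]
    solve = spla.splu(sp.eye(N, format="csc") + Q).solve  # factor I+Q once
    u = np.zeros(N)
    u[-1] = 1.0                                   # start with tau = 1
    for _ in range(num_iters):
        u_tilde = u.copy()
        u_tilde[n:] = np.maximum(u_tilde[n:], 0.0)  # Pi_C: clip (y, tau)
        u_bar = solve(2.0 * u_tilde - u)            # (I+Q)^{-1}(2u~ - u)
        u = u + u_bar - u_tilde
    return u

# Usage: e.g. Q = hsde_matrix(A, b, c) from the sketch above, n = A.shape[1].
# For a solvable problem with tau = u[-1] > 0 at the limit, a primal
# solution can be recovered by scaling: x = u[:n] / tau (cf. [ODon16]).
```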

For any initial guess $$u^0$$, the iterates $$u^\nu$$ converge to a point $$u^\star$$ which satisfies the above monotone inclusion; see Thm. 25.6(i), (iv) in [BauCom11].

The linear system above, with the constant coefficient matrix $$I+Q$$, can be solved either "directly," using a sparse LDL factorization computed once and reused at every iteration, or "indirectly," by means of the conjugate gradient method [ODon16].
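The sketch below contrasts the two modes in SciPy terms; it only approximates what the solver does. We factor $$I+Q$$ with `splu` as a stand-in for the sparse LDL factorization of [ODon16] (which acts on a symmetric reduction of this system), and we use GMRES in place of conjugate gradients, since CG applies to the positive definite reduced system rather than to $$I+Q$$ itself.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_direct_solver(Q):
    """Factor I + Q once; the returned callable maps r -> (I+Q)^{-1} r."""
    lu = spla.splu((sp.eye(Q.shape[0], format="csc") + Q).tocsc())
    return lu.solve

def indirect_solve(Q, r):
    """Apply (I+Q)^{-1} with a Krylov method, no factorization kept."""
    M = sp.eye(Q.shape[0], format="csc") + Q
    z, info = spla.gmres(M, r)         # GMRES here; SCS runs CG on an
    assert info == 0                   # equivalent SPD reduced system
    return z
```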

The projection onto $$\mathcal{C}$$ essentially requires that we be able to project onto the dual cone $$\mathcal{K}^*$$.
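For instance, here is a sketch of $$\Pi_{\mathcal{C}}$$ under the assumption $$\mathcal{K}=\mathbb{R}^m_+$$, which is self-dual; other cones need their own dual-cone projections.

```python
import numpy as np

# Projection onto C = R^n x K^* x R_+: the first block passes through
# unchanged, the middle block is projected onto the dual cone K^*, and
# the last coordinate is clipped to be nonnegative.
def proj_dual_cone(y):
    return np.maximum(y, 0.0)            # K = K^* = R^m_+ (assumption)

def proj_C(u, n, m):
    x, y, tau = u[:n], u[n:n + m], u[-1]
    return np.concatenate([x, proj_dual_cone(y), [max(tau, 0.0)]])
```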

The iterative method can be concisely written as

\begin{eqnarray*} u^{\nu+1} = Tu^\nu, \end{eqnarray*}

where $$T:\mathbb{R}^{N}\to\mathbb{R}^{N}$$, with $$N=n+m+1$$, is a (firmly) nonexpansive operator [RyuBoy16]. As such, it fits the Krasnosel'skii-Mann framework (see Sec. 5.2 in [BauCom11]), leading to the relaxed iterations

\begin{eqnarray*} u^{\nu+1} {}={} (1-\lambda)u^\nu + \lambda Tu^{\nu}, \end{eqnarray*}

with $$\lambda \in (0,2)$$, since $$T$$ is firmly nonexpansive; consequently, the fixed-point problem for $$T$$ also fits the SuperMann framework [ThePat16].
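A minimal sketch of the relaxed Krasnosel'skii-Mann iteration (ours, for illustration):

```python
import numpy as np

# Relaxed fixed-point steps u <- (1 - lam) u + lam T(u); for firmly
# nonexpansive T, any relaxation lam in (0, 2) preserves convergence.
def km(T, u0, lam=1.5, num_iters=1000, eps=1e-10):
    u = u0.copy()
    for _ in range(num_iters):
        Tu = T(u)
        if np.linalg.norm(u - Tu) <= eps:   # fixed-point residual small
            break
        u = (1.0 - lam) * u + lam * Tu
    return u
```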

## SuperMann

SuperMann [ThePat16] reduces the problem of finding a fixed point $$x^\star{}\in{}\operatorname{fix} T$$ to that of finding a zero of the residual operator

\begin{eqnarray*} R = I-T. \end{eqnarray*}

SuperMann iterations take the general form

\begin{eqnarray*} w^{\nu} &{}={}& u^{\nu} {}+{} \tau_\nu d^\nu, \\ u^{\nu+1} &{}={}& u^\nu - \zeta_{\nu} Rw^\nu, \end{eqnarray*}

where the $$d^{\nu}$$ are fast, e.g., quasi-Newton, directions, and the scalar parameters $$\tau_{\nu}$$ and $$\zeta_{\nu}$$ are chosen appropriately so as to guarantee global convergence.

At each iteration, the method takes either steps that trigger fast convergence (K1 steps) or steps that safeguard global convergence (K2 steps), as shown below.

In addition, a sufficient decrease of the residual norm $$\|Ru^{\nu}\|$$ may trigger a "blind" update of the form $$u^{\nu+1} {}={} u^{\nu} {}+{} d^{\nu}$$ (K0 steps).

SuperSCS Algorithm (a Python sketch of this loop follows the listing):

- Input: $$c_0, c_1, q\in [0,1)$$, $$\sigma\in (0,1)$$, $$u^0\in\mathbb{R}^N$$, $$\lambda\in(0,2)$$ and $$\epsilon>0$$
- $$\eta_0{}\gets{}\|Ru^0\|$$, $$r_{\text{safe}}\gets \eta_0$$
- For $$\nu=0,1,\ldots$$:
    - Check the termination criteria with tolerance $$\epsilon$$
    - Choose a direction $$d^{\nu}$$ and let $$\tau_{\nu}{}\gets{}1$$
    - If $$\|Ru^\nu\| {}\leq{} c_0 \eta_\nu$$ (K0, blind update):
        - $$u^{\nu+1} {}\gets{} u^{\nu} {}+{} d^{\nu}$$ and $$\eta_{\nu+1} {}\gets{} \|Ru^{\nu}\|$$
    - Else, set $$\eta_{\nu+1} {}\gets{} \eta_{\nu}$$ and loop (line search):
        - $$w^\nu {}\gets{} u^\nu {}+{} \tau_\nu d^\nu$$ and $$\rho_{\nu} {}\gets{} \langle Rw^\nu, u^\nu {-} Tw^\nu\rangle$$
        - If $$\|Ru^\nu\| {}\leq{} r_{\text{safe}}$$ and $$\|Rw^\nu\| {}\leq{} c_1 \|Ru^\nu\|$$ (K1 steps): set $$u^{\nu+1} {}\gets{} w^\nu$$, $$r_{\text{safe}} {}\gets{} \|Rw^\nu\| + q^\nu \eta_0$$ and exit the line search
        - Else, if $$\rho_{\nu} {}\geq{} \sigma \|Ru^\nu\| \|Rw^\nu\|$$ (K2 steps): set $$u^{\nu+1} {}\gets{} u^\nu {}-{} \lambda\tfrac{\rho_\nu}{\|Rw^\nu\|^2}Rw^\nu$$ and exit the line search
        - Else, backtrack: $$\tau_{\nu}{}\gets{}{\tau_{\nu}}/{2}$$
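The listing translates into the following Python sketch (ours, not the C implementation). The direction $$d^\nu=-Ru^\nu$$ is a crude placeholder for the quasi-Newton directions used in practice, the parameter values are illustrative, and the termination check simply uses $$\|Ru^\nu\|\leq\epsilon$$ as a stand-in for the solver's criteria.

```python
import numpy as np

def supermann(T, u0, c0=0.99, c1=0.99, q=0.9, sigma=0.1, lam=1.0,
              eps=1e-9, max_iters=500, max_ls=40):
    """SuperMann loop following the listing above (illustrative sketch)."""
    u = np.asarray(u0, dtype=float).copy()
    eta0 = np.linalg.norm(u - T(u))
    eta, r_safe = eta0, eta0
    for nu in range(max_iters):
        Ru = u - T(u)
        nRu = np.linalg.norm(Ru)
        if nRu <= eps:                      # termination criterion
            break
        d = -Ru                             # placeholder direction
        tau = 1.0
        if nRu <= c0 * eta:                 # (K0) blind update
            u = u + d
            eta = nRu
            continue
        # eta is left unchanged; enter the line search
        for _ in range(max_ls):
            w = u + tau * d
            Tw = T(w)
            Rw = w - Tw
            nRw = np.linalg.norm(Rw)
            if nRw == 0.0:                  # w is already a fixed point
                u = w
                break
            rho = Rw @ (u - Tw)
            if nRu <= r_safe and nRw <= c1 * nRu:   # (K1) fast step
                u = w
                r_safe = nRw + q**nu * eta0
                break
            if rho >= sigma * nRu * nRw:            # (K2) safeguard step
                u = u - lam * (rho / nRw**2) * Rw
                break
            tau /= 2.0                      # backtrack

    return u

# Usage on a toy firmly nonexpansive map T = (I + Rot)/2, whose only
# fixed point is the origin (Rot is a plane rotation):
th = 0.9
Rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
u_star = supermann(lambda z: 0.5 * (z + Rot @ z), np.array([5.0, -3.0]))
```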

By exploiting the structure of $$T$$ and, in particular, the fact that $$(I+Q)^{-1}$$ is a linear operator, we may avoid applying $$T$$ at every iteration of the line search loop.

Instead, at every line search iteration we only need to apply $$\Pi_{\mathcal{C}}$$ once.

SuperSCS does not need to solve a linear system within the line search loop.

Overall, apart from the computation of the residuals, every iteration of SuperSCS requires solving the linear system twice and invoking $$\Pi_{\mathcal{C}}$$ only $$1+l_\nu$$ times, where $$l_{\nu}$$ is the number of line search iterations. For instance, an iteration with $$l_\nu=3$$ backtracks costs two linear-system solves and four cone projections.