# Examples

The following examples show how ForBES can be used to model and solve some well-known optimization problems.

## Support vector machines

Training a support vector machine (SVM) can be cast as the following convex optimization problem:

\[\begin{split}\mathrm{minimize}\ &\frac{\lambda}{2}\|x\|_2^2 + \sum_{i=1}^m \max\{0, 1-b_iz_i\},\\
\mathrm{subject\ to}\ &Ax = z\end{split}\]

Therefore we have

\[x_1 = x,\ f(x) = \frac{\lambda}{2}\|x\|_2^2,\ g(z) = \sum_{i=1}^m \max\{0, 1-b_iz_i\},\ A_1 = A,\ B = -I\]

The function \(g\) is the hinge loss, which is provided in the library (see Functions for more information). The problem can therefore be defined as:

```
f = squaredNorm(lambda);
g = hingeLoss(1, b); % vector b contains the labels
out = forbes(f, g, [], [], {A, -1, zeros(m, 1)}); % constraint Ax - z = 0 (A1 = A, B = -I)
```
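As a sanity check on the reformulation, the objective above can also be evaluated directly. The following NumPy sketch is purely illustrative (it is not ForBES code), and the data `A`, `b`, and the weight `lam` are made up for the example:

```python
import numpy as np

def svm_objective(x, A, b, lam):
    """Evaluate (lam/2)*||x||^2 + sum_i max(0, 1 - b_i * (A x)_i)."""
    z = A @ x                                # z = A x, as in the constraint Ax = z
    hinge = np.maximum(0.0, 1.0 - b * z)     # componentwise hinge loss
    return 0.5 * lam * np.dot(x, x) + hinge.sum()

# Tiny example: 3 samples, 2 features
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, -1.0, 1.0])
print(svm_objective(np.zeros(2), A, b, lam=0.1))  # at x = 0 every margin is violated: 3.0
```

The value returned at the solution found by ForBES can be compared against `out` to verify the model was specified as intended.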

## Sparse logistic regression

Consider the following problem

\[\mathrm{minimize}\ \frac{1}{m}\sum_{i=1}^m\log(1+\exp(-b_i \langle a_{i}, x\rangle)) + r\|x\|_1\]

The smooth term in this case is the logistic loss, and the nonsmooth term is the \(\ell_1\) regularization. We then have

\[\begin{split}f(x) &= \frac{1}{m}\sum_{i=1}^m \log(1+\exp(-x_i)) \\
g(x) &= r\|x\|_1 = r\sum_i |x_i|\end{split}\]
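To see why the first term qualifies as the smooth part, one can compute its gradient componentwise:

\[\nabla f(x)_i = -\frac{1}{m}\cdot\frac{1}{1+\exp(x_i)}\]

which is Lipschitz continuous with constant \(1/(4m)\), as required by forward-backward methods.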

This problem can be defined using the functions in the library (see Functions for more information) as follows:

```
f = logLoss(1/m); % logistic loss, scaled by 1/m
g = l1Norm(r);    % l1 regularization with weight r
C = diag(sparse(b))*X; % vector b contains the labels, X is the design matrix
out = forbes(f, g, [], C);
```
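The call above minimizes the composition \(f(Cx)+g(x)\) with \(C=\mathrm{diag}(b)X\), so that \((Cx)_i = b_i\langle a_i, x\rangle\). The same objective can be evaluated directly outside ForBES; here is an illustrative NumPy sketch, with the data `X`, `b`, `x`, and the weight `r` made up for the example:

```python
import numpy as np

def sparse_logreg_objective(x, X, b, r):
    """Evaluate (1/m) * sum_i log(1 + exp(-b_i <a_i, x>)) + r * ||x||_1."""
    m = X.shape[0]
    Cx = (b[:, None] * X) @ x              # C = diag(b) X, so (Cx)_i = b_i <a_i, x>
    f = np.log1p(np.exp(-Cx)).sum() / m    # smooth logistic term, scaled by 1/m
    g = r * np.abs(x).sum()                # l1 regularization
    return f + g

X = np.array([[1.0, 2.0], [3.0, -1.0]])   # design matrix, one sample per row
b = np.array([1.0, -1.0])                 # labels in {-1, +1}
x = np.array([0.5, -0.5])
print(sparse_logreg_objective(x, X, b, r=0.1))
```

Evaluating this at the `x` returned by ForBES gives an easy end-to-end check that the model was set up correctly.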