Optimization and Algorithms
November 25, 2021
Mock exam
1. Non-strongly-convex function. (3 points) One of the following functions f : R2 → R is not strongly convex:
(A) f(x1, x2) = |x1 + x2| + x1^2 + (x1 − x2)^2
(B) f(x1, x2) = 4x1^2 + e^(x1+x2) + 4x1x2 + x2^2
(C) f(x1, x2) = (x1 + x2)^2 + |x1| + (x1 − x2)^2
(D) f(x1, x2) = e^(x1−x2) + 4x1^2 + 3x1 − 2x2 − 2x1x2 + x2^2
(E) f(x1, x2) = −3x1x2 + (x1 + 2x2)^2 + max{x1 − x2, 0}
(F) f(x1, x2) = x1 + x1^2 − x2 + x2^2 + log(1 + e^(x1+x2))
Which one?
Write your answer (A, B, C, D, E, or F) here:
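For intuition when checking candidates like these: a twice-differentiable function is strongly convex exactly when the smallest eigenvalue of its Hessian is bounded below by some m > 0 over all of R2 (nonsmooth terms such as |·| need separate reasoning). Sampling the Hessian numerically at many points gives evidence, not a proof. A minimal sketch, using a hypothetical smooth function that is not one of the options above:

```python
import numpy as np

def hessian_fd(f, x, h=1e-4):
    """Central finite-difference Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

# Hypothetical smooth example (NOT one of the exam options):
# f(x1, x2) = 2*x1^2 + x2^2 + exp(x1); its Hessian is diag(4 + e^{x1}, 2),
# so the smallest eigenvalue is 2 everywhere and f is strongly convex.
f = lambda x: 2 * x[0] ** 2 + x[1] ** 2 + np.exp(x[0])

# Sample random points; if the smallest Hessian eigenvalue stayed bounded
# away from 0 on ALL of R^2, the function would be strongly convex.
rng = np.random.default_rng(0)
min_eig = min(np.linalg.eigvalsh(hessian_fd(f, x)).min()
              for x in rng.uniform(-3, 3, size=(50, 2)))
print(min_eig)  # about 2 for this example
```

Note that a favorable sample is only supporting evidence; ruling strong convexity out, however, needs just one direction along which the curvature decays to zero.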
2. True statement about convexity. (2 points) One of the following statements is true:
(A) if f : Rn → R is convex, then f has at least one global minimizer
(B) if f1 : Rn → R and f2 : R → R are both convex functions, then f2 ∘ f1 is convex
(C) if f : Rn → R is strictly convex, then f has exactly one global minimizer
(D) if f1 : Rn → R is strongly convex, f2 : Rn → R is convex, and f2(x) ≥ f1(x) for each x ∈ Rn, then f2 is strongly convex
(E) if f : Rn → R is strictly convex, then f has at most one global minimizer
(F) if f : Rn → R is convex, then f^2 is strongly convex
Which one?
Write your answer (A, B, C, D, E, or F) here:
3. Augmented Lagrangian method. (3 points) Consider the constrained problem

    minimize f(x) subject to h(x) = 0,    (1)

where f : Rn → R and h : Rn → R are differentiable functions.
The augmented Lagrangian method applied to (1) solves, at each iteration, an optimization problem of one of the following forms:
(A)
where λ ∈ R and c > 0
(B)
where λ ∈ R and c > 0
(C)
where λ ∈ R and c > 0
(D)
where λ ∈ R
(E)
where c > 0
(F)
where λ ∈ R and c > 0
Which one?
Write your answer (A, B, C, D, E, or F) here:
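As background for this question: the classical augmented Lagrangian iteration for an equality-constrained problem minimize f(x) subject to h(x) = 0 alternates minimizing f(x) + λ h(x) + (c/2) h(x)^2 over x with the multiplier update λ ← λ + c·h(x). A minimal sketch on a toy instance (the f and h below are illustrative assumptions, not taken from the exam):

```python
import numpy as np

# Toy instance (an assumption, not the exam's problem):
#   minimize    f(x) = ||x||^2
#   subject to  h(x) = x1 + x2 - 1 = 0,
# whose solution is x* = (0.5, 0.5) with multiplier lam* = -1.
f_grad = lambda x: 2 * x
h = lambda x: x[0] + x[1] - 1.0
h_grad = np.array([1.0, 1.0])

lam, c = 0.0, 1.0
x = np.zeros(2)
for _ in range(30):
    # Inner problem: minimize f(x) + lam*h(x) + (c/2)*h(x)^2 over x,
    # solved here by plain gradient descent on the augmented Lagrangian.
    for _ in range(500):
        g = f_grad(x) + (lam + c * h(x)) * h_grad
        x = x - 0.1 * g
    # Multiplier update: lam <- lam + c * h(x)
    lam = lam + c * h(x)

print(x, lam)  # x close to [0.5, 0.5], lam close to -1
```

Unlike the pure penalty method, the multiplier update lets the iterates reach the constrained minimizer without driving c to infinity.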
4. Existence of global minimizers. (4 points) Consider the optimization problem
where the variables to optimize are c ∈ Rn and R ∈ R. The vectors xm and the scalars ωm are given for 1 ≤ m ≤ M, with ωm > 0 for all m. The scalar P is also given and denotes a positive constant: P > 0.
Show that (2) has at least one global minimizer.
5. Smooth control of an uncertain system. (4 points) Consider the optimization problem
where the variables to optimize are u1, . . . , uK, with uk ∈ Rd for 1 ≤ k ≤ K. The vectors ak ∈ Rd and ck ∈ Rd and the scalars bk ∈ R and dk ∈ R are given for 1 ≤ k ≤ K. Also, the scalar U is given and denotes a positive constant: U > 0.
Show that (3) is a convex optimization problem.
6. Penalty method. (4 points) Consider the optimization problem
where the vector s ∈ R2 (s ≠ 0) and the scalar r are given. Assume that the function f is differentiable and strongly convex. Let x* ∈ R2 be the global minimizer of (4).
Consider now the penalized problem
where ck > 0. Let xk* ∈ R2 be the global minimizer of (5).
Assume that (ck)k≥1 is an increasing sequence converging to +∞; that is, 0 < c1 < c2 < c3 < · · · and limk→+∞ ck = +∞. Also, assume that the sequence (xk*)k≥1
converges to some vector x; that is, limk→+∞ xk* = x.
Show that x = x*.
(You cannot invoke theorems about penalty methods; you must prove the equality above by yourself.)
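The behavior to be proved can be observed numerically on a concrete instance. A minimal sketch, assuming f(x) = ||x||^2 and an equality constraint s^T x = r (the exact form of (4) is not reproduced here, so this instance is an illustrative assumption): as c grows, the penalized minimizers approach the constrained minimizer.

```python
import numpy as np

# Illustrative instance (an assumption, not the exam's f):
#   f(x) = ||x||^2, constraint s^T x = r.
s = np.array([1.0, 2.0])
r = 1.0
x_star = r * s / (s @ s)   # constrained minimizer of ||x||^2 on s^T x = r

for c in [1.0, 10.0, 100.0, 1000.0]:
    # Penalized problem: minimize ||x||^2 + (c/2)*(s^T x - r)^2,
    # solved by gradient descent with step 1/L for this quadratic.
    x = np.zeros(2)
    step = 1.0 / (2 + c * (s @ s))
    for _ in range(1000):
        x = x - step * (2 * x + c * (s @ x - r) * s)
    print(c, x, np.linalg.norm(x - x_star))  # gap shrinks as c grows
```

For finite c the penalized minimizer slightly violates the constraint; the proof asked for above must show that in the limit the violation vanishes and the limit point is optimal.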