- 1.1.1 Definition: Optimization Problem & Minimizer
- 1.1.2 Local Solutions, Global Solutions
- 1.1.3 Multiobjective Optimization
- 1.1.4 Gradient (1st Derivative), Hessian Matrix (2nd Derivative)
- 1.1.5 Slope, Curvature in a Multivariate Problem
- 1.1.6 Linear (Affine) Function l(x)
- 1.1.7 Quadratic Function phi(x)
- 1.1.8 Taylor Series Expansion of a Univariate Function y(x)
- 1.1.8.1 Optimality Condition of the Minimizer x* of a Univariate Function y(x)
- 1.1.9 Taylor Series Expansion of a Multivariate Function f(x)
- 1.1.9.1 Optimality Condition of the Minimizer x* of a Multivariate Function f(x)
- 1.1.9.2 Optimality Condition of a Quadratic Function phi(x)
- 1.2.1 Optimization Problems without Constraints
- 1.2.2 Optimization Problems with Linear Equality Constraints
- 1.2.3 Descent Direction and Optimality Conditions
- 1.2.4 Optimization Problems with Linear Inequality Constraints
- 1.4.1 Numerical Gradient: Difference Quotient
- 1.4.2 Analytical Gradient: Adjoint Variable Method
- 2.2.1 Unconstrained Optimization
- 2.2.1.1 Simplex Method (Polytope Method)
- 2.2.1.2 Pattern Search, Method of Hooke and Jeeves
- 3.1.1 Newton Method (see the sketch after this outline)
- 3.1.2 Quasi-Newton Method
- 4.1.1 Lagrange Method
- 4.1.2 Quadratic Programming
- 4.1.2.1 Method of Elimination
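
As a taste of what section 3.1.1 covers, here is a minimal Newton-method sketch in Python; the function `newton_minimize` and its signature are our own illustration, not code from the notes. It also touches 1.1.7 and 1.1.9.2: for a quadratic function phi(x), the optimality condition reduces to a linear system, so Newton's method reaches the minimizer in a single step.

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method (3.1.1): solve H(x_k) d_k = -g(x_k), then x_{k+1} = x_k + d_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:       # first-order optimality (1.1.9.1): gradient ~ 0
            break
        d = np.linalg.solve(hess(x), -g)  # Newton direction from the local quadratic model
        x = x + d
    return x

# For the quadratic phi(x) = 0.5 x^T A x - b^T x (1.1.7), grad phi = A x - b and
# Hess phi = A, so the optimality condition (1.1.9.2) is the linear system A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = newton_minimize(lambda x: A @ x - b, lambda x: A, np.zeros(2))
print(x_star)  # solves A x = b in one Newton step
```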