The Optimization Problem Formulation
Optimization algorithms work to minimize (or maximize) an objective function subject to constraints on design variables and responses.
Minimize:  $\psi_0(x,b)$  (objective function)
Subject to:  $\psi_i(x,b)\ge 0,\quad i=1,\dots,p$  (inequality constraints)
$\psi_i(x,b)=0,\quad i=p+1,\dots,m$  (equality constraints)
$b_L \le b \le b_U$  (design limits)

In this formulation, the functions $\psi_k(x,b),\ k=0,\dots,m$ are defined in terms of the following quantities:
 $b$ is an $n$-dimensional vector of real-valued design variables
 ${b}_{L}$ and ${b}_{U}$ are the lower and upper bounds, respectively, on the design variables
 $x$ is the set of states that the solver uses to represent the system
 ${\psi}_{k0}(x,b)$ is the value of the function from a previous simulation. For the first simulation, it is always zero.
The goal of the optimization effort is to minimize the objective function $\psi_0(x,b)$ while satisfying the constraints $\psi_k(x,b),\ k=1,\dots,m$. Constraints are assumed to be inherently nonlinear. They can be either inequality or equality constraints.
${b}_{L}$ and ${b}_{U}$ define the lower and upper bounds for the elements of $b$ . The set of all allowable values of $b$ is known as the design space for the problem. A design point or a sample point is a particular set of values within the design space.
A design point is said to be feasible if and only if it satisfies all the constraints. Correspondingly, a design point is said to be infeasible if it violates one or more of the constraints. Our aim, of course, is to find a feasible design. Sometimes, due to the presence of constraints, this may not be possible.
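This standard formulation maps directly onto general-purpose optimizers. As a minimal sketch, the problem can be posed with SciPy's `scipy.optimize.minimize`, which uses the same $\psi_i \ge 0$ convention for inequality constraints and accepts per-variable bounds; the objective and constraint functions below are hypothetical toy examples, not taken from the text:

```python
from scipy.optimize import minimize

# Hypothetical objective psi_0(b): distance from the point (1, 2).
# (SciPy calls the design vector "x"; this document calls it "b".)
def psi_0(b):
    return (b[0] - 1.0) ** 2 + (b[1] - 2.0) ** 2

# Hypothetical inequality constraint psi_1(b) >= 0, i.e. b0 + b1 <= 2.
def psi_1(b):
    return 2.0 - b[0] - b[1]

constraints = [{"type": "ineq", "fun": psi_1}]   # "eq" would give psi_i = 0
bounds = [(0.0, 5.0), (0.0, 5.0)]                # b_L <= b <= b_U, per element
b_initial = [0.0, 0.0]                           # initial design point

result = minimize(psi_0, b_initial, bounds=bounds, constraints=constraints)
# result.x is a feasible minimizer; result.fun is psi_0 at that point.
```

Here the unconstrained minimum $(1, 2)$ is infeasible, so the optimizer returns a point on the constraint boundary instead, illustrating how constraints shape the final design.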
 An initial value for the design variables $b$ is provided to the optimizer.
 The response quantities ${\psi}_{k}(x,b),\text{}k=0,\mathrm{...},m$ are computed by running a simulation.
 An algorithm, often a sensitivity-based method, is applied to generate a new $b$ that reduces the objective function, reduces the amount of infeasibility, or both.
 When a sensitivity-based method is used, the optimizer also needs to compute the sensitivity of the functions $\psi_k(x,b),\ k=0,\dots,m$ with respect to the design $b$. This means the optimizer requires the matrix of partial derivatives, $\left[\frac{\partial \psi (x,b)}{\partial b}\right]$.
 In an iterative fashion, new designs ($b$) are generated by the optimizer until it determines that a minimum has been found or the iteration limits have been exceeded.
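When analytic derivatives are unavailable, the sensitivity matrix $\left[\partial \psi / \partial b\right]$ in the steps above is commonly estimated by finite differences, at the cost of one extra simulation per design variable. The following sketch assumes a placeholder `simulate` function standing in for the real solver run (the specific response functions are invented for illustration):

```python
import numpy as np

def simulate(b):
    """Placeholder for a solver run: returns the responses psi_k(x, b).
    Here psi[0] is a toy objective and psi[1] a toy constraint value."""
    return np.array([b[0] ** 2 + b[1] ** 2,   # psi_0: objective
                     b[0] + b[1] - 1.0])      # psi_1 >= 0: constraint

def sensitivity(b, h=1e-6):
    """Forward-difference estimate of the matrix [d psi_k / d b_j]."""
    psi = simulate(b)
    J = np.zeros((psi.size, b.size))
    for j in range(b.size):
        b_pert = b.copy()
        b_pert[j] += h                        # perturb one design variable
        J[:, j] = (simulate(b_pert) - psi) / h
    return J
```

Each column of `J` requires one additional call to `simulate`, which is why sensitivity-based methods become expensive as the number of design variables grows.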
In some instances you may want to maximize the value of a certain objective. Without any loss of generality, you can convert it to a minimization problem by simply negating the objective you calculate. Thus, if you want to maximize a function $\chi (x,b)$, you can convert it to a minimization problem by defining the cost function as $\psi (x,b)=-\chi (x,b)$.
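The negation trick can be sketched in a few lines; `chi` below is a hypothetical function with a single peak, and `minimize_scalar` is a standard SciPy routine used here purely for illustration:

```python
from scipy.optimize import minimize_scalar

# chi is a hypothetical objective we want to MAXIMIZE (peak at b = 3).
def chi(b):
    return 5.0 - (b - 3.0) ** 2

# Negating chi converts the problem to a standard minimization.
def psi(b):
    return -chi(b)

result = minimize_scalar(psi)
# result.x is the maximizer of chi; -result.fun recovers the maximum value.
```

The minimizer of $\psi$ coincides with the maximizer of $\chi$, and the maximum of $\chi$ is simply the negative of the minimum of $\psi$.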