Sequential Quadratic Programming (SQP)
A gradient-based iterative optimization method that some theoreticians consider the best method for nonlinear problems. In HyperStudy, Sequential Quadratic Programming has been further developed to suit engineering problems.
Usability Characteristics
 A gradient-based method; therefore it will most likely find a local optimum.
 One iteration of Sequential Quadratic Programming requires a number of simulations. The number of simulations required is a function of the number of input variables, since a finite difference method is used for gradient evaluation. As a result, it may be an expensive method for applications with a large number of input variables.
 Sequential Quadratic Programming terminates when one of the conditions below is met:
  One of the two convergence criteria is satisfied:
   Termination Criteria, based on the Karush-Kuhn-Tucker conditions.
   Input variable convergence.
  The maximum number of allowable iterations (Maximum Iterations) is reached.
  An analysis fails and the Terminate optimization option is the default (On Failed Evaluation).
 The number of evaluations in each iteration is set automatically and varies due to the finite difference calculations used in the sensitivity computation; it is dependent on the number of variables and the Sensitivity setting. The evaluations required for the finite differences are executed in parallel. The evaluations required for the line search are executed sequentially.
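To see why the per-iteration cost grows with the number of input variables, consider a forward finite-difference gradient: it needs one perturbed evaluation per variable on top of the baseline run. The sketch below (illustrative Python, not HyperStudy code; names and step size are assumptions) counts those evaluations; because the perturbed runs are independent of each other, a tool can execute them in parallel.

```python
# Sketch of one finite-difference gradient evaluation, assuming forward
# differences: 1 baseline run + n perturbed runs for n input variables.

def forward_fd_gradient(f, x, dx=1e-4):
    """Forward finite-difference gradient of f at x.

    Needs len(x) evaluations beyond f(x); the perturbed evaluations are
    independent, so they can run in parallel."""
    f0 = f(x)
    grad = []
    for j in range(len(x)):
        xp = list(x)
        xp[j] += dx
        grad.append((f(xp) - f0) / dx)
    return grad

calls = 0
def f(x):
    """Toy response with a known gradient [2*x0, 3]."""
    global calls
    calls += 1
    return x[0] ** 2 + 3.0 * x[1]

g = forward_fd_gradient(f, [1.0, 2.0])
print(g, calls)  # gradient near [2, 3]; 1 + n = 3 evaluations
```

Central differencing (see the Sensitivity setting) would instead cost 2n perturbed evaluations, which is the computational-effort trade-off the documentation refers to.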
Settings
Maximum Iterations (Default: 25; Range: > 0)
Maximum number of iterations allowed.

Design Variable Convergence (Default: 0.0; Range: >= 0.0)
Input variable convergence parameter. The design has converged when there are two consecutive designs for which the change in each input variable is less than both (1) Design Variable Convergence times the difference between its bounds, and (2) Design Variable Convergence times the absolute value of its initial value (simply Design Variable Convergence if its initial value is zero). There also must not be any constraint whose allowable violation is exceeded in the last design. With γ denoting the Design Variable Convergence value:
Note: A larger value allows for faster convergence, but worse results could be achieved.
$$\begin{array}{l}\left\{\begin{array}{l}\left|x_{j}^{i}-x_{j}^{i-1}\right|<\gamma \cdot \left(x_{j}^{U}-x_{j}^{L}\right)\\ \left\{\begin{array}{l}\left|x_{j}^{i}-x_{j}^{i-1}\right|<\gamma \cdot \left|x_{j}^{0}\right|,\quad \text{if }x_{j}^{0}\ne 0\\ \left|x_{j}^{i}-x_{j}^{i-1}\right|<\gamma ,\quad \text{if }x_{j}^{0}=0\end{array}\right.\\ c_{\mathrm{max}}^{k}\le g_{\mathrm{max}}\end{array}\right.\\ i=k,\,k-1;\quad j=1,2,\ldots ,n\end{array}$$
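The per-variable part of the criterion above can be expressed as a simple check. This is an illustrative sketch (names are assumptions, and it tests a single pair of designs; the full criterion also requires the previous pair, i = k, k-1, to pass and the maximum constraint violation to stay within its allowable):

```python
# Sketch of the input-variable convergence test; gamma plays the role of
# the Design Variable Convergence value.

def has_converged(prev, curr, lower, upper, initial, gamma):
    """True if every variable's change is below both thresholds."""
    for j in range(len(curr)):
        step = abs(curr[j] - prev[j])
        bound_tol = gamma * (upper[j] - lower[j])
        init_tol = gamma * abs(initial[j]) if initial[j] != 0 else gamma
        if not (step < bound_tol and step < init_tol):
            return False
    return True

# Two consecutive designs whose change is tiny relative to the bounds:
print(has_converged(prev=[1.0, 2.0], curr=[1.0001, 2.0001],
                    lower=[0, 0], upper=[10, 10],
                    initial=[1.0, 2.0], gamma=0.01))  # True
```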

On Failed Evaluation (Default: Terminate optimization)
Defines the optimizer's behavior when an analysis run fails. With the default, Terminate optimization, the optimization stops; with Ignore failed evaluations, failures are tolerated up to Max Failed Evaluations.

Termination Criteria (Default: 1.0e-4; Range: > 0.0)
Defines the termination criterion; relates to satisfaction of the Kuhn-Tucker condition of optimality. Recommended range: 1.0E-3 to 1.0E-10. In general, smaller values result in higher solution precision, but more computational effort is needed. For the nonlinear optimization problem:
$$\begin{array}{ll}\min & f(x)\\ \mathrm{s.t.} & g_{i}(x)\le 0\\ & h_{j}(x)=0\end{array}\qquad i=1,\ldots ,m;\quad j=1,\ldots ,l$$
Sequential Quadratic Programming has converged if:
$$\left|S^{T}\cdot \nabla f\right|+\sum _{i=1}^{m}\left|\mu _{i}\cdot g_{i}\right|+\sum _{j=1}^{l}\left|\lambda _{j}\cdot h_{j}\right|\le \Delta$$
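The left-hand side of the Kuhn-Tucker measure above combines the projection of the gradient onto the search direction S with the complementarity terms of the inequality and equality constraints. A sketch of that residual, with illustrative values (not produced by HyperStudy), for the default threshold Δ = 1.0e-4:

```python
# Sketch of the Kuhn-Tucker termination measure:
# |S^T . grad f| + sum_i |mu_i * g_i| + sum_j |lambda_j * h_j| <= Delta

def kkt_residual(S, grad_f, mu, g, lam, h):
    """Residual of the Kuhn-Tucker optimality measure."""
    dot = sum(s * d for s, d in zip(S, grad_f))       # S^T . grad f
    comp_ineq = sum(abs(m * gi) for m, gi in zip(mu, g))
    comp_eq = sum(abs(l * hj) for l, hj in zip(lam, h))
    return abs(dot) + comp_ineq + comp_eq

# Near a KKT point: the search direction is nearly orthogonal to the
# gradient, and the complementarity terms are nearly zero.
r = kkt_residual(S=[1e-6, 0.0], grad_f=[2.0, 3.0],
                 mu=[0.5], g=[-1e-8], lam=[0.1], h=[1e-8])
print(r < 1.0e-4)  # True -> converged at the default Termination Criteria
```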

Sensitivity (Default: Forward FD)
Defines the way the derivatives of output responses with respect to input variables are calculated.
1. Forward FD:
$$df/dx=\left(f(x+dx)-f(x)\right)/dx$$
2. Central FD:
$$df/dx=\left(f(x+dx)-f(x-dx)\right)/\left(2\,dx\right)$$
3. A second-order scheme:
$$df/dx=\left(f(x)+\tfrac{1}{3}f(x+dx)-\tfrac{4}{3}f(x-0.5\,dx)\right)/dx$$
Tip: For higher solution precision, option 2 or 3 can be used, but more computational effort is consumed.
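The precision difference the Tip refers to can be seen on a toy function. The sketch below (illustrative only; function names are assumptions, and the option numbering follows the Tip) applies the three formulas to f(x) = x³ at x = 1, whose true derivative is 3; the forward scheme has a first-order error in dx, while the other two are second-order accurate:

```python
# The three sensitivity formulas applied to f(x) = x^3 at x = 1.

def f(x):
    return x ** 3

def forward_fd(f, x, dx):          # option 1
    return (f(x + dx) - f(x)) / dx

def central_fd(f, x, dx):          # option 2
    return (f(x + dx) - f(x - dx)) / (2 * dx)

def second_order_fd(f, x, dx):     # option 3
    return (f(x) + f(x + dx) / 3 - 4 * f(x - 0.5 * dx) / 3) / dx

dx = 1e-3
print(forward_fd(f, 1.0, dx))       # ~3.003    (error of order dx)
print(central_fd(f, 1.0, dx))       # ~3.000001 (error of order dx^2)
print(second_order_fd(f, 1.0, dx))  # ~3.0000005 (error of order dx^2)
```

Note that option 2 costs two extra evaluations per variable and option 3 also uses two off-baseline points, which is where the extra computational effort comes from.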

Max Failed Evaluations (Default: 20,000; Range: >= 0)
When On Failed Evaluation is set to Ignore failed evaluations, the optimizer will tolerate failures until this threshold is reached. This option is intended to allow the optimizer to stop after an excessive number of failures.

Use Perturbation Size (Default: No; Range: No or Yes)
Enables the use of Perturbation Size; otherwise an internal, automatically computed perturbation size is used.

Perturbation Size (Default: 0.0001; Range: > 0.0)
Defines the size of the finite difference perturbation. For a variable x with upper and lower bounds (xu and xl, respectively), the perturbation is scaled to preserve reasonable perturbation sizes across a range of variable magnitudes.

Use Inclusion Matrix (Default: No)