COMPARATIVE STUDY ON NUMERICAL METHODS FOR CONSTRAINED NONLINEAR OPTIMIZATION PROBLEMS

dc.contributor.author Fanta, Tesfaye
dc.contributor.author Alemayehu, Getinet, Major Advisor (PhD)
dc.date.accessioned 2018-01-28T18:11:42Z
dc.date.available 2018-01-28T18:11:42Z
dc.date.issued 2017-06
dc.identifier.uri http://localhost:8080/xmlui/handle/123456789/1210
dc.description 82 en_US
dc.description.abstract In this project, constrained nonlinear optimization problems subject to equality and inequality constraints were considered. The purpose was to present numerical results from a comparative study, based on convergence, efficiency, and error analysis, of three nonlinear optimization techniques: the Successive Linear Programming (SLP), Sequential Quadratic Programming (SQP), and Generalized Reduced Gradient (GRG) methods. Emphasis was given to the conditions under which the approximate solutions are differentiable functions of the objective or constraints. The existence of a solution to the NLP was proved using the Weierstrass theorem; uniqueness was also proved, so the existing solution is unique. Successive Linear Programming (SLP), also known as the Method of Approximation Programming (MAP), solves nonlinear optimization problems via a sequence of linear programs; each LP subproblem is generated by approximating the nonlinear objective and constraint functions with first-order Taylor series expansions. SLP converges slowly on problems whose optimum is not at a vertex, and it violates the nonlinear constraints until convergence is reached. SQP methods find an approximate solution through a sequence of quadratic programming (QP) subproblems, in which a quadratic model of the objective function is minimized subject to the linearized constraints. SQP is an iterative algorithm that employs a quasi-Newton method to solve the system of equations characterizing a KKT point of the NLP. SQP methods are sensitive to errors in function and gradient evaluations; each iteration takes longer than the corresponding SLP iteration, and SQP is more sensitive to numerical error in derivatives than GRG. The Generalized Reduced Gradient (GRG) methods are algorithms for solving nonlinear programs of general structure; the variables are partitioned into basic and nonbasic variables. GRG requires more function evaluations than SQP. In general, no single method is best for every problem. For these particular test problems, the ranking of the algorithms based on successive errors and on convergence was SLP, SQP, GRG; based on efficiency (number of iterations), the ranking was GRG, SLP, SQP. Keywords: optimality conditions of NLP, Successive Linear Programming (SLP), Sequential Quadratic Programming (SQP), Generalized Reduced Gradient method (GRG). en_US
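To make the SLP scheme summarized in the abstract concrete, the following is a minimal sketch in Python of one way such an iteration can be written: at each iterate, the objective and constraint are replaced by their first-order Taylor expansions and the resulting LP is solved for a step. SciPy's linprog is assumed as the LP solver; the test problem, the step bound delta, and the shrink factor are illustrative choices, not taken from the thesis.

# Minimal SLP sketch (illustrative; not the thesis implementation).
# Test problem: minimize (x1 - 2)^2 + (x2 - 2)^2
#               subject to x1^2 + x2^2 <= 1,
# whose optimum lies on the circle near (0.7071, 0.7071).
import numpy as np
from scipy.optimize import linprog

def f(x):      return (x[0] - 2.0)**2 + (x[1] - 2.0)**2
def grad_f(x): return np.array([2.0*(x[0] - 2.0), 2.0*(x[1] - 2.0)])
def g(x):      return x[0]**2 + x[1]**2 - 1.0        # constraint g(x) <= 0
def grad_g(x): return np.array([2.0*x[0], 2.0*x[1]])

def slp(x0, delta=0.5, shrink=0.9, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # LP subproblem from first-order Taylor expansions at x:
        #   min  grad_f(x)^T d
        #   s.t. g(x) + grad_g(x)^T d <= 0,  |d_i| <= delta
        res = linprog(c=grad_f(x),
                      A_ub=grad_g(x).reshape(1, -1),
                      b_ub=np.array([-g(x)]),
                      bounds=[(-delta, delta)] * x.size)
        if res.status != 0:        # LP infeasible or failed: stop
            break
        x = x + res.x              # take the full LP step
        delta *= shrink            # shrink step bound to force convergence
    return x

print(slp([0.0, 0.0]))   # typically approaches the optimum near (0.7071, 0.7071)

Consistent with the behavior noted in the abstract, the intermediate iterates of this sketch can violate the nonlinear constraint g(x) <= 0; only the limit point satisfies it, and the shrinking step bound is what damps the zigzagging when the optimum is not at a vertex of the linearized feasible region.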
dc.description.sponsorship Haramaya University en_US
dc.language.iso en en_US
dc.publisher Haramaya University en_US
dc.title COMPARATIVE STUDY ON NUMERICAL METHODS FOR CONSTRAINED NONLINEAR OPTIMIZATION PROBLEMS en_US
dc.type Thesis en_US

