This thesis is devoted to numerical solution methods for several classes of nonlinear optimization problems. These methods are motivated by the fact that many optimization problems in practice possess special structure such as convexity, separability and sparsity. Moreover, solving a convex optimization problem with state-of-the-art convex optimization algorithms is more efficient and reliable than solving a nonconvex one. Exploiting such structure together with convex optimization techniques can therefore lead to solution methods that are more efficient and reliable than conventional approaches.

The content of the thesis is divided into two parts. Part I studies the sequential convex programming approach to nonconvex optimization problems, covering both parametric nonlinear programming and nonconvex semidefinite programming. A generic algorithmic framework, which we call adjoint-based predictor-corrector sequential convex programming, is proposed to treat parametric nonconvex optimization problems with general convex constraints. The algorithm rests on three ingredients: sequential convex programming, predictor-corrector path-following and adjoint-based optimization. Stability of the tracking error between the approximate solutions and the true ones is proved. In the absence of parameters, the algorithm reduces to what we call the sequential convex programming (SCP) method for solving nonconvex optimization problems. As special cases of SCP, we develop a generalized inner convex approximation method and a generalized convex-concave decomposition algorithm for a class of nonconvex semidefinite programming problems. We also show applications of these algorithms to static state/output feedback controller design. Numerical results are benchmarked on several standard examples.

Part II deals with decomposition approaches for separable optimization, in both the convex and the nonconvex case.
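The convex-concave decomposition idea from Part I can be illustrated on a toy problem. The sketch below is an illustration under assumed data, not the thesis's algorithm: to minimize f(x) = x^4 - x^2, split f into a convex part x^4 and a concave part -x^2, linearize the concave part at the current iterate, and solve the resulting convex subproblem, which here admits a closed-form solution.

```python
# Toy convex-concave (DC) decomposition sketch -- illustration only,
# not the algorithm developed in the thesis.
# Minimize f(x) = x**4 - x**2 by writing f = g + h with
#   g(x) = x**4   (convex part, kept exactly)
#   h(x) = -x**2  (concave part, linearized at the current iterate x_k)
# The convex subproblem  min_x x**4 - 2*x_k*x  is solved by setting its
# derivative 4*x**3 - 2*x_k to zero, giving x = (x_k / 2)**(1/3).

def convex_concave_step(x_k: float) -> float:
    # Linearize h at x_k: h(x) ~ -x_k**2 - 2*x_k*(x - x_k),
    # then minimize g(x) plus this linearization in closed form.
    return (x_k / 2.0) ** (1.0 / 3.0)

x = 1.0                      # assumed starting point
for _ in range(60):
    x = convex_concave_step(x)

print(x)                     # approaches 1/sqrt(2), a stationary point of f
```

Each iteration solves a convex subproblem; in the general SCP setting this subproblem is a convex program rather than a one-line formula, but the structure of the iteration is the same.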
We develop several decomposition methods for solving separable convex optimization problems. The first class of algorithms is based on two main ingredients: smoothing via prox-functions and the excessive gap technique. Convergence of these algorithms is proved and their convergence rates are estimated. Extensions to the strongly convex case and to inexact variants are also considered. The second class of algorithms makes use of smoothing via self-concordant barrier functions and a path-following scheme. The algorithms developed in this part can be implemented in a parallel or distributed fashion. Several algorithmic variants are tested on numerical examples. We also show how these algorithms extend to the nonconvex case, leading to a two-level decomposition algorithm for solving a class of separable nonconvex optimization problems.
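The decomposition principle can be made concrete with a minimal dual-decomposition sketch on an assumed toy problem (this basic scheme is a classical baseline, not one of the thesis's algorithms): minimizing (x1 - 1)^2 + (x2 - 2)^2 subject to the coupling constraint x1 + x2 = 2. Dualizing the coupling constraint splits the problem into two independent subproblems that could be solved in parallel, while a simple gradient ascent updates the multiplier.

```python
# Toy dual decomposition sketch (illustration only, not the thesis's method).
# Separable problem:  min (x1 - a1)**2 + (x2 - a2)**2  s.t.  x1 + x2 = b.
# Dualizing the coupling constraint with multiplier lam decouples the
# problem into two subproblems, each with a closed-form solution:
#   min_xi (xi - ai)**2 + lam * xi   =>   xi = ai - lam / 2
# The multiplier is updated by gradient ascent on the dual function,
# whose gradient is the residual of the coupling constraint.

a1, a2, b = 1.0, 2.0, 2.0   # assumed toy data
lam = 0.0                   # dual multiplier
step = 0.5                  # dual gradient-ascent step size

for _ in range(80):
    x1 = a1 - lam / 2.0     # subproblem 1 (independent of subproblem 2)
    x2 = a2 - lam / 2.0     # subproblem 2 (parallelizable)
    lam += step * (x1 + x2 - b)   # ascent step on the dual function

print(x1, x2)               # converges to (0.5, 1.5)
```

The smoothing and excessive gap techniques studied in this part can be viewed as replacing such a plain dual gradient scheme with accelerated updates on a smoothed dual, which is what yields the estimated convergence rates.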