Mastering Nonlinear Programming with Python: Two Practical Optimization Cases
This article introduces nonlinear programming, explains why it is harder than linear programming, and works through two concrete optimization examples solved with Python's SciPy library and its SLSQP solver, complete with code and interpretation of the results.
Nonlinear Programming
Nonlinear programming refers to optimization problems where the objective function and/or constraints contain nonlinear functions. Unlike linear programming, these problems require more complex algorithms because the presence of nonlinearities makes finding the optimum more difficult.
In nonlinear programming, the nonlinear functions can take any form, such as polynomial, trigonometric, exponential, or logarithmic functions. Solving such problems typically relies on numerical optimization algorithms like gradient descent, Newton's method, or quasi‑Newton methods.
Example 1
An example of a nonlinear programming problem: minimize the quadratic objective (x₁ − 1)² + (x₂ − 2.5)² while keeping the decision variables inside a circle of radius 3 centered at the origin, i.e. x₁² + x₂² ≤ 9.
One common algorithm is gradient descent, which iteratively steps in the direction opposite to the gradient until convergence.
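To make the idea concrete, here is a minimal gradient‑descent sketch applied to the quadratic objective of this example, ignoring the circle constraint (which, as we will see, is inactive at the optimum). The step size 0.1 and the iteration budget are illustrative choices, not part of the original problem:

```python
import numpy as np

# Objective from Example 1: f(x) = (x0 - 1)^2 + (x1 - 2.5)^2
def objective(x):
    return (x[0] - 1)**2 + (x[1] - 2.5)**2

def gradient(x):
    # Analytic gradient of the quadratic objective
    return np.array([2*(x[0] - 1), 2*(x[1] - 2.5)])

x = np.zeros(2)   # starting point
lr = 0.1          # step size (assumed; tune per problem)
for _ in range(200):
    step = lr * gradient(x)
    x = x - step                        # move opposite to the gradient
    if np.linalg.norm(step) < 1e-8:     # stop once steps become tiny
        break

print(x)  # converges toward the unconstrained minimum [1.0, 2.5]
```

For constrained problems like the one below, plain gradient descent is not enough, which is why the article switches to SLSQP.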
The following Python code solves this example using scipy.optimize.minimize with the SLSQP method.
<code>import numpy as np
from scipy.optimize import minimize

# Objective: squared distance from the point (1, 2.5)
def objective(x):
    return (x[0] - 1)**2 + (x[1] - 2.5)**2

# Inequality constraint (>= 0): stay inside the circle x0^2 + x1^2 <= 9
def constraint(x):
    return 9 - x[0]**2 - x[1]**2

x0 = np.array([0.0, 0.0])
solution = minimize(objective, x0, method='SLSQP',
                    constraints={'fun': constraint, 'type': 'ineq'})
print(solution)
</code>The solver reports success with optimal variables x ≈ [1.0, 2.5] and objective value 0. This is expected: the unconstrained minimum (1, 2.5) already lies inside the circle (1² + 2.5² = 7.25 ≤ 9), so the constraint is inactive at the optimum.
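For programmatic use, the individual fields of the returned OptimizeResult object are more convenient than printing the whole object. A short sketch, repeating the setup above:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1)**2 + (x[1] - 2.5)**2

def constraint(x):
    return 9 - x[0]**2 - x[1]**2

solution = minimize(objective, np.array([0.0, 0.0]), method='SLSQP',
                    constraints={'fun': constraint, 'type': 'ineq'})

# Inspect the OptimizeResult fields individually
print(solution.success)  # True when SLSQP converged
print(solution.x)        # optimal variables, approximately [1.0, 2.5]
print(solution.fun)      # optimal objective value, approximately 0
```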
Example 2
The second example is a slightly more complex nonlinear program: the objective is linear, but the constraints include nonlinear (quadratic) functions. It can likewise be solved with scipy.optimize.minimize using SLSQP.
Python implementation:
<code>import numpy as np
from scipy.optimize import minimize

# Linear objective
def objective(x):
    return 5*x[0] - 2*x[1] + x[2]

# Equality constraint: x0 + x1 + x2 = 1
def constraint1(x):
    return x[0] + x[1] + x[2] - 1

# Inequality (>= 0): x0^2 + x1^2 + x2^2 <= 1
def constraint2(x):
    return 1 - x[0]**2 - x[1]**2 - x[2]**2

# Inequality (>= 0): x0 >= x2^2
def constraint3(x):
    return x[0] - x[2]**2

# Inequality (>= 0): x0 + x1 + x2 <= 1 (already implied by constraint1)
def constraint4(x):
    return -x[0] - x[1] - x[2] + 1

x0 = np.array([0.0, 0.0, 0.0])
solution = minimize(objective, x0, method='SLSQP', constraints=[
    {'fun': constraint1, 'type': 'eq'},
    {'fun': constraint2, 'type': 'ineq'},
    {'fun': constraint3, 'type': 'ineq'},
    {'fun': constraint4, 'type': 'ineq'}
])
print(solution)
</code>The output indicates a successful optimization with optimal variables approximately [-3.26e-10, 1.0, -2.53e-07], i.e. essentially [0, 1, 0], and objective value -2.0. Intuitively, the objective rewards making x₁ large and x₀, x₂ small, and [0, 1, 0] is the point on the unit sphere with coordinate sum 1 that achieves this.
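A quick sanity check confirms that the rounded optimum [0, 1, 0] satisfies every constraint and reproduces the reported objective value:

```python
import numpy as np

x = np.array([0.0, 1.0, 0.0])   # reported optimum, rounded

# Verify feasibility against each constraint
assert abs(x.sum() - 1) < 1e-9      # equality: x0 + x1 + x2 = 1
assert 1 - (x**2).sum() >= -1e-9    # quadratic ball: sum of squares <= 1
assert x[0] - x[2]**2 >= -1e-9      # x0 >= x2^2

print(5*x[0] - 2*x[1] + x[2])       # -2.0, matching the solver's objective
```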
Model Perspective
Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".