Linear programming (LP) is a mathematical optimization technique used to find the maximum or minimum value of an objective function subject to constraints represented by linear equations or inequalities. The objective function and the constraints are defined in terms of decision variables, which are the values that are being optimized. The goal of linear programming is to find the values of the decision variables that optimize the objective function, subject to the constraints.
A linear programming problem can be represented mathematically as follows:
Maximize (or Minimize)
$$z = c_1 x_1 + c_2 x_2 + \dots + c_n x_n$$
Subject to:
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n &\le b_1 \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n &\le b_2 \\ &\;\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n &\le b_m \end{aligned}$$
And
$$x_1 \ge 0, \quad x_2 \ge 0, \quad \dots, \quad x_n \ge 0$$
where z is the objective function to be optimized, c1, c2, ..., cn are the coefficients of the objective function, x1, x2, ..., xn are the decision variables, a11, a12, ..., amn are the coefficients of the constraints, b1, b2, ..., bm are the right-hand-side values of the constraints, n is the number of decision variables, and m is the number of constraints. The constraints are linear equations or inequalities that limit the possible values of the decision variables.
Linear programming is widely used in many fields, including finance, engineering, and operations research. It can be used to solve a wide range of optimization problems, including resource allocation, scheduling, and portfolio optimization. The simplex method is the most commonly used algorithm for solving linear programming problems, although other algorithms, such as the interior-point method, are also available.
Linear programming is a powerful tool for prescriptive analytics, as it allows decision makers to find the optimal solution to complex optimization problems, subject to constraints. By using linear programming, decision makers can make informed decisions based on data, mathematical models, and optimization techniques.
In linear programming, the objective function and the constraints are represented by linear equations. A linear equation is an equation in which the variables are multiplied by constant coefficients and added together to give a single expression. For example, the equation 3x + 4y = 12 is a linear equation, where x and y are decision variables.
The objective function is a mathematical expression that defines the quantity being optimized. In a maximization problem, the objective function is written in terms of the decision variables, and the goal is to find the values of the decision variables that maximize the objective function. In a minimization problem, the objective function is written in terms of the decision variables, and the goal is to find the values of the decision variables that minimize the objective function.
The constraints are represented by linear equations or inequalities that limit the possible values of the decision variables. For example, if a decision variable x must be greater than or equal to zero, the constraint can be represented by the inequality x >= 0. Together, the constraints define the feasible region of the problem, which is the set of all combinations of the decision variables that satisfy every constraint.
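To make the feasible region concrete, the short sketch below checks whether a candidate point satisfies a set of constraints of the form Ax <= b together with nonnegativity. The coefficients here are illustrative, not taken from the text:

```python
import numpy as np

# Illustrative constraints (made up for this sketch): 2x + 3y <= 12, x >= 0, y >= 0
A = np.array([[2.0, 3.0]])
b = np.array([12.0])

def is_feasible(point, A, b):
    """Return True if A @ point <= b and point >= 0 (within a small tolerance)."""
    point = np.asarray(point, dtype=float)
    return bool(np.all(A @ point <= b + 1e-9) and np.all(point >= -1e-9))

print(is_feasible([2, 2], A, b))  # 2*2 + 3*2 = 10 <= 12, so True
print(is_feasible([3, 3], A, b))  # 2*3 + 3*3 = 15 >  12, so False
```

Each point that passes this test lies inside the feasible region; an LP solver searches that region for the point with the best objective value.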
Linear programming optimization is performed using various algorithms, including the simplex method, the interior-point method, and the dual simplex method. The choice of algorithm depends on the problem being solved, the computational resources available, and the desired accuracy and efficiency of the solution.
The following example uses SciPy's linprog function to solve a small linear program; note that linprog always minimizes the objective:
from scipy.optimize import linprog
# Define the objective function coefficients (minimize -x0 + 4*x1)
c = [-1, 4]
# Define the inequality constraint coefficients (A_ub @ x <= b_ub)
A = [[-3, 1], [1, 2]]
b = [6, 4]
# Define the bounds for the variables: x0 is free, x1 >= -3
x0_bounds = (None, None)
x1_bounds = (-3, None)
# Solve the linear programming problem ('highs' replaces the deprecated 'simplex' method)
res = linprog(c, A_ub=A, b_ub=b, bounds=[x0_bounds, x1_bounds], method='highs')
# Print the optimization result
print("Optimal value:", res.fun)        # -22.0
print("Optimal variables:", res.x)      # [10. -3.]
Here’s an example of using the simplex method to solve a linear programming problem in Python:
Suppose we have a manufacturing company that produces two products (A and B), and we have the following data:
- The profit per unit of product A is 10, and the profit per unit of product B is 15.
- Each unit of product A requires 2 hours of machine time and 3 hours of labor time, while each unit of product B requires 3 hours of machine time and 4 hours of labor time.
- The company has a total of 50 hours of machine time and 70 hours of labor time available.
We can formulate the linear programming problem as follows:
Maximize: z = 10x1 + 15x2
Subject to:
2x1 + 3x2 <= 50 (machine time constraint)
3x1 + 4x2 <= 70 (labor time constraint)
x1 >= 0, x2 >= 0 (nonnegativity constraints)
where x1 is the number of units of product A produced and x2 is the number of units of product B produced.
The simplex method iteratively selects the variable with the most positive reduced cost (i.e., the variable that provides the greatest increase in the objective function per unit increase) and increases its value until a constraint becomes binding. The process is repeated until no variable has a positive reduced cost, at which point the current solution is optimal.
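The iteration just described can be sketched as a minimal tableau implementation for problems of the form max c^T x subject to Ax <= b, x >= 0 with b >= 0. This is a teaching sketch under those assumptions, not a production solver; the function name and structure are my own:

```python
import numpy as np

def simplex_max(c, A, b):
    """Tableau simplex for: maximize c^T x  s.t.  A x <= b, x >= 0 (assumes b >= 0)."""
    m, n = len(b), len(c)
    # Tableau layout: [A | I | b] for the constraints, [-c | 0 | 0] for the objective row
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -np.asarray(c, dtype=float)
    basis = list(range(n, n + m))              # slack variables form the initial basis
    while True:
        col = int(np.argmin(T[-1, :-1]))       # entering variable: most negative reduced cost
        if T[-1, col] >= -1e-9:
            break                              # optimal: no improving direction remains
        pos = T[:m, col] > 1e-9
        if not pos.any():
            raise ValueError("problem is unbounded")
        ratios = np.full(m, np.inf)
        ratios[pos] = T[:m, -1][pos] / T[:m, col][pos]
        row = int(np.argmin(ratios))           # leaving variable: minimum-ratio test
        T[row] /= T[row, col]                  # pivot to bring `col` into the basis
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]
        basis[row] = col
    x = np.zeros(n)
    for r, var in enumerate(basis):
        if var < n:
            x[var] = T[r, -1]
    return x, T[-1, -1]

# The manufacturing problem above: maximize 10*x1 + 15*x2
x, z = simplex_max([10, 15], [[2, 3], [3, 4]], [50, 70])
print(x, z)  # optimal profit is 250 (this problem has alternative optimal vertices)
```

Each pass of the loop performs exactly one simplex pivot: pick an entering column, pick a leaving row by the minimum-ratio test, and eliminate.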
Here’s an implementation of the simplex method in Python using the PuLP library:
import pulp
# Define the decision variables
x1 = pulp.LpVariable("x1", lowBound=0)
x2 = pulp.LpVariable("x2", lowBound=0)
# Define the linear programming problem
lp_problem = pulp.LpProblem("Manufacturing_Problem", pulp.LpMaximize)
# Define the objective function
lp_problem += 10 * x1 + 15 * x2, "Total_Profit"
# Define the constraints
lp_problem += 2 * x1 + 3 * x2 <= 50, "Machine_Time_Constraint"
lp_problem += 3 * x1 + 4 * x2 <= 70, "Labor_Time_Constraint"
# Solve the linear programming problem
lp_problem.solve()
# Print the results
print("Status:", pulp.LpStatus[lp_problem.status])
print("Optimal Solution:")
print("x1 =", x1.value())
print("x2 =", x2.value())
print("Objective Function Value =", pulp.value(lp_problem.objective))
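As a cross-check on the PuLP solution, the same manufacturing problem can be solved with SciPy's linprog; since linprog minimizes, the profit coefficients are negated:

```python
from scipy.optimize import linprog

# Maximize 10*x1 + 15*x2 by minimizing its negation
c = [-10, -15]
A_ub = [[2, 3],   # machine time: 2*x1 + 3*x2 <= 50
        [3, 4]]   # labor time:   3*x1 + 4*x2 <= 70
b_ub = [50, 70]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method='highs')
print("Maximum profit:", -res.fun)  # 250.0
print("x1, x2:", res.x)
```

Note that this LP has alternative optima (for example, both (10, 10) and (0, 50/3) yield a profit of 250), so the variable values reported by SciPy may differ from PuLP's even though the objective value agrees.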
| Problem Type | Appropriate Optimization Formulation | Example |
|---|---|---|
| Linear Programming (LP) | Linear programming (primal or dual simplex method) | Maximizing profit for a manufacturing company subject to resource constraints. |
| Integer Programming (IP) | Mixed-integer linear programming (simplex method with branch-and-bound) | Scheduling a set of tasks with limited resources and specific start and end times. |
| Nonlinear Programming (NLP) | Nonlinear programming (gradient-based or heuristic methods) | Minimizing the cost of a design subject to constraints on stress and displacement. |
| Convex Optimization | Convex optimization (interior-point method or first-order methods) | Portfolio optimization with constraints on risk and return. |
| Quadratic Programming (QP) | Quadratic programming (primal or dual interior-point method) | Optimizing the parameters of a machine learning algorithm subject to regularization constraints. |
| Second-Order Cone Programming (SOCP) | Second-order cone programming (interior-point method) | Optimizing the parameters of a support vector machine with nonlinear constraints. |
| Semidefinite Programming (SDP) | Semidefinite programming (interior-point method) | Optimizing the parameters of a Gaussian process regression model subject to positive semidefinite constraints on the covariance matrix. |
More generally, an optimization problem that mixes integer and continuous variables can be written as:
$$\begin{aligned} \text{maximize} \quad & f(x) \\ \text{subject to} \quad & g_i(x) \le 0, \quad i = 1, 2, \dots, m \\ & h_j(x) = 0, \quad j = 1, 2, \dots, p \\ & x_k \in \mathbb{Z}, \quad k \in I \\ & x_l \in \mathbb{R}, \quad l \in J \end{aligned}$$
where:
- f(x) is the objective function to be optimized.
- g_i(x) are the inequality constraints.
- h_j(x) are the equality constraints.
- x_k are the integer variables, with k belonging to a set I.
- x_l are the continuous variables, with l belonging to a set J.
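The integrality conditions x_k in Z can be expressed in PuLP by declaring variables with cat="Integer". Here is a minimal mixed-integer sketch; the problem name and coefficients are invented for illustration:

```python
import pulp

# Illustrative mixed-integer problem: x is integer, y is continuous
prob = pulp.LpProblem("MIP_Sketch", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0, cat="Integer")  # x in Z, x >= 0
y = pulp.LpVariable("y", lowBound=0)                 # y in R, y >= 0
prob += 3 * x + 2 * y                                # objective function f(x, y)
prob += 2 * x + y <= 7                               # inequality constraint
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], x.value(), y.value())  # Optimal 0.0 7.0
```

The default CBC solver handles the integrality via branch-and-bound, as listed for integer programming in the table above.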