
Linear programming (LP) is a mathematical optimization technique used to find the maximum or minimum value of an objective function subject to constraints represented by linear equations or inequalities. The objective function and the constraints are defined in terms of decision variables, which are the values that are being optimized. The goal of linear programming is to find the values of the decision variables that optimize the objective function, subject to the constraints.

A linear programming problem can be represented mathematically as follows:

Maximize (or Minimize)

$$z = c_1x_1 + c_2x_2 + \dots + c_nx_n$$

Subject to:

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n &\leq b_1 \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n &\leq b_2 \\ &\;\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n &\leq b_m \end{aligned}$$

And

$$x_1 \geq 0, \quad x_2 \geq 0, \quad \dots, \quad x_n \geq 0$$

where z is the objective function to be optimized; c1, c2, ..., cn are the coefficients of the objective function; x1, x2, ..., xn are the decision variables; a11, a12, ..., amn are the coefficients of the constraints; b1, b2, ..., bm are the right-hand-side values of the constraints; and m is the number of constraints. The constraints are linear equations or inequalities that limit the possible values of the decision variables.

Linear programming is widely used in many fields, including finance, engineering, and operations research. Linear programming can be used to solve a wide range of optimization problems, including resource allocation problems, scheduling problems, and portfolio optimization problems. The simplex method is the most commonly used algorithm for solving linear programming problems, although other algorithms, such as the interior-point method, are also available.

Linear programming is a powerful tool for prescriptive analytics, as it allows decision makers to find the optimal solution to complex optimization problems, subject to constraints. By using linear programming, decision makers can make informed decisions based on data, mathematical models, and optimization techniques.


In linear programming, the objective function and the constraints are represented by linear equations. A linear equation is an equation in which the variables are multiplied by constant coefficients and added together to give a single expression. For example, the equation 3x + 4y = 12 is a linear equation, where x and y are decision variables.
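As a quick sanity check (plain Python, with the points below chosen purely for illustration), whether a point lies on this line can be verified by direct substitution:

```python
# Check whether a point (x, y) satisfies the linear equation 3x + 4y = 12
def on_line(x, y):
    return 3 * x + 4 * y == 12

print(on_line(4, 0))  # True:  3*4 + 4*0 = 12
print(on_line(0, 3))  # True:  3*0 + 4*3 = 12
print(on_line(1, 1))  # False: 3*1 + 4*1 = 7
```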

The objective function is a mathematical expression that defines the quantity being optimized. In a maximization problem, the objective function is written in terms of the decision variables, and the goal is to find the values of the decision variables that maximize the objective function. In a minimization problem, the objective function is written in terms of the decision variables, and the goal is to find the values of the decision variables that minimize the objective function.

The constraints are represented by linear equations or inequalities that limit the possible values of the decision variables. For example, if a decision variable x must be greater than or equal to zero, the constraint can be represented by the inequality x >= 0. The constraints define the feasible region of the problem, which is the set of all possible combinations of the decision variables that satisfy the constraints.
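To make the feasible region concrete, here is a small sketch (using NumPy, with constraint values borrowed from the manufacturing example later in this section) that tests whether a candidate point satisfies every constraint:

```python
import numpy as np

# Constraints in matrix form: A x <= b, together with x >= 0
A = np.array([[2.0, 3.0],
              [3.0, 4.0]])
b = np.array([50.0, 70.0])

def is_feasible(x):
    """Return True if x satisfies A x <= b and x >= 0."""
    x = np.asarray(x, dtype=float)
    return bool(np.all(A @ x <= b) and np.all(x >= 0))

print(is_feasible([10.0, 10.0]))  # True: both constraints hold (with equality)
print(is_feasible([30.0, 10.0]))  # False: 2*30 + 3*10 = 90 > 50
```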

Linear programming optimization is performed using various algorithms, including the simplex method, the interior-point method, and the dual simplex method. The choice of algorithm depends on the problem being solved, the computational resources available, and the desired accuracy and efficiency of the solution.


For example, SciPy's linprog function solves a problem in this form directly:

from scipy.optimize import linprog

# Define the objective function coefficients (linprog minimizes by default)
c = [-1, 4]

# Define the inequality-constraint coefficients (A_ub @ x <= b_ub)
A = [[-3, 1], [1, 2]]
b = [6, 4]

# Define the bounds for the variables
x0_bounds = (None, None)
x1_bounds = (-3, None)

# Solve the linear programming problem; method='highs' replaces the
# legacy 'simplex' option, which has been removed from recent SciPy versions
res = linprog(c, A_ub=A, b_ub=b, bounds=[x0_bounds, x1_bounds], method='highs')

# Print the optimization result
print("Optimal value:", res.fun)
print("Optimal variables:", res.x)

Here’s an example of using the simplex method to solve a linear programming problem in Python:

Suppose we have a manufacturing company that produces two products (A and B), and we have the following data:

  • The profit per unit of product A is 10, and the profit per unit of product B is 15.
  • Each unit of product A requires 2 hours of machine time and 3 hours of labor time, while each unit of product B requires 3 hours of machine time and 4 hours of labor time.
  • The company has a total of 50 hours of machine time and 70 hours of labor time available.

We can formulate the linear programming problem as follows:

Maximize: z = 10x1 + 15x2

Subject to:

2x1 + 3x2 <= 50 (machine time constraint)
3x1 + 4x2 <= 70 (labor time constraint)
x1 >= 0, x2 >= 0 (non-negativity constraints)

where x1 is the number of units of product A produced and x2 is the number of units of product B produced.

The simplex method iteratively selects the nonbasic variable with the largest positive reduced cost (i.e., the variable that provides the greatest increase in the objective function per unit increase in its value) and increases that variable until a constraint becomes binding. The process is repeated until no variable has a positive reduced cost, at which point the current solution is optimal.

Here’s how to model and solve this problem in Python using the PuLP library (PuLP builds the model and hands it to an LP solver such as CBC, which applies the simplex method):

import pulp

# Define the decision variables
x1 = pulp.LpVariable("x1", lowBound=0)
x2 = pulp.LpVariable("x2", lowBound=0)

# Define the linear programming problem
lp_problem = pulp.LpProblem("Manufacturing_Problem", pulp.LpMaximize)

# Define the objective function
lp_problem += 10 * x1 + 15 * x2, "Total_Profit"

# Define the constraints
lp_problem += 2 * x1 + 3 * x2 <= 50, "Machine_Time_Constraint"
lp_problem += 3 * x1 + 4 * x2 <= 70, "Labor_Time_Constraint"

# Solve the linear programming problem
lp_problem.solve()

# Print the results
print("Status:", pulp.LpStatus[lp_problem.status])
print("Optimal Solution:")
print("x1 =", x1.value())
print("x2 =", x2.value())
print("Objective Function Value =", pulp.value(lp_problem.objective))
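As a cross-check (a sketch assuming SciPy is installed; linprog minimizes, so the objective coefficients are negated), the same problem can be solved independently:

```python
from scipy.optimize import linprog

# Same LP as above; negate the coefficients because linprog minimizes
c = [-10, -15]
A_ub = [[2, 3], [3, 4]]
b_ub = [50, 70]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("Maximum profit:", -res.fun)  # optimal objective value is 250
```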

| Problem Type | Appropriate Optimization Formulation | Example |
| --- | --- | --- |
| Linear Programming (LP) | Linear Programming (Primal or Dual Simplex Method) | Maximizing profit for a manufacturing company subject to resource constraints. |
| Integer Programming (IP) | Mixed-Integer Linear Programming (Primal or Dual Simplex Method with Branch-and-Bound) | Scheduling a set of tasks with limited resources and specific start and end times. |
| Nonlinear Programming (NLP) | Nonlinear Programming (Gradient-Based or Heuristic Methods) | Minimizing the cost of a design subject to constraints on stress and displacement. |
| Convex Optimization | Convex Optimization (Interior-Point Method or First-Order Methods) | Portfolio optimization with constraints on risk and return. |
| Quadratic Programming (QP) | Quadratic Programming (Primal or Dual Interior-Point Method) | Optimizing the parameters of a machine learning algorithm subject to regularization constraints. |
| Second-Order Cone Programming (SOCP) | Second-Order Cone Programming (Interior-Point Method) | Optimizing the parameters of a support vector machine with non-linear constraints. |
| Semidefinite Programming (SDP) | Semidefinite Programming (Interior-Point Method) | Optimizing the parameters of a Gaussian process regression model subject to positive semidefinite constraints on the covariance matrix. |

$$\begin{aligned} \text{maximize} & \quad f(x) \\ \text{subject to} & \quad g_i(x) \le 0, \quad i = 1, 2, \dots, m \\ & \quad h_j(x) = 0, \quad j = 1, 2, \dots, p \\ & \quad x_k \in \mathbb{Z}, \quad k \in I \\ & \quad x_l \in \mathbb{R}, \quad l \in J \end{aligned}$$

where:

  • f(x) is the objective function to be optimized.
  • g_i(x) are the inequality constraints.
  • h_j(x) are the equality constraints.
  • x_k are the integer variables, with k belonging to a set I.
  • x_l are the continuous variables, with l belonging to a set J.
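This general mixed-integer form can be sketched with PuLP as well (a minimal illustration; the coefficients below are invented for the example, not taken from the text):

```python
import pulp

# maximize 3x + 2y  subject to  x + y <= 4,  x integer, y continuous
prob = pulp.LpProblem("MIP_Sketch", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0, cat="Integer")  # x_k in Z
y = pulp.LpVariable("y", lowBound=0)                 # x_l in R
prob += 3 * x + 2 * y
prob += x + y <= 4
prob.solve()

print(pulp.LpStatus[prob.status], x.value(), y.value(),
      pulp.value(prob.objective))
```

Because the objective coefficient of x exceeds that of y, the solver puts the full budget into x, giving x = 4, y = 0 and an objective value of 12.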
