Use linear programming in a model

I realized that a fairly elaborate combinatorial computation involved in my model can be reduced to a linear program, which can then be solved very efficiently for particular parameter values. I was wondering if there is a way to represent the solution point of a given linear program directly in pytensor/pymc (I know I can have it as a black box using @op).

Alternatively, I think it should in principle be possible to use the trick I found to do the computation in advance / semi-analytically and derive a piecewise linear function. So my other question is: is there a way to represent a piecewise linear function in pytensor?

Can you write it with nested pt.where (like np.where)?
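For example (a minimal sketch with made-up breakpoints and slopes), a scalar piecewise linear function can be built from nested pt.where calls:

```python
import pytensor.tensor as pt

x = pt.dscalar("x")

# Hypothetical piecewise linear function with knots at 0 and 1:
#   f(x) = 2x           for x < 0
#   f(x) = x            for 0 <= x < 1
#   f(x) = 0.5x + 0.5   for x >= 1
f = pt.where(x < 0, 2 * x, pt.where(x < 1, x, 0.5 * x + 0.5))
```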


If you can write the program in matrix form, you can just use pt.solve and get the answers. If you have binding inequality constraints, though, that won't really help, and you'll need to resort to wrapping a solver in a black box.
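To illustrate (a sketch assuming the set of binding constraints is known and fixed): at an LP vertex the binding constraints hold with equality, so the solution point is just the solution of a square linear system, which pytensor can differentiate through:

```python
import pytensor.tensor as pt
from pytensor.tensor.slinalg import solve

# If we knew which constraints bind at the optimum, the vertex would solve
# the square system A_active @ x = b_active (rows = binding constraints).
A_active = pt.dmatrix("A_active")
b_active = pt.dvector("b_active")

# Differentiable with respect to both A_active and b_active.
x_opt = solve(A_active, b_active)
```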

If you go the black-box route, you should be able to recycle the code from pytensor.minimize.L_op to get adjoint gradients. Or you could open a PR and contribute an Op for scipy.optimize.linprog 🙂
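For reference, a minimal sketch of the black-box route, wrapping scipy.optimize.linprog in a custom Op (the class name and the fixed c/A_ub setup are just illustrative; no gradient is defined, so this restricts you to gradient-free samplers):

```python
import numpy as np
from scipy.optimize import linprog
import pytensor.tensor as pt
from pytensor.graph.op import Op
from pytensor.graph.basic import Apply


class LinProgOp(Op):
    """Hypothetical black-box Op returning argmin_x c @ x s.t. A_ub @ x <= b_ub.

    c and A_ub are fixed at construction; only b_ub is a symbolic input
    (matching the use case above where the parameters enter through b_ub).
    """

    def __init__(self, c, A_ub):
        self.c = np.asarray(c)
        self.A_ub = np.asarray(A_ub)

    def make_node(self, b_ub):
        b_ub = pt.as_tensor_variable(b_ub)
        return Apply(self, [b_ub], [pt.dvector()])

    def perform(self, node, inputs, outputs):
        (b_ub,) = inputs
        res = linprog(self.c, A_ub=self.A_ub, b_ub=b_ub, method="highs")
        outputs[0][0] = res.x
```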


I don’t think it’s possible to write it out in matrix form. Regarding implementing it as an Op, thanks for the link, that helps. My optimization classes were a long time ago, so it’s not clear to me whether it’s easy to work out the gradient with respect to the active constraint parameters; it seems to me the existing code does not apply in this case. In any case, I don’t necessarily need the gradients.

(In my application, the model parameters occur in the b_ub term, the constant term of the inequality constraints.)
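If the parameters only enter through b_ub, one relevant fact: with the HiGHS methods, scipy reports the constraint marginals (dual values), which are the sensitivities of the optimal objective to b_ub, and within a fixed active set the optimal point itself is linear in b_ub. A toy example with made-up numbers:

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: minimize c @ x  s.t.  A_ub @ x <= b_ub,  x >= 0.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0], [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])  # the parameter-dependent term

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")

# res.ineqlin.marginals holds the duals: the sensitivity of the optimal
# objective to each entry of b_ub. Within a fixed active set, the optimal
# point x* itself depends linearly on b_ub.
print(res.x)                  # [3. 1.]
print(res.ineqlin.marginals)  # [-0.5 -0.5]
```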

It does seem so, yes. Although the more I think about it, the more I conclude it is not reasonable to try that in high dimensions; the expression would get huge.