Tensorflow: how to minimize under constraints

Question:

I am facing a numerical optimization problem with both equality and inequality constraints. TensorFlow seems to have everything needed for this task, judging from documentation such as https://www.tensorflow.org/api_docs/python/tf/contrib/constrained_optimization .

However, I am missing a minimal working example, and extensive googling has turned up nothing. Can anyone share a useful resource with me? Preferably one running in eager mode.

edit:

I have now found https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/constrained_optimization

I am still welcoming any additional resources.

Asked By: Learning is a mess

Answers:

There have been a few upvotes on this question since I asked it, so I guess more people are looking for a solution. In my case, for my specific problem, I decided to move from TensorFlow to pyomo to run the constrained optimization. Maybe this can help others.

Answered By: Learning is a mess

You can use TFCO which is available for TF > 1.4.

Here is a concrete example where we want to minimize:

(x - 2)^2 + y

s.t.

  • x + y = 1
  • x ≥ 0
  • y ≥ 0

import tensorflow as tf

# Use the GitHub version of TFCO
# !pip install git+https://github.com/google-research/tensorflow_constrained_optimization
import tensorflow_constrained_optimization as tfco

class SampleProblem(tfco.ConstrainedMinimizationProblem):
    def __init__(self, loss_fn, weights):
        self._loss_fn = loss_fn
        self._weights = weights
   
    @property
    def num_constraints(self):
        return 4
   
    def objective(self):
        return self._loss_fn()
   
    def constraints(self):
        x, y = self._weights
        sum_weights = x + y
        # TFCO expects constraints of the form g(x) <= 0; the pair below
        # together enforces the equality x + y = 1.
        lt_or_eq_one = sum_weights - 1  # x + y <= 1
        gt_or_eq_one = 1 - sum_weights  # x + y >= 1
        constraints = tf.stack([lt_or_eq_one, gt_or_eq_one, -x, -y])
        return constraints

x = tf.Variable(0.0, dtype=tf.float32, name='x')
y = tf.Variable(0.0, dtype=tf.float32, name='y')

def loss_fn():
    return (x - 2) ** 2 + y

problem = SampleProblem(loss_fn, [x, y])

optimizer = tfco.LagrangianOptimizer(
    optimizer=tf.optimizers.Adagrad(learning_rate=0.1),
    num_constraints=problem.num_constraints
)

var_list = [x, y] + problem.trainable_variables + optimizer.trainable_variables()

for i in range(10000):
    optimizer.minimize(problem, var_list=var_list)
    if i % 1000 == 0:
        print(f'step = {i}')
        print(f'loss = {loss_fn()}')
        print(f'constraint = {(x + y).numpy()}')
        print(f'x = {x.numpy()}, y = {y.numpy()}')
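As an independent sanity check on the toy problem above (this is not part of the TFCO example, just a cross-check using SciPy's SLSQP solver, assuming scipy is installed): substituting y = 1 - x and accounting for the bounds gives the optimum x = 1, y = 0 with objective value 1.

```python
from scipy.optimize import minimize

def f(v):
    # Same objective as the TFCO example: (x - 2)^2 + y
    x, y = v
    return (x - 2) ** 2 + y

res = minimize(
    f,
    x0=[0.5, 0.5],
    method="SLSQP",
    # Equality constraint x + y = 1; bounds enforce x >= 0, y >= 0.
    constraints=[{"type": "eq", "fun": lambda v: v[0] + v[1] - 1}],
    bounds=[(0, None), (0, None)],
)
print(res.x)  # optimum near x = 1, y = 0
```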
Answered By: MajidL