I am trying to implement max drawdown for my loss function, using code of the form:
x = cumulative product of returns tensor
z = cumulative max of x
g
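The g line is truncated above; for concreteness, here is the same pipeline sketched in NumPy (a running maximum is available there as np.maximum.accumulate), with the final drawdown step filled in using one common definition rather than anything from the original post:

import numpy as np

returns = np.array([1.01, 0.98, 1.03, 0.95, 1.02])  # example per-period return factors
x = np.cumprod(returns)               # cumulative product of returns
z = np.maximum.accumulate(x)          # cumulative max of x
drawdown = 1.0 - x / z                # assumed definition: fractional drop below the running peak
max_drawdown = drawdown.max()         # max drawdown over the whole series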
Here's an implementation of cumulative_max using a TensorFlow while loop, which takes n = len(x) iterations. The code is copy-paste runnable as an example.
import tensorflow as tf  # TF1-style graph API (uses tf.Session)

def tf_while_condition(x, loop_counter):
    # Keep looping until the counter reaches zero.
    return tf.not_equal(loop_counter, 0)

def tf_while_body(x, loop_counter):
    loop_counter -= 1
    # Shift x right by one position (repeating the first element) and take the
    # elementwise max; each pass propagates the running max one step further, so
    # after n passes every element holds the max of everything up to and
    # including its own position.
    y = tf.concat(([x[0]], x[:-1]), axis=0)
    z = tf.maximum(x, y)
    return z, loop_counter

x = tf.constant([0, 2, 5, 3, 8, 1, 7])

cumulative_max, _ = tf.while_loop(cond=tf_while_condition,
                                  body=tf_while_body,
                                  loop_vars=(x, x.shape[0]))

with tf.Session() as sess:
    print(sess.run(cumulative_max))
Result:
[0 2 5 5 8 8 8]
Note: If you have a large vector to compute and you don't need backprop, it's probably worthwhile to include back_prop=False in the tf.while_loop call.
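Reusing the definitions from the snippet above, that just means adding the keyword argument to the same call:

cumulative_max, _ = tf.while_loop(cond=tf_while_condition,
                                  body=tf_while_body,
                                  loop_vars=(x, x.shape[0]),
                                  back_prop=False)  # don't build gradient ops for the loop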
A key to understanding TF while loops is that your Python-based functions, tf_while_condition and tf_while_body, are only called once, to produce the relevant TensorFlow operations; those two functions are NOT called in a loop. The operations they return are executed in a loop within the TensorFlow graph during sess.run computations.
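One way to convince yourself of this is to put a plain Python print inside the body function: it fires exactly once, while the graph is being built, and never during sess.run. A minimal sketch using the same TF1-style API as above:

import tensorflow as tf

def body(x, i):
    print("tracing tf_while_body")   # plain Python print: runs once, at graph-construction time
    y = tf.concat(([x[0]], x[:-1]), axis=0)
    return tf.maximum(x, y), i - 1

x = tf.constant([0, 2, 5, 3, 8, 1, 7])
result, _ = tf.while_loop(lambda x, i: i > 0, body, (x, tf.size(x)))
# "tracing tf_while_body" has already been printed exactly once by this point;
# the loop itself only runs when the graph is executed below.
with tf.Session() as sess:
    print(sess.run(result))          # [0 2 5 5 8 8 8]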