To accumulate results from a dynamically unrolled loop, use tf.TensorArray
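A minimal sketch of this pattern (the function name `cumulative_squares` is illustrative, not from the source): each iteration writes into a `tf.TensorArray`, and `stack()` turns the accumulated entries into a single tensor.

```python
import tensorflow as tf

@tf.function
def cumulative_squares(n):
    # Accumulate loop results in a tf.TensorArray instead of a Python list.
    ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)
    for i in tf.range(n):
        ta = ta.write(i, i * i)
    return ta.stack()
```

Because `n` is a tensor, AutoGraph unrolls this dynamically as a `tf.while_loop`, and the `TensorArray` is the graph-friendly accumulator.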
Use tf.TensorSpec(shape=..., dtype=...) in input_signature to avoid proliferation of graphs
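For example (the `normalize` function below is a hypothetical illustration), a relaxed `shape=[None]` lets one traced graph serve inputs of any length, instead of retracing per shape:

```python
import tensorflow as tf

# One concrete graph handles every 1-D float32 input.
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
def normalize(x):
    return x / tf.reduce_max(x)
```

Calling `normalize` with vectors of different lengths reuses the same concrete function rather than creating a new graph each time.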
Avoid creating stateful objects inside autographed functions (create tf.Variable objects outside and pass them in as arguments, or rely on lexical scope and closures)
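A sketch of the closure variant (variable and function names are illustrative): the `tf.Variable` is created once, outside the function, and captured lexically.

```python
import tensorflow as tf

# Created once, outside the tf.function, and captured by closure.
counter = tf.Variable(0)

@tf.function
def increment():
    counter.assign_add(1)
    return counter.read_value()
```

Creating the variable inside the function body would attempt to recreate it on every trace, which `tf.function` rejects.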
Only use Python side effects to debug your traces
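A quick way to see why (hypothetical example): a Python `print` fires only while the function is being traced, whereas `tf.print` is part of the graph and fires on every call.

```python
import tensorflow as tf

@tf.function
def add_one(x):
    print("Tracing!")           # Python side effect: runs only during tracing
    tf.print("Executing", x)    # graph op: runs on every call
    return x + 1
```

So a Python `print` is a handy marker for *when* retracing happens, but it should not be relied on for per-call behavior.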
Iterate over Python data by wrapping it in a tf.data.Dataset (e.g., tf.data.Dataset.from_generator(...) or tf.data.Dataset.from_tensors(...)) and leveraging the for x in y idiom; if possible read data from files via TFRecordDataset/CsvDataset/etc.
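A minimal sketch of the generator variant (the generator and the summing function are illustrative names, not from the source):

```python
import tensorflow as tf

def gen():
    # Plain Python data source, wrapped so the graph can iterate over it.
    for i in range(3):
        yield i

ds = tf.data.Dataset.from_generator(
    gen, output_signature=tf.TensorSpec(shape=(), dtype=tf.int32))

@tf.function
def sum_dataset(ds):
    total = tf.constant(0)
    for x in ds:          # the `for x in y` idiom over a tf.data.Dataset
        total += x
    return total
```

Iterating the dataset inside the `tf.function` keeps the loop in the graph, rather than pulling each element back into Python.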
Notes
Ordering of stateful operations in a tf.function replicates the semantics of Eager mode; there's no need to add manual control dependencies
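As an illustration (hypothetical variable and function names): consecutive assignments to the same variable execute in program order, just as they would eagerly, with no explicit `tf.control_dependencies` needed.

```python
import tensorflow as tf

v = tf.Variable(1.0)

@tf.function
def update():
    v.assign(2.0)
    v.assign_add(3.0)   # guaranteed to run after the assign above
    return v.read_value()
```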
In a Python if statement, if the condition is a tf.Tensor then the original statement is converted to tf.cond. Otherwise, the conditional is simply executed during tracing
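A sketch contrasting the two cases (function and argument names are illustrative): the tensor condition becomes a `tf.cond` in the graph, while the Python `bool` condition is resolved once, at trace time.

```python
import tensorflow as tf

@tf.function
def transform(x, training):
    # `x > 0` is a tf.Tensor: AutoGraph converts this `if` to tf.cond.
    if x > 0:
        x = x * 2
    # `training` is a Python bool: this branch is decided during tracing,
    # and each distinct value of `training` produces its own trace.
    if training:
        x = x + 1
    return x
```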
In a Python if statement, if one branch creates a tf.Tensor used downstream, the other branch must also create it
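For example (a hypothetical function): since the `if` becomes a `tf.cond`, both branches must define `y`, otherwise the converted conditional has no value for it on one path.

```python
import tensorflow as tf

@tf.function
def square_if_positive(x):
    if x > 0:
        y = x * x
    else:
        y = tf.zeros_like(x)   # the else branch must also define y
    return y
```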
If you have a break or early return clause that depends on a tf.Tensor, the top-level condition or iterable should also be a tensor
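A sketch of this rule (function and argument names are illustrative): the `break` depends on tensor values, so the loop iterates over `tf.range` (a tensor) rather than a Python `range`, keeping the whole loop in the graph.

```python
import tensorflow as tf

@tf.function
def first_index_at_least(values, threshold):
    index = tf.constant(-1)
    # The break condition is tensor-dependent, so the iterable is a tensor too.
    for i in tf.range(tf.shape(values)[0]):
        if values[i] >= threshold:
            index = i
            break
    return index
```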
The shapes/dtypes of all loop variables must stay consistent across iterations
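When a loop variable genuinely must change shape, one escape hatch is to relax its shape invariant with `tf.autograph.experimental.set_loop_options` (sketch below; the function name is illustrative):

```python
import tensorflow as tf

@tf.function
def growing_concat(n):
    x = tf.constant([0])
    for _ in tf.range(n):
        # x grows on every iteration, so declare a relaxed shape invariant.
        tf.autograph.experimental.set_loop_options(
            shape_invariants=[(x, tf.TensorShape([None]))])
        x = tf.concat([x, [0]], axis=0)
    return tf.shape(x)[0]
```

Without the relaxed invariant, the converted `tf.while_loop` would reject `x` for changing shape between iterations.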
# all Graph functions will be eagerly executed
tf.config.experimental_run_functions_eagerly(True)
# do not get autographed
@tf.function(autograph=False)
def f():
    pass