diff --git a/frameworks.qmd b/frameworks.qmd
index 972916b7..d3d8066c 100644
--- a/frameworks.qmd
+++ b/frameworks.qmd
@@ -226,7 +226,10 @@ Deep learning frameworks have traditionally followed one of two approaches for e
 
 For example:
 
-```{{python}} x = tf.placeholder(tf.float32) y = tf.matmul(x, weights) + biases ```
+```{.python}
+x = tf.placeholder(tf.float32)
+y = tf.matmul(x, weights) + biases
+```
 
 The model is defined separately from execution, like building a blueprint. For TensorFlow 1.x, this is done using tf.Graph(). All ops and variables must be declared upfront. Subsequently, the graph is compiled and optimized before running. Execution is done later by feeding in tensor values.
 
@@ -234,7 +237,10 @@ The model is defined separately from execution, like building a blueprint. For T
 
 PyTorch uses dynamic graphs, building the graph on-the-fly as execution happens. For example, consider the following code snippet, where the graph is built as the execution is taking place:
 
-```{{python}} x = torch.randn(4,784) y = torch.matmul(x, weights) + biases ```
+```{.python}
+x = torch.randn(4,784)
+y = torch.matmul(x, weights) + biases
+```
 
 In the above example, there are no separate compile/build/run phases. Ops define and execute immediately. With dynamic graphs, definition is intertwined with execution. This provides a more intuitive, interactive workflow. But the downside is less potential for optimizations, since the framework only sees the graph as it is built.
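
The paragraphs around the first hunk describe the static, define-then-run workflow only in prose (declare ops upfront, compile the graph, feed values later). A minimal runnable sketch of that workflow, assuming a TensorFlow 1.x-style API (`tf.compat.v1` on TF 2.x installs) and hypothetical `weights`/`biases` shapes introduced only so the snippet executes, might look like:

```python
import numpy as np
import tensorflow.compat.v1 as tf  # 1.x-style API; on a real 1.x install, `import tensorflow as tf`

tf.disable_eager_execution()

# Definition phase: declare all ops and variables upfront, before anything runs.
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=(None, 784))
    weights = tf.Variable(tf.zeros((784, 10)))  # hypothetical shapes, for illustration only
    biases = tf.Variable(tf.zeros((10,)))
    y = tf.matmul(x, weights) + biases
    init = tf.global_variables_initializer()

# Execution phase: the graph only runs later, when tensor values are fed in.
with tf.Session(graph=graph) as sess:
    sess.run(init)
    out = sess.run(y, feed_dict={x: np.zeros((4, 784), dtype=np.float32)})
    print(out.shape)  # (4, 10)
```

The PyTorch snippet in the second hunk needs no such scaffolding: `torch.matmul(x, weights) + biases` computes its result the moment the line executes, which is the define-by-run trade-off the surrounding prose describes.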