graph – Objects and functions for computational graphs

Defines base classes Op and CLinkerOp.

The Op class is the base interface for all operations compatible with the graph routines described in graph – Interface for the Theano graph.

class theano.graph.op.COp[source]

An Op with a C implementation.

make_c_thunk(node: theano.graph.basic.Apply, storage_map: List[Optional[List[Any]]], compute_map: List[bool], no_recycling: bool) → Callable[[Callable[[theano.graph.basic.Apply, List[Any], List[Optional[List[Any]]], Optional[Tuple[Any]]], NoReturn], List[Optional[List[Any]]], List[bool], theano.graph.basic.Apply], Any][source]

Create a thunk for a C implementation.

Like Op.make_thunk, but will only try to make a C thunk.

make_thunk(node, storage_map, compute_map, no_recycling, impl=None)[source]

Create a thunk.

See Op.make_thunk.

Parameters:impl – Currently None, ‘c’, or ‘py’. If ‘c’ or ‘py’, only that version of the code will be tried.
class theano.graph.op.ExternalCOp(func_files: Union[str, List[str]], func_name: Optional[str] = None)[source]

Class for an Op with an external C implementation.

One can inherit from this class, provide its constructor with a path to an external C source file and the name of a function within it, and define an Op for said function.
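
For illustration, a subclass might look like the following hedged sketch; the file name "my_func.c" and the function name "my_func" are hypothetical placeholders, and the output-type logic is illustrative only:

    from theano.graph.basic import Apply
    from theano.graph.op import ExternalCOp


    class MyExternalOp(ExternalCOp):
        """Hypothetical Op backed by an external C file."""

        def __init__(self):
            # "my_func.c" and "my_func" are placeholder names; relative
            # paths are resolved against the file defining this class.
            super().__init__("my_func.c", "my_func")

        def make_node(self, x):
            # Illustrative: one input, one output of the same type.
            return Apply(self, [x], [x.type()])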

c_cleanup_code_struct(node, name)[source]

Return an Apply-specific code string to be inserted in the struct cleanup code.

Parameters:
  • node (Apply) – The node in the graph being compiled
  • name (str) – A unique name to distinguish variables from those of other nodes.
c_code(node, name, inp, out, sub)[source]

Return the C implementation of an Op.

Returns C code that does the computation associated to this Op, given names for the inputs and outputs.

Parameters:
  • node (Apply instance) – The node for which we are compiling the current c_code. The same Op may be used in more than one node.
  • name (str) – A name that is automatically assigned and guaranteed to be unique.
  • inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list.
  • outputs (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding python variable that can be accessed by prepending “py_” to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.
  • sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as ‘fail’). WRITEME
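
As an illustration of the above, a hypothetical Op that doubles a scalar might implement c_code along these lines; the C body assumes float64 scalar storage purely for brevity, and real Ops must handle their declared types and allocate outputs properly:

    def c_code(self, node, name, inp, out, sub):
        (x,) = inp            # C variable name of the single input
        (z,) = out            # C variable name of the single output
        fail = sub["fail"]    # error-handling snippet from the CLinker
        # Purely illustrative C body; isfinite assumes math.h is
        # available via the Op's c_headers.
        return """
        %(z)s = 2.0 * %(x)s;
        if (!isfinite(%(z)s)) { %(fail)s }
        """ % locals()
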
c_code_cache_version()[source]

Return a tuple of integers indicating the version of this Op.

An empty tuple indicates an ‘unversioned’ Op that will not be cached between processes.

The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.

See also

c_code_cache_version_apply
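
In practice this is usually a literal tuple that gets bumped whenever the generated C code changes, e.g.:

    def c_code_cache_version(self):
        # Bump this whenever the generated C code changes so that
        # stale cached modules are recompiled.
        return (1, 0)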

c_code_cleanup(node, name, inputs, outputs, sub)[source]

Stitch all the macros and “code_cleanup” together.

c_init_code(**kwargs)[source]

Return a list of code snippets to be inserted in module initialization.

c_init_code_apply(node, name)[source]

Return a code string specific to the apply to be inserted in the module initialization code.

Parameters:
  • node (an Apply instance in the graph being compiled) –
  • name (str) – A string or number that serves to uniquely identify this node. Symbol names defined by this support code should include the name, so that they can be called from the c_code, and so that they do not cause name collisions.

Notes

This function is called in addition to c_init_code and will supplement whatever is returned from there.

c_init_code_struct(node, name, sub)[source]

Stitch all the macros and “init_code” together.

c_support_code(**kwargs)[source]

Return utility code for use by a Variable or Op.

This is included at global scope prior to the rest of the code for this class.

Question: How many times will this support code be emitted for a graph with many instances of the same type?

Return type:str
c_support_code_apply(node, name)[source]

Return Apply-specialized utility code for use by an Op that will be inserted at global scope.

Parameters:
  • node (Apply) – The node in the graph being compiled.
  • name (str) – A string or number that serves to uniquely identify this node. Symbol names defined by this support code should include the name, so that they can be called from the CLinkerOp.c_code, and so that they do not cause name collisions.

Notes

This function is called in addition to CLinkerObject.c_support_code and will supplement whatever is returned from there.

c_support_code_struct(node, name)[source]

Return Apply-specific utility code for use by an Op that will be inserted at struct scope.

Parameters:
  • node (Apply) – The node in the graph being compiled
  • name (str) – A unique name to distinguish your variables from those of other nodes.
format_c_function_args(inp: List[str], out: List[str]) → str[source]

Generate a string containing the arguments sent to the external C function.

The result will have the format: "input0, input1, input2, &output0, &output1".

get_c_macros(node: theano.graph.basic.Apply, name: str, check_input: Optional[bool] = None) → Tuple[str][source]

Construct a pair of C #define and #undef code strings.

classmethod get_path(f: str) → str[source]

Convert a path relative to the location of the class file into an absolute path.

Paths that are already absolute are passed through unchanged.

load_c_code(func_files: List[str]) → NoReturn[source]

Load the C code that performs the Op.

class theano.graph.op.Op[source]

A class that models and constructs operations in a graph.

An Op instance has several responsibilities:

  • construct Apply nodes via the Op.make_node method,
  • perform the numeric calculation of the modeled operation via the Op.perform method,
  • and (optionally) build the gradient-calculating sub-graphs via the Op.grad method.

To see how Op, Type, Variable, and Apply fit together see the page on graph – Interface for the Theano graph.

For more details regarding how these methods should behave, see the Op Contract in the Sphinx docs (advanced tutorial on Op-making).
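
To make these responsibilities concrete, here is a hedged minimal sketch of a custom Op; the class name and the doubling computation are illustrative only, not part of the library:

    import theano.tensor as tt
    from theano.graph.basic import Apply
    from theano.graph.op import Op


    class DoubleOp(Op):
        """Hypothetical Op computing 2 * x elementwise."""

        __props__ = ()

        def make_node(self, x):
            # Responsibility 1: construct the Apply node.
            x = tt.as_tensor_variable(x)
            return Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage, params=None):
            # Responsibility 2: do the numeric computation.
            (x,) = inputs
            output_storage[0][0] = 2 * x

        def grad(self, inputs, output_grads):
            # Responsibility 3 (optional): build the gradient sub-graph.
            return [2 * output_grads[0]]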

L_op(inputs: List[theano.graph.basic.Variable], outputs: List[theano.graph.basic.Variable], output_grads: List[theano.graph.basic.Variable]) → List[theano.graph.basic.Variable][source]

Construct a graph for the L-operator.

This method is primarily used by tensor.Lop and dispatches to Op.grad by default.

The L-operator computes a row vector times the Jacobian. The mathematical relationship is \(v \frac{\partial f(x)}{\partial x}\). The L-operator is also supported for generic tensors (not only for vectors).

Parameters:
  • inputs (list of Variable) –
  • outputs (list of Variable) –
  • output_grads (list of Variable) –
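
As a hedged usage sketch via the user-facing wrapper theano.gradient.Lop (which ultimately dispatches here), for the elementwise function y = x**2:

    import theano
    import theano.tensor as tt

    x = tt.vector("x")
    v = tt.vector("v")
    y = x ** 2

    # Symbolic row-vector-times-Jacobian product v * dy/dx.
    vJ = theano.gradient.Lop(y, wrt=x, eval_points=v)
    f = theano.function([x, v], vJ)
    f([1.0, 2.0], [1.0, 1.0])  # -> [2., 4.], since the Jacobian is diag(2 * x)
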
R_op(inputs: List[theano.graph.basic.Variable], eval_points: Union[theano.graph.basic.Variable, List[theano.graph.basic.Variable]]) → List[theano.graph.basic.Variable][source]

Construct a graph for the R-operator.

This method is primarily used by tensor.Rop.

Suppose the op outputs

[ f_1(inputs), …, f_n(inputs) ]

Parameters:
  • inputs (a Variable or list of Variables) –
  • eval_points – A Variable or list of Variables with the same length as inputs. Each element of eval_points specifies the value of the corresponding input at the point where the R op is to be evaluated.
Returns:

rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points)

Return type:

list of n elements
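
A matching hedged sketch via the user-facing wrapper theano.gradient.Rop, computing the Jacobian-times-vector product for the same y = x**2:

    import theano
    import theano.tensor as tt

    x = tt.vector("x")
    u = tt.vector("u")
    y = x ** 2

    # Symbolic Jacobian-times-vector product (dy/dx) * u.
    Ju = theano.gradient.Rop(y, wrt=x, eval_points=u)
    f = theano.function([x, u], Ju)
    f([1.0, 2.0], [1.0, 1.0])  # -> [2., 4.]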

static add_tag_trace(thing, user_line=None)[source]

Add tag.trace to a node or variable.

The object is modified in place and then returned.

Parameters:
  • thing – The object where we add .tag.trace.
  • user_line – The maximum number of user stack-trace lines to keep.

Notes

We also use config.traceback__limit as the maximum number of stack levels to examine.

default_output = None[source]

An int that specifies which output Op.__call__ should return. If None, then all outputs are returned.

A subclass should not change this class variable, but instead override it with a subclass variable or an instance variable.
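
For example, a hypothetical subclass whose make_node creates two outputs but whose __call__ should return only the first might declare:

    from theano.graph.op import Op


    class TwoOutputOp(Op):
        # Illustrative: __call__ returns outputs[0] rather than the
        # full two-element output list.
        default_output = 0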

do_constant_folding(fgraph: theano.graph.fg.FunctionGraph, node: theano.graph.basic.Apply) → bool[source]

Determine whether or not constant folding should be performed for the given node.

This allows each Op to determine whether it wants to be constant-folded when all its inputs are constant, and thus to choose its own memory/speed trade-off. This can also make things faster, as constants cannot be used for in-place operations (see *IncSubtensor).

Parameters:node (Apply) – The node for which the constant folding determination is made.
Returns:Whether constant folding should be performed for the given node.
Return type:bool
get_params(node: theano.graph.basic.Apply) → theano.graph.params_type.Params[source]

Try to detect params from the op if Op.params_type is set to a ParamsType.

grad(inputs: List[theano.graph.basic.Variable], output_grads: List[theano.graph.basic.Variable]) → List[theano.graph.basic.Variable][source]

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Parameters:
  • inputs (list of Variable) – The input variables.
  • output_grads (list of Variable) – The gradients of the output variables.
Returns:

grads – The gradients with respect to each Variable in inputs.

Return type:

list of Variable
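
As a hedged sketch of the non-differentiable case, consider a hypothetical Op computing y = x * n for an integer input n; its grad could use the grad_undefined helper from theano.gradient to return a NullType gradient for n:

    from theano.gradient import grad_undefined


    def grad(self, inputs, output_grads):
        x, n = inputs
        (gz,) = output_grads
        # dy/dx = n; the integer n is not differentiable, so return
        # a NullType gradient for it.
        return [gz * n, grad_undefined(self, 1, n)]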

make_node(*inputs: theano.graph.basic.Variable) → theano.graph.basic.Apply[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:node – The constructed Apply node.
Return type:Apply
make_py_thunk(node: theano.graph.basic.Apply, storage_map: List[Optional[List[Any]]], compute_map: List[bool], no_recycling: bool, debug: bool = False) → Callable[[Callable[[theano.graph.basic.Apply, List[Any], List[Optional[List[Any]]], Optional[Tuple[Any]]], NoReturn], List[Optional[List[Any]]], List[bool], theano.graph.basic.Apply], Any][source]

Make a Python thunk.

Like Op.make_thunk, but only makes Python thunks.

make_thunk(node: theano.graph.basic.Apply, storage_map: List[Optional[List[Any]]], compute_map: List[bool], no_recycling: bool, impl: Optional[str] = None) → Callable[[Callable[[theano.graph.basic.Apply, List[Any], List[Optional[List[Any]]], Optional[Tuple[Any]]], NoReturn], List[Optional[List[Any]]], List[bool], theano.graph.basic.Apply], Any][source]

Create a thunk.

This function must return a thunk, that is, a zero-argument function that encapsulates the computation to be performed by this Op on the arguments of the node.

Parameters:
  • node – Something previously returned by self.make_node.
  • storage_map – dict variable -> one-element-list where a computed value for this variable may be found.
  • compute_map – dict variable -> one-element-list where a boolean value will be found. The boolean indicates whether the variable’s storage_map container contains a valid value (True) or if it has not been computed yet (False).
  • no_recycling – List of variables for which it is forbidden to reuse memory allocated by a previous call.
  • impl (str) – Which implementation to use (e.g. “c” or “py”).

Notes

If the thunk consults the storage_map on every call, it is safe for it to ignore the no_recycling argument, because elements of the no_recycling list will have a value of None in the storage map. If the thunk can potentially cache return values (like CLinker does), then it must not do so for variables in the no_recycling list.

self.prepare_node(node, …) is always called. If we try ‘c’ and it fails, then try ‘py’, prepare_node will be called twice.
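
Conceptually, a pure-Python thunk behaves like this hedged sketch (ignoring C implementations, params, and no_recycling):

    def make_thunk(self, node, storage_map, compute_map, no_recycling, impl=None):
        # One-element-list containers for the node's inputs and outputs.
        input_cells = [storage_map[v] for v in node.inputs]
        output_cells = [storage_map[v] for v in node.outputs]

        def thunk():
            # Read current input values, compute, and mark outputs valid.
            inputs = [cell[0] for cell in input_cells]
            self.perform(node, inputs, output_cells)
            for v in node.outputs:
                compute_map[v][0] = True

        return thunk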

abstract perform(node: theano.graph.basic.Apply, inputs: List[theano.graph.basic.Variable], output_storage: List[Optional[List[Any]]], params: Optional[Tuple[Any]] = None) → NoReturn[source]

Calculate the function on the inputs and put the results in the output storage.

Parameters:
  • node (Apply) – The symbolic Apply node that represents this computation.
  • inputs (Sequence) – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
  • output_storage (list of list) – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of a Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
  • params (tuple) – A tuple containing the values of each entry in __props__.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform; they could have been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
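
A hedged sketch of a perform method honoring this contract, reusing a preallocated buffer only when its shape and dtype match (the doubling computation is again hypothetical):

    import numpy as np


    def perform(self, node, inputs, output_storage, params=None):
        (x,) = inputs
        out = output_storage[0]
        # Reuse the preallocated buffer if it fits; otherwise allocate.
        if out[0] is None or out[0].shape != x.shape or out[0].dtype != x.dtype:
            out[0] = np.empty_like(x)
        np.multiply(x, 2, out=out[0])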

prepare_node(node: theano.graph.basic.Apply, storage_map: List[Optional[List[Any]]], compute_map: List[bool], impl: Optional[str]) → NoReturn[source]

Make any special modifications that the Op needs before doing Op.make_thunk.

This can modify the node in place and should return nothing.

It can be called multiple times with different impl values. It is the Op’s responsibility not to re-prepare the node when that is not appropriate.

class theano.graph.op.OpenMPOp(openmp: Optional[bool] = None)[source]

All Ops using OpenMP code should inherit from this class.

This class checks that the compiler correctly supports OpenMP. If it does not, it prints a warning, disables OpenMP for this Op, and generates the non-OpenMP code instead.

This is needed because the g++ shipped with EPD on Windows reports that it supports OpenMP but does not include the OpenMP files.

We also add the correct compiler flags in c_compile_args.

c_compile_args(**kwargs)[source]

Return the compilation argument “-fopenmp” if OpenMP is supported.

c_headers(**kwargs)[source]

Return the header file name “omp.h” if OpenMP is supported.

gxx_support_openmp: Optional[bool] = None[source]

None until support has been tested; True or False afterwards.

prepare_node(node, storage_map, compute_map, impl)[source]

Make any special modifications that the Op needs before doing Op.make_thunk.

This can modify the node in place and should return nothing.

It can be called multiple times with different impl values. It is the Op’s responsibility not to re-prepare the node when that is not appropriate.

static test_gxx_support()[source]

Check if OpenMP is supported.

update_self_openmp() → NoReturn[source]

Make sure self.openmp is not True if there is no support in gxx.

theano.graph.op.compute_test_value(node: theano.graph.basic.Apply)[source]

Computes the test value of a node.

Parameters:node (Apply) – The Apply node for which the test value is computed.
Returns:The tag.test_value of each Variable in node.outputs is updated.
Return type:None
theano.graph.op.get_test_value(v: theano.graph.basic.Variable) → Any[source]

Get the test value for v.

If input v is not already a variable, it is turned into one by calling as_tensor_variable(v).

Raises:AttributeError – If no test value is set.
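
A small usage sketch:

    import numpy as np
    import theano.tensor as tt
    from theano.graph.op import get_test_value

    x = tt.vector("x")
    x.tag.test_value = np.array([1.0, 2.0])
    get_test_value(x)  # -> array([1., 2.])
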
theano.graph.op.get_test_values(*args: theano.graph.basic.Variable) → Union[Any, List[Any]][source]

Get test values for multiple Variables.

Intended use:

    for val_1, …, val_n in get_debug_values(var_1, …, var_n):
        if some condition on val_1, …, val_n is not met:
            missing_test_message(“condition was not met”)

Given a list of variables, get_debug_values does one of three things:

  1. If the interactive debugger is off, returns an empty list.
  2. If the interactive debugger is on, and all variables have debug values, returns a list containing a single element. This single element is either:
    1. if there is only one variable, the element is its value
    2. otherwise, a tuple containing debug values of all the variables.
  3. If the interactive debugger is on, and some variable does not have a debug value, issue a missing_test_message about the variable and, if still in control of execution, return an empty list.

theano.graph.op.lquote_macro(txt: str) → str[source]

Turn the last line of text into a \-commented line.

theano.graph.op.missing_test_message(msg: str) → NoReturn[source]

Displays msg, a message saying that some test_value is missing, in the appropriate form based on config.compute_test_value:

  off: The interactive debugger is off, so we do nothing.
  ignore: The interactive debugger is set to ignore missing inputs, so do nothing.
  warn: Display msg as a warning.

Raises:AttributeError – With msg as the exception text.
theano.graph.op.ops_with_inner_function: Dict[theano.graph.op.Op, str] = {<class 'theano.compile.builders.OpFromGraph'>: 'fn', <class 'theano.scan.op.Scan'>: 'fn'}[source]

Registry of Ops that have an inner compiled Theano function.

The keys are Op classes (not instances), and values are the name of the attribute that contains the function. For instance, if the function is self.fn, the value will be ‘fn’.

We need this so that debug checks are not run a number of times that is exponential in the nesting level of those Ops. For instance, Scan is registered here.
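
For example, a hypothetical Op that stores a compiled inner function on self.fn could be registered like so:

    from theano.graph.op import Op, ops_with_inner_function


    class MyNestedOp(Op):
        """Hypothetical Op holding a compiled inner function on self.fn."""

        def __init__(self, fn):
            self.fn = fn


    # Tell the debug machinery where to find the inner function.
    ops_with_inner_function[MyNestedOp] = "fn"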