random – Random number functionality

The pytensor.tensor.random module provides random-number drawing functionality that closely resembles the numpy.random module.
High-level API
PyTensor assigns NumPy RNG states (i.e. Generator objects) to each RandomVariable. The combination of an RNG state, a specific RandomVariable type (e.g. NormalRV), and a set of distribution parameters uniquely defines the RandomVariable instances in a graph.
This means that a “stream” of distinct RNG states is required in order to produce distinct random variables of the same kind. RandomStream provides a means of generating distinct random variables in a fully reproducible way.

RandomStream is also designed to produce simpler graphs and to work with more sophisticated Ops like Scan, which makes it a user-friendly random variable interface in PyTensor.
For an example of how to use random numbers, see Using Random Numbers. For a technical explanation of how PyTensor implements random variables, see Pseudo random number generation in PyTensor.
- class pytensor.tensor.random.RandomStream[source]

  A symbolic stand-in for numpy.random.Generator.

  - updates()[source]

    - Returns: a list of all the (state, new_state) update pairs for the random variables created by this object

    This can be a convenient shortcut to enumerating all the random variables in a large graph for the updates argument to pytensor.function.
- seed(meta_seed)[source]#
meta_seed
will be used to seed a temporary random number generator, that will in turn generate seeds for all random variables created by this object (viagen
).- Returns:
None
  - gen(op, *args, **kwargs)[source]

    Return the random variable from op(*args, **kwargs).

    This function also adds the returned variable to an internal list so that it can be seeded later by a call to seed.
  - uniform, normal, binomial, multinomial, random_integers, ...

    See Available distributions.
```python
import pytensor
from pytensor.tensor.random.utils import RandomStream

rng = RandomStream()
sample = rng.normal(0, 1, size=(2, 2))
fn = pytensor.function([], sample)
print(fn(), fn())  # different numbers due to default updates
```
Low-level objects
- class pytensor.tensor.random.op.RandomVariable(name=None, ndim_supp=None, ndims_params=None, dtype=None, inplace=None, signature=None)[source]

  An Op that produces a sample from a random variable. This is essentially RandomFunction, except that it removes the outtype dependency and handles shape dimension information more directly.

  - R_op(inputs, eval_points)[source]
Construct a graph for the R-operator.
    This method is primarily used by Rop.

    - Parameters:
      - inputs – The Op inputs.
      - eval_points – A Variable or list of Variables with the same length as inputs. Each element of eval_points specifies the value of the corresponding input at the point where the R-operator is to be evaluated.

    - Returns: rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points).
  - default_output = 1[source]

    An int that specifies which output Op.__call__() should return. If None, then all outputs are returned.

    A subclass should not change this class variable, but instead override it with a subclass variable or an instance variable.
  - grad(inputs, output_grads)[source]

    Construct a graph for the gradient with respect to each input variable.

    Each returned Variable represents the gradient with respect to that input, computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

    Using the reverse-mode AD characterization given in [1], for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in

    \[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]

    - Parameters:
      - inputs – The input variables.
      - output_grads – The gradients of the output variables.
    - Returns: The gradients with respect to each Variable in inputs.
    - Return type: grads
References
  - make_node(rng, size, *dist_params)[source]

    Create a random variable node.

    - Parameters:
      - rng (RandomGeneratorType) – Existing PyTensor Generator object to be used. Creates a new one, if None.
      - size (int or Sequence) – NumPy-like size parameter.
      - dtype (str) – The dtype of the sampled output. If the value "floatX" is given, then dtype is set to pytensor.config.floatX. This value is only used when self.dtype isn’t set.
      - dist_params (list) – Distribution parameters.

    - Returns: out (Apply) – A node with inputs (rng, size, dtype) + dist_args and outputs (rng_var, out_var).
  - perform(node, inputs, output_storage)[source]

    Calculate the function on the inputs and put the results in the output storage.

    - Parameters:
      - node – The symbolic Apply node that represents this computation.
      - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
      - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
    Notes

    The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
- class pytensor.tensor.random.type.RandomGeneratorType[source]

  A Type wrapper for numpy.random.Generator.

  The reason this exists (and Generic doesn’t suffice) is that Generator objects that would appear to be equal do not compare equal with the == operator.

  This Type also works with a dict derived from Generator.__getstate__, unless the strict argument to Type.filter is explicitly set to True.
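The comparison problem can be seen with plain NumPy, no PyTensor required:

```python
import numpy as np

a = np.random.default_rng(42)
b = np.random.default_rng(42)

# Identically-seeded Generators hold the same state, yet they do not
# compare equal, because Generator has no value-based __eq__; comparison
# falls back to object identity.
print(a == b)  # False
print(a == a)  # True
```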
  - filter(data, strict=False, allow_downcast=None)[source]

    XXX: This doesn’t convert data to the same type of underlying RNG type as self. It really only checks that data is of the appropriate type to be a valid RandomGeneratorType.

    In other words, it serves as a Type.is_valid_value implementation but, because the default Type.is_valid_value depends on Type.filter, we need to have it here to avoid surprising circular dependencies in sub-classes.