torchlogix.functional.GradFactor

class torchlogix.functional.GradFactor(*args, **kwargs)[source]
__init__(*args, **kwargs)

Methods

__init__(*args, **kwargs)

apply(*args, **kwargs)

backward(ctx, grad_y)

Define a formula for differentiating the operation with backward mode automatic differentiation.

forward(ctx, x, f)

Define the forward of the custom autograd Function.

jvp(ctx, *grad_inputs)

Define a formula for differentiating the operation with forward mode automatic differentiation.

mark_dirty(*args)

Mark given tensors as modified in an in-place operation.

mark_non_differentiable(*args)

Mark outputs as non-differentiable.

mark_shared_storage(*pairs)

maybe_clear_saved_tensors

name

register_hook

register_prehook

save_for_backward(*tensors)

Save given tensors for a future call to backward().

save_for_forward(*tensors)

Save given tensors for a future call to jvp().

set_materialize_grads(value)

Set whether to materialize grad tensors.

setup_context(ctx, inputs, output)

There are two ways to define the forward pass of an autograd.Function.

vjp(ctx, *grad_outputs)

Define a formula for differentiating the operation with backward mode automatic differentiation.

vmap(info, in_dims, *args)

Define the behavior for this autograd.Function underneath torch.vmap().

Attributes

dirty_tensors

generate_vmap_rule

materialize_grads

metadata

needs_input_grad

next_functions

non_differentiable

requires_grad

saved_for_forward

saved_tensors

saved_variables

to_save
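
Like any torch.autograd.Function, GradFactor is invoked through its apply() classmethod rather than being instantiated and called directly. A minimal usage sketch is shown below; it assumes f is a plain scalar gradient factor, which is an assumption not stated on this page.

import torch
from torchlogix.functional import GradFactor

x = torch.randn(8, requires_grad=True)
y = GradFactor.apply(x, 0.5)  # autograd Functions are called via .apply(), not via forward() directly
y.sum().backward()            # gradients flow through GradFactor.backward()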

static forward(ctx, x, f)[source]

Define the forward of the custom autograd Function.

This function is to be overridden by all subclasses. There are two ways to define forward:

Usage 1 (Combined forward and ctx):

@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass

Usage 2 (Separate forward and ctx):

@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass


@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass

  • The forward no longer accepts a ctx argument.

  • Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.

  • See Extending torch.autograd for more details.

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp), or with ctx.save_for_forward() if they are intended to be used in jvp.
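
As an illustration of the second pattern, the following is a minimal sketch of a hypothetical scale-by-f Function written with a separate setup_context() and ctx.save_for_backward(), as described above. It is not the GradFactor implementation (whose forward takes ctx directly, i.e. the combined form); the class name and semantics here are assumptions for illustration only.

import torch

class ScaleByF(torch.autograd.Function):
    """Hypothetical example of the separate forward/setup_context pattern."""

    @staticmethod
    def forward(x, f):
        # Usage 2: forward receives only the actual inputs, no ctx.
        return x * f

    @staticmethod
    def setup_context(ctx, inputs, output):
        x, f = inputs
        # Tensors intended for backward go through save_for_backward, not plain ctx attributes.
        ctx.save_for_backward(x, f)

    @staticmethod
    def backward(ctx, grad_y):
        x, f = ctx.saved_tensors
        # One gradient per forward input; reduce the second to f's (scalar) shape.
        return grad_y * f, (grad_y * x).sum()

x = torch.randn(3, requires_grad=True)
f = torch.tensor(2.0, requires_grad=True)
ScaleByF.apply(x, f).sum().backward()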

static backward(ctx, grad_y)[source]

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor, or is a Tensor that does not require gradients, you can just pass None as the gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
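
A minimal sketch of these conventions, again using an assumed identity-forward/scaled-gradient semantics rather than the library's actual GradFactor implementation: the backward returns one value per forward input, consults ctx.needs_input_grad before doing work, and returns None for the non-tensor factor f.

import torch

class GradFactorSketch(torch.autograd.Function):
    """Hedged sketch only: identity in the forward pass, gradient scaled by f on the way back."""

    @staticmethod
    def forward(ctx, x, f):
        ctx.f = f              # non-tensor state can live directly on ctx
        return x.view_as(x)    # return a view so autograd records a distinct output

    @staticmethod
    def backward(ctx, grad_y):
        # One return value per forward input (x, f); f is not a tensor, so its gradient is None.
        grad_x = grad_y * ctx.f if ctx.needs_input_grad[0] else None
        return grad_x, None

x = torch.randn(5, requires_grad=True)
y = GradFactorSketch.apply(x, 3.0)
y.sum().backward()
print(x.grad)  # each entry equals 3.0 under this sketch's assumptions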