Functional Module

The functional module contains the core mathematical operations and utility functions for logic gate computations.

Logic Operations

bin_op

bin_op_s

bin_op_cnn

bin_op_cnn_slow

A slower, non-optimized version of bin_op_cnn for clarity.

compute_all_logic_ops_vectorized

Compute all 16 logic operations in a single vectorized operation.

Utility Functions

get_unique_connections

GradFactor

Constants

torchlogix.functional.ID_TO_OP = {0: <lambda>, 1: <lambda>, …, 15: <lambda>}


Dictionary mapping logic gate IDs (0-15) to their corresponding operations.

Each logic gate represents one of the 16 possible binary Boolean operations:

  • 0: FALSE (always 0)

  • 1: AND (a ∧ b)

  • 2: A AND NOT B (a ∧ ¬b)

  • 3: A (identity for a)

  • 4: NOT A AND B (¬a ∧ b)

  • 5: B (identity for b)

  • 6: XOR (a ⊕ b)

  • 7: OR (a ∨ b)

  • 8: NOR (¬(a ∨ b))

  • 9: XNOR (¬(a ⊕ b))

  • 10: NOT B (¬b)

  • 11: B IMPLIES A (b → a)

  • 12: NOT A (¬a)

  • 13: A IMPLIES B (a → b)

  • 14: NAND (¬(a ∧ b))

  • 15: TRUE (always 1)
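The rendered value of ID_TO_OP above hides the lambda bodies. As a hedged sketch (the exact expressions are an assumption, not taken from the source), each gate can be written as a real-valued relaxation that reproduces the Boolean truth table when a and b are exactly 0 or 1 and stays differentiable in between:

import torch

# Sketch only: the lambda bodies below are assumed relaxations of the 16 gates
# listed above, not the library's verbatim definitions.
ID_TO_OP_SKETCH = {
    0:  lambda a, b: torch.zeros_like(a),      # FALSE
    1:  lambda a, b: a * b,                    # AND
    2:  lambda a, b: a - a * b,                # A AND NOT B
    3:  lambda a, b: a,                        # A
    4:  lambda a, b: b - a * b,                # NOT A AND B
    5:  lambda a, b: b,                        # B
    6:  lambda a, b: a + b - 2 * a * b,        # XOR
    7:  lambda a, b: a + b - a * b,            # OR
    8:  lambda a, b: 1 - (a + b - a * b),      # NOR
    9:  lambda a, b: 1 - (a + b - 2 * a * b),  # XNOR
    10: lambda a, b: 1 - b,                    # NOT B
    11: lambda a, b: 1 - b + a * b,            # B IMPLIES A
    12: lambda a, b: 1 - a,                    # NOT A
    13: lambda a, b: 1 - a + a * b,            # A IMPLIES B
    14: lambda a, b: 1 - a * b,                # NAND
    15: lambda a, b: torch.ones_like(a),       # TRUE
}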

Function Details

torchlogix.functional.bin_op(a, b, i)[source]
torchlogix.functional.bin_op_s(a, b, i_s)[source]
torchlogix.functional.bin_op_cnn(a, b, i_s)[source]
torchlogix.functional.bin_op_cnn_slow(a, b, i_s)[source]

A slower, non-optimized version of bin_op_cnn for clarity.
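Only the signatures of bin_op, bin_op_s, and bin_op_cnn are shown above. The sketch below is one plausible reading of those signatures (an assumption, not the library's code): bin_op applies a single hard gate selected by an integer id, while bin_op_s mixes all 16 gates with a weight vector i_s, as is common in differentiable logic-gate networks.

import torch
from torchlogix.functional import ID_TO_OP  # the 16-entry gate table documented above

def bin_op_sketch(a, b, i):
    # i: integer gate id in [0, 15]; apply one hard gate to a and b
    return ID_TO_OP[i](a, b)

def bin_op_s_sketch(a, b, i_s):
    # i_s: per-gate weights with trailing dimension 16 (e.g. a softmax output);
    # the result is the weighted sum of all 16 gate outputs
    ops = torch.stack([ID_TO_OP[i](a, b) for i in range(16)], dim=-1)
    return (ops * i_s).sum(dim=-1)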

torchlogix.functional.compute_all_logic_ops_vectorized(a, b)[source]

Compute all 16 logic operations in a single vectorized operation.

Returns a tensor with shape […, 16] where the last dimension contains the results of all 16 logic operations applied to inputs a and b.
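A hedged sketch of how such a vectorized computation can look (assumed, not the library's implementation): a few shared terms are computed once and all 16 gate outputs are stacked along a new trailing dimension.

import torch

def compute_all_logic_ops_vectorized_sketch(a, b):
    # Shared terms reused by most gates
    ab = a * b
    a_or_b = a + b - ab
    a_xor_b = a + b - 2 * ab
    zeros, ones = torch.zeros_like(a), torch.ones_like(a)
    ops = [
        zeros, ab, a - ab, a, b - ab, b, a_xor_b, a_or_b,  # ids 0-7
        1 - a_or_b, 1 - a_xor_b, 1 - b, 1 - b + ab,        # ids 8-11
        1 - a, 1 - a + ab, 1 - ab, ones,                   # ids 12-15
    ]
    return torch.stack(ops, dim=-1)  # shape [..., 16]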

torchlogix.functional.get_unique_connections(in_dim, out_dim, device='cuda')[source]
class torchlogix.functional.GradFactor(*args, **kwargs)[source]

Bases: Function

static forward(ctx, x, f)[source]

Define the forward of the custom autograd Function.

This function is to be overridden by all subclasses. There are two ways to define forward:

Usage 1 (Combined forward and ctx):

@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass

Usage 2 (Separate forward and ctx):

@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass


@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass

  • The forward no longer accepts a ctx argument.

  • Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.

  • See Extending torch.autograd for more details.

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.

static backward(ctx, grad_y)[source]

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring gradients, you can simply pass None as the gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
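The two docstrings above restate the generic torch.autograd.Function contract. As a concrete, hedged illustration consistent with the documented signatures forward(ctx, x, f) and backward(ctx, grad_y), a gradient-scaling function could look as follows; the body is an assumption, not necessarily torchlogix's implementation.

import torch

class GradFactorSketch(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, f):
        ctx.f = f      # store the (non-tensor) scale factor on the context
        return x       # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_y):
        # One return value per forward input: grad w.r.t. x scaled by f,
        # and None for the non-tensor factor f.
        return grad_y * ctx.f, None

# Usage: y equals x in the forward pass, but gradients through y are halved.
x = torch.randn(4, requires_grad=True)
y = GradFactorSketch.apply(x, 0.5)
y.sum().backward()   # x.grad is now 0.5 everywhere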