Layers Module

The layers module contains the core neural network layers for building logic gate networks.
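
Example (a hedged sketch of composing these layers into a small classifier; the layer sizes, the flattened [0, 1] input encoding, and the choice of the Python implementation are illustrative assumptions, not requirements documented on this page):

   import torch
   from torchlogix.layers import LogicDense, GroupSum

   # Two differentiable logic gate layers followed by a GroupSum
   # readout for 10 classes (sizes are illustrative assumptions).
   model = torch.nn.Sequential(
       LogicDense(in_dim=784, out_dim=8000, device='cpu', implementation='python'),
       LogicDense(in_dim=8000, out_dim=4000, device='cpu', implementation='python'),
       GroupSum(k=10, tau=10.0, device='cpu'),
   )

   x = torch.rand(32, 784)  # inputs assumed to lie in [0, 1]
   logits = model(x)        # expected shape: (32, 10)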

Convolutional Layers

class torchlogix.layers.LogicConv2d(in_dim, device='cuda', grad_factor=1.0, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=None, implementation=None, connections='random', weight_init='residual', stride=1, padding=0, parametrization='raw', temperature=1.0, forward_sampling='soft')[source]

Bases: Module

2D convolutional layer with differentiable logic operations.

This layer implements a 2D convolution with differentiable logic operations, using a binary tree structure to combine input features via logic gates.

__init__(in_dim, device='cuda', grad_factor=1.0, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=None, implementation=None, connections='random', weight_init='residual', stride=1, padding=0, parametrization='raw', temperature=1.0, forward_sampling='soft')[source]

Initialize the 2D logic convolutional layer.

Parameters:
  • in_dim (Union[int, tuple[int, int]]) – Input dimensions (height, width)

  • device (str) – Device to run the layer on

  • grad_factor (float) – Gradient factor for the logic operations

  • channels (int) – Number of input channels

  • num_kernels (int) – Number of output kernels

  • tree_depth (int) – Depth of the binary tree

  • receptive_field_size (int) – Size of the receptive field

  • implementation (str) – Implementation type (“python” or “cuda”)

  • connections (str) – Connection type: “random” or “unique”. The latter overrides the tree_depth parameter and uses a full binary tree over all possible connections within the receptive field.

  • stride (int) – Stride of the convolution

  • padding (int) – Padding of the convolution

  • parametrization (str) – Parametrization to use (“raw” or “walsh”)

forward(x)[source]

Evaluate the binary tree of logic operations using the pre-selected indices.

get_random_receptive_field_pairs()[source]

Generate random index pairs within the receptive field for each kernel. The pairs may contain self-connections and duplicates.

get_random_unique_receptive_field_pairs()[source]

Generate random unique index pairs within the receptive field for each kernel. No self-connections or duplicate pairs.

apply_sliding_window(pairs_tuple)[source]

Apply a sliding window to the receptive field pairs across all kernel positions.

get_indices_from_kernel_pairs(pairs_tuple)[source]

training: bool
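
Example (a hedged usage sketch; the NCHW input layout, the binary {0, 1} encoding, and the hyperparameter choices are assumptions, not documented requirements):

   import torch
   from torchlogix.layers import LogicConv2d

   conv = LogicConv2d(
       in_dim=(16, 16),         # input height and width
       channels=1,
       num_kernels=16,
       tree_depth=3,            # depth of the binary gate tree
       receptive_field_size=3,  # 3x3 window (assumed square)
       stride=1,
       padding=0,
       connections='random',
       device='cpu',
       implementation='python',
   )

   x = (torch.rand(8, 1, 16, 16) > 0.5).float()  # random binary batch, NCHW assumed
   y = conv(x)
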
class torchlogix.layers.LogicConv3d(in_dim, device='cuda', grad_factor=1.0, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=None, implementation=None, connections='random', stride=1, padding=None)[source]

Bases: Module

3D convolutional layer with differentiable logic operations.

This layer implements a 3D convolution with differentiable logic operations, using a binary tree structure to combine input features via logic gates.

__init__(in_dim, device='cuda', grad_factor=1.0, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=None, implementation=None, connections='random', stride=1, padding=None)[source]

Initialize the 3D logic convolutional layer.

Parameters:
  • in_dim (Union[int, tuple[int, int, int]]) – Input dimensions (height, width, depth)

  • device (str) – Device to run the layer on

  • grad_factor (float) – Gradient factor for the logic operations

  • channels (int) – Number of input channels

  • num_kernels (int) – Number of output kernels

  • tree_depth (int) – Depth of the binary tree

  • receptive_field_size (Union[int, tuple[int, int, int]]) – Size of the receptive field

  • implementation (str) – Implementation type (“python” or “cuda”)

  • connections (str) – Connection type: “random” or “unique”. The latter overrides the tree_depth parameter and uses a full binary tree over all possible connections within the receptive field.

  • stride (int) – Stride of the convolution

  • padding (int) – Padding of the convolution

forward(x)[source]

Evaluate the binary tree of logic operations using the pre-selected indices.

get_random_receptive_field_pairs()[source]

Generate random index pairs within the receptive field for each kernel. The pairs may contain self-connections and duplicates.

get_random_unique_receptive_field_pairs()[source]

Generate random unique index pairs within the receptive field for each kernel. No self-connections or duplicate pairs.

apply_sliding_window(pairs_tuple)[source]

Apply a sliding window to the receptive field pairs across all kernel positions.

get_indices_from_kernel_pairs(pairs_tuple)[source]

training: bool
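
Example (the 3D variant follows the same pattern; the tensor layout matching in_dim and the binary encoding are assumptions):

   import torch
   from torchlogix.layers import LogicConv3d

   conv3d = LogicConv3d(
       in_dim=(16, 16, 8),              # (height, width, depth), per the docs above
       channels=1,
       num_kernels=8,
       tree_depth=2,
       receptive_field_size=(3, 3, 2),
       stride=1,
       device='cpu',
       implementation='python',
   )

   vol = (torch.rand(4, 1, 16, 16, 8) > 0.5).float()  # binary volume batch (layout assumed)
   out = conv3d(vol)
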
class torchlogix.layers.OrPooling(kernel_size, stride, padding=0)[source]

Bases: Module

Logic gate based pooling layer.

__init__(kernel_size, stride, padding=0)[source]

Initialize the OR pooling layer.

Parameters:
  • kernel_size – Size of the pooling window

  • stride – Stride of the pooling window

  • padding – Padding applied to the input before pooling

training: bool

forward(x)[source]

Take the maximum value within each kernel window; for activations in {0, 1} this is equivalent to a logical OR over the window.
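
Example (because the forward pass takes the window maximum, binary {0, 1} activations make this an exact logical OR over each window):

   import torch
   from torchlogix.layers import OrPooling

   pool = OrPooling(kernel_size=2, stride=2)

   # Each 2x2 window outputs 1 iff any entry in the window is 1.
   x = torch.tensor([[[[0., 1., 0., 0.],
                       [0., 0., 0., 0.],
                       [1., 1., 0., 1.],
                       [1., 0., 0., 0.]]]])
   print(pool(x))
   # tensor([[[[1., 0.],
   #           [1., 1.]]]])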

Dense Layers

class torchlogix.layers.LogicDense(in_dim, out_dim, device='cpu', grad_factor=1.0, implementation=None, connections='random', weight_init='residual', parametrization='raw', temperature=1.0, forward_sampling='soft')[source]

Bases: Module

The core module for differentiable logic gate networks: a dense layer of differentiable logic gates.

__init__(in_dim, out_dim, device='cpu', grad_factor=1.0, implementation=None, connections='random', weight_init='residual', parametrization='raw', temperature=1.0, forward_sampling='soft')[source]
Parameters:
  • in_dim (int) – input dimensionality of the layer

  • out_dim (int) – output dimensionality of the layer

  • device (str) – device (options: ‘cuda’ / ‘cpu’)

  • grad_factor (float) – for deep models (>6 layers), the grad_factor should be increased (e.g., 2) to avoid vanishing gradients

  • implementation (str) – implementation to use (options: ‘cuda’ / ‘python’).

  • connections (str) – method for initializing the connectivity of the logic gate network

grad_factor

Note

The CUDA implementation is the fast implementation; as the name implies, it is only available for device=’cuda’. The Python implementation exists for two reasons: (1) to provide an easy-to-understand reference implementation of differentiable logic gate networks, and (2) to provide a CPU implementation of differentiable logic gate networks.
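
Example (a sketch of choosing the device and implementation together, following the note above; sizes are illustrative):

   import torch
   from torchlogix.layers import LogicDense

   # Use the fast CUDA kernels when a GPU is available, otherwise
   # fall back to the pure-Python reference implementation.
   device = 'cuda' if torch.cuda.is_available() else 'cpu'
   implementation = 'cuda' if device == 'cuda' else 'python'

   layer = LogicDense(
       in_dim=64,
       out_dim=32,
       device=device,
       grad_factor=1.0,  # increase (e.g., to 2) for models deeper than 6 layers
       implementation=implementation,
       connections='random',
   )

   x = torch.rand(16, 64, device=device)  # inputs assumed to lie in [0, 1]
   y = layer(x)                           # expected shape: (16, 32)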

forward(x)[source]

Run the forward pass, dispatching to the Python or CUDA implementation depending on the configuration.

forward_python(x)[source]

forward_cuda(x)[source]

forward_cuda_eval(x)[source]

WARNING: this is an in-place operation.

Parameters:

x (PackBitsTensor)

Returns:

extra_repr()[source]

Return the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

training: bool

get_connections(connections, device='cuda')[source]

get_gate_ids()[source]

Compute the most probable gate for each learned set of weights and return a tensor of the corresponding gate IDs.
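
Example (a hedged sketch of inspecting the learned gates; the one-ID-per-neuron layout and the ID-to-gate mapping are library-defined and assumed here):

   import torch
   from torchlogix.layers import LogicDense

   layer = LogicDense(in_dim=64, out_dim=32, device='cpu', implementation='python')

   gate_ids = layer.get_gate_ids()  # assumed: one gate ID per output neuron
   # Histogram over the 16 possible two-input Boolean gates.
   print(torch.bincount(gate_ids.flatten().cpu(), minlength=16))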

Other Layers

class torchlogix.layers.GroupSum(k, tau=1.0, beta=0.0, device='cuda')[source]

Bases: Module

The GroupSum module: aggregates groups of binary outputs into k real-valued scores.

__init__(k, tau=1.0, beta=0.0, device='cuda')[source]
Parameters:
  • k (int) – number of intended real-valued outputs, e.g., the number of classes

  • tau (float) – the (softmax) temperature tau. The summed outputs are divided by tau.

  • device (str) – Device to run the layer on
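
Example (a sketch of the aggregation; grouping the inputs into k contiguous chunks is an assumption, only the division by tau is stated above):

   import torch
   from torchlogix.layers import GroupSum

   gs = GroupSum(k=3, tau=2.0, device='cpu')

   x = torch.tensor([[1., 0., 1., 1.,    # assumed group 1 -> sum 3
                      0., 0., 1., 0.,    # assumed group 2 -> sum 1
                      1., 1., 1., 1.]])  # assumed group 3 -> sum 4
   print(gs(x))  # expected under these assumptions: [[1.5, 0.5, 2.0]]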

forward(x)[source]

Sum the outputs within each of the k groups and divide the group sums by tau.

extra_repr()[source]

Return the extra representation of the module.

training: bool

class torchlogix.layers.LearnableThermometerThresholding(init_thresholds, slope=10.0)[source]

Bases: Module

Thermometer encoding layer with learnable thresholds.

__init__(init_thresholds, slope=10.0)[source]

Initialize the learnable thermometer thresholding.

Parameters:
  • init_thresholds – Initial threshold values

  • slope (float) – Steepness of the soft thresholding

get_thresholds()[source]

freeze_thresholds()[source]
forward(x)[source]

Apply the soft thermometer thresholding to the input.
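
Example (a hedged usage sketch; the tensor type of init_thresholds and the soft-step behavior implied by slope are assumptions):

   import torch
   from torchlogix.layers import LearnableThermometerThresholding

   therm = LearnableThermometerThresholding(
       init_thresholds=torch.tensor([0.25, 0.5, 0.75]),  # assumed to accept a tensor
       slope=10.0,
   )

   x = torch.rand(4, 1)
   codes = therm(x)           # soft thermometer code per input (layout assumed)
   therm.freeze_thresholds()  # presumably stops further adaptation of the thresholds
   print(therm.get_thresholds())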