Layers Module

The layers module contains the core layers for building logic neural networks.

Convolutional Layers

class torchlogix.layers.LogicConv2d(in_dim, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=2, stride=1, padding=0, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]

Bases: _LogicConvNd

2D convolutional layer with differentiable logic operations.

This layer implements a 2D convolution where each output location is computed by evaluating a learned logic tree over a receptive field. Instead of linear filters, it uses a binary tree of differentiable logic operations (LUTs) applied to selected positions in the receptive field, per kernel and per spatial location.
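To make the idea concrete, here is a minimal NumPy sketch (not the library's implementation) of a depth-2 binary tree of relaxed logic operations over a flattened 2x2 receptive field, using the standard soft relaxations of AND and OR on values in [0, 1]:

```python
import numpy as np

# Relaxed logic ops on soft-binary values in [0, 1], as commonly used
# in differentiable logic gate networks.
def soft_and(a, b):
    return a * b

def soft_or(a, b):
    return a + b - a * b

# A 2x2 receptive field flattened to four soft-binary activations.
patch = np.array([0.9, 0.1, 0.8, 0.2])

# A depth-2 binary tree: two leaf gates feed one root gate.
left = soft_and(patch[0], patch[1])
right = soft_or(patch[2], patch[3])
out = soft_or(left, right)
```

With hard 0/1 inputs these relaxations reduce to ordinary Boolean AND/OR; with soft inputs they stay differentiable, which is what allows the tree structure to be trained by gradient descent. The specific gate choices and tree wiring here are illustrative only.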

__init__(in_dim, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=2, stride=1, padding=0, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

class torchlogix.layers.LogicConv3d(in_dim, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=2, stride=1, padding=0, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]

Bases: _LogicConvNd

3D convolutional layer with differentiable logic operations.

This layer implements a 3D convolution where each output location is computed by evaluating a learned logic tree over a receptive field. Instead of linear filters, it uses a binary tree of differentiable logic operations (LUTs) applied to selected positions in the receptive field, per kernel and per spatial location.

__init__(in_dim, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=2, stride=1, padding=0, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

class torchlogix.layers.OrPooling2d(kernel_size, stride, padding=0)[source]

Bases: Module

Logic-gate-based pooling layer.

__init__(kernel_size, stride, padding=0)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]

Pool the maximum value within each kernel window; for soft-binary inputs in [0, 1] this acts as a logical OR over the window.
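The pooling semantics can be sketched in a few lines of NumPy (an illustrative sketch, not the library's implementation, which operates on batched tensors):

```python
import numpy as np

def or_pool2d(x, kernel_size=2, stride=2):
    """Max-pool a 2D array; for values in [0, 1] the max acts as an OR."""
    h, w = x.shape
    oh = (h - kernel_size) // stride + 1
    ow = (w - kernel_size) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            win = x[i * stride:i * stride + kernel_size,
                    j * stride:j * stride + kernel_size]
            out[i, j] = win.max()  # OR over the window for binary inputs
    return out

x = np.array([[0., 1., 0., 0.],
              [0., 0., 0., 0.],
              [1., 1., 0., 1.],
              [0., 0., 0., 0.]])
pooled = or_pool2d(x)
```

Each output entry is 1 exactly when any entry in its window is 1, which is the OR of the window.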

class torchlogix.layers.OrPooling3d(kernel_size, stride, padding=0)[source]

Bases: Module

Logic-gate-based pooling layer.

__init__(kernel_size, stride, padding=0)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]

Pool the maximum value within each kernel window; for soft-binary inputs in [0, 1] this acts as a logical OR over the window.

Dense Layers

class torchlogix.layers.LogicDense(in_dim, out_dim, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]

Bases: LogicBase

Fully-connected logic gate layer with differentiable learning.

This module provides the core implementation of Differentiable Deep Logic Gate Networks. Each neuron learns a Boolean logic function (LUT) that operates on a subset of input features.

Parameters:
  • in_dim (int) – Number of input features.

  • out_dim (int) – Number of neurons (output features).

  • device (str) – Device to run the layer on (‘cpu’ or ‘cuda’).

  • grad_factor (float) – Gradient scaling factor.

  • lut_rank (int) – Rank of the LUTs used in the layer.

  • parametrization (str) – Type of parametrization to use (‘raw’, ‘warp’, ‘light’).

  • parametrization_kwargs (dict) – Additional keyword arguments for parametrization.

  • connections (str) – Type of connections to use (‘fixed’, ‘learnable’, etc.).

  • connections_kwargs (dict) – Additional keyword arguments for connections.

__init__(in_dim, out_dim, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]

Applies the LogicDense transformation to the input.

For each neuron, the layer:

  1. Selects lut_rank input features according to the connection pattern in self.indices.

  2. Samples (or selects) LUT weights based on self.weight and the sampler strategy.

  3. Evaluates the resulting binary operation.

Parameters:

x – Input tensor of shape (..., in_dim). The last dimension must match self.in_dim.

Returns:

A tensor of shape (..., out_dim) containing the neuron outputs.
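As a concrete illustration of step 3, a rank-2 LUT can be evaluated differentiably by weighting its four truth-table entries with the product probabilities of the two soft-binary inputs. This is a sketch of the standard relaxation, under the assumption that the truth table is ordered (00, 01, 10, 11); it is not the library's internal code:

```python
import numpy as np

def soft_lut2(table, a, b):
    """Differentiable evaluation of a 2-input LUT.
    table: 4 truth-table entries ordered (00, 01, 10, 11);
    a, b: soft-binary inputs in [0, 1]."""
    basis = np.array([(1 - a) * (1 - b),  # probability of input pair (0, 0)
                      (1 - a) * b,        # (0, 1)
                      a * (1 - b),        # (1, 0)
                      a * b])             # (1, 1)
    return float(np.dot(table, basis))

xor_table = np.array([0., 1., 1., 0.])  # XOR truth table
# Hard inputs recover the Boolean function exactly:
print(soft_lut2(xor_table, 1.0, 0.0))   # 1.0
# Soft inputs interpolate smoothly between the table entries:
print(soft_lut2(xor_table, 0.9, 0.2))
```

Because the output is linear in the table entries and polynomial in the inputs, gradients flow both to the LUT weights and to upstream layers.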

extra_repr()[source]

Return the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

get_luts_and_ids(**kwargs)[source]

Computes the most probable LUT and its ID for each neuron.

The result depends on the chosen parametrization.

Returns:

  • luts: Boolean tensor of shape (out_dim, 2 ** lut_rank), where each row is the most probable LUT truth table for a neuron (entry is True for output 1, False for 0).

  • ids: Integer tensor of shape (out_dim,) where each entry is the integer ID of the corresponding LUT, obtained by interpreting its truth table as a binary number (or None if not applicable for high lut_rank).

Return type:

Tuple[torch.Tensor, torch.Tensor]
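The ID computation described above (interpreting a truth table as a binary number) can be sketched as follows; the bit order used here (entry i contributes 2**i) is an assumption for illustration, and the library's convention may differ:

```python
def lut_id(truth_table):
    """Integer ID of a LUT: interpret its truth table as a binary number.
    Bit-order assumption: entry at index i contributes 2**i."""
    return sum(int(bit) << i for i, bit in enumerate(truth_table))

# Rank-2 examples, tables ordered (00, 01, 10, 11):
print(lut_id([0, 1, 1, 0]))   # XOR -> 6
print(lut_id([0, 0, 0, 1]))   # AND -> 8
```

With this convention, the 2**lut_rank table entries map each LUT to a unique integer in [0, 2**(2**lut_rank) - 1], which is why IDs may be unavailable for high lut_rank (the integers overflow practical dtypes).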

get_luts(**kwargs)[source]

Computes the most probable LUT for each neuron.

The result depends on the chosen parametrization.

Returns:

Boolean tensor of shape (out_dim, 2 ** lut_rank), where each row is the most probable LUT truth table for a neuron (entry is True for output 1, False for 0).

Return type:

torch.Tensor

get_regularization_loss(regularizer)[source]

Computes regularization loss for the layer.

Returns:

Scalar tensor representing the regularization loss.

Return type:

torch.Tensor

rescale_weights(method)[source]

Rescales the weights of the layer according to the specified method.

Parameters:

method (str) – Rescaling method. Options are ‘clip’, ‘abs_sum’, ‘L2’.

Other Layers

class torchlogix.layers.GroupSum(k, tau=1.0, beta=0.0, device='cpu')[source]

Bases: Module

The GroupSum module.

__init__(k, tau=1.0, beta=0.0, device='cpu')[source]
Parameters:
  • k (int) – number of intended real-valued outputs, e.g., the number of classes

  • tau (float) – the (softmax) temperature tau. The summed outputs are divided by tau.

  • device (str) – Device to run the layer on (‘cpu’ or ‘cuda’).

forward(x)[source]

Sum the inputs in k groups and divide each group’s sum by the temperature tau, producing k real-valued outputs.
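A minimal NumPy sketch of the group-sum operation (the contiguous-grouping convention below is an assumption for illustration; the module itself operates on batched tensors):

```python
import numpy as np

def group_sum(x, k, tau=1.0):
    """Sum a flat vector of gate outputs in k contiguous groups and
    divide by the temperature tau. Assumes len(x) is divisible by k."""
    x = np.asarray(x, dtype=float)
    return x.reshape(k, -1).sum(axis=1) / tau

bits = [1, 0, 1, 1,   0, 0, 1, 0]  # 8 gate outputs, k = 2 classes
print(group_sum(bits, k=2, tau=0.5))
```

Each class score is simply a (temperature-scaled) count of active gates in its group, which turns binary gate outputs into real-valued logits suitable for a softmax.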

extra_repr()[source]

Return the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

class torchlogix.layers.Binarization(thresholds, feature_dim=-2, **kwargs)[source]

Bases: Module, ABC

Abstract base class for binarization modules.

__init__(thresholds, feature_dim=-2, **kwargs)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

get_thresholds()[source]

Return thresholds. Subclasses can override for learnable thresholds.

abstractmethod forward(x)[source]

Subclasses must implement forward.

Return type:

Tensor

static get_uniform_thresholds(data_set, num_bits, one_per)[source]

Compute uniformly spaced thresholds.

Return type:

Tensor

static get_distributive_thresholds(data_set, num_bits, one_per)[source]

Compute distributive (quantile-based) thresholds.

Return type:

Tensor

one_per:
  • “global”: one set of thresholds for entire tensor

  • “feature”: per-feature thresholds (last dimension)

  • “channel”: per-channel thresholds (dim=1, conv tensors)
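A NumPy sketch of quantile-based thresholds for the “global” and “feature” cases (the exact quantile placement used by the library is an assumption here):

```python
import numpy as np

def distributive_thresholds(data, num_bits, one_per="feature"):
    """Quantile-based thresholds: num_bits evenly spaced interior
    quantiles of the data, globally or per feature column."""
    qs = np.linspace(0, 1, num_bits + 2)[1:-1]  # drop the 0% and 100% quantiles
    if one_per == "global":
        return np.quantile(data, qs)            # one threshold set for all values
    return np.quantile(data, qs, axis=0)        # "feature": per-column thresholds

data = np.array([[0.0, 10.0],
                 [1.0, 20.0],
                 [2.0, 30.0],
                 [3.0, 40.0]])
thr = distributive_thresholds(data, num_bits=1)  # per-feature medians
```

Quantile-based placement adapts the thresholds to the data distribution, so each resulting binary feature is active for roughly an equal share of the samples, unlike uniform spacing.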

static get_initial_thresholds(data_set, num_bits, one_per, method='uniform')[source]

Compute initial thresholds using the given method (‘uniform’ or ‘distributive’).

Return type:

Tensor

class torchlogix.layers.FixedBinarization(thresholds, feature_dim=-2, **kwargs)[source]

Bases: Binarization

Binarization with fixed (non-learnable) thresholds.

__init__(thresholds, feature_dim=-2, **kwargs)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]

Binarize the input by hard comparison against the fixed thresholds.

Return type:

Tensor
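The hard-threshold idea can be sketched as follows; the broadcasting layout (one binary output per feature/threshold pair) is an assumption for illustration:

```python
import numpy as np

def fixed_binarize(x, thresholds):
    """Hard binarization: one binary output per (feature, threshold) pair."""
    x = np.asarray(x, dtype=float)
    # Broadcast each feature against every threshold.
    return (x[..., None] > np.asarray(thresholds)).astype(float)

x = np.array([0.2, 0.7])
thresholds = np.array([0.25, 0.5, 0.75])
bits = fixed_binarize(x, thresholds)
```

Each real-valued feature becomes a small thermometer code: the number of 1s indicates how many thresholds the value exceeds.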

class torchlogix.layers.SoftBinarization(thresholds, temperature=0.1, feature_dim=-2, **kwargs)[source]

Bases: Binarization

Soft binarization with fixed thresholds using sigmoid.

__init__(thresholds, temperature=0.1, feature_dim=-2, **kwargs)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]

Apply soft binarization: a sigmoid of the distance to each fixed threshold, scaled by the temperature.

Return type:

Tensor
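The sigmoid relaxation can be sketched as follows; the exact scaling used by the library is an assumption here:

```python
import numpy as np

def soft_binarize(x, thresholds, temperature=0.1):
    """Soft binarization: sigmoid of the threshold distance, scaled by
    the temperature. Low temperature approaches a hard step function."""
    x = np.asarray(x, dtype=float)[..., None]
    z = (x - np.asarray(thresholds)) / temperature
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5])
bits = soft_binarize(x, thresholds=[0.5, 0.0], temperature=0.1)
```

A value exactly at a threshold maps to 0.5, values well above a threshold map to nearly 1, and the whole mapping stays differentiable, which lets gradients flow through the binarization during training.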

class torchlogix.layers.LearnableBinarization(thresholds, feature_dim=-2, temperature_sampling=0.1, temperature_softplus=0.1, forward_sampling='soft', max_grad_norm=0.001, **kwargs)[source]

Bases: Binarization

Binarization with learnable thresholds.

__init__(thresholds, feature_dim=-2, temperature_sampling=0.1, temperature_softplus=0.1, forward_sampling='soft', max_grad_norm=0.001, **kwargs)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

get_thresholds()[source]

Return the learnable thresholds.

forward(x)[source]

Binarize the input using the learnable thresholds, with soft or hard forward sampling as configured by forward_sampling.