Layers Module
The layers module contains the core layers for building logic neural networks.
Convolutional Layers
- class torchlogix.layers.LogicConv2d(in_dim, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=2, stride=1, padding=0, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]
Bases: _LogicConvNd
2D convolutional layer with differentiable logic operations.
This layer implements a 2D convolution where each output location is computed by evaluating a learned logic tree over a receptive field. Instead of linear filters, it uses a binary tree of differentiable logic operations (LUTs) applied to selected positions in the receptive field, per kernel and per spatial location.
- __init__(in_dim, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=2, stride=1, padding=0, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
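The per-location logic-tree evaluation described above can be sketched in plain Python. This is a conceptual illustration only, not the torchlogix implementation: the gate choices, tree layout, and input indices below are illustrative assumptions, and the real layer uses soft, differentiable LUTs rather than hard gates.

```python
# Conceptual sketch: a depth-2 binary tree of hard logic gates evaluated
# over a flattened 2x2 receptive field, as one kernel of a logic
# convolution would at a single spatial location.

def and_gate(a, b): return a and b
def or_gate(a, b): return a or b
def xor_gate(a, b): return a != b

def eval_logic_tree(patch, gates, indices):
    """Evaluate a binary tree of 2-input gates bottom-up.

    patch   -- flattened receptive field (list of bools)
    gates   -- gates per tree level, leaves first
    indices -- which patch positions feed each leaf gate
    """
    # Leaf level: each gate reads two selected positions of the patch.
    level = [g(patch[i], patch[j]) for g, (i, j) in zip(gates[0], indices)]
    # Inner levels: pair up the previous level's outputs.
    for level_gates in gates[1:]:
        level = [g(level[2 * k], level[2 * k + 1])
                 for k, g in enumerate(level_gates)]
    return level[0]

patch = [True, False, True, True]          # a 2x2 receptive field, flattened
gates = [[or_gate, xor_gate], [and_gate]]  # two leaf gates, one root gate
indices = [(0, 1), (2, 3)]                 # leaf connections into the patch
out = eval_logic_tree(patch, gates, indices)  # (T or F) and (T xor T) -> False
```

During training, each gate would instead be a probability-weighted mixture over LUT entries so that gradients can flow; the hard tree above corresponds to the discretized network after training.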
- class torchlogix.layers.LogicConv3d(in_dim, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=2, stride=1, padding=0, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]
Bases: _LogicConvNd
3D convolutional layer with differentiable logic operations.
This layer implements a 3D convolution where each output location is computed by evaluating a learned logic tree over a receptive field. Instead of linear filters, it uses a binary tree of differentiable logic operations (LUTs) applied to selected positions in the receptive field, per kernel and per spatial location.
- __init__(in_dim, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=2, stride=1, padding=0, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- class torchlogix.layers.OrPooling2d(kernel_size, stride, padding=0)[source]
Bases: Module
Logic-gate-based pooling layer.
- class torchlogix.layers.OrPooling3d(kernel_size, stride, padding=0)[source]
Bases: Module
Logic-gate-based pooling layer.
- __init__(kernel_size, stride, padding=0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
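OR pooling on boolean feature maps can be sketched in plain Python. This is a conceptual illustration under stated assumptions (non-overlapping 2x2 windows, no padding); OrPooling2d's exact kernel_size/stride/padding handling follows its constructor arguments, not this sketch.

```python
# Conceptual sketch: OR-pooling over non-overlapping 2x2 windows of a
# 2D boolean feature map. On {0, 1} activations this coincides with
# max pooling: a window is True if any of its entries is True.

def or_pool_2x2(fmap):
    """OR-reduce each non-overlapping 2x2 window of a 2D boolean map."""
    h, w = len(fmap), len(fmap[0])
    return [
        [
            fmap[i][j] or fmap[i][j + 1] or fmap[i + 1][j] or fmap[i + 1][j + 1]
            for j in range(0, w, 2)
        ]
        for i in range(0, h, 2)
    ]

fmap = [
    [False, False, True,  False],
    [False, True,  False, False],
    [False, False, False, False],
    [False, False, False, True ],
]
pooled = or_pool_2x2(fmap)  # -> [[True, True], [False, True]]
```

Because OR is itself a logic gate, this pooling keeps the whole network expressible as a pure logic circuit after discretization.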
Dense Layers
- class torchlogix.layers.LogicDense(in_dim, out_dim, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]
Bases: LogicBase
Fully-connected logic gate layer with differentiable learning.
This module provides the core implementation of Differentiable Deep Logic Gate Networks. Each neuron learns a Boolean logic function (LUT) that operates on a subset of input features.
- Parameters:
in_dim (int) – Number of input features.
out_dim (int) – Number of neurons (output features).
device (str) – Device to run the layer on (‘cpu’ or ‘cuda’).
grad_factor (float) – Gradient scaling factor.
lut_rank (int) – Rank of the LUTs used in the layer.
parametrization (str) – Type of parametrization to use (‘raw’, ‘warp’, ‘light’).
parametrization_kwargs (dict) – Additional keyword arguments for parametrization.
connections (str) – Type of connections to use (‘fixed’, ‘learnable’, etc.).
connections_kwargs (dict) – Additional keyword arguments for connections.
- __init__(in_dim, out_dim, device='cpu', grad_factor=1.0, lut_rank=2, parametrization='raw', parametrization_kwargs=None, connections='fixed', connections_kwargs=None)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]
Applies the LogicDense transformation to the input.
For each neuron, the layer:
1. Selects lut_rank input features according to the connection pattern in self.indices.
2. Samples (or selects) LUT weights based on self.weight and the sampler strategy.
3. Evaluates the resulting binary operation.
- Parameters:
x – Input tensor of shape (..., in_dim). The last dimension must match self.in_dim.
- Returns:
A tensor of shape (..., out_dim) containing the neuron outputs.
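The three steps above can be sketched for a single hard neuron in plain Python. This is a conceptual illustration, not the torchlogix implementation: the bit-address layout of the truth table is an assumption for illustration, and the real layer evaluates a differentiable relaxation of the LUT rather than a hard lookup.

```python
# Conceptual sketch: one hard LogicDense neuron with lut_rank=2.
# The neuron gathers two input bits via its connection indices and
# looks the result up in a 4-entry truth table.

def lut_neuron(x, indices, truth_table):
    """Apply a 2-input LUT to the selected features of x."""
    a, b = x[indices[0]], x[indices[1]]
    address = (int(a) << 1) | int(b)   # read the 2 bits as a table row index
    return truth_table[address]

x = [True, False, True, False]          # input features
indices = (0, 3)                        # this neuron reads features 0 and 3
xor_table = [False, True, True, False]  # truth table of XOR
y = lut_neuron(x, indices, xor_table)   # XOR(True, False) -> True
```

A full layer would run out_dim such neurons in parallel, each with its own indices and truth table.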
- extra_repr()[source]
Return the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- get_luts_and_ids(**kwargs)[source]
Computes the most probable LUT and its ID for each neuron.
Method is dependent on the chosen parametrization.
- Returns:
luts: Boolean tensor of shape (out_dim, 2 ** lut_rank), where each row is the most probable LUT truth table for a neuron (entry is True for output 1, False for 0).
ids: Integer tensor of shape (out_dim,), where each entry is the integer ID of the corresponding LUT, obtained by interpreting its truth table as a binary number (or None if not applicable for high lut_rank).
- Return type:
Tuple[torch.Tensor, torch.Tensor]
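The truth-table-to-ID mapping can be sketched in plain Python. This is a conceptual illustration; the bit order used here (row 0 as the least significant bit) is an assumption, and torchlogix may read the table in the opposite order.

```python
# Conceptual sketch: turning a LUT truth table into an integer ID by
# interpreting it as a binary number, as get_luts_and_ids describes.

def lut_id(truth_table):
    """Read a boolean truth table as a binary number (row 0 = LSB)."""
    return sum(int(bit) << i for i, bit in enumerate(truth_table))

xor_table = [False, True, True, False]  # rows 1 and 2 are True
xor_id = lut_id(xor_table)              # 0b0110 -> 6
```

With lut_rank=2 there are 2 ** 4 = 16 possible IDs, one per two-input Boolean function, which is why the ID is a compact label for the learned gate.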
- get_luts(**kwargs)[source]
Computes the most probable LUT for each neuron.
Method is dependent on the chosen parametrization.
- Returns:
- Boolean tensor of shape (out_dim, 2 ** lut_rank), where each row is the most probable LUT truth table for a neuron (entry is True for output 1, False for 0).
- Return type:
torch.Tensor
Other Layers
- class torchlogix.layers.GroupSum(k, tau=1.0, beta=0.0, device='cpu')[source]
Bases: Module
The GroupSum module.
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
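A GroupSum-style readout can be sketched in plain Python. This is a conceptual illustration under stated assumptions (contiguous equal-size groups, tau as a temperature divisor); the exact grouping and the role of beta in torchlogix are not shown here.

```python
# Conceptual sketch: group-sum aggregation. The binary activations of
# the last logic layer are split into k equal groups and summed,
# producing k class scores; dividing by tau scales the logits.

def group_sum(x, k, tau=1.0):
    """Split x into k contiguous groups, sum each, and scale by 1/tau."""
    group_size = len(x) // k
    return [
        sum(x[g * group_size:(g + 1) * group_size]) / tau
        for g in range(k)
    ]

bits = [1, 0, 1, 1, 0, 0, 1, 0]         # 8 neuron outputs, k=2 classes
scores = group_sum(bits, k=2, tau=1.0)  # -> [3.0, 1.0]
```

Counting set bits per group turns a purely Boolean network into a ranker: the predicted class is simply the group with the most active neurons.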
- class torchlogix.layers.Binarization(thresholds, feature_dim=-2, **kwargs)[source]
Abstract base class for binarization modules.
- __init__(thresholds, feature_dim=-2, **kwargs)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- static get_distributive_thresholds(data_set, num_bits, one_per)[source]
Compute distributive (quantile-based) thresholds.
- Parameters:
one_per (str) – Scope of the thresholds:
“global”: one set of thresholds for the entire tensor
“feature”: per-feature thresholds (last dimension)
“channel”: per-channel thresholds (dim=1, conv tensors)
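Quantile-based thresholding can be sketched in plain Python. This is a conceptual illustration of the "global" case only; the use of evenly spaced quantiles is an assumption, and the real method operates on tensors and also supports per-feature and per-channel scopes.

```python
# Conceptual sketch: distributive (quantile-based) thresholds. Picking
# num_bits thresholds at evenly spaced quantiles of the data splits it
# into num_bits + 1 roughly equally populated bins.

def distributive_thresholds(values, num_bits):
    """Pick num_bits thresholds at evenly spaced quantiles of values."""
    ordered = sorted(values)
    n = len(ordered)
    return [
        ordered[int(n * (b + 1) / (num_bits + 1))]
        for b in range(num_bits)
    ]

data = list(range(100))                 # uniform data: 0 .. 99
ths = distributive_thresholds(data, 3)  # quartile cuts -> [25, 50, 75]
```

Each threshold then yields one binary feature (value > threshold), so num_bits thresholds encode a scalar input as num_bits bits for the logic layers downstream.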
- class torchlogix.layers.FixedBinarization(thresholds, feature_dim=-2, **kwargs)[source]
Bases: Binarization
Binarization with fixed (non-learnable) thresholds.
- class torchlogix.layers.SoftBinarization(thresholds, temperature=0.1, feature_dim=-2, **kwargs)[source]
Bases: Binarization
Soft binarization with fixed thresholds using a sigmoid.
- class torchlogix.layers.LearnableBinarization(thresholds, feature_dim=-2, temperature_sampling=0.1, temperature_softplus=0.1, forward_sampling='soft', max_grad_norm=0.001, **kwargs)[source]
Bases:
Binarization