Layers Module
The layers module contains the core neural network layers for building logic gate networks.
Convolutional Layers
- class torchlogix.layers.LogicConv2d(in_dim, device='cuda', grad_factor=1.0, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=None, implementation=None, connections='random', weight_init='residual', stride=1, padding=0, parametrization='raw', temperature=1.0, forward_sampling='soft')[source]
Bases: Module
2D convolutional layer with differentiable logic operations.
This layer implements a 2D convolution with differentiable logic operations, using a binary tree structure to combine input features via logic gates.
- __init__(in_dim, device='cuda', grad_factor=1.0, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=None, implementation=None, connections='random', weight_init='residual', stride=1, padding=0, parametrization='raw', temperature=1.0, forward_sampling='soft')[source]
Initialize the 2d logic convolutional layer.
- Parameters:
  - in_dim (Union[int, tuple[int, int]]) – Input dimensions (height, width)
  - device (str) – Device to run the layer on
  - grad_factor (float) – Gradient factor for the logic operations
  - channels (int) – Number of input channels
  - num_kernels (int) – Number of output kernels
  - tree_depth (int) – Depth of the binary tree
  - receptive_field_size (int) – Size of the receptive field
  - implementation (str) – Implementation type (“python” or “cuda”)
  - connections (str) – Connection type: “random” or “unique”. The latter overrides the tree_depth parameter and uses a full binary tree over all possible connections within the receptive field.
  - stride (int) – Stride of the convolution
  - padding (int) – Padding of the convolution
  - parametrization (str) – Parametrization to use (“raw” or “walsh”)
- get_random_receptive_field_pairs()[source]
Generate random index pairs within the receptive field for each kernel. May contain self-connections and duplicate pairs.
- get_random_unique_receptive_field_pairs()[source]
Generate random unique index pairs within the receptive field for each kernel. No self-connections or duplicate pairs.
- apply_sliding_window(pairs_tuple)[source]
Apply sliding window to the receptive field pairs across all kernel positions.
- training: bool
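A minimal usage sketch for LogicConv2d. All values are illustrative, and the NCHW input layout and binarized inputs are assumptions not confirmed by this reference:

```python
import torch
from torchlogix.layers import LogicConv2d

# Illustrative configuration; parameter semantics are documented above.
conv = LogicConv2d(
    in_dim=(16, 16),           # input height and width
    channels=1,                # input channels
    num_kernels=8,             # output kernels
    receptive_field_size=3,    # 3x3 receptive field
    tree_depth=2,              # depth of the per-kernel binary tree
    connections="random",
    implementation="python",   # CPU-friendly implementation
    device="cpu",
)

# Assumption: inputs are (batch, channels, height, width) with values in [0, 1].
x = (torch.rand(4, 1, 16, 16) > 0.5).float()
y = conv(x)
```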
- class torchlogix.layers.LogicConv3d(in_dim, device='cuda', grad_factor=1.0, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=None, implementation=None, connections='random', stride=1, padding=None)[source]
Bases: Module
3D convolutional layer with differentiable logic operations.
This layer implements a 3D convolution with differentiable logic operations, using a binary tree structure to combine input features via logic gates.
- __init__(in_dim, device='cuda', grad_factor=1.0, channels=1, num_kernels=16, tree_depth=None, receptive_field_size=None, implementation=None, connections='random', stride=1, padding=None)[source]
Initialize the 3d logic convolutional layer.
- Parameters:
  - in_dim (Union[int, tuple[int, int, int]]) – Input dimensions (height, width, depth)
  - device (str) – Device to run the layer on
  - grad_factor (float) – Gradient factor for the logic operations
  - channels (int) – Number of input channels
  - num_kernels (int) – Number of output kernels
  - tree_depth (int) – Depth of the binary tree
  - receptive_field_size (Union[int, tuple[int, int, int]]) – Size of the receptive field
  - implementation (str) – Implementation type (“python” or “cuda”)
  - connections (str) – Connection type: “random” or “unique”. The latter overrides the tree_depth parameter and uses a full binary tree over all possible connections within the receptive field.
  - stride (int) – Stride of the convolution
  - padding (int) – Padding of the convolution
- get_random_receptive_field_pairs()[source]
Generate random index pairs within the receptive field for each kernel. May contain self-connections and duplicate pairs.
- get_random_unique_receptive_field_pairs()[source]
Generate random unique index pairs within the receptive field for each kernel. No self-connections or duplicate pairs.
- apply_sliding_window(pairs_tuple)[source]
Apply sliding window to the receptive field pairs across all kernel positions.
- training: bool
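The 3D layer mirrors the 2D API; a brief sketch (values illustrative, and the input layout following the in_dim ordering is an assumption):

```python
import torch
from torchlogix.layers import LogicConv3d

conv3d = LogicConv3d(
    in_dim=(8, 8, 8),          # (height, width, depth)
    channels=1,
    num_kernels=8,
    receptive_field_size=3,
    tree_depth=2,
    connections="random",
    implementation="python",
    device="cpu",
)

# Assumption: inputs follow the in_dim ordering, i.e. (batch, channels, height, width, depth).
x = (torch.rand(4, 1, 8, 8, 8) > 0.5).float()
y = conv3d(x)
```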
Dense Layers
- class torchlogix.layers.LogicDense(in_dim, out_dim, device='cpu', grad_factor=1.0, implementation=None, connections='random', weight_init='residual', parametrization='raw', temperature=1.0, forward_sampling='soft')[source]
Bases: Module
The core module for differentiable logic gate networks. Provides a differentiable logic gate layer.
- __init__(in_dim, out_dim, device='cpu', grad_factor=1.0, implementation=None, connections='random', weight_init='residual', parametrization='raw', temperature=1.0, forward_sampling='soft')[source]
- Parameters:
  - in_dim (int) – input dimensionality of the layer
  - out_dim (int) – output dimensionality of the layer
  - device (str) – device (options: ‘cuda’ / ‘cpu’)
  - grad_factor (float) – for deep models (>6 layers), the grad_factor should be increased (e.g., to 2) to avoid vanishing gradients
  - implementation (str) – implementation to use (options: ‘cuda’ / ‘python’)
  - connections (str) – method for initializing the connectivity of the logic gate network
Note
The CUDA implementation is the fast implementation. As the name implies, the CUDA implementation is only available for device=’cuda’. The python implementation exists for two reasons: (1) to provide an easy-to-understand implementation of differentiable logic gate networks, and (2) to provide a CPU implementation of differentiable logic gate networks.
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- forward_cuda_eval(x)[source]
WARNING: this is an in-place operation.
- Parameters:
  - x (PackBitsTensor)
- Returns:
- extra_repr()[source]
Return the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- training: bool
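A minimal sketch of stacking LogicDense layers into a small logic gate network. Layer sizes and the binarized input encoding are illustrative, not prescribed by this reference:

```python
import torch
from torch import nn
from torchlogix.layers import LogicDense

# Two stacked logic gate layers; per the note above, for deeper stacks
# (>6 layers) increase grad_factor (e.g., to 2) to avoid vanishing gradients.
net = nn.Sequential(
    LogicDense(in_dim=64, out_dim=128, device="cpu", implementation="python"),
    LogicDense(in_dim=128, out_dim=128, device="cpu", implementation="python"),
)

x = (torch.rand(32, 64) > 0.5).float()  # illustrative binarized features
out = net(x)
```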
Other Layers
- class torchlogix.layers.GroupSum(k, tau=1.0, beta=0.0, device='cuda')[source]
Bases: Module
The GroupSum module.
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- extra_repr()[source]
Return the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- training: bool
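A sketch of GroupSum as a classification head, assuming the usual convention from differentiable logic gate networks: the last dimension is split into k equal groups, each group is summed, and the result is scaled by 1/tau. This behavior is an assumption; consult the source for the exact semantics:

```python
import torch
from torchlogix.layers import GroupSum

head = GroupSum(k=10, tau=10.0, device="cpu")  # 10 classes, temperature 10

# Assumption: 640 logic-gate outputs are pooled into 10 groups of 64.
x = torch.rand(32, 640)
scores = head(x)  # expected shape: (32, 10)
```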
- class torchlogix.layers.LearnableThermometerThresholding(init_thresholds, slope=10.0)[source]
Bases: Module
- __init__(init_thresholds, slope=10.0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
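A sketch of the thermometer encoder, assuming init_thresholds accepts a 1-D tensor of initial threshold values and that each threshold contributes a soft comparison of roughly sigmoid(slope * (x - threshold)). These semantics are assumptions, not confirmed by this reference:

```python
import torch
from torchlogix.layers import LearnableThermometerThresholding

# Three learnable thresholds with a soft (sigmoid) slope of 10.
encoder = LearnableThermometerThresholding(
    init_thresholds=torch.tensor([0.25, 0.50, 0.75]),
    slope=10.0,
)

x = torch.rand(32, 784)   # e.g., flattened grayscale images in [0, 1]
codes = encoder(x)        # soft thermometer-encoded features
```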