TorchLogix Package

The main TorchLogix package provides the core functionality for differentiable logic neural networks.

Main Classes

CompiledLogicNet

Unified compiled logic network that handles convolutional, pooling, and linear layers.

Package Contents

The main package for torchlogix.

class torchlogix.CompiledLogicNet(model, input_shape, device='cpu', num_bits=64, cpu_compiler='gcc', verbose=False, use_bitpacking=True, apply_groupsum_scaling=True)[source]

Bases: Module

Unified compiled logic network that handles convolutional, pooling, and linear layers.

__init__(model, input_shape, device='cpu', num_bits=64, cpu_compiler='gcc', verbose=False, use_bitpacking=True, apply_groupsum_scaling=True)[source]

Initialize the compiled logic network for the given model and input_shape.

get_gate_code(var1, var2, gate_op)[source]

Generate C code for a logic gate operation.

Return type:

str

get_gate_verilog(var1, var2, gate_op)[source]

Generate Verilog code for a logic gate operation.

Parameters:
  • var1 (str) – Name of first input variable (Verilog syntax)

  • var2 (str) – Name of second input variable (Verilog syntax)

  • gate_op (int) – Gate operation ID (0-15)

Return type:

str

Returns:

Verilog expression string
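The gate_op ID selects one of the 16 possible two-input boolean functions. A minimal Python sketch of one common truth-table encoding, where bit i of the ID is the gate's output for input pair (a, b) with i = (a << 1) | b (this ordering is an assumption for illustration; torchlogix may index the table differently):

```python
def apply_gate(gate_op: int, a: int, b: int) -> int:
    """Evaluate a two-input boolean gate given its 4-bit truth-table ID (0-15)."""
    index = (a << 1) | b          # which row of the truth table
    return (gate_op >> index) & 1

# Under this ordering, AND is 0b1000 (output 1 only for a=1, b=1),
# OR is 0b1110, and XOR is 0b0110.
AND, OR, XOR = 0b1000, 0b1110, 0b0110
assert apply_gate(AND, 1, 1) == 1 and apply_gate(AND, 1, 0) == 0
assert apply_gate(XOR, 1, 0) == 1 and apply_gate(XOR, 1, 1) == 0
```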

get_c_code()[source]

Generate the complete C code for the network.

Return type:

str

get_verilog_code(module_name='torchlogix_net', pipeline_stages=0)[source]

Generate complete Verilog code for the network.

Parameters:
  • module_name (str) – Name of the top-level Verilog module

  • pipeline_stages (int) – Number of pipeline stages to insert:
    - 0: fully combinational (no registers, 1 cycle latency, may not synthesize for large models)
    - 1: single output register (1 cycle latency, helps synthesis)
    - N: divide layers into N pipeline stages (N cycle latency); use len(layers) for full layer-level pipelining

Return type:

str

Returns:

Complete Verilog code as a string with specified pipelining

Examples

# Fully combinational (original behavior)
verilog = model.get_verilog_code(pipeline_stages=0)

# Output register only (helps with large designs)
verilog = model.get_verilog_code(pipeline_stages=1)

# 4 pipeline stages (divide layers into 4 groups)
verilog = model.get_verilog_code(pipeline_stages=4)

# Full layer-level pipelining (register between each layer)
verilog = model.get_verilog_code(pipeline_stages=999)  # or len(layers)
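The stage grouping can be sketched as a simple contiguous partition of the layer list (illustrative only; the names and the exact grouping are assumptions, not torchlogix internals):

```python
def partition_layers(num_layers: int, pipeline_stages: int) -> list:
    """Split layer indices into contiguous pipeline-stage groups.
    A register goes after each group; latency equals the number of groups."""
    if pipeline_stages <= 0:
        return [list(range(num_layers))]  # fully combinational: one group
    stages = min(pipeline_stages, num_layers)  # cap at full layer-level pipelining
    groups, start = [], 0
    for s in range(stages):
        # distribute layers as evenly as possible across the remaining stages
        end = start + (num_layers - start) // (stages - s)
        groups.append(list(range(start, end)))
        start = end
    return groups

# 8 layers in 4 stages -> 2 layers per stage, 4-cycle latency
assert partition_layers(8, 4) == [[0, 1], [2, 3], [4, 5], [6, 7]]
# pipeline_stages >= num_layers -> one layer per stage (full pipelining)
assert partition_layers(3, 999) == [[0], [1], [2]]
```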

export_hdl(output_dir, module_name='torchlogix_net', format='verilog', pipeline_stages=0)[source]

Export the network as HDL files.

Parameters:
  • output_dir (str) – Directory to write HDL files

  • module_name (str) – Name of the top-level module

  • format (str) – HDL format: "verilog" or "vhdl" (only "verilog" is currently supported)

  • pipeline_stages (int) – Number of pipeline stages (0 = combinational, see get_verilog_code)

Return type:

None

compile(opt_level=1, save_lib_path=None, verbose=False)[source]

Compile the network to a shared library.

forward(x, verbose=False)[source]

Forward pass through the compiled network.

Return type:

IntTensor

static load(save_lib_path, input_shape, num_classes=None, num_bits=64)[source]

Load a compiled network from a shared library.

Note: input_shape must be supplied here because the pre-compiled shared library is loaded without access to the original model structure.

class torchlogix.PackBitsTensor(t, bit_count=32, device='cuda')[source]

Bases: object

__init__(t, bit_count=32, device='cuda')[source]
group_sum(k)[source]
flatten(start_dim=0, end_dim=-1, **kwargs)[source]

Returns the PackBitsTensor object itself. Arguments are ignored.
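The packing and group-sum ideas behind PackBitsTensor can be sketched in plain Python: bit_count boolean samples are packed into one machine word so a single bitwise op evaluates that many inputs at once, and group_sum(k) adds up neuron outputs in k equal groups (e.g. one group per class). The helper names and LSB-first layout below are illustrative assumptions, not the torchlogix internals:

```python
def pack_bits(bits: list, bit_count: int = 32) -> list:
    """Pack 0/1 values into integer words of bit_count bits each (LSB first)."""
    words = []
    for start in range(0, len(bits), bit_count):
        word = 0
        for offset, bit in enumerate(bits[start:start + bit_count]):
            word |= (bit & 1) << offset
        words.append(word)
    return words

def group_sum(bits: list, k: int) -> list:
    """Sum neuron outputs in k equal contiguous groups."""
    group_size = len(bits) // k
    return [sum(bits[i * group_size:(i + 1) * group_size]) for i in range(k)]

assert pack_bits([1, 0, 1, 1], bit_count=4) == [0b1101]
assert group_sum([1, 1, 0, 0, 1, 1], k=2) == [2, 2]
```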

Submodules