neural network (nn) extensions

Since PhotonTorch is first and foremost a photonic simulation framework, it requires some extra functionality that PyTorch does not offer out of the box.

Below you can find a short summary:

  • photontorch.nn.Buffer: A special kind of tensor that is automatically added to the ._buffers attribute of its Module. Buffers are typically used for model parameters that do not require gradients.

  • photontorch.nn.Module: Extends torch.nn.Module with some extra features, such as automatically registering a photontorch.nn.Buffer in its ._buffers attribute, a modified .cuda() call, and some extra convenience methods.

  • photontorch.nn.BoundedParameter: A special kind of torch.nn.Parameter that is constrained to a given range. Under the hood, it registers an unbounded weight in the photontorch.nn.Module together with a class property that computes the desired parameter value on the fly by applying a scaled sigmoid to the weight.

  • photontorch.nn.MSELoss: A mean squared error loss function that takes latency differences between the input stream and the target stream into account.

  • photontorch.nn.BERLoss: A bit error rate loss function that takes latency differences between the input stream and the target stream into account. The resulting BER is not differentiable.

  • photontorch.nn.BitStreamGenerator: A bitstream generator with optional lowpass filtering.

nn

class photontorch.nn.nn.BERLoss(threshold=0.5, latency=0.0, warmup=0, bitrate=40000000000.0, samplerate=160000000000.0)

Bases: photontorch.nn.nn._Loss

Bit Error Rate (non-differentiable)

__init__(threshold=0.5, latency=0.0, warmup=0, bitrate=40000000000.0, samplerate=160000000000.0)
Parameters
  • threshold (float) – threshold value (where to place the 0/1 threshold)

  • latency (float) – fractional latency [in bit lengths]. This value can be a floating-point number larger than 1.

  • warmup (int) – integer number of warmup bits. warmup bits are disregarded during the loss calculation.

  • bitrate (float) – the bit rate of the signal [in Hz]

  • samplerate (float) – the sample rate of the signal [in Hz]

forward(prediction, target, threshold=None, latency=None, warmup=None, bitrate=None, samplerate=None)

Calculate the loss.

Parameters
  • prediction (Tensor) – prediction power tensor. Should be broadcastable to a tensor with shape (# timesteps, # wavelengths, # readouts, # batches).

  • target (Tensor) – target power tensor. Should be broadcastable to the same shape as prediction.

  • threshold (optional, float) – override threshold value (where to place the 0/1 threshold).

  • latency (optional, float) – [bits] override fractional latency in bit lengths; can be a floating-point number larger than 1.

  • warmup (optional, int) – [bits] override integer number of warmup bits; warmup bits are disregarded during the loss calculation.

  • bitrate (optional, float) – [1/s] override data rate of the bitstream (defaults to the bitrate found in the environment).

  • samplerate (optional, float) – [1/s] override sample rate of the signal (defaults to the samplerate found in the environment).

Note

If a bitrate and/or samplerate can be found in the current environment, those values are treated as keyword arguments and hence take precedence over the values given during the loss initialization.

Note

Although the causality of using negative latencies is questionable, they are allowed. However, each (fractional) negative latency should be compensated with an (integer) number of warmup bits (rounded up) to make it work.
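
Leaving the latency handling aside, the core BER computation can be sketched in plain Python. This is a hypothetical simplification (not the library's implementation) that ignores the wavelength/readout/batch dimensions, resampling, and fractional latency, and assumes one power sample per bit:

```python
def bit_error_rate(prediction, target, threshold=0.5, warmup=0):
    """Fraction of mismatched bits after thresholding (simplified sketch).

    prediction, target: 1D sequences of power samples, one sample per bit.
    warmup: number of leading bits disregarded in the calculation.
    """
    # threshold both streams into bits, skipping the warmup bits
    pred_bits = [p > threshold for p in prediction[warmup:]]
    target_bits = [t > threshold for t in target[warmup:]]
    # count mismatches and normalize by the number of compared bits
    errors = sum(p != b for p, b in zip(pred_bits, target_bits))
    return errors / len(pred_bits)
```

Because the thresholding is a hard comparison, the result carries no gradient information, which is why BERLoss is documented as non-differentiable.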

class photontorch.nn.nn.BitStreamGenerator(bitrate=40000000000.0, samplerate=160000000000.0, cutoff_frequency=None, filter_order=1, seed=None, dtype=None, device=None)

Bases: photontorch.nn.nn.Module

Generate a bitstream from a sequence of bits (or from a random seed)

__init__(bitrate=40000000000.0, samplerate=160000000000.0, cutoff_frequency=None, filter_order=1, seed=None, dtype=None, device=None)
Parameters
  • bitrate (float) – [1/s] data rate of the bitstream

  • samplerate (float) – [1/s] sample rate of the bitstream

  • cutoff_frequency (float) – [1/s] cutoff frequency of the bitstream. If None: no lowpass filtering.

  • filter_order (int) – filter order to enforce cutoff frequency

  • seed (int) – seed used to generate bits (if needed)

  • dtype (torch.dtype) – dtype to generate the bits for. None -> torch.get_default_dtype()

  • device (torch.device) – device to generate the bits on. None -> "cpu"

Note

Although the causality of using negative latencies is questionable, they are allowed. However, each (fractional) negative latency should be compensated with an (integer) number of warmup bits (rounded up) to make it work.

forward(bits=100, bitrate=None, samplerate=None, cutoff_frequency=None, filter_order=None, seed=None, dtype=None, device=None)

generate a bitstream from a sequence of bits (or from a random seed)

Parameters
  • bits (int|sequence) –

    • if int: generate that number of bits, then create stream.

    • if sequence: interpret the sequence as bits, then create stream.

  • bitrate (optional, float) – [1/s] override data rate of the bitstream (defaults to bitrate found in environment)

  • samplerate (optional, float) – [1/s] override the sample rate of the signal (defaults to samplerate found in environment)

  • cutoff_frequency (optional, float) – [1/s] override cutoff frequency of the bitstream. If None: no lowpass filtering.

  • filter_order (optional, int) – override filter order to enforce cutoff frequency

  • seed (optional, int) – override seed used to generate bits (if needed)

  • dtype (optional, torch.dtype) – override dtype to generate the bits for. None -> torch.get_default_dtype()

  • device (optional, torch.device) – override device to generate the bits on. None -> "cpu"

Note

If a bitrate and/or samplerate can be found in the current environment, those values are treated as keyword arguments and hence take precedence over the values given during the BitStreamGenerator initialization.

Note

Although the causality of using negative latencies is questionable, they are allowed. However, each (fractional) negative latency should be compensated with an (integer) number of warmup bits (rounded up) to make it work.
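
Without lowpass filtering, the core operation amounts to repeating each bit for samplerate/bitrate samples. A minimal sketch (a hypothetical helper, ignoring filtering, dtype, and device handling):

```python
def make_stream(bits, bitrate=40e9, samplerate=160e9):
    """Repeat each bit so the stream is sampled at `samplerate` (sketch)."""
    samples_per_bit = int(round(samplerate / bitrate))
    # each bit becomes samples_per_bit identical power samples
    return [float(b) for b in bits for _ in range(samples_per_bit)]
```

With the default rates each bit spans four samples; the real generator additionally applies the optional lowpass filter of the given filter_order when a cutoff_frequency is set.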

class photontorch.nn.nn.BoundedParameter(data=None, bounds=(0, 1), requires_grad=True)

Bases: torch.nn.parameter.Parameter

A BoundedParameter is a special Parameter that is bounded to a given range.

Under the hood, it registers an unbounded weight in the photontorch.nn.Module together with a class property that computes the desired parameter value on the fly by applying a scaled sigmoid to the weight.

Note

For the registration of the BoundedParameter to work, you need to use the photontorch.nn.Module, which is a subclass of torch.nn.Module.

class photontorch.nn.nn.Buffer(data=None, requires_grad=False)

Bases: torch.Tensor

A Buffer is a tensor that is automatically registered in its Module's _buffers attribute.

Each Module has an OrderedDict named _buffers. In this dictionary, all model-related parameters that do not require optimization are stored.

The Buffer class makes it easier to register a buffer in the Module. If an attribute of a module is set to a Buffer, it will automatically be added to the _buffers attribute.

Note

For the automatic registration of the Buffer to work, you need to use the photontorch.nn.Module, which is a subclass of torch.nn.Module.
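
The registration mechanism can be sketched in plain Python. These are hypothetical stand-in classes (not the actual torch/photontorch implementation):

```python
class Buffer:
    """Stand-in for photontorch.nn.Buffer: data without gradients."""
    def __init__(self, data):
        self.data = data

class Module:
    """Stand-in Module whose __setattr__ auto-registers Buffers."""
    def __init__(self):
        # bypass __setattr__ while creating the registry itself
        object.__setattr__(self, "_buffers", {})

    def __setattr__(self, name, value):
        if isinstance(value, Buffer):
            self._buffers[name] = value  # automatic registration
        object.__setattr__(self, name, value)
```

Setting an attribute like m.delays = Buffer(...) on such a module stores the data in m._buffers["delays"], so it can travel along with .cpu()/.cuda()/.to() calls without ever receiving gradients.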

class photontorch.nn.nn.MSELoss(latency=0.0, warmup=0, bitrate=40000000000.0, samplerate=160000000000.0)

Bases: photontorch.nn.nn._Loss

Mean Squared Error for bitstreams

forward(prediction, target, latency=None, warmup=None, bitrate=None, samplerate=None)

Calculate the loss.

Parameters
  • prediction (Tensor) – prediction power tensor. Should be broadcastable to a tensor with shape (# timesteps, # wavelengths, # readouts, # batches).

  • target (Tensor) – target power tensor. Should be broadcastable to the same shape as prediction.

  • latency (optional, float) – [bits] override fractional latency in bit lengths; can be a floating-point number larger than 1.

  • warmup (optional, int) – [bits] override integer number of warmup bits; warmup bits are disregarded during the loss calculation.

  • bitrate (optional, float) – [1/s] override data rate of the bitstream (defaults to the bitrate found in the environment).

  • samplerate (optional, float) – [1/s] override sample rate of the signal (defaults to the samplerate found in the environment).

Note

If a bitrate and/or samplerate can be found in the current environment, those values are treated as keyword arguments and hence take precedence over the values given during the loss initialization.

Note

Although the causality of using negative latencies is questionable, they are allowed. However, each (fractional) negative latency should be compensated with an (integer) number of warmup bits (rounded up) to make it work.
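
Ignoring the wavelength/readout/batch dimensions and fractional (sub-sample) latency, the latency-aware MSE can be sketched as follows (a hypothetical helper, not the library's implementation):

```python
def mse_loss(prediction, target, latency_bits=0.0, warmup=0,
             bitrate=40e9, samplerate=160e9):
    """MSE between two streams after removing latency and warmup (sketch)."""
    samples_per_bit = int(round(samplerate / bitrate))
    shift = int(round(latency_bits * samples_per_bit))  # latency in samples
    skip = warmup * samples_per_bit                     # warmup samples dropped
    pred = prediction[skip + shift:]
    targ = target[skip:]
    # compare only the overlapping part of both streams
    n = min(len(pred), len(targ))
    return sum((p - t) ** 2 for p, t in zip(pred[:n], targ[:n])) / n
```

Note that a negative latency_bits makes the start index negative unless it is offset by enough warmup samples, which mirrors the note above about compensating negative latencies with warmup bits.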

class photontorch.nn.nn.Module

Bases: torch.nn.modules.module.Module

An extension of torch.nn.Module with some extra features.

__init__()

Initializes internal Module state, shared by both nn.Module and ScriptModule.

cpu()

Transform the Module to live on the CPU

cuda(device=None)

Transform the Module to live on the GPU

Parameters

device (int) – index of the GPU device.

property is_cuda

check if the model parameters live on the GPU

to(*args, **kwargs)

move the module to a device (cpu or cuda)
