API Reference
Functional Interface
- torch_dxdt.dxdt(x, t, kind=None, dim=-1, **kwargs)[source]
Compute the derivative of x with respect to t using the specified method.
This is the functional interface to the differentiation methods. It creates an instance of the specified method and calls its d method.
- Parameters:
x – torch.Tensor of shape (…, T) containing the signal values.
t – torch.Tensor of shape (T,) containing the time points.
kind – Method name. One of:
- "finite_difference": Symmetric finite differences
- "savitzky_golay": Savitzky-Golay polynomial filtering
- "spectral": FFT-based spectral differentiation
- "spline": Smoothing spline differentiation
- "kernel": Gaussian process kernel methods
- "kalman": Kalman smoother
- "whittaker": Whittaker-Eilers smoothing
If None, uses finite_difference with k=1.
dim – Dimension along which to differentiate. Default -1.
**kwargs – Keyword arguments passed to the method constructor.
- Returns:
torch.Tensor of same shape as x containing dx/dt.
- Available kwargs by method:
finite_difference: k (window size), periodic (bool)
savitzky_golay: window_length, polyorder, order, periodic
spectral: order, filter_func
spline: s (smoothing parameter)
kernel: sigma, lmbd, kernel
kalman: alpha
whittaker: lmbda (smoothing parameter), d_order (difference order)
Example
>>> t = torch.linspace(0, 2*torch.pi, 100)
>>> x = torch.sin(t)
>>> dx = dxdt(x, t, kind="finite_difference", k=1)
- torch_dxdt.dxdt_orders(x, t, orders=(1, 2), kind=None, dim=-1, **kwargs)[source]
Compute multiple derivative orders simultaneously.
This function efficiently computes multiple derivative orders in a single pass, avoiding redundant computation. For methods that support it (e.g., SavitzkyGolay), shared computation is reused across orders.
- Parameters:
x – torch.Tensor of shape (…, T) containing the signal values.
t – torch.Tensor of shape (T,) containing the time points.
orders – Sequence of derivative orders to compute. Default (1, 2). Order 0 returns the smoothed signal (if supported).
kind – Method name. Currently best supported by:
- "savitzky_golay": Most efficient; computes all orders in one pass
Other methods fall back to computing each order separately.
dim – Dimension along which to differentiate. Default -1.
**kwargs – Keyword arguments passed to the method constructor.
- Returns:
dict mapping order -> torch.Tensor of same shape as x.
Example
>>> t = torch.linspace(0, 2*torch.pi, 100)
>>> x = torch.sin(t)
>>> derivs = dxdt_orders(x, t, orders=[0, 1, 2],
...                      kind="savitzky_golay",
...                      window_length=11, polyorder=4)
>>> x_smooth = derivs[0]  # Smoothed signal
>>> dx = derivs[1]        # First derivative
>>> d2x = derivs[2]       # Second derivative
- torch_dxdt.smooth_x(x, t, kind=None, dim=-1, **kwargs)[source]
Compute the smoothed version of x using the specified method.
Not all methods support smoothing. Methods that do:
- spline
- kernel
- kalman
- whittaker
- Parameters:
x – torch.Tensor of shape (…, T) containing the signal values.
t – torch.Tensor of shape (T,) containing the time points.
kind – Method name (see dxdt for available methods).
dim – Dimension along which to smooth. Default -1.
**kwargs – Keyword arguments passed to the method constructor.
- Returns:
torch.Tensor of same shape as x containing the smoothed signal.
- Raises:
NotImplementedError – If the specified method does not support smoothing.
Base Class
- class torch_dxdt.Derivative[source]
Bases:
ABC
Abstract base class for numerical differentiation methods.
All differentiation methods should inherit from this class and implement the d method for computing derivatives.
- abstractmethod d(x, t, dim=-1)[source]
Compute the derivative of x with respect to t.
- Parameters:
x (Tensor) – Tensor of shape (…, T) containing the signal values. Multiple signals can be batched along leading dimensions.
t (Tensor) – Tensor of shape (T,) containing the time points. Must be evenly spaced for most methods.
dim (int) – The dimension along which to differentiate. Default is -1 (last dimension).
- Return type:
Tensor
- Returns:
Tensor of same shape as x containing the derivative dx/dt.
- d_orders(x, t, orders=(1, 2), dim=-1)[source]
Compute multiple derivative orders simultaneously.
This method computes multiple derivative orders in an efficient manner, avoiding redundant computation where possible. For methods that support it (e.g., SavitzkyGolay), shared computation like polynomial fitting is reused across orders.
- Parameters:
x (Tensor) – Tensor of shape (…, T) containing the signal values.
t (Tensor) – Tensor of shape (T,) containing the time points.
orders (Sequence[int]) – Sequence of derivative orders to compute. Default is (1, 2). Order 0 returns the smoothed signal (if supported).
dim (int) – The dimension along which to differentiate. Default is -1 (last dimension).
- Return type:
dict[int, Tensor]
- Returns:
Dictionary mapping order -> derivative tensor. Each tensor has the same shape as x.
Example
>>> sg = SavitzkyGolay(window_length=11, polyorder=4)
>>> derivs = sg.d_orders(x, t, orders=[1, 2])
>>> dx = derivs[1]   # First derivative
>>> d2x = derivs[2]  # Second derivative
- smooth(x, t, dim=-1)[source]
Compute the smoothed version of x (if supported by the method).
- Parameters:
x (Tensor) – Tensor of shape (…, T) containing the signal values.
t (Tensor) – Tensor of shape (T,) containing the time points.
dim (int) – The dimension along which to smooth. Default is -1.
- Return type:
Tensor
- Returns:
Tensor of same shape as x containing the smoothed signal.
- Raises:
NotImplementedError – If the method does not support smoothing.
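The contract above (implement d; d_orders and smooth build on it) can be sketched with a local stand-in for the base class, since only the interface matters here. The ForwardDifference subclass and its one-sided scheme are illustrative, not part of torch_dxdt:

```python
from abc import ABC, abstractmethod

import torch

# Local stand-in mirroring the documented torch_dxdt.Derivative interface
class Derivative(ABC):
    @abstractmethod
    def d(self, x: torch.Tensor, t: torch.Tensor, dim: int = -1) -> torch.Tensor:
        """Compute dx/dt along `dim`; output shape must match `x`."""

class ForwardDifference(Derivative):
    """Toy subclass: one-sided forward differences, last slope repeated."""

    def d(self, x: torch.Tensor, t: torch.Tensor, dim: int = -1) -> torch.Tensor:
        x = x.movedim(dim, -1)
        dt = t[1:] - t[:-1]
        dx = (x[..., 1:] - x[..., :-1]) / dt
        dx = torch.cat([dx, dx[..., -1:]], dim=-1)  # repeat last slope to keep shape
        return dx.movedim(-1, dim)

t = torch.linspace(0, 2 * torch.pi, 100)
dx = ForwardDifference().d(torch.sin(t), t)  # roughly cos(t)
```

Handling dim by moving it to the last axis and moving it back keeps the batched (…, T) convention from the parameter descriptions above.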
Differentiation Methods
Finite Difference
- class torch_dxdt.FiniteDifference(k=1, periodic=False)[source]
Bases:
Derivative
Compute the symmetric numerical derivative using finite differences.
Uses the Taylor series expansion to compute coefficients for symmetric finite difference schemes of arbitrary window size. The implementation uses nn.Conv1d with fixed weights for efficiency and differentiability.
- Parameters:
k (int) – Window size for the symmetric finite difference scheme. Default is 1.
periodic (bool) – If True, treat the signal as periodic at the boundaries. Default is False.
Example
>>> fd = FiniteDifference(k=1)
>>> t = torch.linspace(0, 2*torch.pi, 100)
>>> x = torch.sin(t)
>>> dx = fd.d(x, t)  # Should approximate cos(t)
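The conv1d idea can be illustrated in plain PyTorch. The kernel below is the standard k=1 central-difference stencil, with one-sided differences at the ends; this is a sketch of the technique, not the library's exact implementation:

```python
import torch
import torch.nn.functional as F

def central_diff(x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Symmetric finite difference (k=1) via conv1d, one-sided at the ends."""
    dt = t[1] - t[0]  # assumes uniform spacing
    # Cross-correlation with [-0.5, 0, 0.5] gives (x[i+1] - x[i-1]) / (2*dt)
    kernel = torch.tensor([[[-0.5, 0.0, 0.5]]]) / dt
    interior = F.conv1d(x.reshape(1, 1, -1), kernel).flatten()
    # One-sided differences at the two boundary points
    first = (x[1] - x[0]) / dt
    last = (x[-1] - x[-2]) / dt
    return torch.cat([first.reshape(1), interior, last.reshape(1)])

t = torch.linspace(0, 2 * torch.pi, 100)
x = torch.sin(t)
dx = central_diff(x, t)  # close to cos(t) away from the ends
```

Because the weights are fixed tensors inside a convolution, gradients flow through x, which is what makes the scheme differentiable.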
Savitzky-Golay
- class torch_dxdt.SavitzkyGolay(window_length, polyorder, order=1, deriv=None, pad_mode='replicate', periodic=False)[source]
Bases:
Derivative
Compute numerical derivatives using Savitzky-Golay filtering.
Fits a polynomial of given order to a window of points and takes the derivative of the polynomial. This provides smoothing while computing derivatives, making it robust to noise.
The implementation uses scipy to compute the filter coefficients once, then applies them efficiently using PyTorch convolution.
This method is particularly efficient for computing multiple derivative orders at once via d_orders(), as the same polynomial fit can yield all derivative orders in a single convolution pass.
- Parameters:
window_length (int) – Length of the filter window (must be odd and > polyorder).
polyorder (int) – Order of the polynomial used to fit the samples.
order (int) – Order of the derivative to compute. Default is 1. Use order=1 for the first derivative, order=2 for the second, etc.
deriv (int | None) – Alias for order (deprecated; use order instead).
pad_mode (str) – Padding mode for handling boundaries. Options are:
- 'replicate': Repeat the edge value (default; good for monotonic signals)
- 'reflect': Mirror the signal at the boundary (good for symmetric signals)
- 'circular': Wrap around (only for periodic signals)
periodic (bool) – Deprecated; use pad_mode='circular' instead. If True, sets pad_mode='circular'. Default is False.
Example
>>> sg = SavitzkyGolay(window_length=11, polyorder=4, order=1)
>>> t = torch.linspace(0, 2*torch.pi, 100)
>>> x = torch.sin(t) + 0.1 * torch.randn(100)
>>> dx = sg.d(x, t)  # First derivative
>>> # Compute multiple orders efficiently:
>>> derivs = sg.d_orders(x, t, orders=[0, 1, 2])
>>> x_smooth, dx, d2x = derivs[0], derivs[1], derivs[2]
>>> # Use reflect padding for better boundary behavior:
>>> sg_reflect = SavitzkyGolay(window_length=11, polyorder=4, pad_mode='reflect')
- VALID_PAD_MODES = ('replicate', 'reflect', 'circular')
- __init__(window_length, polyorder, order=1, deriv=None, pad_mode='replicate', periodic=False)[source]
- d_orders(x, t, orders=(1, 2), dim=-1)[source]
Compute multiple derivative orders simultaneously and efficiently.
This method computes all requested derivative orders with shared preprocessing (dim permutation, padding), avoiding redundant computation. This is more efficient than calling d() multiple times.
- Parameters:
x (Tensor) – Input tensor of shape (…, T) or (T,).
t (Tensor) – Time points tensor of shape (T,).
orders (Sequence[int]) – Sequence of derivative orders to compute. Default is (1, 2). Order 0 returns the smoothed signal. Maximum order is polyorder.
dim (int) – Dimension along which to differentiate. Default -1.
- Return type:
dict[int, Tensor]
- Returns:
Dictionary mapping order -> derivative tensor. Each tensor has the same shape as x.
Example
>>> sg = SavitzkyGolay(window_length=11, polyorder=4)
>>> derivs = sg.d_orders(x, t, orders=[0, 1, 2])
>>> x_smooth = derivs[0]  # Smoothed signal
>>> dx = derivs[1]        # First derivative
>>> d2x = derivs[2]       # Second derivative
Spectral
- class torch_dxdt.Spectral(order=1, filter_func=None)[source]
Bases:
Derivative
Compute numerical derivatives using spectral (Fourier) methods.
Transforms to Fourier space, multiplies by (i * omega)^order, and transforms back. This method is very accurate for smooth, periodic data.
- Parameters:
order (int) – Order of the derivative. Default is 1.
filter_func – Optional function to filter frequencies before differentiation. Takes wavenumbers as input and returns weights. Example: lambda k: (torch.abs(k) < 10).float()
Note
Assumes the data is periodic over the sample interval.
Works best for smooth, band-limited signals.
For non-periodic data, consider windowing or other methods.
Example
>>> spec = Spectral(order=1)
>>> t = torch.linspace(0, 2*torch.pi, 101)[:-1]  # periodic grid: drop the endpoint
>>> x = torch.sin(t)
>>> dx = spec.d(x, t)  # Should approximate cos(t)
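The transform, multiply by (i*omega)^order, invert recipe can be written directly with torch.fft. This standalone sketch assumes a uniform, periodic grid; it mirrors the method's description, not its internal code:

```python
import torch

def spectral_derivative(x: torch.Tensor, t: torch.Tensor, order: int = 1) -> torch.Tensor:
    """Differentiate by multiplying the FFT by (i*omega)**order (periodic data)."""
    n = x.shape[-1]
    dt = t[1] - t[0]
    # fftfreq returns cycles per unit; scale by 2*pi for angular frequency
    omega = 2 * torch.pi * torch.fft.fftfreq(n, d=dt)
    X = torch.fft.fft(x)
    dX = (1j * omega) ** order * X
    return torch.fft.ifft(dX).real

# Periodic sample: drop the endpoint so the signal wraps around cleanly
t = torch.linspace(0, 2 * torch.pi, 101)[:-1]
x = torch.sin(t)
dx = spectral_derivative(x, t)  # matches cos(t) to near machine precision
```

An optional low-pass step would simply multiply dX by a weight function of omega before the inverse transform, which is the role filter_func plays above.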
Spline
- class torch_dxdt.Spline(s=0.01, order=3)[source]
Bases:
Derivative
Compute numerical derivatives using cubic spline interpolation.
Fits a smoothing spline to the data and computes derivatives analytically. Uses torch.linalg.solve for the linear system, making it differentiable.
- Parameters:
s (float) – Smoothing parameter. Default is 0.01.
order (int) – Order of the spline. Default is 3 (cubic).
Note
The current implementation uses a simplified Whittaker smoother approach rather than true B-splines, as it’s more amenable to differentiable implementation in PyTorch.
Example
>>> spl = Spline(s=0.01)
>>> t = torch.linspace(0, 2*torch.pi, 100)
>>> x = torch.sin(t) + 0.1 * torch.randn(100)
>>> dx = spl.d(x, t)  # Smoothed derivative
Kernel
- class torch_dxdt.Kernel(sigma=1.0, lmbd=0.1, kernel='gaussian')[source]
Bases:
Derivative
Compute numerical derivatives using kernel (Gaussian Process) methods.
Fits a Gaussian process to the data using a specified kernel function, then computes derivatives from the posterior mean.
- Parameters:
sigma (float) – Kernel bandwidth. Default is 1.0.
lmbd (float) – Regularization parameter. Default is 0.1.
kernel (str) – Kernel type. Default is 'gaussian'.
Note
This method is differentiable but can be slow for large datasets due to the O(n^3) complexity of solving the linear system.
Example
>>> ker = Kernel(sigma=1.0, lmbd=0.1)
>>> t = torch.linspace(0, 2*torch.pi, 100)
>>> x = torch.sin(t) + 0.1 * torch.randn(100)
>>> dx = ker.d(x, t)  # Kernel-smoothed derivative
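The posterior-mean construction can be sketched for the Gaussian (RBF) kernel using the standard GP regression identities: solve (K + lmbd*I) alpha = x once, then apply the differentiated kernel. The function name and closed form here are illustrative, not the library's internals:

```python
import torch

def gp_derivative(x: torch.Tensor, t: torch.Tensor,
                  sigma: float = 1.0, lmbd: float = 0.1) -> torch.Tensor:
    """Derivative of the GP posterior mean with an RBF kernel (O(n^3) solve)."""
    diff = t[:, None] - t[None, :]
    K = torch.exp(-diff**2 / (2 * sigma**2))                   # Gram matrix
    alpha = torch.linalg.solve(K + lmbd * torch.eye(len(t)), x)
    # d/dt* of k(t*, t_j) = -(t* - t_j)/sigma^2 * k, evaluated on the grid
    dK = (-diff / sigma**2) * K
    return dK @ alpha

t = torch.linspace(0, 2 * torch.pi, 100)
x = torch.sin(t)
dx = gp_derivative(x, t, sigma=0.5, lmbd=0.1)  # close to cos(t) away from the ends
```

The single dense solve is where the O(n^3) cost noted above comes from; everything else is O(n^2).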
Kalman
- class torch_dxdt.Kalman(alpha=1.0)[source]
Bases:
Derivative
Compute numerical derivatives using Kalman smoothing.
The Kalman smoother finds the maximum likelihood estimator for a process whose derivative follows Brownian motion. This provides smooth, probabilistically-principled derivative estimates.
- The method minimizes:
||H*x - z||^2 + alpha * ||G*(x, dx)||_Q^2
where z is the noisy observation, x is the smoothed signal, dx is its derivative, and Q encodes the Brownian motion covariance.
- Parameters:
alpha (float) – Regularization parameter. Larger values give smoother results. If None, uses a default value of 1.
Note
This implementation uses a batched linear solver and is fully differentiable, but may be slow for very long time series.
Example
>>> kal = Kalman(alpha=1.0)
>>> t = torch.linspace(0, 2*torch.pi, 100)
>>> x = torch.sin(t) + 0.1 * torch.randn(100)
>>> dx = kal.d(x, t)  # Kalman-smoothed derivative
Whittaker
- class torch_dxdt.Whittaker(lmbda=100.0, d_order=2)[source]
Bases:
Derivative
Compute numerical derivatives using Whittaker-Eilers smoothing.
The Whittaker-Eilers smoother uses penalized least squares with a difference penalty to smooth noisy data. Derivatives are computed by applying finite differences to the smoothed signal.
This method is particularly effective for:
- Strongly noisy data
- Signals where global smoothness is desired
- Cases where you want explicit control over smoothness vs. fidelity
The smoothness is controlled by the parameter lmbda:
- Small lmbda (~1): Less smoothing; follows the data closely
- Large lmbda (~1e6): Heavy smoothing; very smooth result
- Parameters:
lmbda (float) – Smoothing parameter. Larger values give smoother results. Typical values range from 1 to 1e6 depending on noise level.
d_order (int) – Order of the difference penalty (default 2).
- 1: Penalizes first differences (piecewise constant)
- 2: Penalizes second differences (piecewise linear; most common)
- 3: Penalizes third differences (smoother curves)
Example
>>> wh = Whittaker(lmbda=100.0)
>>> t = torch.linspace(0, 2*torch.pi, 100)
>>> x = torch.sin(t) + 0.1 * torch.randn(100)
>>> dx = wh.d(x, t)  # Smoothed derivative
>>> x_smooth = wh.smooth(x, t)  # Just smoothing
- d_orders(x, t, orders=(1, 2), dim=-1)[source]
Compute multiple derivative orders simultaneously and efficiently.
The Whittaker smoother computes the smoothed signal once and then derives multiple derivative orders from it, avoiding redundant smoothing computation.
- Parameters:
x (Tensor) – Input tensor of shape (…, T) containing the signal values.
t (Tensor) – Time points tensor of shape (T,).
orders (Sequence[int]) – Sequence of derivative orders to compute. Default is (1, 2). Order 0 returns the smoothed signal.
dim (int) – Dimension along which to differentiate. Default -1.
- Return type:
dict[int, Tensor]
- Returns:
Dictionary mapping order -> derivative tensor. Each tensor has the same shape as x.
Example
>>> wh = Whittaker(lmbda=100.0)
>>> derivs = wh.d_orders(x, t, orders=[0, 1, 2])
>>> x_smooth = derivs[0]  # Smoothed signal
>>> dx = derivs[1]        # First derivative
>>> d2x = derivs[2]       # Second derivative
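The penalized least-squares step behind the Whittaker smoother can be sketched directly: build the d-th order difference matrix D and solve (I + lmbda * DᵀD) x = z. This is a dense illustration of the objective; the library's solver may be organized differently:

```python
import torch

def whittaker_smooth(z: torch.Tensor, lmbda: float = 100.0, d_order: int = 2) -> torch.Tensor:
    """Minimize ||x - z||^2 + lmbda * ||D_d x||^2 via a dense linear solve."""
    n = z.shape[-1]
    D = torch.eye(n)
    for _ in range(d_order):
        D = D[1:] - D[:-1]              # build the d-th order difference matrix
    A = torch.eye(n) + lmbda * (D.T @ D)
    return torch.linalg.solve(A, z)

torch.manual_seed(0)
t = torch.linspace(0, 2 * torch.pi, 100)
noisy = torch.sin(t) + 0.1 * torch.randn(100)
smoothed = whittaker_smooth(noisy, lmbda=100.0)
```

Derivatives then follow by applying finite differences to the smoothed signal, which is why d_orders can reuse a single smoothing pass for all requested orders.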