
Commit 8a9bc1f

realized that robustdiffclassic differs fundamentally from the old robustdiff, because there is now a huber_const that balances differently between l1 and Huber and keeps Huber from flattening out as M->0. No point in keeping it. It was really just there for a chance to look at some big experimental results, and it's clear it's not better than the new robustdiff. The new one is, interestingly, not as strong as the old buggy one under conditions of higher bandlimit, but does just as well with outliers. Results are hard to compare in that case, though, because I'm now using the robustified loss function and wasn't before with the old robustdiff. The old one suffered from being underexpressive, and the new one suffers from being hard to optimize because it has so many parameters.
1 parent cbd28d6 commit 8a9bc1f
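For context on the huber_const remark in the message: the standard Huber penalty has a quadratic core of half-width M and linear tails of slope M, so the whole penalty collapses toward zero as M -> 0, while dividing by M keeps the tails at unit slope and the penalty tends to an l1 norm instead. The sketch below is my own illustration of that behavior, not code from this repository, and it only assumes huber_const acts like a 1/M normalization.

import numpy as np

def huber(r, M):
    """Standard Huber penalty: quadratic for |r| <= M, linear with slope M outside."""
    r = np.abs(r)
    return np.where(r <= M, 0.5 * r**2, M * (r - 0.5 * M))

def huber_normalized(r, M):
    """Huber penalty divided by M, so the linear branch keeps slope 1 as M -> 0."""
    return huber(r, M) / M

r = np.array([0.1, 1.0, 10.0])
for M in (1.0, 0.1, 0.01):
    print(M, huber(r, M), huber_normalized(r, M))
# huber(...) shrinks toward zero everywhere as M -> 0 (the flattening),
# while huber_normalized(...) approaches abs(r), i.e. an l1 penalty.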

3 files changed: +2 -11 lines changed

pynumdiff/kalman_smooth/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 """This module implements constant-derivative model-based smoothers based on Kalman filtering and its generalization.
 """
-from ._kalman_smooth import kalman_filter, rts_smooth, rtsdiff, constant_velocity, constant_acceleration, constant_jerk, robustdiff, convex_smooth, robustdiffclassic
+from ._kalman_smooth import kalman_filter, rts_smooth, rtsdiff, constant_velocity, constant_acceleration, constant_jerk, robustdiff, convex_smooth

pynumdiff/kalman_smooth/_kalman_smooth.py

Lines changed: 0 additions & 4 deletions
@@ -307,10 +307,6 @@ def robustdiff(x, dt, order, log_q, log_r, proc_huberM=6, meas_huberM=0):
     return x_states[0], x_states[1]


-def robustdiffclassic(x, dt, order, log_qr_ratio, huberM):
-    return robustdiff(x, dt, order, 4, 4 - log_qr_ratio, huberM, huberM)
-
-
 def convex_smooth(y, A, Q, C, R, proc_huberM, meas_huberM):
     """Solve the optimization problem for robust smoothing using CVXPY. Note this currently assumes constant dt
     but could be extended to handle variable step sizes by finding discrete-time A and Q for requisite gaps.
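For anyone who was calling the removed wrapper: its body was just a re-parameterization of robustdiff, so the same behavior can be recovered with a direct call. A minimal sketch, assuming the robustdiff signature shown in the hunk above (x, dt, order, log_qr_ratio, huberM are whatever the caller was already passing):

from pynumdiff.kalman_smooth import robustdiff

def robustdiffclassic_equivalent(x, dt, order, log_qr_ratio, huberM):
    # Same mapping the deleted wrapper used: log_q fixed at 4,
    # log_r = 4 - log_qr_ratio, one Huber M for both process and measurement terms.
    return robustdiff(x, dt, order, 4, 4 - log_qr_ratio, huberM, huberM)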

pynumdiff/optimize/_optimize.py

Lines changed: 1 addition & 6 deletions
@@ -14,7 +14,7 @@
 from ..polynomial_fit import polydiff, savgoldiff, splinediff
 from ..basis_fit import spectraldiff, rbfdiff
 from ..total_variation_regularization import tvrdiff, velocity, acceleration, jerk, iterative_velocity, smooth_acceleration, jerk_sliding
-from ..kalman_smooth import rtsdiff, constant_velocity, constant_acceleration, constant_jerk, robustdiff, robustdiffclassic
+from ..kalman_smooth import rtsdiff, constant_velocity, constant_acceleration, constant_jerk, robustdiff
 from ..linear_model import lineardiff

 # Map from method -> (search_space, bounds_low_hi)
@@ -96,11 +96,6 @@
                  'log_r': (-5, 16),
                  'proc_huberM': (0, 6),
                  'meas_huberM': (0, 6)}),
-    robustdiffclassic: ({'order': {1, 2, 3},  # warning: order 1 hacks the loss function when tvgamma is used, tends to win but is usually suboptimal choice in terms of true RMSE
-                         'log_qr_ratio': [float(k) for k in range(-1, 16, 4)],
-                         'huberM': [0., 5, 20]},  # 0. so type is float. Good choices here really depend on the data scale
-                        {'log_qr_ratio': (-1, 18),
-                         'huberM': (0, 1e2)}),  # really only want to use l2 norm when nearby
     lineardiff: ({'kernel': 'gaussian',
                   'order': 3,
                   'gamma': [1e-1, 1, 10, 100],
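The commit message notes the new robustdiff is hard to optimize because of its parameter count. As a rough sketch of what sweeping the search space above involves (the grid values and the RMSE-against-the-raw-signal objective here are placeholders of my own, not what pynumdiff.optimize actually does):

import itertools
import numpy as np
from pynumdiff.kalman_smooth import robustdiff

def sweep_robustdiff(x, dt, order=2,
                     log_q_grid=(-4, 0, 4, 8),
                     log_r_grid=(-4, 0, 4, 8),
                     proc_huberM_grid=(0, 3, 6),
                     meas_huberM_grid=(0, 3, 6)):
    """Brute-force sweep over four robustdiff parameters with a placeholder objective."""
    best_err, best_params = np.inf, None
    for log_q, log_r, pM, mM in itertools.product(
            log_q_grid, log_r_grid, proc_huberM_grid, meas_huberM_grid):
        x_hat, dxdt_hat = robustdiff(x, dt, order, log_q, log_r, pM, mM)
        err = np.sqrt(np.mean((np.asarray(x_hat) - np.asarray(x)) ** 2))  # placeholder objective
        if err < best_err:
            best_err, best_params = err, dict(log_q=log_q, log_r=log_r,
                                              proc_huberM=pM, meas_huberM=mM)
    return best_err, best_params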
