
Conversation

@pavelkomarov
Collaborator

Moving the TVR module over to the new standard and making some simplifications in the process.

@pavelkomarov
Collaborator Author

This one branches off from where #102 leaves off, so we should merge that one first.

derivatives = [var[N:]]
for i in range(N):
    d = cvxpy.cumsum(derivatives[-1]) + var[i]
    derivatives.append(d)
Collaborator Author

You were storing all the derivatives before, which I don't think is necessary, since we only use the final construction of these values. I now don't bother remembering all the levels.

# Define the variables for the highest order derivative and the integration constants
var = cvxpy.Variable(len(x) + N)
deriv_values = cvxpy.Variable(len(x)) # values of the order^th derivative, in which we're penalizing variation
integration_constants = cvxpy.Variable(order) # constants of integration that help get us back to x
Collaborator Author
@pavelkomarov Jun 26, 2025

I've split up the variables to give them more specific names and make indexing less confusing.

r = cvxpy.sum(gamma*cvxpy.tv(derivatives[0]))
y = deriv_values
for i in range(order):
    y = cvxpy.cumsum(y) + integration_constants[i]
Collaborator Author

This is effectively 1st order FD in reverse. But we could imagine doing a more sophisticated FD scheme here. Maybe we should. Shouldn't be too hard. Shall I open an issue for it?
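
A minimal numpy sketch (mine, not from the diff) of what "1st order FD in reverse" means here: a cumulative sum plus an integration constant undoes a first-order forward difference.

import numpy as np

dt = 0.1
x = np.sin(np.arange(0, 1, dt))       # some signal
dx = np.diff(x)                       # first-order forward difference (raw dx values)
x_rebuilt = np.cumsum(dx) + x[0]      # "FD in reverse": cumsum plus an integration constant
assert np.allclose(x_rebuilt, x[1:])  # recovers the original samples after the first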

d = np.cumsum(derivative_values[-1]) + var.value[i]
derivative_values.append(d)
for i, _ in enumerate(derivative_values):
    derivative_values[i] = derivative_values[i]/(dt**(N-i))
Collaborator Author

You were normalizing all the derivatives' values, but we only use the 1st derivative, so now I just discard the intermediate levels until I reach the 1st derivative, and only divide by dt for that one.

for i in range(order-1): # stop one short to get the first derivative
    y = np.cumsum(y) + integration_constants.value[i]
dxdt_hat = y/dt # y only holds the dx values; to get deriv, scale by dt
x_hat = np.cumsum(y) + integration_constants.value[order-1] # smoothed data
Collaborator Author

The loop now stops 1 short, so I get the final x_hat with one last cumsum.

dxdt_hat = (dxdt_hat[0:-1] + dxdt_hat[1:])/2 # take first order FD to smooth a touch
ddxdt_hat_f = dxdt_hat[-1] - dxdt_hat[-2]
dxdt_hat_f = dxdt_hat[-1] + ddxdt_hat_f
Collaborator Author
@pavelkomarov Jun 26, 2025

I dislike these lines. We already have dxdt_hat, so why should we need a further FD step? I think the answer is smoothness, but then you're left with something one short after combining [:-1] and [1:], so you try to fill in the last value with a specially computed one. This strikes me as similar to #88. That special value is calculated as the diff of the last two values, which gives some kind of dx you can tack on, hoping the derivative is constant over that last step.

Would a higher-order FD scheme obviate the need for post-processing smoothing like this?
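
One hedged alternative (not part of this PR): numpy's built-in gradient uses central differences in the interior and one-sided 2nd order differences at the endpoints, so it returns a full-length derivative and would need neither the averaging step nor a hand-extrapolated last sample.

import numpy as np

def first_derivative_2nd_order(x_hat, dt):
    # Central differences in the interior, one-sided 2nd order differences at both ends;
    # output has the same length as x_hat, so no special-casing of the last point.
    return np.gradient(x_hat, dt, edge_order=2)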

dxdt_hat_f = dxdt_hat[-1] + ddxdt_hat_f # What is this doing? Could we use a 2nd order FD above natively?
dxdt_hat = np.hstack((dxdt_hat, dxdt_hat_f))

# fix first point
Collaborator Author

Does the first point need to be fixed this way? I think you're just losing information with this update to the [0] index.

"""
Use convex optimization (cvxpy) to solve for the jerk total variation regularized derivative.
Default solver is MOSEK: https://www.mosek.com/
def jerk_sliding(x, dt, params, options=None, gamma=None, solver=solver):
Collaborator Author

Is this function valuable? It seems kind of out of place. Is its sliding functionality compatible with that of the linear_model module? Can they both import _slide_func from utilities in the future?
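
For reference, a purely hypothetical sketch of the kind of shared helper being floated here; the name _slide_func, its signature, and the weighting scheme are assumptions of mine, not the library's actual API. It assumes the wrapped estimator follows the usual (x_hat, dxdt_hat) return convention.

import numpy as np

def _slide_func(x, dt, func, window_size=50, step_size=10):
    # Apply a derivative estimator `func` over overlapping windows and average
    # the overlapping results sample by sample.
    x_hat = np.zeros_like(x, dtype=float)
    dxdt_hat = np.zeros_like(x, dtype=float)
    weights = np.zeros_like(x, dtype=float)
    for start in range(0, max(len(x) - window_size, 0) + 1, step_size):
        sl = slice(start, start + window_size)
        x_w, dxdt_w = func(x[sl], dt)   # run the estimator on one window
        x_hat[sl] += x_w
        dxdt_hat[sl] += dxdt_w
        weights[sl] += 1
    weights[weights == 0] = 1           # guard against any samples no window touched
    return x_hat / weights, dxdt_hat / weights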

@pavelkomarov changed the base branch from master to smooth-fd-docstrings June 26, 2025 23:58
@pavelkomarov mentioned this pull request Jun 27, 2025
@pavelkomarov changed the base branch from smooth-fd-docstrings to master June 27, 2025 18:17
@pavelkomarov merged commit 1c52991 into master Jun 27, 2025
1 check passed
@pavelkomarov deleted the move-tvr-tests branch June 27, 2025 21:12