#68 for TVR #106
Conversation
This one branches off from where #102 leaves off, so we should merge that one first.
derivatives = [var[N:]]
for i in range(N):
    d = cvxpy.cumsum(derivatives[-1]) + var[i]
    derivatives.append(d)
You were storing all the derivatives before, which I don't think is necessary, since we only use the final construction of these values. I now don't bother remembering all the levels.
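In other words, only the running expression needs to survive the loop. A minimal sketch of that idea, with illustrative sizes (the names mirror the old diff, not the PR's exact new code):

    import cvxpy

    N = 3                             # highest derivative order (illustrative)
    n = 50                            # number of samples (illustrative)
    var = cvxpy.Variable(n + N)

    d = var[N:]                       # highest-order derivative values
    for i in range(N):
        d = cvxpy.cumsum(d) + var[i]  # integrate one level, overwriting the previous one
    # d now plays the role derivatives[-1] used to play; no list needed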
# Define the variables for the highest order derivative and the integration constants
var = cvxpy.Variable(len(x) + N)
deriv_values = cvxpy.Variable(len(x))  # values of the order^th derivative, in which we're penalizing variation
integration_constants = cvxpy.Variable(order)  # constants of integration that help get us back to x
I've split up the variables to give them more specific names and make indexing less confusing.
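For comparison, a rough sketch of the difference (sizes are illustrative; only the names come from the diff):

    import cvxpy

    order, n = 3, 50
    # old: one packed vector, indexed by convention
    var = cvxpy.Variable(n + order)  # var[:order] held the constants, var[order:] the derivative values
    # new: two named variables, so the indexing reads directly
    deriv_values = cvxpy.Variable(n)
    integration_constants = cvxpy.Variable(order)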
r = cvxpy.sum(gamma*cvxpy.tv(derivatives[0]))
y = deriv_values
for i in range(order):
    y = cvxpy.cumsum(y) + integration_constants[i]
This is effectively 1st order FD in reverse. But we could imagine doing a more sophisticated FD scheme here. Maybe we should. Shouldn't be too hard. Shall I open an issue for it?
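A quick numeric check of the "FD in reverse" claim, in plain numpy (just a sanity check using the same first-order convention, not part of the PR):

    import numpy as np

    x = np.array([1.0, 3.0, 6.0, 10.0, 15.0])
    dx = np.diff(x)                   # 1st-order forward differences
    x_rec = np.cumsum(dx) + x[0]      # cumsum plus an integration constant
    assert np.allclose(x_rec, x[1:])  # recovers x, up to the first sample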
d = np.cumsum(derivative_values[-1]) + var.value[i]
derivative_values.append(d)
for i, _ in enumerate(derivative_values):
    derivative_values[i] = derivative_values[i]/(dt**(N-i))
You were going through and properly normalizing all the derivatives' values, but we only use the 1st derivative, so the loop now just discards the intermediate levels until it reaches the 1st derivative, and only that one gets divided by dt.
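The dt bookkeeping works out because each cumsum is a discrete integral that is missing one factor of dt, so level i (counting from the highest derivative) would need dt**(N-i), but the 1st derivative only needs a single division. A toy illustration with a quadratic signal (not the PR's code):

    import numpy as np

    dt = 0.1
    t = np.arange(0, 1, dt)
    x = t**2                                 # true dx/dt = 2t
    dx = np.diff(x)                          # unscaled first differences
    dxdt = dx/dt                             # one division by dt recovers the derivative
    assert np.allclose(dxdt, 2*t[:-1] + dt)  # forward difference of t^2 is exactly 2t + dt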
for i in range(order-1):  # stop one short to get the first derivative
    y = np.cumsum(y) + integration_constants.value[i]
dxdt_hat = y/dt  # y only holds the dx values; to get deriv scale by dt
x_hat = np.cumsum(y) + integration_constants.value[order-1]  # smoothed data
The loop now stops 1 short, so I get the final x_hat with one last cumsum.
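Concretely, the last integration constant is consumed outside the loop. A toy version of that final step with made-up numbers (names are illustrative):

    import numpy as np

    dt = 0.5
    y = np.array([0.5, 0.5, 0.5])         # unscaled first differences of the smoothed signal
    last_constant = 1.0                   # stands in for integration_constants.value[order-1]
    dxdt_hat = y/dt                       # the derivative just needs the scale by dt
    x_hat = np.cumsum(y) + last_constant  # one last cumsum recovers the smoothed data
    # dxdt_hat -> [1.0, 1.0, 1.0], x_hat -> [1.5, 2.0, 2.5]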
dxdt_hat = (dxdt_hat[0:-1] + dxdt_hat[1:])/2
dxdt_hat = (dxdt_hat[0:-1] + dxdt_hat[1:])/2  # take first order FD to smooth a touch
ddxdt_hat_f = dxdt_hat[-1] - dxdt_hat[-2]
dxdt_hat_f = dxdt_hat[-1] + ddxdt_hat_f
I dislike these lines. We already have dxdt_hat, so why should we need to do a further FD step? I think the answer is smoothness, but then you're left with something one element short when you average [:-1] and [1:], so you try to fill in the last value with a special value. This strikes me as similar to #88. That special value is calculated as the diff of the last two values, which gives some kind of dx that you can tack on, hoping the derivative is constant over that last step.
Would a higher-order FD scheme obviate the need for post-processing smoothing like this?
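For reference, here is the pattern in isolation next to one possible alternative; np.gradient uses 2nd-order central differences in the interior and one-sided differences at the ends, so it needs no special-case fill-in (a sketch of an option, not the PR's code):

    import numpy as np

    dxdt_hat = np.array([1.0, 1.2, 1.5, 1.9])

    # current pattern: average neighbours (loses one point), then extrapolate the last value
    avg = (dxdt_hat[:-1] + dxdt_hat[1:])/2  # one element shorter
    last = avg[-1] + (avg[-1] - avg[-2])    # assume the slope holds over the final step
    smoothed = np.hstack((avg, last))

    # possible alternative: differentiate the smoothed x_hat directly with np.gradient
    dt = 0.5                                        # hypothetical sample spacing
    x_hat = np.array([0.0, 1.0, 2.2, 3.7, 5.6])
    dxdt_alt = np.gradient(x_hat, dt)               # 2nd-order interior, one-sided at the edges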
dxdt_hat_f = dxdt_hat[-1] + ddxdt_hat_f  # What is this doing? Could we use a 2nd order FD above natively?
dxdt_hat = np.hstack((dxdt_hat, dxdt_hat_f))

# fix first point
Does the first point need to be fixed this way? I think you're just losing information with this update to the [0] index.
| """ | ||
| Use convex optimization (cvxpy) to solve for the jerk total variation regularized derivative. | ||
| Default solver is MOSEK: https://www.mosek.com/ | ||
| def jerk_sliding(x, dt, params, options=None, gamma=None, solver=solver): |
Is this function valuable? It seems kind of out of place. Is its sliding functionality compatible with that of the linear_model module? Can they both import _slide_func from utilities in the future?
Moving the TVR module over to the new standard and making some simplifications in the process.