Add APIs to prune completed state for LSPS1/LSPS2/LSPS5#4526

Draft
f3r10 wants to merge 1 commit into lightningdevkit:main from f3r10:feat/add_apis_to_prune_completed_state

Conversation

@f3r10

@f3r10 f3r10 commented Mar 31, 2026

No description provided.

@ldk-reviews-bot

ldk-reviews-bot commented Mar 31, 2026

👋 I see @TheBlueMatt was un-assigned.
If you'd like another reviewer assignment, please click here.

///
/// Returns an [`APIError::APIMisuseError`] if the counterparty has no state, the order is
/// unknown, or the order is in a non-terminal state.
pub async fn prune_order(
Collaborator

Hmm, is the right API to require downstream code to list the specific orders to prune, or should we have some kind of "prune all failed orders older than X" API? @tnull wdyt?

Author

You are right @TheBlueMatt, something like this would be much better:
handler.prune_orders(client_id, Duration::from_secs(30 * 24 * 3600)).await?;

I am going to update that part. Thanks for the early review.

Contributor

Yeah, I agree, allowing pruning of entries older than a given interval is great. At some point we might still want to expose the LSPS{1,2} service state via the API, at which point allowing callers to drop specific orders would make sense, but that's probably better done in a dedicated follow-up.

Author

I think it would be better in a follow-up, once service state listing exists.

Author

The new method using an interval is ready @tnull @TheBlueMatt

Add `prune_orders(counterparty_node_id, max_age: Duration)` to both
`LSPS1ServiceHandler` and `LSPS1ServiceHandlerSync`. It removes all
terminal orders (`CompletedAndChannelOpened` / `FailedAndRefunded`) for
a given peer that are at least `max_age` old, persists the updated state,
and returns the number of entries removed.

Passing `Duration::ZERO` prunes all terminal orders immediately regardless
of age, which is the recommended approach to unblock a client that has
hit the per-peer request limit due to accumulated failed orders.

On the `PeerState` layer, `prune_terminal_orders(now, max_age)` uses
`retain` for a single-pass removal and sets `needs_persist` only when
at least one entry is removed.
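To make the described `retain`-based pruning concrete, here is a minimal self-contained sketch. It is not the PR's actual code: `Order`, `OrderState`, `created_at_secs`, and the `HashMap`-backed `PeerState` are hypothetical stand-ins (the real crate uses its own order and `LSPSDateTime` types), but the single-pass removal and the `needs_persist` behavior follow the commit-message description.

```rust
use std::collections::HashMap;
use std::time::Duration;

// Hypothetical stand-ins for the crate's order types.
#[derive(Debug)]
enum OrderState {
    Pending,
    CompletedAndChannelOpened,
    FailedAndRefunded,
}

struct Order {
    state: OrderState,
    // Seconds since some epoch; the real code would use an LSPSDateTime.
    created_at_secs: u64,
}

struct PeerState {
    orders: HashMap<u64, Order>,
    needs_persist: bool,
}

impl PeerState {
    /// Removes terminal orders at least `max_age` old in a single pass and
    /// returns the number of entries removed. Sets `needs_persist` only when
    /// at least one entry was dropped.
    fn prune_terminal_orders(&mut self, now_secs: u64, max_age: Duration) -> usize {
        let before = self.orders.len();
        self.orders.retain(|_, order| {
            let terminal = matches!(
                order.state,
                OrderState::CompletedAndChannelOpened | OrderState::FailedAndRefunded
            );
            let old_enough =
                now_secs.saturating_sub(order.created_at_secs) >= max_age.as_secs();
            // Keep everything that is not both terminal and old enough.
            !(terminal && old_enough)
        });
        let removed = before - self.orders.len();
        if removed > 0 {
            self.needs_persist = true;
        }
        removed
    }
}
```

With `max_age = Duration::ZERO`, `old_enough` is always true, so every terminal order is removed immediately regardless of age, matching the recommended way to unblock a client at the per-peer request limit.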
@f3r10 f3r10 force-pushed the feat/add_apis_to_prune_completed_state branch from 25a876b to ebaa7ed on April 8, 2026 17:40
@f3r10
Author

f3r10 commented Apr 8, 2026

While trying to add the API for LSPS2 using the same strategy as LSPS1 (age-based intervals), I realized it is not possible because `OutboundJITChannel` has no `created_at` timestamp.
It would only be possible to create a function like `prune_channel(counterparty_node_id, intercept_scid)`, but that would require the caller to list all SCIDs.

The solution would be to add created_at: LSPSDateTime to OutboundJITChannel (new TLV field — backwards-compatible since missing fields default on read).

My question is: should this change be done in this same PR or in a follow-up? @tnull @TheBlueMatt wdyt?
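A minimal sketch of the backwards-compatibility idea mentioned above: the new timestamp is carried as an `Option`, so entries serialized before the field existed simply read back as `None`. The type and field names here (`OutboundJITChannelLike`, `created_at_secs`, `age_secs`) are hypothetical illustrations, not the crate's actual TLV plumbing.

```rust
// Hypothetical stand-in for OutboundJITChannel with the proposed new field.
struct OutboundJITChannelLike {
    // `None` for entries written before the field existed; a new optional
    // TLV field defaults to `None` on read, keeping old state deserializable.
    created_at_secs: Option<u64>,
}

impl OutboundJITChannelLike {
    /// Age of the channel entry in seconds. Entries without a timestamp are
    /// treated as age zero, so an age-based pruning policy never removes them
    /// until they are rewritten with a timestamp.
    fn age_secs(&self, now_secs: u64) -> u64 {
        match self.created_at_secs {
            Some(t) => now_secs.saturating_sub(t),
            None => 0,
        }
    }
}
```

Treating missing timestamps as age zero is the conservative choice: it can delay pruning of pre-upgrade entries, but it never drops state the operator might still need.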

@f3r10 f3r10 requested review from TheBlueMatt and tnull April 8, 2026 17:49
@TheBlueMatt
Collaborator

Yea, makes sense to add a created-at timestamp in LSPS2 requests imo.
