Bug Description
In long OpenCode sessions with heavy webfetch usage, DCP successfully runs compress and reports large reductions, but it still tends to trigger compress again on later turns instead of settling down.
The problem is especially noticeable when the session contains many large webfetch outputs, because they do not remain only as tool results. After subsequent turns, their contents often get carried forward into normal assistant messages and later compression summaries. Once that happens, the original fetched content is no longer just easy-to-prune tool output; it becomes part of the regular conversational context.
This is most noticeable in sessions that include:
- many webfetch tool calls (main issue)
- large fetched documents
- long assistant replies that summarize or reuse fetched content
In these cases, older webfetch content seems to stop behaving like tool output that can be cheaply pruned and instead survives indirectly through large assistant text and compression summaries. Once the session reaches that shape, DCP can keep trying to compress again and again.
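The shift described above can be sketched with a small token-budget model. Everything here is hypothetical and illustrative (the function names, the 200k limit, and the 0.8 trigger ratio are assumptions, not DCP internals); the point is only that pruning can reclaim tool-result tokens but not tokens that have migrated into assistant text and summaries:

```python
# Illustrative model only: these are NOT DCP's actual data structures,
# thresholds, or algorithms.

CONTEXT_LIMIT = 200_000          # assumed model context limit
COMPRESS_THRESHOLD = 0.8         # assumed trigger ratio


def needs_compress(tool_tokens: int, assistant_tokens: int,
                   summary_tokens: int) -> bool:
    """Compression pressure fires when total context nears the limit."""
    total = tool_tokens + assistant_tokens + summary_tokens
    return total > CONTEXT_LIMIT * COMPRESS_THRESHOLD


def prune_tool_outputs(tool_tokens: int) -> int:
    """Pruning can only reclaim tool-result tokens; assistant text stays."""
    return 0


# Early session: the bulk sits in webfetch tool results, so pruning works.
early = needs_compress(tool_tokens=150_000, assistant_tokens=20_000,
                       summary_tokens=0)
after_prune = needs_compress(prune_tool_outputs(150_000), 20_000, 0)

# Late session: the same content now lives in assistant text and summaries,
# so pruning reclaims almost nothing and pressure never clears.
late = needs_compress(tool_tokens=5_000, assistant_tokens=140_000,
                      summary_tokens=130_000)
after_prune_late = needs_compress(prune_tool_outputs(5_000), 140_000, 130_000)

print(early, after_prune, late, after_prune_late)
# → True False True True
```

In the early shape, pruning alone drops the session below the trigger; in the late shape it does not, which matches the observed "compress again" loop.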
Expected Behavior
After a successful compression in a long webfetch-heavy session, DCP should reclaim enough stale context that it does not immediately trigger another compression on the following turns.
Large previously fetched content should not keep the session stuck in a near-constant “compress again” state after compression has already succeeded.
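One way to state the expected convergence is as a hysteresis check: after a successful compression, another one should fire only once the context has genuinely grown again. This is a hypothetical sketch of that expectation, not an existing DCP option or setting:

```python
# Hypothetical convergence guard (illustrative names and numbers only;
# not DCP configuration).

def should_recompress(tokens_now: int, tokens_after_last_compress: int,
                      limit: int = 200_000, margin: float = 0.1) -> bool:
    """Only re-fire once context has grown past a margin AND is near the limit."""
    grown = tokens_now > tokens_after_last_compress * (1 + margin)
    near_limit = tokens_now > limit * 0.8
    return grown and near_limit


# Immediately after a compression that barely helped (as in this report),
# tokens_now is close to tokens_after_last_compress, so the guard stays quiet.
print(should_recompress(tokens_now=286_229, tokens_after_last_compress=280_000))
# → False
```

Under that expectation, a session that compresses but barely shrinks would stop re-entering compression on every turn rather than looping.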
Debug Context Logs
[
  {
    "role": "assistant",
    "tokens": {
      "input": 286229,
      "output": 4,
      "reasoning": 0,
      "cache": {
        "write": 0,
        "read": 0
      }
    }
  },
  {
    "role": "user",
    "parts": [
      {
        "type": "text",
        "text": "Hola estas?"
      }
    ]
  }
]
2026-04-08T03:24:12.866Z INFO persistence: Saved session state to disk | totalTokensSaved=679707
2026-04-08T03:24:13.217Z INFO prune: Injected compress summary | summaryLength=22855
2026-04-08T03:24:13.217Z INFO prune: Injected compress summary | summaryLength=104944
2026-04-08T03:24:13.217Z INFO prune: Injected compress summary | summaryLength=3536
2026-04-08T03:24:13.317Z INFO hooks: Attached compression time to blocks | blocks=1 durationMs=16626
2026-04-08T03:24:59.183Z INFO prune: Injected compress summary | summaryLength=22855
2026-04-08T03:24:59.183Z INFO prune: Injected compress summary | summaryLength=104944
2026-04-08T03:24:59.183Z INFO prune: Injected compress summary | summaryLength=3536
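Summing the summaryLength values from the log lines above shows the same three summaries, totaling 131335 characters (~131 KB), being injected at both timestamps, i.e. the summaries themselves are re-injected wholesale rather than shrinking:

```python
import re

# The six "Injected compress summary" lines quoted above.
log = """\
2026-04-08T03:24:13.217Z INFO prune: Injected compress summary | summaryLength=22855
2026-04-08T03:24:13.217Z INFO prune: Injected compress summary | summaryLength=104944
2026-04-08T03:24:13.217Z INFO prune: Injected compress summary | summaryLength=3536
2026-04-08T03:24:59.183Z INFO prune: Injected compress summary | summaryLength=22855
2026-04-08T03:24:59.183Z INFO prune: Injected compress summary | summaryLength=104944
2026-04-08T03:24:59.183Z INFO prune: Injected compress summary | summaryLength=3536
"""

# Group injected summary sizes by timestamp.
totals: dict[str, int] = {}
for ts, size in re.findall(r"^(\S+) .*summaryLength=(\d+)$", log, re.M):
    totals[ts] = totals.get(ts, 0) + int(size)

print(totals)
# → {'2026-04-08T03:24:13.217Z': 131335, '2026-04-08T03:24:59.183Z': 131335}
```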
This seems to show that compression summaries are being injected successfully, but the session still does not converge well afterwards and DCP keeps re-entering compression pressure in large webfetch-heavy sessions.
Relevant context snapshot observation:
After compression, the transformed context still contained very large assistant text blocks, and one assistant turn still showed:
- input: 286229
- output: 4
Tool Call Details
This tends to happen after /dcp compress in long sessions with large webfetch history.
Compression succeeds and reports large removals, but later turns still tend to push toward compress again. In the affected sessions, the remaining context appears to be dominated by large assistant text blocks rather than fresh tool outputs.
/dcp sweep helps very little in this state, which suggests that the issue is no longer mainly about live tool outputs, but about large fetched content having already been carried into assistant messages and compression summaries.
DCP Version
3.1.9
Opencode Version
1.4.0
Model
Other (specify in description)
Additional Context
This seems especially problematic in documentation-heavy or research-heavy sessions where webfetch is used repeatedly on large pages.
The issue does not look like compression completely failing. Compression summaries are clearly being injected, but the session still does not converge well afterwards and DCP keeps re-entering compression pressure.
Model
gpt-5.4-high