Raise fragmented init data limit for large mod lists#878
Conversation
Pull request overview
Raises the maximum allowed size of initialization “raw” payload data in ClientInitDataPacket to support large mod lists / init payloads while still relying on fragmented-packet size guardrails.
Changes:
- Introduces an explicit max length for `ClientInitDataPacket.rawData`.
- Applies the max length to `BindRemaining` when binding `rawData`.
```diff
  buf.BindEnum(ref staticCtorRoundMode);
  buf.Bind(ref defInfos, BinderOf.Identity<KeyedDefInfo>());
- buf.BindRemaining(ref rawData);
+ buf.BindRemaining(ref rawData, maxLength: MaxRawDataLength);
```
This change increases the allowed rawData size well beyond PacketBuffer.DefaultMaxLength (32,767). There are existing packet roundtrip tests, but nothing appears to exercise the large-payload path for ClientInitDataPacket; adding a regression test that roundtrips an init packet with rawData.Length > DefaultMaxLength (and below the fragmented max) would help prevent accidental reintroduction of the previous limit.
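Such a regression test might look like the following sketch, assuming an NUnit-style harness; the `RoundTrip` helper and packet construction are hypothetical, modeled on the existing packet roundtrip tests mentioned above:

```csharp
[Test]
public void ClientInitDataPacket_RoundTrips_LargeRawData()
{
    // Hypothetical: payload larger than PacketBuffer.DefaultMaxLength (32,767)
    // but below the fragmented-packet maximum.
    var payload = new byte[100_000];
    new Random(42).NextBytes(payload);

    var packet = new ClientInitDataPacket { rawData = payload };

    // Serialize then deserialize via PacketBuffer (helper is an assumption).
    var result = RoundTrip(packet);

    Assert.AreEqual(payload, result.rawData);
}
```

A test like this would fail immediately if `BindRemaining` ever reverted to the 32 KiB default.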
```diff
  buf.BindEnum(ref staticCtorRoundMode);
  buf.Bind(ref defInfos, BinderOf.Identity<KeyedDefInfo>());
- buf.BindRemaining(ref rawData);
+ buf.BindRemaining(ref rawData, maxLength: MaxRawDataLength);
```
Could you use some big fixed limit instead of linking it to `MaxFragmentPacketTotalSize` (32 MiB, if I remember correctly)? It just seems excessive to have the limit be that big. Something like `1 << 21` (2 MiB) should be way more than enough, but you can raise it some more if you think that's too low.
You're right that linking directly to MaxFragmentPacketTotalSize (32 MiB) is excessive. However, I want to flag that rawData carries more than just lightweight metadata — it includes:
- The full active mod list (packageId, name, steamId, source per mod)
- File hashes for every file in every active mod (relative path + hash each)
- All syncable mod config file contents (when `syncConfigs` is enabled)
Everything is GZip-compressed, but with 300+ mods and thousands of mod files, the compressed payload can grow substantially. The original issue reported was exactly this: players with large modlists were failing during bootstrap because the save.zip upload exceeded the previous default limit.
A fixed 2 MiB cap might work for most cases, but I'm worried it could still cut off heavy modlists. A few options I see:
1. Higher fixed cap: something like `1 << 23` (8 MiB), well below 32 MiB but with more headroom.
2. Dynamic calculation: compute the limit based on active mod count, or snapshot the actual `WriteServerData` size at host time and set the cap to e.g. `actualSize * 2` with a reasonable floor/ceiling.
3. Just use `1 << 21` as you suggested and bump it later if someone hits it; since it's a `const`, it's easy to adjust.
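Option 2 could be sketched roughly as follows; the serialized-size parameter and the clamping bounds are illustrative assumptions, not existing API:

```csharp
// Hypothetical sketch of a dynamic cap (option 2): measure the actual
// serialized init-data size at host time, allow 2x headroom, then clamp
// between a floor and a ceiling.
const int FloorBytes = 1 << 21;   // 2 MiB minimum
const int CeilingBytes = 1 << 23; // 8 MiB maximum

static int ComputeRawDataCap(int actualWriteServerDataSize)
{
    long cap = (long)actualWriteServerDataSize * 2;
    return (int)Math.Clamp(cap, FloorBytes, CeilingBytes);
}
```

The clamp keeps the cap from collapsing on tiny modlists while still staying well under the 32 MiB fragmented-packet maximum.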
What's your preference? I'm leaning toward option 1 (fixed 8 MiB) for simplicity, but happy to go with whatever you think is best.
The limit right now is 32KiB and I think I've seen maybe a couple of people with this issue, if at all. The increase to 2MiB means the limit is over 50 times bigger, which I find hard to ever reach. I'd prefer 3, but 1 is fine too
What about making it a configurable limit? I actually agree with you, but it still happens 🤔
Do you think it's going to keep happening when the limit is that much bigger? It could be made configurable, but I'm not sure the extra complexity justifies it: how do you make sure every client has a big enough limit? Just tell players to manually adjust it in the settings? That's not great UX. The packet would also then need to depend on settings, which isn't great either, as it introduces another non-obvious dependency. If it really does keep happening after raising the limit, I'm fine with increasing it further, but I highly doubt it for now.
Yeah, no worries, I was just proposing alternatives; I'm fine with whatever you suggest. After all, it's an easy edit if it comes to that. I'll set 1 MB at this point and then let's see, since the largest payload I've seen so far was ~42 KB.
- Change `MaxRawDataLength` from `MaxFragmentPacketTotalSize` (32 MiB) to `(1 << 20) - 6`
- 1 MB is more than enough for large mod lists
- Subtract 6 bytes for `SendFragmented` first-fragment overhead (3F fix)
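As a sketch, the resulting constant might read as follows; the field placement within the packet class is an assumption:

```csharp
// 1 MiB minus 6 bytes of SendFragmented first-fragment overhead.
// 1 << 20 = 1,048,576, so the effective cap is 1,048,570 bytes.
public const int MaxRawDataLength = (1 << 20) - 6;
```

Subtracting the fragment-header overhead up front means the bound check in `BindRemaining` and the fragmented-send path agree on the same effective payload size.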
This PR raises the fragmented init data limit for large mod lists.
Why: players with large modlists (300+ mods) were failing during bootstrap because the compressed init payload exceeded the previous 32 KiB default limit.
What this change does: introduces `MaxRawDataLength` at `(1 << 20) - 6` (1 MB minus fragment overhead) and applies it when binding `rawData`.
Scope: limited to `ClientInitDataPacket`; no other packet size limits are changed.