Minecraftforge: [Network] Reduce Vanilla&Modded Packet Spamming

Created on 30 Nov 2017 · 9 comments · Source: MinecraftForge/MinecraftForge

Right now vanilla uses writeAndFlush for sending packets in NetworkManager#dispatchPacket. This sends a TCP segment for each individual packet, every tick, which causes a lot of traffic for very little data because of the large TCP header. By only calling write per packet and then flush once per tick (or once per two ticks), the traffic volume could be reduced dramatically (from thousands of segments carrying ~8 bytes each).

Edit: to also improve modded connections, I think we should edit FMLEmbeddedChannel and override the writeAndFlush method to only call write, setting a flag so that a separate timer performs the flush.

Also, is FMLEmbeddedChannel used everywhere, or do we have different channels?
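To make the proposal concrete, here is a minimal standalone model (plain Java, not the actual Forge or Netty API; all class and method names are illustrative only) of the difference between flush-per-packet and write-per-packet with a single flush at the end of the tick:

```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Simplified model (NOT the Forge/Netty API) of the proposed change:
 * writeAndFlush() per packet is replaced by write() per packet
 * plus a single flush() at the end of each tick.
 */
public class CoalescingWriter {
    private final Queue<Object> pending = new ArrayDeque<>();
    private int flushes = 0; // each flush stands in for one TCP segment on the wire

    /** Current vanilla behaviour: one flush (one TCP segment) per packet. */
    public void writeAndFlush(Object packet) {
        pending.add(packet);
        flush();
    }

    /** Proposed behaviour: just queue; the tick loop flushes later. */
    public void write(Object packet) {
        pending.add(packet);
    }

    /** Called once at the end of every server/client tick. */
    public void endTick() {
        if (!pending.isEmpty()) {
            flush();
        }
    }

    private void flush() {
        pending.clear(); // model: all queued bytes leave in one segment
        flushes++;
    }

    public int flushCount()   { return flushes; }
    public int pendingCount() { return pending.size(); }

    public static void main(String[] args) {
        CoalescingWriter current = new CoalescingWriter();
        for (int i = 0; i < 15; i++) current.writeAndFlush("packet" + i);

        CoalescingWriter proposed = new CoalescingWriter();
        for (int i = 0; i < 15; i++) proposed.write("packet" + i);
        proposed.endTick();

        System.out.println("current flushes per tick:  " + current.flushCount());  // 15
        System.out.println("proposed flushes per tick: " + proposed.flushCount()); // 1
    }
}
```

As a side note, newer Netty 4.1 releases ship io.netty.handler.flush.FlushConsolidationHandler, which coalesces flushes in a similar spirit; whether the Netty version bundled with the targeted Minecraft versions is recent enough would need checking.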

Stale

All 9 comments

what are you proposing we do, not flush every tick? can't that result in desyncs since everything arrives at the client delayed by at least a tick?

do you have any numbers on how much bandwidth this would actually save?

Currently there is a flush after every packet, and that is at least 3 packets per entity per tick (which is way too much). Generally we should only flush _once_ after each tick (at most).

I have gathered some statistics:
I recorded with Wireshark: https://www.hastebin.com/ezujitohex (IP addresses are replaced)
The setup was a modded Minecraft with Forge "1.12.2-14.23.0.2501" (the same happens with any other version). We used one of our survival worlds, without too many entities, so NO special "benchmark" situation.

Summarized:
~3000 TCP packets in 10 seconds => 15 per tick
A typical TCP packet has an overhead of 54 bytes. The typical payload lengths used by Minecraft are 7-20 bytes (larger packets are less common), so the raw TCP overhead is ~73%-89%.
For comparison, the typical MTU can be up to 1500 bytes, so in the best case the TCP overhead could be ~4%. (Not realistic for Minecraft sync, but we can improve towards it.)
Furthermore, the TCP protocol requires the connection partner to send an ACK for received data, so we got ~1000 empty TCP ACK packets (54 bytes each), which is yet more overhead (33% of all packets).
Many packets appear to be vanilla (for example: SPacketEntityHeadLook, SPacketEntity, SPacketEntityVelocity, ...)

So if we write all packets to the channel and call flush only at the end of every server/client tick, we can reduce the packets per tick to 1 per direction (+ ~1 for the ACK), so at most 4 per tick, presumably without sync problems.
That means we reduce the TCP header overhead from ~810 bytes to <216 bytes per tick, a reduction of ~73%.
And much more with worlds containing many entities or many other synced objects.

*Edit:
This would also reduce the chance of "TCP Previous segment not captured" on overloaded Internet connections (which causes a retransmission).
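The overhead figures in the capture summary above can be reproduced with a few lines of arithmetic. All constants come from the comment itself: a 54-byte header per TCP packet, 7-20 byte typical payloads, and 15 packets per tick observed:

```java
/**
 * Reproduces the overhead arithmetic from the Wireshark capture summary.
 * Constants: 54-byte per-packet header, 7-20 byte typical payloads,
 * 15 observed packets per tick, at most 4 packets per tick after coalescing.
 */
public class OverheadMath {
    static double overheadPercent(int headerBytes, int payloadBytes) {
        return 100.0 * headerBytes / (headerBytes + payloadBytes);
    }

    public static void main(String[] args) {
        int header = 54;
        System.out.printf("7-byte payload:  %.0f%% overhead%n", overheadPercent(header, 7));   // 89%
        System.out.printf("20-byte payload: %.0f%% overhead%n", overheadPercent(header, 20));  // 73%
        // Best case: one full 1500-byte MTU frame carrying one header.
        System.out.printf("full MTU frame:  %.1f%% overhead%n", overheadPercent(header, 1500 - header)); // 3.6%

        int before = 15 * header; // header bytes per tick today
        int after  = 4 * header;  // 1 data + ~1 ACK per direction
        System.out.println("header bytes per tick before: " + before); // 810
        System.out.println("header bytes per tick after:  " + after);  // 216
        System.out.printf("saved: %.0f%%%n", 100.0 * (before - after) / before); // 73%
    }
}
```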

ah, I misread. So the current situation is 1 flush / packet right, and we're proposing aggregate all packets and do 1 flush / tick?

I think as long as that doesn't interfere with gameplay it could be a good optimization

it seems like the channel is already flushed from NetworkManager.processReceivedPackets which is called by both client and server every tick

so this is potentially just changing the call to write? would have to test basically every vanilla packet to make sure nothing breaks

Edit: of course, that only covers vanilla code - i have no idea where forge packets are written to the channel

It may make sense to flush for player input packets and their responses to keep the input delay down (worst case delay could be 1/10 second). But everything that is created within the tick can be buffered.
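The hybrid policy suggested above (flush latency-sensitive input packets immediately, buffer everything else until the tick flush) could be modeled like this. The predicate classifying packets is hypothetical; vanilla has no such classification today:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Predicate;

/**
 * Model (NOT the Forge/Netty API) of a hybrid flush policy: packets
 * matching a latency-sensitive predicate (e.g. player input and its
 * responses) trigger an immediate flush; everything else waits for the
 * end-of-tick flush. The predicate is a hypothetical classification.
 */
public class SelectiveFlusher {
    private final Predicate<Object> latencySensitive;
    private final Queue<Object> pending = new ArrayDeque<>();
    private int flushes = 0; // each flush stands in for one TCP segment

    public SelectiveFlusher(Predicate<Object> latencySensitive) {
        this.latencySensitive = latencySensitive;
    }

    public void send(Object packet) {
        pending.add(packet);
        if (latencySensitive.test(packet)) {
            flush(); // input packets must not wait up to a full tick
        }
    }

    /** Called once at the end of every server/client tick. */
    public void endTick() {
        if (!pending.isEmpty()) flush();
    }

    private void flush() {
        pending.clear();
        flushes++;
    }

    public int flushCount() { return flushes; }
}
```

Usage sketch: `new SelectiveFlusher(p -> p instanceof CPacketPlayer)` (hypothetical check) would flush movement input right away while entity sync packets ride along in the tick flush.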

This sounds interesting, but really needs a benchmark implementation for comparison purposes.
It's hard to tell subjectively whether this will have a serious impact, and it's also important to try it out and see the downsides in practice.

Since there are potentially severe downsides to messing with networking at this level, a PR for this should be behind an opt-in config option.

This issue has been automatically marked as stale because it has not had activity in a long time. If this issue is still relevant and should remain open, please reply with a short explanation (e.g. "I have checked the code and this issue is still relevant because ___." or "Here's a screenshot of this issue on the latest version"). Thank you for your contributions!

This issue has been automatically closed because it has not had activity in a long time. Please feel free to reopen it or create a new issue.
