codepig666
02-05-2003, 08:38 PM
---edit---
After I posted this, I noticed that fee had already posted the same thing with even more detail.
While much of this is redundant, some of it is still useful I think. If not, just flame it a few times and I'll kill the thread.
----end edit---
I thought I would throw out a few thoughts on the concepts of the combo packets for people to think about and/or flame me for:
The Good:
The network is a bottleneck. As the server makes its rounds to send data to any given client, it frequently has more than one packet in the queue.
In the old model, it would pull packets off the queue for a client up to the point where it reached either the end of the queue or some predetermined threshold, sending out each packet one right after the other.
In the new model, when possible, it takes the queued packets and processes them into a combo-packet so that it only has to perform one network send per pass.
From a SERVER perspective, there is zero change in processing efficiency: The same number of packets will be processed off the queue as in the old model. There is a gain in network efficiency from the fact that what was previously a group of packets is now a single, larger packet.
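The server's real internals aren't public, so here is just a rough Python sketch of the combining step described above. The 512-byte budget, the length-prefix framing, and all the names are my own assumptions; the point is only that one pass over the queue produces one send instead of many.

```python
import struct

MAX_COMBO_SIZE = 512  # assumed per-send budget (hypothetical, roughly MTU-ish)

def flush_queue(queue, send):
    """Drain a client's outgoing queue into as few sends as possible.
    Each sub-packet is length-prefixed so the client can split the
    combo back apart on arrival."""
    while queue:
        combo = bytearray()
        # Pack sub-packets until the next one would overflow the budget.
        while queue and len(combo) + 2 + len(queue[0]) <= MAX_COMBO_SIZE:
            pkt = queue.pop(0)
            combo += struct.pack(">H", len(pkt)) + pkt
        if not combo:
            # A single oversized packet goes out alone, unframed.
            combo = bytearray(queue.pop(0))
        send(bytes(combo))
```

Note that the number of packets pulled off the queue is the same as in the old model, which is why the server-side processing cost doesn't change; only the number of sends drops.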
The Bad:
The network is a bottleneck. As the client receives packets, it opens and processes them one at a time.
The inherent latency and jitter of networks provided a natural smoothing effect. For example, if the server sent out 10 position update packets in the same 100 millisecond period, it is quite likely that the client received and processed them over the course of half a second or even more.
In the new model, all 10 of these small updates arrive in a single packet. They are processed in far more rapid succession than if they had arrived the old way.
Think of it like this: If you watch a movie at 30 frames per second, it is smooth because you view each frame for 1/30th of a second. But if you watch the same movie with all 30 frames being displayed in the second half of each second, it would be bizarrely choppy.
Because the smallest packets are the most likely candidates for combo-packets, things like positional updates (a small packet) will be the hardest hit.
I really don't know what they could do to compensate for this effect. The natural answer would be to include timing information and then have the client space out the changes, but since players are already contending with lag, this additional delay would add to that pain. It would give back the smoothness but at the cost of leaving the player further behind the action, time-wise.
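To make the trade-off concrete, here is a hypothetical sketch of that "include timing information" idea in Python: the client replays the bundled updates on their original schedule instead of all at once. Everything here (the offset format, the class, the names) is assumed, not anything the actual client does, and you can see the cost the paragraph above mentions: every update after the first is deliberately held back, adding to the player's effective lag.

```python
import heapq
import time

class UpdateScheduler:
    """Replays bundled updates with their original server-side spacing."""

    def __init__(self):
        self.pending = []  # min-heap of (play_at, payload)

    def on_combo(self, updates):
        """updates: list of (server_offset_ms, payload), oldest first.
        Schedule them relative to arrival time, preserving spacing."""
        now = time.monotonic()
        base = updates[0][0]
        for offset_ms, payload in updates:
            play_at = now + (offset_ms - base) / 1000.0
            heapq.heappush(self.pending, (play_at, payload))

    def poll(self, now=None):
        """Return every update whose playback time has arrived."""
        now = time.monotonic() if now is None else now
        due = []
        while self.pending and self.pending[0][0] <= now:
            due.append(heapq.heappop(self.pending)[1])
        return due
```

The first update in the bundle plays immediately; the rest are delayed by exactly the gaps the server originally sent them with, which restores smoothness at the price of falling further behind the live action.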
Perhaps the smarter approach is to measure the queue depth and only begin combining packets if it passed a certain threshold which would be indicative of a raid environment. That way people soloing or in regular groups would never experience the new problems while those involved in raids would reap the benefits.
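That adaptive idea is simple enough to sketch. Again, the threshold value and function names below are pure assumptions on my part, just to show the shape of it: shallow queues take the old one-send-per-packet path, deep queues take the combining path.

```python
COMBINE_THRESHOLD = 8  # assumed queue depth suggesting a raid-scale burst

def flush_adaptive(queue, send, combine):
    """Only pay the combining cost (and the client-side burstiness that
    comes with it) when the queue is deep enough to look like a raid;
    otherwise send each packet individually, as the old model did."""
    if len(queue) >= COMBINE_THRESHOLD:
        combine(queue, send)  # e.g. the combo-packet path
    else:
        while queue:
            send(queue.pop(0))
```

With something like this, solo and small-group traffic never sees the new burstiness, while raid traffic still gets the network savings.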
Peace