lidanh
Hi guys,
We have experienced some weird behavior with Lightstreamer and would like to hear your input.
A few months ago we moved to newer Lightstreamer versions: 6.0.1 of the client (JavaScript API) and 5.1.1 Colosseo on the server.
Basically we moved to the new version because we wanted to take advantage of the new WebSocket Streaming integration (HTTP Streaming never worked for us).
We were very happy with the results, since WebSocket Streaming is better than HTTP Polling in all respects (mostly performance-wise).
Today we investigated an issue that causes really poor performance when there is high latency to the Lightstreamer server (ping ~400 ms).
The problem we found is that WebSocket Streaming performs TOO MANY roundtrips to the server when receiving the snapshot.
It appears that the Lightstreamer protocol over WebSocket forces a roundtrip for every ~1.2 KB (as opposed to HTTP Polling, which brings almost ~20 KB on each roundtrip for the snapshot).
The result is that snapshot performance is REALLY poor. It can take ~25 seconds under high latency, WHILE HTTP Polling takes only ~4 seconds (WebSocket Streaming performs ~40 roundtrips and HTTP Polling only ~3).
To make sure this isn't a WebSocket limitation, we created our own WebSocket server (using an open-source implementation) and sent the same data in 1-3 roundtrips; it was around 10 times faster.
We looked for a Server configuration setting to send more data on each roundtrip, but couldn't find anything that worked.
We tried the following configuration parameters, which basically did nothing:
max_buffer_size
sendbuf
max_outbound_frame_size (We had high hopes for this one)
Please take a look at the attached screenshots for a basic demonstration.
We need your prompt reply on this issue, as it has been raised by a live customer.
Thanks,
Lidan Hackmon.
Giuseppe Corti
Hi Lidan,
I can confirm that this is not a specific limitation of WebSockets; the behaviour would be the same with HTTP streaming. This is because in streaming mode the factory configuration of Lightstreamer tries to optimize the latency of real-time updates rather than the sending of large blocks of data.
However, in your case, in our opinion, leveraging the sendbuf parameter should have worked: the factory value is quite low in order to improve the detection of network problems, but it penalizes the sending of a very large snapshot.
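As a sketch, raising it would look something like the following in lightstreamer_conf.xml (10000 bytes is just an example figure here, not a recommendation; tune it to your scenario):
<!-- larger send buffer, so TCP can push several packets without waiting for each ACK -->
<sendbuf>10000</sendbuf>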
By increasing the value of sendbuf you should see TCP still sending packets of about the same size (~1.2 KB), but without waiting for a roundtrip between them. Do you have evidence that you did not get this result when acting in this way?
Could you share some results of your experiments with the sendbuf value changed (values set vs. results)?
Thank you,
Giuseppe
lidanh
Hi Giuseppe,
Thank you for your answer.
I have tested the sendbuf value again. After reading your comment I realize it isn't actually supposed to increase the buffer (frame) size as I thought.
It should change the way Lightstreamer sends data to the client, so that it doesn't wait for the client's acknowledgment before starting a new roundtrip.
I didn't notice this change yesterday when I was testing it, because I was looking for larger roundtrips rather than faster ones.
I can confirm that changing this value to 10000 indeed fixed the issue. The data is transferred much faster, since less time is spent waiting on the latency.
Two questions:
1. What are the disadvantages of changing this value? (performance-wise)
2. Is this the best way to handle this issue? At first I thought the best solution would be increasing the buffer size (so that there would be fewer roundtrips). What do you think of this approach?
Thanks for your help!
Lidan Hackmon
Giuseppe Corti
Hi Lidan,
The message size you observed is indeed the typical size of the TCP packets sent by Lightstreamer, and it is not affected by the sendbuf setting. However, increasing the sendbuf value so that it can hold more than one TCP packet allows the TCP layer to send several packets at once, without waiting for the ACK of each one. I think it was the wait for each packet's ACK that caused your issue.
1. We expect no performance deterioration with larger sendbuf values, quite the opposite. However, in the factory configuration we keep (and generally recommend) a low value in order to detect network issues early and take recovery actions as soon as possible.
2. Sending larger packets (and thus reducing roundtrips) would be even better, but since TCP does not impose a roundtrip for each packet, we were confident that even just a larger sendbuf would improve the situation.
Alternatively, there is a client-side solution that involves managing different clients in different ways, but it requires that each client be able to recognize the case to handle on its own. Please let me know if I should go deeper into this.
Regards,
Giuseppe
lidanh
Hi Giuseppe,
Thank you for your reply.
There is no need to go deeper into the client-side solution; I don't think that's the way we want to go.
In any case, I would still like to test the performance with larger frames per WebSocket packet. Performance has improved significantly since we changed the sendbuf value, but I still wonder whether we can improve it even more.
I'll mention that we tried testing this with that open-source WebSocket server: we got the best performance by increasing the size of each WebSocket frame (to about 10 KB each, instead of the ~1.2 KB Lightstreamer sends) in addition to the sendbuf behavior change.
Thanks,
Lidan.
Giuseppe Corti
Hi Lidan,
Okay, if you want to go down that road then you have to use the <MSS> parameter, which is considered private; that's why you did not find it in the lightstreamer_conf.xml file.
Please add something like
<MSS>10000</MSS>
in lightstreamer_conf.xml, for convenience immediately below the <sendbuf> setting.
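So, as a sketch, that part of the file would look like this (keeping whatever <sendbuf> value you settled on in your earlier tests):
<sendbuf>10000</sendbuf>  <!-- the value from your earlier tests -->
<MSS>10000</MSS>          <!-- new, private parameter -->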
But please note that the factory setting is considered the optimal trade-off among the various possible network scenarios. Also keep in mind that in some cases TCP may nevertheless decide to break the data up into smaller packets.
We therefore recommend that you perform various tests in your typical scenarios to assess the right value for <MSS>, or whether it is better to return to the default.
Regards,
Giuseppe
lidanh
Hi Giuseppe,
Thank you for your help.
The MSS value seems to be working; snapshot performance has improved as well. We are considering leaving MSS at a value of 6000.
By the way, is it possible to perform GZip compression when using WebSockets? I looked at the Server configuration and noticed that it doesn't work with streaming. Is there any other way to get it working with streaming?
Thanks,
Lidan.
Giuseppe Corti
Hi Lidan,
I can confirm that for the moment compression is only available for HTTP streaming and not with WebSockets.
But we are working on that, waiting for the "Compression Extensions for WebSocket" draft (draft-ietf-hybi-permessage-compression) to become official.
lidanh
OK,
Thank you Giuseppe!
HoangTranVinh
Dear all,
I have the same problem.
I was lucky to find this thread. :)
I want to know: if I set the MSS as follows:
<MSS>10000</MSS>
then what should the value of <sendbuf> be?
And are there any other related parameters?
Thanks for your support,
HoangTV.
Giuseppe Corti
Hi HoangTV,
If you decide to change the factory default values of parameters like <sendbuf> and <MSS>, then their tuning should be the result of thorough tests in your typical scenario.
In any case, I expect the value of <sendbuf> to always be kept higher than that of <MSS> (even if only slightly; in your case I would speculate 12000).
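In other words, something along these lines, purely as an illustration of that relationship (not tested values):
<sendbuf>12000</sendbuf>  <!-- kept slightly higher than MSS -->
<MSS>10000</MSS>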
But please let me repeat that you should thoroughly test the choices made.
Regards,
Giuseppe
HoangTranVinh
Many thanks! :)
Regards,
HoangTV
Alessandro Alinone
In most cases, acting on <sendbuf> only should be enough, without touching <MSS>. Try increasing sendbuf as much as you want (up to 2147483647) and see if the situation improves.
Here is how it works.
By default, Lightstreamer keeps a tiny send buffer (1600 bytes). This allows the server to implement very fast detection of network congestion and adaptively throttle the data flow. With a small send buffer the congestion window is kept small as well, which works great for normal streaming activity. But if a huge amount of data needs to be delivered in a short time (similar to normal download activity), such a small send buffer slows down the data transfer a lot. That's why increasing the send buffer, possibly to very large values (for example, try 100,000,000 bytes), will boost download performance.
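For instance, a configuration for that kind of test could simply be (the figure is only the example value mentioned above; verify it against your own scenarios):
<sendbuf>100000000</sendbuf>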