indrajit
Hi,
is there any difference between the sdk-dotnet adapters of the Moderato edition and the commercial editions (that is, Lightstreamer Allegro, Presto, or Vivace)?
I ask because I installed both editions on my machine and made some changes to the .NET adapter in the Moderato edition according to my requirements. There, LS can push the data to the client perfectly. But when I copy the same .NET adapter into the commercial edition, LS is unable to push the data to the client continuously (after the first push, the connection closes). Any suggestions regarding this issue?
Thanks,
Dario Crivelli
No, there is no difference. We have to investigate the behaviour on the Allegro/Presto/Vivace version of your application in more depth.
Which connection do you observe being closed? The Client-Server one or the Server-Remote Server one?
May you please show us the Server log?
indrajit
Here is my log file attached.
Dario Crivelli
I see that the connection is not really closed, but it is rebound 2 times in a few seconds (which is unusual), then it remains open until a page reload is performed.
Moreover, the "Content length too small" warning is issued, which is unusual as well;
this case should never happen, as the <content_length> setting in the Server configuration file should always be large enough.
May you please check the <content_length> setting?
I see that you subscribe to 256 fields. Are your updates very large? A Server log with the LightstreamerLogger.pump category at DEBUG would reveal this.
indrajit
Dario, I checked the <content_length> setting of the configuration file; I did not change anything there, it is actually the default setting.
Yes, I am subscribing to 256 fields, and my updates for each field are about 2 KB to 3 KB.
And for your information, with the same settings I can push the data continuously to the client in the Moderato edition, but the same thing is not happening with the commercial edition.
I am attaching log files (generated today) of the Moderato edition and the commercial edition.
Dario Crivelli
If the update length is so huge, then the bandwidth restrictions become important.
Indeed the Moderato and commercial editions differ on this aspect.
Note that your client requests a bandwidth limit to the Server of 10 kbps, which is very low. While in Moderato edition the feature is not supported and the request is ignored (as shown in the log), in the commercial edition the limit is applied and this slows down the updates flow very much.
Are you really interested in bandwidth control in your application, or did you just forget about the "setMaxBandwidth" call in your client code?
About the "Content length too small" warning: it does not seem to signal any problem in this case; however, it should be avoided by enlarging the <content_length> setting.
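For reference, the setting lives in the Server configuration file (lightstreamer_conf.xml); the value below is an illustrative assumption, not a recommendation, and should simply be sized well above the length of one full update:

```xml
<!-- Raising <content_length> so that a streaming connection can carry many
     full updates before being rebound. The figure is only an example. -->
<content_length>5000000</content_length>
```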
indrajit
Dario, I increased the bandwidth (setMaxBandwidth) up to 100000 kbps, but there is not much improvement in pushing the data to the client continuously.
In Moderato I can push the data continuously to the client.
my client side code is as follows:
function startEngine(eng) {
    // eng.context.setDebugAlertsOnClientError(false);
    eng.policy.setMaxBandwidth(100000);  // requested bandwidth limit, in kbps
    eng.policy.setIdleTimeout(30000);    // ms without data before rebinding
    eng.policy.setPollingInterval(5000); // polling period, if polling is used
    eng.connection.setLSHost(null);      // null: same host the page came from
    eng.connection.setLSPort(null);
    eng.connection.setAdapterName("STOCKLISTDEMO");
    eng.changeStatus("STREAMING");
}
I am attaching the log file generated after increasing the bandwidth.
indrajit
Hi,
Today I again observed the data-push behaviour with different bandwidth settings on the client side, as follows:
1. //eng.policy.setMaxBandwidth(100000); (in the first case I commented out the bandwidth line)
Here I observed that the LS Server pushes the data for all 256 fields at once, every 1 1/2 to 2 minutes (note that it is not streaming data continuously).
2. eng.policy.setMaxBandwidth(100000);
Here I observed that the LS Server pushes the data for all 256 fields at once, every 1 minute 20 seconds to 1 1/2 minutes (note that it is not streaming data continuously).
3. eng.policy.setMaxBandwidth(50000);
Here I observed that the LS Server pushes the data for all 256 fields at once, every 1 minute to 1 1/2 minutes (note that it is not streaming data continuously).
4. eng.policy.setMaxBandwidth(30);
Here I observed that the LS Server pushes the data for all 256 fields at once, every 50 seconds to 1 minute 10 seconds (note that it is not streaming data continuously).
The timings mentioned above are not exact; sometimes they also fall outside those boundaries.
So I think that bandwidth is not playing an important role in the streaming of data.
With the Moderato edition I can see data streamed every second at the client side (in Moderato it does not push all the data at once, and the changed data also arrives every second). This is not happening with the commercial edition, and it does not appear live.
Dario Crivelli
A further bandwidth limitation is probably posed by the Metadata Adapter.
You use Remote Metadata and Data Adapters, right? If you use the Remote Metadata Adapter as it comes from our examples, then this is definitely possible.
Note that both the supplied "DotNetServers.bat" and "DotNetCustomServer.bat" launch scripts set the "max_bandwidth=40" command line argument. This instructs the Metadata Adapter (which is the ready-made LiteralBasedProvider) to return 40 kbps upon "getAllowedMaxBandwidth". You should just remove those arguments.
Note that updates for a single item are always atomic. If your item contains 256 images and all of them have changed, then all that data is sent by the server in a single event. Hence, in order to honour a stringent bandwidth limit, a long pause is imposed afterwards.
This explains what you observe (if the above assumptions are right).
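A back-of-the-envelope check makes the magnitude concrete (assuming the "2-3 kb" fields reported earlier are kilobytes, taking 2.5 KB as a midpoint, and the 40 kbps cap set by the example launch scripts):

```javascript
// Time needed to drain one full atomic update through a bandwidth cap.
// fields * kbPerField gives the payload in KB; * 8 converts KB to kbit.
function fullUpdateSeconds(fields, kbPerField, maxBandwidthKbps) {
  var kilobits = fields * kbPerField * 8;
  return kilobits / maxBandwidthKbps;
}

// 256 fields of ~2.5 KB each through a 40 kbps cap:
console.log(fullUpdateSeconds(256, 2.5, 40)); // 128 (seconds), i.e. ~2 minutes
```

This is consistent with the 1 1/2 to 2 minute gaps reported above.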
indrajit
Thanks Dario,
That solution worked for me; now I can see the continuous changes at the client side. But it is somewhat slow. Is there any way to increase the performance of the LS Server, so that whatever changes happen feel live?
Dario Crivelli
May you please provide us with more details on the expected and observed behaviour? What makes you feel that the update sequence is unnatural?
Note that if your application sends 2-3 kb fields and there are as many as 256 fields in an item and the fields change frequently, then the bottleneck might be on the client side as well.
Are those fields images that the client renders on the screen?
If the different fields are not meant to be updated by the client at the same time, you could rather try to define 256 items with one field each and see if the final effect is better.
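As an illustration of that alternative layout (the item naming scheme and helpers below are hypothetical, not part of any Lightstreamer API), each of the 256 blocks would become its own item, renderable as soon as its own update arrives:

```javascript
// Hypothetical per-block item naming: "block_0" .. "block_255".
function blockItemName(index) {
  return "block_" + index;
}

// Position of a block in the reassembled image, assuming a 16x16 grid
// (16 * 16 = 256 blocks).
function blockPosition(index, blocksPerRow) {
  return {
    row: Math.floor(index / blocksPerRow),
    col: index % blocksPerRow
  };
}

console.log(blockItemName(17));     // "block_17"
console.log(blockPosition(17, 16)); // { row: 1, col: 1 }
```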
indrajit
hi,
our scenario is like this:
we are continuously capturing screenshots and pushing them to the server (we capture a screenshot every 250 milliseconds).
For that we made an application which divides each screenshot into 256 image blocks and pushes them to the server continuously, one by one, in base64 format over a TCP/IP socket.
When a client browses to the LS Server URL, he sees the screenshots we are capturing at the other end.
We are capturing and sending the screenshots, and the client can see them perfectly, but the client takes some time (8 to 10 seconds) to see what we captured.
So my question is: can we do anything at the LS Server end to reduce the time taken by the client browser to see the captured images (i.e. 8 to 10 seconds)?
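As a side note on the base64 step: encoding inflates each block by about a third (4 output bytes for every 3 input bytes), which adds to the payload the Server has to push. A quick check in plain JavaScript (no Lightstreamer API involved):

```javascript
// base64 output length for n raw bytes: 4 * ceil(n / 3), '=' padding included.
function base64Length(rawBytes) {
  return 4 * Math.ceil(rawBytes / 3);
}

// A 2 KB raw block grows to ~2.7 KB on the wire.
console.log(base64Length(2048)); // 2732
```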
Dario Crivelli
Is 10 seconds the total time from taking the screenshot to seeing it displayed on the client? Is the client at 100% CPU during the process? Are you using the Javascript Client Library? (note that the Javascript Client Library is not meant for getting large amounts of data and may not be efficient for this task).
Lightstreamer allows you to operate in two directions:
- Sending the image slices independently. If you can display the image slices independently, then you can request them in such a way that each one comes in a different update and can be displayed as soon as possible, without the need to wait for the other images.
- Slowing the frequency of the screenshot updates. The Javascript Client Library has a mechanism to adaptively slow down the update frequency when the client can't keep the pace of the updates, though it has never been tested on very big updates.
But I'm not sure that the above is the kind of manipulation you are looking for. May you please clarify further?
indrajit
yes,
we are using the Javascript Client Library; is there any replacement for it, given that we are pushing the 256 image blocks every 250 milliseconds? Or shall we go for the .NET client library to make the image appearance feel live?
Dario Crivelli
The Javascript Client Library was designed with textual data in mind; more generally, it was not designed with huge amounts of data in mind. Hence, we have no benchmarks for a usage of your kind. May you please confirm that your client CPU reaches 100% during the processing?
May you also please confirm that you are not interested in an adaptive slowdown of the refresh frequency based on the client's capability?
If this is the case, then we just suppose that part of the overhead is due to the Javascript interpreter, so that a dotnet based client may be more efficient. By the way, if you just turn off image rendering on the client but still subscribe to all the data what do you observe on the CPU of the client machine?
indrajit
On the client side the CPU usage is not 100%:
when I subscribe to the images, the CPU's average usage is 7%, and
when I subscribe to the data without images, the CPU's average usage is 5%.
Dario Crivelli
As you have removed all bandwidth limits (which, with very huge data, could introduce significant delays)
and you haven't put any stringent frequency limit,
there is nothing in Lightstreamer that can cause delays of several seconds.
Therefore, there must be a bottleneck or a blocking behaviour somewhere else.
However, Lightstreamer could only be involved by contributing to a CPU bottleneck; but the client (which is the most sensitive part) seems not to be overwhelmed (unless you are using a multi-CPU machine; please confirm).
Hence, there is still no evidence that Lightstreamer is involved in the overall delay.
Before investigating further, may you please try reducing the load (requesting and rendering only one of the 256 sub-images, for instance) and see whether the delay disappears in that case?
indrajit
I am capturing the whole desktop image, dividing it into 256 small image blocks, and sending them to the LS Server in base64 format; at the LS Client side I extract and join them again into a single image. This is my original scenario.
As for checking a single image block (i.e. a single field in my item): when I push the image blocks to the server, they are all pushed in random order (the 1st field is not necessarily updated first, nor is the last, i.e. the 256th, updated last).
So when I check with a single image block, the LS Server takes almost the same amount of time to push that block to the client as it used to take to push all the image blocks.
And I am running the LS Server on one machine and checking the data by running the client on another machine.
Dario Crivelli
Unfortunately, I can't understand the point about random updates of the image blocks and why this affects the delay.
When you tried with a single image block, you only reduced the "schema" requested on the client side to one field (the upper left block, for instance), is that right? And didn't you get any latency improvement?
Commenting on this from the Lightstreamer point of view: if we rule out client-originated overhead, it remains to check for server-originated overhead, though we don't expect it to be higher. Please reduce the load further by passing only a single image block (the same block as in the client schema) to the Update method call on the Remote Data Adapter side and see what happens.
If the delay is still high, we can move to Server log analysis (which is difficult now, when all image blocks are involved).
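For instance (the field names and the subscription step below are a hypothetical sketch, not taken from the thread), reducing the client-side schema for the single-block test could look like:

```javascript
// Full schema: one field per image block, e.g. "block1 block2 ... block256".
var fields = [];
for (var i = 1; i <= 256; i++) {
  fields.push("block" + i);
}
var fullSchema = fields.join(" ");

// Reduced test schema: only the upper-left block.
var reducedSchema = "block1";

// The subscription would then request reducedSchema instead of fullSchema
// (the exact API call depends on the Javascript Client Library version in use).

console.log(fields.length); // 256
```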