Faster sending of files from network source
I noticed that sending files from network sources is very slow. Checking the source code, I realized that this is because the file to be sent is read byte by byte.
Attached is a patch that implements simple "caching" to greatly improve the speed: the file is simply loaded into memory in one transfer before it is sent.
I implemented this only for the "simple" Send File, not for ZMODEM, etc., as I have no way to test those. I have tried to implement the caching so that it does not interfere with these functions, and I think extending it to support them should be easy. (However, some functions use _llseek, which the caching does not handle yet.)
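The idea of the patch can be sketched roughly as follows. This is only an illustration, not the actual Tera Term code: the function name cache_whole_file and the use of portable stdio instead of the Windows _lread API are my assumptions.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper illustrating the caching idea: load the whole
 * file into a malloc'd buffer before the transfer starts, so the send
 * loop reads from memory instead of hitting the (network) file system
 * byte for byte. Returns NULL on failure; *size receives the length. */
static unsigned char *cache_whole_file(const char *path, long *size)
{
    FILE *fp = fopen(path, "rb");
    unsigned char *buf;
    long len;

    if (fp == NULL)
        return NULL;
    if (fseek(fp, 0, SEEK_END) != 0 || (len = ftell(fp)) < 0) {
        fclose(fp);
        return NULL;
    }
    rewind(fp);
    buf = malloc(len > 0 ? (size_t)len : 1);
    if (buf != NULL && fread(buf, 1, (size_t)len, fp) != (size_t)len) {
        free(buf);
        buf = NULL;
    }
    fclose(fp);
    if (buf != NULL)
        *size = len;
    return buf;
}
```

The send loop then serves bytes out of the returned buffer and frees it when the transfer finishes.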
It would be great to see this patch (or maybe an improved version) in the next release.
Thank you for your feedback.
We will review your patch within our team and decide whether it will be committed
into the trunk repository.
Thank you for your patience.
Any news regarding this?
I am sorry for the late reply.
We have now discussed your patch. We think the legacy 16-bit APIs should be
removed and replaced with the 32-bit APIs for performance.
Please wait a moment.
I have checked the new version with the 32-bit API. The speed over the network has improved, but it is still much slower than using a local file (2 kB/s vs. 10 kB/s at 115,200 baud). So may I ask you to reconsider using my patch, or to implement a similar approach? I think it is essential to read the file not byte by byte but in larger chunks. Reading, say, 256 bytes at a time would probably also work; my patch reads the complete file at once at the start of the transfer.
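The chunked alternative mentioned above could look roughly like this. The ChunkReader type and chunk_getc function are hypothetical names of my own, and portable stdio stands in for the Windows file API; this is a sketch of the technique, not Tera Term code:

```c
#include <stdio.h>

#define CHUNK_SIZE 256  /* small fixed buffer instead of the whole file */

/* Hypothetical reader state: a small window of the file kept in memory. */
typedef struct {
    FILE *fp;
    unsigned char buf[CHUNK_SIZE];
    size_t fill;   /* bytes currently in buf */
    size_t pos;    /* next unread byte in buf */
} ChunkReader;

/* Returns the next byte (0..255), or -1 at end of file.
 * The underlying file is touched only once per CHUNK_SIZE bytes,
 * instead of once per byte. */
static int chunk_getc(ChunkReader *r)
{
    if (r->pos >= r->fill) {                 /* buffer exhausted: refill */
        r->fill = fread(r->buf, 1, CHUNK_SIZE, r->fp);
        r->pos = 0;
        if (r->fill == 0)
            return -1;                       /* EOF or read error */
    }
    return r->buf[r->pos++];
}
```

Unlike the whole-file cache, this keeps memory use bounded regardless of file size, at the cost of one refill per chunk.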
Again, we have discussed your request. Unfortunately, your patch cannot be committed
to the Tera Term repository as-is, because we think the code below will lower throughput:
+ if(SendVar->FileSize && SendVar->FileSize < 2000000000)
+ SendVar->FileCacheBuffer = (BYTE *)malloc(SendVar->FileSize);
+ fc = _lread(SendVar->FileHandle, SendVar->FileCacheBuffer, SendVar->FileSize);
If, for example, a 2 GB file is specified, the malloc and _lread calls will probably stall
for a long time.
Would you please rethink your patch and send us a revised version?