[BBLISA] Large file transfer in 65ms latency

John P. Rouillard rouilj at cs.umb.edu
Thu Apr 4 17:42:09 EDT 2013


In message
<D903A33936D7D942B97A5BE9E8F3D90C72071595C9 at AUSP01VMBX06.collaborationhost.net>,
Tika Mahata writes:

>I am looking for a solution to transfer large Oracle database
>files, about 2TB,

I assume Oracle will be shut down and the files will not be in use.
If not, you need to use some Oracle mechanism (replication, export,
hot backup) to transfer or make a static copy of the files; otherwise
you will end up with garbage on the destination system.
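
If you can take the downtime, the simplest way to get a consistent
copy is a clean shutdown before copying. A minimal sketch, assuming
SYSDBA access on the source host (adapt to your environment):

    # run on the source host as the oracle OS user
    echo "shutdown immediate;" | sqlplus / as sysdba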

>between two datacenters that have 65ms of latency over a 1Gbps p2p
>link. What are the options to reduce the transfer time?

For an initial transfer this is hard to beat:

  tar -c | gzip | nc  ->  nc | gunzip | tar -x
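
Spelled out as actual commands it might look something like the
following. The hostname, port, and paths are placeholders, and netcat
option syntax varies between versions (traditional netcat wants
"nc -l -p 9000"), so treat this as a sketch:

    # on the destination host, start the listener first
    nc -l 9000 | gunzip | tar -xf - -C /dest/oracle/data

    # on the source host, stream the data directory across
    tar -cf - -C /source/oracle/data . | gzip | nc desthost 9000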

You can also try a parallel transfer using:

    lftp -e 'mirror -c --parallel=3 --use-pget=5  /source/oracle/data/dir/. .' sftp://user@remotehost

that transfers 3 files at a time from
remotehost:/source/oracle/data/dir to the current directory, using 5
data streams per file (up to 15 streams transferring data at once).

Note that both of these can fill your link and make other traffic
across it laggy.
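
If you need to leave headroom for other traffic, both approaches can
be throttled. A sketch, assuming pv is installed and that your lftp
has the net:limit-total-rate setting; the 50 MB/s figure is just an
example:

    # cap the sending side of the nc pipeline at roughly 50 MB/s
    tar -cf - -C /source/oracle/data . | gzip | pv -L 50M | nc desthost 9000

    # cap lftp's total transfer rate (bytes per second) before the mirror runs
    lftp -e 'set net:limit-total-rate 50000000; mirror -c --parallel=3 --use-pget=5 /source/oracle/data/dir/. .' sftp://user@remotehost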

>I have Linux at both ends. Any software or protocol or tuning
>the OS parameters?

I think there are some parameters to tweak the TCP window sizes, so
if you have a low-error link you may be able to squeeze out some
additional performance by keeping more data in flight before waiting
for acks (a 65ms x 1Gbps path has a bandwidth-delay product of
roughly 8MB). But I don't know if it's worth it, and it could cause
more problems if your link loses packets.
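
For reference, the usual knobs on Linux are the socket buffer limits.
A sketch of the sort of tuning people try on a path like this; the
16MB ceiling is just a guess at roughly twice the bandwidth-delay
product, so verify it for your setup:

    # allow TCP windows up to 16MB; run on both ends
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
    # window scaling is required for windows over 64KB (normally on by default)
    sysctl -w net.ipv4.tcp_window_scaling=1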

I'll leave this part of the question for others to answer.

--
				-- rouilj
John Rouillard
===========================================================================
My employers don't acknowledge my existence much less my opinions.


