My last post is a few years old, and not because I lack good ideas to write about, but because of my poor time management, which I'm trying to improve. It's January, time to make some New Year's resolutions :), so I will try to post more often.
Recently, on one project, I had to load several CSV files from a remote Linux host into a DB2 database residing on another Linux host. Sounds easy, but the files were quite big (several GiB), so I tried to do it in the most efficient way possible. Here is my approach:

  1. Compress the file on the source system; text files usually have low entropy, so the compression ratio is very good.
  2. Create a named pipe on the target host.
  3. Send the data over the network using ssh in compressed form.
  4. Uncompress the data on the fly and redirect the uncompressed output into the named pipe.
  5. Use the named pipe as the input file for the DB2 load utility.

I believe this is the most efficient way to load data from a remote host, and it has a few other advantages. You don't need space to uncompress the file on the target host; in fact, you don't need any space for the compressed file on the target at all. The approach also saves a lot of network traffic, because the data crosses the network in compressed form. That was particularly useful in my scenario: I was loading into columnar databases (DB2 BLU and Sybase IQ), which typically compress the data very well themselves, and storage on the target was quite restricted.
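By the way, gzip can report the ratio it actually achieved, so you can estimate the network savings up front (a quick check on the source host, using the same example file name as below):

source$ gzip -l file.csv.gz     # prints compressed size, uncompressed size, and the ratio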

Here is an example:

source$ gzip file.csv                                                 # step 1: compress in place, producing file.csv.gz
target$ mkfifo -m 0666 /load/file.csv                                 # step 2: create the named pipe
source$ cat file.csv.gz | ssh target 'gunzip -c - > /load/file.csv'   # steps 3+4: ship compressed, uncompress into the pipe
target$ db2 "load FROM /load/file.csv ... "                           # step 5: load from the pipe (run while step 3 is still streaming)
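One caveat worth spelling out: writing into a named pipe blocks until a reader opens the other end, so the ssh command above will simply wait until the load opens /load/file.csv. Run the two in separate sessions, or pull the data from the target and background the transfer there. Here is a minimal single-session sketch of the pull variant (the host name source, the path /data/file.csv.gz and the table MYSCHEMA.MYTABLE are hypothetical placeholders, and the load options stand in for whatever the "..." above covers):

target$ mkfifo -m 0666 /load/file.csv                                 # named pipe the load will read from
target$ ssh source 'cat /data/file.csv.gz' | gunzip -c > /load/file.csv &   # pull compressed, uncompress into the pipe
target$ db2 "load FROM /load/file.csv OF DEL INSERT INTO MYSCHEMA.MYTABLE"  # DEL format = delimited ASCII (CSV)

The backgrounded pipeline blocks on the pipe until the load starts reading, then streams the data through it; nothing ever lands on the target's disk.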
