Software Engineer

more fun linux development stuff…

· by jsnby ·

After getting a basic “hello world” SNMP example to work, I set out to test this method of data transfer against two others… writing a custom client/server app and doing wgets against an apache web server. The goal here is to quickly and efficiently transfer about 10 lines of text from one machine (a remote host) to another (my monitoring machine on an administrative network). The test case is transferring 10 lines of text 60 times in a row, essentially mimicking a single data transfer from each of 60 machines. We’re doing the transfers serially rather than in parallel (as they would be under real-world conditions), just to get benchmarks of the various transfer methods.
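The timing itself is nothing fancy. Roughly this kind of harness (just a sketch: transfer_one() is a stand-in for whichever transfer method is under test, and the hostnames are made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Stand-in for whichever transfer method is being benchmarked
# (SNMP gets, the custom client/server send, or a wget shell-out).
sub transfer_one {
    my ($host) = @_;
    # ... do the actual transfer of ~10 lines of text for one host ...
}

# Made-up hostnames representing the 60 remote machines.
my @hosts = map { sprintf("host%02d.example.com", $_) } 1 .. 60;

my $start = [gettimeofday];
transfer_one($_) for @hosts;    # serial, one host at a time
my $elapsed = tv_interval($start);

printf "60 transfers took %.2f seconds\n", $elapsed;
```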

Throughout this project, I’ll be using perl for just about everything. I know some of you hate perl because it’s ugly, hard to read, etc., but I love it. I’d take it over python any day.

First up, SNMP. This application is going to run on my admin box and poll the remote machines. I decided to use perl’s Net::SNMP module because it seemed easy to use. I started off by doing 10 gets for each machine, one for each line. The poller process simply dropped the data on the floor after receiving it. This proved to be very slow; my first attempt was somewhere near 50 seconds, mostly because I had to instantiate a bunch of stuff for each individual get. Next, I tried doing the 10 gets again, this time reusing the same session. This brought me down to 34 seconds.
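The session-reuse version looks roughly like this. It’s just a sketch: the community string is the usual default, and the OIDs are placeholders for whatever actually exposes the 10 lines on the remote side:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Net::SNMP;

my @hosts = @ARGV;    # the 60 remote machines to poll

# Placeholder OIDs -- stand-ins for whatever exposes the 10 lines remotely.
my @oids = map { "1.3.6.1.4.1.2021.8.1.101.$_" } 1 .. 10;

for my $host (@hosts) {
    # One session per host, reused for all 10 gets.
    my ($session, $error) = Net::SNMP->session(
        -hostname  => $host,
        -community => 'public',
        -version   => 'snmpv2c',
    );
    die "SNMP session to $host failed: $error\n" unless defined $session;

    for my $oid (@oids) {
        my $result = $session->get_request(-varbindlist => [$oid]);
        next unless defined $result;
        my $line = $result->{$oid};    # dropped on the floor in the benchmark
    }
    $session->close();
}
```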

I got to thinking about it, and decided to take the 10 lines of text from the file, replace the \n characters with pipes (|), and then do the transfer in a single get. I coded it up and this proved to be very quick… about 3 seconds for all 60 machines. I decided to see just how much text I could send in a single get and found out that the limit is 1KB. That’s pretty much a deal-breaker. The fact that I would now have to maintain custom snmpd.conf files was also a slight turn-off to this method.
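The remote half of that idea looks something like the script below, which snmpd would run via an exec entry. The snmpd.conf line in the comment, the file path, and the command name are purely illustrative, not the actual config I’d have to maintain:

```perl
#!/usr/bin/perl
# Remote-side helper that snmpd could run via an "exec" entry in snmpd.conf,
# e.g. something along the lines of:
#   exec statusline /usr/local/bin/statusline.pl
# (illustrative only -- not the actual config.)
use strict;
use warnings;

# Hypothetical path to the file holding the 10 lines of status text.
my $file = '/var/tmp/status.txt';

open my $fh, '<', $file or die "can't open $file: $!\n";
chomp(my @lines = <$fh>);
close $fh;

# Collapse the 10 lines into one pipe-delimited string so the whole thing
# comes back in a single get... as long as it stays under the ~1KB limit.
print join('|', @lines), "\n";
```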

Next up, my custom client/server app. I stripped down a simple chat application that I wrote in perl. I am running the receiver (the server) on the admin box and the client on the remote machine. Since it worked so well to transfer the text as a single line, I started with this approach. Again, the input (once received by the server) was simply dropped on the floor. I ran my test and got about 2.5 seconds. Not bad… definitely better than my results using SNMP. In theory, this should be the fastest method, but since I wrapped my simple client script inside a bigger script, every run had to re-interpret all the perl modules, and I believe that’s why it didn’t perform better.
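Stripped of the chat-app leftovers, the client side boils down to something like this. The host, port, and payload here are made up for the sketch; the receiver on the admin box is just an accept loop that reads the one line and throws it away:

```perl
#!/usr/bin/perl
# Client side: connect to the receiver on the admin box and send the
# pipe-delimited status text as a single line. Host and port are made up.
use strict;
use warnings;
use IO::Socket::INET;

# Placeholder payload standing in for the 10 lines joined with pipes.
my $payload = join('|', map { "status line $_" } 1 .. 10);

my $sock = IO::Socket::INET->new(
    PeerAddr => 'admin.example.com',
    PeerPort => 9999,
    Proto    => 'tcp',
) or die "connect failed: $!\n";

print $sock "$payload\n";
close $sock;
```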

Final method… simple wgets against an apache web server. Since we were already running apache on the admin box and had connectivity from our remote machines to the admin network, this was looking like a very good solution. I performed two tests, doing GET requests and doing POST requests. First up was the GET request. Since we don’t have the LWP modules installed on our boxes, I had to shell out from my app to do a wget. I wrote a simple php receiver that checks whether my variable is set, responds with a 1 if it is (0 otherwise), and simply drops the input on the floor. I thought that shelling out would be a deal-breaker, but I was pleasantly surprised. Again, I coded this up to transfer the text as a single chunk. Doing the GET requests, I got a time of a little over a second for all 60 requests. I then tried doing wgets with the --post-data flag set. Doing the posts, I got under a second for all 60 transfers.
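The shell-out itself is about as simple as it sounds. A sketch of both variants; the URL and variable name are made up, and a real version would need to URL-encode the payload before stuffing it into the request:

```perl
#!/usr/bin/perl
# Shell out to wget since the LWP modules aren't installed on the remote boxes.
# URL and parameter name are made up for the example; a real version would
# URL-encode $payload first.
use strict;
use warnings;

my $payload = join('|', map { "status line $_" } 1 .. 10);
my $url     = 'http://admin.example.com/receiver.php';

# GET: the data rides along in the query string.
my $get_response = `wget -q -O - "$url?data=$payload"`;

# POST: same data sent via --post-data instead.
my $post_response = `wget -q -O - --post-data="data=$payload" "$url"`;

# The php receiver answers 1 if it saw the variable, 0 otherwise.
print "GET said $get_response POST said $post_response\n";
```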

This was not quite what I had expected. I anticipated that apache would add some overhead on the receiving end.

So, my application will end up being a perl client to send the data and a php receiver to get the data and do something cool with it… stay tuned to find out exactly what I’m up to.