
Wireshark-users: Re: [Wireshark-users] (no subject)

From: Martin Visser <martinvisser99@xxxxxxxxx>
Date: Thu, 9 Jul 2009 11:37:28 +1000
Kevin,

In my experience at least 90% of perceived database application performance issues are blamed on the network. When I do analysis of the actual traffic, however, I find that 90% of the problem is that the application designers have not taken into account what the network is able to provide!

Many applications seem to be developed on high-speed (gigabit), low-latency networks, with unloaded machines. Only at the final integration/roll-out stage do those developers actually start to take an interest in the environment they are deploying to.

For many app developers, the network is pretty much a black box, so sadly it is the job of network engineers to acquire the data from the network and present it back to the application people. Luckily Wireshark can help immensely with this. Things you can do include:-

1. Understand basic network infrastructure parameters - maximum throughput and latency (round-trip time). Error rates should not normally be an issue, but they can be when wireless or satellite is the physical medium. Maximum throughput can be tested using load-generation tools like iperf, or you can even glean it from the IO Graphs of the real traffic. Looking at SYN/SYN-ACK pairs will give you a good idea of the minimum RTT possible, as the SYN-ACK tends to be generated in the kernel and is not normally load related (see the first sketch after this list).

2. You will want to know whether Quality of Service has been designed into the network, and look at how it works. Under congestion, do packets get delayed or dropped? (The second sketch after this list shows one way to check whether QoS markings are present at all.)

3. How does the protocol work? Are client and server very chatty (and hence impacted by RTT)? When a client query occurs, is there a long delay before the "time to first byte"? This indicates the amount of processing time the backend has to perform. Or does a client query result in a large server response, and does the client need to wait until the last byte is received from the server before it can process the result (and then display it)? What is the "time to last byte" for the query in question? (The third sketch after this list measures both.) Is the application protocol dependent on things like DNS or other name resolution, and is that working optimally? Does the client talk to a farm of servers, and is the load-balancing/redirection optimal?
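
To illustrate point 1, here is a rough Python sketch using Scapy. Both the tool choice and the capture file name "trace.pcap" are just illustrative assumptions, not anything specific to your setup. It pairs each client SYN with the matching SYN-ACK and prints the delta, which approximates the minimum RTT per connection:

from scapy.all import rdpcap, IP, TCP

packets = rdpcap("trace.pcap")  # hypothetical capture file
syn_times = {}  # (src, dst, sport, dport) -> timestamp of the SYN

for pkt in packets:
    if IP not in pkt or TCP not in pkt:
        continue
    ip, tcp = pkt[IP], pkt[TCP]
    flags = int(tcp.flags)
    if flags & 0x12 == 0x02:  # SYN set, ACK clear: client opening the connection
        syn_times[(ip.src, ip.dst, tcp.sport, tcp.dport)] = float(pkt.time)
    elif flags & 0x12 == 0x12:  # SYN+ACK: the server's reply
        key = (ip.dst, ip.src, tcp.dport, tcp.sport)  # reverse direction
        if key in syn_times:
            rtt = float(pkt.time) - syn_times.pop(key)
            print(f"{ip.src} -> {ip.dst}: handshake RTT {rtt * 1000:.2f} ms")

You can get the same numbers by hand in Wireshark, of course - the point of scripting it is to sweep a whole capture at once.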
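
For point 2, a quick sanity check is whether QoS markings are present and survive end to end. The DSCP value is the upper six bits of the IP TOS byte, so a minimal sketch (same assumed "trace.pcap") can tally it per source:

from collections import Counter
from scapy.all import rdpcap, IP

dscp_seen = Counter()
for pkt in rdpcap("trace.pcap"):
    if IP in pkt:
        dscp = pkt[IP].tos >> 2  # DSCP is the upper 6 bits of the TOS byte
        dscp_seen[(pkt[IP].src, dscp)] += 1

for (src, dscp), count in sorted(dscp_seen.items()):
    print(f"{src}: DSCP {dscp} on {count} packets")

If everything shows DSCP 0, nobody is marking traffic; if markings appear on one side of a WAN capture but not the other, they are being stripped in transit.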
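
And for point 3, "time to first byte" and "time to last byte" can be measured straight from the capture. The sketch below assumes a single query on the connection and hypothetical client/server addresses; a real trace with several queries per connection would need per-request segmentation on top of this:

from scapy.all import rdpcap, IP, TCP

CLIENT, SERVER = "10.0.0.1", "10.0.0.2"  # hypothetical addresses

query_t = first_byte_t = last_byte_t = None
for pkt in rdpcap("trace.pcap"):
    if IP not in pkt or TCP not in pkt:
        continue
    ip, tcp = pkt[IP], pkt[TCP]
    payload_len = ip.len - ip.ihl * 4 - tcp.dataofs * 4  # TCP segment data only
    if payload_len <= 0:
        continue  # skip bare ACKs and handshake packets
    t = float(pkt.time)
    if ip.src == CLIENT and query_t is None:
        query_t = t  # first packet carrying the client's query
    elif ip.src == SERVER and query_t is not None:
        if first_byte_t is None:
            first_byte_t = t  # server's first response byte
        last_byte_t = t  # keeps updating until the response ends

if query_t is not None and first_byte_t is not None:
    print(f"time to first byte: {(first_byte_t - query_t) * 1000:.2f} ms")
    print(f"time to last byte:  {(last_byte_t - query_t) * 1000:.2f} ms")

A long time to first byte points at backend processing; a long gap between first and last byte points at response size, throughput, or the client sitting on a slow link.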

Obviously having the protocol specification is useful, or hopefully Wireshark can decode enough for you to see the conversation. (This may mean decoding SSL, or maybe even disabling encryption for a particular client to simplify testing.) Otherwise you can infer quite a bit from simply watching the flow (I guess a bit like how spies can gain intelligence from simply detecting patterns of transmissions).

If I am in a situation of not knowing a lot about the protocol, I try to use a coordinated approach where I basically watch the client screen and Wireshark side by side, recording the approximate frame numbers as a way of synchronising what the client/server does with what the network sees.

BTW a client-initiated TCP RST usually indicates it is "done" with the session. This happens either as a natural course of the application protocol (i.e. it received the necessary response), because the client got an application response that says something like "don't ask me that, try something else", or because application traffic on that session has been quiescent for a timeout period and the client wants to free up resources. (A quick way to see who is resetting sessions is sketched below.)
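
Along the same lines as the earlier sketches (Scapy again, with the hypothetical "trace.pcap"), counting which endpoint sends the RSTs tells you quickly whether the client or the server is tearing sessions down:

from collections import Counter
from scapy.all import rdpcap, IP, TCP

rst_senders = Counter()
for pkt in rdpcap("trace.pcap"):
    if IP in pkt and TCP in pkt and int(pkt[TCP].flags) & 0x04:  # RST bit
        rst_senders[pkt[IP].src] += 1

for src, count in rst_senders.most_common():
    print(f"{src} sent {count} RSTs")

If the resets consistently come from the client side, that matches the "client is done with the session" pattern above; server-side resets are more often worth chasing.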



Regards, Martin

MartinVisser99@xxxxxxxxx