hping2

hping2 is a network tool able to send custom TCP/UDP/ICMP packets and display target replies. It works pretty much like ping, but with far more options. It can be used, among other things, to:

  • test firewall rules
  • perform port scanning
  • test network performance using different protocols, packet sizes, etc.
  • perform Path MTU discovery
  • run traceroute-like probes under different protocols

Testing port state

One of the most useful use cases of hping is testing whether a TCP port is open:

# hping -S -p 22 192.70.106.78
HPING 192.70.106.78 (eth0 192.70.106.78): S set, 40 headers + 0 data bytes
len=40 ip=192.70.106.78 ttl=64 DF id=0 sport=22 flags=RA seq=0 win=0 rtt=0.2 ms

The RA flags indicate that TCP port 22 is closed: the remote host has sent a RST/ACK in response to our SYN packet. If the port were open, the flags would have been SA (SYN/ACK) instead.

Note that you can also use the ++ prefix to automatically increase the port number after each reply:

# hping -S -p ++80 192.168.10.1

Port scanning

hping can be used as a lightweight port scanner:

# hping -S --scan 20-22,80,8080 -V 192.168.100.1
using eth0, addr: 192.168.100.18, MTU: 1500
Scanning 192.168.100.1 (192.168.100.1), port 20,21,22,80,8080
5 ports to scan, use -V to see all the replies
+----+-----------+---------+---+-----+-----+
|port| serv name |  flags  |ttl| id  | win |
+----+-----------+---------+---+-----+-----+
   20 ftp-data   : ..R.A...  64     0     0
   21 ftp        : ..R.A...  64     0     0
   22 ssh        : .S..A...  64     0  5840
   80 www        : .S..A...  64     0  5840
 8080 http-alt   : .S..A...  64     0  5840
All replies received. Done.
Not responding ports:

Firewall mapping

traceroute is usually the first utility people use for this task, but it’s limited to UDP “probe” packets (sent to high ports starting at 33434 by default). hping can use any protocol:

# hping -z -t 6 -S mail.test.com -p 143
TTL 0 during transit from ip=10.1.5.3
7: TTL 0 during transit from ip=10.1.5.3
8: TTL 0 during transit from ip=10.2.5.3
9: TTL 0 during transit from ip=10.3.5.3
10: TTL 0 during transit from ip=10.4.5.3
11: TTL 0 during transit from ip=10.6.5.3
....
len=46 ip=10.5.5.3 flags=SA DF seq=33 ttl=47 id=0 win=5840 rtt=4341.3ms

Doing a SYN attack

hping can forge packets with a spoofed source IP address using the -a parameter. Together with the -i option (for interval), you can use it to launch a SYN flood attack:

# hping -a 192.168.10.99 -S 192.168.10.33 -p 80 -i u1000
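The u prefix to -i expresses the interval in microseconds; a quick sanity check of the packet rate this implies (plain shell arithmetic, nothing hping-specific):

```shell
# -i u1000 = one packet every 1000 microseconds,
# i.e. 1,000,000 / 1000 = 1000 packets per second
echo $((1000000 / 1000))
# → 1000
```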

Transferring a file

hping can be used in very creative ways, for example to transfer a file between two hosts you have access to, through a very ‘closed’ firewall.

On the receiving end, we need to start hping in listen mode and specify a ‘signature’ string that indicates the beginning of the file content:

# hping 192.168.10.66 --listen signature --safe --icmp > myfile

On the sending side, you must ‘sign’ the packets with the signature used on the receiving side and indicate the file to read:

# hping 192.168.10.44 --icmp -d 100 --sign signature --file myfile

ICMP, TCP or UDP can be used interchangeably.

[DRBD] Fixing a split-brain

For the most part DRBD is pretty resilient, but if a power failure occurs on both nodes, or if you screw up an update on a Corosync cluster, you have a good chance of ending up with a split-brain situation. In that case DRBD automatically disconnects the resources and leaves you to fix the mess by hand.

Check nodes status

cat /proc/drbd
version: 8.4.0 (api:1/proto:86-100)
GIT-hash: 28753f559ab51b549d16bcf487fe625d5919c49c build by gardner@, 2011-12-12 23:52:00
 0: cs:StandAlone ro:Secondary/Unknown ds:UpToDate/DUnknown   r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:76

The master isn’t happy: the resource is StandAlone and its peer is Unknown.
The secondary node isn’t doing any better:

cat /proc/drbd
version: 8.4.0 (api:1/proto:86-100)
GIT-hash: 28753f559ab51b549d16bcf487fe625d5919c49c build by gardner@, 2011-12-12 23:52:00
 0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r-----
    ns:0 nr:0 dw:144 dr:4205 al:5 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:100

Fixing the cluster

To repair the cluster, we will declare one node’s data “obsolete” (we choose the secondary here) and then reconnect the resources so they can resume synchronization.

On the “obsolete” node:

drbdadm secondary all
drbdadm disconnect all
drbdadm -- --discard-my-data connect all

On the master node:

drbdadm primary all
drbdadm disconnect all
drbdadm connect all

[OpenVZ] Limit memory usage for file-caching

The OpenVZ kernel handles memory exactly like the vanilla one: by default it will use as much memory as possible to cache data and files. So if you look at memory usage with a simple free -m command inside a container and pick the wrong line, you may conclude that you’re lacking RAM when you’re not.

If you want a better idea of actual memory usage, you can check the content of /proc/meminfo:

# cat /proc/meminfo | egrep 'MemTotal|MemFree|Slab|SReclaimable|Buffers'
MemTotal:       24485028 kB
MemFree:          949880 kB
Buffers:         7601704 kB
Slab:            4725268 kB
SReclaimable:    4658232 kB

Buffers and SReclaimable are used respectively for I/O buffering and for file content caching (reclaimable slab), and can be handed back on demand to processes needing RAM. Therefore this memory is in fact ‘free’.
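To estimate the memory actually available, you can sum MemFree, Buffers and SReclaimable. With the figures above (a quick awk sketch; on a live system you would read /proc/meminfo directly):

```shell
# Sum the reclaimable/free lines to estimate truly available RAM
awk '/^(MemFree|Buffers|SReclaimable):/ {sum += $2} END {print sum " kB"}' <<'EOF'
MemFree:          949880 kB
Buffers:         7601704 kB
SReclaimable:    4658232 kB
EOF
# → 13209816 kB
```

So roughly 12.6 GB of the 24 GB is actually available, despite MemFree showing under 1 GB.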

Now, one interesting thing about OpenVZ is that you can limit this file-caching behavior per container, using the dcachesize variable, like this:

# vzctl set  --dcachesize 268435456:295279001 --save

Here we limit the container to 256 MB for file caching (the value is a barrier:limit pair, in bytes). Note that you must use the ubc memory management schema for the container, not the standard SLM:

# vzctl set  --save --slmmode ubc
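The barrier figure used above does work out to 256 MB:

```shell
# 268435456 bytes / 1024 / 1024 = size in MB
echo $((268435456 / 1024 / 1024))
# → 256
```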

Maximum HTTP header size

The HTTP specification (RFC 2616 for HTTP/1.1) doesn’t define a maximum header size.
In practice, however, all servers enforce limits on the number of headers and on header field size:

Apache 2.x: 8K
Nginx: 8K
IIS: 8K-16K (depending on the version)

If the request line exceeds the limit, a 414 Request-URI Too Large error is returned. If a request header field exceeds the limit, a 400 Bad Request error is returned. To be sure a request will be processed by all HTTP servers, it’s better to keep the request size under 8190 bytes (and yes, that includes cookie data).
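You can get a feel for how quickly headers add up by counting the bytes of a raw request yourself (a toy request, made up for illustration; each header line ends with CRLF):

```shell
# Count the bytes of the request line + headers, CRLFs included
printf 'GET /some/long/path HTTP/1.1\r\nHost: example.com\r\nCookie: session=abc123\r\n\r\n' | wc -c
# → 75
```

Even this minimal request already uses 75 bytes of the budget; a handful of large cookies can easily push a real request past 8K.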

If you can’t do that, the only remaining solution is to increase the limit values. For Apache, you can play with the LimitRequestFieldSize and LimitRequestLine parameters; for nginx, take a look at the large_client_header_buffers parameter.
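For example, raising both limits to 16 KB (illustrative values; the directive names are the real ones, the sizes are a choice you must make for your own setup):

```apacheconf
# Apache httpd.conf: allow request lines and header fields up to 16 KB
LimitRequestLine 16384
LimitRequestFieldSize 16384
```

```nginx
# nginx http or server block: 4 buffers of 16 KB each for large headers
large_client_header_buffers 4 16k;
```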

Keep in mind that increasing these values too much can seriously degrade performance.