[OpenVZ] Limit memory usage for file-caching

The OpenVZ kernel handles memory exactly like the vanilla one: by default it uses as much memory as possible to cache data and files. So if you look at memory usage with a simple free -m inside a container and pick the wrong line, you can conclude that you’re short on RAM when you’re not.

If you want a better idea of actual memory usage, check the content of /proc/meminfo:

# cat /proc/meminfo | egrep 'MemTotal|MemFree|Slab|SReclaimable|Buffers'
MemTotal:       24485028 kB
MemFree:          949880 kB
Buffers:         7601704 kB
Slab:            4725268 kB
SReclaimable:    4658232 kB

Buffers and SReclaimable are used respectively for I/O buffering and for reclaimable kernel caches (dentry and inode caches, among others), and this memory can be handed back on demand to processes needing RAM. Therefore it is in fact ‘free’.
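As a quick sketch, you can estimate the memory that is actually available by summing those reclaimable fields of /proc/meminfo:

```shell
# MemFree + Buffers + SReclaimable approximates the RAM that processes
# could actually obtain (all /proc/meminfo values are in kB).
awk '/^(MemFree|Buffers|SReclaimable):/ { kb += $2 }
     END { printf "approx. available: %d kB\n", kb }' /proc/meminfo
```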

Now, one interesting thing about OpenVZ is that you can limit this file-caching behavior per container, using the dcachesize parameter, like this:

# vzctl set <CTID> --dcachesize 268435456:295279001 --save

Here we limit the container to 256 MB (soft barrier) for this cache, with a hard limit about 10% higher. Note that you must use the UBC memory management mode for the container, not the standard SLM:

# vzctl set <CTID> --save --slmmode ubc
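The barrier:limit pair above can be derived like this (a sketch following the common OpenVZ convention of setting the hard limit roughly 10% above the barrier):

```shell
# dcachesize values are in bytes: barrier = 256 MB, limit = barrier + 10%.
barrier=$((256 * 1024 * 1024))
limit=$((barrier + barrier / 10))
echo "--dcachesize $barrier:$limit"
```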

[OpenVZ] NFSd v3 inside a container

Hypervisor prerequisite

In order to get a working NFS server daemon inside a container, you must satisfy some prerequisites on the hypervisor. First, the kernel must be recent enough; using the latest RHEL5 or RHEL6 OpenVZ kernels is recommended. You also need vzctl version 3.0.24 or later. Next, you must install and load the nfsd kernel module.
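Loading the module on the hypervisor might look like this (the /etc/modules path is a Debian-style assumption; adapt to your distribution):

```shell
modprobe nfsd                 # load the NFS server module now
echo nfsd >> /etc/modules     # and have it loaded again at boot
```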

Only then can you enable the nfsd capability for the container, like this:

vzctl set $ID --feature nfsd:on --save

Don’t forget to restart the container to activate this capability. After that, to simplify the VM firewall configuration, I recommend explicitly setting the lockd binding ports:

hypervisor:~# vi /etc/modprobe.d/lockd.conf
options lockd nlm_udpport=2045 nlm_tcpport=2045

NFS VM configuration

First install the nfs-kernel-server and rpcbind packages. Specify the RPC ports to use and disable NFSv4 support:

nfsvm:~# vi /etc/default/nfs-kernel-server
# Options for rpc.mountd.
# If you have a port-based firewall, you might want to set up
# a fixed port here using the --port option. For more information,
# see rpc.mountd(8) or http://wiki.debian.org/SecuringNFS
# To disable NFSv4 on the server, specify '-N 4' here
RPCMOUNTDOPTS="--manage-gids -N 2 -N 4 --port 2048"
nfsvm:~# vi /etc/default/nfs-common
# Options for rpc.statd.
#   Should rpc.statd listen on a specific port? This is especially useful
#   when you have a port-based firewall. To use a fixed port, set this
#   variable to a statd argument like: "--port 4000 --outgoing-port 4001".
#   For more information, see rpc.statd(8) or http://wiki.debian.org/SecuringNFS
STATDOPTS="--port 2046 --outgoing-port 2047"

Don’t forget to restart the daemons.
Lastly, modify the VM firewall configuration: open port 111 (tcp/udp) and the range 2045-2049 (tcp/udp) for all NFS client IPs.
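As an illustration, hypothetical iptables rules for that step; the 192.0.2.0/24 network stands in for your actual NFS clients:

```shell
# Allow the portmapper (111) and the fixed statd/lockd/mountd/nfsd
# range (2045-2049) over both TCP and UDP.
for proto in tcp udp; do
    iptables -A INPUT -p "$proto" -s 192.0.2.0/24 --dport 111 -j ACCEPT
    iptables -A INPUT -p "$proto" -s 192.0.2.0/24 --dport 2045:2049 -j ACCEPT
done
```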


[OpenVZ] FTP inside containers

To enable FTP inside containers, you must first make sure the proper modules are loaded on the host:

modprobe ip_conntrack
modprobe ip_conntrack_ftp

Don’t forget to add them to /etc/modules so they are loaded at boot.
Then, in the /etc/vz/vz.conf configuration file, add or modify the following line:

IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length iptable_nat ipt_state ipt_conntrack ip_conntrack_ftp"

Then restart the vz service.
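A small sanity check before restarting, making no assumption about the kernel generation (the helper module is named ip_conntrack_ftp on older kernels and nf_conntrack_ftp on newer ones):

```shell
# Report whether the FTP connection-tracking helper is currently loaded.
if grep -qE '^(ip|nf)_conntrack_ftp' /proc/modules 2>/dev/null; then
    echo "FTP conntrack helper loaded"
else
    echo "helper missing: run modprobe ip_conntrack_ftp"
fi
```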

[OpenVZ] iptables: Memory allocation problem

Let’s say you add a new iptables rule inside a container, but this time this happens:

# iptables -I INPUT -s 123.123.123.123 -j DROP
iptables: Memory allocation problem

Where does it come from?

You probably hit the limit of the numiptent parameter. Check its failcnt:

# egrep "failcnt|numiptent" /proc/user_beancounters

If it’s greater than zero, you have your answer.
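To extract just that counter, you can parse the beancounters output with awk. The sample line below is hypothetical; on a real OpenVZ system replace the echo with cat /proc/user_beancounters (the columns are resource, held, maxheld, barrier, limit, failcnt):

```shell
# failcnt is the last column of a beancounters line.
echo "numiptent  200  200  200  200  37" |
awk '$1 == "numiptent" { print ($NF > 0 ? "numiptent limit hit " $NF " times" : "numiptent OK") }'
```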

Increase the limit

On the host you can redefine the limit (soft and hard) for a container like this:

# vzctl set VPS_ID --save --numiptent 800:1000

Here I doubled the default values.

[OpenVZ] Enable iptables inside containers

To enable iptables inside containers, you must first make sure the proper modules are loaded on the host:

modprobe xt_state
modprobe xt_tcpudp
modprobe ip_conntrack

Don’t forget to add them to /etc/modules so they are loaded at boot.
Then, in the /etc/vz/vz.conf configuration file, add the following line:

IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state"

Then restart the vz service.

[OpenVZ] vzctl enter and env variables

When doing a vzctl enter from an OpenVZ hypervisor, you get a shell inside the container but… without any environment variables:

hypervisor:~# vzctl enter container
entered into VE 101
container:/# echo $LANG

container:/# su -
container:~# echo $LANG
en_US.UTF-8

You can work around this ‘problem’ by patching the container’s /root/.bashrc to execute a su - when the environment is empty, like this:

if [ "$LANG" = "" ]; then
    exec su -
fi
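The snippet above relies on LANG being empty only in a bare vzctl-enter shell. The detection can be demonstrated without actually switching user (the su - here is only echoed, not executed):

```shell
# Run a child shell with an empty LANG, as vzctl enter effectively does,
# and show which branch of the .bashrc test would be taken.
LANG="" sh -c 'if [ -z "$LANG" ]; then echo "would run: exec su -"; else echo "environment OK"; fi'
```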

[OpenVZ] Useful one-liner

List containers and their load averages:

# vzlist -o ctid,name,laverage

List containers sorted by TCP sender buffer usage:

# vzlist -H -o ctid,name,tcpsndbuf | sort -r -n -k3

List containers sorted by TCP receive buffer usage:

# vzlist -H -o ctid,name,tcprcvbuf | sort -r -n -k3

List containers sorted by overall resources consumption:

for i in `vzlist -H -o veid`;do echo $i; vzcalc $i; echo "========"; done;

Add a new network interface to a container:

# vzctl set <id> --netif_add eth0 --save