Friday, February 7, 2014

Proxmox playground - Part 1

Notes
I've decided to play a bit with Proxmox, a web-managed cluster distro that makes it easy to migrate VMs among nodes. Proxmox is basically based on the Red Hat Cluster suite. The Proxmox suite lacks a tool for handling cluster resources (at least I didn't find one); also, the container part is based on OpenVZ, by now superseded by LXC. Despite all this, the worst thing right now is the lack of documentation for v3.0.

Scenario
I want to build a small dev environment where a new VM (I will actually use containers - CT) is spawned by developers every time they need one. Since the software set for those VMs is large (database, backend, general tools...), I will create a CT template; the other machines will then be cloned from it.

Index
In this Proxmox experiment I will configure in HA:

 1) a service IP through which to reach the cluster
 2) a CT template from which the CT clones originate
 3) a script to automate CT creation/spawning

This document assumes a two-node Proxmox "cluster" is already set up and running. Briefly, the setup here is:
 * Cluster name: vicinet
 * Node01: 'proxmox' - 192.168.1.13
 * Node02: 'proxmox2' - 192.168.1.14
 * Storage: NFS /mnt/pve/shared on a separate node

Configure FENCING first
Before configuring any HA resource, we need to configure fencing. Fencing is basically the procedure by which a malfunctioning node is excluded from the cluster by sending it a shutdown/reboot command. Such commands can be sent by an APC UPS, an IPMI device, and so on. The software that triggers the device (UPS, IPMI...) to send the commands is called an agent.

In the Red Hat Cluster suite, the package 'fence-agents' provides several fence agents that will probably cover your device too (APC, Cisco, IBM blade... but also IPMI, iLO and many more). On Proxmox the package is called 'fence-agents-pve'.
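Installing it on each node is the usual one-liner:

apt-get install fence-agents-pve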

NOTE: Proxmox works with VMs (KVM) and containers (OpenVZ). Since my experiment here was just about testing the creation/automation of Linux containers, I could happily set up Proxmox inside VirtualBox.

For my fencing setup I found fence_vbox; you can also find agents for VMware, virsh and so on, should you need them.

Enable fencing on Debian on each node
Uncomment the last line of the file /etc/default/redhat-cluster-pve:
# this file is sourced by the following init scripts:
# /etc/init.d/cpglockd
# /etc/init.d/cman
# /etc/init.d/rgmanager

FENCE_JOIN="yes"

Reload these services in this order, and do it *outside the /etc/pve directory*:
/etc/init.d/rgmanager stop
/etc/init.d/cman reload
/etc/init.d/rgmanager start
/etc/init.d/pve-cluster restart

(I set an alias for this in .bashrc, like 'alias restart-cluster='...')
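A sketch of such an alias, simply chaining the commands above (the initial cd keeps you outside /etc/pve):

alias restart-cluster='cd && /etc/init.d/rgmanager stop && /etc/init.d/cman reload && /etc/init.d/rgmanager start && /etc/init.d/pve-cluster restart'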

Test that the node has joined the fence domain:
proxmox:~# fence_tool ls
fence domain
member count  2
victim count  0
victim now    0
master nodeid 2
wait state    none
members       1 2

Edit the cluster.conf file with the proper procedure (cluster.conf.new...), adding the fencing setup. I will just report mine, as an example:
<clusternodes>
    <clusternode name="proxmox" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" ipaddr="Proxmox" name="VBox"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" ipaddr="Proxmox2" name="VBox"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>

  <fencedevices>
    <fencedevice agent="fence_vbox" login="zmo" name="VBox" host="192.168.1.65"/>
  </fencedevices>

 * 'proxmox', 'proxmox2': the hostnames of the Proxmox nodes
 * 'Proxmox', 'Proxmox2': the GUEST names in VirtualBox
 * 'VBox': the name of the fence configuration to use
 * '192.168.1.65': the IP of the VirtualBox HOST

As said, this is just an example of a fencing configuration in Proxmox; test and use your own agent and values.
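For reference, the "proper procedure" mentioned above is, roughly: edit a copy of the file, bump its config_version, then activate the new configuration from the web GUI (Datacenter -> HA):

cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
vi /etc/pve/cluster.conf.new    # remember to increment config_version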

Add a cluster IP
This will be the cluster IP, which will migrate between nodes in case of failure. Configure this migration pattern by writing a first <failoverdomain> section:

<rm>
  <failoverdomains>
      <failoverdomain name="vicinet" ordered="1" restricted="0">
        <failoverdomainnode name="proxmox" priority="1"/>
        <failoverdomainnode name="proxmox2" priority="2"/>
      </failoverdomain>
  </failoverdomains>
</rm>

A resource assigned to this failover domain will have 'proxmox' as its preferred node; if proxmox fails, the resource will be migrated to 'proxmox2'. A brief explanation:
 * ordered: assign a preference order to the nodes by priority (1=max, 100=min)
 * restricted: the resource cannot migrate (or even be started manually) if the defined nodes are not available. We turn 'restricted' off (0)

Add the IP configuration:

<rm>
  <failoverdomains>
      <failoverdomain name="vicinet" ordered="1" restricted="0">
        <failoverdomainnode name="proxmox" priority="1"/>
        <failoverdomainnode name="proxmox2" priority="2"/>
      </failoverdomain>
  </failoverdomains>
  <resources>
      <ip address="192.168.1.100" monitor_link="5"/>
  </resources>
  <service autostart="1" domain="vicinet" name="ip_cluster" recovery="relocate">
      <ip ref="192.168.1.100"/>
  </service>
</rm>

 * monitor_link enables link-state monitoring for that IP
 * relocate means that, on failure, the service is moved to another node; since the domain is ordered, it also moves back to the preferred node ('proxmox' in this case) when, for example, that node comes back online after a failure
 * autostart: the service is started by the cluster itself

Reload the cluster and see that you have a new resource on your cluster (use clustat):
Cluster Status for vicinet @ Sun Dec 29 10:54:32 2013
Member Status: Quorate

 Member Name             ID      Status
 ------ ----             ----    -----
 proxmox                 1       Online, Local, rgmanager
 proxmox2                2       Online, rgmanager

 Service Name            Owner (Last)          State         
 ------- ----            ----- ------          -----         
 service:ip_cluster      proxmox               started

Also note the new secondary address on node proxmox (use the 'ip' tool, not 'ifconfig'):

3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 01:E0:27:29:3e:fc brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.13/24 brd 192.168.1.255 scope global vmbr0
    inet 192.168.1.100/24 scope global secondary
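To try the failover by hand you can relocate the service with clusvcadm (part of rgmanager), then watch it with clustat:

proxmox:~# clusvcadm -r ip_cluster -m proxmox2
proxmox:~# clustat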
 






Monday, October 28, 2013

Log4j syslogAppender and Rsyslog

This is how I worked around an annoying situation with log4j's SyslogAppender (the syslog module of the Java logging library) and Rsyslog together, in order to send Tomcat logs to a remote log analyzer (Splunk, on the same machine as Rsyslog) that understands the 'log4j' format.

The article is just meant to show some Rsyslog hacks.

Of course, I noticed the problem from the log analyzer, which wasn't working because of the garbled log format. So first, I wanted to have a look at how the logs were arriving at Rsyslog.

To show all the fields, I've enabled the RSYSLOG_DebugFormat template, appending it to the entry in /etc/rsyslog.conf:
local1.info /var/log/log_analyzer/application.log;RSYSLOG_DebugFormat
This is the awful result:
Debug line with all properties:
FROMHOST: 'localhost', fromhost-ip: '127.0.0.1', HOSTNAME: '2013-10-25', PRI: 142,
syslogtag '19:', programname: '19', APP-NAME: '19', PROCID: '', MSGID: '-',
TIMESTAMP: 'Oct 25 19:38:51', STRUCTURED-DATA: '-',
msg: '38:51,915 INFO
As shown, those fields are totally messed up (HOSTNAME: '2013-10-25', MSGID: '-'...). The reason behind these mismatched log4j/Rsyslog fields could be either a misconfiguration on the log4j side (don't ask me where, since the configuration is minimal) or a broken SyslogAppender module. Reading around the web, I became fairly convinced it was the latter.

So my decision, for the moment, is to bypass the Rsyslog handling of those logs as much as I can, in order to store them exactly as they come from log4j.

These are the steps

1 - Let's define a custom template
Rsyslog applies its default template, so first we need to set up a custom one (Log4j, which basically skips any formatting of our logs) and point our "local1.info" entry to it:
$template Log4j, "%rawmsg%\n"
local1.info /var/log/log_analyzer/application.log;Log4j
2 - Turn the control character escaping off
With %rawmsg%, our logs will look like this:
<142>2013-10-25 20:20:53,862 INFO  ["http-80"] org.apache.cxf.interceptor: Inbound Message#012Content-Type: xxx#012Headers: xxx
As we can see, the string has many #012 sequences in the middle: that's the escaped line feed (newline) control character, octal 012. By default Rsyslog escapes control characters on receive, so let's disable that:
$EscapeControlCharactersOnReceive off
Verify the Rsyslog configuration
lurch~# rsyslogd -N1
...
rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad imuxsock
rsyslogd: End of config validation run. Bye.
And restart the service
lurch~# service rsyslog restart 
Let's have a look at the log again:
<142>2013-10-25 20:20:53,862 INFO  ["http-80"] org.apache.cxf.interceptor: Inbound Message
Content-Type: xxx
Headers: xxx
Much better, but...

3 - Remove some chars from the string
Our string still begins with <142>. This is the PRI value of the syslog message (facility*8 + severity; 142 is in fact local1.info: 17*8 + 6). Since the message is raw, I assume the PRI comes straight from log4j, so it won't be possible to play with Rsyslog fields to remove it.
So I decided to cut off the first five characters of the string (<142>) in the template definition, using the property replacer's substring syntax (from character 6 to the end of the message):
$template Log4j, "%rawmsg:6:$%\n"
Finally:
2013-10-25 20:20:53,862 INFO  ["http-80"] org.apache.cxf.interceptor: Inbound Message
Content-Type: xxx
Headers: xxx
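Putting the pieces together, the relevant /etc/rsyslog.conf fragment ends up as:

$EscapeControlCharactersOnReceive off
$template Log4j, "%rawmsg:6:$%\n"
local1.info /var/log/log_analyzer/application.log;Log4j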

Sunday, March 10, 2013

LXC and cgroup.memory on Debian

Two days ago on Lurch I was trying to show/set the memory limit for a container (LXC), using "lxc-cgroup -n <container> memory.limit_in_bytes".

Unfortunately I got the message "lxc-cgroup: missing cgroup subsystem", which I first interpreted as "I couldn't mount this cgroup in this session".

Briefly, when asking LXC about the memory cgroup everything looked OK, while asking Linux itself it did not.

Another confusing point for me was the dmesg output, which listed the memory cgroup among the others.
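One way to reproduce the "asking Linux" check (a reconstruction, since the original screenshots are gone): /proc/cgroups shows the memory subsystem known to the kernel but not enabled (last column 0; the other numbers are illustrative):

lurch~# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
cpuset          1               2               1
cpu             2               8               1
memory          0               1               0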

So, after a little googling, I understood that the memory cgroup is simply not enabled on Debian by default. That's because having cgroup.memory enabled costs around 15 MB of RAM, which is obviously a waste if you don't use that cgroup.

To make said cgroup available, you need to instruct GRUB, via /etc/default/grub, with the boot parameter cgroup_enable=memory.
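In practice, on a standard Debian GRUB setup (the rest of the line is whatever you already have):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet cgroup_enable=memory"

lurch~# update-grub
lurch~# reboot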

The amount of memory reserved for the cgroup is printed out at boot time.
In the end I could set my cgroup memory limit.
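For example (the container name is hypothetical):

lurch~# lxc-cgroup -n mycontainer memory.limit_in_bytes 512M
lurch~# lxc-cgroup -n mycontainer memory.limit_in_bytes
536870912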

Saturday, March 9, 2013

Pxe with Dnsmasq

Just a couple of words..

For far too long a period of my life, I manually changed the "pxelinux" entry in the "dnsmasq.d/domain.conf" file to boot this or that image, depending on the distro I needed to install.

I have finally found a way to serve all the PXE images in one shot, and even with a comfortable menu list.

That is accomplished by adding something like:
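(The original snippet is gone; this is a sketch using dnsmasq's pxe-prompt/pxe-service options, with hypothetical image paths under the TFTP root:)

# /etc/dnsmasq.d/domain.conf
enable-tftp
tftp-root=/srv/tftp
pxe-prompt="Choose an image to netboot", 10
pxe-service=x86PC, "Install Debian", debian/pxelinux
pxe-service=x86PC, "Install CentOS", centos/pxelinux
pxe-service=x86PC, "Boot from local disk", 0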
Needless to say, you could even point each PXE entry to a different TFTP server (pxe-service accepts an optional server address as its last argument).

Wednesday, January 2, 2013

VirtualBox port-forwarding

Today on my Munich-Florence train I wasted some time trying various ways of "sshing" into my Puppet VirtualBox guest. Usually the DHCP client does everything and, once the guest has its IP address, I can easily "ssh" into it. This time I had no WiFi connection and my ssh attempts kept bouncing: the guest looked unreachable. I instinctively tried every "nat", "bridged", "host-only" option.. and in the end.. I had to read the documentation :) (which, by the way, is a very good paper)

..and that is how I discovered the official best practice for port-forwarding services on a VirtualBox guest.

In this example (extracted straight from the doc), we are going to port-forward the SSH service from port 8888 on our HOST to port 22 on the GUEST (VM name: "Puppet Test Machine"):
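(The original snippet was lost; reconstructed from the values above and the VBoxManage manual, it should be:)

VBoxManage modifyvm "Puppet Test Machine" --natpf1 "sshService,tcp,,8888,,22"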

"sshService" is just a label.

In this way, our HOST will accept the forward on every interface. It is possible to bind the rule to a specific interface, though (by filling in the host IP field).

Now that our forward is ready, we can connect to the loopback on the given port:
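(Reconstructed as well; the user name is hypothetical:)

ssh -p 8888 user@127.0.0.1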
This rule will be permanent unless you explicitly remove it. See the rule properties:
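(The NAT rules show up in the VM info; output format roughly as VirtualBox prints it:)

VBoxManage showvminfo "Puppet Test Machine" | grep -i rule
NIC 1 Rule(0):   name = sshService, protocol = tcp, host ip = , host port = 8888, guest ip = , guest port = 22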
Then delete the rule:
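(Again per the manual's --natpf syntax:)

VBoxManage modifyvm "Puppet Test Machine" --natpf1 delete "sshService"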

Saturday, November 17, 2012

Python-stdeb

This is what I've done to make a shiny Python library of mine become a fully respectable deb package. Move into your Python directory, where everything started, and create your setup.py file, more or less like this sample:
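(The original sample is gone; a minimal sketch, with hypothetical metadata; the mypylib name matches the note below:)

# setup.py -- minimal sketch; adjust metadata to your library
from distutils.core import setup

setup(
    name="mypylib",
    version="0.1",
    description="My shiny Python library",
    author="Your Name",
    packages=["mypylib"],
)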
So, in your directory you now have:
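(Something like:)

$ ls
mypylib/  setup.py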
Create the stdeb configuration file stdeb.cfg
This file will provide parameters for the soon-to-be-generated DEBIAN/control file. A complete list of the parameter translations is here: http://pypi.python.org/pypi/stdeb#stdeb-cfg-configuration-file
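(A minimal sketch; the package name and dependency are illustrative:)

[DEFAULT]
Package: python-mypylib
Depends: python (>= 2.6)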
Given that, build your package with:
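python setup.py --command-packages=stdeb.command bdist_deb

(This is the standard stdeb invocation; the results land in deb_dist.)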
NOTE: it will build the package as mypylib-<version>; the version is the one written in setup.py (version="")

Enjoy your deb_dist directory.

Mikrotik (winbox): VLAN and bridge

The other day I needed to add a new WiFi access point in a new spot in our building. Our network, like many others, is partitioned into VLANs; we have nearly a dedicated VLAN for every distinct network segment.

Our best practice, where permitted, is to keep access points as untouched as possible, handling the VLANs on the managed switches. This gives us better, centralized network control. This time, however, the location was linked to the rest of the infrastructure through a non-managed switch.



"Port 40" is simply where the trunk from the unlucky zone ends on the managed switch.

So, since the non-managed switch cannot collaborate with us, we are forced to tag the VLAN traffic on the AP itself.



In this article we will start from the default MikroTik configuration and work up to our new bridge for the new VLAN. The only thing already configured is the switch port (in our case number 40), marked as tagged with VID 6. I'm obviously referring to the managed switch.
note: I assume you already have a server for this VLAN (dhcp server and so on ..)
otherwise: http://www.cyberciti.biz/tips/howto-configure-linux-virtual-local-area-network-vlan.html

Let's start by opening Winbox, which lets us connect to the MikroTik through its MAC address, avoiding loss of connection while playing around with bridges, VLANs...

~$ wine winbox.exe



Create the new VLAN interface, which will be tied to the master-local port on the router.



Why the master-local port?

The master-local port on the RouterBoard is the collection of the slave ports, so I act on it to give the same VLAN (VID 6) to both the wireless and every wired connection.

Once the VLAN interface is created, move to the "Bridge" tab and disable the existing one (bridge-local) by clicking "Disable" (the red "X").



..I forgot.. this VLAN is basically for guests, so the new bridge will be named "bridge-guest". Open the "Bridge" tab and add it:



Move to the adjacent "Ports" tab. Double-click on "master-local" and replace the "interface" field with the VLAN interface "guest" and the "bridge" field with the just-created "bridge-guest".



Configure the "wlan1" interface, replacing its "bridge" field from "bridge-local" to "bridge-guest"; then go to "IP/DHCP Client": click on "+", choose "bridge-guest" in the "Interface" field, and click OK. If everything goes well, you will get an IP address from your server on the other side. A rough CLI equivalent of the whole sequence follows.
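(A sketch of the same steps in RouterOS CLI terms; interface names assume the default RouterBoard configuration:)

/interface vlan add name=guest vlan-id=6 interface=master-local
/interface bridge disable [find name=bridge-local]
/interface bridge add name=bridge-guest
/interface bridge port set [find interface=master-local] interface=guest bridge=bridge-guest
/interface bridge port set [find interface=wlan1] bridge=bridge-guest
/ip dhcp-client add interface=bridge-guest disabled=no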



That's all.