Fixing locale errors in Ubuntu 8.04

I’ve hit this problem a few times, and figured I’d leave a note for myself on how to fix it. Ubuntu 8.04 seems to hiccup sometimes (at least on a VPS) when generating the correct locales. In particular, I get this error a lot:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

Normally I just do ‘dpkg-reconfigure locales’, but with 8.04, this doesn’t seem to do squat. The solution is to edit the /var/lib/locales/supported.d/local file, and insert the correct locales (it will normally not exist, so create it):

# cat /var/lib/locales/supported.d/local
zh_TW.UTF-8 UTF-8
zh_TW BIG5
zh_TW.EUC-TW EUC-TW
en_US.UTF-8 UTF-8
en_US ISO-8859-1
en_US.ISO-8859-15 ISO-8859-15

You can then do a ‘dpkg-reconfigure locales’ and they will be generated correctly. For a list of supported locales, try this:

cat /usr/share/i18n/SUPPORTED | grep US
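
Once the file above is in place, a quick way to confirm the fix took is to regenerate the locales and then list what is installed (standard commands, nothing 8.04-specific):

dpkg-reconfigure locales
locale -a | grep -i en_us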

Files between ESX and Linux via NFS

I like ESX. I like Linux. It is absurdly easy to configure Linux as an NFS server and mount it in ESXi.

Install NFS

I currently use Ubuntu Server for my home lab, but the process is basically the same for Red Hat and derivatives.

sudo apt-get install nfs-common
sudo apt-get install nfs-kernel-server

Next, configure NFS so it can serve your local LAN. Normally you would list only specific servers, but, well, we’re being cheap and dirty today. Open /etc/exports in vi or your editor of choice.

/etc/exports

/media/disk/Images 192.168.0.0/24(rw,no_root_squash,async)

Restart NFS.

sudo /etc/init.d/nfs-kernel-server restart
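
Before heading over to the ESXi side, it’s worth confirming the export is actually visible; showmount (installed with the NFS packages above) will list the current exports:

showmount -e localhost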

In the VI client, go to Configuration -> Storage -> Add Storage.

Select NFS

Fill in the info, see screenshot.

Wait a minute. Voila! New datastore.

Images to come shortly.
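
As an aside, if you’d rather not click through the GUI, the same datastore can be added from the ESX(i) console with esxcfg-nas. A rough sketch, assuming the Linux box exporting /media/disk/Images sits at 192.168.0.5 (substitute your own address and datastore label):

esxcfg-nas -a -o 192.168.0.5 -s /media/disk/Images images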

Beginning Scripting ESXi

I’m not often impressed by software, especially the closed-source kind; I lean toward all things FOSS. If I had a million dollars, I’d likely spend all day contributing to all the projects I wish I had time to contribute to. Regardless, there are a select few closed-source products that I believe are truly excellent. I mean the type of software where you stop asking “I wish this could do this” and start asking “I wonder what else this can do.”

While I’ve played around with most types of virtualization out there (OpenVZ, Xen, V-Server, qemu…), I’ve really found a soft spot for VMware.

Don’t get me wrong, if I were going to host a heap of Linux web servers I would absolutely use Xen, but for a heterogeneous environment I haven’t used anything as easy as VMware’s products. Not that I judge a product solely by how easy it is to use (not by a long shot), but ease of use sure makes judging the other factors easier.

Regardless, this isn’t a post trumpeting VMware. I just realized tonight that some of the VMs I have running don’t need to be running except during certain hours of the day, or unless some condition is true. The first example is my backup mail server; I really don’t need it even powered on unless my main server is down. The second example is my Server 2003 instance, which has VI3 on it; I don’t need this running while I’m asleep. One of the most useful resources I’ve seen for the vmrun command is over at VirtualTopia, which is loaded with examples.

Turn off via time

On my “monitoring” instance, which is always up, I’ve decided to install the scripts that control my VMs. I’ve opted to use a soft shutdown.

192.168.0.10 = ESXi box

datastore1 = name of datastore that hosts VMs

#!/bin/sh
 
vmrun -t esx -h https://192.168.0.10/sdk -u root -p root_password stop "[datastore1] Server 2003 R2/Server 2003 R2.vmx" soft

I have that saved in a file called stop_2003.sh in /opt/vmware/bin; make sure it isn’t world readable. I also have a start_2003.sh:

#!/bin/sh
 
vmrun -t esx -h https://192.168.0.10/sdk -u root -p root_password start "[datastore1] Server 2003 R2/Server 2003 R2.vmx"
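
Since both scripts embed the root password, it’s worth locking the permissions down explicitly (this is the “not world readable” part mentioned above):

chmod 700 /opt/vmware/bin/stop_2003.sh /opt/vmware/bin/start_2003.sh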

Next, edit root’s crontab (crontab -e):

# m h  dom mon dow   command
0 8 * * * /opt/vmware/bin/start_2003.sh
0 23 * * * /opt/vmware/bin/stop_2003.sh

The conditional task is a tad trickier, but just a tad. Ping won’t do, since the mail server itself could go down, so install nmap. Create a script (I saved mine as detect_port.sh):

#!/bin/bash
# If port 25 on the mail server answers, just log a timestamp;
# otherwise power on the backup mail VM.

if nmap -p25 -PN -sT -oG - mail.kelvinism.com | grep 'Ports:.*/open/' >/dev/null ; then
    echo `date` >> mailserver.log
else
    /opt/vmware/bin/start_mail.sh
fi

And sticking with our theme, start_mail.sh:

#!/bin/sh

vmrun -t esx -h https://192.168.0.10/sdk -u root -p root_password start "[datastore1] Mail Server/Mail Server.vmx"

This of course changes root’s crontab; add the check so it runs regularly (the interval is up to you, every five minutes here):

# m h  dom mon dow   command
0 8 * * * /opt/vmware/bin/start_2003.sh
0 23 * * * /opt/vmware/bin/stop_2003.sh
*/5 * * * * /opt/vmware/bin/detect_port.sh

So, that’s it. detect_port.sh lacks any kind of error detection or redundancy - if a single packet/scan is dropped, the mail server will turn on. I’ll re-work this at some point, but it works for now.
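
If that single-dropped-scan problem bothers you, one minimal tweak is to require two failed probes in a row before powering the VM on. A sketch, reusing the same nmap check as above:

#!/bin/bash
# Only start the mail VM if port 25 fails to answer twice in a row,
# so a single dropped scan doesn't trigger a power-on.
check_mail() {
    nmap -p25 -PN -sT -oG - mail.kelvinism.com | grep -q 'Ports:.*/open/'
}

if ! check_mail; then
    sleep 60        # give it a minute and look again
    if ! check_mail; then
        /opt/vmware/bin/start_mail.sh
    fi
fi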

Update: VMware has also released a decent entry about using vmrun on their blog.

NetFlow into MySQL with flow-tools

I’ve been side-tracked on another little project, and keep coming back to NetFlow. For this project I’ll need to access NetFlow data with Django, but this is a bit tricky. First, I’m sort of lazy when it comes to my own projects; maybe not lazy, I just like taking the most direct route. The most up-to-date NetFlow collector I noticed was flow-tools, and there is even a switch to export the information into MySQL. Sweet! However, I wanted to insert the flows into MySQL automatically, or at least on a regular basis. I first started writing a Python script that would do the job, but after a few minutes noticed flow-capture has a rotate_program switch, and started investigating. Since I couldn’t find instructions anywhere on how to insert the data automatically, here’s what I came up with:

  1. Download flow-tools; make sure to configure with --with-mysql (and you’ll have to make sure you have the needed libraries).
  2. Create a new database; I called mine ‘netflow’.
  3. Create a table that can contain all the NetFlow fields; a sample is below. I added a “flow_id” field that I used as a primary key, but you don’t necessarily need this.
CREATE TABLE `flows` (
`FLOW_ID` int(32) NOT NULL AUTO_INCREMENT,
`UNIX_SECS` int(32) unsigned NOT NULL default '0',
`UNIX_NSECS` int(32) unsigned NOT NULL default '0',
`SYSUPTIME` int(20) NOT NULL,
`EXADDR` varchar(16) NOT NULL,
`DPKTS` int(32) unsigned NOT NULL default '0',
`DOCTETS` int(32) unsigned NOT NULL default '0',
`FIRST` int(32) unsigned NOT NULL default '0',
`LAST` int(32) unsigned NOT NULL default '0',
`ENGINE_TYPE` int(10) NOT NULL,
`ENGINE_ID` int(15) NOT NULL,
`SRCADDR` varchar(16) NOT NULL default '0',
`DSTADDR` varchar(16) NOT NULL default '0',
`NEXTHOP` varchar(16) NOT NULL default '0',
`INPUT` int(16) unsigned NOT NULL default '0',
`OUTPUT` int(16) unsigned NOT NULL default '0',
`SRCPORT` int(16) unsigned NOT NULL default '0',
`DSTPORT` int(16) unsigned NOT NULL default '0',
`PROT` int(8) unsigned NOT NULL default '0',
`TOS` int(2) NOT NULL,
`TCP_FLAGS` int(8) unsigned NOT NULL default '0',
`SRC_MASK` int(8) unsigned NOT NULL default '0',
`DST_MASK` int(8) unsigned NOT NULL default '0',
`SRC_AS` int(16) unsigned NOT NULL default '0',
`DST_AS` int(16) unsigned NOT NULL default '0',
PRIMARY KEY (FLOW_ID)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
  4. Set up your router so it sends NetFlow packets to your Linux box (see README/INSTALL).
  5. Create a “rotate program” that will actually insert the information into MySQL.
kelvin@monitor:/usr/bin$ cat flow-mysql-export 
#!/bin/bash

flow-export -f3 -u "username:password:localhost:3306:netflow:flows" < /flows/router/$1
  6. Create the /flows/router directory.
  7. Start flow-capture (9801 is the port NetFlow traffic is being directed to); all done.
flow-capture -w /flows/router -E5G 0/0/9801 -R /usr/bin/flow-mysql-export
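
Once flow-capture starts rotating files, it’s worth checking that rows are actually landing in MySQL. A quick sanity check from the shell (adjust the credentials to match the flow-export string above):

mysql -u username -ppassword netflow -e "SELECT COUNT(*) FROM flows;"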

Zenoss Default Password

I’ve evaluated Zenoss before, but forgot the default password, and searching for it didn’t come up with anything quickly. I tried everything under the sun: password, 1234, admin, God, Sex, but alas, grep to the rescue:

kelvin@monitor:/usr/local/zenoss/zenoss/etc$ grep admin *
hubpasswd:admin:zenoss

Update: it is listed on page 4 of the Admin PDF :)

Install ESX from a USB (no CDROM)

My little server doesn’t have a CD-ROM drive, but I didn’t want to actually run ESX from a USB stick (i.e. esx-on-a-stick). Here are my notes on configuring a flash disk to boot the ESX installer (so you can install it onto a local disk). For this demo, my USB stick is /dev/sdb.

  1. Install the syslinux utils to your computer (apt-get install syslinux mbr)
  2. Install the MBR
sudo install-mbr /dev/sdb
  3. Copy all the files from the ISO to your FAT32-formatted partition (see the sketch after this list)
  4. Install syslinux
sudo syslinux /dev/sdb1
  5. Move isolinux.cfg to syslinux.cfg, and try booting. If it doesn’t work, edit syslinux.cfg so it says something like:
default menu.c32
menu title ESXi Boot
timeout 100

label ESXi
menu label Boot VMware ESXi
kernel mboot.c32
append vmkernel.gz --- binmod.tgz --- environ.tgz --- cim.tgz
ipappend 2
  6. Unplug the USB stick, put it in your server, reboot, boot to USB-HDD (or select the USB disk), and install ESX to the local disk. You will likely be greeted with a prompt saying “MBR FA:”, where you need to press “A” and then “1”.
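
For reference, the copy step (3) looked roughly like this on my laptop; the device name, mount points and ISO filename are placeholders for whatever you have locally:

# format the first partition FAT32 (this wipes anything on it)
sudo mkfs.vfat -F 32 /dev/sdb1

# mount the ISO and the USB stick, then copy everything across
sudo mkdir -p /mnt/iso /mnt/usb
sudo mount -o loop VMware-VMvisor-InstallerCD.iso /mnt/iso
sudo mount /dev/sdb1 /mnt/usb
sudo cp -a /mnt/iso/. /mnt/usb/
sudo umount /mnt/iso /mnt/usb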

Using Django with SQL Server and IIS

As you can tell from reading some of the other pages, I like Linux and open source. But I also like to answer the question “what if…” This post is my [brief] run down of answering “what if I could run Django on Server 2003 with SQL Server and IIS.” Why, you may ask? To be honest with you, at this point, I don’t really know. One of the deciding factors was seeing that the django-mssql project maintains support for inspectdb, which means I could take a stock 2003 server running SQL Server, inspect the DB, and build a web app on top of it. The Django docs offer a lengthy howto for using Django with IIS and SQL Server, but the website for PyISAPIe seems to have been down for the last month or so. Without further delay, below are my notes on installing Django with SQL Server and IIS.

1a) Install python-2.x.x.msi from python.org

1b) Consider adding C:\Python25\ to your Path (right click My Computer, Advanced, Environment Variables. Enter in blahblahblah;C:\Python25\)

2) Download a 1.0+ branch of Django (and 7-Zip if you need it)

3a) Extract the contents of the Django archive. From inside Django-1.0, execute:

C:\Python25\python.exe setup.py install

3b) Consider adding C:\Python25\Scripts to your path.
4) Look in C:\Python25\Lib\site-packages – confirm there is a Django package.
5) Check out django-mssql (http://code.google.com/p/django-mssql/) and copy sqlserver_ado from inside the source to the site-packages directory
6) Download and install PyWin32 from sf.net
7) Start a test project in C:\Inetpub\ called ‘test’

c:\Python25\scripts\django-admin.py startproject test

8a) Create a database and a user using SQL Server Management Studio. (First, go to the Security node. Right-click Logins and add a new user. Next, right-click Databases, New Database. Enter the name, and change the owner to the user you just created.)

8b) Edit settings.py, set the engine to ‘sqlserver_ado’, and add your database credentials. Use the example below if your database comes up in Management Studio as COMPUTERNAME\SQLEXPRESS (i.e. you are using SQL Server Express).

import os
DATABASE_ENGINE = 'sqlserver_ado'           # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
DATABASE_NAME = 'crmtest'             # Or path to database file if using sqlite3.
DATABASE_USER = 'crmtest'             # Not used with sqlite3.
DATABASE_PASSWORD = 'password'         # Not used with sqlite3.
DATABASE_MSSQL_REGEX = True
DATABASE_HOST =  os.environ['COMPUTERNAME'] + r'\SQLEXPRESS' # I use SQLEXPRESS
DATABASE_PORT = ''             # Set to empty string for default. Not used with sqlite3.
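
At this point it’s worth confirming Django can actually talk to SQL Server. Running inspectdb (the django-mssql feature mentioned at the top) from the project directory is a quick way to do that, and against an existing database it will also spit out model code; this assumes the test project from step 7 lives in C:\Inetpub\test:

cd C:\Inetpub\test
C:\Python25\python.exe manage.py inspectdb
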
9) Install/download FLUP: http://www.saddi.com/software/flup/dist/flup-1.0.1.tar.gz
python setup.py install

10a) Download pyisapi-scgi from http://code.google.com/p/pyisapi-scgi/

10b) Extract the files to somewhere you can remember on your computer, like, c:\scgi
11) Double click pyisapi_scgi.py
12a) Follow the directions here: http://code.google.com/p/pyisapi-scgi/wiki/howtoen – I set a different port temporarily since I’m just testing this out.
12b) The last few parts might be better served with an image or two:

Using an app pool to get the right permissions

The SCGI configuration file

Properties of the web site
13) Start the SCGI process from the Django project directory:

python manage.py runfcgi method=threaded protocol=scgi port=3033 host=127.0.0.1
14) Test your Django page: http://192.168.12.34:8080


Backup OpenFiler to S3

Backing up your Openfiler box to S3

While I don’t think most people would expect to back up their entire NAS/SAN to Amazon’s S3, there might be a few very crucial things you need to back up.

I’ve seen an implementation using Ruby and s3sync – something that I do on my server – but I’m trying to migrate everything to Python. Although there are a lot of great tools out there for S3, many of them Python-based, I wanted to do one thing and do it well: keep one complete full backup available while using as little bandwidth as possible. Duplicity would work well in that regard, except I wanted the ability to browse the S3 store with any other tool.

I dug deeper into s3cmd, which I had noticed a long time ago, but I had failed to notice it has a sync option. I’ve tested it out, and it appears to work very, very well. Here’s how to use it with Openfiler.

First, download s3cmd. You’ll need to use Subversion, so I first checked it out to my laptop, then uploaded it via SSH to the Openfiler box. I put my s3cmd folder in /opt.

  
[root@files opt]# ls  
openfiler  s3cmd  
[root@files opt]#   

If you don’t have elementtree installed, now is a good time to install it.

  
conary update elementtree:python  

Next we need to configure s3cmd with our AWS credentials.

  
[root@files s3cmd]# ./s3cmd --configure  

In the end I didn’t configure encryption for my files (so just hit enter), but you may choose to do so. I have configured the transfer to use HTTPS, however.

  
Save settings? [y/N] y  
Configuration saved to '/root/.s3cfg'  

Cool. Now create a bucket on S3 for your NAS, e.g. blah2134accesskey.openfiler, using whatever method you choose (I typically use Cockpit). Now that you have a bucket, configure a really simple script to drop in cron:

  
#!/bin/bash  
  
/opt/s3cmd/s3cmd sync /mnt/openfiler/data/profiles/bunny s3://blah2134accesskey.openfiler/mnt/openfiler/data/profiles/bunny  
/opt/s3cmd/s3cmd sync /mnt/openfiler/data/profiles/kelvin-pc s3://blah2134accesskey.openfiler/mnt/openfiler/data/profiles/knicholson/kelvin-pc  
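
Save the script wherever suits you and add it to root’s crontab; the filename and schedule below are just placeholders (nightly at 2am):

# m h  dom mon dow   command
0 2 * * * /opt/s3cmd/s3-backup.sh >> /var/log/s3-backup.log 2>&1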

Sweet! I like this approach quite a bit: I get file-level access to anything on the NAS, you don’t have to actually install anything, and it ‘just works.’

Configure Timevault to Remote Server

Using TimeVault with a shared drive as a backend is actually quite easy, but it does require a few things to be set up specially. Note: this is gonna be a brief summary.

Install samba-tools, smbfs…

sudo apt-get install samba-tools smbfs

A lot of other stuff may be installed as well.

Create a script that mounts your samba share. You could also do this in fstab, but I tend to suspend my laptop when I come home, and I like clicking buttons.

#!/bin/bash

mount -t cifs //192.168.44.2/kelvin /mnt/backups -o netbiosname=KELVIN-PC,iocharset=utf8,credentials=/home/kelvin/Apps/.smb-details.txt

.smb-details.txt includes:

username=DOMAIN\\kelvin
password=mypassword
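
Since that file holds your password in the clear, lock its permissions down to your user (the same applies to the mount script if you ever embed credentials in it directly):

chmod 600 /home/kelvin/Apps/.smb-details.txt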

Finally, create a folder called ‘timevault’ or something inside your mapped share, then launch TimeVault and configure it to use the above-mentioned /mnt/backups/timevault folder. Configure TimeVault as normal.

PyGTK + py2exe for Windows

I’m writing down these quick notes so I can remember the steps for getting py2exe to work with GTK.

  • Download the GTK+ runtime
  • Download py2exe
  • Copy over your project into the windows box
  • Create a setup.py file (see below)
  • Run “c:\Python25\python.exe setup.py py2exe”
  • Copy over the lib, etc, and share folders from C:\Program Files\GTK2-Runtime into the dist folder
  • Run app!

setup.py:

from distutils.core import setup
import py2exe

setup(
    name = 'ploteq',
    description = 'Bunnys Plotting Tool',
    version = '1.0',

    windows = [
        {
        'script': 'ploteq.py',
        }
    ],

    options = {
        'py2exe': {
        'packages':'encodings',
        'includes': 'cairo, pango, pangocairo, atk, gobject', 
        }
    },

    data_files=[
        'ploteq.glade',
    ]
)