Thursday 29 December 2011

Etckeeper on Ubuntu

There's a package in the Ubuntu repositories called 'etckeeper', which is a brilliant little tool for tracking changes to your configuration stored under the /etc directory.

The way it works is that it puts the whole of the /etc directory under version control, using one of Bazaar, Mercurial, Git or Darcs. Then, whenever changes are made, either directly by modifying a file or indirectly by running something like "apt-get install", it makes a note of who made the changes and what the differences in the files were. This makes it easy to roll back configuration changes and to find out who made a change and exactly what was changed in the configuration file.

There's a good writeup on the basic usage on the Ubuntu Server Guide page.

Installing 'etckeeper' is as easy as:

sudo apt-get install etckeeper

It will use Bazaar as the VCS by default and will commit the first revision on installation.
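If you'd rather track /etc with a different VCS, the choice is made in /etc/etckeeper/etckeeper.conf before the first commit. A rough sketch of the relevant lines (the option names are from the Ubuntu package of that era, so double-check your version):

# /etc/etckeeper/etckeeper.conf
VCS="bzr"                    # change to "git", "hg" or "darcs" if preferred
AVOID_DAILY_AUTOCOMMITS=0    # set to 1 to disable the daily auto-commit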

Then, let's say we want to install Apache. We do this using "apt-get" as usual, but during the install there's an extra section dealing with the commit made by etckeeper:


$ sudo apt-get install apache2
[sudo] password for srdan:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2
Suggested packages:
apache2-doc apache2-suexec apache2-suexec-custom
The following NEW packages will be installed:
apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2
0 upgraded, 10 newly installed, 0 to remove and 3 not upgraded.
Need to get 3,248 kB of archives.
After this operation, 11.7 MB of additional disk space will be used.
Do you want to continue [Y/n]? Y
Get:1 http://archive.ubuntu.com/ubuntu/ oneiric/main libcap2 amd64 1:2.21-2 [12.2 kB]
Get:2 http://archive.ubuntu.com/ubuntu/ oneiric/main libapr1 amd64 1.4.5-1 [88.8 kB]
...
...
...
ldconfig deferred processing now taking place
Committing to: /etc/
modified .etckeeper
added apache2
added apache2/apache2.conf
added apache2/conf.d
added apache2/envvars
added apache2/httpd.conf
added apache2/magic
added apache2/mods-available
added apache2/mods-enabled
added apache2/ports.conf
...
added rc6.d/K09apache2
added ufw/applications.d/apache2.2-common
Committed revision 6.


So, we can see that as well as installing Apache, apt-get has also committed the changes to the /etc directory. If you want to see what those changes are, you can do so using the "bzr log" command as below:


$ sudo bzr log /etc/apache2/httpd.conf
------------------------------------------------------------
revno: 6
committer: srdan
branch nick: etckeeper1 /etc repository
timestamp: Thu 2011-12-29 12:05:11 +1300
message:
committing changes in /etc after apt run

Package changes:
+apache2 2.2.20-1ubuntu1.1
+apache2-mpm-worker 2.2.20-1ubuntu1.1
+apache2-utils 2.2.20-1ubuntu1.1
+apache2.2-bin 2.2.20-1ubuntu1.1
+apache2.2-common 2.2.20-1ubuntu1.1
+libapr1 1.4.5-1
+libaprutil1 1.3.12+dfsg-2
+libaprutil1-dbd-sqlite3 1.3.12+dfsg-2
+libaprutil1-ldap 1.3.12+dfsg-2
+libcap2 1:2.21-2


From the above output we can see who made the last change to this file, that it was made as part of an "apt" run, and which other packages were installed as part of that run.

When you change a file under /etc directly, the change isn't committed straight away; instead, any uncommitted changes are committed once a day by a cron job. To see whether there are any files which have been modified but not committed, use the "bzr status" command:


$ sudo bzr status /etc
modified:
apache2/sites-available/default


And to commit the change manually, use the "etckeeper commit" command:


$ sudo etckeeper commit "Changed the email address of the default webmaster"
Committing to: /etc/
modified apache2/sites-available/default
Committed revision 7.


Hopefully this has been a good introduction to etckeeper and its use. While it won't stop people breaking things with bad configuration, it can at least help preserve a working config and make it quick to determine what changes were made, by whom and for what reason.

Sunday 18 December 2011

belongsTo Relationships in Grails and DB schema

There are three types of belongsTo relationships in Grails (only two, really, as far as the DB schema is concerned):
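The original snippets aren't shown here, but the three declaration forms would look roughly like this (the class names CustomerOrder and Customer are taken from the tables discussed below; treat this as a sketch rather than the original code):

// 1. Map syntax - gives CustomerOrder a "customer" back-reference property
class CustomerOrder {
    static belongsTo = [customer: Customer]
}

// 2. Class reference syntax - no back-reference property
class CustomerOrder {
    static belongsTo = Customer
}

// 3. List syntax - also no back-reference property
class CustomerOrder {
    static belongsTo = [Customer]
}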




The first one, "belongsTo = [customer: Customer]", defines a one-to-one relationship between CustomerOrder and Customer. It stores the reference in the "CUSTOMER_ORDER" table in a "CUSTOMER_ID" column:


The second and third types of belongsTo relationship produce the same kind of schema: neither the "CUSTOMER" nor the "CUSTOMER_ORDER" table contains a reference to the other; instead, the relationship information is kept in a "CUSTOMER_CUSTOMER_ORDER" join table.


How to view HSQLDB grails contents

If you're working with Grails it's sometimes very useful to be able to view the layout and contents of the database that you're working with. If you're using the default HSQLDB engine that comes with Grails, one of the ways to achieve this is to launch the "Database Manager" application from your BootStrap file, giving you a GUI which you can use to explore the database.


class BootStrap {

    def init = { servletContext ->
        org.hsqldb.util.DatabaseManager.main()
    }

    def destroy = {
    }
}
This will result in the Database Manager GUI being launched:




You just need to change the last part of the URL to "devDB" (or whatever you've chosen to name your development database) and hit the "Ok" button. You should be presented with the Database Manager's default screen, showing all of the tables in a list on the left.

Clicking the + next to a table lets you see its columns/fields and so forth. Entering SQL commands in the text area and hitting the 'Execute' button lets you see the contents of your database in real time while the application is running.

Note that closing the Database Manager window will close the application.

Additionally, the Database Manager program is available from the "universe" Debian and Ubuntu repositories under the package name "hsqldb-utils". However, I was unable to connect to the in-memory HSQLDB instance from a Database Manager launched outside of Grails. I'm assuming this is because, being launched as a separate process, it gets its own memory space. It should be possible to connect to the HSQLDB instance when using a file-based database.
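For example, switching the development dataSource in DataSource.groovy to a file-based HSQLDB (a sketch based on the pre-2.0 defaults) would look something like:

dataSource {
    // url = "jdbc:hsqldb:mem:devDB"                 // default in-memory database
    url = "jdbc:hsqldb:file:devDB;shutdown=true"      // file-based database
}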

NOTE: The above instructions apply to pre-2.0 Grails versions. Since Grails 2.0, the way to see the database schema has been to use the web console, which is available in development mode at: http://localhost:8080/[ApplicationName]/dbconsole. You might need to change the JDBC URL, but you shouldn't have to enter a password.

Tuesday 29 November 2011

How to see files installed by a package

If you've ever downloaded a package on Debian (or any of its derivatives) and want to see which files were installed, it's easily accomplished using the 'dpkg-query' command:

$ dpkg-query -L vsftpd
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/vsftpd
/usr/share/doc/vsftpd/FAQ.gz
/usr/share/doc/vsftpd/examples
...
...
/usr/share/doc/vsftpd/changelog.Debian.gz
/usr/share/doc/vsftpd/REWARD
/usr/share/doc/vsftpd/SPEED
/usr/share/man
/usr/share/man/man8
/usr/share/man/man8/vsftpd.8.gz
/usr/share/man/man5
/usr/share/man/man5/vsftpd.conf.5.gz
/usr/sbin
/usr/sbin/vsftpd
/etc
/etc/init.d
/etc/pam.d
/etc/pam.d/vsftpd
/etc/init
/etc/init/vsftpd.conf
/etc/vsftpd.conf
/etc/logrotate.d
/etc/logrotate.d/vsftpd
/etc/ftpusers
/etc/init.d/vsftpd


The above example is for the 'vsftpd' package, but this equally applies to all of the other packages installed using dpkg/apt.

In case you're on a system which uses yum (Fedora, RedHat, CentOS etc...) you can use 'rpm -ql [package name]':


# rpm -ql yum-cron
/etc/cron.daily/0yum.cron
/etc/rc.d/init.d/yum-cron
/etc/sysconfig/yum-cron
/etc/yum/yum-daily.yum
/etc/yum/yum-weekly.yum
/usr/share/doc/yum-cron-3.2.29
/usr/share/doc/yum-cron-3.2.29/COPYING

Thursday 27 October 2011

Amazon EC2 I/O benchmarks

So, after my previous post where I used bonnie++ to benchmark my desktop machine, I decided to benchmark the I/O performance of the virtual server (or as Amazon like to call it, an "instance") with the same tool.

The results are as follows:
Note that I've taken a screenshot of the results as opposed to pasting them in this time, as the last time I couldn't get the columns to line up.

So, what can we tell from these numbers? How do they compare with my desktop SATA hard drive? Here are the same numbers from the SATA drive:


Well, we can see that when it comes to the Sequential Output numbers, the local SATA wins in all three of the numbers and the CPU utilization is exactly the same on both of the systems. It's the same story for the Sequential Input numbers, although there doesn't seem to be as much of a difference as with the Sequential Output numbers.

In the Random Seeks, the AWS machine blows away the local SATA drive, both in terms of Random seeks per second and the latency numbers. The only place where it loses out is in consuming much more CPU than the local system.

In the Sequential Create category, AWS also comes out on top, although, for some reason I wasn't able to get Read and Delete numbers for the local disk.

The Random Create category is pretty much even, with neither system really standing out.

So, there you have it, the AWS infrastructure isn't as bad as I had originally thought. Also, one thing to keep in mind is that the type of EC2 instance tested was just the basic "free" instance which Amazon provides. I would expect that better provisioned instances would perform even better when it comes to the same test.

Wednesday 19 October 2011

LAMP stack with Ubuntu JeOS

JeOS stands for "Just enough Operating System" and refers to an operating system with only the bare minimum of software installed to run the required application. Ubuntu's server edition makes it easy to install a minimal JeOS system.

To install a minimal LAMP (Linux, Apache, MySQL, PHP) stack on top of an Ubuntu JeOS instance, simply boot off of an Ubuntu server CD and then when it comes to the install screen, highlight the "Install Ubuntu Server" and then press F4 to bring up the "Modes" menu. From this menu select either "Install a minimal system" or "Install a minimal virtual machine", depending on whether you are installing a VM or a physical server.


From there, carry on with the rest of the install process and when it comes time for the software selection, check the "LAMP Stack" option. During the install process you'll be asked for a MySQL root password.

After this is done, you'll end up with an OS that takes up only about 650MB, compared to the regular Ubuntu server install, which takes up 2GB. The main advantage of the smaller size is that the server or VM doesn't run any unnecessary services or expend extra resources keeping superfluous software updated. In addition, if it is a VM, the smaller size makes it much easier to transport and turns it into more of an "appliance".

Wednesday 27 July 2011

Eclipse - How to show line numbers

To get the editor pane to show line numbers, go to 'Window' -> 'Preferences' -> 'General' -> 'Text Editors' and tick the checkbox labelled 'Show line numbers'.

In graphical form:

Saturday 23 July 2011

PHP Development with NetBeans 7.0 - Debugging

In my previous post I talked about getting a PHP development environment set up using NetBeans and a locally installed Apache server on Ubuntu. In this post, we're going to be creating a slightly more complicated project to showcase the debugging abilities of NetBeans, which integrates very nicely with Xdebug.

Xdebug is a PHP extension which enables you to debug your code, much as you would with something like Visual Studio and C#. In order to install and enable Xdebug on Ubuntu, you just need to run the command:

sudo apt-get install php5-xdebug

After this package has been installed, you have to enable remote debugging on Xdebug (it is disabled by default). Add the line:

xdebug.remote_enable=on

to the /etc/php5/conf.d/xdebug.ini file. In order for the changes to take effect, you will have to restart Apache:

/etc/init.d/apache2 restart

If you ever want to verify what the Xdebug settings are set to, you can see them on the default phpinfo() page.
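For reference, a fuller /etc/php5/conf.d/xdebug.ini might look something like the following; the extension path and the remote host/port shown here are the typical defaults for the Ubuntu php5-xdebug package of that time, so treat them as assumptions and check your own file:

zend_extension=/usr/lib/php5/20090626/xdebug.so   ; path depends on your PHP version
xdebug.remote_enable=on
xdebug.remote_host=localhost                      ; default
xdebug.remote_port=9000                           ; default, what NetBeans expects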

Now that we've got Xdebug running, go ahead and create a new PHP project in NetBeans and name it 'PHPIncrement_Add':


You can leave the 'Run Configuration' screen settings set to the defaults:

In the template file, remove the php section and edit the file to include a simple form as below:
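The original listing isn't shown here, but the form would be something along these lines (the field names number1 and number2 are assumptions based on the variables used later in the debugging session):

<!DOCTYPE html>
<html>
    <head><title>Increment and Add</title></head>
    <body>
        <form action="results.php" method="post">
            First number: <input type="text" name="number1" /><br />
            Second number: <input type="text" name="number2" /><br />
            <input type="submit" value="Submit" />
        </form>
    </body>
</html>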

Then create a new PHP web page by right clicking on 'Source Files' -> 'New' -> 'PHP Web Page' and name the page results.php.

When results.php opens in the editor window, modify it to look like the following:
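Again, the original listing is missing; a minimal version consistent with the description below (an incrementAdd() function that increments the two numbers and adds them) might be:

<?php
// Increment both numbers, then return their sum.
function incrementAdd($number1, $number2) {
    $number1++;
    $number2++;
    $result = $number1 + $number2;
    return $result;
}

$number1 = (int) $_POST['number1'];
$number2 = (int) $_POST['number2'];
echo "The result is: " . incrementAdd($number1, $number2);
?>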

Ok, so now we have a simple web application, which takes two integers, increments them and then displays the result. We can go ahead and test this out by hitting the 'Run Project' button (or pressing F6) and we should see the following in our browser:

And after hitting the 'Submit' button, we can see the result:

So now we have a very simple, working PHP application. Now lets see how we can debug our program. We've already installed and configured Xdebug, we just need to modify a few options in NetBeans. In NetBeans, go to 'Tools' -> 'Options' -> 'PHP' -> 'Debugging' and check the 'Watches and Balloon Evaluation' checkbox:

After you check the box, you might get a warning saying "Please note that Xdebug is known to be unstable if this option is enabled". I've never had Xdebug crash on me with this option enabled so far. Also note that I've left the 'Stop at First Line' option unchecked, this is just my personal preference.

Next we can go ahead and set some breakpoints by clicking in the left hand margin of the editor window, where the line numbers are shown:

To start our debugging session, we can click on the 'Debug Project' button, which is next to the 'Run Project' button, or alternatively press Ctrl+F5.

At the start we are presented with the index.php page, so just go ahead and enter some numbers and hit the 'Submit' button. Once this is done, you will automatically switch back to NetBeans, with the debugging session started:



If you select the 'Variables' tab on the bottom pane and press the 'Step Into' (F7) button, you can see the values of $number1 and $number2:

Press the 'Continue' (F5) button and the code should keep executing until it gets to the breakpoint in the incrementAdd function. The 'Variables' pane doesn't show us the value of $result; however, we can fix that by right-clicking on $result in the editor pane, selecting 'New Watch...' and entering $result as the 'Watch Expression':

Note that the value of $result will be null at our breakpoint, but after pressing the 'Step Into' button, its value is set.

Hit the 'Continue' button and you will see the results.php page displaying the results of the calculation.

So, in this post we've successfully installed and configured Xdebug and got it working with our NetBeans IDE, as well as creating a simple project and going through a basic debugging session. Please keep in mind that this is a very simple example and that a debugger would probably not be necessary for such a basic project. The debugger is especially useful when we have an obscure bug to track down, when we want to see the flow of control through a complex piece of code, or when we want to verify what values are being passed to the database.

PHP Development with NetBeans 7.0

In this post, I'm going to be going through creating a simple PHP project using the NetBeans 7.0 IDE and deploying it to a development Apache HTTP server, running on the same machine. All of the instructions below were tested on Ubuntu 10.04 and assume that you have a LAMP stack (although we don't need the MySQL component for now) and NetBeans installed (I've used the All bundle, which comes with PHP) and have the userdir module for Apache turned on (see my previous post).

In terms of usability, I find NetBeans to be one of the best open source PHP IDEs available, with excellent support for things like autocomplete and debugging, which makes development a whole lot easier.

To start off with, open up NetBeans and create a new PHP Project:

I've set the project name to "PHPHelloWorld" and I found that NetBeans detected my ~/public_html folder automatically:

NetBeans also detected my local web server and URL settings:

For our simple HelloWorld application, we don't require any PHP frameworks:

After clicking 'Finish', we end up with the default PHP template:

Now I added some code, including a function to the index.php file. I've included a function in the code so that when you type it out you can see some of the NetBeans autocomplete features:

Finally, click on the 'Run Project' button (or hit F6) and watch as your program deploys:

Apache HTTP Server userdir module

As a part of setting up a development environment for creating web applications/sites to be deployed to Apache HTTP Server, one of the things I would highly recommend is making use of the userdir module. This module allows a user to create their own directory (under /home/[user]/public_html) and have that directory automatically be made accessible by Apache at http://localhost/~[username]/ and thus skips a lot of the headaches that are caused by permission problems.

To set up this module, first you need to create this directory:

mkdir ~/public_html

The next step is to enable the module:

sudo a2enmod userdir

Note that if you want to change the name of the directory or any other settings for this folder, you can do so by editing the /etc/apache2/mods-available/userdir.conf file.

Then, finally we just need to restart apache for the module to be loaded:

sudo /etc/init.d/apache2 restart

If it all went well, you should now be able to open your browser and browse to http://localhost/~[username]/ and see the contents of your public_html directory.

Another very important thing to mention is that by default, PHP processing is disabled on this directory. If you need to turn on PHP processing, you need to modify the /etc/apache2/mods-available/php5.conf file:


<IfModule mod_php5.c>
    <FilesMatch "\.ph(p3?|tml)$">
        SetHandler application/x-httpd-php
    </FilesMatch>
    <FilesMatch "\.phps$">
        SetHandler application/x-httpd-php-source
    </FilesMatch>
    # To re-enable php in user directories comment the following lines
    # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it
    # prevents .htaccess files from disabling it.
    <IfModule mod_userdir.c>
        <Directory /home/*/public_html>
            php_admin_value engine Off
        </Directory>
    </IfModule>
</IfModule>


Like it says in the commented out sections of this file, you just need to comment out the mod_userdir.c section to enable PHP on the ~/public_html directory.
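After commenting that section out, the end of the file should look like this (remember to restart Apache again afterwards for the change to take effect):

    # To re-enable php in user directories comment the following lines
    # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it
    # prevents .htaccess files from disabling it.
    #<IfModule mod_userdir.c>
    #    <Directory /home/*/public_html>
    #        php_admin_value engine Off
    #    </Directory>
    #</IfModule>
</IfModule>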

Apache HTTP Server VirtualHost directive

One of the things which always catches me out is the use of the VirtualHost directive in the configuration files for the Apache HTTP server. When you need to set up virtual hosting (i.e. more than one host on the same IP, differentiated by hostname), you need to use this directive. However, don't assume that you can do something like:

<VirtualHost www.example.com>...</VirtualHost>

and that this will result in having a server defined for the 'www.example.com' hostname. The VirtualHost directive is used only to define the IP that this "virtual server" should listen on. It does not define which hostname it should reply to. While the above configuration is legal, the actual behaviour of Apache's HTTP Server is to lookup the hostname, convert it to an IP and use that in the directive. Functionally, it is no different than doing:

<VirtualHost [IP Address of www.example.com host]>...</VirtualHost>

Using the hostname in this part of the configuration can lead to unexpected behaviour. In my case, I added this directive with a hostname, expecting that the configuration section would apply only to that particular hostname, when in fact it matched all hostnames served from that IP.

The correct way to define a hostname based virtual host is to make use of the ServerName and additionally ServerAlias directives inside the VirtualHost stanza.
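A minimal name-based setup might therefore look something like this (the hostnames and document roots are placeholders):

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.com
    ServerAlias example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example.org
    DocumentRoot /var/www/example.org
</VirtualHost>

With this in place, Apache picks the virtual host by comparing the request's Host header against ServerName and ServerAlias, falling back to the first defined VirtualHost when nothing matches.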

Tuesday 21 June 2011

I/O Benchmarks with Bonnie++

So, after my last article using the Palimpsest application, I decided to give a tool called Bonnie++ a try. From the description on the author's website:

"Bonnie++ is a benchmark suite that is aimed at performing a number of simple tests of hard drive and file system performance. Then you can decide which test is important and decide how to compare different systems after running it. I have no plans to ever have it produce a single number, because I don't think that a single number can be useful when comparing such things."

So, after running the default program as provided by Ubuntu repositories, we get the following output:

$ bonnie++
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
srdan-desktop    8G   505  98 83083  10 36609   4  2514  86 96195   5 155.8   3
Latency             16674us    1003ms    1830ms   41497us     269ms    1014ms
Version  1.96       ------Sequential Create------ --------Random Create--------
srdan-desktop       -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 11634  22 +++++ +++ 17458  25 16959  30 +++++ +++ 17864  23
Latency             55341us     601us     638us     753us      43us     123us
1.96,1.96,srdan-desktop,1,1308644057,8G,,505,98,83083,10,36609,4,2514,86,96195,5,155.8,3,16,,,,,11634,22,+++++,+++,17458,25,16959,30,+++++,+++,17864,23,16674us,1003ms,1830ms,41497us,269ms,1014ms,55341us,601us,638us,753us,43us,123us


So, overall we have a lot more detail, although not as many pretty graphs as we had with Palimpsest.

Looking at the output we can see that the tool tests five main categories:

* Sequential Output (Per Chr, Block, Rewrite)
* Sequential Input (Per Chr, Block)
* Random Seeks
* Sequential Create (Create, Read, Delete)
* Random Create (Create, Read, Delete)

According to the documentation the first three categories simulate the same kind of I/O load that a database would normally put onto a disk, while the last two categories simulate the kind of load that you would expect to see in an NNTP and/or web caching server. From what I could tell, the first three tests all create a single file to test on and the last two create a host of small files.

One thing to note is that this is a test of both the disk and the filesystem (and kernel), unlike the Palimpsest benchmark, which only tests the disk. Another thing to note is that as well as the I/O throughput we also get the %CP figure, i.e. how taxing each operation is on the CPU. This might be an important factor when trying to determine what kind of CPU you need for your web caching server.
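If you want more control over what gets tested, bonnie++ takes a few useful options; for example (the target directory is an assumption, and -u is only needed when running as root):

bonnie++ -d /mnt/test -s 8192 -n 16 -u srdan -m srdan-desktop

Here -d is the directory to test in, -s the amount of data in MB, -n the number of small files (in multiples of 1024), -u the user to run as, and -m the machine name to print in the results.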

Overall, bonnie++ is a very good tool for looking at the holistic performance of any storage system.

Saturday 11 June 2011

I/O Benchmarks with GNOME Disk Utility

Recently I decided to tidy up my home office. In the process I found a lot of old computer hardware which has built up over the years. One of the more interesting finds was several old (ATA) hard drives. This got me thinking about how I could make use of these drives, and the first thing that popped into mind was using them for benchmarking purposes, i.e. to get familiar with the tools used to benchmark I/O. Being an Ubuntu user, I noticed that there's a really nice utility installed by default (go to 'System' -> 'Administration' -> 'Disk Utility'). This is a program called Palimpsest/GNOME Disk Utility and, as well as being able to benchmark hard drives, it lets you use the disks' SMART capabilities to get an idea of the number of bad sectors and other warning signs that a drive may be failing.

So, I shut down my computer, plugged in the drives (no hot plugging with ATA, unfortunately), booted up and ran the 'Read-Only' benchmark. This gives some basic numbers showing the maximum, minimum and average read rates. When I went to do the 'Read/Write' benchmark, I found that you have to completely format the disk in order to benchmark it. This involved not only deleting all of the partitions, but also deleting the MBR. Once this was done, I was able to run the Read/Write tests on both of the 40GB drives. As well as the max, min and avg rates, you also get a pretty graph:

The red line corresponds to writes, with the blue line corresponding to reads. I'm not sure what all of the green points and lines correspond to. So, how did the drives perform? One of them (the one pictured above) had quite a bit of variance as you can see in the graph above. Also, the read/write rates crossed at about 40% of the way through the test, which I don't quite understand. As a comparison, the other 40GB drive I tested was amazingly stable, with the minimum read rate only 1.2 MB/s below the maximum and the write max/min only differing by 0.1 MB/s. The output of this drive can be seen below:


So, now the only question left is "How will they perform in a RAID array?". After putting the two drives into a RAID-0 array (note that you need to install the mdadm package first), we see the following results:


The results are quite interesting in that they show that the avg read rate for the array was actually worse (21.5 MB/s) than either of the two individual drives (24.6 MB/s and 22.3 MB/s). The avg write rate was noticeably higher at 26 MB/s, compared to the individual results of 22.1 and 22.9 MB/s.

So what can we conclude from this? We probably shouldn't read too much into the results, as both of the hard drives are old and from different manufacturers. Having said that, I think it does show that write-intensive applications might benefit from RAID0 more than read-intensive ones.
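For anyone wanting to repeat the experiment, the RAID-0 array itself was created with mdadm along these lines (a sketch; the device names are assumptions):

sudo apt-get install mdadm
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc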

Tuesday 3 May 2011

Installing APR based Tomcat Native library

After installing Tomcat I kept seeing the following message in the Catalina logs:

INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-6-openjdk/jre/lib/i386/client:/usr/lib/jvm/java-6-openjdk/jre/lib/i386:/usr/lib/jvm/java-6-openjdk/jre/../lib/i386:/usr/java/packages/lib/i386:/usr/lib/jni:/lib:/usr/lib

This got my curiosity up and after learning about APR and the benefits it supposedly offers, I decided I would try it out and see if I could notice any difference in performance. Luckily, there's a package in Ubuntu for the Tomcat native libraries, making it easy to install:

sudo apt-get install libtcnative-1

This should install the library as well as all of the packages that it depends on. I restarted Tomcat after installing this package, but it still didn't detect the library, and at the time I couldn't work out why. However, the next time I started my laptop I didn't get the warning in the logs; instead I saw:

03/05/2011 3:39:15 PM org.apache.catalina.core.AprLifecycleListener init
INFO: Loaded APR based Apache Tomcat Native library 1.1.19.
03/05/2011 3:39:15 PM org.apache.catalina.core.AprLifecycleListener init
INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].


I think that some part of the Tomcat/JVM stack scans for the libraries when the computer starts and caches the result from then on, which is why Tomcat didn't see the native library files straight away after the install.

Now that Tomcat had detected and was using the libraries, I loaded up a few test projects as well as looking at the example projects included with the Tomcat install, but couldn't notice any difference in performance. Hopefully in the next few weeks I can go ahead and test out Tomcat performance using JMeter, ab or something similar and compare the results with APR and without.

NOTE: Just as an update to this post, the process for installing Tomcat Native on CentOS is a lot more complicated as there is no 'libtcnative' package for CentOS and the best guide I've found for it so far involves compiling 'libtcnative' from source.

Tuesday 26 April 2011

VirtualBox - Increase screen resolution of guest

In order to get a decent resolution in your VirtualBox (Ubuntu) guest, you need to install the virtualbox guest X11 package:

sudo apt-get install virtualbox-ose-guest-x11

Grails part 3 - database and deployment

In this, part 3 of the series on Grails, I am going to talk about how to configure our web application to use the MySQL database as our default permanent store and then how to create a war file and deploy to Tomcat.

Firstly, we're going to install MySQL. Luckily the state of Linux package management has advanced to the point where this is as simple as:

sudo apt-get install mysql-server

It will ask what you want to set the root user password to and then install the database. Once the install has finished, the database server should be running. To check this, you can run the ps aux | grep mysql command. You should see a line in the output like:

mysql 1004 0.0 0.6 178908 27960 ? Ssl 21:50 0:00 /usr/sbin/mysqld

Then we need to log into MySQL (mysql -u root -p) and create the database:

CREATE DATABASE Bookstore_dev;

Now that we have our database ready to go, we need to get the MySQL driver so that our grails application can connect to the database. Go to http://www.mysql.com/downloads/ and download the Connector/J package from the MySQL connectors section. Uncompress the package and copy and paste the mysql-connector-java-5.1.15-bin.jar file into the BookStore/lib folder.

Next we'll need to modify our BookStore/grails-app/conf/DataSource.groovy file to specify that we want grails to use the MySQL database instead of the typical HSQLDB that is used. The DataSource.groovy file has three different sections "development", "test" and "production", corresponding to the three different stages of the development process and the three different environments that you can work with in grails. You can define a different database for each stage/environment. There is also a default "dataSource" section at the top, which is used unless the values are overwritten in each of the different sections. To start with, we're going to specify that the development environment should use the MySQL database. We can do this by modifying the development section to look like:

development {
    dataSource {
        dbCreate = "create"
        url = "jdbc:mysql://localhost/Bookstore_dev"
        driverClassName = "com.mysql.jdbc.Driver"
        username = "root"
        password = "password"
    }
}


Of course, you'll need to change the username and password to whatever you've set them to. I'll also point out that it's not good practice to use the root user to access our database, because if our application gets hacked, our whole database would be compromised. It would be better from a security standpoint to create a new user with privileges limited to the "Bookstore_dev" database. However, since this is just our development database and it's only available on our local network for the time being, it should be OK.
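If you do want to follow that advice, creating such a user looks something like this (the username and password are placeholders):

CREATE USER 'bookstore'@'localhost' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON Bookstore_dev.* TO 'bookstore'@'localhost';
FLUSH PRIVILEGES;

You would then use those credentials in DataSource.groovy instead of the root account.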

If we now start up our application using the grails run-app command and, once it's started, browse to http://localhost:8080/BookStore, we should be able to see our application. We can then add some dummy data to check that it's getting saved to the database. I've gone ahead and added the authors "Stephen King" and "Robert A. Heinlein" and the books "Pet Cemetery", "Stranger in a Strange Land" and "The Moon is a Harsh Mistress" (associating them with their respective authors). If you log into the database and have a look at its contents, you can see that the values have been added:

mysql> USE Bookstore_dev;
mysql> SHOW TABLES;
+-------------------------+
| Tables_in_Bookstore_dev |
+-------------------------+
| author                  |
| book                    |
+-------------------------+
2 rows in set (0.01 sec)

mysql> SELECT * FROM author;
+----+---------+--------------------+
| id | version | name               |
+----+---------+--------------------+
|  1 |       0 | Stephen King       |
|  2 |       0 | Robert A. Heinlein |
+----+---------+--------------------+
2 rows in set (0.01 sec)

mysql> SELECT * FROM book;
+----+---------+-----------+----------------------------+
| id | version | author_id | title                      |
+----+---------+-----------+----------------------------+
|  1 |       0 |         1 | Pet Cemetery               |
|  3 |       0 |         2 | Starship Troopers          |
|  4 |       0 |         2 | Stranger in a Strange Land |
+----+---------+-----------+----------------------------+
3 rows in set (0.00 sec)

So, we can see that the data is being saved to the database and that the association between the Author and Book object is represented with the 'author_id' field in the Book table. It's also worth noting the "version" field which is updated by grails every time any of the fields in the row are modified.

So now that we've got an application which uses a database it's time to deploy it to our *production* server. We're going to modify our DataSource.groovy file to ensure that the production environment (the one we're going to deploy) also uses the MySQL database:

production {
    dataSource {
        dbCreate = "update"
        url = "jdbc:mysql://localhost/Bookstore_dev"
        driverClassName = "com.mysql.jdbc.Driver"
        username = "root"
        password = "password"
    }
}


Make sure you've executed the run-app command with dbCreate set to "create" before deploying this production code as dbCreate = "update" expects the tables to already be created in the database.

Now we can create the war file which we're going to upload to Tomcat through the manager web-app by running the command grails prod war. This generates a production environment war file. The production environment is optimized for code efficiency, while the development and testing environments are optimized for developer productivity. Once the command finishes executing we should have our war file under BookStore/target/BookStore-0.1.war. The 0.1 is the application version number and can be changed in the BookStore/application.properties file.

Now we can log into our Tomcat manager application (found at http://localhost:8080/manager/html if you've setup Tomcat according to the previous post), go to the Deploy section, select our WAR file and hit 'Deploy'. Once the page refreshes we should see our BookStore app in the list of applications and the column "Running" should be set to "true". We can now click on the link in the Path column to go to our web-app and start using it.

As an alternative way to deploy your application, you can also make use of the tomcat grails plugin. In order to do this you need to add a few variables to the BookStore/grails-app/conf/Config.groovy file, namely:

tomcat.deploy.username = "[tomcat manager user]"
tomcat.deploy.password = "[tomcat manager password]"
tomcat.deploy.url = "http://localhost:8080/manager"


Deploying the application is now achieved from the command line with:

grails prod tomcat deploy

This essentially does the same thing that we did manually, i.e. it builds a war file and deploys it, but it might be preferable as it is only one step instead of two.

So there you have it, we've taken our simple web application, configured it to use a permanent datastore and deployed it to our Tomcat webserver.

NOTE: If you need to update your deployed application, the way to do it is to first "undeploy" the application from Tomcat, which can be done with "grails prod tomcat undeploy"

Wednesday 20 April 2011

Tomcat6 on Ubuntu

In this post I'm going to talk about getting Tomcat version 6 up and running on Ubuntu. I'm going to be using Ubuntu 10.10. Installing Tomcat is quite simple, due to it being in the repositories. Simply run:

sudo apt-get install tomcat6 tomcat6-admin tomcat6-examples tomcat6-docs

And that's it :) You now have a working install of Tomcat 6 on your machine. To see it working, point your browser to http://localhost:8080/. You'll see the default "It works !" page, as well as a message about where everything is and links to the docs, examples, manager and host-manager applications.

Now, in order to get access to the manager and host-manager applications you're going to have to add a user. To do this, modify the file at /etc/tomcat6/tomcat-users.xml and add the following section:

<role rolename="admin"/>
<role rolename="manager"/>
<user username="srdan" password="password" roles="admin,manager"/>


After editing the file and restarting the Tomcat server we should be able to log into the manager application by pointing our browser at http://localhost:8080/manager/html and we should be able to log into the host-manager by going to http://localhost:8080/host-manager/html. The manager will allow us to deploy, undeploy, start and stop our applications and the host-manager allows us to declare, remove, start and stop our virtual hosts.

NOTE: From Tomcat 6.0.3 onwards, there is no "manager" role that is recognised by the manager application, rather it has been split up into four roles for security purposes:

manager-gui - allows access to the HTML GUI and the status pages
manager-script - allows access to the text interface and the status pages
manager-jmx - allows access to the JMX proxy and the status pages
manager-status - allows access to the status pages only
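So on those newer releases the user entry from above would instead look something like this (username and password are placeholders; manager-gui is enough for the HTML interface):

<role rolename="manager-gui"/>
<user username="srdan" password="password" roles="manager-gui"/>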

To test out the deployment, we're going to create a very simple web application. First, create a new directory and change into it:

mkdir HelloWorld
cd HelloWorld


Next we're going to create a servlet with the following code:

import java.io.*;
import java.util.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class HelloWorld extends HttpServlet {

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();

        out.println("<html>");
        out.println("<head>");
        out.println("<title>Hello World!</title>");
        out.println("</head>");
        out.println("<body>");
        out.println("<h1>Hello World!</h1>");
        out.println("</body>");
        out.println("</html>");
    }
}

And put it into a file called HelloWorld.java. After this we need to create a WEB-INF and WEB-INF/classes directory. Once we've done this, we compile the above code with:

javac -classpath "/usr/share/tomcat6/lib/*" HelloWorld.java

This should result in a HelloWorld.class file in the same directory. We need to move this file to the WEB-INF/classes directory:

mv HelloWorld.class WEB-INF/classes/

Now we need to add a web.xml file, which is going to tell Tomcat about our application. For our purposes we need to create a file with the following contents:

<?xml version="1.0" encoding="ISO-8859-1"?>

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">

    <description>
        Simple Hello World Servlet.
    </description>
    <display-name>Simple Hello World Servlet</display-name>

    <servlet>
        <servlet-name>HelloWorld</servlet-name>
        <servlet-class>HelloWorld</servlet-class>
    </servlet>

    <servlet-mapping>
        <servlet-name>HelloWorld</servlet-name>
        <url-pattern>/HelloWorld</url-pattern>
    </servlet-mapping>
</web-app>


Once that's done, make sure that the file is in the WEB-INF directory, and we can go ahead and create our war file with:

jar -cvf hello.war .

Now we should be able to go back to the manager application at http://localhost:8080/manager/html, go to the Deploy section, select our war file, upload it and have it deploy straight away.

Assuming everything went fine, we can now go to http://localhost:8080/hello/HelloWorld and see our servlet in action, printing out the famous "Hello World!" message in HTML.

That concludes this post. Hopefully now you know how to install Tomcat on Ubuntu, manage it using the manager and host-manager applications, as well as how to create and deploy a very simple web application to the server.

Monday 18 April 2011

Grails part 2 - First Project

In my last post we talked a little bit about the web framework known as Grails and went through the steps required to get a working Grails development environment setup. In this post we are going to go ahead and create our first working application using the framework.

For the first application, we're going to be creating a very simple application called BookStore. It's going to consist of only two types of "objects", called "Book" and "Author". So, to begin with, the first command we want to run is:

grails create-app BookStore

This will go ahead and create the project, producing a lot of output along the way. The output is the result of Grails creating the default project directory structure. Being a convention-over-configuration framework, Grails assumes certain things about how we're going to lay out our project, meaning that the unit tests go under BookStore/test/unit, the integration tests under BookStore/test/integration and the CSS files under BookStore/web-app/css.

Next we need to change into the newly created project directory and create the Book domain class:

cd BookStore
grails create-domain-class bookstore.Book


The "create-domain-class" command has gone and created two new files for our project, one to define our Book class and the other a Unit test class for Book. They can be found under:

BookStore/grails-app/domain/bookstore/Book.groovy
BookStore/test/unit/bookstore/BookTests.groovy


To do something a little more useful with our domain class, we're going to have to give it some properties. Open the domain class definition for Book and modify it to look like the following:

package bookstore

class Book {

    String title
    Author author

    static constraints = {
    }
}


We now have a Book class with a "title" and an "author" property. However, we haven't created the Author class yet, so the author property isn't pointing to anything at the moment. Let's change that by creating an Author class:

grails create-domain-class bookstore.Author

This should create the Author domain class in much the same way that it created the Book class. Let's modify this new class to have a name property and let's add a property to point to all of the books that this author has written. Modify the BookStore/grails-app/domain/bookstore/Author.groovy file to look like the following:

package bookstore

class Author {

    String name

    static hasMany = [books:Book]

    static constraints = {
    }
}


The line static hasMany = [books:Book] is an important one as it defines the "many" part of the "one-to-many" relationship between Book and Author. i.e. each Book has one Author and an Author can have many Books.

We can now run our application with:

grails run-app

Then navigate to http://localhost:8080/BookStore/. At the moment this just shows us the default Grails home page and there's not much we can do with it. This is because we haven't defined any controllers in our application. Controllers, of MVC fame, handle the application logic of our web-app. What does this actually mean? Well, they receive incoming requests, decide where to send them and how to deal with any incoming data. They also supply data to the view layer, which displays our objects to the user.
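As an aside (and not what we'll do below), the simplest possible controller in Grails 1.3 uses dynamic scaffolding, which generates the CRUD actions and views at runtime:

package bookstore

class BookController {
    def scaffold = Book
}

Instead of relying on runtime scaffolding, we're going to generate the controller and view code so that we can see and modify it.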

In order to grace our application with some controllers and views, we will execute the generate-all command which will get us up and running with both a controller and views:

grails generate-all bookstore.Book

We then also need to do this for the Author class:

grails generate-all bookstore.Author

Now, when we run the grails run-app command and navigate to http://localhost:8080/BookStore we should see two controllers on the default Grails page. Selecting either of these controllers will allow us to execute the full set of CRUD commands on the Book and Author classes.

If you've had a go at creating a new Author and/or Book object, you'll notice that when creating a new Book, for example, there is a generated Author drop-down containing a list of all of the Authors. However, instead of displaying the Author's name property, the drop-down lists the id and the class type, which is not very pretty. To change this behaviour we're going to override the toString method for both the Book and Author classes.

Modify Book.groovy to look like:

package bookstore

class Book {

    String title
    Author author

    static constraints = {
    }

    String toString() { return title }
}


And Author.groovy to look like:

package bookstore

class Author {

    String name

    static hasMany = [books:Book]

    static constraints = {
    }

    String toString() { return name }
}


And that's it for now. We've ended up creating a simple BookStore application with two domain objects in a one-to-many relationship.

I've only started scratching the surface of what can be done with Grails. In the coming weeks I'd like to go over unit tests, views, authentication and authorization and as many more topics as time will allow.

Friday 8 April 2011

Grails - Ruby on Rails for the JVM

This post will be about a framework for the JVM called Grails, which stands for (or rather, used to stand for) Groovy on Rails. Grails is a RoR-like framework making use of the Groovy language and the "Convention over Configuration" paradigm which RoR seems to have made so popular.

First, a little bit of history as to how I discovered Grails. I was introduced to the Grails framework in a roundabout way, when I started looking for the perfect RoR IDE and ended up using NetBeans (funnily enough, NetBeans has since dropped support for RoR). As a way of getting familiar with NetBeans I started working my way through the NetBeans tutorials, trying out the different frameworks and technologies along the way. This eventually led me to Grails. If there's one thing to take away from this little story, it's to never stop learning new ways to write code, and that the NetBeans tutorials are an excellent resource :-)

So, to get on with the show. We will be using a clean install of Ubuntu 10.10 (desktop edition) as our operating system, so bear in mind that you may have to tweak the commands below depending on your environment. The first thing we need to do before installing Grails is to ensure that we have a Java Runtime Environment and a Java Development Kit installed. Ubuntu 10.10 should come with the OpenJDK JRE installed by default. To check this, open a terminal and run java -version; we should see the following output:

java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.7) (6b20-1.9.7-0ubuntu1)
OpenJDK Client VM (build 19.0-b09, mixed mode, sharing)


To install the JDK, we run:

sudo apt-get install openjdk-6-jdk

and install all of the necessary packages.

In order to run grails from the command line, we'll also need to set the JAVA_HOME and GRAILS_HOME variables and add their bin directories to the PATH. We can achieve this by adding the following to the bottom of our ~/.bashrc file:

export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
export GRAILS_HOME=$HOME/grails
PATH=$PATH:$JAVA_HOME/bin:$GRAILS_HOME/bin


NOTE: We'll have to open a new console window for these environment variables to become available.

Now that we've set up the environment, we can go to grails.org and download the latest Grails binary (1.3.7 at the time of this article). I usually unpack it into the home folder so that it sits under ~/grails-1.3.7. The observant amongst you will have noticed that in our ~/.bashrc file we've set $GRAILS_HOME to ~/grails, so in order to make this work, we create a symbolic link to this folder using ln -s ~/grails-1.3.7 ~/grails. We should now have our Grails environment set up, so running grails from the terminal should give the following output:

Welcome to Grails 1.3.7 - http://grails.org/
Licensed under Apache Standard License 2.0
Grails home is set to: /home/srdan/grails

No script name specified. Use 'grails help' for more info or 'grails interactive' to enter interactive mode


Congratulations, we now have a working grails install!

That's it for this post, in the next one we'll be having a look at creating our first grails project.

Tuesday 29 March 2011

Book Review: Version Control with Subversion

I just finished reading the online version of the Version Control with Subversion book, which is available for free at the link provided. The book was an incredible help in teaching me the ins and outs of Subversion. My only criticism is that it sometimes goes into too much detail, exploring too many of the options and fringe use cases.

While I have managed to set up a repository, make it available over WebDAV and have a Redmine install use it, all without having read this book, this was mostly done by following a recipe. This means that I didn't delve too much into the workings of the software or all of the different configurations that you could have. This area is where this book shines. It tells you how you could have done things differently and the possible reasons for doing so. I personally found several configuration options which I have tweaked on my own Subversion/Apache setup to make it more secure and efficient.

Overall, a good read and while I can't say for sure that everything in this book will be relevant to you, I can say for sure that nearly everyone using Subversion will find something in this book for them.

Monday 21 March 2011

Eclipse IDE indentation shortcut

If you're editing some code in Eclipse and you want to indent it, you just select the block of code and hit the TAB key. But what if you want to go back the other way? In that case, you use Shift+TAB.

Simple really; I'm not sure why they don't list the keyboard shortcut next to the menu option that does the same thing.

Saturday 5 March 2011

Changing JRE's in Ubuntu

If you're working with Java on Ubuntu, you have two main choices as to which runtime to use: the open source OpenJDK and the official Sun runtime. Luckily, thanks to the alternatives system in Ubuntu (inherited from Debian), you can install both of these and switch between them as you see fit.

To see your currently installed JREs, run the command:

$ update-java-alternatives -l

and you should see some output like:

java-6-openjdk 1061 /usr/lib/jvm/java-6-openjdk
java-6-sun 63 /usr/lib/jvm/java-6-sun


This assumes that you've installed both of the runtimes. If you've only got one or the other installed, you will only see one line in the above output.

Now that we know which runtimes are installed, it would also be nice to be able to see which one we are using at the moment. Run the command:

$ java -version

If you see the output:

java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.7) (6b20-1.9.7-0ubuntu1~10.04.1)
OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)


You are running the OpenJDK runtime. If you see:

java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)


You are running the Sun runtime. As an aside, the OpenJDK version is generally a few releases behind the official Sun version.

So now that we know which runtimes we have installed and which one we are using at the moment, we can finally switch between them, using the update-java-alternatives command.

To switch to Sun's JRE:

sudo update-java-alternatives --set java-6-sun

To switch to OpenJDK:

sudo update-java-alternatives --set java-6-openjdk

To learn about the OpenJDK project and how it differs from the Sun Java runtime, you can have a look at:

http://openjdk.java.net/
http://en.wikipedia.org/wiki/OpenJDK

The 'update-java-alternatives' command has a lot more options allowing you to choose to switch just the JRE (and keep the JDK the same), or only switch out the browser plugin. To see all of the alternative options, have a look at the man page 'man update-java-alternatives'.

Wednesday 16 February 2011

Installing from URL on Citrix XenServer

Citrix XenServer has a nice feature which lets you install a Linux distribution directly from its repository. The one catch is that you have to get the URL exactly right, and if you don't, you only get an error when you try to start up the VM. I've only tried CentOS and Debian; the URLs for these distros are:

http://ftp.ca.debian.org/debian/ (Debian)

http://mirror.csclub.uwaterloo.ca/centos/5.5/os/x86_64/ (CentOS)

Note that since our host server is in Canada, I have used the Canadian mirrors. If you're using a different mirror, you'll have to replace the hostname with the hostname of your mirror.

Tuesday 15 February 2011

How to change timezone on RHEL/CentOS

Here’s how to change the timezone on CentOS/RHEL:

cp /usr/share/zoneinfo/Pacific/Auckland /etc/localtime
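A couple of related notes: a symlink is often preferred over a copy, and on RHEL/CentOS the persistent setting lives in /etc/sysconfig/clock, so you may want to update that too (a sketch, using the same timezone):

ln -sf /usr/share/zoneinfo/Pacific/Auckland /etc/localtime

# /etc/sysconfig/clock
ZONE="Pacific/Auckland"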

Tuesday 8 February 2011

Installing Redmine + SVN on Ubuntu 10.04

I've got a new workstation PC. It's not much (no hardware-assisted virtualization, onboard graphics) but it does the job. I've decided to take it and basically turn it into the ultimate development machine. That is to say, start right from the beginning and do things properly. That last sentence probably means a lot of different things to a lot of different programmers, so what does it mean to me? Well, basically:

* Ability to access the machine remotely (Dynamic DNS + SSH/Web access)
* A Version Control System (Subversion, thank you very much)
* Project management/issue tracking software
* Development web server + database (Tomcat & MySQL)

So, today I will be writing how to get Subversion + Redmine up and running on Ubuntu 10.04. Partly to help someone out there that might be having the same problem and partly because I'll probably be having to install it again in the future and having a written record might save some time :-)

I've installed both Subversion and Redmine from the repositories and generally used the sample configuration which came with the packages, with a little bit of tweaking of course.

Installing Subversion is as easy as running:

sudo apt-get install subversion libapache2-svn

Next, I mostly followed the subversion install instructions found on the ubuntu wiki page, setting up a private repository. The wiki writeup is excellent, telling you how to get up and running quickly, whether you use subversion over HTTP/HTTPS/SVN/SVN+SSH.

Redmine was a little more tricky to install, although still a lot easier than I remember it being before it was in the repositories. Start by running:

sudo apt-get install redmine redmine-mysql

If you don't use MySQL, use one of the alternates (redmine-sqlite, redmine-pgsql). The sample configuration for Apache can be found at /usr/share/doc/redmine/examples/. I used the 'apache2-alias.conf' file by pasting it into /etc/apache2/sites-available/redmine and running:

sudo a2ensite redmine
sudo /etc/init.d/apache2 restart

After this I got an error saying that one of the apache directives was in the wrong place:

... waiting Syntax error on line 14 of /etc/apache2/sites-enabled/redmine:
SocketPath cannot occur within section


I then tried moving the directive to /etc/apache2/mods-available/fcgid.conf; however, this ended up giving me a different set of errors in the Apache error log:

[Sun Feb 13 16:23:33 2011] [error] (13)Permission denied: mod_fcgid: couldn't bind unix domain socket /var/run/redmine/sockets/default/6126.28
[Sun Feb 13 16:23:33 2011] [warn] (13)Permission denied: mod_fcgid: spawn process /usr/share/redmine/public/dispatch.fcgi error

Finally, I commented this line out of both sections and was able to get Redmine going without fcgi. (Later I realised that I had Passenger installed and running, which was probably why fcgi kept throwing the errors.)

Then you can navigate to http://localhost/redmine/ and configure the admin user (the default username/password is admin/admin), and you should have a working Redmine install.