AgileGallery OpenLaszlo Source Code

December 5th, 2008

I just released the source code for the OpenLaszlo Flash photo gallery on GitHub. Enjoy!

Python CSV to Fixed Sized Text Tables

December 5th, 2008

Here is a quick and simple Python class I hacked up to take comma-separated values and reformat them into a fixed-width text table. It supports multi-line rows and column width limits, and it automatically creates a header row from the first row of the CSV input.

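The core idea can be sketched in a few lines of Python. This is a minimal illustration of the technique, not the released class itself:

```python
import csv
import io

def csv_to_table(csv_text, max_width=20):
    """Render CSV text as a fixed-width text table: the first CSV row
    becomes the header, and cells longer than max_width wrap onto
    continuation lines (multi-line rows)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    # Split every cell into chunks no wider than max_width.
    wrapped = [[[cell[i:i + max_width] for i in range(0, len(cell), max_width)] or ['']
                for cell in row] for row in rows]
    # A column is as wide as the widest chunk it contains.
    widths = [max(len(chunk) for row in wrapped for chunk in row[col])
              for col in range(len(rows[0]))]
    lines = []
    for row in wrapped:
        height = max(len(cell) for cell in row)   # tallest cell sets the row height
        for n in range(height):
            cells = [(cell[n] if n < len(cell) else '').ljust(w)
                     for cell, w in zip(row, widths)]
            lines.append('| ' + ' | '.join(cells) + ' |')
    return '\n'.join(lines)

print(csv_to_table('name,role\nAda Lovelace,first programmer'))
```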

PHP Array to Text Tables

October 31st, 2008

I needed this for a little project, so I coded it up. I haven't tested it extensively, but it works just fine for formatting the associative arrays I have run through it.

The class supports multi-line rows, column width limits, and automatic creation of a heading row based on the keys of the associative array.

Usage and Output Example

Text table formatted output of the above example:

| COMPANY  | ID |       BALANCE       |
| AIG      | 1  | -$99,999,999,999.00 |
| Wachovia | 2  | -$10,000,000.00     |
| HP       | 3  | $555,000,000.00     |
| IBM      | 4  | $12,000.00          |
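For reference, the same idea can be sketched in Python, with a list of dicts standing in for PHP's associative arrays. This is an illustration of the technique, not the PHP class from the post:

```python
def assoc_rows_to_table(rows):
    """Format a list of dicts as a text table. The heading row is built
    automatically from the keys of the first dict, uppercased to match
    the sample output above."""
    keys = list(rows[0])
    table = [[k.upper() for k in keys]] + [[str(row[k]) for k in keys] for row in rows]
    # Each column is as wide as its widest cell (heading included).
    widths = [max(len(r[i]) for r in table) for i in range(len(keys))]
    return '\n'.join('| ' + ' | '.join(c.ljust(w) for c, w in zip(r, widths)) + ' |'
                     for r in table)

print(assoc_rows_to_table([
    {'company': 'AIG', 'id': 1},
    {'company': 'IBM', 'id': 4},
]))
# | COMPANY | ID |
# | AIG     | 1  |
# | IBM     | 4  |
```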


Accurate Web Application Benchmarking Methodology

October 1st, 2008

I was recently searching for a benchmark comparing the performance of the PDO and ADOdb database abstraction libraries for PHP in the context of a web application, and came up with nothing satisfactory on the subject. There were several benchmarks floating around, but I noticed a problem with the methodology they used.

A Flawed Methodology

  1. Create a separate script for each library to be benchmarked
  2. Within that script, create a time marker at the start and end of the script
  3. After the first time marker, include the library
  4. Between the time markers, execute X (example: X=500) iterations of a block of code which calls into the method(s) of the library

The benchmarkers then executed each script and calculated the time difference between the start/end time markers of each script to determine the winning library.
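Translated into Python for illustration (the originals were PHP scripts; `json` here is just a stand-in for the library under test), the flawed pattern looks roughly like this:

```python
import time

def flawed_benchmark(iterations=500):
    """One script run, per the steps above: a start marker, a single
    library inclusion, X iterations of method calls, an end marker.
    The inclusion cost is paid once but amortized over 500 calls,
    which understates its real per-request impact."""
    start = time.perf_counter()         # step 2: time marker at script start
    import json                         # step 3: include the library
    for _ in range(iterations):         # step 4: X = 500 iterations
        json.dumps({'id': 1})           # a call into the library's methods
    return time.perf_counter() - start  # end marker; the difference is the score
```

(One caveat on the analogy: Python caches imports within a process, whereas PHP re-includes on every request, which only strengthens the point about inclusion cost.)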

Why this Benchmarking Methodology is Less Accurate

When benchmarking libraries to be included at runtime in a PHP-driven web application, there will be varying overhead for the actual inclusion of the library. I would assume this applies not only to PHP but also to languages such as Python, Perl, and Ruby.

Thus any benchmarking methodology which fails to factor in the library inclusion cost, in a realistic proportion to the calls into that library, will be skewed, sometimes badly. Never in my experience have I needed performance-critical code that makes 500 calls into a database abstraction library in a single hit. That would be a ratio of 1 library inclusion per 500 method calls.

A more accurate ratio of library inclusions to library method calls is 1 to 3. So on average, for each hit to an application where the library is included, we call methods of that library three times.
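A toy model makes the skew concrete. The costs below are made up purely for illustration (they are not measurements of PDO or ADOdb):

```python
def per_hit_cost(include_ms, call_ms, calls_per_hit):
    # One request pays the inclusion cost once, plus N method calls.
    return include_ms + call_ms * calls_per_hit

# Hypothetical numbers: library H is heavy to include but fast per call,
# library L is light to include but slower per call.
H = (20.0, 0.1)
L = (1.0, 0.2)

# Flawed 1:500 ratio -- H looks better (~70 ms vs ~101 ms per "hit"):
print(per_hit_cost(*H, 500), per_hit_cost(*L, 500))
# Realistic 1:3 ratio -- the ordering flips (~20.3 ms for H vs ~1.6 ms for L):
print(per_hit_cost(*H, 3), per_hit_cost(*L, 3))
```

The ranking of the two libraries reverses entirely depending on the include-to-call ratio, which is exactly why the 1:500 loop misleads.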

A More Accurate Benchmarking Methodology

We are benchmarking a web application, so we need our 500 iterations on the client side, not in a high-iteration loop inside the application.

Each of our client side iterations will be a separate request causing our library to be loaded once, and methods of that library to be called in a realistic ratio that would simulate real application calls.

To achieve this, we use a tool such as ApacheBench on the client side and make 500 requests (example below). We still have a script for each library we wish to benchmark, but we scale the method calls within that script down to a more realistic number, such as three.

# Library A results
ab -n 500 http://localhost/library-a.php

# Library B results
ab -n 500 http://localhost/library-b.php
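Each per-request script then performs one library inclusion plus a realistic handful of method calls. A Python analogue, with sqlite3 standing in for the database layer (the real scripts were PHP using PDO and ADOdb), might look like:

```python
import time

def handle_request():
    """Simulates one hit: include the library once, then call into it
    three times. (Note: Python caches imports within a process; a PHP
    script pays its include on every request, which is exactly the
    cost this methodology is designed to capture.)"""
    start = time.perf_counter()
    import sqlite3                          # the per-hit library inclusion
    conn = sqlite3.connect(':memory:')
    rows = [conn.execute(f'SELECT {n}').fetchone()[0] for n in (1, 2, 3)]
    conn.close()
    return rows, time.perf_counter() - start
```

ApacheBench then drives 500 such requests, so the include cost is paid 500 times rather than once.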

The Result

In the case of benchmarking PDO vs. ADOdb, I saw benchmarks using the flawed methodology which put PDO at only a 125% speedup over the ADOdb library.

When I benchmarked with the methodology above (see the full benchmark here), the PDO library provided as much as a 2840% speedup over the ADOdb library.

My conclusion: library inclusion time in web languages makes a huge difference.

Benchmarking PDO and ADOdb Database Abstraction Libraries

June 23rd, 2008

I was unable to locate a satisfactory benchmark of PDO vs. ADOdb, so I decided to create some to get an idea of the performance differences.

I expected PDO to beat ADOdb easily, since PDO is a native PHP extension and requires no run-time include. See my web application benchmarking methodology here to understand why I was unsatisfied with the existing PDO vs. ADOdb and PEAR::NET library benchmarks I found online.
