Web Programming Best Practices

Recently, Patrick had a great post on web coding best practices. I wanted to echo some of his points and add a few of my own. As products have moved to the web, I think some best practices have been loosened, since many web languages are less formal than their desktop counterparts and creating a new build is often as easy as moving a file around.

Key takeaways from Patrick:

1. Have a staging server. Every change should have a dedicated test environment, set up as identically to production as possible. Patrick recommends using a hosted server; I personally have a dedicated desktop, which works fine. One risk area I see for myself is problems with MX entries, networking setups, and the like, which are hard to replicate.
2. Use source control. I personally use CVS. I am not as advanced as Patrick here: I still prefer to SFTP or SSH my files across from the staging server rather than run a proper deployment. This is an area where I could use some improvement.
3. Repeatable deployments. I have a nightly script which backs up all my configurations, my database, and the latest web files into several tar files. To re-create the server, I just have to drop these where required. On my to-do list is a script to do all of this for me.

My Additional Best Practices

There are a few more lifesaving activities I have learned from my day job leading teams at a big company and from my night life as a lone coder:

1. Unit Testing
2. Automated Regression Testing
3. Statistical Analysis of program and user behavior

Unit Testing

When programming in a language like Java, unit testing is built into the process with frameworks like JUnit. Less so for web languages. I personally code using PHP, and I don’t currently have a good unit testing framework. Rather, I have a dedicated page in my web application for admins which performs all unit tests I have written for the site. Simply loading this page will bring up (hopefully) a field of green “Pass” metrics along with the test case and function tested.

If you are new to unit testing, it is pretty easy to get started. Look at every function your application uses, then think about all the different kinds of input that might be passed to it. Each one should have a well-defined expected result. Before writing a function, I make sure to write a long comment detailing expected inputs and outputs. From this, I add new entries on my unit test page for each test case I can think of, including bad data. Every time the function breaks due to something unexpected, I make sure to add a test case. This helps future-proof the function against changes that break existing requirements, and it will catch most logic errors not found at compile time.
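To make this concrete, here is a rough sketch of what one entry on such a test page might look like. The function, its contract, and the helper are all illustrative, not taken from my actual site:

```php
<?php
// Tiny helper: compares expected vs. actual for one test case.
// On a real admin page this would render a green "Pass" or red "Fail" row.
function check(string $label, $expected, $actual): string {
    return $expected === $actual ? "Pass" : "Fail";
}

/**
 * Normalize a quantity from user input.
 * Expected input:  a numeric string or int, possibly with whitespace.
 * Expected output: a positive integer quantity, or 0 for bad data.
 */
function parse_quantity($input): int {
    $trimmed = trim((string)$input);
    if (!is_numeric($trimmed)) {
        return 0;
    }
    $n = (int)$trimmed;
    return $n > 0 ? $n : 0;
}

// Test cases derived from the comment block, including bad data.
$results = [
    check("plain number", 3, parse_quantity("3")),
    check("whitespace",   7, parse_quantity(" 7 ")),
    check("negative",     0, parse_quantity("-2")),
    check("garbage",      0, parse_quantity("abc")),
    check("empty string", 0, parse_quantity("")),
];
```

Each time the function surprises you in production, that surprise becomes a new entry in the array.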

Automated Regression Testing

Unit testing helps, but it is only part of the way to a full test solution. There will be a lot of functionality which can’t be easily tested with a unit test, or full transaction flows which have to work through multiple steps. For this, I recommend developing a full regression test, with well-defined inputs and success criteria. I personally use a series of Chickenfoot scripts in Firefox to test business flows. Every potential action of a user should have its own test script.

A common user process to test might be: user lands on home page -> user searches for product X -> user reads 2 reviews -> user clicks purchase -> user logs in -> user enters payment details -> user submits order -> user reviews order.

I have automated scripts for many of these kinds of business flows. Whenever I conduct usability testing, I watch what the user does, and write down their actions to create a flow test from. I also create multiple variations of each flow based on what real users do, or what I think they would do. Every time I am working on a new release, I can run this suite of tests and quickly determine that everything is working as expected. Generally, I run this test suite after completing all unit testing and basic sanity checks using my own browser.
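The idea behind these flow tests can be sketched in plain PHP. My Chickenfoot scripts drive a real browser; in this standalone sketch the step bodies are stubs standing in for real HTTP calls, so the flow name and steps are purely illustrative:

```php
<?php
// A flow is an ordered list of named steps; each step returns true on
// success. The runner stops at the first failure, so you know exactly
// where in the business flow things broke.
function run_flow(string $name, array $steps): array {
    foreach ($steps as $label => $step) {
        if (!$step()) {
            return ["flow" => $name, "status" => "FAIL", "failed_at" => $label];
        }
    }
    return ["flow" => $name, "status" => "PASS", "failed_at" => null];
}

// In a real suite each closure would fetch a page (e.g. via curl) and
// check the response for expected content. These stubs always succeed.
$purchase_flow = [
    "land on home page"     => fn() => true,
    "search for product X"  => fn() => true,
    "read 2 reviews"        => fn() => true,
    "click purchase"        => fn() => true,
    "log in"                => fn() => true,
    "enter payment details" => fn() => true,
    "submit order"          => fn() => true,
    "review order"          => fn() => true,
];

$result = run_flow("purchase", $purchase_flow);
```

Running every variation of every flow this way, after a release, is what gives you confidence that nothing user-facing broke.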

Statistical Analysis

Sometimes, errors are very hard to spot, or changes result in unexpected consequences. This is one area I plan to implement but don't currently have in production. To create statistical analysis dashboards, you first have to collect data over time. This can be done with an analytics package like Google Analytics, or with other server-based data collection methods.

Once you have some data, you can create spec limits, generally 1.5 sigma levels around the mean. (If "sigma levels" sounds like gibberish to you, it is the same as 1.5 standard deviations, but in Six Sigma terminology.) Imagine that your application always uses 30-50% CPU. After a release, you see it jump to 75%. Since this is more than 1.5 sigma levels, your dashboard would pop up a warning that CPU is out of spec and may need investigation.
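A minimal sketch of that spec-limit check, with made-up sample numbers for illustration:

```php
<?php
// Flag a new reading that falls outside mean +/- (sigmas * std dev)
// of the historical data.
function out_of_spec(array $history, float $reading, float $sigmas = 1.5): bool {
    $n = count($history);
    $mean = array_sum($history) / $n;
    $variance = array_sum(array_map(
        fn($x) => ($x - $mean) ** 2,
        $history
    )) / $n;
    $sd = sqrt($variance);
    return abs($reading - $mean) > $sigmas * $sd;
}

// Hypothetical daily CPU readings hovering in the 30-50% range.
$cpu_history = [32, 41, 38, 45, 50, 35, 42, 47, 33, 39];

$normal = out_of_spec($cpu_history, 44.0);  // within limits
$spike  = out_of_spec($cpu_history, 75.0);  // well outside 1.5 sigma
```

A dashboard would run this check against each tracked metric after a release and highlight anything that comes back true.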

CPU is a simple way to illustrate the concept; however, this could be used more effectively in the same way A/B tests are used: analyze visitors' behavior, conversion rates, click-throughs, and other metrics for variance. If you see a shift after a change, then you probably have something that needs to be dealt with.

What else?

Are there other key things you do to manage your business or project cycles which should be considered coding best practices?
