Category Archives: Programming

Update: IP address detection in PHP

We now actually have an even better workaround for the IP address problem I talked about last time.  Near the top of any PHP page,  include the following line:
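
A minimal sketch of that line, assuming the patch lives in an include file; the path here is a placeholder, not the real one, so check the standard template for the actual location:

```php
<?php
// Hypothetical path: pull in the IP-address patch before anything reads REMOTE_ADDR.
require_once('/path/to/wou_ip_patch.php');
```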


After that, the normal IP address variable $_SERVER['REMOTE_ADDR'] will hold the actual IP instead of one of the four load-balancer addresses. $_SERVER['HTTP_WOU_REAL_IP'] will continue to hold the real IP as well, but you should use the normal variable whenever possible.

By the way, the patch is already included in the WOU standard template, so if you are using those topinclude and bottominclude files, you only need to include the above code if you need the IP address before you call topinclude. Since it uses require_once(), there will be no conflict when the topinclude calls the patch again.

IP address detection in PHP

In PHP, the standard way to get the user's IP address is from the $_SERVER['REMOTE_ADDR'] variable. Unfortunately, that doesn't work on our main webserver. That's because that server is really several servers sitting behind a special server called a load-balancer. When anyone goes to the site, they are really going to the load-balancer, which hands off the request to one of the actual webservers. This is a fairly common setup in the web world, because it means the whole website will no longer go down if a single webserver fails.

Unfortunately, there's a drawback. Because of some peculiarities of our network setup, when the load-balancer passes a request to one of our webservers, that server sees the load-balancer's IP address instead of the actual user's. If you code in PHP on our server, you may have noticed that $_SERVER['REMOTE_ADDR'] always has one of four specific IP addresses. (Though you might expect only one from my oversimplified description just now.)

Luckily, we now have a workaround. Dave McEvilly figured out how to have the load-balancer include the user's actual IP in a new, custom variable when it hands the request to whichever actual webserver it chooses. So the normal 'REMOTE_ADDR' variable still has one of those same four addresses, but you can use $_SERVER['HTTP_WOU_REAL_IP'] to get the IP of the user who made the actual request.
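
Here's a small sketch of how a page might read that variable, with a fallback to the normal one. The function name is mine, not part of the workaround:

```php
<?php
// Sketch: prefer the load-balancer header when present, otherwise fall
// back to the normal variable. Pass in $_SERVER (or a test array).
function wou_client_ip($server)
{
    if (isset($server['HTTP_WOU_REAL_IP']) && $server['HTTP_WOU_REAL_IP'] !== '') {
        return $server['HTTP_WOU_REAL_IP'];
    }
    return isset($server['REMOTE_ADDR']) ? $server['REMOTE_ADDR'] : '';
}
```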

I know this was a pretty technical post, but as always you can contact me with questions.


It's already added to the standard template, so after you have called the topinclude file, you can use $_SERVER['HTTP_WOU_REAL_IP'] anywhere you need it.


Not to toot my own horn or anything…

Yeah, whenever anybody says that, they actually mean they *are* going to brag about something, I know.  I think I just did something kinda cool, though.  See, one of the departments has a few pages of their website on a third-party webhost, due to a long and complicated story that I’m not going to get into because this is already going to be a long explanation.

This hosting company is flexible enough that Danielle could make our template work over there, but there really wasn’t any way to include the departmental navigation files from our site.  They’d just have to edit both the local copy for their pages on our servers, and the hardcoded version on every page on the other web host.  Making it worse was the fact that the webhost only allows a certain number of edits before they start charging every time a file is changed (which still boggles my mind.)

Ordinarily we could just use curl() or the like to fetch the file from our server, but this company doesn’t use PHP.  They still run ColdFusion, of all things.  If ColdFusion has a way to fetch and include offsite files, I don’t know it. But I figured out a way to use jQuery and JSONP to have the pages on the other host talk to our webserver and get the sidebar include file they need.  They just need to source a JS script from our site, and it reads variables from the page to know which navigation files to include.  I could’ve hard-coded it for this one department, but I hate doing that when I can make a tool that can be used again.

"But, security!" I hear some of you saying. You're right that it's a bad idea to let people fetch files off your site based on JavaScript code; anybody can mess with it using Firebug or some such, and change the variables. That won't work too well here, though; it's locked down to specific folders and filename patterns (no slashes or .. for instance), plus there are a couple more security features I'm not going to talk about.
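
For the curious, the server-side half of that lockdown looks roughly like this. This is a sketch, not the real code; the function names and the response shape are my own inventions:

```php
<?php
// Only plain names are allowed: letters, digits, underscore, hyphen.
// No slashes and no dots, so "../" path tricks can't get through.
function safe_include_name($name)
{
    return preg_match('/^[A-Za-z0-9_-]+$/', $name) === 1;
}

// Wrap the requested sidebar file in the caller's JSONP callback.
// Both the callback name and the file name get the same strict check.
function jsonp_sidebar($callback, $file, $basedir)
{
    if (!safe_include_name($callback) || !safe_include_name($file)) {
        return 'alert("bad request");';
    }
    $html = file_get_contents($basedir . '/' . $file . '.html');
    return $callback . '(' . json_encode(array('html' => $html)) . ');';
}
```

The page on the other host just sources a script tag pointing at this endpoint, and jQuery drops the returned HTML into the sidebar.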

If you want more detail, email me.  That’s all for now.


As of a bit after 5:00 PM Friday, our main webserver is running on Apache, rather than Sun’s Java Enterprise System Webserver as it has been for years. Originally this change was meant as a test, but by late Friday night (AKA Saturday morning) things looked good enough that we decided to run with it.

Unfortunately, what looks good at 4:00 AM isn’t always so great by the light of day. Ever since then I’ve been running around putting out fires.

Here’s a brief list of the problems we’ve seen (most are already solved.)

  • Cold Fusion (.cfm) pages don’t work on the new server. All those sites I’ve found have been redirected back to the old server where they do work.
  • Blog admin didn’t work. I had to point that back to the old server too, for now.
  • Portal single-signon links weren’t all working. The link for blog admin is fixed. The WOUAlert link isn’t fixed yet, but at least I know the problem and am working on it.
  • PHP pages using old-style code block tags didn't work. Some people were coding PHP using the old, deprecated <? ... ?> short tags to delimit blocks of PHP code. The correct way to do this is <?php ... ?>. I had to tell the new webserver to allow the old-school code, but we really need to get rid of it because it can get mixed up with other languages. BTW, this is what caused the quick links on the homepage not to work, so I'm not totally innocent here myself.
  • Old-style PHP database calls didn't work. I replaced the PEAR/DB module so some of this stuff will work again, but there might be other stuff made without PEAR/DB and with obsolete database calls that won't work until it is rewritten.
  • Overly tight security settings. Some pages weren’t able to get external files that they needed and so were erroring out.
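
To make the tag problem from that list concrete, here are the two styles side by side. This is a template fragment of my own, and whether the short form runs at all depends on the server's short_open_tag setting:

```php
<? echo 'old short tag: breaks when short_open_tag is off'; ?>
<?php echo 'full tag: works on any PHP setup'; ?>
```

The short form is also exactly why it "can get mixed up with other languages": an XML declaration starts with <?xml, which a short-tag-enabled parser tries to run as PHP.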

There’s more, but I need to get back to that WOUAlert problem.

User creation process improvements

A few weeks ago, when Brian was on vacation, some of the rest of us had a communication breakdown about creating new user accounts, and several new employees had to wait entirely too long before they could log on. This was at least partially my fault.

Brian is going to be gone again next week, but this time we won’t have these problems because we’ve improved the process. First of all, we found out why most of the notifications were misrouted and fixed that. Also, I’ve added some more automation to the faculty/staff account creation system, so there’s less work to do. I can’t really talk about the details because that would mean giving out too many specifics about our servers, but several steps that formerly had to be done by hand now happen by themselves. The weird part was how easy it was to do, once we took another look at the process; once upon a time it had to be complicated, but thanks to various changes we’ve made in the last few years, a bunch of stuff was no longer needed.

Anyway it's way the heck late at night and I need to get out of here. At least the prettymail stuff is working, um, pretty well. (Yeah, this is my 2AM sense of humor.)


OK, it's a quarter to two, and I've been here since noon (you can criticize my schedule when you start working fourteen-hour days). All week I've been wracking my brains over one single annoying bug in the new prettymail system. Three messages out of 100 come across completely blank, for no apparent reason whatsoever. Their source code looks exactly as it should; even though these are annoying MS-Word copied emails with hundreds of lines of insane proprietary CSS code, I'm pretty sure of that. I've read pages and pages of documentation, I know all about how to build multilevel MIME-type email messages in five different encodings, and I've gone so far as to manually change the source code in every way I can think of and pipe the message through the mail system over and over and over again.

But they don’t show up. Completely blank. No matter what I do. It blows my mind. I’ve tried everything I can think of.

This kind of thing is just part of being a programmer. Something will stop you cold in your tracks and you have to beat your head on the wall until you find that one letter that’s supposed to be capitalized but isn’t, or that one variable that got replaced by a local value when you weren’t looking, or whatever.

The problem is, I really need to get this done because people are waiting on me. Other projects are waiting on me. I can’t afford to take this long on something like this. One thing I do know, though; it’s 2 AM now. I’m sure not going to solve it right now. Time to have a weekend and try some more next week.

Pretty email

You may or may not remember that for a while we converted the all-faculty/staff email list to a system of multiple categorized lists that people could opt out of. The mechanism basically grabbed email out of the inbox, turned it into plain text, wiped out attachments and replaced them with links to copies of those same files on the website, and put the messages into a database where certain people could approve and categorize them. Another process then searched for approved emails and sent them to the lists belonging to the right categories.

This system turned out not to work well enough, especially the part where it converted everything to plain text. Email messages can have all kinds of cruft in them, including buckets of formatting codes from MS Word, weirdly encoded characters from odd email systems, forwarded messages with attachments, messages forwarded AS attachments, etc. So we had too many messages coming through the system all messed up, and we had to switch back to the old way of doing things.

Since then I’ve been working on a better system and it’s finally coming together. Instead of converting to plain text, I’m converting to the standard multipart/alternative format that contains an HTML body to be used in HTML-capable email systems like the WOU webmail, and also a plaintext body for email programs that can’t display HTML (or where people don’t like HTML and have turned it off.)
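
The shape of such a message is simple enough to sketch by hand; the real system uses proper message-building code, and the content here is made up. Note the order: plain text first, HTML second, because mail clients prefer the last alternative they can display:

```php
<?php
// Minimal multipart/alternative sketch. The two parts carry the same
// message; the boundary string separates them and must not appear in
// either body.
$boundary = 'wou-' . md5(uniqid('', true));

$headers = "MIME-Version: 1.0\r\n"
         . "Content-Type: multipart/alternative; boundary=\"$boundary\"\r\n";

$body = "--$boundary\r\n"
      . "Content-Type: text/plain; charset=UTF-8\r\n\r\n"
      . "Fall term starts Monday.\r\n"
      . "--$boundary\r\n"
      . "Content-Type: text/html; charset=UTF-8\r\n\r\n"
      . "<p>Fall term starts <b>Monday</b>.</p>\r\n"
      . "--$boundary--\r\n";
```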

It’s a pain because I still need to parse an unknown number of attachments, forwarded messages, etc. The attachments have to be removed and copied to the webserver so we don’t have to cram each one through the mailserver a thousand times. I need to replace image links within the email with links to the website copies, and remove various kinds of custom formatting… blah, I’m tired. More later.

Change in wou_ldap.vnum_to_uid

Fair warning: this entry will make little or no sense to you unless you work in UCS and do PL/SQL programming.

I’ve made a change to wou_util.wou_ldap.vnum_to_uid, specifically to the way it deals with V-numbers that are attached to multiple user accounts. Before, if you passed a usertype as the optional second parameter, and it couldn’t find a uid matching that type, it would still return a uid if it found one of another type that had the given V-number.

As of today, passing the second parameter will make the function behave more strictly; if a user account of the given type cannot be found, the function will return zero even if there is a user account of another type that has the given V-number.

In other words, passing a usertype to vnum_to_uid() means you want a matching uid only if it also matches the given usertype.

If you only pass a single parameter, the function will behave exactly as before; if multiple accounts are found, it will return the last one found. This is usually the most recently created account, but don’t rely on that always being true.

Oh, and one other note: there is a new usertype, “Alumnus”. All LDAP accounts of people who have graduated from WOU have this type. It is possible for someone to have both Student and Alumnus, for example if they graduated and then returned for a Masters program.

More on the Mini

I mentioned that I didn't like the keyboard on my mini, and it turns out a lot of mini-9 owners share that feeling. I was looking around on the forums last night and found out about a different keyboard you can order from Dell for fifteen bucks. Apparently by shrinking the spacebar and backspace keys by a fair bit, and slightly narrowing the others, they've gotten a much more normal arrangement. I tried to order it, but apparently it's out of stock; they're going to email me when it gets back in.

I did find out about another deal, though; they were selling 2GB memory modules for thirty bucks. Oddly enough, had I ordered my mini originally with 2GB, it would have added $50 to the price, so I grabbed the chance. I want to run Windows XP in a virtual machine on the thing, and that takes a fair chunk of RAM.

Wait, you may say, aren’t you running Windows already? Nope, though you can get the Dell Minis with Windows, it’s more expensive that way. To get the best price you need to get them with Ubuntu Linux. In case you’re not really up on the computer world, Linux is a free operating system (well, technically a group of free operating systems) very similar to Unix, which has been around since the 1970s and is still used on a lot of servers, including many here at WOU.

Linux has been around since the 1990s, but until fairly recently, you had to be a serious computer geek to get much use out of it. The Ubuntu project is one of several efforts to change that, and it’s been very successful, combining the many open-source programs and systems to build a variant of Linux that’s probably the easiest ever for non-geeks to get into.

It's so easy that when I decided I didn't like the somewhat idiot-proofed version of Ubuntu that came with my Mini, I was able to completely wipe and reinstall it with version 9.04, the latest and greatest, in just a couple of hours. I'm liking 9.04 (AKA "Jaunty Jackalope" in Ubuntu's naming scheme) a lot better than the version I started with, and I only had to fix one little problem for it to work perfectly on my Mini. There are a bunch of very useful instructions available online, so I didn't have to spend hours hunting around for obscure snippets of information as I did when I tried installing other versions of Linux on other machines in the past.

Anyway, back to work. After a slow few months, I’m starting to feel like I’m getting some programming mojo back, and that feels pretty good. Hopefully things keep looking up, because I’m behind on some stuff that really needs to be finished soon.