20070630

3 things I learned about software in and out of school

Following the lead of Carnage4Life and Scott Hanselman, here are my own items:

Three things I learned while in school

  1. Some people can program, others can't. Those who can't will never be good at programming, even if they work really, really hard. I learned this when I saw somebody stare at a compiler error message for 30 minutes (until I helped them with it) for a simple missing parenthesis! Staring at the error message for more than a minute is not going to help you figure it out when all it says is "Parse error..."
  2. It's hard to know when to stop generalizing. I learned that in Software Engineering when we had heated discussions about whether we should bring generalizations to the next level, something I was very much against.
  3. Working in a group is more fun, but can be disturbing to bystanders.

Three things I learned while not in school

  1. Business people tend to ignore technology under the pretense that business concerns are more important, but ignoring it is not a good idea. Technology problems don't really go away by themselves, even if CPUs become vastly faster every year.
  2. Software is a highly volatile industry. When I left school, it was the .com boom, 99.99% employment, etc. etc. Since then, I've seen ups and downs, but always in a very protracted time period.
  3. As you grow older, your experience will tend to grow stale. The reason is that as you gain experience, you are more and more likely to become the team lead or senior programmer of your team. As such, you won't have anybody to "pull" you forward the way you did when your team lead or senior guy was there and you tried to show off that you were a better coder/designer. The only fix is to always remain open to ideas, regardless of how junior the person expressing them may be.

20070627

Lesser-known IoC containers

The point of this article is to talk about my experience with IoC containers other than the 1000-pound gorilla in the IoC container space.

Through some interesting circumstances (mostly, my hard-headedness about not including 15 megabytes of stuff just to get an IoC container; note that I do know Spring is modular, but it wasn't at the time), I'm one of the few people who hasn't been using Spring. At work, we've mostly used PicoContainer as our middleware "glue" layer. I've also dabbled with Guice, and Plexus (a bit) when messing around with Maven Mojos. This is a summary of my experience with each.

PicoContainer

Let's cover the IoC container I feel I know best first--PicoContainer. PicoContainer is a small (the JAR is 100 KB or so), type 3 (constructor injection) IoC library.

And that's pretty much the only thing it will ever be, if you don't customize the library. It does constructor injection. It supports hierarchical containers to resolve dependency conflicts. And that's it.
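
To make this concrete, here is roughly what basic PicoContainer usage looks like. This is a minimal sketch from memory of the 1.x API (names such as DefaultPicoContainer, registerComponentImplementation and getComponentInstance are how I remember them; check your version), and the Service/ServiceImpl/Client classes are made up for illustration:

    import org.picocontainer.MutablePicoContainer;
    import org.picocontainer.defaults.DefaultPicoContainer;

    public class PicoExample {
        // Made-up components: Client declares its dependency in its constructor.
        public interface Service { void run(); }
        public static class ServiceImpl implements Service {
            public void run() { System.out.println("running"); }
        }
        public static class Client {
            private final Service service;
            public Client(Service service) { this.service = service; } // constructor injection
            public void go() { service.run(); }
        }

        public static void main(String[] args) {
            MutablePicoContainer pico = new DefaultPicoContainer();
            pico.registerComponentImplementation(Service.class, ServiceImpl.class);
            pico.registerComponentImplementation(Client.class);

            // Pico picks Client's constructor and resolves Service for us.
            Client client = (Client) pico.getComponentInstance(Client.class);
            client.go();

            // A child container sees its parent's components; registering a
            // component in the child overrides it locally without touching the parent.
            MutablePicoContainer child = new DefaultPicoContainer(pico);
            child.registerComponentImplementation(Client.class);
        }
    }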

However, the real power of PicoContainer is its open architecture. You can implement your own component management strategy by implementing ComponentAdapter and ComponentAdapterFactory. There are a few example adapters. But overall, this lets you futz around with the component instantiation without having to resort to aspect-oriented programming or exotic annotations. Also, since you are in full control of which ComponentAdapter is used for which component, you can completely customize the instantiation on a per-component basis.

As an example of how useful this can be, it was very simple to implement a specialized adapter that injects some dependencies through constructor injection, then looks for more dependencies to inject through setter injection. When my experiences with Guice left me somewhat lukewarm, I was able to quickly whip up an @Inject annotation for PicoContainer.
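
To give an idea of the gist of that adapter, stripped of PicoContainer's actual ComponentAdapter plumbing (whose exact interface I won't reproduce from memory here), it boils down to plain reflection: satisfy the greediest constructor you can, then fill in any setters the container can also resolve. A hedged sketch of the idea, with a plain Map standing in for the container's resolution logic:

    import java.lang.reflect.Constructor;
    import java.lang.reflect.Method;
    import java.util.Map;

    public class MixedInjection {
        // Illustration only: the Map plays the role of "what the container can resolve".
        public static Object instantiate(Class<?> type, Map<Class<?>, Object> resolver) throws Exception {
            // 1. Constructor injection: pick the greediest constructor we can satisfy.
            Constructor<?> best = null;
            for (Constructor<?> c : type.getConstructors()) {
                boolean satisfiable = true;
                for (Class<?> p : c.getParameterTypes()) satisfiable &= resolver.containsKey(p);
                if (satisfiable && (best == null || c.getParameterTypes().length > best.getParameterTypes().length)) {
                    best = c;
                }
            }
            if (best == null) throw new IllegalStateException("No satisfiable constructor for " + type);
            Object[] args = new Object[best.getParameterTypes().length];
            for (int i = 0; i < args.length; i++) args[i] = resolver.get(best.getParameterTypes()[i]);
            Object instance = best.newInstance(args);

            // 2. Setter injection: fill in any remaining dependencies exposed as setters.
            for (Method m : type.getMethods()) {
                if (m.getName().startsWith("set") && m.getParameterTypes().length == 1) {
                    Object dependency = resolver.get(m.getParameterTypes()[0]);
                    if (dependency != null) m.invoke(instance, dependency);
                }
            }
            return instance;
        }
    }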

Finally, the hierarchical container facility is extremely useful for one-off dependency injections, such as injecting dependencies in objects retrieved from the persistent store by Hibernate.
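
For example (a sketch against the 1.x API as I remember it, with made-up Order/TaxService/OrderPricingService types), a throwaway child container can mix long-lived services from the parent with a one-off instance such as a freshly loaded entity:

    import org.picocontainer.MutablePicoContainer;
    import org.picocontainer.defaults.DefaultPicoContainer;

    public class OneOffInjection {
        public static class Order {}            // the object coming out of Hibernate
        public interface TaxService {}          // long-lived service, registered in the parent
        public static class OrderPricingService {
            public OrderPricingService(Order order, TaxService taxes) {}
        }

        public static OrderPricingService priceOrder(MutablePicoContainer appContainer, Order loadedOrder) {
            // Throwaway child: cheap to create, sees everything registered in the parent.
            MutablePicoContainer child = new DefaultPicoContainer(appContainer);
            child.registerComponentInstance(loadedOrder);
            child.registerComponentImplementation(OrderPricingService.class);
            return (OrderPricingService) child.getComponentInstance(OrderPricingService.class);
        }
    }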

So, in conclusion:

Pros

  1. Very small
  2. Reasonably fast, to the point that it can be used with throwaway containers
  3. Works even with old JDKs such as 1.4
  4. Component adapters are extremely flexible
  5. Containers are very lightweight, can create and throw away child containers at will

Cons

  1. The authors believe constructor injection is the only viable option. However, although constructor injection enforces integrity, it's also very awkward, especially when you get more than 5 dependencies. OK, you probably shouldn't have that many, but sometimes, especially in Struts actions or some other equally centralized point of logic, you will get that many, and it's still less awkward than having a façade just to cut down the dependencies.
  2. Although the adapters are very flexible, there are very few useful implementations that come with PicoContainer. One that supports Guice-like semantics would be welcome.
  3. The hierarchical system for resolving dependencies works well in some situations, but is extremely awkward in others. Granted, it's possible to write a component adapter that uses an alternative resolution mechanism, but in the IoC realm, PicoContainer is a bit like assembly language, or Forth; even relatively common cases have to be implemented by hand.
  4. There is relatively little integration with 3rd-party frameworks (such as Struts, Wicket, and so on). On the other hand, such integration is always very simple to write with PicoContainer, which is not necessarily the case with every IoC container.
  5. Error stack traces are OK, but sometimes very confusing; specifically, it's not always clear which dependency was missing when trying to instantiate a component.

Summary

If you want a small container and don't need many fancy features, or if you prefer to write such features yourself to gain maximum flexibility, PicoContainer fits the bill. In my mind, it's more of an IoC library than an IoC framework.

Guice

Guice is a relative newcomer in the IoC container arena. Its main claim to fame, despite what its adopters will tell you, is that it comes from Google. Also, unlike PicoContainer, it came out at a time when people started being annoyed at massive XML files, such as those that show up in many industrial Spring-based code bases.

It immediately grabbed the interest of many a blogger, showed up on DZone, yadda yadda. The question is, can you believe the hype?

Well, underneath the hype is actually a very nice IoC container that goes the extra mile to provide you with near-perfect error reporting, and that is not afraid to sacrifice purity on the altar of practicality. Guice supports, out of the box, all sorts of dependency injection scenarios, provided that you mark your dependencies with the @Inject annotation. From a theoretical point of view, this is more invasive than a run-of-the-mill IoC container, since classic IoC containers should, in theory, let you use plain objects without any special annotations. This, many argue, makes Guice much less flexible than Spring or PicoContainer.

In practice, I found that adding annotations makes sense in most cases. Externalizing all dependency-related information sounds great in theory, but in practice, you'll end up with 80% of your services having only one sensible implementation. The services with more implementations often have implementations different enough that the POJO receiving them works well with only one of them (I estimate that about 10% of application services are like this). Guice's use of annotations to mark injectable resources, and to let the target object control to some degree what resource gets injected, is very practical.
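
For reference, here is roughly what this looks like with Guice 1.0, as far as I can reconstruct it; the Module and @Inject usage is the part I actually exercised, while BillingService and friends are made-up names:

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Inject;
    import com.google.inject.Injector;

    public class GuiceExample {
        public interface BillingService { void bill(); }
        public static class DefaultBillingService implements BillingService {
            public void bill() { System.out.println("billing"); }
        }

        // The dependency is marked directly on the POJO -- this is the "invasive" part.
        public static class Checkout {
            private final BillingService billing;
            @Inject
            public Checkout(BillingService billing) { this.billing = billing; }
            public void run() { billing.bill(); }
        }

        public static void main(String[] args) {
            Injector injector = Guice.createInjector(new AbstractModule() {
                protected void configure() {
                    // The only wiring left in code: which implementation backs the interface.
                    bind(BillingService.class).to(DefaultBillingService.class);
                }
            });
            injector.getInstance(Checkout.class).run();
        }
    }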

The IoC container is pretty fast, and does some wicked ASM and cglib-fu to provide precise error reporting and the best possible speed regardless of the situation. Overall, it has an "advanced technology" feel to it. It shows a lot of polish in its package structure and in name selection.

However, Guice is a very "closed" system. Being advanced technology is nice, but it does make certain kinds of things a bit more difficult. Here is an example of one such difficulty, and incidentally the reason we went with PicoContainer at work. Guice comes as an "uber jar"; it bundles two versions of ASM (one for cglib, and a more recent version for itself; ASM is noted for major incompatibilities between minor versions, much to the annoyance of projects using it, and cglib appears to be evolving very slowly these days). Hibernate also uses a version of ASM, albeit an older one, as well as the same version of cglib as Guice. So, in an effort to trim the 550 KB or so JAR file to its bare minimum (remember, PicoContainer takes 110 KB...), I tried to get it to work with the older ASM version. The only part that didn't compile was some code used for error reporting, so I figured I could live without it. Well, I was wrong; the previous version of ASM crashes and burns during unit tests. I'm wary of dismissing obscure crashes as situations that will not happen in production code (even though the failing unit test looked like it was doing something exotic), so I left it at that, figuring I was better off sticking with PicoContainer and bloating our applications as little as possible.

Some may dismiss this as a trivial reason to not use Guice, and in some ways it is. But it is something to be aware of. I know Hibernate, for one, has had all kinds of problems with use inside environments that come with a different version of ASM. So I'm a bit wary of such annoying dependencies, especially on ASM and cglib which are known troublemakers. Granted, the problem is a bug in ASM, but it does mean that Guice's use of ASM is exotic enough to trigger that bug.

Another problem with such a closed code base is that it's hard to extend it without releasing your own build of it. I'm not that keen on depending on an annotation that's inside the IoC container's package; I'd prefer depending on, say, the @Resource annotation that's standardized in a JSR (granted, it's not supposed to have exactly the same semantics, but my point is that I dislike putting hard dependencies at the POJO level). I understand the necessity of the annotation, and it's no worse, in my mind, than the transient or volatile keyword. Except in one way: it binds the code specifically to Guice, not just any IoC container. It would be nice if you could ask Guice to use another annotation type (this is noted as issue 70 in their issue database), but right now, you can't.

Pros

  1. Still reasonably small
  2. Very advanced, gives a peek at what libraries could look like if people started to target Java 5 (see also Stripes)
  3. High-tech, uses all kinds of tricks to maximize performance and give the best error reporting possible
  4. More popular than PicoContainer at this point, is getting all sorts of frameworks integrated with it
  5. The only IoC container I know of that allows post-instantiation injection of dependencies for non-constructor-injected services. This is actually a great idea in some cases (such as deserialization); see the sketch after this list
  6. Used in Struts 2. So it's going to end up being used by somebody visible, unlike PicoContainer :-)
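
Regarding point 5, the call I have in mind is Injector.injectMembers(); a minimal sketch (Order and AuditService are made-up names), assuming the injection points on the existing object are marked with @Inject and that the injector has a binding for them:

    import com.google.inject.Inject;
    import com.google.inject.Injector;

    public class ReattachExample {
        public interface AuditService { void audit(String message); }

        // An object Guice did not create, e.g. coming out of deserialization or an ORM.
        public static class Order implements java.io.Serializable {
            @Inject transient AuditService audit; // field injection point, lost on serialization
            public void cancel() { audit.audit("order cancelled"); }
        }

        public static Order reattach(Injector injector, Order deserialized) {
            injector.injectMembers(deserialized); // fills in @Inject fields/setters on the existing instance
            return deserialized;
        }
    }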

Cons

  1. Sometimes too advanced; that kind of trickery can hurt if you get bitten by a bug (admittedly, there don't appear to be many of them)
  2. Binding to a specific in-package annotation is annoying; I'm curious as to why there isn't a way to configure it
  3. Not an open architecture the way PicoContainer is. It's very easy to futz around with PicoContainer internals. Guice is much more selective with respect to what it lets you do or not do.
  4. Java 5 only. Not that I care, but somebody might. Though, they did manage to make it work with Retrotranslator.

Summary

If you want a rich IoC container without the huge XML file tradition (yes, Spring folks, I know about JavaConfig, but it's still not what most people use), with sensible defaults and excellent error reporting, Guice fits the bill. However, if you want something with a hood that you can pop open at will, you may prefer another IoC container.

Plexus

I haven't played with Plexus a lot, so I can't really give too many details. My only experience was inside the Maven 2 codebase.

As far as IoC containers go, it looks OK (although it does suffer from the XML configuration file syndrome, the XML files are lighter than Spring 1.2 configurations [though not Spring 2.0 configurations], and you can use JavaDoc tags to autogenerate the XML file). However, I felt a bit annoyed at it overall. Error reporting, at least within the Maven environment, is not that great, even for simple mistakes like forgetting the appropriate JavaDoc tag or misspelling something. Plus, I can't really figure out where this project fits. It doesn't look like a project that's used by anybody except the Maven guys; why didn't they use PicoContainer to start with instead?

There are a lot of Plexus components, but they don't tend to be that reliable, or their use within the Plexus infrastructure is not consistent. A good example is the password input component. Some Maven components use it, some use a plain text input component instead. It's not quite clear, when developing Plexus-aware components or Mojos, which you're supposed to use for what.

A pet annoyance of mine: the Plexus guys want you to obtain the logging service through dependency injection. Although that sounds great in theory, I think it's nuts. Logging (at least, programmer logging) is purely a developer service. I'd even prefer not having to create a special object to start logging--if it could figure it out from the current stack with reasonable performance, it would be much better. So, if, to get logging, I need to create a field, a setter, and configure something in an XML file... I'm not going to use logging, or at least, I'll use Jakarta Commons Logging or SLF4J and just forget about Plexus logging.
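
To be clear about what I mean, with SLF4J the whole ceremony reduces to one static lookup, with no field, no setter, and no XML (a minimal sketch; the class name is made up):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class MyMojoHelper {
        // One static line instead of a field + setter + container configuration.
        private static final Logger log = LoggerFactory.getLogger(MyMojoHelper.class);

        public void doWork() {
            log.debug("doing work");
        }
    }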

So, all in all, it looks like an interesting project, but I don't really know why anybody would use it over Spring, or Guice, or even PicoContainer. I guess every developer likes to write an IoC container, because although it's not too hard, there are enough challenges and futzing around with reflection or byte code engineering to make it an interesting task.

Closing thoughts

So, there you have it, a quick roundup of some lesser-known containers. I also know of a few others:

  • HiveMind/Tapestry-ioc, the container behind Tapestry. This one looks very powerful from a pure technology point of view, as HLS does not mind looking for good ideas in other projects, but like Tapestry, it changes wildly between releases, and releases happen relatively frequently.
  • Yan, a relatively unknown container that appears to allow injecting almost everything into almost anything. The author is one of the few who have documented how to use their container in a rich domain model. The Spring guys have allowed this only very recently, and it requires the use of the AOP sledgehammer to get it to work.
  • And, of course, Spring, whose basic container is becoming more powerful with every release thanks to ideas from all those lesser-known projects, but which remains a huge project. Granted, you can use only parts of it, but it's not the standard usage.

I hope this (longish) post will be of some use to somebody.

20070618

Eclipse Europa Review

About this review

OK, so it looks like I can win a t-shirt doing an Eclipse Europa review, and it happens that I've been using it since M7 (the last release before RC0). So, even if I don't get a t-shirt, I'll post this, because I'm a nice guy and I want people to benefit from my living on the bleeding edge.

Before going on with the review, it would probably help readers to know what my Eclipse usage profile is. I use Eclipse mostly at work (http://www.alogient.com), where we build web applications and web sites. I'm currently doing a lot of work on a transactional Java application, using Maven 2 as the build system, Struts 1.2.9 (yes, I know, it's old... it's also stable and well-known), Hibernate 3.3.2 + patches, PicoContainer 1.2, Jakarta Commons Lang and Collections, and other miscellaneous libraries.

Given that, I won't use a lot of the new features touted by Europa (and specifically WTP 2.0), such as JPA and JSF support. JPA may get some use someday. JSF... nah. My only experience with JSF was extremely painful, so I don't really want to use it.

So, let's see the new features I'm likely to use in WTP:

  1. Support for JSP tag files. We've started to use JSP tag files quite a bit, and I'm looking forward to better support for them.
  2. Better HTML and JSP formatting when requesting a "format everything" (CTRL-SHIFT-F)
  3. Better publishing performance.
  4. Improved "maximize editor" behaviour.
  5. Improved generics warnings.
  6. Rename refactoring changes.
  7. Ability to refactor without saving.
  8. Class editor showing disassembled code.
  9. Improved presentation of libraries in project explorer.

(In case you're wondering, I've had to piece this list together from the new and noteworthy pages, because I've been working with Europa for long enough that I don't recall many specific enhancements.)

First impressions

My first impression once I booted the new Eclipse was that they've made it a bit faster, again. I can't really quantify this, but overall, the IDE feels snappier, especially when dealing with the WTP features. This is a great time saver in the regular look-at-web-page/fix-bug/publish/restart-server cycle. More speed is always a good thing, especially since I run Eclipse on Linux/GTK+, which is usually slower than under Windows.

The workspace switching improvements are welcome, letting you switch between recently used workspaces without having to open the "Switch Workspace" dialog.

I initially imported my settings from a settings export of 3.2.2. This worked mostly OK, except for the XML syntax highlighting which, somehow, always loses my colors. This is very annoying, given that I like to use a dark background for code (OK, XML isn't really code, but it still gets a dark background). Even after importing my settings, the XML editor ends up having all content and CDATA sections in black-on-black. This kind of thing is commonplace in Eclipse and is generally annoying.

I took our main project (with its project files generated by Maven) and published it to Tomcat 5.5. It worked like a charm, and publish was significantly faster, especially the initial publishing operation.

From a stability point of view, M7 was so-so, but RC0 was pretty solid. I've yet to see it behave badly. It looks more stable than 3.2, especially the WTP features.

Refactor changes

One of the "big deals" with this release is the "inline" rename refactor. Essentially, you select the "rename" refactor (CTRL-ALT-R) and instead of getting the old rename dialog, you get to retype the new name and press enter; the rename is then applied. I've seen this feature in IntelliJ IDEA as well.

My experience with this feature is summarized as follows: I turned it off. There were two annoying things with it:

  1. I often fire the refactor with the cursor in the middle of the identifier to rename. Eclipse doesn't pre-select the identifier, so typing the new name immediately doesn't overwrite the old name; instead, it inserts the new character at the position you were in. I assume some people like it that way, but I'm used to the old way of working, and to me, positioning the cursor prior to invoking the refactor interrupts my flow more than typing the new name. Note that I'm a fast typist, so this may make a difference.
  2. After running the refactor, all editors "flicker" briefly, and I'm always worried that the refactor wasn't applied properly or something. I agree that this is largely psychological. But it's another reason for which I turned off the new feature.

So, your mileage may vary, but I think this feature could use a little more polish. Maybe have an option to auto-select the renamed symbol so retyping would overwrite the old name. At least, from a "key feel" point of view, it would be more similar to the dialog, without losing the advantages of the in-editor rename (you see more context, it's less obtrusive, etc. etc. etc.)

The other big change I noticed in the refactoring support was the ability to refactor without saving files. This appeared to work well, and it's seamless enough that I'm not nervous about enabling it. The only thing that's slightly annoying is that when you build automatically and you do certain refactors with global impacts, Eclipse may flag a bunch of errors until you save your refactored file(s). This is probably due to the way the compiler works. But it's really not a big deal at all.

HTML and JSP improvements

The new HTML and JSP formatters are very much welcome. They do a much, much better job than the old ones, and I can actually (gasp!) use them to format blocks of JSP or HTML without fear. The old ones really mucked up our JSPs. The ones I've tried the formatter with use JSTL; I haven't tried it with scriptlet-heavy files, but hopefully, I don't have to work with those for some time. :-)

The JSP tag file support appears to work well. However, thanks to the use of an Appfuse-like taglib include at the beginning of every JSP file, it doesn't work in my everyday job, because the taglib declarations are stowed away in another JSP fragment. It looks like Eclipse doesn't see them in this case. I've resorted to copy-pasting the declarations temporarily while I work with the file so I can benefit from autocompletion. In that case, it works well.

Unfortunately, it doesn't work that well with custom tags sitting in custom JAR files that come from internal projects. I have a small library with JSP tags, which is included through a project dependency. I could never get autocomplete to work with those tags (their TLD file is in the META-INF directory of the source tree). Oh, well.

The tag file support also works for editing the tag files themselves. This is not a big deal, but it does avoid a bunch of spurious warnings about unknown tags when working with those. If you use a lot of tag files, you'll be much happier with WTP 2.0 than with 1.5.

Additional polish

The new look when maximizing editors (with the package explorer, etc. minimized to the side) and the fact that minimizing the docked windows actually minimizes them (instead of turning them into a space-wasting horizontal bar) is a much overdue enhancement, and I'm very happy it's done. Also, the fact that maximizing the editor window does not close the secondary tab group is very welcome, since I tend to work with two tab groups, a habit I've kept from my old Emacs days.

The additional diagnostics for generics are somewhat useful. They do catch a lot of strange corner cases in the generics specification. Since we use generics rather heavily (and sometimes in exotic ways), I'm happy with this addition, but I think most developers won't notice.

The improved library presentation also fixes a small annoyance. I used to have to filter out all libraries from the project view to avoid having a huge set of crap in there. Especially for Maven-driven projects, which include every JAR in the world, this is an important feature.

Finally, the class disassembly is very nice, but unfortunately, with Maven's automatic source download feature, I don't use it anymore. Wish I'd had that a year ago...

The less good

This is supposed to be a balanced review, so I have to say something bad... bear with me.

The main annoyance with this release is that a lot of those new features have additional preferences, but those only overload the already busy preference panes. One more checkbox here, one more checkbox there, and you end up with a huge number of checkboxes. The Eclipse preferences are messy compared to, say, Netbeans'. In all fairness, Netbeans doesn't let you configure all that much unless you go to advanced mode, but maybe an advanced mode is what Eclipse needs. All I know is that it's starting to get really, really busy in there, despite the fact that some preference pages have very little on them (see General | Search, for instance). To me, it looks like the nature of the project (it's a plugin-driven platform) has unfortunate side effects when it comes to preference management.

The only other annoyance for me is that I was becoming spoiled rotten by the new features every minor release of Eclipse was bringing. This release does not bring much new stuff, especially on the basic IDE front. This is somewhat less true of WTP, but I'd still like to see more advanced IntelliJ features in WTP, like advanced refactoring support for CSS files. As such, Europa is a bit underwhelming. It seems like many of the features are "gadgets", and there isn't much new stuff. I'd like to see more refactorings implemented, for instance.

Final thoughts

Still, despite all this, it looks like a worthy release. The additional features in WTP are nice, and it's probable that the JSF developers out there will be even happier. And, despite my whining about the lack of "big" changes, the small things count quite a bit as well, such as the new maximized look and the increased performance.

Is it worth upgrading from Eclipse 3.2/WTP 1.5? Well, maybe. I guess it depends what features you're using. For me, it was worth it, if only for the increased performance. It will be worth it eventually anyhow. But for the average Java/JSP developer, I'm not sure there's a compelling reason to upgrade as soon as it's hot off the press. For EJB or JSF developers, the picture is likely to be very different, but I admit that I don't use those technologies much anymore.

Still, Eclipse will remain my main IDE. As an aside, yes, I've tried Netbeans 6, but there's always something in the way it wants to work (usually with web projects) that doesn't match my way of working. Eclipse is very complex, but it's also very flexible, and I always manage to make it do what I want. That is why it remains my main IDE, and why I'll upgrade to Europa. Hey, what am I talking about! I'm already running it! :-)

Technical difficulties

It seems the old domain (bge.kernel-panic.net) does not work anymore, for reasons unknown (I don't host stuff there myself). I need to blog somewhere, so I moved the stuff to a generic blogspot address. Annoying, but sometimes, stuff like this happens. The new address (at least, temporarily) is http://bge-kernel-panic.blogspot.com. Sorry for the inconvenience. Have a nice day anyhow...

20050905

Odds and ends

I saw a couple of interesting things on TV today. First, there was this report of protest over the Gentilly 2 nuclear reactor renovations [1]. As you may already have guessed from my not-entirely-green opinions, I think those protests make no sense. Building a new nuclear reactor may be questionable (and then again, maybe not--nuclear waste is a huge problem, but so is the greenhouse effect, and at least the former can be found and contained with relatively primitive facilities). But Gentilly 2 has been built, and we can either get rid of it now (and thus make this costly endeavour more costly), or fix it, run it for 20 more years, and amortize the cost of getting rid of it over a longer period. Yes, Hydro Quebec is building a wind power park; that's not nearly done, and in the meantime, we're stuck importing electricity from coal-fired US electric plants. Turning off Gentilly 2 would make it worse.

On a related note, if the protesters are also those against new hydro dams, I have to wonder how they think we'll build the windmills they are so in love with. Answer: lots of petroleum, and a bit of hydro to power the facilities while they're assembling the stuff. We'll be lucky if we can maximize the use of hydro power. The fact is, as things stand right now, it's extremely inconvenient to carry electrical power, so we're very much slaves to oil when it comes to building stuff using heavy equipment. The energy density of windmills is much lower than a dam's, so I wouldn't be surprised at all if the amount of oil used to build up the wind power capacity is larger than that used to build one of the humongous hydro dams that are the pride of this province. The main reason for building windmills is when hydro dams have all been built and the demand still rises.

Yeah, demand could go down too, but it's a bit late to think about this--too many buildings built with poor insulation, too many joules wasted for no good reason except to save a few pennies in construction cost, etc. We'll just have to regulate for better energy efficiency and hope we'll stamp out those obsolete energy wasters sooner rather than later. By the way, I agree with ecologists that both the provincial and federal government are really being idiots about this. I'll excuse the provincial government somewhat because it has no money, but for the federal government, this is unacceptable. They're sitting on a multi-billion dollar surplus, and instead of lowering taxes or attempting to conform to the Kyoto protocol they're so proud of having ratified, they're just pissing off the provinces trying for a political land grab. The tragedy is really the lack of alternatives; the Liberal Party really needs a kick in the <somewhere> to wake them out of their arrogance. If they don't want to lower taxes, the least they could do is gain political favor by showing leadership on the energy front, instead of duplicating provincial programs.

But I guess that while Canada still makes a lot of money off tar sands, there's little incentive to do anything like that. Why not do both, export all the tar sands, let other countries foot the Kyoto fine, and laugh all the way to the bank? Oh, wait, the US is our main buyer and hasn't ratified the protocol. Damn it.


And before your friendly neighborhood defender of the A-25 bridge (which, by the way, does not look like it's going to happen--we'll get an "urban boulevard" on Notre-Dame street instead, and I'd rant about this, too, except I'm so disgusted I can't find the energy) is written off as a quack because he's inconsistent, I'll let you all know that I've been biking up a storm lately. I've also been buying local fruit and vegetables as much as I could, which is less than I'd like because the growing season is too short here. I'll also let you know that my CO2 emissions are 1.64 tonnes per year, according to the Climate Change site (which, I guess, is the only tangible thing the federal government has done about Kyoto...). The provincial average is 3.7. I can't go much lower than that if I want to survive winter, either; taking all their suggestions would lower it only to 1.4 tonnes per year, mostly by recycling. There is no recycling in this building, because of some lame-ass municipal bylaw, and that's the part that hurts the most.

I find it ironic that the calculator admonishes me for not lowering my emissions by a whole ton, when one ton is 66% of my CO2 emissions. There's not an awful lot more I can do, either; I barely use the car anymore, not using the A/C and dishwasher would save something like 75 kg of CO2 per year, and I can't compost because I have no yard (condo life, you know--I'm sure I'm saving more emissions by having to heat less).


On the biking subject, I'm starting to get serious about it. I bought some new bags for the rear rack (the old one was cheap crap and its zippers broke; the new one uses a pull rope, which is a much smarter system, and it's made in Quebec to boot), a small rear-view mirror for my helmet (looks silly, but still a good idea), and biking shorts. The latter were expensive, but well worth it. I feel like maybe I'll have children someday after a long ride, which is something I wasn't too sure about when I rode with regular shorts. My perineum would feel numb after 25 K or so, 30 K if I was riding on nice, well-maintained pavement (which doesn't really exist in Montreal, I'm sad to say).

Let me tell you about my latest bouts of craziness.

Two weeks ago, a very good friend (who is now my official biking partner) and I took the south trail. I joined her around St-Michel and Jarry East, we went to Christophe Colomb, all the way down to the Old Port, right to Pont de la Concorde, down to Jean Drapeau park, got lost (laughing most of the time) in Jean Drapeau park, decided not to go to the south shore given we had done the Ile Notre-Dame trail twice, went back up the Champlain Bridge Estacade, through the Lachine Canal trail, then back home after nearly dying on the Berri climb. The Berri climb, by the way, is much worse than I had expected. Halfway up, you feel great. Then the wind blows you nearly back down the hill, and the second part, while shorter, is steeper. You do the whole thing in first gear, believe me, especially if you've just done 35 K. Total, about 45-50 K.

Last week, we took the north trail, joined by her brother and sister-in-law. Christophe Colomb to Gouin, west all the way to the Bois-de-Liesse park, then on Pierrefond Boulevard all the way past St-Charles to the next park. Then back the way we came. Much less strenuous, but much longer. The trails that way are extremely nice (though with a not-too-nice view on the Hydro Québec high-voltage power lines), and except for a rather disagreeable stint on Boulevard Pierrefond where there's a one-block stretch without a trail (I swear! This is the silliest thing I've seen in bicycle trails, but it's true!), very enjoyable. Total, 80 K for me. Odd thing was, I felt tired, but I didn't feel cramped up the following day. Admittedly, we took it easy. It's still 80 K, and still impressive. I dislike doing that much distance in a car!

Today, went solo to the east, my friend being out of province on vacation. They have this new trail next to the De Montigny stream, going through the Marie-Victorin college grounds and the Rivière des Prairies hospital. Going to the trail is a pain, though. Up an industrial street, hoping to see it connect. Didn't. Had to cross Henri-Bourassa, a rather busy and wide boulevard. Luckily, this was Labour Day, so it was rather calm. But you can picture me running like an idiot next to my bike because there was no ramp to go on the sidewalk on the other side of the street. Next time I'll cross elsewhere (Renaude-Lapointe, maybe), but it's still likely to be not much fun. Anyhow, I then lost the trail on Maurice Duplessis, I think I turned left instead of right because I found its end on Boulevard Léger. Traffic is light despite the very wide roads there, though, so not a problem. Then on the East Gouin trail, which was overall nice. Then the Pointe-de-l'Île park, which was a bit swampish, but overall OK and mostly free of pesky insects anyhow. Then down to Pointe-aux-Trembles, which alternated between nice trails and so-so trails next to the railroad (lots of brush in the way there; it seems like they're not really maintained). Then a disgusting segment on the sidewalk in Montréal Est. This part alone makes me wary of doing that trail again; the air was full of petrochemical and solvent smells, and I wonder whether it cancelled any health benefit I got from the ride. Then a surprisingly nice part in Promenade Bellerive, which made me forget about the ugly part from before. Then another disgusting part in Hochelaga, where I stopped at the local Tim Horton's to get a sandwich. Then a part not worthy of being called a trail on Notre-Dame street; it's so incredibly bumpy and badly maintained that I would've been better off going up l'Assomption to Hochelaga and braving the annoying traffic. Then got lost and missed Viau, ended up on Pie-IX, turned on Ontario to join Pie-IX, up the hill (walking) of the Olympic park (it's short, but it felt like a 30% grade), one lap of the Maisonneuve park, then back home through Boulevard Rosemont and Beaubien.

I have no idea of the total mileage; probably around 50 K. But I rode at maximum speed for much of it and stopped only three times (once at the park, once at the Tim Horton's at Notre-Dame/Dickson, once before entering the Maisonneuve park because the hills had taken a lot out of me). I expect to be somewhat stiff tomorrow. Overall, I liked the trip, but the part through Montréal-Est really sucked. I'm likely to go to the Promenade Bellerive in the future to kill a few hours, though; it was a really, really nice place, and not nearly as busy as Parc Maisonneuve; also, going over the A-40 east is less stressful than going over it at Viau, because there's less traffic and the lanes are wider. But I'm not sure I'll do the whole east-end-of-the-island trip again, mostly because of the poor state of the Notre-Dame trail. From a pollution point of view, though, the Champlain Bridge part is probably similar, because you're travelling under the busiest highways of the area. Though, I guess being under means a lot of lighter particulates don't enter your lungs. Still, the west Gouin trail was much nicer, though it is less interesting, in the sense that the east part fools you into thinking you're in the middle of the countryside, then throws you in dense residential areas intermixed with industrial complexes.

So, remember that heading "I must be crazy?" Well, my sanity isn't improving. I suppose my health must be, though. I hope I can do some cross-country skiing this winter, otherwise it's going to be really boring waiting for next spring.


If you want a decent bike shop in the East End, I'd recommend André Lalonde Sport, on Métropolitain East, a bit east of Langelier (though you have to do some creative acrobatics to get there if you're eastbound on A-40). It's pretty close to my home, and they were having a summer sale, with many things 50% off. That's where I got the shorts and stuff. The bikes there look like they're good quality; maybe I'll get one of the hybrids there at the end of next summer. The Schwinn is somewhat capricious (the rear brakes tend to become loose) and heavy; it's perfect for riding to the metro and for the trip to the grocery store because it's unlikely to be stolen, but for long distances, the gear ratios are far from ideal and the fat tires and weight are really dragging me down. But maybe I'll decide it's good enough by next year. Let's say the improved comfort of the biking shorts has really improved my appreciation of the Schwinn, now that I don't feel like my bee-hind will fall off every time I'm unable to avoid a pothole.

In any case, it's good to have a decent bike shop close by, given that Canadian Tire is a really poor bike shop, Sport Expert is expensive, and smaller bike shops are somewhat far from my place.


  1. Gentilly 2 is the only nuclear reactor in the province of Québec. It's largely criticized, but IIRC, it has had a trouble-free life.

20050724

Capitaine Flam and the death of scientific optimism

This Friday, I was goofing off at work, mostly because I was just waiting for the rain to stop. I had used my bike in the morning and didn't fancy pedaling in the rather strong summer storm that was in progress. I eventually came across this site and began quizzing a co-worker on which songs he remembered. Now I feel quite old, because he didn't remember any of them.

I caught myself enjoying the Capitaine Flam (Captain Future in English and Japanese) theme song. It's disco! It's kitsch! But it's also a blast from the past.

I had never followed that particular series that much (preferring Albator to a large extent--seeing reviews of the two series, I sort of understand why). However, there's a particular episode that's still in my mind.

The good Captain finds himself stranded on a planet, his ship stuck somewhere in orbit. Together with other strandees, he builds a small ship from raw metal (how in the world can he do that? No machine shop, nothing... Anyway...). Unfortunately, the thrust system in the ship requires calcium to work, and there's no source on the planet. The Captain tries to sacrifice himself, but another strandee is quicker and walks into the power plant, letting his dead bones fuel the ship. All others escape, but rather gloomily.

Now, my recall of this episode may not be complete. Also, as I found out by checking out a few sample clips to refresh my memory, the animation was awful, the Captain was a stereotypical macho scientist-soldier with a density level approaching that of a neutron star, and overall, it felt very different from regular anime, plot-wise. But this particular story remains; it was a rather heavy-handed way to teach a kid that bones contained calcium, but it stuck in my juvenile mind. So, this is why I wanted to find out a bit more.

It turns out the series is based on an old pulp serial written by Edmond Hamilton in the 50's. Apparently, it was imported into Japan in the late 70's, and the Japanese really ate it up, despite all sorts of stupid plot devices, really weird concepts (one of the crew members was the brain of a dead scientist enclosed in a levitating cube filled with nutritive fluid--in the anime, it's a saucer-like device instead, but they kept the overall concept), a solar system with life on every planet (the anime places the action in different solar systems to make it a bit more plausible), a satellite around the sun, and a lot of other silliness. Curtis Newton, aka Captain Future, is the least plausible thing in the series, gifted with nearly superhuman brain power and reflexes. There's also a lot of silly stuff, like time travel, interdimensional travel, various human subtypes on every planet of the solar system, etc. etc. etc. And the total unconcern for safety around nuclear devices that filled pulps from that period.

It doesn't mean the series had no redeeming points. Newton is strong, fast, a perfect shooter, but still wins most fights through cunning and his superior scientific knowledge. Maybe unrealistic, but it did value brains over brawn, and although it was definitely meant to be wish-fulfillment for the target audience, it still made the pulp more clever than, say, plain superheroes. The text had some scientific explanations in it; some were completely bunk, but they were existing theories at the time. Keep in mind that before we sent probes all over the solar system, the fact that it was devoid of life was not obvious; we had only one planet as an example, and it's literally teeming with life of all kinds. But I think the main redeeming point is the sense of adventure, the optimism towards science, the sense that justice would prevail. Modern fiction is definitely more complex and mature for not being so optimistic, but it's also somewhat depressing, especially in light of the events of the last few days.

So, what have I learned from this trip down memory lane? First, I've found out why the anime felt different from other anime at the time; it was actually an American story, which explained why it felt somewhat "average" in some ways. Still, I think it's an important anime, because it's one of the few that did adapt Western concepts; even if they were somewhat juvenile concepts, they are still part of our culture.

Second, I learned that I'm still very nostalgic about such series. I wish they were more easily available. People in France have better access to those; we in Canada are too small a market to matter much, unless the US cares. And the US doesn't care about most old anime series, because they were horribly dubbed and sometimes hacked to pieces and turned into some sort of composite anime.

Third, I feel that although the way pulps looked at the world was very naive, they served a purpose. I doubt the grunge movement would have come to exist if kids had been reading that kind of stuff instead of hanging out in bleak suburban malls. But nowadays, there seem to be three kinds of media that younger people consume. Squeaky clean, mind-numbingly dull shows like Barney the dinosaur, which, frankly, insult kids' intelligence (compare those to intelligent shows like Passe Partout if you have the chance, and you'll understand). Violent or disgusting cartoons, which are all pretty much the same and are really cookie-cutter shows to sell toys and such. And the things they aren't supposed to watch, which are usually depressing, trash/destroy music videos or series. Hardly uplifting material. Combine that with news media that scream "fear! fear! fear!" all the time, hardcore porn that treats women like things, and wonder why you shake your head at the decline of social mores. Even adults aren't immune to such treatment, and kids are like sponges.

I am not advocating censorship in any form. All kinds of material have their place. But can't we get a breath of fresh air once in a while? That may explain why super-hero movies and Lord of the Rings are such resounding successes--they are breaths of fresh air. Unfortunately, they're also old material, and require suspension of disbelief. They also tend to be past-oriented, and I feel that future-orientation is one of the strong points of western society.

I guess I'm itching to see something like this corny, stodgy Capitaine Flam, but updated to modern standards. No need to make it violent or dark or anything; just remove the blatant macho elements and the rather dangerous use of atomic devices, make the science a bit more plausible (even if that means you have to introduce hyperspace, even though that's as implausible as everything else), and maybe we'll have something. It may not be that marketable nowadays, but dammit, I'm sick of wallowing in guilt over all the defects of my society, and I'm also sick that the only optimists appear to be idiots (i.e., right-wingers) who are in denial over today's problems. Those problems are solvable, and we should be working at solving them instead of denying they exist or accepting them as the inevitable byproduct of modern civilisation.

Take the widening gap between the rich and poor in industrialized nations (or better yet, the gap between industrialized nations and non-industrialized nations). This is indeed a problem; it's returning us to a feudal-like system, with corporate employment replacing serfdom. Sure, it's not as many hours, not as physically demanding... But those are superficial differences; the work owns you, just like it owned the serf. The left likes to say this is the byproduct of a capitalist system; the right likes to say that everybody gets richer this way, even if the rich are much, much richer than the poor. Both attitudes are problematic, because they both perpetuate the problem. If the left wins, you just get a different clique with all the riches (remember the USSR?) If the right wins, as they are right now, it only gets worse. If you read older sci-fi novels, there's a huge contrast. The world is owned by the average majority. The heroes are sometimes richer than the average, but they show very little greed. Of course, they're also paternalists, and tend to patronize "the little folk", but despite that, the whole feel of stories of this era is much more hopeful than what I see these days. It seems, in fact, that those heroes wish for a day where they won't be needed anymore, and where everyday people will be able to live in peace.

There are other aspects of modern life that aren't being seen in the same way as they were in those novels. But this is starting to be quite long, so I won't go there.

If there was one thing those old stories did that isn't really done anymore, it was to teach elementary science concepts in the guise of a space adventure. In a crowded world like ours, "hard" science is losing relevance because the hard problem is becoming how to interact with all those other 6 billion people sharing the space with you. But in space, nature reclaims its place and survival comes from scientific thinking once more, just like it did on Earth before we started to tame it. The paternalist, gun-toting Captain Futures weren't the essence of such storytelling; the ability of humans to figure out how nature works and how to make the best of it was. I'm sure it would be possible to have good, entertaining stories with interesting characters without turning them into the dark, depressive mess many modern sci-fi stories end up becoming. And it would also be possible to have such heroic stories without resorting to super powers, or without wallowing in distortions of medieval history. If anyone knows of an author who does this, let me know.

As for myself, I intend to give it a shot--to see if I can pull off such a thing. My own fanfics, although edging towards medieval/modern hybrids (and left unfinished, shame on me!), do try to have such an atmosphere. When I wrote them, it felt "natural," so I don't think I'd have to force myself. The main problem will be to figure out an interesting plot with a solid scientific basis (but not necessarily too solid--after all, the pulps weren't that solid about many things!), and come up with interesting characters.

Wish me luck!

20050718

Difference between Windows and UNIX programming cultures

This post on Slashdot links to an article comparing the UNIX and Windows programming cultures. However, it mostly talks about how the problem of usability is approached. I'd like to take a different tack and look at the differences between the APIs of the two systems.

  • Windows APIs are huge. In the Microsoft world, everything seems to end up being part of the core OS services somehow. This has the advantage that you don't need to expect people to have such-and-such library. Or does it? Changes to what is the "core" between OS versions make compatibility somewhat nightmarish; you're never quite sure what libraries are there or not. Writing installers is a mess. MSI helps, but not if there's no MSI package for the libraries. Another side-effect of this is that Windows programmers are always learning a zillion new things. Win32 services. COM. COM+. .NET. DNA. TAPI. The list goes on and on. Many of those APIs do the exact same thing, so learning the new one is only needed because the old one becomes obsolete. It's hard to stabilize such a huge API.
    In contrast, the core UNIX APIs have been mostly unchanging for 30 years. You can write most application-level code without touching the newer calls; newer calls are mostly there because they provide better performance, and are needed in more specialized situations. There are a lot of third-party libraries; however, they're not part of core UNIX, and it's reasonable for UNIX programmers (though maybe not from the users' point of view) to expect the needed libraries to be installed. Like core UNIX APIs, those libraries tend to use rather stable technologies.
  • Core Win32 APIs have no consistent error reporting. OK, this drove me up the wall when I was coding on that platform. Does MoveWindow() return NULL or INVALID_HANDLE on error? How about CreateFile()? And what's up with the ridiculous conventions for WaitForMultipleObjects()? Sure, GetLastError() is there, but so many APIs set this (including, say, MessageBox()) that many programs end up reporting an error as "The operation completed successfully". UNIX APIs tend to return ints, -1 on error with errno set, a positive integer otherwise. Period.
  • The C library in Windows is a mess. It's getting better recently, but people still use old Win98 boxen that don't have a decent libc installed. This, plus the annoying mishap with memory allocation (there are too many ways to allocate memory: GlobalAlloc() (deprecated), LocalAlloc() (deprecated), VirtualAlloc(), CoTaskMemAlloc(), malloc() and operator new in C++--and they all use a different heap!), makes writing interoperable DLLs a real mess. Contrast with UNIX, which tends to ensure that malloc works the same across all libc versions, and where upgrading your libc pretty much upgrades your whole system, and you'll see why I was pulling my hair out trying to fix installation problems with the C library. Of course, .NET will solve all this... Just like Java is supposed to solve similar problems on UNIX. Well, not everyone wants to install 100+ MB of runtime code just to run your application...
  • Windows SendMessage() is stupid. Granted, with MFC and such, you don't need to look at it as much. But what's the big idea of passing two parameters of a known bit-width for every message? Why not pass a void* pointing to a different struct for each message? The result: huge pain when porting from Win16 to Win32, and another huge pain that will occur when porting from Win32 to Win64. No wonder they want to move to .NET. Compare to X-Window, which uses the void* approach, and you have to admit that SendMessage() and the WindowProc() conventions are mis-designed.
  • Some Windows services are strangely tied to physical windows. For instance, many COM calls don't work if there's no window and no message loop. This is documented, but it's a pain in the ass for multithreaded programming. Ditto for timers; IIRC there's no way portable to Win98 that lets you have a timer callback without a message loop. Compare to UNIX setitimer(2).
  • UNIX threading is a mess. This has improved somewhat in recent years, but I still run into problems. Linux and glibc are the big culprits there. They have changed their threading strategies several times, and each time a glitch appears, we get a finger-pointing match between the kernel and glibc teams. This is annoying to say the least. At least one widely-distributed Linux distro (RedHat 9) exhibits severe problems under load, due to bugs in the glibc that are partly made worse by the JDK. In my view, threading should be a kernel service (and I'm not completely alone in this view--it seems the Linux kernel is moving more and more towards that model) and it should remain stable, dammit. Sure, you could do similar things with fork(), but that's not a reasonable approach with a GC runtime. In contrast, Win32 threading has been rock-solid for years. You can bitch a lot about their synchronization primitives (events are extremely easy to misuse, and their overlapped I/O is one of the most convoluted APIs I've had the displeasure of using, full of corner cases and with no easy way to cancel without introducing a lot of extra code), but at least, threads switch properly, semaphores are locked properly, and that part of the API has been very stable.
  • UNIX C++ integration sucks. UNIX people seem to prefer C. So, there's no integration between signals and C++ structured exceptions. C++ runtimes are not versioned as carefully as the libc. And so on and so forth. Annoying, this. C++ remains a second-class citizen in UNIX for rather stupid reasons. At least, in Windows, exceptions work somewhat right (you need to mess with _set_se_translator() IIRC to get it standards compliant, though), and the C++ runtime is versioned together with the C runtime (then again, the C runtime's versioning is already messy...).

I'm sure I could go on, but these are the main things that strike me. I don't know if this is useful to somebody.

Credentials: I've worked in a Windows shop for four years, writing Windows applications first in raw C, then in raw C++. I've seen Win16 (the horror! the horror!), the move to Win32, COM using raw C++ as well as ATL, lots of newfangled APIs (the pen API, the new serial port interface in Win32, WinINet, etc.) and had lots of headaches getting the stuff to work all the way down to Windows 95. Lately, I've been mostly writing Java applications for UNIX, but I've had the opportunity to write some C code on POSIX systems once in a while. I like to think I know what I'm talking about with these two APIs.

Back from vacation

Came back from vacation 2 days ago. Spent time with my family in Hampton Beach, NH.

I've been to Hampton many times in the past. However, it was my first time doing it by bus (because I was joining the family in the middle of their vacation). There's a couple of weird things I noticed with my bus trip:

  • Those idiots at Station Centrale kept insisting the bus went to Manchester, NH first, then Concord, NH. I had picked Concord, NH as destination because I wasn't sure there would be a bus to Manchester (turns out there was, I'll know for next time). Manchester, NH is further south than Concord, NH, so starting from White River JCT, VT (where I had to transfer from the Montreal-Boston bus), it made more sense that it would stop at Concord first. Sure enough, it did. Went to Manchester anyhow because it saved some time otherwise. But if I had followed the tickets blindly, I would have been stuck waiting for the Concord bus at Manchester, which I would have just missed...
  • On the first leg, we had plenty of room; everything got packed at Burlington, VT. Namely, I got a rather, ah, large person next to me with a bit of BO (don't blame him, though, it was really hot outside). Thankfully, unlike in a plane, I actually had room left over.
  • All the cute women were on the trip from Montreal, when the bus was half-empty, so my plan of having a nice lady sitting next to me obviously didn't work. And I'd shown up really early hoping that would happen. Bummer.

Besides that, not much to say about the vacation. It was mostly relaxing, no thanks to the idiots next door who partied until 5 AM every darn night. After four nights of this, several people complained (including us) and they got kicked out. The last two nights there were bliss compared to the previous nights. And before anybody says anything about me being an old fart and noise intolerant, I slept next to the A-15 in Montreal for four years, and I could sleep with the window open. A-15 is extremely noisy, so it's not about noise intolerance. Maybe I'm just incompatible with Rap music.

Then again, the owner told us those guys had made a foot-wide hole in the wall, and left shaving cream all over the place. I guess maybe they were just idiots.

Other things I noticed: property prices are insane down there. The only city with decent prices in NH was Manchester, which is odd, given that it's one of their largest cities. Coastal property is completely ridiculous, and rents are pretty bad as well. 4 1/2s start at $1000 in most locations. Keep in mind those locations are suburbia at best; there is no bus service to speak of, stores are only accessible by car, and so on. It gets worse in the Boston area (which, at least, does have some public transportation). I know taxes are low there, and mortgage interest is tax-deductible, but still.

Consumer goods prices aren't that fantastic either. I bought two things: a nice pair of shoes (and I could've probably found those in Canada, now that I think back on it) and the Noir DVD set. I would have preferred getting the DVD set here, but it's out of stock everywhere. Even in NH, I only found it in Nashua. I saved a bit of money mostly because NH has no sales tax, but if it hadn't been for the out-of-stock thing, I would've gotten it here.

Who cares about consumer prices? Well, it used to be that when going to the US, my family and I ended up close to the limit of goods we could bring back (there used to be a 300 CAD limit for a stay longer than 7 days). Now, although the limit is much higher, we weren't even close to the old limit. It's just not as attractive as it once was, even with the Canadian dollar so high and the lack of sales tax in NH. I'm really wondering how USians make ends meet, despite lower income tax. I'm sure their salaries, just like ours, haven't moved much since 2001; however, prices are higher, real estate is insanely expensive, and even gas prices must be starting to hurt. You hear a lot of stuff on the radio about 0-down mortgages, getting a loan from the mortgage, etc. etc. etc.

This is happening in Canada as well, but I'm not that worried; prices are still somewhat reasonable compared to theirs (it's still dang expensive, but looking at their real-estate classifieds sort of made me somewhat less sensitive to this). Taxes are high, yes, but we don't have that medicare mess to contend with. The provinces are on the edge of deficit, but at least the federal government is not (and for those who think state governments in the US aren't in trouble: think again, they had to cut a lot of services from what I heard). And most people I talk to, even home owners, are extremely wary of taking loans from the mortgage. At least in Quebec; I don't know how it is in Ontario or BC. So, from what I can see, there's still room for price growth on real estate in Montreal (a good thing for me) without necessarily hurting everyone. I'd also expect that, if there is a speculative bubble in real estate, it will hurt much less in Montreal than down south. There's definitely a huge bubble there; just look at Alan Greenspan's worries about the fact that raising interest rates does not raise mortgage rates. This is because banks expect to always be able to recoup their capital easily thanks to the speculative market, which inflates prices way too fast. If prices crash, or even stabilize, some banks are likely to be in trouble.

Of course, I'm no economist. But seriously--300 000 USD for a 4 1/2 condo is insane, period, especially in suburbia. I'm getting annoyed at Laval prices which hover around 150 000 CAD for similar units--and you will get access to the metro in 2006-2007 at that price. But then again, maybe there's something I'm missing.

It's just that I didn't expect San Francisco prices in the New England area. And I still think I had reasonable reasons not to expect that...

20050717

Cloud now running x.org

I just upgraded my main workstation, Cloud (named after the famed FF VII character with the big-ass sword) to use X.org.

Was a mostly painless upgrade. Didn't even follow existing instructions on the upgrade; I just noticed that a dist-upgrade grabbed a lot of xorg packages, so decided to take a look at xserver-xorg.

Initial problems: couldn't get OpenGL to work right. That didn't take too long to fix, though I'm not sure whether it's the refresh of the configuration I did or today's dist-upgrade that fixed it.

Second problem: it didn't really want to upgrade my XF86Config-4 file. Solution: nuke it, reconfigure the package, and let debconf do the work.

There were a few other annoyances (mostly related to the dga and xv extensions that are never installed by debconf, for odd reasons), but it's been extremely painless. The main remaining annoyance is packages that still insist on linking against the original version of the OpenGL libs (such as xscreensaver-gl or doomlegacy); those won't install. But such is life on the unstable tree.

A lot of people reported better performance, but I haven't noticed anything. However, the radeon driver has always been very good even in XFree86, so that may explain it.

The main positive is that I'll finally be able to counter taunts from the Arch Linux fan at work. :-)

Enabled comments

OK, I decided to enable comments, mostly so I get an idea if anyone except Code Ronin reads this, or if I'm "pissing in a violin" as one of my French friends likes to put it.

Note, however, that you'll need to create a blogger user to be able to add a comment. This is in an effort to prevent too much spam. Maybe I'll move to a moderation-based system at some point in the future, but that would require migrating all of the blog. Right now, despite previous complaints, blogger is free, simple, and publishes to an FTP site, so I'll keep using this.