Why bashblog?
One might ask oneself: "Why did this weirdo choose bashblog, a static-site generator, when there is stuff out there providing a Web UI for posting and everything?". The short answer is: "Because I consider those things bad design choices for my usage until proven otherwise." The long answer, of course, explains a bit more, so let's talk about why I see simplicity as a feature.
Avoiding rebuilds of unchanged files
When a page is delivered by a content-management system like WordPress, it is built anew every time a visitor requests it (except for the visitors who profit from caching). SSGs, on the other hand, build all the files, but only once. Their generated files therefore basically act as a cache where all the hard work is already done. This is especially easy to see with files that are queried very often, say the RSS feed: with a single user querying it every full hour, the CMS ends up 24 times a day checking the cache, checking whether the found version is still up-to-date, and delivering the current version. A setup with an SSG, on the other hand, just has the webserver take the file and serve it.
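To make the build-once idea concrete, here is a minimal sketch of the technique (my own illustration, not bashblog's actual code; it assumes a markdown converter on the PATH and the posts/ and public/ directories are placeholders): a page is regenerated only when its source actually changed.

    #!/bin/bash
    # Regenerate an HTML file only when its source is newer than the
    # existing output; untouched posts are simply skipped.
    for src in posts/*.md; do
        out="public/$(basename "${src%.md}").html"
        if [ ! -e "$out" ] || [ "$src" -nt "$out" ]; then
            markdown "$src" > "$out"
        fi
    done

Everything the webserver ever touches is the finished file in public/; the loop above is the only place where work happens, and only when there is new work to do.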
One might try to argue that a CMS avoids rebuilding pages that no one cares about. While this may seem like a good point, it ignores the fact that a CMS also has to store all the information. By choosing a storage format that is not the delivery format, all you can win in the best case is space on your hard drive. But since disk space is cheap and plentiful, you quickly start losing a lot to rerunning your check-for-cache code, which can have a serious impact on your CPU load.
So this boils down to the fact that you usually write your content once, but it will be accessed many times. Therefore it is smart to optimize delivery to the audience rather than preparation.
Reducing attack surface of the service
As hinted at in the last paragraph, a CMS adds complexity to every single page request, while an SSG pays that cost only once per content change. Doing the work ahead of time also avoids the possibility of bugs in your CMS code exposing your server to the not-so-nice people. Of course there may be issues in the code delivering your static site as well, but that code is much better tested and much easier to write and maintain than code that builds pages dynamically on demand. It is also much more reusable, independently of how your page looks, how it is architectured, and so on. Therefore a greater community can watch over it working properly, which again reduces the risk of security-relevant bugs.
Following the UNIX-principle
A good rule for achieving the above in general is to follow the UNIX philosophy, especially its first point: "Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new 'features'."
In this case this means that creating the content and delivering it are two separate tasks, and they will be dealt with accordingly.
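With bashblog that separation looks roughly like this (the host and path below are placeholders, not my real setup):

    # Creation happens locally, offline if you like:
    ./bb.sh post                  # opens $EDITOR, then regenerates the HTML
    # Delivery is a separate, dumb step: copy plain files to any webserver.
    rsync -av --delete ./ user@example.org:/var/www/blog/

Neither step needs to know anything about the other; any tool that can move files around would do instead of rsync.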
Portability
Another issue you may run into is moving your deployment from one server to another, or to an entirely different type of service, like Freenet. When working with an SSG you gain a lot from the fact that you get plain files, which are supported everywhere I can think of. When your SSG uses relative links rather than absolute ones for links within the same site, you can even put the exact same files you have online onto a thumb drive or an SD card and propagate the content that way. I mean, how cool would that be? Some friend walking up to you: "Hey, my server crashed, but you always wrote something smart about my blog articles, so I thought I'd hand you an updated version on this card.".
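If you want to check whether your generated files are ready for that, a rough grep along these lines can help (example.org stands in for your own domain, public/ for your output directory, and it only catches plainly written absolute URLs):

    # List generated files that still link to the site's own domain
    # with an absolute URL; those would break on a thumb drive.
    grep -rlE 'https?://example\.org' public/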
Downsides of static-site generators
But of course no technological choice comes without downsides. In this case I see two major reasons why one may want something different, but I will also try to point out ways to keep the best of both worlds.
No bidirectional interactions
One huge downside is that an SSG cannot deal with anything received from the user, apart from delivering a page when it is requested. This means there is no way to store data like new posts that way. However, nothing prevents you from building this /specific/ task as some minimal PHP script or similar that handles the reception of new content. You may also use this limitation to find completely new ways of writing your posts: e.g. you can write them with your /favourite/ editor from wherever you like, without requiring a working uplink to your server, a situation you may find yourself in rather easily, e.g. when traveling with Deutsche Bahn (the German train corporation). You will have no issue pushing that completely ready-to-read content later, e.g. using git, if you like that (my fork of bashblog is doing that in a way I'd consider very smooth ;) ).
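As an illustration of the git route, a server-side post-receive hook along these lines (the paths and branch name are assumptions for the sketch, not my exact setup) can turn a push into a published post:

    #!/bin/bash
    # post-receive hook in the bare repository on the server:
    # check out the pushed sources, then let bashblog rebuild the HTML.
    GIT_WORK_TREE=/var/www/blog git checkout -f main
    cd /var/www/blog && ./bb.sh rebuild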
No personalization
However, I cannot give you such a nice option for when you need to personalize your content, e.g. for delivering optimized versions of your content based on the visitor's browser. If you need to rely on this, you will indeed have to deliver at least that browser-specific data dynamically.
Summary
Due to the nature of a blog (I write, many of you read; when you want to comment, you write an email or chat me up on XMPP; and I do not rely on weird edge-case CSS features) there is no reason for me to accept the complexity that would come with a web-based CMS, so instead I chose a static-site generator written in my favourite programming language (you may hate me now :D). If you want to read more on low-tech stuff, you may want to look at the low-tech magazine, especially their post on switching their site to a low-tech solution.
Tags: bashblog, simplicity, technology, web
