Archive for September, 2008

A Journey

20th September 2008

I’ve been a Linux user, on and off, for some eight years now. My system has been single-boot Linux for between four and five years. And I contribute to one of the biggest user-visible projects in the free software world, KDE. So, how did I get here?

To start at the beginning, the first computer I remember was my mother’s BBC Micro B. My parents only actually threw that out about a year ago. It was like a large keyboard, with a flat block at the back that contained the actual processor and so forth. And what was essentially a small TV screen for a monitor. It had a word processor (which was my mum’s use for it), BBC BASIC (my first, disappointing and short-lived, foray into programming) and an amazing collection of games, including one called Wizalong that involved two witches on a see-saw. All this on a few giant, low-capacity floppy discs (5 1/4″, I believe).

From there we went straight to an IBM 486 with Windows 3.1. It was a whole 33MHz! Here we got the magic new 3 1/2″ floppy drive with a whole 1.44MB of storage. It had pictures, icons and WYSIWYG office applications. But no Wizalong. So, to my 8-year-old (or thereabouts) self, the box it came in was far more entertaining.

The last shop/factory-built PC we got was a Pentium I 166MHz machine with Windows 95. A fancy new interface, and even some games (I think my favourite that came with the computer was Gizmos and Gadgets). I still remember that the odd game involved exiting to DOS in order to run it. And running just about any game involved, for the best experience, using the task manager to terminate everything but explorer.exe and rundll32.exe.

Now, there were a few forays into other machines and systems here. I bought a Commodore 64 for £5 at a sale, which had one of the best games I’ve ever played on it: Paper Boy. But my interest in that machine soon waned, helped along by the tapes only loading about one time in four.

School had two main types of machine. The computer club used RM Nimbuses, hooked up to a Winchester server via coaxial cable. These had such classic games (you don’t think we did anything else on them, did you?) as Sopwith Camel, Star Wars and Tea Shop (which, if you set the price of a cup of tea to over about 50p, would complain that “not even British Rail charge that much!”).

IT lessons were on Apple Mac LC IIIs. These were essentially typing lessons in Mavis Beacon, and the occasional use of ClarisWorks (one of the best office suites I’ve used, except for the lack of tables). These were networked via AppleTalk in order to share printers. This meant, of course, that the curious among us could poke around other people’s computers, although there wasn’t much to do on them.

There were other odd machines around. Design Technology had an Archimedes. The library had a couple of 386 machines and a PowerPC, and later got an iMac. When the new PC network came in (with Windows 2000, I think), the LC IIIs and 386s went out the window (or, rather, on the skip), although the others lived on. A few of the LC III machines even survived the cull, making their way into technology classrooms. After that, of course, IT lessons turned into internet fests (a twin 128K ISDN line serving the entire school), although we were supposed to be learning the joys (!) of Microsoft Office. Ironically, the new network manager for the Windows network was a complete Mac geek.

Meanwhile, at home, my dad had built a machine himself, with me watching, and after that I was hooked and took over playing with machines. I’ve built every computer for me and my family since (with the exception of my laptop). Those were the days of memory at £1 per megabyte (dropping quickly to 50p), computer fairs, computer magazines and dubiously acquired software. We went through a whirlwind of Windows 98 SE (crap), Windows ME (flashier crap), Windows 2000 (half-decent), Windows XP (two-minute log-on times) and back to Windows 2000.

My brief obsession with computer magazines led to a stack of freeware on their cover CDs (this was in the days when we had AOL dialup with 19.2, 33.6 and finally 56K modems, so downloading this stuff wasn’t reasonable). One of those CDs came with Corel Linux, which I briefly installed and didn’t get on with. But this whole Linux thing had me intrigued. Along came Red Hat (5? 6?) on another magazine cover, and I tried that. No dice. It was complicated, and not in a fun way.

Then I found Linux From Scratch. I was at 6th form at the time, and I downloaded the necessary packages in Computing (Delphi and Object Pascal ftw), loaded them onto a memory stick and took them home (where we were still on dialup, although no longer with AOL). I was hooked. My curiosity about how things work is pretty much insatiable, and here I could build up an entire operating system myself. What was not to like?

Well, the maintenance, for one thing. I was, and still am, a bleeding-edge junkie, and it was hard work keeping up with new versions. I tried DIY-Linux, but that was still a lot of work. Eventually I settled on Gentoo. Once at university (with a second-hand IBM Thinkpad with 128MB of RAM and a 900MHz processor, but more importantly a 10Mbit link to the internet), I would go out to lectures leaving emerge running. And come back to find some part of the build had broken because of some problem that apparently only occurred with my particular choice of USE flags (which had taken me most of a day to decide on).

In the holidays, when I had two machines (my laptop and my desktop), I tried OpenBSD, NetBSD and FreeBSD. I liked the philosophy of OpenBSD (yeah, I’m a bit of a security nut), but GNU did a damn good job on their tools, and the BSD ones had little quirks that I didn’t like.

I suppose I should point out that at this time I was a console junkie. Mutt, MPD and ELinks were my applications of choice. I would occasionally boot up X with WindowMaker if I needed something graphical, like a website with pictures. But the console was my realm. By this time I’d discovered the magic of 6 VTs (it took me a long time to learn that CTRL+ALT+F[1-6] did useful things). GPM did what I wanted with copy and paste. I even dallied with screen, although it struggled with a couple of programs like ELinks.

By second year, I had ditched the laptop (except as a useful spare computer) and taken my desktop to university with a shiny new LCD monitor. I decided that living on VTs was too much, and instead wanted something graphical. But what to choose? My minimalist side said XFCE, or Fluxbox, or just stick with WindowMaker. But I didn’t just want something for me. I wanted to impress people. I had the Linux bug, and even if I couldn’t outright convert people, I didn’t want them to dismiss it. So I wanted something pretty. But something that would satisfy the control freak side of me. To my mind, only KDE fit that bill. So on it went.

This led me to a world of graphical interfaces. A world containing the best file-browser image gallery plugin (Gwenview’s slideshow view). A world containing Amarok (well, after I’d weaned myself off MPD) and KMail (as powerful as Mutt, but nicer looking). A world of a slightly tacky, plasticky look-and-feel. Oh, well, you can’t have everything, right?

Gentoo didn’t satisfy me, though. Those compile times. The build failures. What was I to do about it? Time for some more distro-shopping.

Ubuntu was the new kid on the block. It wasn’t bare-bones enough for me. I want to fiddle. If I wanted to point-and-click, I’d get a Mac. I want to poke things in /etc, and not find unrelated stuff breaking because of some magic system-configuration thingy. Hell, that’s why we all hate the Windows registry, right? The same went for most other distros. OpenSUSE is very nice (I’ve put it on my parents’ backup computer), but not my cup of tea.

But I found something amazing. Arch. It’s basically my perfect distribution. No waiting around for things to compile (well, unless you want something relatively obscure). Bare-bones, sort-yourself-out configuration, although most things work out of the box. I find there’s something slightly distasteful about BSD init scripts (although they’re better than System V ones), but that’s a rant for another time. Oh, and Arch has a fantastically simple package manager and build system. And my discovery of yaourt has made package administration doubly easy.

So, I have my perfect distro, and a damn good desktop. What next? Well, I decided I wanted to hack on KDE. I’d picked up C from poking at various things over the years, but KDE’s architecture and object-orientated design using a real OO language (one of my Gnomie friends asked “who doesn’t love GObject?” Well, wjt, that would be me) was one of the reasons I chose it in the first place. Programs should be beautiful, right? C++ may not be beautiful, but it beats C any day of the week.

I chose Christmas 2006 to start delving into the world of KDE developers, beginning with the dot and Aaron Seigo’s blog. I found the planet soon after. KDE 4 development was in full swing, so I learned C++ and a bit of Qt 4, and dove into the EBN, wading through the issues it picked up with KDElibs code. Seeing the numbers go down and knowing you did it is great, and it’s a nice way in for a newcomer.

Shortly after, aseigo’s Plasma project got under way, so I jumped in there. And recently, I started doing a bit of Amarok bugfixing, having been led there by my attempts to get my Now Playing Plasma data engine to work with it.

So, that’s a bit of personal history about how I got to where I am. I’m hooked on UNIX, hooked on Linux, hooked on Free Software. I’m hooked on KDE, and not letting go any time soon.

25 Years of GNU

6th September 2008

Stephen Fry wishes GNU a happy 25th birthday.

“GNU and Linux are the twin pillars of the free software community”.  I’m sure the BSD folk would have something to say about that…

Akademy 2008

6th September 2008

The videos from Akademy 2008 are online, I’ve just discovered.

Managed Environments and ASP.NET

4th September 2008

I started a new job on Monday.  This job involves working with ASP.NET and C#.  Actually, I could choose to use VB, but that way lies madness.

I’m not going to comment on my work or what I do, other than to say that I really like the people and the ethos of the company.  But ASP.NET and C# are technologies that are worth commenting on.

There are many good things to be said about both technologies.  ASP.NET does pretty well at the separation of code and design: you create a pseudo-html (actually a superset of XHTML) file to describe the page, including elements that can be manipulated programmatically such as an <asp:Button> tag, and then write a separate file for the code that does the magic.  Creating forms and reading input from them is a doddle.  As is doing different things depending on which button was pressed, or doing magic when a control is changed.  The backend code is object-orientated, and comes with such useful features as events (essentially a signal/slot mechanism).
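
To make that concrete, here’s a minimal sketch (the page, control and handler names are invented for illustration, not taken from any real project).  The markup declares the controls:

```aspx
<%@ Page Language="C#" CodeFile="Greeting.aspx.cs" Inherits="Greeting" %>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
  <form id="MainForm" runat="server">
    <asp:TextBox ID="NameBox" runat="server" />
    <asp:Button ID="GreetButton" runat="server" Text="Greet"
                OnClick="GreetButton_Click" />
    <asp:Label ID="GreetingLabel" runat="server" />
  </form>
</body>
</html>
```

and the code-behind handles the button’s click event in plain C#:

```csharp
using System;
using System.Web.UI;

public partial class Greeting : Page
{
    // The framework raises this event when GreetButton posts back;
    // no manual form parsing or request plumbing required.
    protected void GreetButton_Click(object sender, EventArgs e)
    {
        GreetingLabel.Text = "Hello, " + NameBox.Text + "!";
    }
}
```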

In short: PHP, eat your heart out.

Of course, it’s not all that straightforward.  PHP wins hands down on text processing.  The overhead of ASP.NET is not to be sniffed at.  And writing something very simple can be a faff, because you have to do it “the right way”.

So, that’s ASP.NET.  How about C#?  In particular, how does it compare to C++?

Well, it removes some of C++’s annoying quirks.  C++ has many of these.  A complete lack of consistency about whether definitions end with a semicolon or not is one; another is the “explicit” keyword, which you almost always want.  Exceptions come as standard, as does reflection (although it doesn’t seem as easy as Java’s reflection system).  Useful features like the foreach construct are welcome, as are the “out” and “ref” parameter keywords (thank you, Pascal).
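
A quick, self-contained sketch of those last two conveniences (the method and variable names are mine, purely for illustration):

```csharp
using System;
using System.Collections.Generic;

class Conveniences
{
    // "out" tells both the compiler and the reader that half is
    // written here, never read; the compiler insists it is assigned
    // before the method returns.
    static bool TryHalve(int value, out int half)
    {
        half = value / 2;
        return value % 2 == 0;
    }

    static void Main()
    {
        List<int> numbers = new List<int> { 4, 7, 10 };

        // foreach walks anything enumerable -- no index bookkeeping.
        foreach (int n in numbers)
        {
            int half;
            if (TryHalve(n, out half))
                Console.WriteLine(n + " halves cleanly to " + half);
        }
    }
}
```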

And, of course, C# is a managed environment, with everything that entails.  Garbage collection.  Having to try really hard to get invalid pointers or references.  Unpredictable destructors.

Wait.  What was that last one?

Well, say you’re working with a file.  Or a database connection.  Or any other such resource that requires tearing down in some way.  In C, you had to do it manually.  In C++, we did away with that with the advent of destructors.  Create your file object on the stack, then let it go out of scope.  Destructor gets called, file gets closed, hey presto.

Not so with managed environments like C#.  Objects don’t live on the stack; they all go on the heap.  If you create a file object, it stays around until no-one references it any more, and then (and this is the important bit) at some unspecified later time the garbage collector eats it.  Notice the unspecified bit.  If you depend on the destructor, you don’t know when your file will get closed, and therefore when you’ll be able to open it again.  So you’re back to doing it manually.
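
Here’s a contrived sketch of the trap (the file name is invented, and the exact exception depends on timing and sharing modes, so treat it as illustrative rather than gospel):

```csharp
using System.IO;

class UnpredictableCleanup
{
    static void Open()
    {
        // No Close() -- we lean on the garbage collector, whose
        // finalizer will release the OS handle at some unspecified
        // later time.
        FileStream f = new FileStream("data.txt", FileMode.OpenOrCreate);
        f.WriteByte(42);
    }

    static void Main()
    {
        Open();
        // The FileStream above is unreachable, but until the GC gets
        // around to finalizing it the handle is still held, so this
        // re-open will typically throw an IOException.  That is the
        // "when will my file actually close?" problem.
        FileStream g = new FileStream("data.txt", FileMode.Open);
        g.Close();
    }
}
```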

There’s really no getting around this.  If variable scope guarantees destruction, you can’t do things like returning objects from functions.  And managed environments are supposed to get you away from petty things like keeping track of memory.  C#’s solution is to introduce a using keyword (well, an overload, since it’s already used like C++’s using keyword) which gives a fixed scope to such objects.  At the end of that scope, the resources they reference are cleaned up (but the now-useless object will hang around until some time after it’s not referenced any more).
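
The same sketch, fixed with the using statement (file name again invented):

```csharp
using System;
using System.IO;

class DeterministicCleanup
{
    static void Main()
    {
        // "using" pins the writer to this scope: Dispose() runs at the
        // closing brace (even if an exception is thrown), releasing
        // the file handle right there.
        using (StreamWriter writer = new StreamWriter("data.txt"))
        {
            writer.WriteLine("flushed and closed at the brace");
        }
        // The writer object itself lingers on the heap until the GC
        // collects it, but the resource it held is already released,
        // so an immediate re-open is safe.
        using (StreamReader reader = new StreamReader("data.txt"))
        {
            Console.WriteLine(reader.ReadLine());
        }
    }
}
```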

That’s one of the irritating quirks of C#.  Another is the restriction on the switch() statement that a case which executes some code can’t then fall through implicitly to the next case.  The reasoning is obvious – many a subtle bug has arisen from people doing this unintentionally.  But it’s annoying when you do intend it.
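
To be fair, C# does provide a sanctioned escape hatch for deliberate fall-through: the goto case statement.  A sketch:

```csharp
using System;

class FallThrough
{
    static void Describe(int n)
    {
        switch (n)
        {
            case 0:
                Console.WriteLine("zero");
                // Falling through implicitly here is a compile error,
                // but an explicit jump is allowed:
                goto case 1;
            case 1:
                Console.WriteLine("small");
                break;
            default:
                Console.WriteLine("big");
                break;
        }
    }

    static void Main()
    {
        Describe(0);  // prints "zero" then "small"
    }
}
```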

These are just the issues I’ve found in four days, mind.  No doubt I’ll have other gripes later.  Lack of determinism is one of my pet peeves, though.  That’s the mathematician in me, I guess.

One of these days I must get around to learning Ada.  I suspect I’ll like that.

