One of the earliest things that I did with RISC OS was to start to take things apart and put them back together... more to my liking. In some cases it was just removing banners that came up when applications started. In others it was a little more invasive. I remember that I changed Impression Style so that the default document that was used came from the currently logged on user's private area. Things like that just made stuff a little more usable for the family - and were fun and helped me learn stuff.


Application icon for !MakePatch

The first patch-helping tool that I wrote was intended to help me distribute patches. The !MakePatch tool took two binaries and produced a patch file containing the differences, in the format used by the Acorn !Patch tool. !Patch had been supplied with RISC OS 3, along with a number of patches that described fixes for well known applications.

Nobody really made any other patches, as far as I could tell - it seemed that most software developers chose to produce updated versions of the software rather than just binary patch their applications. The patch file format was actually quite sensible, and could check whether the file matched what was expected before being applied.

I distributed a few patches like this, but the format is not very convenient for maintenance - you cannot easily see what a patch is trying to do. What would be more useful would be a description that retained the meaning of the changes.


Application icon for !JFPatch

As many of these little patches needed code changes, I wrote !JFPatch. This came about mainly because I was fed up with having to write the same sections of code over and over: load the source program or module, set an offset in the code, assemble code (repeated for multiple patch points), save the application or module to a new place.

The free assemblers that I had seen were all focused on generating chunks of output that stood alone - taking a source assembler file and producing an object or target file. Nothing really did this 'patching' process. At the time, I don't think I had a !Zap version that let you apply patches directly on the disassembled files, so that wasn't an option, and in any case it would not allow me to create annotated descriptions of the changes.

!JFPatch automated the patching process. Rather than having lots of little ad hoc programs lying around, each performing a specific, small patch on an application, I had a defined format that took away all the common parts of the patching process and made the individual patches far simpler.

A simple patch script might be something like:

In      !RunImageOld
Out     !RunImage
Type    Absolute

@ &1048
    MOV    r0,r0
Example JFPatch code to NOP an instruction

This simplified the whole process to just the bare things that make up the patch itself. !JFPatch wasn't all that bright at that stage - it took the file, wrote out a temporary BASIC program containing the necessary code to load, patch and save the binary, and then ran it. I added DDEUtils support soon after, so that errors from the patching would pop up in the throwback window and you could jump to the failing lines quickly.

I created lots of little patches for existing things, most of which were pretty trivial, but every patch helped me to understand how the programs worked. In some cases it was removing the floppy disc based copy protection in order to make things work from the hard disc (or just so that I could copy them!). That was always a fun challenge in itself - authors had varying degrees of cleverness in their copy protection (although only a few matched up to the downright evil things that had been done in BBC tape copy protection), ranging from reading sectors of vital code from unused sectors on the disc through to strange disc formats and multiply encrypted executables.

At some point I began to write modules myself, initially just using the simple patch method with an empty source. I created a new part of the interpreter that could declare the properties of the module in a 'Define Module' section. Initially this provided only very simple entry points, but it was extended over time to add more and more useful things as I needed them.

A simple, not very useful, module might be:

In      -
Out     Beeper
Type    Module

Define Module
  Title     Beeper
  Version   1.00
  Service   service
End Module

        TEQ     r1,#6     ; Service_Error
        MOVNES  pc,lr
        STMFD   sp!, {lr}
        SWI     &107      ; VDU 7 - a beep
        LDMFD   sp!, {pc}^
Beep when we get an error

Which, if I've got the syntax right, would make every error that is reported also beep.

The module functionality extended bit by bit as I needed new functions. Command entries, vector claims, filter manager handling, WimpSWIve claims, and a few other exciting things all got added, making the parts that were common to many patches a lot simpler and less error prone.

I added library support, allowing some simple libraries to be reused - mostly for memory handling and string manipulation routines that were tedious to repeat every time they were needed.

!JFPatch was released on to the Arcade BBS, and later on my website when I went to University. A few people contributed bits to it, with some, like Jonathan Brady, rewriting sections that weren't written so well or just didn't work. There were a few examples supplied with it, too, including a most impressive functional Econet print server by Chris Johns.

Using C

After I started using C to write little programs, it became obvious that for most things it was easier than writing assembler all the time. Assembler was fine for veneers and stuff that was speed critical, but it sucked for trying to get algorithms right and trying stuff out. I changed the output format so that !JFPatch could support AOF output.

This wasn't quite as complex and daunting as it had initially seemed. It did require some new keywords and some more symbols thrown into the format to make it possible to know what was imported and exported. The symbol naming wasn't too bad - I believe I used pipe-delimited names to indicate both the export of a symbol and its import. I had seen sources for ObjAsm which used this type of notation, and had assumed (wrongly) that it implied the linkage type. By default, all symbols were still local, but exported and imported symbols would be remembered and included in the symbol table.

Linking with C code was still a bit of a fun thing, but at least it was possible. It meant that I could still write assembler bits in !JFPatch to link with C code if I wanted, or vice-versa, writing !JFPatch code which called down to C code. The network drivers for Doom were written in this way, with direct entry points that came to the assembler, and many of the low level network operations being in assembler, but some of the code being C.

Once I'd been through what the C compiler could do, and how it produced code you could interwork with, it became a little frustrating that few people ever looked beyond the basic invocation and tried to use it in the ways that were available - and then cited that you could not do so as a reason for not doing things. One day, I wrote the 'Super-quick guide to linking C with BASIC' to try to explain what you could do with the compiler if you were so inclined, together with a worked example of a genuine use case. This was mainly to show that you didn't have to be stuck in the mindset of 'I write in BASIC and assembler, and I don't want to have to start again in C' - because you don't need to, you can link the two. It may not be as nice as you might like, but that's mostly because BASIC has its own set of constraints on how you do things, and assembler is obviously free-form.

The Google Groups posting is archived, so hopefully it should be around for a bit. The example code later moved to my dumping area on my website.

I'm not sure if the guide achieved its goal, but it was an article I was very pleased with and I remember it got some favourable comments.

It was amusing, when I first saw the RISC OS source, to find that the JPEG handling code used some of the same methods to link its C code into modules - albeit slightly more esoteric, because of its history of using AAsm.

Later, when I needed to do something that was more complex than I cared to do in assembler, in an existing assembler module, I used the same sorts of methods to link the C and assembler together. Write the code in C, to run as an application, create your test code, make sure everything works, then put in a few macros and #ifdefs to make the code linkable with your assembler. Then you can produce either the module linked together with your C, or the test code in an application. Hopefully the only thing you should find wrong when running the module would be issues with integration, rather than with the implementation of the feature. If there are things wrong, or changes to the algorithm are needed, they can be made in the basic application before being put into the module proper.

The Wimp uses this for some of its handling of newer features - in particular, the tool ordering code was rewritten from scratch in C to allow the different styles of tool furniture which the modern WindowManager allows. This made testing out different formats a significantly easier task - not requiring rebuilds of the module, restarts of the system and repeated tests. The main test code produces (simple) renderings of the tools using plain line drawings, so that each format can be tried out and checked for the correct behaviour. There were no specific regression tests, but the code-test development cycle was hugely reduced.

The Filer uses C code linked to its assembler base for a number of things, like the thumbnail cache handling. The sprite cache used for Filer windows is managed so that it can extend, shrink, and find sprites quickly. It's all pretty simple stuff, but writing it in assembler would have been much more error prone and harder to test. For example, when closing windows was found to be slow, it would have been a real job to rework the assembler to function differently, whereas the 'many thumbnails in a window which is then closed' case was really easy to place into a test in the C code, after which the code could be updated to improve its speed.

Much of the pane manipulation in Filer is also C-based, making it simpler to try out different styles of panes and what happens as new panes are added, or old ones removed.

The Kernel uses the technique to parse mode strings, building up a suitable mode specifier - and it can decode a mode specifier back into a mode string as required. It's only simple parsing, but why faff with assembler when you can write things sanely?

Squiggly Pipes

When I went to University, I was introduced to the fun that was Unix based systems, and specifically to the command lines that they provide. RISC OS' command line is pretty simple, and this stems from its primarily single-process architecture. This design means that implementing Unix style pipes is a little bit of a challenge. Since there is no concept of multiple processes, there's no way for a process to yield to another, or for data to be passed from one to another whilst they run.

Lots of little ways are used for data transfer, many revolving around temporary files (if it's possible for the programs to run independently of one another), TaskWindows (if they need to be preempted), or Wimp tasks (if they are native or can be re-tasked). None of these are really that easy to use from the command line - you have to do everything yourself.

This, I reasoned, was a limitation. Clearly it should be possible to at least automate the first of these methods: temporary files passed between programs. This is the simplest use of pipes - the output from one task is supplied as the input to another, using redirection of standard input and output. For example, you might clean up a JPEG (if you didn't use jpegtran) with something like:

djpeg blah/jpg > <Wimp$ScrapDir>.temppnm
cjpeg < <Wimp$ScrapDir>.temppnm > new/jpg

However, you might not want to type that - under Unix like systems that might be:

djpeg blah/jpg | cjpeg > new/jpg

far simpler and clearer. Under RISC OS, though, redirection takes two forms. The OS-level form is used for controlling the input stream (SWI OS_ReadC and friends) and the output stream (SWI OS_WriteC and friends); it has no buffering other than that provided by the filesystem. The other form is the C run-time redirection, which is closer to that provided by Unix or Windows systems, and is buffered within the application. The two can be used simultaneously, although because the C run-time redirection won't ever output to the OS-level output stream, the OS-level output redirection will have no effect.

OS-level redirection was 'supported' by every tool that produced output or consumed input. C redirection was only supported by applications built on top of the C run-time, or which went out of their way to parse the format.

OS-level redirection took the form of a sequence '{ > output }' or '{ < input }'.

C redirection took the form of a sequence like Unix '> output' or '< input'.

SquigglyPipes was my attempt to make it possible to use pipes using either mechanism, with temporary files. The syntax I used was '{ | command-to-run }'. The name comes from the fact that the '{}' characters are (in my terminology) 'squigglies'.

The basic principle of the change is relatively simple. We trap the CLI vector to strip off any pipe redirections. If there are none, we pass the command through. If we find one, we remember the command to which things are being piped and strip that from the command line. The remaining command is examined, and if the command being run is a file on disc, we check how it starts. If it begins with an AIF header, it is (with a very high likelihood) a C tool, so C-like redirection can be used. If not, we use OS-like redirection. A temporary filename is generated and the output redirection for this tool is appended to the command. Environment handlers are set up to trap the usual suspects so that we get control back after the command completes.

The first command is then run. If it completes without error (that is, it does not exit through the error handler), we can move on to the second command. If there was an error, we stop, removing the temporary file.

To call the second command we check the tool type (whether it's AIF on disc or not) to decide the type of redirection required. The appropriate redirection is then appended to take input from the temporary file and the command is then run.

It's all a bit fun, and there were a few missteps along the way, but it meant that you could use such pipes on the command line, and within TaskWindows if necessary. I'm not certain how much use this was to people - I never really used it much for anything clever, although I remember that it was common for me to use something like '*Modules { | grep Filer }' to locate a module I was interested in (usually in order to kill it off with prejudice).


It was quite easy to throw together little patches to do simple things. One of the things that came up regularly was the problem of writing a program that just wouldn't quit. In later versions of RISC OS, you could use Alt-Break to kill tasks off, but prior to that there was no way to prevent the application from locking up the machine. So I wrote a module that would sit on KeyV and on a key combination could perform different killing actions.

The module could generate an error - on modern RISC OS systems it would be trapped and reported as a background error, because it was triggered from interrupts - or terminate the task with SWI OS_Exit. Sometimes the application is running just fine, but you want to perform a few other operations, so I also added the option to drop to a command line prompt.

It was quite a useful module to me during my development of things, and when I released it on Arcade it seemed to be quite popular with people. I think it's one of the more useful earlier modules I wrote, even if it became redundant with RISC OS 3.5.


After TaskKiller, I wrote a key-triggered screen capture module called ScreenGrab - imagination wasn't a strong point in naming things back then. Using the same code that I'd used in TaskKiller, I created a simple module that would trap the Ctrl and Shift keys being pressed together. When they were detected, it would save a screenshot to a directory in !Scrap, with the filename suffixed by a number taken from 'ScreenDump%Number'. At the time I thought it sensible to separate numbers with a '%'. I'm not sure that this was ever defined - I'm sure I remember something else using the same separation, but can't remember what. The variable would then be incremented, so that the next file would go elsewhere.

Once I'd written the !AreaFiler, which gave different users a different configuration and storage area, I updated the ScreenGrab code slightly so that it would write to the PostBox area for the user, and if that didn't exist it would revert back to writing to !Scrap. If it did succeed, though, it would also update the marker file that indicated that post had arrived for the user. This meant that the next time they logged in, they'd see their screen grabs. I don't think it checked the file whilst you were logged in.

There were a bunch of other screen grabbing tools about at the time, of varying complexity and customisability. I don't remember the exact reasons, but I expect the main reason for writing it was 'because I could', since I understood how the KeyV entry worked. That and the fact that I could customise it.

Sadly I wasn't up to a lot of complicated bits in assembler back then, and it would happily overwrite files that already existed. If you rebooted and then took a screen grab, it would write to the file Screen1, overwriting whatever was there before. A little BASIC tool was run during the boot sequence which would enumerate the files in the scrap directory (or post box) and set the system variable to the next number. Whilst I still think that's not right - it should all have been in the module - the split between assembler for the bit that could only be in assembler and BASIC for the simple processing is quite sensible. Which isn't to say I wouldn't put that code in the module these days - just that it makes sense that way around.


The ExtraKeys module used the same methods - it was an easy way to trigger things - to add special key combinations. The idea was to make it easier to insert curly quotes into text. Wow, back in 1995 - it's such a long time ago! Like TaskKiller, it would track the keys that were pressed, and when a certain combination was used it would perform a custom operation; in this case it inserted keys into the keyboard buffer.

Alt and the ' or " keys would produce double or single curly quotes, alternating with each press of the keys. This made it very easy to enter them without having to remember any special sequences. There were other sequences as well, like Alt-D or Alt-T for the date and time, or Alt-. for an ellipsis character.

It wasn't a particularly clever module, and looking at how it did things, I hadn't quite understood some of the issues with writing code to be run in IRQ mode, but I at least knew enough to use callbacks properly.


Back when I was on FidoNet - when electronic communication was cheap but took days to propagate to other countries - I was asked by a Robin Abecasis to write a module which would remember the errors which had occurred on his BBS whilst he was away. There was a module called NoErrors which would click on any error boxes that appeared - this meant that the BBS would keep running when something would otherwise have locked the machine up, but you'd never know what problems had occurred.

RecErrors - or RecordErrors, to give it its proper name - was the module I produced. Claiming SWI Wimp_ReportError - it's WimpSWIve again - it would take a copy of the message that was to be displayed, write it to a file, along with the application that reported it and the date and time, and then show the error itself. The last version also remembered the details in system variables so that you could find them if you just happened to have forgotten what was on the screen and didn't want to go digging in the logs.

Other people contributed to the module - I don't have a record of who, sadly, but there are notes that some of the code was written by others. The module itself has a comment in the *Help message - "This module is dedicated to the hard working people of Acorn's Workstation division".

The module source was conditional, too, so that it could be built to include or omit the time and date, and to record the 'area' name from 'Area$Name'. I had set up the family A5000 such that each person could log on as themselves and their files were kept apart. The !AreaFiler application managed this so that their desktop had just their things on it, and applications would use custom settings for them (usually by patching the application to write to a different file to those they would usually use). For us, the area name was useful in finding out who was using the machine at the time, when there was a problem.

When working on Select, I wanted to include RecErrors in some form as it was really useful - but I didn't want to use this old code, as assembler modules were to be avoided, and !JFPatch wasn't acceptable (to me) for inclusion in the build system. RecordErrors2 (the name is the same but the version was increased) was very similar, but instead of using WimpSWIve, it traps Service_WimpErrorStarting (which makes a lot more sense, but I don't think existed on older systems).

Instead of recording to a file itself, it uses SysLog, which means that it not only benefits from being a more centralised log, but can also be sent across the network. Really it's a very simple module, but as there are two versions that do similar things, maybe it's interesting to compare them?

Version              Language    Source size   Module size
1.07 (08 Jan 2003)   Assembler   9.2K          1.7K (not including WimpSWIve)
1.15 (20 Jul 2003)   C           20K           7.9K

Nope... not really that interesting after all.


FileCore was limited in the size of discs that it could handle. As larger devices became available, FileCore's use of the space (particularly for small files) became worse, meaning that with newer discs the amount of wastage due to the filesystem itself was greater. As FileCore didn't itself support partitioning, there were a few solutions available to provide partitions.

Most of the hardware suppliers who provided FileCore based file systems had some form of partitioning system present, which allowed the discs to be split up. If you just used the internal interface, however, this would not be an option, as ADFS did not support partitions.

I had a larger disc and wanted to try out partitioning it so that the space could be used a little more efficiently. JADFS was my cunningly named system to provide such partitions (named along the same lines as my enhanced version of ADFS from the BBC days).

It was a very simple module, with hard-coded boundaries for the partitions. The module provided a configurable number of discs, which appeared as partitions on the same physical disc as ADFS::4. The ADFS disc could be used for the first part of the disc, with the JADFS partitions occupying subsequent areas.

When called by FileCore, the JADFS module would merely add the base of the partition to the disc offset it was given and then pass the call on to the ADFS module. I'm pretty sure there wasn't much in the way of bounds checking. Because of the problems of leaving ADFS to access the first partition whilst the JADFS module accessed the subsequent partitions on the same disc, I didn't get very far with it. The module would regularly crash with FileCore in use, or would corrupt one of the transfers.

The problem was the background transfers using scatter lists, which tended to interfere with things - ADFS didn't expect to receive operations whilst a background transfer was in progress, as normally FileCore would just append the transfer to the end of the existing background transfer list. I gave up, but passed the information to Chris Johns, who had written the same thing - only better, as his worked. I think he avoided the problem by making BDFS provide all the partitions.

In any case, JADFS wasn't particularly useful. Maybe if I looked at it now it would seem a lot easier, but since Chris had managed to produce a perfectly good version, there wasn't any need for me to do anything. Plus he'd done a fancy desktop interface to set the partition sizes, which was rather neat!