The patches I created initially started by changing things in quite simple ways. As I progressed and understood more about the system, I became more confident in changing things significantly. Much of the time I think I knew what the danger was, and as I was only doing things for fun, it didn't worry me that things might go wrong and I might have to abandon the work.

In some respects the patches were developed by tackling the problems without thinking about how badly things might go - a thought that became much more prevalent in later years. Not that I was all that cautious about things during the later developments, but not thinking about the problems you'll encounter is very liberating.

DDA

David Thomas had a RiscPC long before I did, and had started writing some clever things which rendered JPEGs using the new SpriteExtend module. There were other tools which did so too, like !EasyView by Thomas Olsson. The early versions of SpriteExtend only had greyscale output of JPEGs in 256 colour modes, but for a humble A5000 user like myself this would be neat. Only, despite having been emailed a copy of the module, it wouldn't run.

It required allocatable Dynamic Areas through the SWI OS_DynamicArea call - not supported on RISC OS 3.1 on the A5000. Wallow in sorrow? Buy a new machine? Patch it to not use SWI OS_DynamicArea? No. None of these were considered (later I discovered that Niall Douglas had done a simple set of patches to change the Dynamic Area calls to use the RMA, and this worked fine - ah, how easy life would have been if I'd known that). Instead I decided that the correct thing to do would be to create a replacement for SWI OS_DynamicArea on RISC OS 3.1 - Dummy Dynamic Areas.

David Thomas supplied me with a copy of the SWI details from PRM 5a which I used to implement a basic version of the Dynamic Area support. Changing the memory management was not a simple task. Anyone who ever saw the error '*** CAM map corrupt ***' has probably been either randomly prodding at memory they shouldn't be, or trying to do some funky memory management. This error means 'I tried to remap a page, but the page wasn't in the table where I thought it was'. In practice, it invariably means 'reboot now, I'm going to die'.

Because there were no real memory management calls in RISC OS 3.1 - AMB (the management of application memory in the Kernel, rather than the WindowManager) didn't even exist then - finding out what memory was safe to use was tricky. Initially I used the SparkFS solution of subverting the System Sprite Area, as this is rarely used and can be made to store other data. However, SparkFS uses it - so remapping sections of it would not go down well. Even without SparkFS, the area is controlled by the OS, and on being increased from 0 bytes it would be updated to have a valid sprite header, which might make things difficult.

Instead, I used the RAM disc as the way to allocate memory. The logical address space that the RAM disc occupied could be used to hold the dynamic areas, and we could extend or reduce the size of that address space appropriately.

Having found a way to get and release pages from the OS, and a place to store them, I then had the issue that the OS didn't want to actually do it whilst the pages weren't mapped where it expected. In normal use, the 'RAM disc' would contain a fragmented logical address space to which dynamic areas had been allocated. But to the Kernel, it had to look like a single block of pages from the start of the area. Attempting to change the area without this being the case would result in the '*** CAM map corrupt ***' message.

Long before this, I'd had Repton Infinity bought for me on the BBC. Rather than the disc version, I'd got the tape version. This upset me, but I decreed 'I would make it work'. The tape version worked quite differently to the disc version, because there was much less memory available - the disc filing system took up more space than the tape. My solution was to have all the bits from the tape copied to disc, and use a sideways RAM patch to replace all the filesystem operations with calls which paged in enough of the DFS data to do the file operations. If the operation spanned the DFS private area, it would be split into parts, with the overlapped section loaded into sideways RAM then copied down after the paged DFS area had been paged out.

It was all pretty reliable - disc load times notwithstanding - and certainly better than working from tape. The actual disc version was slower in real use (I found out later) because it didn't also cache the last application loaded in the sideways RAM. I was quite pleased.

Anyhow, the same technique was used in DDA - when new pages were required, the memory would be restored to the shape that the OS expected, increased memory allocations requested, and then put back to the Dynamic Area form. Reducing the size of a Dynamic Area was similar, but the page that wasn't required was placed at the end of the memory area, to be taken away.
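
In C-ish terms, the dance looks something like the sketch below. This is purely conceptual bookkeeping with invented names: the 'arena' is just an array standing in for the RAM disc's logical address space, there are no real page mappings, and the real module made Kernel calls where the comments say so.

/* Conceptual sketch of the DDA grow dance - not the module's code. */
#include <stdio.h>
#include <string.h>

#define ARENA_PAGES 128                 /* e.g. 4MB of 32KB pages */

static int owner[ARENA_PAGES];          /* pseudo-area owning each page slot (0 = unused) */
static int used;                        /* pages currently claimed from the 'OS' */

/* Implode: pack every claimed page into one contiguous run from slot 0 -
 * the only shape the Kernel will accept the RAM disc being in when its
 * size is changed. */
static void implode(int saved[ARENA_PAGES])
{
    int packed[ARENA_PAGES] = {0};
    int w = 0;
    memcpy(saved, owner, sizeof(owner));
    for (int i = 0; i < ARENA_PAGES; i++)
        if (owner[i]) packed[w++] = owner[i];
    memcpy(owner, packed, sizeof(owner));
}

/* Explode: restore the fragmented per-area layout, handing any newly
 * claimed pages to the area that asked for them. */
static void explode(const int saved[ARENA_PAGES], int area, int newpages)
{
    memcpy(owner, saved, sizeof(owner));
    for (int i = 0; i < ARENA_PAGES && newpages > 0; i++)
        if (!owner[i]) { owner[i] = area; newpages--; }
}

static int grow_area(int area, int npages)
{
    int saved[ARENA_PAGES];
    if (used + npages > ARENA_PAGES) return -1;     /* the 'OS' refused */
    implode(saved);
    used += npages;     /* real module: change the RAM disc size with the Kernel here */
    explode(saved, area, npages);
    return 0;
}

int main(void)
{
    grow_area(1, 3);
    grow_area(2, 2);
    grow_area(1, 1);
    for (int i = 0; i < 8; i++) printf("%d ", owner[i]);   /* 1 1 1 2 2 1 0 0 */
    printf("\n");
    return 0;
}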

DDA supported 64 dynamic areas, which was fine since with 32K pages the amount of memory that you could actually allocate was pretty small on a 4MB machine. You could create, delete, resize and enumerate the areas - no renumbering was supported, but that was only used for incredibly special cases, and I don't think I ever encountered any good use of it.

Because DDA replaced the SWI vector, it handled all the SWIs which needed to know about the operations - SWI OS_DynamicArea, SWI OS_ChangeDynamicArea, SWI OS_ReadDynamicArea, SWI OS_SWINumberFromString, SWI OS_SWINumberToString, and SWI OS_ValidateAddress. Additionally it also supported SWI OS_PlatformFeatures and SWI OS_SynchroniseCodeAreas, as these were often used to check the capabilities of the machine before using Dynamic Areas. SWI OS_CallASWI and SWI OS_CallASWIR12 also needed emulating so that they could call the new SWIs - and the CallASWI module did not play nicely with my replacement SWI vector.

SWI OS_ValidateAddress might have been implemented by the service, except that the service is issued after the internal areas have been processed - meaning that the base area that the OS thought was the RAM disc would appear to be valid, when it might not actually be.

The error handling for the module was very poor - which was fine with me because this was intentionally hacky, just to make something that I could use with SpriteExtend. And it was never particularly fast, because it was exploding (making it look like Dynamic Areas) and imploding (making it look like the RAM disc) the memory map for every operation that changed the sizes. For SpriteExtend, that wasn't so often. Oh, and it didn't support any clever stuff like page lists, would forcibly override Dynamic Area numbers and base addresses, and completely ignored the area flags.

Retrofitting OS-wide memory allocation support, yeah, that's fun <laugh>.

JPEGSprite

Once I'd got JPEGs rendering through SpriteExtend, there were all sorts of fun things that you could do - but only if the applications supported the new calls. Some did, but many checked the Operating System version rather than the SpriteExtend version to know whether they had the right functionality. Not usually hard to change, but still a little more hassle. Wouldn't it be nicer if we could use the JPEGs directly, like first class citizens?

Some versions of the documentation of the extensions of sprite mode words in RISC OS 3.5 included an extra entry - sprite type 9 'JPEG'. Although nothing else was said about this, it provoked ideas <smile>. Later I discovered that BMPs can have JPEGs embedded in them - not sure how that works to be honest, because I've never encountered one in the wild. In any case, putting JPEG data inside a sprite isn't all that hard. Getting it to render... well that's a bit more fun. Especially if you're running on RISC OS 3.1, which hasn't really got a lot of the functionality that you might otherwise need. For a start, it doesn't even understand the new sprite types.

This might not seem like a big deal, but the sprite type is passed around a few places - ColourTrans operations are given the sprite type as the mode word in places, there are the SWI OS_ReadModeVariable calls which have to return sensible details, and then there's SWI OS_CheckModeValid as well.
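
For reference, the sprite type lives in the top five bits of the new-format mode word - at least as I recall the layout from PRM 5a, so treat the exact bit positions here as my recollection rather than gospel:

#include <stdio.h>

/* Decode a RISC OS 3.5+ sprite mode word.  Old-format mode words are just
 * a small mode number; new-format ones have bit 0 set, dpi fields in the
 * middle, and the sprite type in bits 27-31 (9 = 'JPEG'). */
static int sprite_type(unsigned int mode)
{
    if ((mode & 1) == 0 || mode < 256)
        return -1;                  /* old-style mode number, no sprite type */
    return (mode >> 27) & 0x1f;
}

int main(void)
{
    /* A hypothetical JPEG mode word: type 9, 90x90 dpi. */
    unsigned int mode = (9u << 27) | (90u << 14) | (90u << 1) | 1u;
    printf("sprite type = %d\n", sprite_type(mode));
    return 0;
}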

JPEGSprite addressed these issues. The SWIs were augmented with checks for the special mode types so that they were recognised. If the module was run on RISC OS > 3.1, it wouldn't bother to replace the SWIs - the OS would be able to handle it. Not sure that I ever tested that though! The ColourV vector was claimed when calls were made which used the special JPEG mode sprite type.

SpriteV claims meant that the operations to read details about a sprite, and to operate on it, were special-cased. Plotting the sprite would use the JPEG plot operations, with whatever scale factors were requested. Many of the more complex operations were just ignored. Diverting output to sprite would instead divert output to a sprite 1 row by 1 line. Plotting Transformed was handled a little differently because JPEGs cannot be plotted by source coordinates, so those were faulted. The use of coordinate blocks was only supported for linear scaling. Otherwise the SWI JPEG_PlotTransformed call was made.
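
Deciding whether a destination coordinate block is a plain linear scaling boils down to checking that the four corners form an axis-aligned rectangle. A sketch of that check follows - the block layout, the point ordering and the crude integer scale factors are my simplifications, not what the real code handed to the JPEG plot calls:

#include <stdio.h>

typedef struct { int x, y; } point;

/* Return 1 if the four destination corners describe an axis-aligned
 * rectangle (a linear scaling), filling in rough scale factors; return 0
 * for anything rotated or sheared, which would have to be faulted. */
static int linear_scale(const point dst[4], int src_w, int src_h,
                        int *mult_x, int *mult_y)
{
    int minx = dst[0].x, maxx = dst[0].x, miny = dst[0].y, maxy = dst[0].y;
    for (int i = 1; i < 4; i++) {
        if (dst[i].x < minx) minx = dst[i].x;
        if (dst[i].x > maxx) maxx = dst[i].x;
        if (dst[i].y < miny) miny = dst[i].y;
        if (dst[i].y > maxy) maxy = dst[i].y;
    }
    /* Axis-aligned iff every corner sits on an extreme in both axes. */
    for (int i = 0; i < 4; i++)
        if ((dst[i].x != minx && dst[i].x != maxx) ||
            (dst[i].y != miny && dst[i].y != maxy))
            return 0;
    *mult_x = (maxx - minx) / src_w;    /* crude integer scale factors */
    *mult_y = (maxy - miny) / src_h;
    return 1;
}

int main(void)
{
    point rect[4] = { {0,0}, {640,0}, {640,480}, {0,480} };
    point skew[4] = { {0,0}, {640,32}, {640,512}, {0,480} };
    int mx, my;
    printf("rect: %d\n", linear_scale(rect, 320, 240, &mx, &my));   /* 1 */
    printf("skew: %d\n", linear_scale(skew, 320, 240, &mx, &my));   /* 0 */
    return 0;
}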

Creating such JPEG sprites was a simple matter of dropping a JPEG after a sprite header and munging the header block to have a sane width and height for the JPEG. I had a little tool that could make them, which I supplied with the module. The module worked quite nicely - you could drop a JPEG sprite into Draw or Paint and they'd work like Sprites - Draw on RISC OS 3.1 didn't support JPEG objects, for obvious reasons. I even had special IconSprites for the tool which were JPEG sprites, so the Filer would render the JPEGs directly - not exactly great. Oh, and obviously Pinboard could load a background JPEG Sprite.
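
The tool itself amounted to very little. The sketch below builds such a file from scratch - the sprite file layout (no palette, no mask) is from my memory of the format, and the width/height fields are fudged, so treat the offsets and field meanings as assumptions rather than a reference:

/* makejpegsprite: wrap a JPEG in a sprite header claiming sprite type 9. */
#include <stdio.h>
#include <stdlib.h>

static void put32(FILE *f, unsigned int v)
{
    for (int i = 0; i < 4; i++) fputc((v >> (i * 8)) & 0xff, f);
}

int main(int argc, char **argv)
{
    if (argc != 5) {
        fprintf(stderr, "Syntax: makejpegsprite <jpeg> <sprite> <width> <height>\n");
        return 1;
    }
    FILE *in = fopen(argv[1], "rb");
    if (!in) return 1;
    fseek(in, 0, SEEK_END);
    long jlen = ftell(in);
    fseek(in, 0, SEEK_SET);
    unsigned char *jpeg = malloc(jlen);
    if (!jpeg || fread(jpeg, 1, (size_t)jlen, in) != (size_t)jlen) return 1;
    fclose(in);

    int width = atoi(argv[3]), height = atoi(argv[4]);
    unsigned int datalen = ((unsigned int)jlen + 3u) & ~3u;     /* word align */
    unsigned int sprsize = 44 + datalen;                        /* header + data */
    /* New-format mode word: bit 0 set, 90x90 dpi, sprite type 9 ('JPEG'). */
    unsigned int mode = 1u | (90u << 1) | (90u << 14) | (9u << 27);

    FILE *out = fopen(argv[2], "wb");
    if (!out) return 1;
    put32(out, 1);                  /* number of sprites */
    put32(out, 16);                 /* offset to first sprite (area size word omitted on disc) */
    put32(out, 16 + sprsize);       /* offset to free space */
    put32(out, sprsize);            /* offset to next sprite */
    char name[12] = "jpeg";         /* sprite name, zero padded */
    fwrite(name, 1, 12, out);
    put32(out, width - 1);          /* 'width in words - 1' fudged to pixels here */
    put32(out, height - 1);         /* height in rows - 1 */
    put32(out, 0);                  /* first bit used */
    put32(out, 31);                 /* last bit used */
    put32(out, 44);                 /* offset to image data */
    put32(out, 44);                 /* offset to mask (none) */
    put32(out, mode);
    fwrite(jpeg, 1, (size_t)jlen, out);
    for (unsigned int i = (unsigned int)jlen; i < datalen; i++) fputc(0, out);
    fclose(out);
    free(jpeg);
    return 0;
}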

The downside to this was that you had to use DDA, which could be a little flakey when memory got low. Rather fun though!

NiceErrors

I forget the exact reason why I wrote NiceErrors. I think it was probably a little inspired by the 'Why' module that someone submitted to Acorn User after their April Fool spoof - it made errors that explained the reason why an error occurred by trapping the prior message. It was also a little inspired by DoggySoft's slightly slicker error boxes. But mostly I think it came from a general frustration that error boxes were system modal, and if you left the machine alone it would happily wait at an error box doing nothing else.

Like most of my patches it was based on a simple goal. This time it was to change the error boxes from being system modal to application modal. There are very few times where the error box being system modal is actually required - most of the time (at least within the desktop) the box is used for a prompt from an application, rather than a system-wide problem. And those that actually needed to block things usually meant that there was some bad design in the application anyhow.

It's not all that complicated really. The module claims the hardware SWI vector so that it can intercept SWI Wimp_ReportError and safely do its replacement. We can't use WimpSWIve because we intend to flatten the SVC stack and restore ourselves as the user mode application. There's a little faffing to work with either RISC OS 3.1 or post-RISC OS 3.5 systems, which have different ways of claiming the vector, too.

We check if we're in the desktop and if not leave everything alone. Otherwise, we remember whether we have the caret - so that we can restore it after the error is acknowledged. Partly this helps the application to track the caret, as it won't be getting any Wimp messages, and partly this ensures that the user's experience isn't too disrupted by the change to a different context. In particular, the SWI Wimp_ReportError will claim the caret so that it can be controlled by the keyboard - Return and Escape generate the default and cancel operations. We also force any menus to be destroyed, as this can confuse things.

We create a window with the error message in - this isn't as clever as the code that the Wimp now uses, because at the time this was written (1997) I hadn't really considered any of the things that eventually happened to the error box. There's a bit of fun in creating the window as we have to take account of all of the extra options - different error titles, different combinations of buttons, different sprites and classes of error box and additional buttons. Not that complex as the specification is well known and hasn't changed in incompatible ways over time.

Once we have a copied/flattened SVC stack, are happily in user mode and have a window, we then start to poll just like a regular application. We need to handle all the operations that the application would have had to deal with itself, and we can't know what it was expecting, unfortunately. For redraw requests we remember the window handle and draw a crosshatch pattern over the window. As our final operation in user mode, once the error has been acknowledged, we issue SWI Wimp_ForceRedraw requests for the entire window region so that the application will redraw it fully. I had wanted to remember the regions, coalesce them and only cause a redraw of the relevant regions, but I never got around to that.

Window open requests are just honoured directly - which means that panes would not move when the windows they were in were moved. Ideally I should have checked the stack for the window above the one being moved, and if it had a pane flag set move it with the window, based on its current position in relation to the window. I don't think at the time I was so fussed about that situation. Of course, a good solution there would also have included remembering the Open Window requests and reissuing them so that the host application would move the panes to the correct location.

Window close requests were ignored, as we can't know what the effect of doing so might be.

Mouse clicks would only operate on the error window, and we would exit the error handling if they clicked one of our buttons. Then we restored the environment, issued redraw requests as required and converted the click into the correct SWI response.

If there was a menu window active we would try to restore it - this might not work very well but was close to what the application would be expecting.
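
Putting those behaviours together, the poll loop has roughly this shape. This is a skeleton only, assuming the _swix() veneer from "swis.h"; the SWI numbers and reason codes are the standard Wimp ones from the PRM, but the window creation, caret handling and crosshatch drawing are omitted and the names are illustrative, not the module's code:

#include "swis.h"

#ifndef Wimp_Poll                   /* SWI numbers, if swis.h doesn't define them */
#define Wimp_Poll          0x400C7
#define Wimp_OpenWindow    0x400C5
#define Wimp_RedrawWindow  0x400C8
#define Wimp_GetRectangle  0x400CA
#endif

/* Poll on behalf of the application until one of the error window's
 * buttons is clicked; returns the icon handle that was hit. */
int error_poll(int errwin)
{
    int block[64];

    for (;;) {
        int reason;
        /* Mask out nulls and pointer entering/leaving events. */
        _swix(Wimp_Poll, _INR(0,1)|_OUT(0), 0x31, block, &reason);

        switch (reason) {
        case 1: {                           /* Redraw_Window_Request */
            int more;
            _swix(Wimp_RedrawWindow, _IN(1)|_OUT(0), block, &more);
            while (more) {
                /* ... crosshatch the returned rectangle here ... */
                _swix(Wimp_GetRectangle, _IN(1)|_OUT(0), block, &more);
            }
            /* real module: remember block[0] so Wimp_ForceRedraw can be
               issued for the whole window once the error is answered */
            break;
        }
        case 2:                             /* Open_Window_Request */
            _swix(Wimp_OpenWindow, _IN(1), block);   /* honoured directly */
            break;
        case 3:                             /* Close_Window_Request: ignored */
            break;
        case 6:                             /* Mouse_Click */
            if (block[3] == errwin && block[4] >= 0)
                return block[4];            /* one of our buttons was hit */
            break;
        default:
            break;                          /* everything else is dropped */
        }
    }
}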

How many problems can we see here? Well, there's no handling of the key input, so it would be possible for users to type into input boxes and the application wouldn't be informed of the key presses. The application would find that the text had changed without any notification - but only if it had requested any. And most applications wouldn't care.

System shutdown wasn't acknowledged, nor would the application exit, so you could get the system into a funny state where it thought it was shutting down but the application-modal error box was preventing the application exiting - and the application wouldn't know anything about it once the error box was cleared. Really the shutdown request should have been rejected whilst there were such boxes on the screen.

Drag and drop might be aborted strangely, because an application might not respond to a load request. In the worst case, an application which opened an error box to say 'do you want to load this as text or a diagram?' (or similar) on receipt of a load request wouldn't expect the desktop to carry on running. The sending application would get a bounce for the message and might report an error to say that the transfer failed (which might also use an error box!). If you then exited the sender, and acknowledged the first error box, the host application would try to send an acknowledgement to the task, might find that it couldn't handle the error about the application having exited, and crash.

Filters would make a mess of everything - so Toolbox applications would find themselves half working as they tried to perform operations on windows and sent messages to the host application, which would be ignored by the error box polling system. This inconsistency would cause a lot of mess.

All this said, it worked very nicely for the limited set of circumstances that it could function in, and many applications that popped up error boxes would find they were just fine. Polled error boxes like the 'Insert disc' boxes wouldn't really do the right thing, in any case.

I'm not sure I ever released NiceErrors beyond a few friends. I might have put it on Arcade, I think.

SoundTest

I needed to test that Doom's MIDI music worked properly. Not having any MIDI hardware makes that harder. There's only one choice - to use a software emulator. And that means !Synth. Only... it doesn't work unless you have the 16bit sound upgrade. I hadn't got one, so I was stuck without sound. The simplest way to solve this was to create a replacement module which sat above SoundDMA and provided the 16bit sound interface, which it then passed on to the regular 8bit handler. Essentially that's it. The module sat there, taking 16bit data, mixing it with 8bit data from the SoundChannels and then passing it into the original module.

There's a whole fun thing with converting 8bit logarithmic data to linear, merging, then converting back to 8bit logarithmic to pass to the module. I never implemented the oversampling, and the sample period was fixed at 45 µs. It worked really quite well, given how limited it was. It worked well enough to make AMPlayer work - at the time it didn't support 8bit sound systems, only 16bit. I could play MP3s quite happily - admittedly it was a bit slower than the usual sound system, but it worked <smile>.
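
The gist of the conversion and mixing is sketched below. The encoding here is a plain mu-law-style curve, which only approximates the VIDC 8bit law (the real thing keeps its sign bit at the other end, for one), and the nearest-match re-encoding is deliberately dumb - so treat the tables as illustrative rather than the module's actual code:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static int16_t log_to_lin[256];

static void build_tables(void)
{
    for (int code = 0; code < 256; code++) {
        int sign = code & 0x80;
        int exp  = (code >> 4) & 7;
        int mant = code & 0x0f;
        int mag  = ((((mant << 1) + 33) << exp) - 33) * 4;   /* mu-law style */
        log_to_lin[code] = (int16_t)(sign ? -mag : mag);
    }
}

static uint8_t lin_to_log(int lin)
{
    /* Nearest-match search: slow but obviously correct for a sketch; the
       module would use a lookup table indexed by the sample's top bits. */
    int best = 0, bestdiff = 1 << 30;
    for (int code = 0; code < 256; code++) {
        int diff = abs(lin - log_to_lin[code]);
        if (diff < bestdiff) { bestdiff = diff; best = code; }
    }
    return (uint8_t)best;
}

/* Mix 'n' 16bit linear samples into an 8bit logarithmic buffer in place. */
static void mix(uint8_t *log8, const int16_t *lin16, int n)
{
    for (int i = 0; i < n; i++) {
        int sum = log_to_lin[log8[i]] + lin16[i];
        if (sum >  32767) sum =  32767;          /* clamp to 16bit range */
        if (sum < -32768) sum = -32768;
        log8[i] = lin_to_log(sum);
    }
}

int main(void)
{
    uint8_t chan[4] = { 0x00, 0x10, 0x20, 0x90 };    /* 8bit channel data */
    int16_t pcm[4]  = { 1000, -1000, 20000, -20000 };
    build_tables();
    mix(chan, pcm, 4);
    for (int i = 0; i < 4; i++) printf("%02X ", chan[i]);
    printf("\n");
    return 0;
}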

The module name tells you how much I trusted it for release. It's kind of amusing, though, that to make the emulation of the MIDI hardware work, I implemented another emulation.

Gamma310

Having made a little progress in upgrading the memory system to support some of the things that the RiscPC could do, and full of confidence, I attacked a small area of the video system. The RiscPC had introduced palette correction using lookup tables. These were quite limited as the range of the lookup tables was still 8bit and mapping from 8bit to 8bit means you're going to lose some resolution in your output for anything but a 1-to-1 mapping. However, this issue aside, I believed that this could easily be achieved on earlier systems.

All the graphics operations (pretty much) go through ColourTrans via ColourV to perform translations. So it is relatively easy to trap those operations and replace the colours being operated on with a lookup operation. The tables for those lookups can be created by sitting on PaletteV and listening for the calls that applications would make to change the gamma correction.
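
The lookup itself is nothing special - and it loses precision exactly as described above, since 256 inputs map onto 256 outputs. A sketch of the idea follows; the real tables came from whatever the PaletteV caller supplied rather than a computed curve, so the gamma exponent here is just for illustration:

#include <math.h>
#include <stdio.h>
#include <stdint.h>

static uint8_t gamma_lut[256];

static void build_gamma_lut(double gamma)
{
    for (int i = 0; i < 256; i++) {
        double v = pow(i / 255.0, 1.0 / gamma);
        gamma_lut[i] = (uint8_t)(v * 255.0 + 0.5);
    }
}

/* Apply the table to a &BBGGRR00 colour word, as the trapped ColourV
 * operations would before handing the colour on. */
static uint32_t correct_colour(uint32_t bbggrr00)
{
    uint32_t r = gamma_lut[(bbggrr00 >>  8) & 0xff];
    uint32_t g = gamma_lut[(bbggrr00 >> 16) & 0xff];
    uint32_t b = gamma_lut[(bbggrr00 >> 24) & 0xff];
    return (b << 24) | (g << 16) | (r << 8);
}

int main(void)
{
    build_gamma_lut(2.2);
    printf("&%08X -> &%08X\n", 0x80402000u, correct_colour(0x80402000u));
    return 0;
}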

All the ColourV operations that handled colours and palette translations were trapped, and the operations replaced with ones that went through the lookup table. Whether it was a single colour, or a collection such as might come from SWI ColourTrans_SelectTable, it went through them all.

I used David Thomas' !Gamma application to test the implementation, and it worked surprisingly well. Every time the gamma is changed we trigger a ColourTrans Service_CalibrationChanged service, which itself triggers a cascade of operations that eventually redraws the entire screen with the new settings.

Of course this could have been done differently by changing the actual palette entries used in the mode, rather than changing the rendering on the screen. However, the palette selection was very limited on the A5000, with VIDC not really giving you a lot of flexibility in 256 colour modes.

It didn't really have much of a practical use, but I got a lot of experience of the fun that is ColourTrans and the special cases that it has to handle - which are many and varied!

ReadVarVal

After quite a lot of faffing with TaskWindows, and the problems with not having different contexts between applications, etc, I decided to try to isolate them. System Variables are global. To isolate the system variables in one context from another would mean that you could have a different filesystem context (as the entire context is in the variables). This would mean that the context that was used by each connection made by !TelnetD would be distinct from the others - no more worrying that someone else connected in might affect your shell.

What's the easiest way to do that? Replace SWI OS_ReadVarVal, SWI OS_SetVarVal and SWI OS_GSRead with new code which does this. As the comment in my module source says...

 DomainId=&FF8:REM Why this exists, I am not sure but it is dead useful !

All the variable calls were trapped, and the variable names prefixed with the DomainId and '$', if you weren't running as 'root' (according to my Users module, which tracked the logins by !TelnetD). There was also a special magic '^' that you could prefix any variable name with and it wouldn't be translated - so you could get at the root variables. Handy for restoring some variable that you've otherwise lost - which was useful because once installed, you get no variables in the domains that are so restricted. Losing things like the Run$Path variable is frustrating, as is losing all the Alias$... variables.
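
The gist of the name mangling is no more than this sketch - the DomainId lookup and the Users module check are stubbed out, and the exact textual form of the prefix is an assumption:

#include <stdio.h>
#include <string.h>

static int current_domain(void) { return 0x1234; }   /* stub: contents of &FF8 */
static int user_is_root(void)   { return 0; }        /* stub: Users module check */

static void mangle_name(const char *name, char *out, size_t outlen)
{
    if (user_is_root()) {
        strncpy(out, name, outlen);             /* root sees the real variables */
    } else if (name[0] == '^') {
        strncpy(out, name + 1, outlen);         /* '^' escapes to the root set */
    } else {
        snprintf(out, outlen, "%X$%s", current_domain(), name);
    }
    out[outlen - 1] = '\0';
}

int main(void)
{
    char buf[256];
    mangle_name("Run$Path", buf, sizeof(buf));
    printf("%s\n", buf);                        /* 1234$Run$Path */
    mangle_name("^Run$Path", buf, sizeof(buf));
    printf("%s\n", buf);                        /* Run$Path */
    return 0;
}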

Ultimately because of this problem it wasn't as useful as I had hoped. I'd envisaged some issues, but hadn't quite considered the scale - that's why I made the module use the Users module, so that it didn't affect every application context. It still wasn't all that great. It was an interesting experiment, though!

ADFSCache

Also back in the A5000 days, I wanted to try to improve the performance of the disc access. The way I approached this was at the sector level - to provide a sector cache for data that we've read. The A5000 may not have a lot of memory, but caching some data from disc could make a significant difference to the speed of disc operations - in particular those that would take place during compilation. If you were compiling something that took (let's say) 6 hours, that might be a real boon (see the Mini-Projects ramble, later).

I used !JFPatch to create custom veneers to sit in front of the calls to the sector operations in ADFS. A little bit of collusion, as I'd discovered that the word at offset &70 of the workspace of the FileCore%filesystem instance was the FileCore descriptor for that filesystem. It's a simple matter to replace the descriptor with my own.

This is where it gets a bit hairy, though. FileCore separates out its operations in fun ways and you can be called well outside of where you thought you were going to be called. In particular, it buffers writes and can perform them in the background whilst other things are going on, if the filesystem supports it (which ADFS does). Similarly, some operations can be on parts of sectors, which means that only a small section of data is written, or the data could be spread about different areas of memory as a 'scatter list'. To make my life easier, scatter lists are dealt with as multiple small operations on individual sections of data.
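
The cache itself has a very simple shape: fixed 512 byte entries keyed by drive and disc address, with a timestamp feeding the flush-by-age policy described further down. The sketch below uses my own field names and sizes, not ADFSCache's:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE   512
#define CACHE_ENTRIES 256               /* 128KB of cache */

typedef struct {
    int      valid;
    int      drive;
    uint32_t sector;                    /* disc address, in sectors */
    uint32_t last_used;                 /* monotonic 'time' of last access */
    uint8_t  data[SECTOR_SIZE];
} cache_entry;

static cache_entry cache[CACHE_ENTRIES];
static uint32_t now;

/* Return the cached copy of a sector, or NULL so the caller goes to disc. */
static uint8_t *cache_lookup(int drive, uint32_t sector)
{
    for (int i = 0; i < CACHE_ENTRIES; i++)
        if (cache[i].valid && cache[i].drive == drive &&
            cache[i].sector == sector) {
            cache[i].last_used = ++now;
            return cache[i].data;
        }
    return NULL;
}

/* Store a freshly read sector, evicting the oldest entry if necessary. */
static void cache_store(int drive, uint32_t sector, const uint8_t *data)
{
    int victim = 0;
    for (int i = 0; i < CACHE_ENTRIES; i++) {
        if (!cache[i].valid) { victim = i; break; }
        if (cache[i].last_used < cache[victim].last_used) victim = i;
    }
    cache[victim].valid = 1;
    cache[victim].drive = drive;
    cache[victim].sector = sector;
    cache[victim].last_used = ++now;
    memcpy(cache[victim].data, data, SECTOR_SIZE);
}

int main(void)
{
    uint8_t sector0[SECTOR_SIZE] = { 0xAD, 0xFC };
    cache_store(0, 0, sector0);
    printf("hit: %s\n", cache_lookup(0, 0) ? "yes" : "no");   /* yes */
    printf("hit: %s\n", cache_lookup(0, 7) ? "yes" : "no");   /* no  */
    return 0;
}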

All this lot can make things... a bit fun. Especially if you're using a Zip drive as your test initially. The chance of your breaking it is quite high. Usually you can reformat the disc, but there had been people who'd found that certain sequences of operations on the Zip drives were not good. With this in mind (and my own sanity at stake!) I wrote a copy of the main hard disc map and boot block to a floppy disc (Sergio Monesi's fsck was amazingly helpful!).

The assembler veneer provided the basic module header, the command entry points (for the commands that configured and provided diagnostics), the dispatch for the entry points (which included the scatter decoder), and a few C support functions (faster copy routines and memory allocation). The rest was all in C, and heavily conditional. There's a slew of options at the top of the file to control how the module works and what level of diagnostics and feature support is enabled:

/* Things that work:
   ? LARGEDISCS      (untested)
   ? CMEMCPY         (untested)
     VALIDATECACHE   (for reads only)
     DIAGRAM         (puts up a little bar graph thingy)
     DIAGRAMREGS     (displays a register dump so we can see the call params)
     READAHEAD       (reads a little way ahead in the buffer !)
     READAHEADFULL   (reads 'staticlength' sectors ahead)
     READPREDICT     (try to predict when we need to read ahead)
     WRITEDISRUPTIVE (writes will flush entries, rather than update)
     BLOCKFLUSH      (seems to work ok)

   Diagram:
     A simple textual diagram will be used to show how the reads and writes
     break down.

   Diagram registers:
     A register dump will be given at particular points to show what
     requests are being made to the disc.

   Validate cache:
     After each cache read the data will be CRC'd then re-read from disc,
     CRC'd and checked. Any discrepancy will be printed.

   Read ahead prediction:
     If the last sector on the disc accessed was prior to the request then
     a read ahead will occur.

   Standard read ahead:
     All requests smaller than the 'staticlength' will cause a read ahead.

   Read ahead non-full:
     Will read the same number of sectors ahead as requested, up to the
     'staticlength' limit for a request.

   Read ahead full:
     Will read 'staticlength' every time, regardless of request size.

   Write disruptive:
     Any writes will uncache the entries in the cache.

   Write non-disruptive:
     Writes will update the entries in the cache.

   Block flushing:
     The lowest timed entry in the cache is found and all similar entries
     are flushed. A pointer to the last 'flushed entry' will be kept, and
     used on subsequent searches for free blocks as the start position.

   Non-block flushing:
     The lowest timed entry is found and flushed each request.

*/

/* #define LARGEDISCS      /-* We're using a 'new' FileCore FS -*/
/* #define CMEMCPY         /-* DEBUG: Use the C version of memcpy -*/
/* #define VALIDATECACHE   /-* DEBUG: checks if the cached data is correct -*/
/* #define DIAGRAM         /-* DEBUG: puts a diagram of the read/writes -*/
/* #define DIAGRAMREGS     /-* DEBUG: allows register dumps if diagraming -*/
#define READAHEAD       /* Enables the read-ahead on small reads */
#define READPREDICT     /* Applies a 'prediction' to decide to read or not */
/* #define READAHEADFULL   /-* Cache the full chunk when reading ahead -*/
/* #define WRITEDISRUPTIVE /-* Writes won't be copied into cache entries -*/
#define BLOCKFLUSH      /* When free space is needed it will flush in blocks */
#define CACHEDETAILS    /* We want to know how well the cache is doing */

As you can see, there were some interesting features in there - the read-ahead support was intended to make it possible to avoid the overhead of seeking on the disc. Reading ahead all the time was found to be counterproductive because of the large number of small files that would be read in general - reading more than a single sector when only one was required would be wasteful, so instead the prediction only applied in the special case of continuing a read of a region. Obviously this wouldn't help for interleaved read operations, but for some of the operations that were being performed this helped. It's useful to remember that the sector size is 512 bytes and that is the size of each cache entry.
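
The prediction amounts to: only read ahead when the request carries straight on from the previous one, i.e. when it looks like a sequential read of a region. A sketch of that decision follows - the variable names are mine, and 'staticlength' is the configurable limit mentioned in the option list above:

#include <stdint.h>
#include <stdio.h>

static int      last_drive   = -1;
static uint32_t next_sector;            /* sector after the previous read */
static uint32_t staticlength = 8;       /* read-ahead limit, in sectors */

/* Decide how many extra sectors to fetch beyond what was asked for. */
static uint32_t read_ahead(int drive, uint32_t sector, uint32_t count)
{
    uint32_t extra = 0;

    if (drive == last_drive && sector == next_sector) {
        /* Continuing a region: read the same amount again ahead of the
           request, capped at 'staticlength' (the non-full variant). */
        extra = (count < staticlength) ? count : staticlength;
    }
    last_drive  = drive;
    next_sector = sector + count + extra;
    return extra;
}

int main(void)
{
    printf("%u\n", read_ahead(0, 100, 2));   /* 0: nothing to go on yet */
    printf("%u\n", read_ahead(0, 102, 2));   /* 2: continuing the region */
    printf("%u\n", read_ahead(0, 500, 2));   /* 0: a seek elsewhere */
    return 0;
}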

I had started tinkering with the support for the RiscPC style SWI FileCore_SectorOp calls, rather than just SWI FileCore_DiscOp, but never put them to the test. In the updated SectorOps entry, you were guaranteed to get sector-sized operations, which would have simplified things significantly.

The cache details, and the diagrams of the operations being performed, were very useful. The cache details recorded both statistics about the number of operations that went to disc or came from the cache, and the write statistics for flushed blocks and the like. The diagrams showed (as text to the screen) which blocks in the operation were coming from the cache, which from the disc, and which had been split.

Because the A5000 had a noisy hard disc and the drive light was quite obvious, it was very clear that the cache worked quite well - if the speed that operations ran at didn't make that obvious already.

There was a bug - but I didn't know this.

I had done quite a bit of testing, and I was confident that I'd got everything right. Now it has to be said that this was about 7am on a Saturday, and I was thinking that maybe I should sleep soon - but it was all going so well. I installed it in my boot sequence and rebooted into using it. It was really rather surprising how quickly things ran. I'm pretty sure I did the silly dance, and was very pleased with myself. Then I rebooted, and ... the machine didn't boot. Disc map corrupt. That was... frustrating.

I wandered around a bit, and laughed a little - as you do when you've just destroyed your hard disc and are not quite sure if you'll get it, or any parts of your University work, back. I'm pretty certain I rang Helen to see if she'd be around, as I probably wasn't going to be feeling that great. I returned to my room and tried to restore from the backup map that I'd made. It was old, so it wouldn't necessarily have everything, but the disc shouldn't be so broken that the untouched stuff wouldn't work.

<smile> I'd also had the forethought to include a copy of fsck on the floppy disc that had the map and boot sector, so it would restore. And then I had to check the disc - because the directories and the map probably wouldn't correspond in some places. FileCore would get very upset if things weren't like it was expecting, and unmapped file blocks were one way to do that. That completed, a further FileCore *CheckMap also ran to completion, and I mounted the disc again... and stuff worked. I was a bit dazed.

ADFSCache was checked in to RCS, and wasn't touched again. There's something wrong with it - I know there is and I can't place where it's broken. I don't really want to play around trying to find out, after it's nearly destroyed one disc.