The menus on RISC OS didn't really change all that much. The biggest change had happened, visually, with the addition of the 3D interface back in RISC OS 4, and they hadn't changed substantially since then. One of the small changes that took place was based around the need to support fully specified colours in a proper way within the desktop. Icons and window furniture could all be specified in any colour, and this same facility extended to the menus. However, there were some components which became difficult to use when the menu background was inverted - as might happen if you were using a 'dark background' theme. In more common use, the 'colour menus' would exhibit similar problems, as they offered (for example) all 16 desktop colours as part of their selection, which included 8 greys from white through to black.

Usually such menus were displayed showing the current selection with a tick icon beside it. The problem with this was that, by default, the 'tick' icon was designed for black on light-grey. The sprite had been anti-aliased, slightly, with this in mind, and this had the effect of giving the icon a halo around its edge when displayed with a background other than light colours. Because alpha-channel sprites had been around for a little while and were stable, the sprite was changed to be solid black with increasing amounts of alpha-channel mask, thus allowing the tick to blend into most of the backgrounds well. Unfortunately, it blended into the black background completely <smile>.

I had considered another way of handling the problem - changing the sprite to a font-based 'tick' glyph, where the default anti-aliasing would have handled the problem for me. The downsides were that the tick sprite was well known and was replaced by quite a few people; that the font would have to be present (the WimpSymbols font could have been used) and the style would then be fixed; and that using a font precluded the possibility of styling in different colours.

The problem with the sprite blending to black irritated me for a while before I realised that the solution was to create a second sprite - the inverse tick, which would be used whenever the background of the menu item was dark. This was actually exactly the same tick as the regular one, but with a white tick shape instead of black. Because the sprite was paletted, this just meant changing one colour entry to create the new sprite, which was kinda neat.

Menu creation

Menus are really just special windows which are built from a definition provided by the application and decoded when the icons within them are selected. The code that creates them in the Wimp was really quite poor. Every menu item entry consists of 3 icons - the left side 'tick' space, the text in the middle, and the right side sub-menu arrow (or in the opposite order if the menu is defined as reversed). As the window is constructed, icons are added to the window block.

This results in the block of icons being extended repeatedly. A menu with a lot of items in it - as might happen with the Font menus, or a browser history - could need a lot of icons to be created, and this tended to thrash the heap a little as it was repeatedly resized. Speeding it up was simple - just allocate an icon block with space for the right number of deleted icons when the menu is created. Then, when it comes to using it, the icons already exist and the space is just populated, rather than the allocation being extended again.
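As a rough sketch of the difference (the structure and the 'deleted' marker here are hypothetical stand-ins, not the real Wimp icon block format):

```c
#include <stdlib.h>

/* Hypothetical icon record - the real Wimp icon block is more complex. */
typedef struct { int flags; char data[32]; } icon_t;

/* Naive approach: extend the block for every icon added (heap churn). */
icon_t *icons_add_one(icon_t *icons, int *count, const icon_t *src)
{
    icon_t *block = realloc(icons, (*count + 1) * sizeof(icon_t));
    if (block == NULL) return NULL;
    block[*count] = *src;
    (*count)++;
    return block;
}

/* Pre-allocation: size the block for all the items up front, marking
   every entry as 'deleted'; later, each slot is just populated. */
icon_t *icons_preallocate(int nitems)
{
    icon_t *block = calloc(nitems, sizeof(icon_t));
    if (block == NULL) return NULL;
    for (int i = 0; i < nitems; i++)
        block[i].flags = -1;   /* 'deleted' marker (hypothetical value) */
    return block;
}
```

The second form touches the heap once per menu rather than once per icon, which is the whole of the saving.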

One of the features of the WindowManager versions that used fonts for the desktop text was that menus would resize to fit the actual text present, ignoring the width specified in the menu definition. Primarily this was because the sizes set in the menu block presumed a fixed width font, and that isn't the case when a proportional desktop font is configured by the user. So instead they calculate the width by sizing the text of every item and keeping the maximum.
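The width calculation is just a maximum over the items. A minimal sketch, with a hypothetical measurement callback standing in for the real font sizing call:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical measurement callback: returns the rendered width of a
   string - in the real Wimp this would be a Font system call. */
typedef int (*measure_fn)(const char *text);

/* The menu takes the maximum width of any item's text, rather than
   the width given in the menu definition. */
int menu_width(const char *items[], size_t nitems, measure_fn measure)
{
    int max = 0;
    for (size_t i = 0; i < nitems; i++) {
        int w = measure(items[i]);
        if (w > max) max = w;
    }
    return max;
}

/* Example measurement: a fixed-width stand-in, 16 units per character. */
static int measure_fixed(const char *text)
{
    return 16 * (int)strlen(text);
}
```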

They would also take account of icons which had a sprite in their validation strings - but only if the sprite was the first declaration in the validation string. If there were other validation options present, the sprite size wouldn't be added in, so you might end up with a menu that had a sprite and text which ran off the edge of the menu icon's bounding box, truncating it. I added a fix for that, but it wasn't ideal - there were still cases that wouldn't work. Better than before, though.

The Wimp would apply special rules for certain keywords in the menu text, to allow shortcuts to be right aligned - especially important when a proportional font was in use. So if the end of your string was a recognised keyword sequence, like 'Copy ^C', it would appear with the main text ('Copy') on the left and the keyword ('^C') aligned on the right. This worked pretty well, but was applied to all the icons in the menu windows.

That might not seem too bad unless you know that the titlebar is actually an icon for the purposes of rendering. Menus with the keywords in their title would end up with the keyword being right aligned. This could be easily seen if you created a menu titled 'My Menu', or 'RISC OS Select' - the 'Menu' or 'Select' part would be right aligned in the titlebar. Probably not too important as it wouldn't usually be noticed, but still it looked pretty silly. Fortunately, this was another simple fix.

Keyboard menus

One of the older options which we played with after RISC OS 4 was the 'Keyboard menus' option. This allowed menus to be controlled by the keyboard - a very useful way of navigating when you don't have a mouse. Unfortunately, it suffers because the rest of the system is designed for use with a mouse. In particular, when you navigate from a menu to a dialogue which doesn't take the focus, the keyboard focus reverts to the original location. This means that you lose control of the menu, and the only thing you can do (other than use the mouse) is press Escape to cancel it.

On the other hand, it handles navigation into dialogues that do have an input focus quite well, and will move out of them if you press Left at the start of a writable field. We did a few fixes for minor issues, but mostly left it alone. The problem with the dialogue boxes is one that isn't simple to address, unless a separate input context is retained. That could be done, but it complicates an already confusing input system. In the STB environment, the menu being controlled by the keyboard is reasonable - but in that environment you control all the applications and interfaces. In addition, most 'menu' use would not actually use a Wimp menu - you tend to present things in a different way within the STB environment, using full screen options lists, rather than as floating menus.

The other, minor, issue with the keyboard menus was that they don't highlight the menus as having the focus. Normally windows use a different colour in their title bar when they have the input focus (usually cream), but the menus controlled by the keyboard would not do this. Mainly this is because they don't really have the input focus. All input is handled above the keyboard focus code, which makes the input system even more confused. You can be in the middle of editing a field with a selection in place, press Menu, and things change. The keyboard input doesn't go to the caret, the selection isn't affected by the cursor keys, and the highlight says your input focus is in the old location.

This is because the Wimp still believes the caret to be where you left it - if you type your input goes to where it was. The caret never actually left the icon or window, and none of the notifications which you might otherwise expect to happen (for example 'lost focus') with such an event have been triggered. This also means that you cannot use the keyboard to move to items in the menu, because they actually get input to where you were typing.

It was a pretty unfinished feature - at least, it didn't work in the normal RISC OS manner - and needed a lot of cleaning up to make sure that it worked well with the rest of the system. As I began working on the UTF-8 system in the Wimp, some of it was going to be tidied up, but it never quite made it.

Command window centering

The command window which appears when there is output from a program that hasn't started up as a Wimp task was traditionally in a fixed location. The window's handling is itself a little amusing. The Wimp installs handlers on all the regular output vectors when a new task is run. It removes these handlers whenever the task declares itself with SWI Wimp_Initialise. If any of these output vectors are used before this, however, the Wimp displays the command window, and sets up a text window within it.

Actually that's not quite right - the text window is actually set up before the window is shown visibly. This ensures that anything that tries to read the size of the window, for example because it wants to word-wrap its output, will get the right size. This has the effect that the window size and location need to be known before it is shown.

Where operations happen outside the normal output functions, this can have interesting effects. For example, plotting a font may (under circumstances that I'm not going into here) involve outputting through the normal WrchV output, part way through the plot. So it's quite possible to use SWI Font_Paint for some operations and get no command window, but others will have one shown. If the Font system had used FontV it would have made trapping such cases a lot easier (and made the printer system a little less convoluted), but this vector remained unused.

A couple of minor things were done to try to help this situation but it's not a particularly important case. The slight discrepancy doesn't affect too many applications.

However, I did add the ability to center the command window, so that instead of appearing in a fixed location defined by the templates, it would be appropriate to the screen size. This was made a little fun by the fact that you have to center the text window and then calculate the graphics coordinates of the window that will go around it - as the text window is the bit that has to be aligned to an 8x16 cell.
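A sketch of the sort of calculation involved, assuming an 8x16 pixel text cell and a bottom-left graphics origin - the actual units and any surrounding window furniture in the Wimp may well differ:

```c
/* Centre a text window on screen in whole character cells, then derive
   the graphics-coordinate box that surrounds it.  Cell size and units
   here are assumptions, not the Wimp's actual values. */
typedef struct { int x0, y0, x1, y1; } box_t;

box_t centre_command_window(int screen_w, int screen_h,  /* pixels */
                            int cols, int rows)          /* text cells */
{
    const int cell_w = 8, cell_h = 16;   /* text cell size in pixels */

    /* Centre the text window in whole cells so output stays aligned
       to the 8x16 grid. */
    int left = ((screen_w / cell_w - cols) / 2) * cell_w;
    int top  = ((screen_h / cell_h - rows) / 2) * cell_h;

    /* Graphics coordinates have the origin at the bottom left, so the
       vertical positions must be flipped relative to text rows. */
    box_t b;
    b.x0 = left;
    b.x1 = left + cols * cell_w;
    b.y1 = screen_h - top;
    b.y0 = b.y1 - rows * cell_h;
    return b;
}
```

The key point is the ordering: the cell-aligned text window is placed first, and the surrounding box is derived from it, not the other way around.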

Configuration details

In a lot of cases it is necessary to find out how the Wimp is configured. If there is cut and paste support, then you will want to know the selection colours. If there are different sprite pools, you may need to know where they are. Like the Operating System itself, exposing this information is important to keep a consistent look to the system. Without a consistent look, applications feel different from the rest of the system and the user experience is adversely affected.

For the WindowManager some of the configuration details were exposed by the SWI Wimp_ReadSysInfo call, which allowed applications to use consistent colouring, timings and other configurables. It didn't expose all the options, unfortunately. Sometimes the features weren't intended to be used externally, and in other cases, they just hadn't been thought about enough.


One thing had annoyed me for quite some time. Somewhere, I had read that if you had a mask on a pointer, then by default the first masked pixel would be used as the active point. This allowed the active point to be defined with the pointer shape itself, rather than having to be hard-coded into the program (as was done with just about every single application and template).

However, I couldn't find where it was written. I really looked, and I couldn't find it. I thought that maybe the WimpExtension or Interface modules must have provided it, and that's where I'd got the idea - or that maybe I'd just imagined the whole thing.

It was only when I was reviewing a separate bit of documentation that I came across the explicit statement in the PRMs. Obviously I was quite happy that I wasn't just imagining such things, but also surprised because there was - to my knowledge - no code to actually do that. In any case, I added support for the active pointer, so that pointers could be defined in this way (and therefore the PRM was again correct).

As was usual (and maybe getting boring now!) the code that handled the location of any mask pixels was written in C, and linked into the Wimp only when it was known to work properly within the test application. It is not particularly difficult - it's just bitmap manipulation for a few different formats - but it's still a lot easier to write in C than messing with Assembler. And it's not like the code gets called that often. That said, it wasn't that bad. It could have been improved by handling multiple words at a time, but handling a word (not a pixel) at a time until it found a word with a masked pixel in it, and then working out which one it was, worked pretty well.
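A sketch of that word-at-a-time scan, under my own assumptions about the mask layout (1 bit per pixel, row-major, least significant bit first) rather than the actual sprite formats the Wimp handles:

```c
#include <stdint.h>

/* Find the first set pixel in a 1bpp mask, scanning a word at a time
   and only examining individual bits once a non-zero word is found. */
int first_masked_pixel(const uint32_t *mask, int width, int height,
                       int *px, int *py)
{
    int words_per_row = (width + 31) / 32;

    for (int y = 0; y < height; y++) {
        const uint32_t *row = mask + y * words_per_row;
        for (int w = 0; w < words_per_row; w++) {
            if (row[w] == 0)
                continue;            /* whole word unmasked: skip it */
            uint32_t bits = row[w];
            int bit = 0;
            while (!(bits & 1)) {    /* locate the set bit in the word */
                bits >>= 1;
                bit++;
            }
            *px = w * 32 + bit;
            *py = y;
            return 1;
        }
    }
    return 0;                        /* no masked pixel anywhere */
}
```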

The pointer handling was also updated so that if no pointer active point specification was supplied, the usual system defaults would be used instead. This meant that when that form of the validation was used, the pointer would (usually) line up as expected. If the user's pointers had a visually different active point they would look wrong, but then they would always have looked wrong everywhere else in the past too.

There was still a bit of work needed to make the active pointer sensible so that the pointer shapes could be replaced reliably. This would need to happen if there was to be any form of general theme system, but the work was pushed back as the pointer handling wasn't quite as important as some of the other things.

There were, however, a few new pointer types that were introduced. These mostly came from web applications and the general need for standard ways to represent common operations. A new 'drag-to-scroll' pointer type had been introduced which was intended to indicate that the page could be moved around - usually for diagrams and maps, and maybe for a browser page.

There was a 'link' pointer which was intended for actions which launched a new or external component - primarily intended for hyperlinks, but it might also be used for textual links that launch tools (really those sorts of things should be buttons, but it depends on the type of interface that is being presented). And there was a 'map' pointer, which was a little harder to define. It was intended for places where the region over which the pointer lies supported a dragged selection.

I'm not quite sure that the 'map' type was as useful, but defining these types offered a little more freedom to developers - they could rely on the pointers being there, and on them having a common meaning between applications.

Window handles

When preparing for the move to 32bit, it was necessary to examine all the memory pointers which hold information, and which might not work when placed in higher memory. One such pointer is the window handle. This had always had certain restrictions placed on it which identified it as a window handle. In particular, the window handle cannot become negative, as many applications will use a handle < 0 to check for either 'no window' or IconBar, and obviously -1, -2 and -3 are already known to be used for different parts of the window stack in SWI Wimp_OpenWindow and others. Thus, bit 31 could never be set.

Menu handles are just pointers to menu blocks. These, therefore, have bits 0 and 1 clear. Window handles can be placed within menu blocks as sub-menu pointers, and need to be distinguished. This is done by the window handle having bit 0 set. Certain routines also assumed that the window handle was > &8000, but this isn't hard to enforce under the current scheme.

So, window blocks needed to live anywhere in memory and be easy to convert from the handle to a window block - without any lookup being performed. Because of how the code was structured, any changes to perform the conversion needed to not use any stack, nor affect any flags, and couldn't use any temporary registers.

My solution (with a little optimisation from Ian) was that the window handle is created from the window block by inverting bit 30 and 0, ORing the value into itself shifted right 30, and clearing bit 31; 3 instructions. The inverse - which happens far more often in the Wimp - is just to EOR the value with itself shifted left 30, and then clear bits 0 and 1; 2 instructions. I rather like the idea of encoding bit 30 inverted, so that the EOR with the known set bit 0 restored it properly - I'm quite pleased with that.
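The transformation can be expressed in C directly from the description above (the real code was ARM instructions, but the constants follow from the stated bit operations):

```c
#include <stdint.h>

/* The window block pointer is word aligned (bits 0 and 1 clear); the
   handle must have bit 0 set and bit 31 clear. */

uint32_t handle_from_block(uint32_t block)
{
    uint32_t t = block ^ 0x40000001u;  /* invert bit 30 and bit 0 */
    t |= t >> 30;                      /* fold bits 30/31 into bits 0/1 */
    return t & ~0x80000000u;           /* clear bit 31 */
}

uint32_t block_from_handle(uint32_t handle)
{
    uint32_t t = handle ^ (handle << 30); /* restore bits 30 and 31 */
    return t & ~3u;                       /* clear bits 0 and 1 */
}
```

The trick is that bits 30 and 31 of the pointer are folded down into bits 0 and 1, which are known to be clear in a word-aligned pointer, so nothing is lost when bit 31 is cleared; and because bit 30 is stored inverted, the guaranteed-set bit 0 restores it on the way back.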

Obviously this transformation may not always hold, as window handles are implicitly opaque, but the known constraints will continue to apply if things are expected to work the way they always have.

Icon transitions

After the clipboard support had been added (see the earlier ramble dedicated to just the 'cut and paste' support), it was a small step to make it possible to produce messages for the pointer moving in and out of an icon. The clipboard support had added handling for highlighting of buttons when the pointer moves over them. This wasn't used, except in a few experimental button styles, but the detection code was easily reusable for notifications.

There had always been 'pointer entering window' and 'pointer leaving window' notifications. These were usually used to change the pointer shape depending on the context - for example entering a window where it was possible to drag the window might change the pointer to a hand. Modern designs usually wanted a little more feedback than this - button highlighting was typical on some systems, and I had seen (but disliked) animations of actions on buttons as the mouse moved over them. It is also possible that authors might want to provide a means of providing 'pop out' windows (or icons) when the mouse moves over a region (again, I don't overly like such behaviour but that's not to say that it's not useful or right for some times). Such behaviour would require manual monitoring of the mouse position - which is more expensive in terms of processing.

The pointer shape is useful feedback to the user as to the action that can be performed. Along with tooltips (which RISC OS provided no useful mechanism for), the mouse shape provides the first line of aid to the user as to the actions that can be performed. It is - obviously - only useful when the system has a pointer, but that's another of the fun hurdles for interface design with non-mouse based pointing devices.

Anyhow... the pointer shape can be changed automatically as the mouse moves over icons, using the P validation. This obviously reduces the amount of support required by the application, but means that any more advanced feedback or interface design requires much more work from the application. Usually this sort of interface isn't used within RISC OS, which is partly due to the fact that it is harder to provide. There's also the fact that such interfaces on other systems tend to be rejected by RISC OS users and developers because "that's not how RISC OS does things" or through prejudice. Mostly, such interfaces are rejected because they are a bad design decision, providing very little in the way of extra usability over existing interactions. My experience of RISC OS developers (myself included) is that very few would categorise it as a design decision, but would naturally select another suitable interaction which they knew and understood.

As I digress again from the topic, it is probably important to note that the lack of such features in the system generally results in a more limited selection of interaction methods being used. This in turn leads to an easier to use interface - if the user knows how certain types of operations work, the whole system becomes easier to use. If they have not encountered a 'box will expand when you move over it' type of interaction, they would not attempt to trigger one (as an example - there are quite a few other forms that also aren't used on RISC OS). This means a shallower learning curve and a more recognisable, 'friendly' environment, at the expense of a more fluid interface for certain operations. On the other hand, the lack of some interaction forms means that it is not possible to do some advanced operations, with developers feeling shackled by having to use unsuitable methods to perform simple actions.

So... icon transitions increase the richness of the interactions which are readily available to the developer. As always, it is an author's choice as to how they implement functions, but if you give them a better palette they have more choices for the picture they can paint. Aside from highlighting buttons, I'm not really sure of what interactions you might provide which would be useful with icon transitions. However, I had in mind that other work had been moving towards more advanced gadgets for the Toolbox, and these gadgets might want to have similar functionality - or more advanced - so making the transition available would be useful.

The interface I chose for the transitions was that of Wimp event reasons 14 and 15. This was quite cute, because those two reason codes were unused (and there was no indication of prior use), and reason 4 and 5 were used for 'pointer leaving window' and 'pointer entering window'. Having 'pointer entering/leaving icon' in the same order but 10 higher was rather nice <smile>.

Initially, I wanted these events to be delivered to every application, which would allow filters to trap them if they wanted. An alternative was to only deliver the events if the application declared itself as a high enough Wimp version on initialisation. This is the way that a few of the new features had been added in the past. However, the downside to using the application's Wimp version on registration is that it prevents the filters from trapping those events to provide augmented functions.

As I had been bitten by this problem when developing filters and adding features in the past, I made the events get delivered to all applications. This worked for about 20 minutes... until I hit a DeskLib based application. RISC_OSLib applications would suffer from the change because an unrecognised reason code caused transient dialogues to be closed by it. That wasn't too hard to 'fix' through a patch.

The AppPatcher module could recognise sections of code in applications (assuming that the code could be decompressed properly - see the ramble about Application Execution for more detail), and patch them to make the code do the right thing. The RISC_OSLib library had been released in a few forms, and the signatures for them were simple enough to identify to make sure that RISC_OSLib based things could work properly.

DeskLib was another matter, though. RISC_OSLib had merely closed a dialogue box. DeskLib crashed when it received a reason code it didn't understand. That was... bad.

At first, I updated the Patcher in the same way as I had for RISC_OSLib, identifying the signature in the event handling code and replacing it with safe code. The problems came when I tried more applications - there were a lot of different variants. The 'standard' version had multiple releases and, depending on the compiler used, the code could compile differently. Additionally, the code could be built 26bit or 32bit, which meant there were different signatures to spot. I think I did five or six different variants of the patch for different versions of the code, but decided I was on to a loser here.

I reported the problem to the maintainer as I usually did when I found problems in 3rd party things. I don't remember whether I had any further correspondence, and can't see anything in my mail. The current version is still broken though (in the middle of 2012), which is always good to know. Fixing the source would only help future versions in any case, so I still needed a way to make the transitions work without triggering the crash in DeskLib.

Some of the SWI Wimp_Poll flag bits had been used to indicate that features were present, or requested. For example, one bit indicated that a pollword was supplied. As there were already bits set aside to mask the poll reasons, adding another which controlled whether those bits were obeyed was pretty trivial. This meant that the flags would be able to be manipulated by any filters, making the interface compatible with them. And the relevant flags would never be set in an old DeskLib (or RISC_OSLib) application, so the problem would not present itself there.
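A sketch of the opt-in logic - the bit positions here are hypothetical; only the principle (a flag that old applications never set, gating the new event mask bits) comes from the text:

```c
#include <stdint.h>

/* Hypothetical flag bit assignments for the sketch. */
#define POLL_MASK_ENTERING_ICON  (1u << 14)  /* mask off 'entering icon' */
#define POLL_MASK_LEAVING_ICON   (1u << 15)  /* mask off 'leaving icon'  */
#define POLL_EXTENDED_EVENTS     (1u << 24)  /* opt-in to new events     */

/* Should an icon-transition event be delivered to this task?  A set
   mask bit means the event is masked off, as with Wimp_Poll masks. */
int deliver_icon_event(uint32_t poll_flags, uint32_t event_mask_bit)
{
    if (!(poll_flags & POLL_EXTENDED_EVENTS))
        return 0;        /* old application: never gets the new events */
    return !(poll_flags & event_mask_bit);  /* honour the event mask */
}
```

Because the opt-in bit lives in the poll flags, a filter can set or clear it on a task's behalf, which is exactly the compatibility with filters that delivering by Wimp version would have lost.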

Of course, if anyone wanted to use the icon transitions in DeskLib they would find that it would crash - at which point they could fix the issue. Can't do everything for people <smile>.

With the icon transition events being delivered as well as the window transitions, and icon border highlighting as well, it was important that these all worked together in a consistent manner. The icon bordering had to be taken care of first, then the icon transition events (which the application might have masked, either implicitly by not setting the new 'use icon transitions' bit, or explicitly by setting the bit and then masking the events), and finally the window transition events (which might also be masked).

Checking that all these worked wasn't particularly hard, but it was a little tedious. There were some odd cases where the icon borders were masked off, so that instead of the application's mask affecting just the application, it would also prevent the buttons from highlighting. Not quite a 'simple' ordering problem because the code was structured awkwardly, but it wasn't too hard to ensure that the events occurred in the right order.

There was an interesting case where an application exiting whilst there was a highlight, and the icon transition had been requested, would cause a crash: the highlight being removed as part of the exiting task prevented the current icon's state from being cleared. The result was that the next task to be paged in crashed because the transition was invalid. Silly mistake, but it took an hour or so to track down how it reached that state.

Caret colouring

The default for selecting a '256 colour' mode had been changed from the old-style 64 colour modes (256 colour palette, but with a limited number of actually selectable colours) to a fully definable 256 colour mode (discussed in the earlier rambles). This had an odd effect on the caret colour. For legacy reasons, the caret was plotted using a colour number and the behaviour of the operations changed in fully defined modes compared to the non-fully defined 256 colour modes.

There was an oddity in these modes with the caret which would appear (I think it was) blue, rather than red. This was 'fixed' by adding a special case to the WindowManager, which I think might have been a mistake. One of my notes says "I really don't want to go in to checking whether FontManager is correct or not, so I have just fixed the code." That was a little lame really.

(Actually I don't think that the FontManager is used for most of the caret drawing, as you have to explicitly request it if you want that style of caret. I do remember experimenting with the 'anti-aliased caret' - which wasn't at all anti-aliased if I remember rightly - and whilst it was nice, it took up more room and was likely to provoke bad responses if the default were changed.)

I remember a lot of discussions back and forth with Christopher Bazley, with each of us putting forward different reasons and justifications for behaviour based on implementation in different modes, documentation, and sometimes even sanity. One of the areas that was of continual frustration (whatever we did, because it could never be truly consistent) was the discrepancy between the way in which colours were manipulated in the old style 64 colour modes compared to the other modes, and whether the new style 256 colour modes should work the same way or differently. I am certain that there were no final solutions to many of those problems, but along the way we managed to come to some decisions about the most compatible and sensible ways of the calls working.

One thing that did come out of the work, though, was that the handling of ColourTrans calls was made a lot more sane, removing special cases where possible. Because of this the nasty hack in the Wimp got removed, and everything was simpler. Lameness defeated <smile>.

Prior to all of this, the 'red' caret had been changed so that it was actually configurable. The documentation had always said that the colour could be set from the standard palette, or use the default (which happened to be red). Adding a small option to configure the colour of the caret was simple enough, and would give flexibility to theme authors.

Because the colour was configurable, it was necessary to also provide a means by which the colour of the 'cursor' could be read so that applications that provided their own implementation (or similar) could match the user's preference. I called it a 'cursor' because the focus of operations might not be text in some applications. In a music typesetting tool it might be a position in time across a stave. In a vector application like !Draw, it might be an object (!Draw had not been updated, though <sigh>).

The Filer came to use it when I implemented keyboard controlled selections. The dashed box which is controlled by the keyboard is drawn using the cursor colour.