PS2Driver

The mouse on the RiscPC was a quadrature device, while many other PC systems had standardised on PS/2 mice. The A7000, which used the ARM7500, had support for two PS/2 ports, so both mouse and keyboard were connected through those. PS/2 configuration for mice was very limited, and the number of mice which would fail when sent commands that they didn't understand meant that extensions to the protocol were rare. The Microsoft 'Intellimouse' extension caught on however, due to the active support from the Windows drivers for the scroll wheel and additional buttons. This was a significant user experience improvement for web browsing in particular - it came at the right time and brought benefits which were worthwhile in users' eyes.

The Intellimouse protocol was essentially enabled by a sequence of 'set sample rate' operations sent in a particular pattern, which unlocked the additional reports for the scroll wheel and extra buttons.

I updated the PS2Driver to support the Intellimouse format by issuing the commands to select the different modes of operation. There were two variants of the protocol - Intellimouse, which supported the scroll wheel, and Intellimouse+5, which supported the scroll wheel plus up to 2 extra buttons. The driver would probe these in descending sequence, to determine which of the forms the mouse supported, and the highest level of the protocol supported would be selected for use.
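
For reference, the Intellimouse detection is generally documented as a pair of 'magic' sample rate sequences followed by a device ID read. A minimal sketch of that probing - using hypothetical ps2_send()/ps2_read() helpers rather than the actual PS2Driver routines, and not necessarily in the order the driver probed - looks something like this:

```c
#include <stdint.h>

/* Hypothetical helpers which exchange single bytes with the auxiliary
 * (mouse) device; the real driver talks through the keyboard controller. */
extern void    ps2_send(uint8_t byte);
extern uint8_t ps2_read(void);

#define PS2_SET_SAMPLE_RATE 0xF3
#define PS2_GET_DEVICE_ID   0xF2

static void set_sample_rate(uint8_t rate)
{
    ps2_send(PS2_SET_SAMPLE_RATE);
    ps2_send(rate);
}

/* Returns the device id: 0 for a plain mouse, 3 for Intellimouse
 * (scroll wheel), 4 for Intellimouse+5 (wheel plus extra buttons). */
uint8_t probe_intellimouse(void)
{
    /* First 'knock': a plain mouse just obeys the rate changes. */
    set_sample_rate(200); set_sample_rate(100); set_sample_rate(80);
    ps2_send(PS2_GET_DEVICE_ID);
    if (ps2_read() != 3)
        return 0;                   /* wheel extension not supported */

    /* Second 'knock' promotes a wheel mouse to the 5-button variant. */
    set_sample_rate(200); set_sample_rate(200); set_sample_rate(80);
    ps2_send(PS2_GET_DEVICE_ID);
    return ps2_read();              /* 3 or 4 */
}
```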

The extra button support was a problem because the buttons themselves are triggered through the KeyV interface, not the PointerV interface. Similarly, there was no way to actually report additional data through the PointerV interfaces. Having mouse movement returned through one interface and buttons through another was not at all ideal - and precluded the use of scroll wheels anyhow.

To resolve these problems in one go, a new PointerV request form was created. This 'extended request' returned both the 'alternate positioning device' movement and the buttons pressed, in addition to the mouse position itself. The handling for the pointer itself was taken out of the Kernel into a new OSPointer module to make it simpler to maintain, and this handled all the pointer requests. This module would make requests for the extended information and, if the request failed, fall back to the old request format.

One advantage of the new request format was that it took the onus of handling all the buttons away from the pointer device driver, and moved it into the OSPointer module where it could always be done consistently.

The main reason for calling the scroll wheel an 'alternate positioning device' (which sounds odd and might not be obvious) is that on some interfaces it may not be a scroll wheel. On touch-pads the 'scroll' is commonly a gesture. It is quite possible that a remote control interface might use either buttons for scrolling, a mini joystick, or even a track-ball type interface. That might seem a little unlikely as the common use is a scroll wheel, but I didn't want to be tied (and didn't want others to be tied) to thinking of it as a purely mouse based interface.

Of course, testing this new module wasn't as easy as you might hope. As mentioned previously, the RiscPC doesn't have a PS/2 mouse port - it has a quadrature mouse port. It does have a PS/2 keyboard port. If you've used pretty much any other system you'll know that the PS/2 mouse and keyboard ports are colour coded (well, they are now - they weren't back then) and that plugging the device into the wrong port meant that the device didn't work. Certainly all the Windows versions I've ever used were like that, and a few Linux versions have had similar issues. Maybe I was unlucky in the systems I used.

It is completely baffling to me, as the protocol is so similar - intentionally so. You (should) just read back the device type and switch the implementation between mouse and keyboard. Certainly this was how PS2Driver worked. On an A7000 - which had 2 PS/2 ports - you plugged the devices into whichever port you liked and the device worked. The driver did the (very tiny bit of) work necessary to decode the protocol through the right routines.

Anyhow, the RiscPC only has a single PS/2 port. So if you plugged the PS/2 mouse in you lost the ability to use the keyboard, as there was nowhere to connect it. This made it more hassle to keep switching back and forth between systems. Fortunately, I had previously written a pair of remote control modules - RemoteMouse and RemoteKeyboard - which took their input from a remote system over a TCP connection. Using these, it was easy to take control of the machine and still type, or use the 'mouse' when it turned out that I had not actually got the protocol right <smile>.

Quadrature Mouse

STD (Stuart Tyrrell Developments) produced a small adapter with a controller which could talk PS/2 to the mouse, and quadrature to the RiscPC. Others had produced similar interfaces, but STD had the forethought to include a way to support the scroll wheel. This special mode could pass scroll-wheel events through to the quadrature interface.

It was relatively easy to update the mouse driver to support the signalling method when configured to a special mouse type. Because the signalling was based on the times at which the reports were read from the quadrature interface, it was possible that the scroll events might be misinterpreted. If I remember rightly, the scroll events were indicated by all the mouse buttons being reported as pressed at once - an action that was quite unlikely in reality. In general it was pretty simple to handle.
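
As a rough illustration of the idea (the exact signalling is not something I can state definitively, hence the hedging above), a driver treating the 'all buttons down' report as a scroll marker might decode reports along these lines - the report structure and ALL_BUTTONS marker are assumptions for the sketch:

```c
#include <stdint.h>

/* Illustrative quadrature report; the real interface reports button
 * state and movement deltas accumulated since the last read. */
typedef struct {
    uint8_t buttons;      /* bits 0..2 = the three mouse buttons */
    int8_t  dx, dy;       /* movement since the last read */
} quad_report_t;

#define ALL_BUTTONS 0x07  /* assumed marker for a scroll event */

/* Split a raw report into either a scroll event or a normal movement.
 * Returns 1 if the report carried scroll data (placed in *scroll). */
int decode_report(const quad_report_t *report, int *scroll,
                  int *dx, int *dy, uint8_t *buttons)
{
    if (report->buttons == ALL_BUTTONS) {
        /* Scroll marker: the movement field carries the wheel delta. */
        *scroll = report->dy;
        return 1;
    }
    *dx = report->dx;
    *dy = report->dy;
    *buttons = report->buttons;
    return 0;
}
```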

One amusing side-effect of adding a new driver type, though, was that when the RemoteMouse driver and the built-in drivers (PS/2, Serial, Quadrature and STD interface) were installed, the configuration tool would crash. There was a buffer overrun in the code which was triggered for 'large' numbers of mouse drivers. I guess the code had never been tested like that <smile>.

OSPointer

The OSPointer module has been mentioned above, and there's probably not too much more to say - it took the events from the pointer devices, and converted them into the relevant calls to the vectors and inserted events into buffers. The input system is rather strange in its separation of calls - mouse and keyboard (and thus buttons) are separated, and whilst there are buffered mouse operations, these don't include (for legacy reasons) any scroll details.

Some of these problems were intended to be resolved in a subsequent release. One of the longest standing 'fun' issues was that there was no way to read the mouse bounding box through a defined API. Setting the mouse rectangle came through the SWI OS_Word interface (presumably inherited from the BBC Master), but to read the rectangle you needed to know where in zero page it was stored.

I am particularly averse to the SWI OS_Word and SWI OS_Byte interfaces - they're such old legacy interfaces, they don't lend themselves well to proper documentation, and they tend to be a grab-bag of interfaces which were placed together because... well, those are the general interfaces on the BBC. However, after going back and forth, trying to decide the right way to introduce another API to control the mouse rectangle, I decided to leave things as they stood and just add a new OS_Word reason code (alongside the other mouse operations) which could read the mouse rectangle.

Part of the next release would have dealt with Input and Sound handling, and would have probably addressed this better, but at least removing the requirement for collusion with the Kernel workspace was important to me, and to the general goals of the development.

The OSPointer module (and documentation) was updated to allow the extended pointer requests to return 'absolute positioning' as well as the relative positioning that was returned by a device like a mouse or touch-pad. Devices such as a touchscreen or tablet could return their positions in the form of a fractional position across the device. This meant that it was significantly easier to provide a driver which could perform absolute positioning through the APIs - previously it would have been necessary to know the current pointer position in order to simulate the relative movements needed to make the operation happen.
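
A device reporting absolute positions only needs a simple scaling to become useful to the rest of the system. The sketch below assumes a hypothetical 16-bit fraction format rather than the exact form the OSPointer API used:

```c
/* Convert a fractional position (0..0xFFFF across the device) into
 * OS units on the current screen mode. The fraction format here is
 * illustrative, not the documented OSPointer interface. */
typedef struct { int x, y; } os_point_t;

os_point_t absolute_to_screen(unsigned frac_x, unsigned frac_y,
                              int screen_width, int screen_height)
{
    os_point_t point;
    point.x = (int)((frac_x * (unsigned)screen_width)  >> 16);
    point.y = (int)((frac_y * (unsigned)screen_height) >> 16);
    return point;
}
```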

I was aware of the tablet interfaces that had been produced by 3rd parties, but these were generally through additional modules which had not been standardised. In particular it would have been important - at the very least - to return the pressure applied, and to be useful this needed to be passed through the APIs at the same level as the other pointer operations. There wasn't time to design and implement a path which allowed this to work sensibly - there are too many areas in the input system where this just isn't possible at present, and bodging on another data path was only going to hurt.

It would be better to unify all the input handling - buttons, scroll operations, pressure handling, transitions and so on - in a single sweep of the APIs. I also hoped to unify the joystick API, which was quite inflexible due to the requirement that there be a single module implementation which provided all of the interface - no use if you want to plug two joysticks in for a multi-player game, for example. And of course there are all the other fun HID devices which you can use through USB but which weren't supported.

KeyInput

As I may have mentioned once or twice, the input system was pretty messed up when it came to keyboard operations. I'm not even going to get into the mess that was the InternationalKeyboard support and its intensely non-modular and difficult to implement 'design'. The keyboard inputs themselves generally went through KeyV to be handled by the Kernel's input debounce code, and to be tracked so that their 'up/down' state was known when queried through the SWI OS_Byte keyboard scan, or the specific key check INKEY(-keycode) interface.

The debounce could be disabled - Pace had defined a way to do so. If you can think of a way to pass messages to the keyboard system to disable the debounce which is worse than their system, I would love to know. On the other hand - as I've said - the keyboard system is already a huge mess, so what's one more little bit of ickiness if it gets through to the right bit of the system?

The SWI OS_Byte/INKEY operations are pretty awful hangovers from the BBC. They work, but they are not extensible. The fact that they only handle an 8-bit key number (reduced to 7 bits by the interface) meant that the large number of key inputs that could be triggered could never be represented. Certain extensions had been defined by Acorn for keyboard input, partly from their work on eastern input devices, and partly from the introduction of new keyboards.

Windows keys and the Menu keys were processed by the keyboard system, although their use had not been defined. Additional keys for composing sequences, and a Yen key had also been added so that these could be input easily on keyboards that supported them.

I defined how the Windows and Menu keys should be handled by applications, but didn't add any specific use for them - eventually these could have been dispatched to handlers in the desktop, but again there wasn't time, and this wasn't the focus of the last release.

Similarly, the use of Multimedia keys was defined but not actually trapped anywhere. Well, that isn't entirely true. The 'shutdown' (ACPI Power) button was trapped by the TaskManager, and would initiate a shutdown. But other than that, they were not handled anywhere.

There would never be sufficient room in the old interfaces for keyboard scans and state checks to handle the extra key inputs that were possible with new devices, so it was necessary to define a new way of accessing those interfaces. A new KeyV API was defined which was based around the USB HID identifiers - as this covered a crazy selection of input mechanisms, it should have been enough to be forwards compatible.

A new KeyInput module was created which would handle the events from such devices using the new KeyV API, and which had an interface which could be used in a similar way to the older BBC interfaces but for a greater range of inputs. The KeyInput module was intended - in its initial form - to be a stopgap which would convert down from the new interface to the old-style KeyV operations such that older uses still worked. Additionally, it would track any use of the old-style KeyV to allow it to report the state of keys.

This meant that whether your drivers used the old or the new key formats, you could still use SWI KeyInput_Scan (or other KeyInput calls) to check for a selection of keys. Unlike the Kernel handling of state transitions, KeyInput wouldn't crash if you used codes which it didn't understand <laugh>.
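
In its stopgap form, the down-conversion is little more than a table lookup from HID usages to the legacy low-level key numbers. A toy sketch follows - the legacy key values shown are placeholders, not the real mappings:

```c
#include <stdint.h>

/* Map a USB HID keyboard usage to a legacy RISC OS low-level key
 * number, or -1 if there is no old-style equivalent. The legacy
 * values in this table are placeholders for illustration only. */
typedef struct {
    uint16_t hid_usage;   /* usage within the HID keyboard page */
    int      legacy_key;  /* old-style KeyV key number */
} key_map_t;

static const key_map_t key_map[] = {
    { 0x0004, /* Keyboard 'A'    */ 0x41 },   /* placeholder value */
    { 0x0029, /* Keyboard Escape */ 0x00 },   /* placeholder value */
    /* ... */
};

int hid_to_legacy(uint16_t usage)
{
    unsigned i;
    for (i = 0; i < sizeof(key_map) / sizeof(key_map[0]); i++) {
        if (key_map[i].hid_usage == usage)
            return key_map[i].legacy_key;
    }
    return -1;   /* no legacy equivalent; state tracked only in KeyInput */
}
```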

The extended mouse buttons mentioned in the earlier section were also passed through this interface by the OSPointer module - as there was no other way to pass the buttons. Of course, if the KeyInput module wasn't present, this meant you lost support for those buttons, but that is an understandable restriction given the way that things are routed.

The module wasn't fully defined, but it provided a means whereby we could move forwards with a more centralised interface without having to be held back by the very old BBC style interfaces. Quite how this would have affected BASIC, I'm not sure. Would the INKEY() operations have been updated to use it? Not sure, but it seems like the most logical way forward. Certainly there was a lot more scope for development. Scripted operations and key naming were noted as future developments, amongst others.

If all this is very confusing it's because a) it is, and b) I may not have explained it as well as I could. The documentation released at the time describes a few of the nuances I've skimmed over here, anyhow.

WindowScroll

Whilst the scroll operations from the mouse were available through APIs, they aren't actually useful to the user unless they have an effect on the screen. However, there is a reasonable amount of scope to decide what that effect should be. There are two major concepts which can affect where the scroll should take place - the current caret focus, and the current position of the pointer. Strictly, there could be a third - the topmost visible window. I discounted this very early on because, unlike Windows, it was very common not to be working on the foremost window. Whilst it would be simple to support such behaviour, a few tests at the time showed that the window you were trying to manipulate was rarely the foremost - at least not for my common usage pattern. Maybe that was restrictive, but that's the choice I made at the time.

At times - regularly, in fact - there would not be a caret to take the focus, but usually you would still want to be able to scroll a window or control. Similarly, some windows would never get the focus. Filer windows (prior to the implementation of keyboard control) were such a case, and being unable to scroll them seemed pretty unreasonable.

Because the Filer generally didn't have the focus, but you might be working with an editor, it would be common to need to scroll a Filer window whilst the editor had focus. This presented another usage scenario. Other desktop systems had different ways of handling the scroll, and adopting any one of them would be to the detriment of proponents of a different system - plus RISC OS has general usage patterns which are not the same as those of other desktop systems.

So, I created 4 different schemes for deciding what the scroll operation should affect:
  • 'Focus', which only affects the window which is focused and does nothing if no focus is active.
  • 'Pointer', which only affects the window which the pointer is over.
  • 'Focus or pointer', which uses the focus if there is one, or the window the pointer is over if there is no focus.
  • 'Favour higher', which selects the highest window in the stack between the window with the focus and the window the pointer is over.

The latter option might seem difficult to explain, but it makes the most sense when you try it. The general reasoning I had was that if you have a window which has the focus, and then move to get a file from the filesystem, the latter will be higher in the stack than the focus, and so will receive the events. Then you'll return to the main editor window (or naturally leave the pointer on the background or IconBar) and the target of the scrolls returns to the main window. This was my personal preference, but the other options all had their merits and drawbacks as well.
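
Expressed as code, choosing between the schemes is only a few lines. The following sketch uses hypothetical window_of_focus(), window_under_pointer() and higher_in_stack() helpers standing in for the WindowManager enquiries the real module would make:

```c
/* A minimal sketch of choosing the scroll target under the four schemes. */
typedef enum {
    SCHEME_FOCUS,
    SCHEME_POINTER,
    SCHEME_FOCUS_OR_POINTER,
    SCHEME_FAVOUR_HIGHER
} scroll_scheme_t;

typedef int window_t;             /* 0 meaning 'no window' */

extern window_t window_of_focus(void);
extern window_t window_under_pointer(void);
extern window_t higher_in_stack(window_t a, window_t b);

window_t choose_scroll_target(scroll_scheme_t scheme)
{
    window_t focus   = window_of_focus();
    window_t pointer = window_under_pointer();

    switch (scheme) {
        case SCHEME_FOCUS:
            return focus;                      /* may be 'no window' */
        case SCHEME_POINTER:
            return pointer;
        case SCHEME_FOCUS_OR_POINTER:
            return focus ? focus : pointer;
        case SCHEME_FAVOUR_HIGHER:
            if (!focus)   return pointer;
            if (!pointer) return focus;
            return higher_in_stack(focus, pointer);
    }
    return 0;
}
```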

What the scroll should mean once you have found the window you are going to operate on is the next fun problem. If the window which has been selected (by any of the methods above) actually has an icon in it, we might want the scroll to affect that icon instead. For example, a slider might want to be controlled by the scroll wheel, or a set of menu items might wish to be scrolled through.

The general way to indicate that an icon wants a behaviour beyond that of the basic operations is to add a command to the validation string. However, validation strings were beginning to get very full. The commands in them were a single case-insensitive character, followed by optional parameters, and each command ended at a ';' character. Many of the commands in the range 'A' to 'Z' had been used, and adding another which just flagged that the icon wanted to handle scroll operations felt wasteful. Plus, I couldn't use the 'S' command for scrolling because 'S' was already used to mark sprites <smile>.

To try to reduce the problems that this might cause (and make it easier to add new flags in the future), I defined the 'Y' validation command to mean 'boolean flag' (aka 'Yes'). This was similar to the way that the 'K' validation grouped single character flags for keyboard input handling. The 'S' flag was simple enough to add here.
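
Checking for the flag is then just a matter of scanning the icon's validation string for a 'Y' command containing an 'S' flag. Something along these lines (a sketch which ignores escaping, not the actual WindowScroll code):

```c
#include <ctype.h>

/* Return 1 if an icon's validation string contains a 'Y' (boolean
 * flags) command which includes the 'S' (scrollable) flag.
 * Validation commands are single case-insensitive letters separated
 * by ';', so "R2;YS" would match. Escaped characters are ignored
 * here for brevity. */
int icon_wants_scrolls(const char *validation)
{
    const char *p = validation;

    while (p && *p) {
        if (toupper(*p) == 'Y') {
            const char *flag;
            for (flag = p + 1; *flag && *flag != ';'; flag++) {
                if (toupper(*flag) == 'S')
                    return 1;
            }
        }
        /* Skip to the next command after the ';' separator. */
        while (*p && *p != ';')
            p++;
        if (*p == ';')
            p++;
    }
    return 0;
}
```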

If the WindowScroll module found the flag to be present in the validation string for the icon, it would send the scroll request with that icon handle filled in. The SWI Wimp_Poll reason for Scroll Requests was extended to include an extra field for the icon handle, but it would only be populated when the request was an extended one generated by such a scroll operation.

If the icon wasn't flagged in the validation string, the window was considered next. Windows have always been able to handle special scroll operations themselves. Normally when you click on the arrows on the window you get a scroll by a fixed small amount, and if you click above or below (or to the sides, in the case of a horizontal bar) the scrollbar you get a page scroll. However, the window can say that it wants to handle the actual operations itself, rather than have them performed by the WindowManager. This is most useful for editors where the 'units' which you want to scroll by are variable or differ from the expected amounts that the Wimp will use.

A text editor scrolling in lines is a typical example. For horizontal scrolling, a spreadsheet provides a similar example. Each line (or column) can be scrolled separately and may have different sizes, so scrolling in units is useful to the user.

If the window wants to handle such scrolls itself, a flag is set in the window flags, and the Wimp will use the Scroll Request reason to request a scroll of a suitable size. There were defined values used for the scroll request types in these cases - type 1 being a line or column, type 2 being a page, and type 3 being an 'auto scroll' request. The type 3 requests were created in the RISC OS 4 WindowManager to handle the automatic scroll which could be performed when the window requested it through the new SWI Wimp_AutoScroll API - usually whilst dragging.

When the window requested that it wanted to handle the scrolls itself it would be expecting the type 1 (line/column) or type 2 (page) requests. Type 3 (auto-scroll) had to be explicitly requested by the application, so legacy applications would never receive them. If the mouse scroll requests were to fit in with such applications, and not deliver events that they might not like, there had to be an extra way to signal that the window understood them.

So an extra flag was added to the window to indicate that it wanted to receive the extended scroll requests. These became type 4 messages, although strictly they were multiples of 4, because the scroll device can generate multiple scrolls at a time, depending on how far the wheel (or whatever the control is) is moved.
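
From the application's side, handling these requests collapses to interpreting the direction values in the Scroll Request block. A rough sketch, with the wheel step, line and page sizes, and sign handling all chosen for illustration:

```c
/* Sketch of an application handling a Scroll Request. The direction
 * words at +32/+36 of the event block follow the documented Wimp
 * layout; the step sizes and sign handling here are illustrative. */
#define LINE_HEIGHT 40    /* OS units per line in this (made-up) view */
#define PAGE_HEIGHT 800   /* OS units per page */

void handle_scroll_request(int *block)
{
    int y_dir = block[9];           /* +36: vertical scroll direction */
    int step  = 0;

    if (y_dir >= 4 || y_dir <= -4) {
        step = (y_dir / 4) * LINE_HEIGHT;   /* type 4: wheel, in multiples of 4 */
    } else if (y_dir == 2 || y_dir == -2) {
        step = (y_dir / 2) * PAGE_HEIGHT;   /* type 2: page scroll */
    } else if (y_dir == 1 || y_dir == -1) {
        step = y_dir * LINE_HEIGHT;         /* type 1: line scroll */
    }

    block[6] += step;               /* +24: adjust the y scroll offset */
    /* ...and then pass the block back to Wimp_OpenWindow to apply it. */
}
```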

If the window does not have the extra flag set, but does have the plain scroll request flag set, the WindowScroll module sends it a sequence of line/column scroll requests. If it has the extra flag set, then the new type 4 requests are sent.

If the window does not have either of these flags set, but does have a scrollbar that can be affected by the direction of the scroll which has been performed, a regular OpenWindow request is sent, just as if the window had been manually scrolled. The scale of the scroll used is controlled by WindowScroll's 'speed' configuration.

If the window doesn't have the relevant scroll bars present, but is a pane, the window immediately below it is considered. If that too is a pane then we repeat until we locate a window which isn't a pane. That window then goes through the entire process again, from the check of the window scroll flags.

If the window is a child of another window, the parent is examined, repeating the entire process as above.

If the window fails all these tests, the scroll is ignored.
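
Pulling those rules together, the delivery logic amounts to a cascade something like the sketch below, where the window_*() and send_*() helpers are stand-ins for the WindowManager enquiries and requests the real module would use:

```c
/* A condensed sketch of the decision cascade described above. */
typedef struct window window_t;

extern int       window_wants_extended_scrolls(window_t *w);  /* new flag */
extern int       window_wants_scroll_requests(window_t *w);   /* old flag */
extern int       window_has_relevant_scrollbar(window_t *w, int vertical);
extern int       window_is_pane(window_t *w);
extern window_t *window_below(window_t *w);
extern window_t *window_parent(window_t *w);

extern void send_type4_request(window_t *w, int amount);
extern void send_line_requests(window_t *w, int amount);
extern void send_open_window(window_t *w, int amount);

void deliver_scroll(window_t *w, int amount, int vertical)
{
    while (w != NULL) {
        if (window_wants_extended_scrolls(w)) {
            send_type4_request(w, amount);        /* type 4, multiples of 4 */
            return;
        }
        if (window_wants_scroll_requests(w)) {
            send_line_requests(w, amount);        /* sequence of type 1 */
            return;
        }
        if (window_has_relevant_scrollbar(w, vertical)) {
            send_open_window(w, amount);          /* as if manually scrolled */
            return;
        }
        if (window_is_pane(w)) {
            /* Drop to the window below until we leave the pane stack. */
            do {
                w = window_below(w);
            } while (w != NULL && window_is_pane(w));
            continue;
        }
        w = window_parent(w);                     /* try the parent, if any */
    }
    /* No suitable target was found; the scroll is ignored. */
}
```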

One interesting effect was that if you tried scrolling over a menu with a scrollbar... nothing happened. This was because the Wimp would try to find a task to deliver the message to when WindowScroll sent the OpenWindow request - and a menu isn't owned by a task. I had to make a small update to the WindowManager to make it handle the requests itself if messages were sent to menus.

I have a vague feeling that this would mean that !Madness would gain the ability to make menus wander as well, but I'm only thinking that now - I can't say whether I had thought of it at the time. Nor whether anyone would care.

I am sure that someone will say that this is an over-engineered solution, but I defend it completely in that the scroll behaviour has to fit in with existing applications, but also needs to be available for other applications to use independently. The flexibility and backwards compatibility are high, not to mention the ability to support all the modern features like nested windows (even if they were not widely used yet).

Zytouch touch screen driver

The input interfaces that RISC OS uses for pointing devices are focused on providing relative positioning. This is fine for a mouse, track ball, touch pad, or joystick. However, if you want to use a tablet or a touch screen you need to jump through a few hoops in order to get the data into the Operating System. Not only that, but some of the hoops you have to jump through are not even slightly documented, and you need to know the layout of the Kernel workspace. In some cases I updated the interfaces so that it was possible to avoid using private workspace (the previously mentioned mouse bounding box calls are a good example).

I had used a custom touchscreen interface before, as SiPlan had lent us a control device that they had built for their testing labs. Their system was a metal cased A7000 which had the large face of the case replaced by a touch screen, and used a custom driver written by them. It was a pretty impressive (and solid) device, and using RISC OS became quite interesting with this interface. However, after even a short while using it I found that returning to the regular systems still left me trying to reach for windows to drag them around and tap buttons. RISC OS certainly worked with such an interface - issues with menus aside.

I had added some interfaces in the OSPointer module (as mentioned above) to allow it to handle interfaces that had touch screen input through a new absolute positioning interface. However, I didn't have any sort of tablet or touch screen myself, so the implementation was untested. That didn't sit very well with me, so I found a reasonably cheap touchscreen on eBay and implemented a driver for it. The touchscreen I found was a 'Zytouch' panel, produced by Zytronic. The panel I got was a serial interface, so that's how I drove it.

Initially, I used the Windows software to work out what the API was by monitoring the serial communications, and then implemented the same thing in BASIC to confirm that I had understood it correctly. The interface was really very simple: once initialised, the device would report the raw readings from its 16 horizontal and 16 vertical lines as bytes, followed by a 0 byte (33 values in total per reading). Because the raw data was returned, and there are only 16 lines in each direction, it is necessary to do a bit of maths to try to work out where the finger has been placed. In theory it is possible to detect multiple touches in this way, but the complexity is greater - and not something that I attempted to do at that time.
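
A sketch of that maths - a weighted average of the line readings above a noise threshold, with the threshold and scaling picked purely for illustration - might look like this:

```c
#include <stdint.h>

#define LINES 16   /* sensing lines in each axis */

/* Turn one reading (16 horizontal and 16 vertical raw line values,
 * terminated by a 0 byte) into a single touch position by taking a
 * weighted average of the lines above a noise threshold. */
typedef struct { int touched; int x, y; } touch_t;   /* x,y in 1/256ths of a line */

static int centroid(const uint8_t *values, int count, int threshold)
{
    long weighted = 0, total = 0;
    int  i;

    for (i = 0; i < count; i++) {
        if (values[i] > threshold) {
            weighted += (long)values[i] * i;
            total    += values[i];
        }
    }
    if (total == 0)
        return -1;                       /* nothing pressed on this axis */
    return (int)((weighted * 256) / total);
}

/* packet holds the 33 bytes of one reading: 16 + 16 values then 0. */
touch_t decode_packet(const uint8_t packet[33], int threshold)
{
    touch_t touch;
    touch.x = centroid(&packet[0],     LINES, threshold);
    touch.y = centroid(&packet[LINES], LINES, threshold);
    touch.touched = (touch.x >= 0 && touch.y >= 0);
    return touch;
}
```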

The accuracy wasn't really all that great, but I wasn't especially worried - I was only really interested in testing that the OSPointer touch interface was usable and worked relatively well. The implementation was initially just a simple C library that drove the serial port, and reported results through a polled interface. It wasn't a particularly complex implementation, and I intended to release it under a BSD licence. This was back in about 2006, and there weren't any other drivers around - at least none that I had found at the time, hence the reverse engineering of the serial data.

Before I released it, though, I wanted to get an OK from Zytronic themselves that this wouldn't be a problem. I sent them an email to say what I had written and that I would like to release the results as open source. I didn't get a reply. I wonder whether I don't come across too well by email, or there's some email gremlin eating responses from people. But I don't wonder too hard.

The C library was simple to hook up to the pointer vector in a module and worked reasonably well - at least within the limits of the accuracy with which it could read the panel. I wasn't doing anything particularly complex in any case. It raised the obvious issues of pressure on the device, and the lack of a calibration interface - neither of which were provided for through the current API, and both of which would need addressing later.

The absolute positioning interface had worked, and whilst there were some wrinkles still to work out, it served its purpose pretty well. I was quite pleased with the driver really, although disappointed that I had heard nothing from the company. It got filed away under the abandoned things, but it had still done its job <smile>.

IRMan

The support for remote control devices was near non-existent on desktop RISC OS. Whilst Acorn had developed a number of infrared drivers, which acted as keyboard devices, none of these were available to the desktop community. They all used special drivers for the specific hardware used by their customers' devices, although they did interact through a central switcher module which allowed the different controls to be handled in a similar manner.

I created a module called IRMan, which received serial data from the product of the same name by a company called 'Evation' (who have now vanished, I think). This was actually the very first thing that I bought online <smile>. The 'IRMan' device is an infrared receiver that converts the signal it receives into a short serial sequence. The sequence is, for most devices, unique for a given button. Whether the receiver can decode the signal, and whether repeats of the button produce different codes, depends on the protocol and on how the remote functions.

My IRMan module would receive these presses and convert them into controls for the system. Initially I only implemented AMPlayer controls, and had hard-coded the button codes directly into the module. With the large number of remotes that are around, I wanted to allow greater flexibility, and to be able to do other things besides controlling my MP3 playing software, which I had called AMPlayer (see a later ramble about that mini-project).

To make things more flexible, the module had support for bindings for the sequences. With the *IRMan_Bind command it was possible to configure any infrared sequence to control AMPlayer, run arbitrary commands, or simulate key presses. The implementation was pretty robust, although learning codes was very tedious - however, I've used the Linux 'lirc' tool for learning codes, and it is even more tedious and error prone.
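
The bindings can be thought of as a simple lookup from received sequence to action. The sketch below invents its own structure and example codes (the real device reports short fixed-length codes which vary between remotes), so it is only a model of the idea, not the module's implementation:

```c
#include <string.h>

/* A toy model of the binding table: a received IR sequence is matched
 * against configured bindings and dispatched to one of the supported
 * action types. The sequences shown are invented for illustration. */
typedef enum { ACT_AMPLAYER, ACT_COMMAND, ACT_KEYPRESS } action_type_t;

typedef struct {
    const char   *sequence;   /* bytes reported by the receiver */
    action_type_t type;
    const char   *argument;   /* AMPlayer operation, *command, or key */
} binding_t;

static const binding_t bindings[] = {
    { "\x96\x3f\x00\x00\x00\x00", ACT_AMPLAYER, "Pause"           },
    { "\x96\x41\x00\x00\x00\x00", ACT_COMMAND,  "Filer_Run MyApp" },
    { "\x96\x42\x00\x00\x00\x00", ACT_KEYPRESS, "F12"             },
};

const binding_t *lookup_binding(const char *sequence, size_t length)
{
    size_t i;
    for (i = 0; i < sizeof(bindings) / sizeof(bindings[0]); i++) {
        if (memcmp(bindings[i].sequence, sequence, length) == 0)
            return &bindings[i];
    }
    return NULL;    /* unrecognised button */
}
```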

Although Evation are no longer around, there are other manufacturers who produce similar devices which work in similar ways - I bought another device a few years ago, and despite producing different codes to the original device, the receiver was equally capable.

The IRMan module was configurable, although it was a little fiddly to do so - mostly it was a matter of adding the relevant commands to an Obey file which could be run to reconfigure the module.

MetaKeys

Related to the handling of remote keys was the handling of combinations of key presses. In the description of the new key press handling which was introduced in RISC OS 4, I had defined the use of the combinations of the 'Windows' keys. Although they were not implemented in the current versions of the operating system, the combinations were reserved so that developers could ensure that any provision they made would match these.

The MetaKeys module was intended to provide the means by which operations could be bound to these combinations. At first it provided a way in which 'Windows + non-shift key' presses could be bound, allowing arbitrary operations to be attached to those key presses. The first version of the module allowed the bindings to be used in a similar way to the function keys, using system variables. This was only a stopgap.

I would bet that most users don't know that you can configure the function keys with '*Key <number> <string>', and that the string will be expanded when you press the key in the desktop. The WindowManager performs the expansion for you if you press it whilst the caret is in a writable icon.

A separate module, which I started but never completed, took the binding implementation from IRMan and the inputs provided by the MetaKeys to provide a system of commands which could be executed by other modules. I wanted to provide a way in which applications could receive commands from the key input sources, and modules could also receive them. The implementation allowed registration of binding commands (to be dispatched to modules), general press messages (which would be passed through Wimp messages), and the existing command launch mechanism (which could be used by applications that were not running, for example, to launch them).

The different methods of distributing the key messages meant that the implementation was a little more complicated than I liked, and diagnosing why a message did not arrive in a set place was more difficult. Additionally, it required an extra mapping stage for each of the operations - the internal key press needed to be mapped to a key string (for example 'Windows-B', 'Play' or 'Sleep'). This key string would then be passed to the dispatcher, which would look up the binding to use.
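
A toy model of the two stages - internal key code to readable name, then name to binding - might be as simple as this, with the codes and names invented for the example:

```c
#include <stdio.h>

/* Sketch of the two-stage mapping: an internal key press is first
 * turned into a fixed readable name, and that name is then looked up
 * in the bindings. Key codes and names here are illustrative. */
typedef struct { int code; const char *name; } key_name_t;

static const key_name_t key_names[] = {
    { 0x101, "Windows-B" },
    { 0x202, "Play"      },
    { 0x203, "Sleep"     },
};

const char *name_for_key(int code)
{
    size_t i;
    for (i = 0; i < sizeof(key_names) / sizeof(key_names[0]); i++) {
        if (key_names[i].code == code)
            return key_names[i].name;
    }
    return NULL;
}

/* Dispatch stands in for the module, Wimp message and command launch paths. */
void dispatch(const char *key_name)
{
    if (key_name == NULL)
        return;                           /* nothing bound for this key */
    printf("dispatching binding for '%s'\n", key_name);
}
```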

The idea here was that the readable strings could be fixed for all devices and therefore allow a simpler configuration. The actual names of the keys could be displayed verbatim, or translated into a localised language. It was still a way off being usable, but it was moving in the right direction, and with KeyInput already beginning to take over control of the input system, it would be easier to add new operations in the future.