Legacy mode changes

Whilst the video abstraction was going on, the mode changing and reading code was being updated. In quite a few cases, the current screen mode was read with SWI OS_Byte 135, which is a bit of a silly call to use for reading the mode - its primary function is to read the character at the current text cursor position, and only as a side effect does it return the current screen mode in R2. Worse, it returns the mode as a 32 bit value - which is just plain wrong, because the SWI OS_Byte interface is an 8 bit interface.

Surprisingly, this was a common pattern. Whilst it would 'work', it is a slow call - it has to read data from the screen and compare it against the system font characters. Plus there's the SWI OS_ScreenMode call, which is the modern way to read the current mode. Wherever possible, the code was updated to use the more modern calls.
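The 8 bit problem is easy to model. This is an illustrative Python sketch, not the real SWI interface: it just shows why a mode number reported through an 8 bit interface cannot be trusted, whereas the SWI OS_ScreenMode result has the full word available.

```python
# Illustrative model only: OS_Byte is defined as an 8 bit interface, so
# anything it reports back can only carry 8 significant bits.
def mode_via_os_byte_135(current_mode):
    # What a legacy caller could legitimately rely on: 8 bits in R2.
    return current_mode & 0xFF

def mode_via_os_screenmode(current_mode):
    # OS_ScreenMode has a full 32 bit result, so nothing is lost.
    return current_mode & 0xFFFFFFFF

# For small mode numbers the two agree...
assert mode_via_os_byte_135(12) == mode_via_os_screenmode(12)
# ...but a value above 255 cannot survive the legacy call intact.
assert mode_via_os_byte_135(0x100 + 12) != 0x100 + 12
```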

Like many of the bits of debugging, this was made easier by the work that had gone into improving the detail that was available through the Back Trace Structures ('BTS'), and the simple debugging that the JavaScript 'Service List' module provided (see the Testing and Debugging ramble).

Amusingly, in a couple of places the screen mode was still being selected using VDU 22 - the BBC interface for changing modes. These, too, were swatted. They didn't improve the behaviour, or the speed, in any significant manner, but it's just silly to have such ancient calls still in use. It doesn't help to encourage other people to update their code to use modern interfaces if the OS itself still uses the obsolete forms.

In one place there was even a use of SWI OS_Byte 160, which is the BBC equivalent of the SWI OS_ReadVduVariables call. This call was very old, and had been moved into the LegacyBBC module - with just a simple mapping to the modern calls. Updating the relevant components to use the newer calls wasn't all that hard, though. It still amuses me that there were places where BBC style calls were used when more modern interfaces exist.

Mode restrictions

Changing the way in which mode selection and usage worked gave opportunities to remove some of the legacy issues completely. Implementing some features of the video system through the new abstracted interfaces would have produced a less tidy interface for features that are almost never used. In particular, the old BBC screen modes were of sufficiently low resolution that many video cards or monitors couldn't be driven that low - and they had low numbers of colours (2, 4 or 16), which weren't likely to be supported on modern graphics hardware.

The low resolution modes had different values for the 'bytes per character' and 'bits per pixel' configuration, which meant that each pixel was doubled up horizontally. There are only a few modes that use this configuration - modes 2, 4, 5 and 10 in particular. Since these are not really useful on a modern system, the capability was dropped, making such modes unavailable.
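A small sketch makes the distinction concrete. This is hedged, illustrative Python with invented geometry, not the real mode tables: the point is simply that when the storage per pixel is twice the notional depth, each pixel must be doubled horizontally on output.

```python
# Hedged sketch: a mode is 'pixel doubled' when each pixel occupies
# twice as much storage per row as its notional depth implies, so the
# hardware replicates every pixel horizontally.  The geometry below is
# illustrative rather than taken from the real mode definitions.
def is_pixel_doubled(bytes_per_row, pixels_per_row, bpp):
    stored_bits_per_pixel = (bytes_per_row * 8) // pixels_per_row
    return stored_bits_per_pixel == 2 * bpp

# A mode 2 style layout: 160 pixels across at 4bpp, one byte per pixel.
assert is_pixel_doubled(bytes_per_row=160, pixels_per_row=160, bpp=4)
# A conventional mode: 320 pixels at 4bpp packed into 160 bytes per row.
assert not is_pixel_doubled(bytes_per_row=160, pixels_per_row=320, bpp=4)
```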

Similarly, the implementation of 'gap' modes made some operations a little more difficult, and so they were deprecated. They might work, but the behaviour might differ between implementations of the video driver. The software driver supports them, but other hardware might not bother. The 'gap' itself made each character row 10 pixels high instead of 8; the extra 2 pixels were given a different background colour to separate lines from one another.
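The gap geometry is simple enough to sketch. Again this is illustrative Python rather than any real implementation, showing how the gap inflates the raster line count for a given number of text rows:

```python
# Hedged sketch of gap-mode geometry: each character row is 10 raster
# lines, of which only 8 carry glyph data; the remaining 2 are drawn in
# a different background colour to space the lines apart.
def display_lines(text_rows, gap_mode):
    lines_per_row = 10 if gap_mode else 8
    return text_rows * lines_per_row

# A 25 row text display:
assert display_lines(25, gap_mode=False) == 200
assert display_lines(25, gap_mode=True) == 250   # 50 extra 'gap' lines
```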

Removing these modes made the implementation of video drivers marginally easier, and removed some special cases that cluttered up parts of the code. I'm pretty certain that nobody would miss these modes, and if they did... well, using a 'modern' RISC OS system to run BBC software is a little pointless so I wasn't really that fussed.

Multiple drivers

The driver registration allowed for multiple drivers to be present at any time. I wanted to allow for the situation where two drivers were available, and could be directly switched between. Essentially this provided the same functionality as ViewFinder's ability to switch between VIDC and its video. I had considered the implications (quite a bit) of trying to merge video drivers such that two distinct drivers could form a single display - side by side, or top-to-bottom. The top-to-bottom arrangement was easier, but would require some organisation between the two driver modules to position their logical memory space together. The side-by-side arrangement was even more complex.

I reckoned that, in order to provide direct screen access, it would be possible to align the screen base such that the left screen's row end fell at the end of a 4K page. The right screen's pages for the row started immediately after it, with a gap at the end which would wrap to the start of the next page. Because rows would not be contiguous for either of the video drivers, this would rely on them supporting gap regions at the end of rows.
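The address arithmetic can be sketched, under the assumption of 4K pages and rows that fit within a page. This is an illustrative model of the scheme described above, not anything that was implemented:

```python
# Hedged sketch of the side-by-side layout: the left screen's row is
# placed so that it *ends* exactly on a 4K page boundary, and the right
# screen's row starts on the page immediately after it.
PAGE_SIZE = 4096

def rows_layout(left_row_bytes, right_row_bytes, rows):
    """Return (left_offset, right_offset) for each row in a shared area."""
    left_pages = -(-left_row_bytes // PAGE_SIZE)    # ceiling division
    right_pages = -(-right_row_bytes // PAGE_SIZE)
    stride = (left_pages + right_pages) * PAGE_SIZE
    layout = []
    for row in range(rows):
        base = row * stride
        # Left row right-justified within its pages: ends on a boundary.
        left_off = base + left_pages * PAGE_SIZE - left_row_bytes
        # Right row starts on the very next page; any slack at its end
        # becomes the gap region that wraps to the next row's pages.
        right_off = base + left_pages * PAGE_SIZE
        layout.append((left_off, right_off))
    return layout

layout = rows_layout(left_row_bytes=1280, right_row_bytes=1280, rows=2)
assert (layout[0][0] + 1280) % PAGE_SIZE == 0   # left row ends on a page
assert layout[0][1] % PAGE_SIZE == 0            # right row starts on one
```

The gaps this creates at the start of the left row and the end of the right row are exactly why both drivers would have to support gap regions.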

Aside from the headaches this caused, I didn't think it was particularly practical. Another possibility for using multiple displays was to allow only accelerated operations (that is, no direct screen access). This would mean that in many cases the operations would be duplicated to both drivers with an offset, where the operations would affect both (or sent to just one where it was obvious that the operation would only affect one display). The flood fill operation would be unable to function like this, and any direct screen operations would become harder. One way to implement direct screen access where the memory areas were not mapped would be to use the aborting area trap to perform the translations, never mapping in a relevant page. This would make such access significantly slower.

Another form of operation when using multiple video drivers might be the primary and secondary screen - for example the primary being used for the desktop, and the secondary being used for a game, debugger, or some other full screen system. Because both are present simultaneously this offers some areas of confusion about what it means (for example) to have a mouse - does the mouse control the pointer on the primary screen? If the primary is swapped with the secondary (that is, we're now writing output to the secondary) does the mouse move over there? Do VSyncs happen for both displays or just the one (of course this is also a fun issue if you've got side-by-side displays)? Is the pointer shape configured for the primary screen transferred to the secondary on changing the primary?

Any form of multiple displays meant that there were multiple contexts available - each display could, logically, be running with a different screen depth and resolution (the use of side-by-side tended to avoid depth differences). Retaining the contexts wouldn't have been too hard, as there was already support for such things due to redirection to sprite. Managing it was a little more tricky.

All in all, I decided to sidestep these issues: multiple drivers were allowed to run concurrently, and implicitly two (or more) could be displaying output at the same time, but the current implementation would explicitly shut down a display when another display was selected - so if you switched from the VIDC driver to the ViewFinder driver you would find that the VIDC display would power down (usually turning off the display it was connected to), and vice-versa.

I wasn't happy with this really, as it all felt a little restrictive, but there was no real way that you could retain the RISC OS single display interfaces and gain a lot of extra functionality. In some cases things would need to be reconsidered. Given time constraints, thinking about it after it had been delivered and used by people for a while was probably the most productive course. No doubt people would explain at length why what had been done was wrong, and maybe some of their comments would help. Maybe.

No display, VideoGuard

Another side effect of having any number of drivers was that 'any number' included none. The fact that there might not be a graphics driver present wasn't really a part of the design of any part of RISC OS. Most places assumed that you read the screen update base and used it - which is not unreasonable really. The very first time the system was started up without any graphics driver present (that is, nothing set up in the Kernel, with the driver in a separate module) it was blown away as the system tried to clear the screen. The 0s in the screen update base were obeyed, and it cleared a section of zero-page before finding that the machine couldn't work.

A few places were updated so that a screen base of 0 meant that there was no base, and whatever operation had been requested should just give up (silently, as if it had succeeded). There weren't too many places where this was needed, fortunately.

The same situation is true not only when the system starts up, but also when a driver is restarted - for example, if you reinitialise the video driver when it is the only one present. The module is finalised, releasing its workspace and deregistering itself, which leaves no active display. Then it starts up again, and the display becomes active - anything between those events might output to the 'screen'. It needs to be safe to do so, or the system will crash.

This is also the main reason that the system keeps a record of the display configuration for a driver, even when the driver has terminated. If the driver returns, it can restore the mode back to what it was when the driver was killed. Try *RMReInit VideoHWVIDC (or whichever driver you use) in a TaskWindow for a little bit of scariness. Well, scary if you think about what it's just done.

The display number which is used on start up is configurable, so that the default display doesn't end up being forced to the same one all the time. This was necessary to ensure that it was possible to start up in ViewFinder mode, rather than VIDC mode. In this case, the configured default display is set as the active one on start up. When the drivers initialise they are only told to become active if they match the active display (in exactly the same way as reinitialising the driver retains the current display without switching to the non-selected display).

The VideoGuard module was created to get around the problem of a badly configured default display. If, for example, you configured display 2 as your default, but a driver for display 2 never appeared during the system start up, you would be left without any display at all. It's tricky to get out of that state, as you're typing blind. The VideoGuard module waits a bit after the start up, and if there isn't a registered driver after a period it will forcibly select display 0 as the active one, on the assumption that display 0 ought to be your fallback display and ought to work.
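The policy is simple enough to sketch. This is an illustrative Python model of the decision, with an invented grace period and function names, not the VideoGuard module itself:

```python
# Hedged sketch of the VideoGuard policy: if no driver has registered
# within a grace period after start up, fall back to display 0 on the
# assumption that display 0 ought to always work.  The timeout value
# and the names here are illustrative.
FALLBACK_DISPLAY = 0
GRACE_PERIOD = 5.0            # seconds; an assumed, illustrative value

def guard_decision(elapsed, registered_displays):
    """Return a display number to force active, or None to do nothing."""
    if elapsed < GRACE_PERIOD:
        return None           # still within the grace period
    if registered_displays:
        return None           # a driver turned up; nothing to do
    return FALLBACK_DISPLAY   # user is typing blind: force the fallback

assert guard_decision(1.0, set()) is None           # too early to act
assert guard_decision(6.0, {2}) is None             # driver 2 registered
assert guard_decision(6.0, set()) == FALLBACK_DISPLAY
```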

There is also a little extra that's done to force there to be some output on the screen, otherwise the forced display 0 would end up being a black screen as that's the default for a new mode. I believe that I chose to insert a ctrl-@ (NUL) into the buffer as this should be benign, and generally the system would be sitting at the boot menu prompt which would detect this and redisplay its menu.

VSyncs

VSyncs need to be generated by the graphics hardware in order that events which rely on them occur at the correct time. The display VSync is usually used to synchronise video output so that there is no tearing in the display - the effect of having a different frame displayed part way through the refresh of the screen. On older systems, it was common for games to use the VSync to trigger music or sound events, which had strange effects if the mode used had a higher refresh rate than the music expected. Whilst less common, this still happens on occasion.

The video driver would take on the responsibility of triggering the VSync events whilst it was the active display. As discussed, there are a number of ways in which multiple screen drivers could be implemented, and because I opted for the less complex single active display, this meant that only a single VSync event would be used. I don't think that was a bad choice, but it is a little more limiting. Baby steps, I reckoned - I could have spent an age trying to get everything right but would never have had anything to actually give to people.

Because the VSync interrupt is known to the graphics driver, and is claimed and managed by it, the actual hardware interrupt that controls the event can change (especially if the active display changes). This shouldn't really affect any applications, though, so I felt it was quite safe and reasonable. If there were any good reasons for fixing the hardware interrupts for 3rd party use (without direct reference to the hardware), I couldn't see them.

Display driver interface

The display drivers registered themselves with the Kernel using the SWI OS_ScreenMode API. Each display driver has a description block that allows it to provide information about its capabilities. The description block consists of a set of tagged elements which provide different types of information about the device. The name of the device, for example, is given so that it can be used for selection purposes. The device name was intentionally not translated, so that it can be used as a token for lookup in the future.
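The tagged-element shape can be sketched as follows. The tag numbers and names here are invented for illustration, not the real block format; the point is that a walker can skip tags it does not understand, which is what makes the format extensible:

```python
# Hedged sketch of a tagged description block: a sequence of
# (tag, value) elements that can be walked, skipping any tags that are
# not recognised.  Tag numbers here are invented for illustration.
TAG_END   = 0
TAG_NAME  = 1   # device name (untranslated, usable as a lookup token)
TAG_MODEL = 2   # hardware variant

def parse_description(elements):
    """Walk a (tag, value) list into a dict, ignoring unknown tags."""
    known = {TAG_NAME: 'name', TAG_MODEL: 'model'}
    info = {}
    for tag, value in elements:
        if tag == TAG_END:
            break
        if tag in known:
            info[known[tag]] = value
    return info

desc = parse_description([(TAG_NAME, 'VideoHWVIDC'),
                          (99, 'future feature'),    # unknown: skipped
                          (TAG_MODEL, 'VIDC20'),
                          (TAG_END, None)])
assert desc == {'name': 'VideoHWVIDC', 'model': 'VIDC20'}
```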

Some display devices would not be able to display certain modes - for example, 16 colours and below are generally not supported for display by many controllers. A table of formats is provided as part of the description block, which can describe the standard RISC OS modes. This table is used by the ScreenModes module to filter the list of defined modes down to just those that are supported by the hardware.
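The filtering step might look something like this. The format table contents and mode names are illustrative assumptions, not the real data structures:

```python
# Hedged sketch of ScreenModes filtering mode definitions against a
# driver's format table.  The formats and mode list are illustrative.
driver_formats = {8, 16, 32}          # bits per pixel the hardware accepts

def filter_modes(mode_list, formats):
    """Keep only the mode definitions the display hardware can show."""
    return [mode for mode in mode_list if mode['bpp'] in formats]

modes = [{'name': 'mode 12 (16 colours)', 'bpp': 4},
         {'name': '640x480 256 colours',  'bpp': 8},
         {'name': '1024x768 32K colours', 'bpp': 16}]
usable = filter_modes(modes, driver_formats)
# The 16 colour mode is dropped; the deeper modes survive.
assert [m['bpp'] for m in usable] == [8, 16]
```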

Similarly, a separate tag defines the alignments required by the driver for screen modes, in both dimensions. Although the OS requires that lines start at word aligned addresses, it doesn't impose any other restrictions on the modes. The hardware might have other requirements though. If it requires that the screen start on 16 byte aligned addresses, it can be specified in the table, and similarly if the number of rows has to be a multiple of a particular number of lines, this can also be supplied. Again, the ScreenModes module will use this to restrict the modes in the definition.
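A mode can then be checked against those requirements. The particular alignment values below are illustrative assumptions, matching the examples in the text:

```python
# Hedged sketch of an alignment check: a driver might demand, say, a
# 16 byte row start alignment and a row count that is a multiple of
# some number of lines.  The values used here are illustrative.
def mode_satisfies_alignment(row_bytes, rows, byte_align=16, row_multiple=1):
    return row_bytes % byte_align == 0 and rows % row_multiple == 0

# 640 pixels at 8bpp gives 640 byte rows: fine for 16 byte alignment,
# and 480 rows is a multiple of 8 lines.
assert mode_satisfies_alignment(640, 480, byte_align=16, row_multiple=8)
# A 321 byte row would fail a 16 byte alignment requirement.
assert not mode_satisfies_alignment(321, 480, byte_align=16)
```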

The physical memory available to the driver is also described, as this may prevent certain modes from being selectable. Although it's unlikely that there would be the 480K restriction of VIDC1, the ARM7500 is limited to 1MB, and in any case the RiscPC's VRAM can be absent (same as ARM7500), 1MB, or 2MB. The address space available to Podules was 16MB, so there was an upper limit on the contiguous space that could be allocated for the screen (unless special tricks were pulled with abortable memory and dynamic page mapping).
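The memory check itself is just arithmetic, sketched here with a couple of the memory sizes mentioned above:

```python
# Hedged sketch: rejecting modes that do not fit in the driver's
# available video memory.
def mode_fits(width, height, bpp, vram_bytes):
    return (width * height * bpp) // 8 <= vram_bytes

MB = 1024 * 1024
# 640x480 in 256 colours needs 300K: fine even within the ARM7500's 1MB.
assert mode_fits(640, 480, 8, 1 * MB)
# 1024x768 at 32 bits per pixel needs 3MB: too big for 2MB of VRAM.
assert not mode_fits(1024, 768, 32, 2 * MB)
```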

There was also a field for a device model, which could be used by the driver to describe the particular variant of the hardware. This was originally an oversight on my part - I assumed that the device name would be sufficient. David Moore convinced me otherwise. The most obvious real example is the ViewFinder which can control a number of different types of PCI cards, so knowing which variant you have got is important, although the device is still a ViewFinder. More generically, video controller chips are updated and it isn't unreasonable to find a single product using a variety of different revisions or variants of a chip. Having a defined way to describe this makes for a useful way to differentiate them without resorting to driver specific commands.

In the future, other feature tags could be added which supplied extra information about the driver and hardware capabilities. When a suitable memory reservation and sprite manipulation API had been defined, it could have been described here, which would allow a generic sprite caching and manipulation system to work without needing to be supported directly by the driver. Similarly for any other 2D or 3D acceleration functionality which might be provided.

Screen banking

The graphics drivers can define how they provide different screen buffers. In particular, applications cannot rely on reading the size of the 'screen' dynamic area (because that makes no sense - there may not be just one 'screen' area) and dividing it by the screen size to get the number of screen banks. Many applications do this, and it isn't really practical if the display is variable and can be implemented differently.

One possible way that you could have determined the number of banks would have been to increase the size of the screen dynamic area to its maximum, and then select each bank for update, checking whether you got it. That's equally impractical, again because the dynamic area might not be known. Just dividing the size of the screen memory by the product of the row length and rows might be wrong for some hardware where there are restrictions on the alignment of the screen start address.
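The alignment problem can be made concrete with a sketch. This is illustrative Python with invented geometry, assuming a 4K start-address alignment; it shows how the naive sum-and-divide can disagree with an alignment-aware count:

```python
# Hedged sketch: counting banks when the hardware requires each bank to
# start on an aligned address, so each bank is rounded up before
# dividing the memory between them.  The geometry is illustrative.
def align_up(size, align):
    return -(-size // align) * align      # ceiling to a multiple of align

def count_banks(vram_bytes, row_bytes, rows, start_align=4096):
    bank = align_up(row_bytes * rows, start_align)
    return vram_bytes // bank

MB = 1024 * 1024
frame = 418 * 500                          # 209000 bytes per frame
assert MB // frame == 5                    # naive division over-counts
assert count_banks(MB, 418, 500) == 4      # alignment-aware answer
```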

As there was no way to perfectly retain the old behaviour, and no way that could be easily determined from the parameters that were already exposed, I extended SWI OS_ScreenMode to allow operations to be performed on banks. New reasons were defined to allow applications to:

  • Count the number of available banks.
  • Select, or read the banks to update or display.
  • Copy between banks.

These are the most common operations that would be performed on the banks. The implementation of SWI OS_Byte 112/113 was replaced with calls to these SWIs, which allows everything to work as it used to - the API is identical, but the new location of the call makes far more sense. The use of bank switching dated back to the BBC, hence the use of the old SWI OS_Byte calls. The added ability to count the number of banks in the mode means that it's far simpler to work out how many banks will be required.

Because there is no guarantee that the banks will be contiguous in memory, it also follows that there is no guarantee that they will even exist in memory. If a bank is neither displayed, nor updated (and possibly even if it's displayed but not updating) it doesn't need to be in logical memory. This gives the driver the freedom to map in only the memory areas that are required, and should reduce the amount of logical space that gets used up with multiple screen banks.

The copy operation was intentionally simple - the copy would take place for the entire screen bank. It is possible that this should have been reduced to copying a rectangular region, but that would have made the operation harder in general. As with other 'Mode' entry points, these calls are passed through the VideoV vector for the driver to handle. On most modern hardware a bank copy would be a fast, accelerated operation, so it would be significantly better than its software counterpart.
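The whole-bank semantics can be sketched in a line or two. This is an illustrative model only (banks as byte arrays), showing that the entire source bank is copied with no sub-region option:

```python
# Hedged sketch of the whole-bank copy semantics: the entire source
# bank is copied over the destination, with no rectangular region.
def copy_bank(banks, src, dst):
    """Copy one whole screen bank over another (banks as bytearrays)."""
    banks[dst][:] = banks[src]

banks = {1: bytearray(b'frame-one'), 2: bytearray(9)}
copy_bank(banks, src=1, dst=2)
assert banks[2] == bytearray(b'frame-one')
```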