There are a lot of things that I would have liked to do throughout RISC OS. Many had dependencies on other areas being improved to the point at which they provided the facilities to enable them, and which were either in the process of being implemented, or had been planned. That said, I never produced what you might call an architecture roadmap or design for future development. Much of the future direction was in my head, although it was regularly discussed with people. Sadly, many of those discussions ended up being passed on to people through various forums as 'we are going to do ...', which was rather misleading as to how firmly the future direction had actually been decided.
For the video system, there were already abstractions in progress for hardware acceleration. Initially, this focused on the primitive 2D operations, isolating them from the specific implementation of RISC OS graphics state.
The graphics state had long been tied to a single context, with sprite redirection, and the redirection context required certain knowledge about the scope of the video system. It was not, for example, possible to record details about the video system which were not related to the primitive state - so if additional state was needed (arbitrary clipping regions, transparency masks, 3D state, z-buffer areas, Teletext context, to pick a few examples) it required that the redirection code within the Kernel be aware of this. There needed to be a way to extend the redirection, so that it could hold details of additional state that other installed components might require. It had not become a significant issue, but as more features were added to the graphics system, the need to retain state became more relevant.
More immediately, it would have been useful to include transparency state in the plotting context, as an operation that could be used as part of a plot. This would allow rendering of Draw paths with transparency, and the like. Obviously it would have been a large change across a number of components to take account of this property, but it would have been a useful first step along the road to updated contexts, highlighting the areas in which other related context changes would be required.
Any change to the video system needs to be mirrored within the printing system. Failing to do that means that operations which are performed on the regular system will appear differently when printed. Some of the added operations, such as plotting sprites with transparency and the DrawFile clipping operations, were not updated in the printing system, so they would not work correctly. I had avoided a lot of work on the printing system because it was quite heavy assembler, and the way in which the printing system was modularised wasn't exactly easy to work with. The bitmap printer drivers would work fine, as they use the standard sprite operations. Any of the non-bitmap drivers, such as the PostScript driver, would find themselves not knowing how the operations worked. Sprite operations with transparency might not work, and those with alpha channels might not render at all - or might render with the wrong data (if the driver interpreted the mask as 1 bit-per-mask-pixel).
On the hardware acceleration front, there was a skeleton driver that was able to provide operations through the acceleration API, and support for capturing sprite operations such that it could be used with sprite caching. The API was still lacking some important features. Whilst it was quite capable of handling system font output, there was no acceleration API for generalised bitmap fonts (for example, ZapRedraw), or anti-aliased font support. The former should be relatively easy, as there already exists a basic acceleration API for ViewFinder. Making that generic was one of the later things on my To-Do list, but finding a suitable solution which would work for common systems, rather than being specific to ViewFinder, was delaying it.
The anti-aliased font API would need to be extracted out of the existing FontManager cache and updated to be suitable for acceleration. The easiest way I could see of doing this would be either to issue services to notify the drivers that cached entries had expired (as that would be simple to insert into the FontManager's cache handling), or to provide a vector which was called to perform the operations. However, that could easily result in flurries of services, or vector calls, to flush entries as fonts changed, especially if the font cache was too small for the active set of fonts in use. Another possible way to deal with this would be to use a sequence number with cached font chunks (the FontManager cached characters in chunks of 32, if I recall properly), which would allow the hardware driver to determine when to uncache its copy of the details. It is not possible to keep only a single copy (eg a copy in the hardware driver), because when there are multiple drivers, or a driver is reloaded, the context would be lost.
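The sequence number scheme can be sketched very simply. This is only an illustration of the idea, not any real FontManager structure - all the names and types here are my own invention - but it shows why the driver never needs to be told anything directly: it just compares numbers before trusting its cached copy.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch: the FontManager keeps a sequence number with each
 * cached chunk of characters, bumping it whenever the chunk changes or is
 * evicted. A hardware driver records the sequence number it saw when it
 * uploaded its copy; before reusing that copy it compares numbers, and
 * re-uploads when they differ. */

typedef struct {
    uint32_t chunk_id;   /* which chunk of 32 characters */
    uint32_t sequence;   /* bumped by the FontManager on any change */
} fm_chunk_t;

typedef struct {
    uint32_t chunk_id;
    uint32_t sequence;   /* sequence at the time the driver cached it */
    bool     valid;
} hw_cached_chunk_t;

/* FontManager side: invalidate a chunk by bumping its sequence number. */
void fm_chunk_touch(fm_chunk_t *chunk)
{
    chunk->sequence++;
}

/* Driver side: is our cached copy still usable? */
bool hw_chunk_usable(const hw_cached_chunk_t *cached, const fm_chunk_t *chunk)
{
    return cached->valid &&
           cached->chunk_id == chunk->chunk_id &&
           cached->sequence == chunk->sequence;
}

/* Driver side: (re)cache a chunk, recording the current sequence. */
void hw_chunk_cache(hw_cached_chunk_t *cached, const fm_chunk_t *chunk)
{
    cached->chunk_id = chunk->chunk_id;
    cached->sequence = chunk->sequence;
    cached->valid = true;
}
```

The attraction of this over services or vector calls is that invalidation costs one increment, however many drivers are installed, and a reloaded driver simply starts with nothing marked valid.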
The primitives themselves are handled poorly through the BBC style interfaces, and this was already becoming an issue for some operations. Primitive operations were passed through vectors, primarily WRCHV, which serialised all the operations as bytes. These would then be reconstructed into the relevant operation (mostly character and graphics plotting), before being handed off to the hardware abstraction.
In particular, this restricts the coordinate space to 16 bits signed, which is not a problem for the screen (as it was at the time) but could easily become an issue for some print operations. 2^15 (32,768) pixels across an A4 portrait page is about 4000 DPI, or about 2800 DPI for the long edge (across A3 portrait). Both are high, but depending on the use, resolutions that high might be required. The coordinate space restriction also exists in the graphics window clipping system, which is passed through the same interface, and the input system, which buffers its data as bytes. The WindowManager has fewer restrictions with respect to coordinates, but the window minimum extent is measured as 16 bit values - not likely to be a problem until displays become significantly larger.
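The arithmetic behind those figures is simple enough to show. A sketch, with the page dimensions in millimetres (A4 is 210mm across, 297mm on the long edge - which is also the width of A3 portrait):

```c
/* The 16-bit signed coordinate space gives 2^15 = 32768 addressable
 * positions in each direction. Dividing that across a page dimension
 * gives the maximum resolution before coordinates overflow.
 * There are 25.4 millimetres to the inch. */

#define COORD_RANGE 32768

/* Maximum DPI representable across a page dimension of 'mm' millimetres. */
int max_dpi(int mm)
{
    return (int)((COORD_RANGE * 25.4) / mm);
}
```

For 210mm this comes out a little under 4000 DPI, and for 297mm a little over 2800 DPI, matching the figures above.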
The sprite system has limitations on the sizes which can be manipulated, due to internal calculations relying on the values being less than 16 bits. Those could be overcome internally by changing the operations to use safer calculations - and as the sprite operations are (mostly) compiled on demand, special variants could be created for very large images.
Draw paths are already shifted up by 8 bits, so the gain in scale which could be expected would only be 8 bits over the existing 16 bits. Draw paths already begin to have issues at high scale factors, so it is possible that there might be issues already with very large content.
The standard matrix operations use a fixed point 16.16 scale factor, so with larger output it is possible to scale well beyond the size which is currently addressable. The additive offsets in the matrix are either raw coordinates, or fixed point 24.8 (Draw units), depending on their use, so these are as safe as Draw paths.
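The 16.16 arithmetic is worth sketching, because it shows why the intermediate result matters: multiplying two 32-bit fixed point values needs a 64-bit intermediate before shifting the fraction bits back out. The helper names here are mine, not any RISC OS API.

```c
#include <stdint.h>

/* The matrix scale factors are 16.16 fixed point: the top 16 bits are
 * the integer part, the bottom 16 the fraction. (The additive offsets
 * are raw coordinates or 24.8 Draw units, as described above.) */

typedef int32_t fix16_t;              /* 16.16 fixed point */

#define FIX16_ONE  ((fix16_t)0x10000) /* 1.0 in 16.16 form */

/* Multiply two 16.16 values (eg a coordinate by a scale factor),
 * using a 64-bit intermediate so the high bits are not lost. */
fix16_t fix16_mul(fix16_t a, fix16_t b)
{
    return (fix16_t)(((int64_t)a * b) >> 16);
}
```

The limitation described above falls out of the representation: the result must still fit in the 16-bit integer part, so a large scale applied to a large coordinate overflows even though the scale factor itself was representable.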
The Font system is not quite as restricted, although its scaling may suffer the same problems as the Draw system (as it degrades to Draw paths at higher scale factors). Internally, the Font system only operates on a per character basis, so should not have any issues.
All of which means that extending the coordinate system has a number of effects beyond purely widened coordinate values. It had never been a significant priority for me, because the scope of changes necessary to use larger sizes would have made some existing programs break and done very little to benefit the system. If I had focused earlier on the printing systems, where the benefit would be realised more obviously, I might have changed that view.
One restriction that had been retained from VIDC was that of a limited pointer definition. The pointer shape is defined, obscurely, by SWI OS_Word 21,0 - and the data block is referenced through a pointer that isn't word aligned, just to make things more ugly. The format of the data is 2 bits-per-pixel, a maximum of 32 pixels wide and high. Colour 0 is transparent, colours 1 and 3 are the main colours that are used for the shape, with colour 2 discouraged because (in the PRM's words) "it does not work correctly on high resolution mono screens".
The vectored screen interfaces which I created did not extend the definition of the pointer - in fact they just propagated the interface directly to the new vector. Another area that I cut a corner to try to get things to work quicker, sadly. The interface defined through the vector should have been rapidly deprecated, and a more flexible interface added.
In its place, a new interface should allow for more flexible pointer definitions. For a start, the parameters for the pointer need to be made available to clients. The maximum number of colours, and the dimension limits should be retrievable through device registration parameters. A flags word for the pointer could provide an indication that the new API was in use by the driver (otherwise we fall back to the legacy vector), and whether there is any hardware pointer support. If no hardware pointer support was provided, a software pointer would take over. I had already implemented support for a software pointer, which was hastily written, but functional. It could have been made production quality without too much difficulty, even though it was implemented in !JFPatch assembler.
The SWI OS_Word 21,0 interface would be deprecated, and in its place we would recommend the use of SWI OS_SpriteOp 36, which already sets the pointer. Instead of SWI OS_SpriteOp 36 calling down to SWI OS_Word 21,0 it would be the other way around. The sprite operation can convert down to the right format as necessary.
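The conversion step is mostly bit packing. A sketch of the core of it, assuming the sprite has already been reduced to the four pointer colours; the pixel ordering (leftmost pixel in the least significant bits, following the usual Archimedes bitmap layout) is my assumption here, as is the function itself.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of the packing OS_SpriteOp 36 would perform when
 * handing a pointer shape down to the legacy OS_Word 21,0 interface:
 * pixels already reduced to the four pointer colours (0 = transparent,
 * 1 and 3 the visible colours) are packed four to a byte, 2 bits each,
 * leftmost pixel in the least significant bits. */

/* Pack one row of 2-bit pixel values (each 0..3) into bytes. */
void pack_pointer_row(const uint8_t *pixels, size_t width, uint8_t *out)
{
    for (size_t x = 0; x < width; x += 4) {
        uint8_t byte = 0;
        for (size_t i = 0; i < 4 && x + i < width; i++)
            byte |= (uint8_t)((pixels[x + i] & 3) << (2 * i));
        out[x / 4] = byte;
    }
}
```

Going the other way - a modern driver accepting a richer format - the legacy call would expand each 2-bit value back out, which is why making the sprite operation the primary interface is the right way around.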
As well as allowing larger pointers (or reducing the size if they were not able to be rendered directly), it would also allow the pointer to use more colours.
The FontManager had been pretty constant for some time. The interface it provides is useful for plain text presentation, but if you want to do anything beyond solid text you have to do some more clever things. Although obviously it is foolish to discourage anyone from trying to do clever things, it is also useful to provide a more integrated way of achieving special effects.
The font rendering system would need to be updated in order that it could be accelerated. The characters which were cached consisted of 15 levels of alpha in a bitmap. Depending on how the characters were cached, it might be trivial to use the bitmap as a stencil, which could be used to apply patterns, or sprite effects.
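The stencil idea reduces to a per-pixel blend where the foreground sample can come from anywhere - a flat colour, a pattern, or a sprite. A sketch of one channel of that blend; the 0..15 maths is my assumption about the cache's alpha levels, not a documented FontManager format.

```c
#include <stdint.h>

/* Each cached pixel carries an alpha level from 0 (background shows
 * through) to 15 (fully the foreground sample). Rather than blending
 * between two fixed colours, 'fg' can be sampled from a pattern or
 * sprite at the same position, giving patterned text for free. */

/* Blend one 8-bit channel: level 0 gives bg, level 15 gives fg. */
uint8_t stencil_blend(uint8_t bg, uint8_t fg, uint8_t level)
{
    return (uint8_t)(bg + ((fg - bg) * (int)level) / 15);
}
```

The shadow and halo effects mentioned below are the same operation with a different source: a globally reduced level for translucent shadows, or a blurred copy of the stencil for soft edges.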
Obviously the same effects would need to work when the cache was not in use (as would be the case if the characters rendered exceeded the limits). Done properly, this would allow the font to be rendered with different patterns and styles - would it be useful to be able to draw the font in a rainbow, or as a flame pattern, or as a stone pattern? I am not completely convinced right now, but I was keen to at least try it out.
It would also have been useful to provide other effects, such as blurred, spread edges, to give a halo, or soft edges, to the font. Given a bitmap, this is relatively easy to do, and the vector form of font plotting can be reduced to a bitmap, albeit at the cost of a bit more memory.
Being able to render the font with transparency, instead of being solid, would also be of benefit for shadows - a font plotted at an offset with a reduced opacity could give a nice shadow effect.
These sorts of compositing effects are common in Web browsers today, and would hardly have been ground breaking back when I was thinking about them, but they might have made using the FontManager more appealing. Of course, it is quite possible that these effects might not be provided by the FontManager itself, but through an additional support module.
I wanted to make some use of the nice DrawChart module that Ian Jeffray had written. The module provided an easy way to turn data series into a chart, either directly rendered on the screen, or by creating a DrawFile for writing to disc. The module was able to plot pie charts, line graphs, bar charts and scatter graphs, together with titles and axes labels, and with different data markers.
The module was very simple to use, and made it incredibly easy to construct a simple presentation of data, without having to resort to a more powerful application, like a spreadsheet. One very obvious use of the chart which I wanted to provide was a 'free space on disc' display. I had written a very simple Toolbox application, just to demonstrate it, and it worked reasonably well. The chart functions needed to be built in to the Free module so that it could be used whenever a normal 'Free' request was made.
The alternative would be for the Free module to be able to be overridden, allowing other applications to provide the free space presentation. This might work in much the same way that the Filer_Action module was able to let third parties provide replacements for its functions.
There were obviously other places that a graph could be used, either statically, or updating dynamically. There is no reason why the old !CPUusage application couldn't be replaced by a DrawChart presentation - it probably wouldn't be all that sensible, but it could be done. Or you could use the chart to show the battery power remaining, or power over time, if you wanted.
I can see the module having more uses for third parties than I can within the OS, but providing the functionality would be nice.
The existing BTS system can provide a lot of information about the path to a failure during execution, but there is a lot more which could be done. I had started on initial work towards using the ASD areas to decode debug information in applications. This could have been used in a number of ways, but initially I wanted to be able to decode the local variables as part of the backtrace we report, and to add information into the recorded BTSDump so that it would be available to other users.
It would also have been very useful to be able to build an executable with debug information, but then detach that debug data from the executable itself. This would allow tools (such as BTSDump and others) to be able to use the additional information, but without the need to load data which is never going to be used. The AIF module was already sufficiently aware of ASD areas to not load them (unless a debugger was present).
The debug system - !DDT - needed a significant overhaul in order to use the standard APIs rather than colluding so much with the Kernel and graphics drivers. It is quite likely that updates to !DDT to do this would mean that there were new interfaces necessary to expose the information which it needed, or to perform controlled operations rather than directly working on the internal Kernel data structures. So long as these were controlled, and kept to sensible limits, it should not really have hurt it in a significant way. The other useful (and necessary) result of doing that would be that these new documented APIs would be available to other tools - allowing other debuggers to work without colluding with a Kernel that they should not have access to.
It has always frustrated me that the 'internal' interfaces (which I heavily used in my patches and diagnostic tools) were sometimes the only way to get at information. Wherever possible I had removed the internal reliance on such data structures and interfaces, and introduced APIs to get the information legitimately. This process had started long before I had seen any of the RISC OS codebase, by Acorn, as they had tried to rationalise parts of the system.
There were initial hooks which were added so that the AIF module would be able to invoke debuggers when an application was run, and services to trap the backtrace reports that were raised. It should have been possible to extend this so that a running application could have been debugged from an external task. This would be more complicated to achieve, but some work had been started towards this by Acorn some years previously. The remote-GDB protocol would be a useful way to go - a driver providing the GDB protocol could be placed within the system to provide more information on failures. It might not be completely suitable, but it is usually better to reuse defined functionality.
Full circle reporting, by emailing the diagnostics from the user's system to the author was an area I really wanted to be able to address (in addition to direct debugging). I wasn't sure that I wanted to go down the centralised route that Microsoft used, but being able to configure destinations, and other details, for failures would have made for much better turnaround on bugs - mostly because the relevant information was supplied to the author.
The input systems on RISC OS had not changed significantly since Arthur. I had begun to rework parts of the keyboard system so that it was possible to support a greater number of keys, and the mouse so that it could support scroll wheels and additional buttons. The mouse, in particular, was something of a difficult area to change, because anything that was done needed to retain backwards compatibility.
The scroll wheel had been added relatively easily, and without too much impact on existing applications, but this was at the cost of not buffering its input. Greater support for scroll actions would be useful, such as being able to control the volume if used in conjunction with a mouse button or key press. Of course, such things may be better supplied as third party patches, as they can handle the vector calls in a similar way to the original mouse operation.
The initial parts of touch screen support had been added to the system, and whilst this focused on presenting touch input as mouse input, it could easily be augmented by the OSPointer drivers (and others) to allow gestures to be supported. Multi-touch support would need a little more thought, but the obvious way to support such interfaces would be for the touch input to be vectored, and for modules to watch for gesture sequences they recognised. The gestures could then be turned into special key operations, passed through the newer KeyInput interfaces to indicate the operation in question.
Gestures for 'forward' and 'backward' would obviously map easily to the forward and backward key codes in USB HID drivers, thus ensuring that the interfaces remained the same. Obviously, there are bound to be some operations that do not have HID mappings, but with a 32bit key space, it should be easy to find a way to match them. Operating on gestures in this way - producing key operations - would mean that supporting the gestures was merely a matter of trapping the keys they produced. And teaching new gestures would be a matter of translating the gesture into the key press.
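The mapping itself would be tiny. A sketch - the gesture identifiers and the table are hypothetical, but the key codes reuse the real USB HID Consumer Page usages for AC Back (0x224) and AC Forward (0x225), which is exactly the reuse described above.

```c
#include <stdint.h>

/* A recogniser module matches a gesture, then looks up the key code
 * to inject through the KeyInput interfaces. Unrecognised gestures
 * map to nothing, and teaching a new gesture is just a table entry. */

enum gesture {
    GESTURE_SWIPE_LEFT,
    GESTURE_SWIPE_RIGHT,
    GESTURE_UNKNOWN
};

#define HID_AC_BACK    0x224  /* USB HID Consumer Page: AC Back */
#define HID_AC_FORWARD 0x225  /* USB HID Consumer Page: AC Forward */

/* Map a recognised gesture to the key code to inject; 0 if unmapped. */
uint32_t gesture_to_key(enum gesture g)
{
    switch (g) {
    case GESTURE_SWIPE_LEFT:  return HID_AC_BACK;
    case GESTURE_SWIPE_RIGHT: return HID_AC_FORWARD;
    default:                  return 0;
    }
}
```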
On the other hand, the keyboard input system was a terrible mess. It was expected that different keyboard drivers would be installed to provide correct layout of the keyboard. The drivers were, generally, derived from the same sources and built into the InternationalKeyboard module. If you had a different layout of keys, you needed a special module.
Frustratingly, the USB standard specified that keyboards should return the key code which matched the representation on the key cap, rather than a key code for the key's logical position. Keyboard manufacturers still found it cheaper to produce a single set of hardware, and print different key caps depending on the territory the keyboard was being sold into. So, the problem remained that the key code reported might not match the key caps.
In reality this only applied to the primary keys on the keyboard - 'special' keys, such as media controls, or power buttons, were correctly reported. The positioning (and therefore key code reporting) of quotes, pounds, hashes and some other punctuation tended to change for regular western layouts, but most other keys were correct. "Azerty" or "Dvorak" keyboards sometimes reported the key codes for "Qwerty", which made them interesting.
This is before we start to consider eastern input systems, which are quite a bit more complicated.
Ideally a table based configuration, which could be loaded at run time, rather than being hard coded into a module, would make the system far easier to configure, and less in need of additional support from suppliers. With a bit of thought, a configuration tool might be easy to provide with the system.
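A table-driven layout is, at heart, just a lookup. This sketch is entirely hypothetical - the key codes, the table format, and the shift handling are all mine - but it shows how a loaded table replaces a compiled-in module: a different territory is a different data file, not different code.

```c
#include <stdint.h>
#include <stddef.h>

/* A loadable table maps a layout-independent key code from the driver
 * to the character it should generate, with a column for the shifted
 * form. A real table would also need columns for other modifiers and
 * for dead keys, but the principle is the same. */

typedef struct {
    uint8_t  keycode;     /* physical key code from the driver */
    uint32_t plain;       /* character when unmodified */
    uint32_t shifted;     /* character with Shift held */
} keymap_entry_t;

/* Look up a key code in a loaded table; 0 if not present. */
uint32_t keymap_lookup(const keymap_entry_t *map, size_t entries,
                       uint8_t keycode, int shift)
{
    for (size_t i = 0; i < entries; i++) {
        if (map[i].keycode == keycode)
            return shift ? map[i].shifted : map[i].plain;
    }
    return 0;
}
```

A configuration tool then becomes an editor for these tables, rather than something that has to rebuild a module.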
The InternationalKeyboard also provided the special key combination support through the Alt key. For example, pressing Alt-[ followed by a vowel would produce an accented character. Of course, this also depends on the character set in use on the desktop. ISO-8859-1 (Latin 1) was well supported, but if you needed another character set you could rapidly find yourself in trouble. Switching to UTF-8 input would help, but this has its own problems.
It had often been requested that there be support for other modifier information as part of the Wimp key events - so that you could use alt-X for a particular operation (although alt-X would produce a '×' symbol in the default keyboard configuration), or so that ctrl-I could be distinguished from Tab. The problems related to this are:
- Key presses are unbuffered, so you have to track the state yourself (the KeyInput module made this simpler, but did not address the problem directly).
- The WindowManager deals almost exclusively with buffered character input, after the InternationalKeyboard has seen the events.
- The interfaces for buffering keys are defined and changing their operation would most likely affect some applications.
- In the current configuration, UTF-8 input would deliver multiple key events to the application to construct a single character, forcing applications to track the UTF-8 state if they expect to work in such an environment.
- The Wimp key codes cannot just be replaced by Unicode, because some key presses do not have Unicode code points (for example cursors, paging, 'Insert' and so on).
- Key processing in the Wimp is passed through applications by each application passing on the key with SWI Wimp_ProcessKey, so if any extension is to be made it has to fit into the parameters to that call - that is a single 32bit value for the key code.
Additionally, mouse operations are buffered, but the buffered data doesn't contain enough space for any additional modifier information.
Modifiers could be added to the Wimp key codes by using the top bits of the key code, but this might affect older applications which expected to see code 9 for ctrl-I, rather than a high bit indicating 'ctrl pressed', together with a key value of 'i'. Support for such key codes could be provided by filtering on SWI Wimp_Initialise version number, but this precludes filters from being aware of the interface in use. Alternatively, a flag on SWI Wimp_Poll could be used, which would give filters a look in, but would still get confusing if there were multiple filters in use.
The third party DeepKeys module provided some support for key modifiers, but only addressed part of the problems. In some respects it would be better to deliver Unicode characters to an application, rather than UTF-8, but conversely this pushes the Unicode characters on to the application, which needs to decompose them back to UTF-8 if they need to be inserted into character arrays. Similarly, in some respects it would be better to deliver modifiers rather than the decoded input.
UTF-8 input introduces fun problems for presentation - if an application receives the first byte of a 3 byte UTF-8 sequence, inserts it into an icon buffer manually, and then tells the Wimp to redraw the icon, what should it draw? A square, to indicate a broken sequence? Or nothing, because it doesn't want to interrupt the expected remaining characters? Or something else?
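The state an application is forced to carry is easy to see in a minimal incremental decoder. A sketch (it accepts over-long forms and surrogates, which a production decoder must reject): until the final byte arrives there is simply no code point, which is the redraw dilemma in a nutshell.

```c
#include <stdint.h>

/* Feed UTF-8 one byte at a time, as byte-at-a-time key events would
 * deliver it. Only when a whole sequence has accumulated is there a
 * code point to display; mid-sequence, the application has nothing
 * sensible to draw. */

typedef struct {
    uint32_t codepoint;  /* accumulated value so far */
    int      remaining;  /* continuation bytes still expected */
} utf8_state_t;

/* Feed one byte; returns 1 with *out set when a code point completes,
 * 0 while mid-sequence, -1 on an invalid byte (state is reset). */
int utf8_feed(utf8_state_t *st, uint8_t byte, uint32_t *out)
{
    if (st->remaining == 0) {
        if (byte < 0x80) { *out = byte; return 1; }
        if ((byte & 0xE0) == 0xC0) { st->codepoint = byte & 0x1F; st->remaining = 1; }
        else if ((byte & 0xF0) == 0xE0) { st->codepoint = byte & 0x0F; st->remaining = 2; }
        else if ((byte & 0xF8) == 0xF0) { st->codepoint = byte & 0x07; st->remaining = 3; }
        else return -1;
        return 0;
    }
    if ((byte & 0xC0) != 0x80) { st->remaining = 0; return -1; }
    st->codepoint = (st->codepoint << 6) | (byte & 0x3F);
    if (--st->remaining == 0) { *out = st->codepoint; return 1; }
    return 0;
}
```

Every application receiving such input needs something equivalent to this, which is the argument for the system doing the decoding once and delivering whole characters - with the countervailing costs described below.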
I don't have any answers to these problems. Each time I think I like a solution, there are issues raising their heads, either due to backwards compatibility, or usability for applications.
If it were not necessary to retain backwards compatibility, many of the issues would go away.
Power management had been touched upon with the A4's Portable module, which helped to manage the operations of a system with a battery. The main use of the Portable module in modern systems has been to control the 'power off' for the machine, and to place the system into an idle state. Really, the idle state is a matter for the CPU, and should be part of the SystemInit handlers, but there might be other systems that can be made less busy through the idle management.
In its normal state, RISC OS is an 'always busy' system, constantly polling for events and managing applications. Within the Desktop that may mean that the system is constantly swapping applications in and out to deliver 'Null' events. Well behaved applications, since RISC OS 3.1, should have been using the SWI Wimp_PollIdle to indicate that they were not interested in Null reasons until at least a given number of centiseconds had elapsed; or they could just mask off Null reasons entirely.
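The decision the desktop can make from that information is a simple aggregation. A sketch - the task structure is hypothetical, not the Wimp's real data - of working out how long the system may sleep before any task next needs a Null event:

```c
#include <stdint.h>
#include <stddef.h>

/* If every task has either masked off Null reason codes or supplied an
 * earliest time (from SWI Wimp_PollIdle) at which it next wants one,
 * the desktop knows how long it can sit idle before any task needs
 * the CPU again. */

typedef struct {
    int      wants_nulls;   /* task accepts Null reason codes */
    uint32_t earliest;      /* centisecond time from Wimp_PollIdle;
                               0 means 'immediately' */
} poll_task_t;

/* Return the earliest time any task needs a Null event, or UINT32_MAX
 * if no task wants them at all (ie the system may idle until some
 * other event, such as an interrupt, arrives). */
uint32_t next_null_time(const poll_task_t *tasks, size_t count)
{
    uint32_t earliest = UINT32_MAX;
    for (size_t i = 0; i < count; i++) {
        if (tasks[i].wants_nulls && tasks[i].earliest < earliest)
            earliest = tasks[i].earliest;
    }
    return earliest;
}
```

One badly behaved task polling for Nulls continuously drags the result down to 'now', which is why well behaved applications matter so much to power consumption.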
If the desktop finds that it would be doing no work, it can place itself into an idle state, with SWI Portable_Idle, which means that the CPU will execute no further code until an interrupt is received. Alternatively, it can control the CPU clock speed to reduce the power consumption - if it isn't doing very much. When error boxes are displayed, the same idle is triggered, as these prevent any other tasks from switching in.
Outside the desktop, the SWI Portable_Idle call is made by the default handler for ReadLineV whilst waiting for input. SWI OS_Confirm does a similar thing.
Most of the time this is not a problem, and can work quite well, but it would also be useful if other components had an opportunity to shut themselves down, or run in a lower power mode, under similar control. For example, hard discs, if they have not been used for some time, could be placed into a Sleep or Stand By state - the support for this has been present in ATA for quite some time, but is not exposed to the rest of the system. Replacing the filesystem with a more modular structure would help to address this issue.
Similarly, USB devices and displays could be powered down, or configured to use less power. The control for the display dimming was partially provided within the Portable module, but was not suitable for modern systems (a 'dim' interface was provided over PaletteV). USB systems hadn't become stable enough to provide a consistent way of being powered down. The Portable module provided some support for powering down systems but wasn't really extensible enough to handle the many new hardware types that were now available.
Additionally, the Portable module was another of the components that provided both the interface and the hardware control in a single module. As with the other components, this meant that any hardware implementation had to provide the entire programmer's interface, and could not be used in conjunction with other hardware providers. The original Portable module was written in assembler, and wasn't well documented. Although there were measurements and metrics that could be read from it, the units in particular were not specified - for example, you could read the power remaining, or the charge rate, but you didn't know what units they were in.
Later changes that Acorn had made to the Portable API, for non-A4 hardware, had added some proportional readings. These were more useful for presenting to the user (eg, a percentage power left), but these were not well publicised.
I had collected all the documentation for the module, which had been spread through different documents, into a single XML documentation file. This made it a lot easier to find the information I needed, and I wrote a skeleton implementation in C which provided all the interfaces, but did nothing with them. The intention was that this could be distributed to hardware manufacturers as a starting point for their implementations - the reasoning being that we were often told that providing the necessary RISC OS interfaces was a significant burden on top of the development of the hardware itself.
However, what I wanted to do was to change the Portable module so that the hardware specific parts were provided by drivers. Instead of the module knowing about different hardware itself, it could query the controller modules about their state and control them. This would make for a more extensible interface which could cope with a greater range of hardware. Hardware drivers would become marginally more complicated, but only where they provided power management; if they did not provide any interfaces, they would be unaffected by the change.
The skeleton provided all the necessary SWI APIs, and called down to a simpler implementation source file, which had examples and descriptions of what the functions should do in order to function. If anyone used it, they would find it very simple to provide the necessary details in order to get a working system. More importantly, if the Portable module was abstracted to provide a management interface and a hardware implementation, the existing implementation could be reused with a different wrapper quite easily.
The Portable APIs had already been extended for VirtualRPC, defining new reasons for the power remaining which gave units of seconds. This allowed the emulator to provide an accurate reflection of the host machine's state. Of course, applications which presented the information would still need to be able to try the different operations, in case the module couldn't provide the newer data formats.
There were other areas that could benefit from new interfaces. It would be useful if multiple batteries could be supported. Rather than a single power result being reported, information on each battery pack could be provided (where relevant), and UPS information could be provided through the same interface. It is arguable whether desktop UPS systems should actually be provided through the Portable module, given its name - it might be more useful to separate the power input entirely, especially if the existing API was no longer going to be used (it could of course be provided as a legacy interface).
Docking station support could also be provided through the portable module, although the amount of support necessary might vary between devices. At its simplest, the docking station would take over the portable device's outputs and route them to the external unit, and the only support necessary would be an 'on dock' indication.
The original power icon which appeared on the A4 was not very complicated and even at the time looked like it had been an afterthought. It could be improved (a rewrite would be preferable, as the original was in assembler), and more information about the power devices added. If there were multiple batteries, a way of presenting the information usefully would be needed.
Much of the power control would actually be device control; turning the Bluetooth or Wireless off through the power interfaces makes sense, but might not be obvious on some devices. Some degree of customisation would be required, allowing the power control interface to be replaced entirely.