SpriteExtend
SpriteExtend provides the sprite rendering functions which aren't in the Kernel - anything that renders sprites with scaling, translations or colour translations - and JPEG rendering as well. When starting work on Select, one of the areas that seriously needed to be addressed was the handling of JPEGs. From my own work previously with JPEGs plotted through the SWI interface, there were a lot of places where oddly formed, corrupted, or incomplete JPEGs could cause crashes during rendering - or worse.
David Thomas had produced a few patches for the existing versions of the module, so that his !PhotoFiler could work without fear of the Filer crashing. I intended, at some point in the future, to provide a thumbnail view in the Filer, so it needed to be solid - and even without that incentive, having a component that would crash when supplied with valid data was pretty poor.
BEWARE 10000 VOLTS (This park will never fail, the T-Rex will never get out, etc)
- a comment from SpriteExtend I'll not forget in a hurry.
JPEGs
One of the first areas that I examined was the different sampling factors that JPEGs could be produced with. Generating example images was pretty easy with cjpeg, and of course there were images all over the Internet which could be used as sources of data. The Usenet groups which distributed images (of various qualities and dubiousness) were a good source of data which would not be quality controlled in any way.
With a good selection of these (many of which would never actually be viewed), it was simple to produce a test tool that would run through the images trying to render them, and stop when it found those that it could not handle - by virtue of crashing. These days you'd probably source your images from peer-to-peer services if you wanted to apply the same scatter-gun approach that I used. In addition to these collected and generated images, a few images were sent to me by people when they had found problems. Dave and I exchanged quite a few difficult images.
As well as some sampling factors not being supported, the non-2x2 sampling was commonly very slow. The 2x2 sampling factor images (more commonly known as 4:2:0) were the most common. This was where the Cr and Cb were only stored once to cover a block of 4 pixels, but the Y was stored for each of the pixels individually. Obviously this meant that there was significant colour resolution loss, but that was part of the appeal of JPEG - it didn't usually matter as you were trying to compress things.
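As a rough illustration of what that layout means for a decoder (an invented sketch, nothing like the optimised ARM code in the module), each 2x2 group of Y samples shares a single Cb/Cr pair:

    #include <stdint.h>

    /* Convert one 4:2:0 'unit' - four Y samples sharing a single Cb/Cr pair -
       into four RGB values, using the usual integer JFIF conversion.
       Purely illustrative. */
    static uint32_t ycbcr_to_rgb(int y, int cb, int cr)
    {
        int r = y + ((91881 * (cr - 128)) >> 16);
        int g = y - ((22554 * (cb - 128) + 46802 * (cr - 128)) >> 16);
        int b = y + ((116130 * (cb - 128)) >> 16);
        if (r < 0) r = 0; if (r > 255) r = 255;
        if (g < 0) g = 0; if (g > 255) g = 255;
        if (b < 0) b = 0; if (b > 255) b = 255;
        return (uint32_t)((b << 16) | (g << 8) | r);   /* &00BBGGRR */
    }

    static void decode_420_unit(const int y[4], int cb, int cr, uint32_t rgb[4])
    {
        for (int i = 0; i < 4; i++)
            rgb[i] = ycbcr_to_rgb(y[i], cb, cr);   /* chroma reused for all 4 pixels */
    }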
This standard sample factor was handled by an assembler function which could process the data far faster than a plain C implementation could. Any image that used a different sample factor, though, had to fall back to the standard C routine. It wasn't very flexible and couldn't cope with some of the sample factors. I added more support for different factors so that such images would render, and render faster than they had previously.
In addition to the use of different sampling factors, there were 'new' forms of JPEGs that were coming into more common use. The JPEG standard (T.81, I still remember its ITU specification number) only defined the storage format for the image data. It did not define a container which the data should be wrapped in. JFIF was commonly used in many image files, although its definers - C-Cube - had vanished quite a while previously. The lack of development of the JFIF standard, and the rise of digital cameras, brought about the use of an alternative standard - Exif - which allowed for tagged data to be included within the file container.
SpriteExtend initially only supported the JFIF standard, and would reject anything using Exif, despite the file being otherwise quite parsable. This was a relatively easy change to make, and to apply consistently through the code. There were multiple code paths through the JPEG decoding code depending on the operation being performed, which made it a little harder to ensure that behaviour was consistent.
Whilst JFIF had allowed for thumbnails to be present in the file, Exif encouraged their use far more because the thumbnails could be decoded on the cameras that took the pictures with far lower processing requirements than decoding the entire (possibly quite high resolution) image. SpriteExtend never uses these for its rendering, although I did consider adding support at some point. Part of the difficulty is that there are a few different ways that the JPEG thumbnails can be represented in the file, and it wasn't unheard of for the thumbnail data to be JPEG encoded in a form which swapped the U and V components - resulting in blue skinned people.
SpriteExtend does plot certain images faster, though. When an image is scaled beyond 1:6 (that is, 6 times smaller than the original), it will skip decoding the entire 8x8 block and only decode the image data for the top left pixel of each block, which is then scaled across the block's area in the output. The effect is a slight reduction in image quality at 1:6 but a significant speedup. To be more accurate it would need to be 1:8 - I guess the reason for the 1:6 was that it's perceptually difficult to tell the difference, and because scaling down to that level would be common for the uses they were expecting. I don't know the reason, and maybe it was more significant than that, but that's what it did and I saw no reason to change it.
When rendering images scaled up, it was obvious that they were sometimes quite poor quality, especially if the image itself was being reduced in colour depth. To try to address this, for JPEGs, I added some special code that triggers when the JPEG is plotted at >150% in an 8 bit-per-pixel mode with the standard palette. In these cases the output is internally decoded into a buffer of 2 bytes per pixel, indicating alternating colours to use for the decoded source pixel. These can then be alternated between as the image is plotted. The result is slightly better colour handling for scaled up images. I'm pretty sure that this 'ordered dither on scale up' for JPEGs wasn't really noticed by anyone, as scaling up JPEGs is less common, but it was a technical improvement over the original plotting method.
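The idea, roughly, is that each decoded source pixel gets a pair of nearby palette entries, and the plotter alternates between them in a checkerboard as it scales the pixel up. A sketch of the flavour of it (the buffer layout and choose_colour_pair() are my illustration, not the module's internals):

    /* Illustrative 'ordered dither on scale up': each source pixel is held as
       two candidate standard-palette indices (2 bytes per pixel), and the
       plotter alternates between them across the scaled-up area. */
    typedef struct { unsigned char c0, c1; } colour_pair;

    extern colour_pair choose_colour_pair(unsigned int rgb);   /* hypothetical helper */

    void plot_scaled_row(const unsigned int *src, int src_width,
                         unsigned char *dest, int dest_width, int dest_y)
    {
        for (int dx = 0; dx < dest_width; dx++)
        {
            int sx = (dx * src_width) / dest_width;       /* nearest source pixel */
            colour_pair p = choose_colour_pair(src[sx]);  /* the 2-byte pair */
            /* Alternate between the two colours in a checkerboard pattern */
            dest[dx] = ((dx ^ dest_y) & 1) ? p.c1 : p.c0;
        }
    }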
Of course, in many cases it would be really useful to have bilinear blending on the image data. The complexity of this within SpriteExtend put me off - it should be possible, but it's significantly more work than had already gone in. Not only would previous line buffers need to be remembered so that they could be blended from, but the mask, translucency and alpha channel in use for each of the pixels being blended would also need to be accounted for. The code was quite complicated enough, thank you very much.
Colour mapping
As has been mentioned previously, SpriteExtend also gained the ability to use ColourMapping on both sprite and JPEG render operations. This meant that any deep colour image could be colour mapped to any colours that the user wanted, in much the same way as they had been able to do with the paletted images by using a colour transfer function in the SWI ColourTrans_SelectTable call.
This addition made for some fun juggling in the plotter. One thing that isn't obvious from the outside of SpriteExtend is that it operates as a dynamic compiler for the code that's needed to plot the image. Every sprite operation (JPEG operations included) would have its parameters checked for matching code buffers of already generated code - input and output depth, plot operation type, mask use and the type of mask in use, palette translation, whether we're actually plotting a mask, whether it's a JPEG, dithering options, whether there's a colour map in use, scaling factors and probably a few more that I've forgotten. As most of the time the operations that are being performed will be one of a common set - most sprites being the same depth, plotted to the screen with the same scaling, etc - this will regularly hit the same cached code.
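Conceptually the caching works something like this sketch (the structure, its fields and build_plot_code() are invented names for illustration - the real parameter set is longer than shown):

    #include <string.h>

    /* A sketch of caching generated plot code, keyed on the plot parameters. */
    typedef struct {
        int src_log2bpp, dest_log2bpp;   /* input and output depth */
        int plot_action;                 /* plot operation type */
        int mask_type;                   /* none, 1bpp, 8bpp alpha... */
        int has_palette_translation;
        int plotting_mask;               /* plotting the mask itself? */
        int is_jpeg;
        int dither;
        int has_colourmap;
        int xscale, yscale;              /* ...and more in reality */
    } plot_params;

    typedef struct cached_code {
        plot_params         params;
        void               *code;        /* pointer to the generated routine */
        struct cached_code *next;
    } cached_code;

    extern void *build_plot_code(const plot_params *p);   /* hypothetical compiler entry */

    static cached_code *cache;

    void *find_plot_code(const plot_params *p)
    {
        for (cached_code *c = cache; c != NULL; c = c->next)
            if (memcmp(&c->params, p, sizeof(*p)) == 0)
                return c->code;              /* hit: reuse the generated routine */
        return build_plot_code(p);           /* miss: compile (and cache) a new one */
    }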
If new code needs to be built, it will be. SpriteExtend builds up operations it needs to perform in a number of stages, allocating registers first, possibly splitting them between the outer loop (for each line down the screen) and the inner loop (for each pixel across a line), then building the outer loop to go down the screen and the inner loop to move across the screen. In the case of JPEGs, the outer loop tries to fetch a 'row' of data from the JPEG. As JPEG data comes in chunks of 8 rows, this is cached per row and a decode only happens for every 8th row.
The whole dynamic code generator is written in somewhat hairy, and excitingly formatted C code. Every instruction that needs to be compiled is built up using macros that insert the correct instructions into the buffer. If, during the build, it runs out of space, uses a register wrongly, cannot allocate enough registers (and doesn't have the ability to spill them on to the stack), or detects an odd situation, it will exit cleanly without running any code - and therefore not drawing anything on the screen. This makes it very safe to use. If you built it with debug enabled you can get a lot of extra information out on every sprite plot, which you can very easily drown in.
I think it's worth stressing that the code generator existed long before I saw it - I believe that the compiler was originally in assembler and was changed to C sometime around RISC OS 3 time, or a little later. The transformed sprite plotting still uses an assembler code compiler and is far scarier for it.
Because of this, adding new code to perform different operations means working out what registers are available, possibly allocating new ones (only when you need them) and making the extra code get compiled in for the new case. Of course, if the operation differs from existing ones, the conditions that separate the plotting need to be updated as well so that you don't accidentally try to use (say) a colour mapped sprite renderer when you wanted a plain sprite render.
Adding colour mapping was, understandably, quite fun. Colour mapping requires two parameters - a function and a workspace - so that's at least 1 extra register (a pointer to the pair), or 2 if you're going to provide them in a register each. This is on top of any other registers that are already in use to hold the image data being manipulated, scale factors, any mask, and the locations they come from. This makes the register allocation far more fun. Usually the time that you come unstuck is when trying to plot an image dithered - because that needs extra registers, and usually you don't test that operation until later. Or at least I didn't - and for your troubles you get nothing plotted on the screen, and (if you left debug on) a message telling you that the compiler ran out of registers.
To make it extra fun, when plotting 16bit data we need to expand the input to 24bit in order to call the colour mapping function (which expects a regular 32bit colour word) - if we're plotting to a 16bit mode we then have to convert back to 16bit! Of course, if we're plotting to less than 16bit we then need to lookup the correct colours to use with the 32K table supplied by ColourTrans.
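The dance for a 16bpp source looks roughly like this (a minimal sketch using the usual &BBGGRR00 colour word and 1:5:5:5 pixel layout with red in the low bits; the generated plotter does this inline in ARM code rather than through C functions):

    /* Expand a 16bpp (1:5:5:5, red in the low bits) pixel to a &BBGGRR00
       colour word, ready to pass to a colour mapping function. */
    unsigned int pixel16_to_colourword(unsigned int p)
    {
        unsigned int r = p & 0x1f;
        unsigned int g = (p >> 5) & 0x1f;
        unsigned int b = (p >> 10) & 0x1f;
        /* Replicate the top bits into the bottom to fill the 8-bit range */
        r = (r << 3) | (r >> 2);
        g = (g << 3) | (g >> 2);
        b = (b << 3) | (b >> 2);
        return (b << 24) | (g << 16) | (r << 8);
    }

    /* Repack a &BBGGRR00 colour word as a 16bpp pixel for a 16bpp destination. */
    unsigned int colourword_to_pixel16(unsigned int c)
    {
        unsigned int r = (c >> 8) & 0xff;
        unsigned int g = (c >> 16) & 0xff;
        unsigned int b = (c >> 24) & 0xff;
        return (r >> 3) | ((g >> 3) << 5) | ((b >> 3) << 10);
    }

In the generated plotter this is only a handful of ARM instructions, but each intermediate value still wants a register - which is exactly where the allocation gets awkward.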
I had considered just manipulating the 32K colour tables to produce the colour mapping, but this would be to the detriment of the 24bit colour modes - you'd get significant visible banding, and it wouldn't use the full depth of the mode.
The transformed sprite plots were even more scary for the colour mapping because they needed to be updated in the assembler version of the compiler which was really quite mind warping. Copying around chunks of inline assembled code fragments to perform the mapping really hurt after a while.
CMYK sprites and JPEGs
In some cases it is useful to be able to manipulate colour in CMYK, rather than RGB. It's not quite so common, but in those cases it's useful to have support in the graphics system for manipulating such images. When PRM 5a was produced it was documented that 'type 7' sprites were reserved for CMYK. Implementing these wasn't actually so hard given the work I had previously done on implementing 'JPEG Sprites' back when I was patching the OS - it's easy to know where the choke points are if you've already done it before. The Kernel's tables needed updating to understand the sprite 'mode', ColourTrans needed to be aware of the new type and obviously SpriteExtend needed to be updated to be aware of the type.
The Kernel's 'mode flags' needed updating to handle the CMYK sprites because the assumption throughout the system was that colour depth (log2bpp) implied the type of colour data. It wasn't possible to rely on the value of NColours, because it made no sense when the colour data could be laid out in different formats. The mode flags variable was the most obvious place to include the indications of the image data format. Other layouts could be defined here relatively easily as necessary.
In general, CMYK isn't a useful output mode, but it becomes more useful as an interchange format, where such image content needs to be transferred between locations. It was always intended that the output type be used for printer drivers which needed CMYK output - they could use that as their destination rather than redirecting to an RGB sprite and having to perform the conversions themselves. Writing directly to a CMYK sprite would also allow better handling of the key component where that was necessary.
There were a few 4-component, CMYK JPEGs around, so the parsing code was updated to recognise these. Usually JPEGs included the components as:
- 1,2,3 indicating YCbCr format.
- 'R','G','B' indicating that the data was raw RGB.
- 1,4,5 indicating YIQ (which I never came across and SpriteExtend didn't handle in any case).
- 'C','M','Y','K' indicating that the data was CMYK.
Any other 4-component JPEG data was treated as YCbCrK, which was rarer but had been encountered.
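The check itself amounts to little more than comparing the component identifiers from the frame header, along the lines of this sketch (the names and the enumeration are mine, purely for illustration):

    /* Guess the colour space of a JPEG from its component identifiers,
       following the rules listed above. Illustrative names only. */
    typedef enum { CS_YCBCR, CS_RGB, CS_CMYK, CS_YCCK, CS_UNKNOWN } colourspace;

    colourspace guess_colourspace(int ncomps, const unsigned char *ids)
    {
        if (ncomps == 3)
        {
            if (ids[0] == 1 && ids[1] == 2 && ids[2] == 3)
                return CS_YCBCR;                 /* 1,2,3 => YCbCr */
            if (ids[0] == 'R' && ids[1] == 'G' && ids[2] == 'B')
                return CS_RGB;                   /* 'R','G','B' => raw RGB */
            return CS_UNKNOWN;                   /* eg 1,4,5 YIQ: not handled */
        }
        if (ncomps == 4)
        {
            if (ids[0] == 'C' && ids[1] == 'M' && ids[2] == 'Y' && ids[3] == 'K')
                return CS_CMYK;                  /* 'C','M','Y','K' => CMYK */
            return CS_YCCK;                      /* other 4-component => YCbCrK */
        }
        return CS_UNKNOWN;
    }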
The JPEG output code for CMYK shares its implementation with that of the CMYK sprite.
Adding new image formats to the system always had an impact - not least the fact that they were not portable to previous iterations of the operating system. All the modules that would normally handle them would need to be updated so that they understood the new format, and any third party tools which directly manipulated image data would find themselves at a loss with the new formats. Nonetheless, this work was a 'safe' way to update the rendering and colour manipulation areas of the graphics system.
In future versions of the operating system it would be necessary to provide other formats - in particular reversing the order of components in the true colour modes, as the opposite of the standard RISC OS order was more common for most video cards. Similarly, output formats such as direct handling of YCbCr output buffers (as might be useful for direct video output) would be necessary in the future. The work in adding CMYK was a good step towards those goals, allowing me to understand the way those areas worked, and was a good way to proceed.
Translucency
Having seen how a lot of the innards of SpriteExtend work, the next step was to make it possible to plot at a level of translucency. One of the main uses for this was to allow the DragASprite module to do its job better and not look too bad. The DragASprite module allows applications to specify an image and have it move across the desktop with the pointer, almost always as part of a user's drag. Originally the image was dragged as a solid image. This caused problems because the place where you were dropping the content was obscured by its sprite representation. Acorn added a dither mode to the module, so that you could at least see what you were dragging over. But that really is quite a tacky solution. Better to be able to plot the sprite with transparency.
The translucency plotting was added to both the scaled and the transformed sprite plotting, as an 8-bit translucency factor passed along with the flags. When plotting to a paletted destination it would be necessary to convert the colours to fully specified form, perform the translucency calculations and then convert back to paletted form. This would be quite processor intensive, so a module - BlendTable - was created which would produce lookup tables such that the plotting could be performed quickly. This can generate a large table, but it is only needed for a short time. There are special routines for generating tables at 25%, 50% and 75% which are quite a bit faster than the general table generation calls.
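For true colour destinations the blend is just a weighted sum of the two colours; the point of BlendTable is to do that sum once per pair of palette entries and keep only the nearest palette index, so the plotter is left with a table lookup per pixel. A sketch of the idea (the table layout and the nearest_palette_entry() helper are my own illustration, not the module's interface):

    /* Blend two &BBGGRR00 colour words with an 8-bit translucency factor. */
    unsigned int blend_colourword(unsigned int src, unsigned int dst, unsigned int alpha)
    {
        unsigned int out = 0;
        for (int shift = 8; shift <= 24; shift += 8)
        {
            unsigned int s = (src >> shift) & 0xff;
            unsigned int d = (dst >> shift) & 0xff;
            out |= (((s * alpha) + (d * (255 - alpha))) / 255) << shift;
        }
        return out;
    }

    /* For an 8bpp destination, precompute a 256x256 table of blended palette
       indices so that plotting only needs one lookup per pixel. */
    extern unsigned char nearest_palette_entry(unsigned int colourword);  /* hypothetical */

    void build_blend_table(const unsigned int palette[256], unsigned int alpha,
                           unsigned char table[256][256])
    {
        for (int s = 0; s < 256; s++)
            for (int d = 0; d < 256; d++)
                table[s][d] = nearest_palette_entry(
                                  blend_colourword(palette[s], palette[d], alpha));
    }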
The advantage of having a separate module which can generate the tables comes when the table is required by other modules. The InverseTable module is used in a similar way, to generate tables for 32K colour modes. A new entry (SWI InverseTable_SpriteTable) was needed in order that the module could handle arbitrary sprites - the SWI InverseTable_Calculate implementation only handles the current destination. The tables in InverseTable have also been pre-calculated for the standard palettes, which means that they do not need regenerating at run-time.
As mentioned, the DragASprite module was updated so that it used the new translucency plotting if it is available. This makes the dragging of objects around the screen look much better (rather than the solid, or dithered versions of the objects that had been used previously). There is a problem when the translucent plotting is used in 2-colour and 4-colour modes, as the number of colours available for blending is too limited to be useful really. However, such modes are not particularly useful within the desktop and in any case would probably not be supported by newer video hardware.
Alpha channel
Once the translucent plotting had been implemented it was clearly time to start on alpha-channel. Discussions previously had suggested using the top 8 bits of the RGB data in the 32bit sprites to hold the alpha. That was a reasonable way to do it, but does not allow for alpha channel to be used on anything but the widest of the sprite formats - neither efficient, nor fast to process. Admittedly it would keep the data together, and SpriteExtend would be able to plot it more efficiently internally, but the difference would be marginal given the additional processing required to perform the alpha-blend and convert down to the destination mode.
Instead it seemed more logical to use the normal mask area, but to widen the mask from 1 bit-per-mask-pixel to 8 bits-per-mask-pixel. This meant indicating that the mask format was different, so new sprite types needed to be defined. Since all of the extant sprite types which had been defined used 1 bit-per-mask-pixel, it was logical to provide 8 bit-per-mask-pixel versions of those formats (even CMYK and other formats yet to be defined could then handle alpha-channel in the same way).
Because the mask had changed from being binary to being (effectively) a linear scale, it needed to be understood by ColourTrans. When output is redirected to a mask, ColourTrans treats the output as a greyscale palette. This allows the mask to be manipulated in a reasonably useful way - you can change the alpha channel by selecting colours of the level you want to plot and plotting them to the mask. This does present issues when you want to plot one thing over another, but this at least allows the basic manipulation of the alpha channel.
During development, alpha-channel sprites had been generated either by direct manipulation of the sprite data (very early on), or through conversion from PNGs. !Paint was being updated to support some of the more advanced features, such as alpha channel, and one of the features suggested was a way to change the alpha-channel mask into a binary mask and vice-versa. The former is most useful for creating sprites which will be supported by earlier versions of the Operating System. The latter allows the sprites to be promoted so that they can use the new alpha-mask.
Creating sprites from the test image PNGs helped greatly during the testing for showing that the images decoded properly. There were a few test images which were commonly used - particularly the Toucan, Glass bowl, PNG marbles, and RGB slice with alpha. One of the most obvious and prettiest of those that I tested with was the Icicles alpha image. This showed that the alpha channel was visible through the icicles in a way that was very obvious.
New SpriteOps
The new sprite types and handling of alpha-channel masks meant that we needed some way to control them. !Paint was being worked on by Ian Jeffray, who had done some really great things with the handling of the alpha channel, and to the application in general. Adding and removing masks had always been possible, using SWI OS_SpriteOp calls, but this didn't allow for the new mask types. To help !Paint, and make it possible to manipulate the alpha-channel mask, a SWI OS_SpriteOp 38 was created.
This call will perform both mask promotion (creating a mask, or expanding a 1 bit-per-pixel mask), and mask demotion (reducing the mask to 1 bit-per-pixel, and removing it entirely if unnecessary). Promotion is a relatively easy operation, as it just needs to expand the mask to 8 bits-per-pixel. Demotion, on the other hand, is going to be lossy. The way in which I decided to deal with this was to offer the caller the opportunity to specify the point at which the mask becomes binary. The simplest call would specify that if a pixel was more than 50% masked it would be masked in the resulting image, but the threshold could be given in the interface.
The call also had the option to automatically strip the mask if it wasn't necessary. This would make the sprite as compact as it would go without changing the colour type of the sprite.
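In essence the demotion is a thresholding pass over the 8-bit mask. A minimal sketch of the idea (not the module's code - the word alignment and row padding of real sprite masks are glossed over here):

    /* Demote an 8 bit-per-pixel alpha mask to a 1 bit-per-pixel mask.
       'threshold' is the alpha value at or above which a pixel stays visible;
       128 gives the 'more than 50%' behaviour described above. */
    void demote_mask(const unsigned char *alpha, unsigned char *mask1bpp,
                     int width, int height, unsigned int threshold)
    {
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int visible = (alpha[y * width + x] >= threshold);
                int index = y * ((width + 7) / 8) + (x / 8);
                if (visible)
                    mask1bpp[index] |= (1 << (x & 7));
                else
                    mask1bpp[index] &= ~(1 << (x & 7));
            }
        }
    }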
Ian and I also discussed providing a conversion call for converting between sprite types. In some cases that would just mean changing the sprite word, for example changing the sprite from square to rectangular pixels. In others it might mean expanding the colours, or reducing them. Both of these operations would usually be handled by redirecting output to the new sprite type, and plotting the original sprite to it. This would work, but it might be overkill for some operations. In any case, we didn't take it any further than discussions.
In addition to the handling of the masks, the sprite validity checks needed to be updated to better handle the new sprite types. Actually the sprite validity checks were pretty poor to start with, and got a little bit of an overhaul to ensure that they were checking the validity of the area properly. The call itself had been introduced in RISC OS 3.6, and wasn't used that often - !Paint called it to check that the sprites it loaded were actually valid, as an invalid sprite file could cause the application to crash. Otherwise it wasn't used much.
The checks were tightened up so that negative values weren't acceptable for most of the file and sprite header values. This affected certain applications - I have a vague feeling that !StrongHelp relied on having negative sprite area offsets in its sprite pools, but I don't think it was affected by this change. Whilst the use of negative sizes did mean that you could chain together sprite areas in different places, it also made it more difficult to recognise corrupted files.
New style alpha-channel sprites were recognised as acceptable, if their non-alpha-channel numbers were also valid. They also had slightly different size checks for the end of the mask data, as obviously the sprites would be much larger than their 1 bit-per-pixel counterparts.
The sprite name, too, was checked. A few times I'd been bitten by strange effects as the sprite name had not been generated properly. Usually this was due to failing to pad the sprite name words properly, or writing the sprite name with a capital letter in it - they should always be lower case. This kind of problem would only happen if you were manually creating sprites (as I was with most of the ImageFileConvert modules), but it was important to ensure that they were valid.
I'm not sure whether anyone would have cared about the validity checks, other than that they worked, but the mask manipulation calls were very important to !Paint and would have been useful to anyone needing to create similar tools. I'm not aware of anyone actually using alpha-channel sprites though, possibly because I've been away from RISC OS for a while.
Tiling
One new SWI OS_SpriteOp call of particular note was the 'tiled' plot call. This was similar to the scaled sprite plotting, except that instead of plotting a single sprite at a location on the screen it would fill the screen (or the graphics window) with the sprite. It's not a particularly complex operation, but it is one that is performed regularly in different components.
The WindowManager tiles the background of windows and menus with different tiles depending on the colour of the window (and the existence of the relevant sprite in the pool). The Pinboard may tile the background with a sprite if configured to do so. Other applications may wish to do so for similar reasons.
The call itself is almost identical to the simple 'Plot Sprite Scaled' call, which means it is very easy to use in any application. Internally the call is handled by SpriteExtend as part of its sprite operations, and is dispatched as multiple calls to the Plot Sprite Scaled call. This means that if the basic plotting call has been accelerated, the tiled operation will still run quickly - but it can itself be accelerated directly in any hardware support module to provide a very fast fill.
Like many bits of extra code, this was implemented and tested in C before being incorporated into the SpriteExtend module. It's all pretty simple code, but there's always scope for mistakes.
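In outline, that prototype amounted to stepping a scaled plot across the window, something like this sketch (plot_sprite_scaled() and the window coordinates are stand-ins I've invented for the example):

    /* Tile a sprite across a rectangular region by repeated scaled plots.
       plot_sprite_scaled() stands in for the normal 'Plot Sprite Scaled'
       operation; x0,y0,x1,y1 would come from the current graphics window. */
    extern void plot_sprite_scaled(int x, int y);       /* hypothetical wrapper */

    void tile_sprite(int x0, int y0, int x1, int y1,
                     int sprite_width, int sprite_height)
    {
        /* Align the first tile below and left of the window so that partial
           tiles at the edges are clipped rather than missed. */
        int start_x = x0 - ((x0 % sprite_width  + sprite_width)  % sprite_width);
        int start_y = y0 - ((y0 % sprite_height + sprite_height) % sprite_height);

        for (int y = start_y; y < y1; y += sprite_height)
            for (int x = start_x; x < x1; x += sprite_width)
                plot_sprite_scaled(x, y);               /* clipped by the window */
    }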
If I remember rightly the ViewFinder hardware acceleration module incorporated the tiled plotting support, so it ran significantly faster.
1 bit-per-pixel masks
During the work to reduce the size of the ROM image to include some other features, I wanted to reduce the standard sprite pool to 1 bit-per-pixel masks. The standard sprites that the Wimp used were now significantly larger than they had been. Whereas up to RISC OS 3.7 the sprites were 16 colours, they were now 256 colours. To make these faster to plot and take less space, Acorn had reduced the sprite mask size by introducing 1 bit-per-pixel masks. Prior to this, the mask had the same width as the data pixels in shallow modes - so for a 256 colour sprite, the mask had 1 byte per pixel. The use of 1 bit-per-pixel masks had previously been reserved for deep mode sprites (32K and 16M colours), so extending to the paletted modes wasn't a huge step for them.
However, during testing, we found some strange effects which weren't really going to be acceptable. Usually these presented themselves as a problem when you dragged a sprite around the screen, and it used a 1-bit-per-pixel mask. This held up the change to using the new style sprites quite a bit. The problem lay with the Kernel sprite plotter failing to take account of the right hand edge of the pixel mask when the image data ended on a word boundary (as the mask would not necessarily end on a word boundary at the same time as the image data), and when the alignment on the screen was a particular offset within the word.
Tracking it down and getting the right combination of parameters which failed was fun - and then locating the correct code to modify in order to make it render properly was equally convoluted. I remember there being about 3 attempts to get the correct combination. The first 'fix' only worked if your sprites had 1 bit-per-mask-pixel masks, which meant that actually most things broke! The second dealt with only a single case of the 'end of sprite word' alignment. And finally I think I got them all with a fix that applied to all cases.
With that, the sprites could be reduced to 1 bit-per-mask-pixel and therefore took less room. The other plus - of course - was that the advertised feature actually worked reliably, which is usually a good thing.
Sprite crashes
Going all the way back to RISC OS 3.1 (and possibly earlier) there was a fun bug with redirection to a sprite which is only 1x1. The bug itself had been known to people for some time, but the usual solution was simply to create a 2x2 sprite and redirect to that instead - because usually you were trying to create a solid single pixel sprite.
Although the bug didn't tend to affect applications, it meant that trying to use such sprites wasn't particularly safe. It made general use of sprites harder, because the code had to be aware that such 1x1 sprites were unsafe. The usual effect of trying to write to such a sprite was a massive memory corruption and a hasty reboot being required - mostly because the crash occurred whilst the redirection was still in force.
Because it was rather severe, and obviously would have affected any sprites that might be manipulated by routines like Filer which could generate thumbnails from other formats, I spent a little while hunting for it. That was back in the early RISCOS Ltd days and, like Acorn before me, I didn't find the problem. Much later, whilst working through the many bugs in the sprite system, I found the true cause hiding away in innocent code which claimed that being triggered was "impossible!".
This is just another case of my 'see a bug, fix it' policy being successful. There were other times when it was less successful and code had to be rolled back, but they were really quite rare. It took quite a while to reach such a place within RISC OS source, where I was comfortable doing that, but it made a big difference once that decision had been made. Occasionally it resulted in a greater degree of anxiety than might otherwise have been reached - once you're a few layers deep in bugs that you've seen, and whilst fixing them found yet more, it tends to fray your faith in the software.
However, such sprees resulted in far more robust software and less likelihood of being tripped up in the future. Albeit at the expense of feature development. Fixing bugs found by users is embarrassing. Fixing bugs before they're found is far better. Fixing bugs that have been there a long time and have been missed by others is enlightening. At the end, though, you have to provide features - I believe that bug fixing is of itself a big feature.
There was another crash that was slightly related - although the problem lay elsewhere entirely. When you redirected to a sprite, the text window size would be calculated as the number of full characters that would fit into the sprite, both horizontally and vertically. However, in the case of a small sprite, where not even one character would fit (either because it was too thin or too short), it would still say it was one character tall or wide.
This didn't usually cause problems, as in most circumstances you wouldn't output text to the screen whilst output was diverted to the sprite. However, in some cases this was out of your control. The example that found the problem for me was debugging the thumbnailing system. My debugging involved using VDU 4 output, which would be captured by the !Console application. The only problem is that if the application isn't running, the output goes wherever the output normally goes - in this case, to the sprite.
Crashing Filer is generally a bad thing, so it was important that this was fixed. Initially I wanted to ensure that the text was plotted truncated at the edges of the sprite. After a little bit of looking at the code I decided I didn't care - you wanted a small sprite, and it was too short for text, so you wouldn't get any. It fitted better with the other cases (where text would never 'hang out' over the edge of sprites if they were not multiples of 8 pixels), and it was safe.
CompressJPEG
JPEG transcoding
The JPEG decoding in SpriteExtend was based on a version of the Independent JPEG Group's reference library, although very heavily modified. As far as I could tell, it was based on version 3 of the IJG reference library - a version that isn't at all supported, and is very difficult to find. In particular, it does not support the decoding of progressive JPEGs.
As retrofitting a more recent library wasn't going to be practical, and would in any case have resulted in a far slower implementation than the optimised ARM routines that were already present, I had to find another way to render such images. Progressive images came in a few flavours, but essentially they were useful for use on slow links where the general shape of the thing being represented was important to get across early on, with subsequent data filling in the detail.
The solution I employed was relatively obvious and simple. Instead of trying to render the progressive image directly, flatten it first into a sequential image that we could render directly. To do this, we needed to transcode the progressive JPEG data into a non-progressive format.
The JPEG reference library had functions to do this - but only in its most recent version (6b at that time). The CompressJPEG module used the version 5 library, which didn't have these functions. Upgrading the CompressJPEG module to version 6b gained a few minor features (which we wouldn't use) and bug fixes (which we'd benefit from even if we'd not seen problems previously). It also increased the size slightly because there were more functions available.
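The mechanism the 6b library provides - the same one jpegtran is built on - is to read the entropy-coded coefficients and write them straight back out under a different scan script, without ever decoding the pixels. A rough sketch of that library usage (not the module's code, and with error handling kept minimal):

    #include <stdio.h>
    #include "jpeglib.h"

    /* Flatten a (possibly progressive) JPEG into a sequential one without
       re-running the DCT, by copying the coefficient arrays across. */
    void transcode_to_sequential(FILE *in, FILE *out)
    {
        struct jpeg_decompress_struct srcinfo;
        struct jpeg_compress_struct   dstinfo;
        struct jpeg_error_mgr         jerr;
        jvirt_barray_ptr             *coefs;

        srcinfo.err = jpeg_std_error(&jerr);
        jpeg_create_decompress(&srcinfo);
        jpeg_stdio_src(&srcinfo, in);
        jpeg_read_header(&srcinfo, TRUE);
        coefs = jpeg_read_coefficients(&srcinfo);     /* no pixel decode needed */

        dstinfo.err = jpeg_std_error(&jerr);
        jpeg_create_compress(&dstinfo);
        jpeg_copy_critical_parameters(&srcinfo, &dstinfo);
        /* dstinfo defaults to a sequential scan script; calling
           jpeg_simple_progression(&dstinfo) here would reverse the process
           and produce a progressive image instead. */
        jpeg_stdio_dest(&dstinfo, out);
        jpeg_write_coefficients(&dstinfo, coefs);

        jpeg_finish_compress(&dstinfo);
        jpeg_destroy_compress(&dstinfo);
        jpeg_finish_decompress(&srcinfo);
        jpeg_destroy_decompress(&srcinfo);
    }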
Introducing the transcode functions to CompressJPEG presented the first fun issue - we needed to be able to read JPEGs. Previously the module had only ever needed to write JPEGs, but if it needed to transcode them it would also need to read them. This introduced a bunch more code for the reading which hadn't previously been required. As a result, the 'slight' increase in size became a large increase in size - the module became about twice the size it had been previously. After a little dithering, I decided I didn't care; the size brought with it a significant useful feature of being able to render any JPEG, so it wasn't an issue of concern.
I added a new call - SWI CompressJPEG_Transcode - to allow us to generate a non-progressive version of the JPEG. SpriteExtend would first vet the JPEG it had been given. If it was found to be of a type that it could not handle natively, it would then pass it to the transcoder to make non-progressive. Because this only involved simpler data manipulation, and didn't require the data to be decoded, it was reasonably fast.
SpriteExtend keeps a cache in a dynamic area of the JPEG that has been created by the transcode operation. Some quick checks on the JPEG data are performed on each operation, and if the image looks like the same data that we were given previously we reuse the transcoded buffer. SpriteExtend already did this for JPEGs so that it didn't need to throw away its Huffman row cache tables which accelerated redraw quite significantly.
The initial SWI call was just a simple transcode to a non-progressive image, but as the facilities were there for other formats I wanted to make them available to callers. The transcode operation could perform a number of different types of operations, which were useful to export. So the SWI gained a number of flags, to allow it to perform these operations quickly - lossy rotation and flipping of the image were easily possible with the call, as was reduction to greyscale and trimming out any other chunks that weren't required (eg thumbnails and Adobe chunks).
It was also possible to create a progressive image from the JPEG, effectively reversing the process - which might be useful if you were creating a JPEG for use in a browser. In these days of fast broadband, progressive images are less useful; even reasonably sized JPEGs download in a few seconds. But they have uses on slower links. The progressive images could be created either as a simple progressive JPEG (using a fixed table) or a more complex progressive with many more stages, or different ways of filling in the data.
I created a very simple transcode tool, MiniJPEGTran, which would use the CompressJPEG calls to perform the conversion, making it a very small tool to create progressive (or non-progressive) images. I'm not sure that the Mini* tools were used by many people, but it seemed more sensible to have tools that used the shared facilities.
Simplified JPEG creation
The CompressJPEG module provides the support for creating JPEGs from bitmap images. Initially, when it was introduced in RISC OS 3.6, it was used primarily by !ChangeFSI for creating JPEGs, but it was limited in its usefulness. I had used it in the past to create JPEG screenshots, and Chris Johns wrote a nice simple 'jscreensave' which would save the screen as a JPEG using the CompressJPEG interface.
As a developer it was a little frustrating that the interface only allowed for input as RGB triples. You had to convert the data from your sprite into a suitable array of RGB triples for encoding into a JPEG. If the sprite (or screen) was paletted, that meant an extra lookup. If the source was 16 bit-per-pixel then an expansion was necessary.
In every single case this meant that if you were converting from general sprites to a JPEG, you would have exactly the same code to perform this conversion. Additionally, the CompressJPEG API only allowed for a single line to be operated on at a time, which was a little wasteful if you had the entire buffer available to you.
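That conversion boilerplate looked much the same in every application. For an 8bpp paletted source it amounted to something like this sketch (assuming the palette is supplied as &BBGGRR00 words, as ColourTrans provides them, and that the triples are in R,G,B order):

    /* Expand one row of 8bpp paletted sprite data into the RGB triples that
       the original CompressJPEG interface wanted. Every caller ended up
       writing a loop much like this one. */
    void row_to_rgb_triples(const unsigned char *row, int width,
                            const unsigned int palette[256],
                            unsigned char *rgb_out)
    {
        for (int x = 0; x < width; x++)
        {
            unsigned int word = palette[row[x]];   /* &BBGGRR00 palette entry */
            *rgb_out++ = (word >> 8)  & 0xff;      /* R */
            *rgb_out++ = (word >> 16) & 0xff;      /* G */
            *rgb_out++ = (word >> 24) & 0xff;      /* B */
        }
    }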
I added a new SWI CompressJPEG_WriteLines call which could write multiple lines, but was also able to perform the conversion from sprite data into RGB data for you. For paletted modes, the palette in use had to be supplied still, but this was usually available when you began the operation.
At the same time, it seemed useful to introduce a means of encoding a comment into the JPEG. This was usually only useful at the start of the image, and allowed nice comments to be written in the data. I think I updated the !ChangeFSI code to add a message including the application name and version, and the type of file that the JPEG was created from.
SpriteUtils
Many parts of RISC OS had a long history, and had become obsolete. The system sprite area was one of these. This was the initial sprite area which Arthur had used for all of its operations, and which mapped to the sprite operations which were performed by the GXR ROM on the BBC (and later built in to the Master).
The area had previously been allocated by the Kernel, and requests to perform operations on the system sprite area were processed by the Kernel as part of the SWI OS_SpriteOp call. Two factors caused this to change. Firstly, all the obsolete interfaces were being moved into modules separate from the Kernel in order that they could be removed more readily in the future. Secondly, memory areas (whether obsolete or not) were being moved to the modules that provided the functionality, rather than being managed by the Kernel.
The Sprite vector processed all the calls to handle sprites. The system sprite pool had been handled by the Kernel, but these calls were all moved so that the SpriteUtils module provided the operations, modifying the reason codes as they passed through it.
The dynamic area itself came under the control of the SpriteUtils module as well, which meant that when the module was killed (or if it was never present) the area itself didn't exist. The result was that the Kernel never needed to know about the system area any more, and if the legacy support needed to be removed, it would only be a simple matter of killing the module.
I think there were a couple of calls which weren't supported due to this change - very old GXR calls that had less meaning on RISC OS - but the whole system was so old that it should have been exorcised long ago.