Twin Window Management
Most window systems handle window management by having the applications essentially manage their own windows; there's shared code which the applications invoke when appropriate. This seems like a good model for cooperative, well-behaved applications, as it reduces the complexity of the process considerably.
So, unlike X, I'll use this model and see if I get stuck at some point. I'm not expecting to need very sophisticated window management, so we'll hope it all works out. To allow a variety of window styles, I've hard-coded a few different window types:
- Plain - no decorations at all, useful for menus and the like
- Application - full decorations
- Full screen - like plain, but pinned to fill the screen
- Dialog - temporary window associated with an application window
- Alert - system modal window, always on top
I'll be surprised if I don't end up adding more.
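To make the list concrete, here's a minimal sketch of how these styles might be represented; the enum and struct names are my own illustration, not Twin's actual types.

```c
/* Illustrative only: these names are assumptions, not Twin's real API. */
typedef enum {
    WINDOW_PLAIN,        /* no decorations; menus and the like */
    WINDOW_APPLICATION,  /* full decorations */
    WINDOW_FULLSCREEN,   /* like plain, but pinned to fill the screen */
    WINDOW_DIALOG,       /* temporary window tied to an application window */
    WINDOW_ALERT,        /* system modal, always on top */
} window_style_t;

typedef struct window {
    window_style_t  style;
    struct window  *transient_for;  /* owning application window, for dialogs */
    int             x, y, width, height;
} window_t;
```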
Event Dispatch
To get events from the hosting system, I have a simple dispatch model. There's no queueing within the system, although there may be some within the event device itself. Events are dispatched from the device to a specific screen.
Pointer device events are delivered by searching through the viewable pixmaps on the target screen from top to bottom and sending the event to the first one with a non-transparent pixel covering the pointer location.
A ButtonDown event causes the pointer device to be 'grabbed' by the target pixmap; further pointer device events are delivered to that target irrespective of where the pointer is located. When the matching ButtonUp event is received, the pointer is ungrabbed.
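Here's a rough sketch of that dispatch path, combining the top-to-bottom hit test with the grab logic. The event, pixmap and screen types below are assumptions for illustration, not the real Twin structures.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative event and pixmap types; not Twin's real definitions. */
typedef enum { EV_BUTTON_DOWN, EV_BUTTON_UP, EV_MOTION } event_kind_t;

typedef struct event {
    event_kind_t kind;
    int          x, y;              /* pointer position in screen coordinates */
} event_t;

typedef struct pixmap {
    struct pixmap *down;            /* next pixmap beneath this one */
    int            x, y, w, h;
    bool         (*opaque_at)(struct pixmap *p, int px, int py);
    void         (*dispatch)(struct pixmap *p, event_t *ev);
} pixmap_t;

typedef struct screen {
    pixmap_t *top;                  /* top of the visible pixmap stack */
    pixmap_t *grab;                 /* pixmap holding the pointer grab, if any */
} screen_t;

/*
 * Deliver a pointer event: if a grab is active, the grabbing pixmap gets
 * the event regardless of pointer position; otherwise search the stack
 * from top to bottom for the first non-transparent pixel under the pointer.
 */
static void
screen_dispatch_pointer(screen_t *screen, event_t *ev)
{
    pixmap_t *target = screen->grab;

    if (!target) {
        for (pixmap_t *p = screen->top; p; p = p->down) {
            if (ev->x >= p->x && ev->x < p->x + p->w &&
                ev->y >= p->y && ev->y < p->y + p->h &&
                p->opaque_at(p, ev->x - p->x, ev->y - p->y)) {
                target = p;
                break;
            }
        }
    }
    if (!target)
        return;

    /* ButtonDown grabs the pointer; the matching ButtonUp releases it. */
    if (ev->kind == EV_BUTTON_DOWN)
        screen->grab = target;
    else if (ev->kind == EV_BUTTON_UP)
        screen->grab = NULL;

    target->dispatch(target, ev);
}
```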
Like Windows, I've split out Key events from Text events. That is to say, you can look at the raw KeyUp/KeyDown events if you like, or ignore them and let the system convert them to Unicode values. This lets applications handle non-Unicode keys without having to deal with the complexity of whatever conversion system is used to generate Unicode. I'm hoping this will permit the addition of non-keyboard text input methods for devices using Twin. Hmm. If I add some editing commands as well, it should be possible to drive text input from devices whose conversion is imprecise and needs correction.
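As a sketch of that split, the routine below offers the raw key event to the application first and only cooks it into a Text event if the application ignores it. The handler signature and the conversion hook are assumptions, not Twin's actual interface.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative types; the real event structures will differ. */
typedef enum { EV_KEY_DOWN, EV_KEY_UP, EV_TEXT } key_event_kind_t;

typedef struct {
    key_event_kind_t kind;
    uint32_t         key;   /* raw key code for KeyDown/KeyUp */
    uint32_t         ucs4;  /* Unicode value for Text events */
} key_event_t;

/* Hypothetical per-window hook: return true to consume the event. */
typedef bool (*key_handler_t)(void *closure, key_event_t *ev);

/*
 * Offer the raw KeyDown/KeyUp to the application; if it ignores a
 * KeyDown, convert it to Unicode and deliver a Text event instead,
 * so the application never has to run the conversion itself.
 */
static void
deliver_key(key_handler_t handler, void *closure,
            uint32_t key, bool down,
            uint32_t (*to_ucs4)(uint32_t key))
{
    key_event_t raw = {
        .kind = down ? EV_KEY_DOWN : EV_KEY_UP,
        .key  = key,
    };

    if (handler(closure, &raw))
        return;                          /* raw key was consumed */

    if (down) {
        uint32_t ucs4 = to_ucs4(key);
        if (ucs4) {
            key_event_t text = { .kind = EV_TEXT, .ucs4 = ucs4 };
            handler(closure, &text);
        }
    }
}
```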
With this event dispatch model, I've hooked button motion to a simple routine which moves the windows around on the screen. As usual, you can see what things look like today in the obligatory screen shot.
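That moving routine needs little more than remembering where inside the window the pointer was at ButtonDown, then repositioning the window on each Motion event while the grab holds. Here's a sketch reusing the event types from the dispatch example above, again with made-up names.

```c
/* Illustrative window-move handler; names are assumptions. */
typedef struct {
    int x, y;              /* window origin on the screen */
    int grab_dx, grab_dy;  /* pointer offset inside the window at ButtonDown */
} movable_window_t;

static void
window_move_handler(movable_window_t *w, event_t *ev)
{
    switch (ev->kind) {
    case EV_BUTTON_DOWN:
        /* remember where inside the window the pointer went down */
        w->grab_dx = ev->x - w->x;
        w->grab_dy = ev->y - w->y;
        break;
    case EV_MOTION:
        /* keep that point under the pointer as it moves */
        w->x = ev->x - w->grab_dx;
        w->y = ev->y - w->grab_dy;
        /* a real version would damage the old and new regions here */
        break;
    default:
        break;
    }
}
```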