- April 2004 -
Software Issues and the
1. Introduction

We believe that the scheme described here would be an optimal solution for software vendors to adopt when supporting external control in their products. But the primary reason for this article is less to offer specific architectural recommendations than to make more people aware of this issue.
My company, Charmed Quark Software, is the vendor of a software-based control and automation system called the Charmed Quark Controller, or CQC for short. To date, CQC has concentrated on the 'traditional' control and automation market, meaning the management of external devices, and providing all the 'glue' to distribute that control and make it available to the user via various input/output devices. This is currently the bulk of the automation market, and it seems likely to remain the core of any high-end automation product for some time to come.
However, these days more and more customers (and, importantly, potential customers) are becoming interested in moving more functionality onto their general-purpose computers, especially in the area of media content: various video and audio playback systems, and the management and distribution of that media, either locally or across a network. Very commonly, the computer (the Home Theater Personal Computer, or HTPC as it's called) serves as the core of a home theater system.
Since CQC is a software-based automation system, the ability to control software-based 'devices' is an obvious area of expansion for our product, since it (or some component of it, as it is network distributed) will often be running on the same network as the target software component, if not on the same machine. Conceptually, a software DVD player is no different from a hardware DVD player: it provides certain capabilities, and one would like to direct those capabilities externally via a control system, and to get feedback from it so that its state can be displayed in various ways.
Charmed Quark is definitely agnostic in the growing 'war' between dedicated CE devices and the HTPC. They both have their pros and cons and potential markets, and we would like to address both audiences. But at this point, external devices definitely have a huge advantage in terms of automation.
2. The Problem
We've spent many an hour cursing the designers of particularly bad control protocols on external devices (mostly serial these days), but it is still infinitely easier to create a robust and responsive automation system for external devices than for a system of HTPC software components, though one might initially expect the opposite given the malleability of software. The reason is that external control generally seems to rank a few notches below 'integrated gun rack' on the list of things to support in most of these software-based components. A small number of vendors have made external support a priority, but most have no provisions for control at all. The primary means of control is the very kludgy mechanism of faking GUI events, e.g., faking an Enter key or a mouse click on a given button.
A control mechanism that works in terms of the user interface is extremely prone to breaking, or to failing in non-obvious ways, since user interfaces can change even in minor new releases of a product. So if you depend on a particular sequence of events (select the menu, arrow down three times, fake an Enter key, click the button with id 0x31 on the dialog that pops up, fake an Enter key) and the interface changes, who knows what will happen when you fake that sequence. And that is not to mention the difficulty of getting feedback from the software component, since feedback is only available if it is actively displayed in some GUI window, where it has to be 'scraped' out of the displayed text.
We believe this is a festering issue that needs to be addressed sooner rather than later. It seems inevitable that more and more functionality will become available via the HTPC, and that there will therefore be a growing desire to manage that functionality in the same robust, bi-directional way that external devices are managed. But the vendors of these software components seem fairly oblivious to (or just not convinced of) the need to provide such robust control mechanisms.
If things remain as they are now, the growing family of software-based media management products is not going to be taken seriously in the automation world. For vendors of hardware-based media management and playback devices, this is probably welcome news, since their markets will be protected from encroachment by the HTPC. But for those who want to see the computer shoulder more of the home media entertainment chores, the current situation doesn't bode well for moving beyond the hobbyist realm, and dedicated hardware devices will continue to be the only viable option for those looking for highly controllable systems. The HTPC already has this sort of hobbyist reputation, and it won't get any better if the requirements of automation are not taken into account.
3. Possible Solutions
Many software engineers reading this may immediately jump on my use of the term 'component' and point out that therein lies the problem. In the software world, the traditional way to provide this kind of control is to deliver the individual pieces not as standalone programs but as true components in the software sense, meaning a blob of software designed to perform some function that can be integrated into a larger program providing overall control and management. Pretty much none of the HTPC-oriented products are components in that sense; they are worlds unto themselves, separate programs intended to be used directly by an end user.
If these products were delivered as real software 'components' (which isn't likely, but assume it for the sake of argument), that could be very useful for certain types of applications. But we would argue that it would not be the appropriate solution for automation systems in general. Though a software-based automation system like CQC could easily make use of such loadable components, and even external (outside the local PC) control systems could provide a proxy program on the PC to do so indirectly, it just raises too many issues of reliability and compatibility.
CQC's CQCServer component, which provides the actual background control of devices, is a highly multi-threaded, highly asynchronous, complex object server. We really don't want to load any other code into it at all if that can be avoided, and most other automation vendors would probably feel the same. That would just be asking for trouble. We cannot validate the robustness of every vendor's software, and if anything goes awry, it will be Charmed Quark who gets the call, not the vendor of the software component being controlled. Even if CQC and any one loaded component ended up being reasonable dance partners, throwing four or five or ten of them into the same process space could get ugly.
We believe that these products should be treated just as automation systems treat the hardware devices they control, i.e., kept at the end of the proverbial ten-foot pole. The connection between them should be at some protocol level, not at the code level, keeping each in its own process space. In addition to isolating possible bugs in the code, this avoids all of the language, library, and operating system compatibility issues that would otherwise arise from directly mixing code. And if either product has bugs (controller or controllee), the fickle finger of blame can be pointed far more easily when things go awry, which is no small advantage to either user or vendor.
There are a number of possible solutions here. These product vendors could implement platform specific inter-process capable automation interfaces, such as COM on the Windows platform. That would be an effective means of providing external control while still providing a callable API style interface. But, that would still be limiting in that only control systems which run on the same operating system, or for which a cross platform implementation of the automation scheme was available, could manage these software components. Even if a product only runs on Windows, that doesn't mean that the control system does.
In fact, the control system may be (and often will be) completely external to the computer running the software components to be controlled, so the optimal solution would be for these component vendors to use a network-based server approach. This has most of the advantages of the COM approach, though it is a little more complicated than a callable API-style interface. Most importantly, it is completely platform neutral, since almost any control system can control socket-based devices. And it is location independent, which matters when the controller is not software based and running locally; otherwise some sort of proxy process must run on the target operating system to act as the control system's go-between. The network connection is fast, ubiquitous, and available on any PC the product is likely to run on; it can easily support multiple connections; and almost any developer is familiar with socket programming.
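As a rough sketch of what such a socket-based control server might look like: the command names (PLAY, PAUSE, STATUS), the port, and the line protocol below are invented for illustration, not any vendor's actual interface.

```python
import socketserver

# Hypothetical engine state, shared by all connections for this sketch.
STATE = {"transport": "stopped"}

def handle_command(line: str) -> str:
    """Map one protocol command to an engine action and return a reply."""
    cmd = line.strip().upper()
    if cmd == "PLAY":
        STATE["transport"] = "playing"
        return "OK"
    if cmd == "PAUSE":
        STATE["transport"] = "paused"
        return "OK"
    if cmd == "STATUS":
        return "STATE " + STATE["transport"]
    return "ERR unknown command"

class ControlHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One command per line; a threading server lets several control
        # systems stay connected at once.
        for raw in self.rfile:
            reply = handle_command(raw.decode("ascii", "replace"))
            self.wfile.write((reply + "\n").encode("ascii"))

def serve(port: int = 4999) -> None:
    # Blocking call; a real product would run this in the background.
    with socketserver.ThreadingTCPServer(("", port), ControlHandler) as srv:
        srv.serve_forever()
```

Because the protocol is just lines of text over TCP, any control system on any operating system, hardware or software, can drive it with nothing more than a socket.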
4. The Optimal Solution
To us, the optimal solution would be one in which the vendor provides a clean separation between interface and implementation, i.e. the GUI part is cleanly separated from the part that provides the actual functionality. This is just good design in and of itself and will have many internal benefits for them, but it means that the non-visual parts become a separable 'engine' that can be run without the interface. The vendor will have to provide an API on this engine that is sufficient to do anything that their own user interface allows the user to do, which one would assume is everything possible. So the API to the engine will be known to be full featured and robust, and will get constant testing via the standard interface.
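A minimal sketch of that separation, with hypothetical class and method names, might look like this: everything real lives in the engine, and the GUI is a thin layer that only forwards user actions to it.

```python
class PlayerEngine:
    """The separable, non-visual 'engine': all real functionality lives here."""

    def __init__(self) -> None:
        self._playing = False
        self._volume = 50

    # The full-featured API that the vendor's own GUI also uses, so it
    # gets constant testing via the standard interface.
    def play(self) -> None:
        self._playing = True

    def stop(self) -> None:
        self._playing = False

    def set_volume(self, level: int) -> None:
        if not 0 <= level <= 100:
            raise ValueError("volume must be 0..100")
        self._volume = level

    def status(self) -> dict:
        return {"playing": self._playing, "volume": self._volume}


class GuiFrontEnd:
    """Thin presentation layer: never touches engine internals directly."""

    def __init__(self, engine: PlayerEngine) -> None:
        self.engine = engine

    def on_play_button(self) -> None:
        self.engine.play()
```

A network server front end would sit in exactly the same position as `GuiFrontEnd`, calling the same engine API.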
They still have a product they can ship in its original standalone application form, but they can also ship a server 'front end' for that same engine, which runs either as a foreground or background program, or perhaps as a Windows Service (or a daemon, as I guess it is called in that strange world of Unix). That server exposes a network API that allows the engine to be driven from external sources. If desired, this server could be an optional accessory providing extra revenue, or it could simply be shipped as part of the product.
Another important aspect of this type of architecture is that it makes things like copy protection and licensing schemes much easier to enforce, since in both the standalone and the server incarnations of the product, the vendor's own code is in control. It also prevents a less-than-honest company from simply putting a new front end on the engine and selling it, or selling a competing front end, since the API between the engine and the front end is never exposed or documented to the outside world and can change in any new release as required. And it moves the overhead of supporting external control outside the core product, so it need not be installed at all if the user has no need for it.
It is of course also possible to provide both of these schemes in the same product, so that the engine is simultaneously exposed via the user interface and the network interface within the same process. This is how it works on hardware devices, but it adds extra complexity in a software product, which is likely to be more asynchronous and often multi-threaded, and it will require extra synchronization and smarts in the interface, which must deal with the fact that a value may be changed via the network interface at the same moment the user is changing it via the GUI. So some simplicity could be gained by separating the two schemes, and if more money can be made in the process, that is all the more impetus to do so.
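A sketch of what that extra synchronization might look like, with hypothetical names throughout: access to shared state is serialized with a lock, and listeners (such as the GUI) are told about changes they did not initiate.

```python
import threading

class SharedEngine:
    """Engine driven by both a GUI thread and a network thread."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._volume = 50
        self._listeners = []  # the GUI and the network server subscribe here

    def subscribe(self, callback) -> None:
        self._listeners.append(callback)

    def set_volume(self, level: int, source: str) -> None:
        with self._lock:  # GUI and network threads serialize here
            self._volume = level
        # Notify outside the lock so a slow listener cannot stall writers;
        # this is how the GUI learns of a network-side change.
        for cb in self._listeners:
            cb("volume", level, source)

    def volume(self) -> int:
        with self._lock:
            return self._volume
```

The `source` argument lets the GUI distinguish its own changes from external ones, so it can update its display without echoing the change back.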
It is also important that such a control protocol be two-way, providing good feedback on the state of the component. Even in the world of discrete hardware devices, where external control is more widely implemented, all too often the protocol provides little or no feedback to the control system. It cannot be stressed enough how important this feedback is for creating an automation solution that is both very intelligent (self-aware) and provides a lot of 'wow' factor. But this is getting into how to write good control protocols, which is probably a subject for another article.
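As a rough illustration of what two-way feedback might look like at the protocol level: besides replying to commands, the controlled component pushes unsolicited state-change events so the control system never has to poll. The `EVENT field=value` message format here is invented for the example.

```python
def format_event(field: str, value: str) -> str:
    """Device side: build a notification to push to every connected controller."""
    return f"EVENT {field}={value}"

def parse_event(line: str):
    """Controller side: recover a (field, value) pair from a pushed event,
    or return None if the line is an ordinary command reply."""
    if not line.startswith("EVENT "):
        return None
    field, _, value = line[len("EVENT "):].partition("=")
    return field, value
```

With this in place, the control system's displays can track the component's state in real time, even when the state changes for reasons the controller did not cause.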
5. Conclusion

We believe that the scheme described above would be an optimal one for software vendors to adopt when supporting external control in their products, but the primary reason for this article is less to offer specific architectural recommendations than to make more people aware of this issue. Until there is a perceived requirement from users for HTPC-oriented product vendors to get serious about automation, it won't happen. Most non-trivial software is already difficult enough to create and test, and any added requirement is a burden that must be justified.
Perhaps these vendors don't really see their products ever being sold into an environment where automation is a common requirement, i.e., the high end. However, one of the primary driving forces of the HTPC world is that one can often achieve champagne functionality on a beer budget. The problem is that such systems are often created out of a hodgepodge of bits and pieces that aren't designed to be robustly integrated, so they will continue to be looked down on by the high-end automation world, and will often remain too temperamental and quirky to be accepted by average (read: non-technical) users as well. It is true that some automation products do provide application control via user-input emulation, and CQC will have to do so as well, and they can provide reasonable automation. But they use input emulation because it is the only option, not because it is preferable, or even acceptable in our opinion.
So we would encourage everyone with an interest in automation and in seeing the HTPC achieve more of its potential to speak to the vendors of the software components you use and let them know that you take automation seriously. Though some level of automation can be achieved now, the mechanisms involved are not of the level of quality that anyone would expect when looking to create a truly robust automation solution. Unless the vendors of these products hear otherwise, they will have little reason to think that anything is missing, and certainly no reason to add any more work to their already full schedules.
6. The Author
Dean Roddey is the owner of Charmed Quark Software, whose CQC product is a secure, network distributed, highly integrated, control and automation software suite, including a powerful backend architecture, versatile custom interface system, user based security, object oriented macro and driver development language with graphical IDE, supporting X-10, IR, socket, serial, and USB devices. It has a 30 day trial period, during which it is completely unencumbered. At any time during the trial period you can purchase a license and convert your existing system to a licensed system without interruption. You can read more at www.charmedquark.com.