Identifying the top requirements for Embedded Linux systems
by Nicholas McGuire (March 9, 2002)

As Embedded Linux becomes established as a solid alternative to many proprietary OSes and RTOSes, demands on embedded Linux developers and providers are increasing. This detailed technical article by Nicholas McGuire sketches the top requirements for Embedded Linux systems including considerations of user interface, network capabilities, security issues, resource optimization, performance requirements and issues, and compatibility and standards issues.



Introduction


Embedded Linux distributions have been around for quite a while. Single-floppy distributions targeting mainly the x86 architecture, such as the Linux Router Project (LRP) and floppy firewall, are well known by now. This first step into embedded Linux distributions was accompanied by a fair amount of 'home-brew' embedded Linux variants for custom devices, expanding the architecture range into PowerPC, MIPS, and ARM, from which development kits are now starting to evolve.

Embedded Linux is becoming a more and more usable and easy-to-handle segment of Linux. But what is the position of embedded Linux? Where does it fit among the other embedded OSes in the 32-bit market? In this article, a few thoughts on the question "Why embedded Linux?" will be sketched out, positioning embedded Linux quite high up on the list of first-choice embedded OS and RTOS options.

The main challenge, I believe, will be to reconcile contradictory demands in embedded systems -- demands such as:
  •  Simple user interface vs. in-depth diagnostic and administrative interface
  •  High level of security vs. open and simple access to the system via network
  •  Resource constraints vs. high system complexity and low response time




Part 1: The main challenges in High-end Embedded OSes


What are the main challenges for system designers and programmers in the embedded world? The list given here is definitely not complete and reflects many personal impressions -- it is thus only one view among many, and heated debates on what is required may be fought on mailing lists. Consider the following as 'one picture', hopefully offering constructive thoughts on the subject, even if not all of it is applicable to every system.

User Interface

A major point of criticism of embedded Linux systems is their lack of a simple user interface -- generally, embedded systems have an archaic touch to their user interfaces. But a tendency is evolving to split the user interface into two distinct sections: a simple-to-use 'system overview' that gives you general 'system up and running' or 'call the technician' information, and a more in-depth interface that allows you to diagnose system operations at an 'expert' level.

This split is not always done cleanly and is not always visible to the user -- it will often run on one interface -- but it is anticipated by most interfaces of embedded devices, representing the actual operational demands: simple to use for common operations, clear and instructive for maintenance personnel in case of errors. Embedded Linux can provide both at very high quality if designed to these goals from the very beginning.

Many embedded Linux distributions offer a web server giving OS-independent remote access to status information -- at the same time, maintenance via secure shell can allow insight into the system, down to directly poking around in the kernel at runtime without disturbing the system's operation.

Operational Interface

HMIs, as machine-tool designers like to call them, or GUIs, as OS developers prefer, are generally graphics-based interfaces that should allow close-to-untrained personnel to operate specialized hardware and software. A problem that arises here is that embedded systems are limited in available resources, and fully developed X Window systems are very greedy with respect to RAM and CPU usage (if anybody has tried out XFree86 4.0 on a 486 without FPU at 33MHz . . . let me know how long the window manager takes to "launch").

So does this mean you should forget embedded Linux if you need a graphical interface? Nope! There are quite a lot of projects around: nano-X, tiny-X, and projects that give you direct access to the graphics display, like svgalib or the frame-buffer support in recent kernels.

Getting an acceptable graphics interface running on an embedded Linux platform is still a challenge. Even though IBM has shown that one can run xclock on top of XFree86 in a system with no more than an 8MB footprint, a 32MB storage device and 16MB of RAM will generally be the bottom line (there are some PDA distributions, though, that are below that). The operator interface will in many cases be a simply scaled-down variant of a "standard" Linux desktop, and this simplifies development greatly, as the graphics libraries available for Linux cover a very wide range -- with a new widget set emerging every few weeks.
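
For the direct-access route mentioned above, here is a minimal sketch (assuming a framebuffer device at /dev/fb0 and ignoring any padding at the end of display lines) that maps the display into memory and blanks it -- all a simple status display needs, without an X server:

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        struct fb_var_screeninfo vinfo;
        unsigned char *fb;
        size_t len;
        int fd = open("/dev/fb0", O_RDWR);

        if (fd < 0 || ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0)
            return 1;
        /* simplification: ignores any padding at the end of each line */
        len = vinfo.xres * vinfo.yres * vinfo.bits_per_pixel / 8;
        fb = mmap(0, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED)
            return 1;
        memset(fb, 0, len);     /* clear the visible screen */
        munmap(fb, len);
        close(fd);
        return 0;
    }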

Administrative Interface

Embedded products have traditionally required skilled personnel to handle error situations or performance/setup issues, basically due to the non-standard operating-system model behind all these devices. The goal was to have an intuitive interface at the expert level (and many hours of training . . . ), which limited the potential scope of intervention and at the same time raised the maintenance costs of such devices. Embedded Linux takes a different approach -- you have a very large and seemingly complete operator interface, a more or less complete UNIX clone, and this allows operators to debug, analyze, and intervene with great precision at the lowest level of the GNU/Linux OS.

The advantage is clear -- you don't need to learn each product; it's a GNU/Linux system, just like a multiprocessor cluster, a web server, or a desktop system -- one interface for the entire range of possible applications. This allows operators and technicians to focus on the specifics of each platform without great training efforts on a per-device basis. Even though the initial investment in training can be relatively high -- all attempts to manage complex problems using simple interfaces are severely limited -- POSIX II gives the operator a complex and powerful interface that allows adequate response to a complex system.

Status and Error reporting

Checking the status of a fax machine or an elevator is not a high-end administrative task and should not require any knowledge of details at all. To this end, Linux offers the ability to communicate with users directly via the console (simply printk'ing errors on a text console) or a web interface, as well as offering an OS-independent active response via voice, email, SMS, or turning on a siren connected to some general output pin of the system.
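
On the kernel side, such console reporting costs next to nothing; a minimal sketch of a 2.4-era module (the module name and the messages are of course hypothetical) reporting status at two different log levels:

    #include <linux/module.h>
    #include <linux/kernel.h>

    /* hypothetical status reporter: printk routes the messages to the
       text console and to klogd/syslogd according to their priority */
    int init_module(void)
    {
        printk(KERN_INFO "monitor: system up and running\n");
        return 0;
    }

    void cleanup_module(void)
    {
        printk(KERN_ALERT "monitor: going down -- call the technician\n");
    }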

So the resources required for clean status and error reporting are available in Linux and embedded Linux, but care must be taken as to what information is displayed in response to errors, as this naturally touches security issues. Error messages need to be clear, and status information needs to be informative -- "An application error occurred -- [OK]" is not very helpful. On the other hand, it is not always desirable for error messages to include the exact version of the OS/kernel/application and the TCP port on which it is listening . . . as this could reveal information that allows attacking the system.

Network Capabilities

High-end embedded systems are in many cases required not only to offer remote administration; the demand for system updates and fixes from remote sites is also moving onto the demands list. Linux, and embedded Linux as well, offers many possibilities to satisfy these needs at a high level of efficiency, flexibility, and security, at the same time extending network-related features far beyond common demands.

Network resources

One of the strengths of GNU/Linux is its network capabilities. These include not only a wide support for protocols and networking hardware, but also a wide variety of servers and clients to communicate via network links. Naturally, a system that provides a large number of network resources also needs to provide appropriate security mechanisms to protect against unauthorized access. In this respect Linux has evolved very far -- especially the latest 2.4.X kernels provide a highly configurable kernel with respect to network access.

Remote Administration

Reducing costs is a primary goal of much of the technical development effort being done. A major cost factor in embedded systems is long term maintenance costs. Not only the direct costs of servicing the devices on a routine basis, but also the indirect maintenance related costs of system down-times and system upgrades are an important factor. A reduction of these costs can be achieved if embedded systems have the ability of remote administration. This encompasses the following basic tasks:
  •  remote monitoring of system status (web-interface, logging to a central facility, etc.).
  •  remote access to the system in a secure manner allowing full system access. This can be done via encrypted connections.
  •  the ability of the system to contact administration/service personnel via mail/phone, based on well-definable criteria.
  •  upgradeability of the system in a safe manner over the network, allowing not only full upgrades but also fixing of individual packages/services.
A GNU/Linux-based embedded system is well suited for these tasks, providing well-tested servers and clients for encrypted connections, embeddable web servers, as well as system log facilities that are in most cases capable of remote logging. Outgoing calls from an embedded system that are necessary to satisfy these criteria are also well established in GNU/Linux, allowing connections to be established via any of the common network types available, including dialing out via a modem line.

Scanning the Potential

The previous section listed a number of tasks that a remotely administrable system should be able to perform, but this is definitely not the full suite of offerings a GNU/Linux system has in the network area. The degree of autonomy of an embedded system can be pushed up to that of a server system -- allowing for dial-in support for proprietary protocols to fit into a non-UNIX environment smoothly. NFS, the network filesystem, can be incorporated into an embedded system not only as a client, but also as a server, allowing a central server or administration system to mount the embedded system for monitoring and upgrade purposes -- this way giving virtually unlimited access to an embedded system over the network.

At the same time, all of these services can be provided in a secure manner by running them over VPNs or encrypted lines. This capability of 'stacking' services is one of the strengths of GNU/Linux networking -- and again, you don't need to rely on a specialized software package; you can rely on well-tested and widely deployed setups that will give you a maximum of security.

Security Issues

My personal belief is that not so much power consumption or processing speed, but security, will be the key issue in embedded systems in the near future. Reliability was one of the demands from the very beginning; security, on the other hand, has been neglected. The more complex embedded systems become, offering extensive user intervention and utilizing the ability to interact with local networks and the Internet, the more security-related issues emerge.

Linux Security

GNU/Linux for servers and desktops is well suited for sensitive computer systems. Its security mechanisms are challenged on a daily basis by script kiddies and 'professional' hackers. Although this is not a very pleasant way of getting your system tested, it is a very efficient one. A system that is deployed in a few hundred to maybe a thousand devices will hardly be tested as extensively as the GNU/Linux system is.

This means that an embedded Linux or real-time Linux system is relying on the same mechanisms that are being used in servers and desktop systems. This high degree of testing and, at the same time, the full transparency of the mechanisms in use, due to source code availability, make a GNU/Linux system well-suited for systems with high security demands.

Standard services that a Linux system can provide include:
  •  firewalling and network-filtering capabilities
  •  kernel-based and user-space intrusion detection
  •  kernel-level fine-grained capabilities allowing precise access control to system resources
  •  user-level permissions and strong password protection
  •  secure network services
  •  well-configurable system logging facilities
Taken together, these possibilities allow not only monitoring systems with respect to the current actions taking place -- and intervening if these are inappropriate -- but also detecting system tendencies and responding to developments well before failure occurs. This tendency monitoring covers hardware (e.g. temperature detection or system RAM testing) as well as system parameters like free RAM, free disk space, or timing parameters within the system (e.g. network response time to an ICMP packet). The vast majority of hardware-related failures are not abrupt, but develop slowly and are in principle detectable -- having an embedded OS/RTOS that can provide this service can improve system reliability as well as system security.
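
Much of this tendency monitoring needs no more than the standard system-call interface; a small user-space sketch (what to sample, and the thresholds to act on, are of course system specific) reading two of the parameters named above:

    /* sample two of the parameters named above: free RAM and free disk space */
    #include <stdio.h>
    #include <sys/sysinfo.h>
    #include <sys/vfs.h>

    int main(void)
    {
        struct sysinfo si;
        struct statfs fs;

        if (sysinfo(&si) == 0)
            printf("free RAM: %lu kB\n",
                   (unsigned long)(si.freeram * si.mem_unit / 1024));
        if (statfs("/", &fs) == 0)
            printf("free disk on /: %lu blocks of %lu bytes\n",
                   (unsigned long)fs.f_bavail, (unsigned long)fs.f_bsize);
        return 0;
    }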

Talking to devices

Most embedded systems will have some sort of specialized device that they talk to in order to perform the main system task -- be it a data-acquisition card or a stepper-motor controller. These 'custom devices' are a crucial point in the embedded Linux area, as they will rarely rely on widely deployed drivers and have a limited test budget available.

So to ensure overall system security, a few simple rules need to be kept in mind when designing such drivers. Regular Linux device drivers operate in kernel space. They add functionality to the Linux kernel either as built-in drivers or as kernel modules -- in either case, there is no protection between your driver and the rest of the Linux kernel. In fact, kernel modules are not really distinct entities once they are loaded; they behave no differently than built-in driver functions, the only difference being the initialization at runtime.

This makes it clear why device drivers are security relevant: a badly designed kernel module can degrade system performance all the way down to a rock-solid lock-up of the system. A really badly designed driver will not even give you a hint of what it was up to when it crashed. So drivers, especially custom drivers, must aim at being as transparent as possible.

To achieve this, flexible system logging should be designed in from the start. This may be done via standard syslog features, as well as via the /proc interface and ioctl functions to query the status of devices. The latter can also be used to turn on debugging output during operation, a capability that, if well designed, can reduce trouble-shooting to a single email or phone call.

Aside from these logging and debugging capabilities, a driver design must take into account that there is no direct boundary between the driver and the rest of the kernel. That means the driver must do sanity checks on any commands it receives and, in some cases, on the data it is processing. These checks not only need to cover the values, order, and types of the arguments passed, but must also check who is issuing the commands -- the simple read-write-execute for user-group-other mechanism of file permissions is rarely enough for this task.
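
As a sketch of what such checks can look like, consider a hypothetical motor controller with a single 2.4-style ioctl command -- the device, names, and limits are invented; the pattern of rejecting foreign commands, checking who is calling, and validating arguments is the point:

    #include <linux/fs.h>
    #include <linux/errno.h>
    #include <linux/ioctl.h>
    #include <linux/sched.h>
    #include <linux/capability.h>    /* capable() */
    #include <asm/uaccess.h>         /* get_user() */

    #define MOTOR_IOC_MAGIC 'm'                           /* assumed command space */
    #define MOTOR_SET_SPEED _IOW(MOTOR_IOC_MAGIC, 1, int)
    #define MOTOR_MAX_SPEED 3000                          /* assumed hardware limit */

    static int motor_ioctl(struct inode *inode, struct file *filp,
                           unsigned int cmd, unsigned long arg)
    {
        int speed;

        if (_IOC_TYPE(cmd) != MOTOR_IOC_MAGIC)  /* reject foreign commands */
            return -ENOTTY;

        switch (cmd) {
        case MOTOR_SET_SPEED:
            if (!capable(CAP_SYS_ADMIN))        /* check who is asking */
                return -EPERM;
            if (get_user(speed, (int *)arg))    /* validate the user pointer */
                return -EFAULT;
            if (speed < 0 || speed > MOTOR_MAX_SPEED)   /* range check */
                return -EINVAL;
            /* ... program the hardware ... */
            return 0;
        default:
            return -ENOTTY;
        }
    }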

RTLinux devices are not that much different from regular Linux devices with respect to security considerations, but they differ enough that the difference should be mentioned explicitly. That this is noted for RTLinux only is due to the fact that my work covers this RTOS variant of Linux, but it should basically hold true for the other flavors of real-time Linux as well (corrections appreciated).

A simple example of setting up a secure RTLinux device would be a motor controller kernel module. This module must be loaded by a privileged user (the root user) and needs to be controlled during operation. To achieve this:
  •  load the module at system boot via an init script or inittab.
  •  change the permissions of a command FIFO (/dev/rtfN) to allow a non-privileged user to access it.
  •  send a start/stop/control command via this FIFO as the unprivileged user.
  •  check the validity of the command and its arguments.
  •  log such events with timestamps and user/connection-related information to the system's log facility.
  •  monitor the logged events and follow development of driver parameters during operation.
  •  document the system behavior in a way that deviation can be located in debug and log output.
If a scheme of this type is followed, then operating a system with custom devices will exhibit a fair level of security. Clearly, a non-standard device will also require an increased amount of documentation and instructions for the operator, as the behavior of non-standard devices can hardly be expected to be well-known even to knowledgeable administrators.
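
To make the scheme above concrete, the module side might look like the following sketch, written against the RTLinux rtf_* FIFO API (the command codes are assumptions, and the actual motor control and logging calls are elided):

    #include <rtl.h>
    #include <rtl_fifo.h>

    #define CMD_FIFO  0          /* appears as /dev/rtf0; permissions set at boot */
    #define CMD_START 1          /* assumed command codes */
    #define CMD_STOP  2

    static int cmd_handler(unsigned int fifo)
    {
        int cmd;

        while (rtf_get(CMD_FIFO, (char *)&cmd, sizeof(cmd)) == sizeof(cmd)) {
            switch (cmd) {
            case CMD_START:
                /* ... start the motor ... */
                break;
            case CMD_STOP:
                /* ... stop the motor ... */
                break;
            default:
                /* reject and report anything unexpected */
                rtl_printf("motor: invalid command %d\n", cmd);
            }
        }
        return 0;
    }

    int init_module(void)
    {
        if (rtf_create(CMD_FIFO, 4096) < 0)
            return -1;
        return rtf_create_handler(CMD_FIFO, &cmd_handler);
    }

    void cleanup_module(void)
    {
        rtf_destroy(CMD_FIFO);
    }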

Kernel Capabilities

A feature of the Linux kernel that is slowly finding its way into device drivers and into applications is its ability to perform permission checks on requests at a finer-grained level than the virtual filesystem layer (VFS) can.

Kernel capabilities are not limited to the normal filesystem permissions of read-write-execute for owner-group-others. Resorting to these capabilities in the kernel allows controlling the actions of a driver, such as introducing restrictions on chown, or relaxing some restrictions, like the ID checks done when sending signals (which allows unprivileged users to send signals instead of making the entire process a privileged process). These capabilities require a cleanly designed security policy for the drivers. The name of this kernel feature says it very clearly: it is control of capabilities, not a security enhancement as such.

No system is secure or insecure as such, but some systems can be configured to be secure and others simply can't. The goal of any implementation using kernel capabilities for access control should be to replace global access settings with resource-specific access restrictions. By this means, one can prevent the root user from accessing the device altogether, as well as give an otherwise completely unprivileged user full access to a specific resource.
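
A minimal sketch of this idea in a hypothetical driver's open() routine -- the point being that the test is for a capability, not for uid 0:

    #include <linux/fs.h>
    #include <linux/errno.h>
    #include <linux/sched.h>
    #include <linux/capability.h>

    static int mydev_open(struct inode *inode, struct file *filp)
    {
        /* even uid 0 is refused unless it holds the capability, while an
           otherwise unprivileged process that was granted it gets through */
        if (!capable(CAP_SYS_RAWIO))
            return -EPERM;
        return 0;
    }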



Part 2: Resource Allocation


Embedded systems, even in their high-end variants, are resource-constrained systems by desktop or server standards. At the same time, the complexity of operations requires that many optimization strategies designed for server and desktop systems be utilized in embedded systems as well. As standard GNU/Linux targets interactive usage and optimizes average response, some of these strategies are not ideal for embedded systems. Considerations for more predictable resource allocation are required in resource-constrained systems -- the resources in question being not only RAM and CPU consumption, but also timing and well-defined system response to critical tasks.

Standard Linux

Linux has a record of squeezing a lot of performance out of little or old hardware. This is done by relying extensively on strategies that favor interactive over non-interactive events. For instance, writes to disk can be delayed substantially: Linux will buffer data and reorder it, writing it out contiguously with respect to disk location, and out of order from the user's standpoint. These and other strategies are well suited to improving average performance, but can potentially introduce substantial delays into a specific task's execution. This is to say that peak delays of a second or even more can occur in GNU/Linux without indicating any faulty behavior.

As embedded systems are generally resource-constrained systems, such optimization strategies are an improvement in most cases. But increasing system complexity, and the potential of a networked system reaching very high loads (just imagine a network on which many other, probably faster, systems are broadcasting all kinds of important server announcements . . . ), can degrade the system's response to high-priority events dramatically. This is to say that an embedded system running standard GNU/Linux had better not have any timing constraints at all, and should not rely on the system catching a specific event. If there are no such constraints with respect to timing, then an embedded system running a scaled-down standard GNU/Linux will suit most purposes well and operate very efficiently.

Soft Real-time

There are many definitions floating around of what soft real-time is. I'm not an authority on this subject, but I give the definition used here to prevent any misunderstandings. Under soft real-time, a system is capable of responding to a certain class of events with a certain statistical probability and an average delay. There is, however, no guarantee of handling every event, nor is there any guarantee of a maximum worst-case delay in the system. In this sense, every system is a soft real-time system. Of course, the term is used for systems that have enhanced capabilities in this area. In most cases this will mean:
  •  high-resolution timers
  •  a high probability of reacting to a specific class of events. High probability in this sense means 'higher than regular Linux'.
  •  low average latency, again low relative to regular Linux.
Soft real-time systems are well suited for cases where quality depends on average response times and delays, like video-conferencing and sound-processing systems, and where the system will not fail or get into a critical state if one or another event is lost or strongly delayed. Simply speaking, soft real-time will improve the quality of time-related processing, but will give you no guarantee -- so you can't have safety-critical events depend on a soft real-time system. There are multiple implementations of soft real-time for Linux, starting out at simply running a thread under the SCHED_FIFO or SCHED_RR scheduling policy in standard Linux, all the way to the low-latency kernel patches that make the Linux kernel partially preemptive (please, no flames . . . thanks). Soft real-time variants of Linux include RED-Linux, KURT, Linux/RK, and the low-latency patch of Ingo Molnar.
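
The first of these options is available on any stock Linux system; a minimal sketch (it must be run as root, and the choice of the maximum priority here is arbitrary) of putting a process under the SCHED_FIFO policy:

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param param;

        param.sched_priority = sched_get_priority_max(SCHED_FIFO);
        if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
            perror("sched_setscheduler");     /* typically: not root */
            return 1;
        }
        /* time-sensitive work here: the process now preempts all
           SCHED_OTHER tasks whenever it becomes runnable */
        return 0;
    }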

Hard Real-time

There are many systems that obviously have hard-real-time requirements, such as control or data-acquisition systems. But there also are a large number of systems that don't have quite so obvious hard-real-time demands: those systems that need to react to special events in a defined small time interval. These systems may be performing non-time-critical tasks in general, but emergency shutdown routines must still be serviced with a very small delay independent of the current machine state.

In such cases, a hard real-time system is required to guarantee that no such critical event will ever be missed, even if the system comes under an enormous load or a user-space application blocks altogether. The criteria for requiring hard real-time as opposed to soft real-time are the following:
  •  No event of a specific category may be missed under any circumstances (e.g. emergency shutdown procedure)
  •  the system should have low latency in response to a specific type of event.
  •  periodic events should be generated with a worst case deviation guaranteed.
Note that these three criteria overlap in a certain respect and could be reduced to a single one -- guaranteeing a worst-case timing variance for a specific event class -- but that's not what I would call a self-explanatory definition.

RTLinux and a derivative of it called RTAI fall into the class of hard real-time Linux variants (if you know of any others, let me know). These are based on three principles:
  •  unconditional interception of interrupts.
  •  delivery of non-real-time interrupts to the general-purpose OS as soft interrupts.
  •  running the general-purpose OS as the idle task of the RTOS.
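
To make the programming model concrete, here is a minimal sketch of a periodic hard real-time thread, written from memory against the RTLinux v3 pthread API (the priority and the 1ms period are illustrative; consult the RTLinux documentation for the authoritative interface):

    #include <rtl.h>
    #include <rtl_sched.h>
    #include <pthread.h>
    #include <time.h>

    static pthread_t thread;

    static void *periodic(void *arg)
    {
        struct sched_param p;

        p.sched_priority = 1;
        pthread_setschedparam(pthread_self(), SCHED_FIFO, &p);
        /* run once per millisecond, starting now */
        pthread_make_periodic_np(pthread_self(), gethrtime(), 1000000);
        while (1) {
            pthread_wait_np();          /* block until the next period */
            /* ... hard real-time work here ... */
        }
        return 0;
    }

    int init_module(void)
    {
        return pthread_create(&thread, NULL, periodic, 0);
    }

    void cleanup_module(void)
    {
        pthread_delete_np(thread);
    }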




Part 3: Operational Concepts


During the development of embedded GNU/Linux projects, a few main modes of operation have evolved. These modes will be briefly described in the next sections, showing the flexibility of embedded GNU/Linux. This flexibility is a product of the wide range of hardware that Linux and embedded Linux have been deployed on -- ranging from commodity-component embedded systems to dedicated-hardware SBCs.

Networked Systems

Network capabilities were one of the early strengths of Linux -- and very early in the development of Linux, specialized Linux distributions for disk-less clients evolved. X terminals based on low-end commodity-component computers have been around quite a while, and from these, specialized systems like the Linux Kiosk system evolved as an example of embedded Linux running via an NFS-root filesystem.

In its latest version, the Linux kernel is fully adapted to boot over the network and run via an NFS-root filesystem, allowing for inexpensive and easy-to-configure embedded systems, ranging from the noted kiosk system to embedded control applications that boot via the network and then run autonomously in a RAMDISK. The ability to operate in a disk-less mode is not only relevant for administration, but is also important for operation in harsh environments on the factory floor, where hard disks and fans are not reliable.

A further use of the network capabilities of embedded Linux is allowing a temporary increase of 'local' resources by accessing remote resources -- be this mounting an administrative filesystem, adding an NFS swap partition (a cruel thing to do . . .), or simply using network facilities for off-site logging. The network resources of Linux allow moving many resources and processing tasks away from the embedded system, thus simplifying administration and reducing local resource demands.

Performance

Performance issues with NFS-root filesystems and NFS-mounted filesystems will rarely be a critical problem for embedded systems, as such a setup is never suitable for a mission-critical system or a system with high security demands anyway. The NFS server and client in the Linux kernel are very tolerant toward even quite long network interruptions (even a few minutes of complete disconnection will normally be handled correctly), but this tolerance does not eliminate the performance problems, and NFS-root is definitely only suitable for systems where the data volume transferred is low.

A special case might be using NFS-root filesystems for development purposes. This is a common choice, as it eliminates resource constraints related to storage media and simplifies development. Development on NFS-root filesystems, though, must exclude benchmarking and reliability tests, as the results will definitely be wrong: a stable NFS-root environment can offer filesystem bandwidth well above that of flash media, while heavy NFS traffic on an unstable or highly loaded network will show false-negative results.

Security of NFS

The NFS filesystem does not have the reputation of providing a high level of security. So NFS-root systems should not be used in areas where network security is low, or on critical systems altogether (for a kiosk system it may be well suited, though). There are secure solutions for network filesystems, like tunneling NFS or SMB via a VPN, but these do not allow booting the system in this secure mode (at least not to my knowledge). Also, SMB, which is a stateful protocol, is clearly better than NFS in this respect, but again, I don't know of any bootable setup providing something like smb-root. For systems that might use local boot media and then mount applications or log partitions over the network, both SMB and tunneled NFS are possible with an embedded GNU/Linux system.

RAMDISK Systems

RAMDISK systems are not Linux specific, but the implementation under Linux is quite flexible, and for many embedded systems that have very slow ROM or media with a relatively low permissible number of read/write cycles, a RAMDISK system can be an interesting solution. RAMDISKs reside in the buffer cache, that is, they only allocate the amount of memory that is currently really in use. The only limitation is that the maximum capacity is defined at kernel/module compile time. The RAMDISK itself behaves like a regular block device; it can be formatted with any of the Linux filesystems and populated like any other block-oriented storage device.

The Linux specialties relate rather to the handling of the buffer cache, which is a very efficiently managed resource in the Linux kernel. Buffers are allocated on demand and freed only when the amount of free memory in the system drops below a defined level -- this way, a RAMDISK-based filesystem can operate very efficiently with respect to actually allocated RAM.

To operate a RAMDISK system efficiently, an appropriate filesystem must be chosen -- there is no point in setting up a RAMDISK and then using reiserfs on it (at least in most cases this will not be sensible). A slim filesystem like minixfs, although old, will be quite suitable for such a setup and yield an efficient use of resources (imposing minor restrictions with respect to maximum filename length and directory depth).

Performance

One of the reasons for using a RAMDISK is file-access performance; a RAMDISK can reach a read/write bandwidth comparable to a high-end SCSI device. This can substantially increase overall system performance. On the other hand, a RAMDISK does consume valuable system RAM, generally a quite limited resource, so minimizing the filesystem size at runtime in a RAMDISK-based system is performance critical. It is a slight exaggeration, but doubling the available system RAM in a low-memory setup can improve overall performance as much as doubling the CPU speed!

A nice feature available for Linux is the ability not only to copy compressed filesystem images to a RAMDISK at boot time, but to actually let the kernel initialize a filesystem from scratch at bootup and populate it from standard tar.gz archives thereafter. The advantage of this is that the boot media can contain each type of service in a separate archive, which then allows safe exchange of such a package without influencing the base system. Naturally, exchanging the base archive or the kernel is still a risk, but at least updating services -- which is the more common problem -- is possible at close to no risk. If such an update fails, you just log in again and correct the setup.

With a filesystem image, you generally have to replace the entire image; if this fails, the system will not come back online, and a service technician needs to be sent on site to correct the problem. To put the additional RAM requirement in relation to the services: a system providing an RTLinux kernel and running sshd, inetd, syslogd/klogd, cron, thttpd, and a few getty processes will run in a 2.4MB RAMDISK and require a total of no more than 4MB of RAM.

Resource optimization

When using a RAMDISK system, a few optimization strategies are available that are hard to use in general or desktop systems. These optimizations are related to the fact that files in a RAMDISK system only have a 'life span' limited to the uptime of the system; at system reboot, the filesystem is created from scratch. This allows removing many files after system bootup: init scripts, some libs that might only be required during system startup, and kernel modules that will not be unloaded during operation after system initialization has completed. The potential reduction of the filesystem was 30-40% on the test system built (e.g. MiniRTL).

Security

As with everything else, the choice of system setup also has security implications; a few of these, with respect to RAMDISK systems, should be noted here. System security and long-term analysis rely on continuous system logs. Writes to RAMDISKs are quick, but writes to off-site storage media or to a slow solid-state disk are delayed, so system logs may be lost. A possible workaround is to carefully separate critical from non-critical logs, writing the former along with other critical status data to non-volatile media (e.g. NVRAM). This solution is quite limited, as in general no large NVRAMs will be available. Alternatively, log files may be moved off-site to ensure a proper system trace, as access may not be possible after a system failure. When writing logs locally to non-volatile media like a flash card, one needs to consider the read/write-cycle limitations of these devices, as letting syslogd/klogd write at full speed to a log file on such media can render it useless within a few months of operation, making it hardly better than off-site logging.

A clear advantage of RAMDISK-based systems is that, since the filesystem modifications are volatile -- as is the entire system -- a 'hack' would be eliminated by the next reboot, giving a safe although invasive possibility to relatively quickly put the system back into a sane state of operation. To enhance this feature, access to the boot media can be prevented by removing the appropriate kernel module from the kernel and deleting it from the filesystem. In case the boot media needs to be accessed for updates, the required filesystem/media kernel modules can simply be uploaded to the target and inserted into the kernel. This strategy makes it very hard for an unauthorized user to access the system's boot media unnoticed.

A reboot puts the system into a sane state, as noted above -- a system can also be configured to boot into a maintenance mode over the network, allowing for an update of the system. These methods are quite easy to implement: such a dual-boot setup, RAMDISK or network, requires no more than a second kernel on the boot media (<= 400K) and a configurable boot selection (syslinux, grub, lilo, etc.) on the system. RAMDISK-based systems can be a security enhancement, if the setup is done carefully.

Flash and Hard disk

Embedded systems need not always be specialized hardware -- even if many people will not recognize an old i386 in a midi-tower as being an embedded controller -- and this can be a very attractive solution for small numbers of systems, for development platforms, and for inexpensive non-mobile devices. The processing power of a 386 at 16MHz is not very satisfactory for interactive work, but more than enough for a simple control task or a machine-monitoring system. The ability to utilize the vast amount of commodity personal-computer components in embedded systems is not unique to embedded GNU/Linux, but Linux systems definitely have the most complete support for such systems, aside from being simple to install and maintain.

Hard disk based systems

Obviously, the last-mentioned method is only acceptable for systems that don't have low power requirements and can tolerate rotating devices, that is, systems that do not have to operate under too-rough conditions. In these cases, the advantage of Linux supporting commodity PC components may be a relevant cost factor, as these components simplify system integration substantially, especially for prototype devices and those built in very low numbers (no special drivers, no non-standard system setups required). Aside from these specialized systems, hard-disk-based systems are also interesting as development platforms, as they eliminate the storage constraints that are imposed on most embedded systems. And, with their ability to use swap partitions, such setups offer an almost arbitrary amount of virtual RAM (although slow) for test and development purposes.

Flash/solid-state disks

Solid-state 'disks' were already available for Linux in the 2.2.X kernel series. Obviously, the IDE-compatible flash disks were no problem; other variants (CFI-compatible, NAND flash, JEDEC, etc.) were more of a problem, but the MTD project has now incorporated these devices into the Linux kernel in production quality with the 2.4.X series. The restrictions of some of these media do stay in place, namely that they have a limited number of read/write cycles available (typically in the range of 1 to 5 million write cycles, depending on the technology used, environmental conditions, and operational parameters). This can be a problem if systems are not correctly designed: a filesystem and the underlying storage media tend to erase/write some areas more often than others (e.g. data and log files will be written more often than applications or configuration files), and the load can naturally be very high in all temporary storage areas, so the storage media may wear out faster depending on the system's layout. Wear-leveling strategies have been designed to reduce this "hot-spot burnout", but this generally means moving data around to level out the wearing, and thus reduced read/write performance of the media.

Imagine a swap partition on flash, or the system log files with syslog's parameters not adapted; such a flash device could run into problems within as little as three months! When using solid-state media with limited read/write cycles, filesystem activity should be reduced: write log files at large intervals, write data to disk in large blocks, and make sure temporary files are not created and deleted at high frequency by applications. Taking the read/write limit into account, the effective life span of such a system can easily be extended to years. If high-frequency writes are an absolute must, then using RAMDISKs for these purposes is preferable.
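
For logs that an application writes itself, batching is straightforward; a user-space sketch (buffer size, flush interval, and path are arbitrary choices here) that accumulates messages in RAM and writes them out in one large block per interval:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define LOGFILE        "/var/log/app.log"   /* hypothetical path */
    #define FLUSH_INTERVAL 3600                 /* flush once per hour */

    static char   buf[16384];
    static size_t used;
    static time_t last_flush;

    static void flush_log(void)
    {
        FILE *f;

        if (used == 0)
            return;
        f = fopen(LOGFILE, "a");
        if (f) {
            fwrite(buf, 1, used, f);    /* one large write instead of many */
            fclose(f);
        }
        used = 0;
        last_flush = time(NULL);
    }

    void log_line(const char *line)
    {
        size_t n = strlen(line);

        /* flush first if the buffer would overflow or the interval expired */
        if (used + n > sizeof(buf) ||
            time(NULL) - last_flush >= FLUSH_INTERVAL)
            flush_log();
        if (n <= sizeof(buf)) {         /* accumulate in RAM */
            memcpy(buf + used, line, n);
            used += n;
        }
    }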

Since solid-state-based systems generally don't lose their data at reboot, one must also take care of data accumulated in temporary files and especially in log files. For this purpose, some sort of cron daemon will be required on such a system, allowing for periodic cleanup. Also, in general, a non-volatile root filesystem will be 30-40% larger than a volatile RAMDISK-based system -- and if file-integrity checks are necessary (as a reboot will not put the system back into a sane state after file corruption or an attack on the system), the filesystem can be double the size of a RAMDISK-based system.

An alternative to delaying reads/writes on devices with limited read/write cycles is to use filesystems that implement wear leveling, like JFFS and JFFS2 (or to use devices that implement wear leveling in hardware, like DiskOnChip or some PCMCIA cards). Generally, this should be taken into account for any devices that don't implement wear leveling at the hardware level (like CompactFlash and SmartMedia . . . correct me if I'm wrong on this . . . ). And no -- journaling filesystems don't automatically guarantee wear leveling. They will protect the filesystem against power-fail situations, which older filesystems like minix or ext2 don't handle very well -- especially if the failure occurs during write cycles -- but journaling filesystems will also show hot spots with respect to read/write cycles that can reduce the life span of some devices.

One characteristic of solid-state devices that must be taken into account is that they are relatively slow (although faster devices have been popping up lately). This has implications for overall system performance as well as for the security of data written to disk. Solid-state disks will often exhibit a loss of the data items being processed at the time of a power loss, though this does not influence the integrity and stability of the filesystem itself. So in a solid-state-disk-based system, critical data will have to be written to a fast medium if it is to be preserved during a power loss.

The generally low read/write bandwidth of solid-state disks can be overcome in some setups by having a "swap disk" located in RAM. It might seem surprising that reducing system RAM and putting some of it into a swap partition can improve performance, but this is the case due to the different strategies that Linux uses to optimize memory usage -- swapping to a slow medium would hurt performance greatly, while swapping to a fast medium will improve swap performance, and at the same time the Linux kernel will modify its optimization strategy to use the reduced RAM as well as possible. Such RAM swap disks can be implemented with the current MTD drivers, using slram on top of mtdblock. Slram provides access to a reserved memory area (reserved by passing a mem= argument to the kernel, limiting the kernel's memory to less than what is physically available); mtdblock provides the block-device interface, so that this memory area can then be formatted as a swap partition at system boot.



Part 4: Compatibility and Standards Issues


The term compatibility has been widely misused, with OSes claiming to be 'compatible' as such -- without stating what they are compatible with. So first, a clarification as to how this term is being used here. One aspect is compatibility between the embedded OS and desktop development systems, at the hardware and software levels as well as at the administrative level. Beyond that level of compatibility, there is also conceptual compatibility, which is of importance not only for development but, to an even higher degree, for the evaluation of systems. The compatibility of embedded Linux with desktop development systems, as understood here, is defined as the ability to move executables and concepts from the one to the other without requiring any changes. This does not mean that some changes might not then be made for optimization reasons, but there is no principal demand for such changes. As an example, one might consider a binary that executes unmodified on the desktop and on the embedded system, but in practice would be put on the embedded system in a stripped version -- this is no conceptual change, though.

POSIX I/II

The blessings of the POSIX standards have fallen on GNU/Linux -- as much as these standards can be painful for programmers and system designers, they have the benefit of allowing clean categorization of systems, and they describe a clear profile of what is required to program and operate them. This is a major demand in industry, as evaluation of an OS is a complex and time-consuming task, so POSIX I cleanly defining the programming paradigm, and POSIX II (not so cleanly) defining the operator interface, simplify these first steps.

Network Standards

Aside from the important POSIX standards, GNU/Linux also follows many other standards, notably in the network area, where all major protocols are supported. Supported standards include the hardware standards for Ethernet, Token Ring, FDDI, ATM, etc., and the protocol layers TCP/IP, UDP/IP, ICMP, IGMP, RAW, etc. This level of standardization allows a good judgment of an embedded Linux system at a very early project stage, and at a later stage simplifies system testing a lot.

Compatibility Issues

The demand for compatibility between embedded systems and desktop development systems touches far more than only the development portion of an embedded system. As much of the operational cost of systems lies in administration, and as a major issue, evolving even more strongly now, is system security, the question of compatibility ranks very high. The more systems become remotely accessible for operation and administration, even for a full system update over the Internet, the more important it becomes to have a well-known environment to operate on. This is best achieved if the remote system behaves "as expected" from the standpoint of a desktop system that developers and administrators have a feeling for -- even if many people in industry will not like this 'non-objective' criterion, it is an essential part. And looking at a modern photocopy machine, one will quickly have the impression of looking at a miniaturized X terminal -- which triggers exactly these expectations on the side of the user.

Development related

During the development process for an embedded system there are a few distinct states one can mark:
  •  system design -- one of the hardest steps in many cases.
  •  kernel adaptation (if necessary -- sometimes simply a recompile and a test)
  •  core system development -- a root-filesystem and base services.
  •  custom application development and testing
The first step is the hardest for a beginner, and having a desktop Linux system to 'play' with can reduce this effort enormously. It is very instructive to set up a root filesystem and perform a change-root (chroot) to that directory, gathering hands-on experience with the system and systematically reducing executables, scripts, libs, etc. A highly compatible system is obviously a great advantage here.

The kernel adaptation phase can be simplified if a desktop system with the same hardware architecture is available (especially for x86-based systems this is generally the case), allowing compiling and pre-testing the kernel for your hardware.

The third step -- actually building the root filesystem -- is not as simple as it might sound from the first step described above. A root filesystem needs to initialize the system correctly, a process that can not only be hard to figure out, but also hard to debug if the system has no direct means of talking to you (it can take a month until the first message appears on the serial console of some devices . . . ). Designing a root filesystem requires that you gain an understanding of the core boot process. For gaining this understanding, a desktop system is hardly suitable; resorting to a floppy distribution (Linux Router Project or MiniRTL) can be very helpful.

Where compatibility between your desktop and the target system can save the most time is when your application runs on your desktop, so that debugging and first testing can be done on a native platform. The biggest problems during development are encountered with cross-compiler handling and cross-debugging on targets that don't permit native debugging. Even though there are quite sophisticated tools available for this last step, a native platform on which to develop your application is by far the fastest and most efficient solution (although not always possible).
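
The change-root step mentioned above is a single system call -- the chroot(8) utility is little more than the following sketch (to be run as root, with the candidate root filesystem as its argument):

    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <new-root>\n", argv[0]);
            return 1;
        }
        if (chroot(argv[1]) || chdir("/")) {   /* requires root privileges */
            perror("chroot");
            return 1;
        }
        /* a shell inside the candidate root filesystem for hands-on testing */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }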

Operation Issues

Hardware and development expenses are a major portion of the costs on the producing side of a system. For people operating embedded systems, maintenance and operational costs are in many cases the major concern. Having an embedded system that is compatible with a GNU/Linux desktop system not only simplifies administration and error diagnostics, but can substantially reduce training expenses for operational personnel. Compatibility is also relevant for many security areas: it is hard to implement a security policy for a system with which operators have little hands-on experience, and at the same time there are few documents to reference for such a security policy on proprietary systems. Being able to apply knowledge available for servers and desktops improves the situation and opens up large resources on the security subject for operators of embedded systems. One further point that can be crucial is the ability to integrate the system into an existing network infrastructure; the immense flexibility of embedded Linux in this respect simplifies this task a lot.

--- The end ---




Bibliography . . .
  • Baraban -- M. Baraban, New Mexico Institute of Mining and Technology: A Linux-based Real-Time Operating System, Thesis (1997).

  • RTLinux -- website, ftp site

  • MiniRTL -- website, ftp site

  • Linux Memory Technology Devices (MTD) -- website, and in the official Linux kernel



About the author: Nicholas McGuire's first contact with Linux dates back to Linux kernel version 0.99.112, at a time when many rumors and myths were circulating about the fledgling open source operating system. McGuire first came into contact with RTLinux at RTL version 0.5, in the course of developing a DSP system replacement for magnetic bearing control, at the Institute for Material Science of the University of Vienna, Austria. McGuire began developing MiniRTL while RTLinux was at version 1.1, and has been engaged in RTLinux and MiniRTL based development work ever since.