Historical Control Systems

These are pages and documentation from my old consulting company, Mt. Hood Software. They provide some historical views of real-time control systems.

Introduction

Real-time refers to a system that must respond to external events as they happen. The system is running some process that requires the absolute, undivided attention of the computer or processor. Most operating systems are real-time in a loose sense because they respond to your keyboard and mouse input. True real-time systems respond to many inputs, and they must prioritize those inputs according to their importance. Real-time systems that always respond deterministically and guarantee a response within a given time period are called hard real-time systems.

Windows 3.1 is an example of a soft real-time system. Any Windows 3.1 process can go into a compute-intensive loop and hog all the CPU time. The only way multi-tasking occurs is with the cooperation of all programs: each must periodically return to its event loop to allow other processes to receive CPU time.
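
To make the idea concrete, here is a minimal C sketch (not taken from any shipping program) of how a cooperative Windows 3.1 application has to behave: a long computation must keep coming back to the message loop or the whole desktop stalls. The work routines are hypothetical placeholders; PeekMessage, TranslateMessage and DispatchMessage are the standard Windows calls.

    /* Minimal sketch of cooperative multi-tasking under Windows 3.1 (16-bit).
     * MoreWorkToDo and DoSmallSliceOfWork are hypothetical placeholders; the
     * Windows calls are the real ones.  The system only regains control when
     * this program pumps messages, so a long computation has to yield itself.
     */
    #include <windows.h>

    BOOL MoreWorkToDo(void);          /* hypothetical: is the long job done?  */
    void DoSmallSliceOfWork(void);    /* hypothetical: one short chunk of it  */

    void RunLongJobCooperatively(void)
    {
        MSG msg;

        while (MoreWorkToDo()) {
            DoSmallSliceOfWork();               /* keep each slice short      */

            /* Drain pending messages; this is also where Windows 3.1 gets a
             * chance to switch to other applications.  Skip it and the whole
             * desktop freezes until the loop finishes. */
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
    }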

Windows 95 and NT improve on this somewhat by using preemptive multi-tasking. A single Windows process can't stop system response by going into an infinite loop: you can always switch to another application and use it, or press Ctrl-Alt-Del and terminate the errant task. However, 95 and NT are not hard real-time systems. True hard real-time systems have many priority levels that you can assign to processes. Windows NT has one deterministic priority class called real-time (actually it contains several levels, but only a few are usable; the rest are reserved for OS functions). It offers other, lower priority levels, but within those levels NT automatically re-assigns a process's priority based on how compute-bound it is. The small number of usable real-time priority levels, plus the fact that the other levels aren't truly deterministic, disqualifies Windows NT as a true hard real-time system. A software engineer would need to dig into the device-driver level of NT and work directly with interrupts to create a truly deterministic real-time system. This generally makes the software too expensive to deploy in custom applications where the priorities and tasks must be modified for each customer.
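
For reference, this is roughly what claiming NT's real-time class looks like from a WIN32 program. It is a minimal sketch, not production code; SetPriorityClass and SetThreadPriority are the standard Win32 calls, and even at this priority NT makes no hard latency guarantee, which is exactly the point above.

    /* Sketch: putting a WIN32 process and thread into NT's real-time class.
     * These are standard Win32 calls; note that even at this priority NT
     * makes no hard latency guarantee.
     */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Request the real-time priority class.  Without the "increase
         * scheduling priority" privilege NT quietly substitutes the high
         * priority class instead. */
        if (!SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS))
            printf("SetPriorityClass failed: %lu\n", GetLastError());

        /* Raise this thread within the class; TIME_CRITICAL is the top of
         * the small band of usable real-time levels mentioned above. */
        if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL))
            printf("SetThreadPriority failed: %lu\n", GetLastError());

        /* ... time-critical work would go here ... */
        return 0;
    }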

An even bigger problem with NT is that WIN32 programs can't do direct I/O. If a program wants to operate a device, it must do so through a driver. You can't just write a C++ program that operates an encoder interface card and reads its registers. The reason is security: if a program could write I/O registers directly, it could also write to the hard disk controller, and that is bad as far as security is concerned. So not only do you need to play tricks to write a real-time WIN32 application, you must also dig down to the device-driver level and write an interface driver for each special piece of hardware in the system. Most drivers will be specific to Windows NT and can't be used for 3.1 or 95. In addition, send Microsoft about $1000 for the special CDs of documentation and the C++ compiler you must use to write the drivers. And don't forget a good low-level hardware debugger, which you will need to fix your bugs. Get the picture?
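
As a rough sketch (the device name "\\.\EncoderCard0" and the IOCTL code below are made up for illustration, not a real driver interface), this is what touching hardware from a WIN32 program looks like once a driver sits in the way: CreateFile opens the driver and DeviceIoControl carries each register access across the user/kernel boundary.

    /* Sketch: reaching hardware from WIN32 by way of a driver.  The device
     * name "\\.\EncoderCard0" and IOCTL_ENCODER_READ_COUNT are hypothetical;
     * CreateFile and DeviceIoControl are the real Win32 entry points a
     * driver would be reached through.
     */
    #include <windows.h>
    #include <stdio.h>

    #define IOCTL_ENCODER_READ_COUNT  0x00222000UL   /* hypothetical code */

    int main(void)
    {
        DWORD count = 0, bytesReturned = 0;

        HANDLE h = CreateFile("\\\\.\\EncoderCard0",  /* hypothetical device */
                              GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("No encoder driver installed: %lu\n", GetLastError());
            return 1;
        }

        /* The register read a DOS program would do with a single IN
         * instruction now costs a user/kernel round trip through the driver. */
        if (DeviceIoControl(h, IOCTL_ENCODER_READ_COUNT, NULL, 0,
                            &count, sizeof(count), &bytesReturned, NULL))
            printf("Encoder count: %lu\n", count);

        CloseHandle(h);
        return 0;
    }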

A Better Way, Client-Server Protected Mode DOS

Mt. Hood Software has developed a better method for running hard real-time systems. We use a protected mode real-time interrupt scheduler to create large process control systems. The 640K DOS real-mode world is just too small for most systems today, so off-the-shelf DPMI technology is used to extend these programs into 16-bit protected mode. The programs are written in Borland Pascal 7.0 because this language has strong type checking and range checking, both of which aid robust program development.
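
For readers who want the flavor of the scheduler, here is a very rough C sketch of the underlying idea: hook the hardware timer interrupt, do a short piece of prioritized work, then chain to the original handler. The production code is Borland Pascal 7.0 running under DPMI, so this Borland C real-mode fragment is only an illustration, and RunHighestPriorityReadyTask is a hypothetical placeholder for the scheduler core.

    /* Very rough real-mode C sketch of the idea behind the interrupt
     * scheduler.  The production code is Borland Pascal 7.0 under DPMI;
     * this uses Borland C's getvect/setvect only to make the structure
     * visible.  RunHighestPriorityReadyTask is a hypothetical placeholder.
     */
    #include <dos.h>

    static void interrupt (*OldTimerISR)(void);   /* original INT 8 handler */

    void RunHighestPriorityReadyTask(void);       /* hypothetical scheduler */

    static void interrupt TimerISR(void)
    {
        RunHighestPriorityReadyTask();   /* short, bounded real-time work   */
        OldTimerISR();                   /* chain so DOS keeps time-of-day  */
    }

    void InstallScheduler(void)
    {
        OldTimerISR = getvect(0x08);     /* hardware timer tick interrupt   */
        setvect(0x08, TimerISR);
    }

    void RemoveScheduler(void)
    {
        setvect(0x08, OldTimerISR);      /* always restore before exit      */
    }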

To extend this technology into the modern programming world, we have added network server functions to the protected mode DOS program. The first version of this improvement is finished and uses standard Microsoft drivers and NetBIOS protocols. TCP/IP support is planned, but NetBIOS already runs on top of both TCP/IP and Novell NetWare (IPX) transports.

This client-server technology allows the DOS program to do what it does best: run a real-time process. Only a minimal operator interface needs to be built into the DOS program. Through the magic of client-server technology, the database embedded within the DOS program is almost instantly available to any Windows 95 or NT client application on the network. Our sorter applications have a real-time database that is around 256K in size. This amount of data can be sent from the DOS server to the Windows client in around 2 seconds, using about 20% of the 10 Mbps Ethernet bandwidth. When the human operator edits a specific data item in the Windows client, only a small block of data must be sent back to the DOS server.
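
To show what the Windows client side of such a transfer can look like, here is a compressed C sketch using the standard Win32 NetBIOS interface (nb30.h, linked with netapi32.lib). The session names, LANA number and chunk size are hypothetical illustrations rather than the actual sorter protocol, and the adapter reset and name registration steps are omitted for brevity.

    /* Sketch of the Windows client side of such a transfer using the
     * standard Win32 NetBIOS interface (nb30.h, link with netapi32.lib).
     * The names "SORTER1" and "OPERATOR1", the LANA number and the chunk
     * size are hypothetical, and the NCBRESET/NCBADDNAME setup of the
     * adapter and local name table is omitted for brevity.
     */
    #include <windows.h>
    #include <nb30.h>
    #include <stdio.h>
    #include <string.h>

    /* NetBIOS names are 16 bytes, blank padded, with no terminator. */
    static void PadName(unsigned char *dst, const char *src)
    {
        memset(dst, ' ', NCBNAMSZ);
        memcpy(dst, src, strlen(src));
    }

    int main(void)
    {
        static unsigned char block[32768];   /* one chunk; the ~256K image
                                                arrives as several of these */
        unsigned char lsn;
        NCB ncb;

        memset(&ncb, 0, sizeof(ncb));
        ncb.ncb_command  = NCBCALL;               /* open session to server  */
        ncb.ncb_lana_num = 0;                     /* first bound adapter     */
        PadName(ncb.ncb_callname, "SORTER1");     /* hypothetical DOS server */
        PadName(ncb.ncb_name,     "OPERATOR1");   /* hypothetical client     */
        if (Netbios(&ncb) != NRC_GOODRET) {
            printf("NCBCALL failed: 0x%02X\n", ncb.ncb_retcode);
            return 1;
        }
        lsn = ncb.ncb_lsn;                        /* session number          */

        memset(&ncb, 0, sizeof(ncb));
        ncb.ncb_command = NCBRECV;                /* read one database chunk */
        ncb.ncb_lsn     = lsn;
        ncb.ncb_buffer  = block;
        ncb.ncb_length  = sizeof(block);
        if (Netbios(&ncb) == NRC_GOODRET)
            printf("received %u bytes\n", (unsigned)ncb.ncb_length);

        return 0;
    }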

There are limits to this technology, however. The DOS server must use real-mode DOS memory for its network buffers, and this limits the number of Windows clients that can connect to the server at one time. In practice this isn't much of a problem: you normally only want one or two clients accessing a real-time system at a time anyway. There are ways around the limitation, such as using a remote WIN32 program to mirror the DOS server and allow many simultaneous connections, but we haven't pursued them because the limits haven't been a problem in practice.

There are also other factors that limit real-time response when using network client-server technology. The network drivers run on interrupts, and depending on how well they are written, they can cause slight delays in real-time response. We have run tests on our programs and found that when the network is in use, the real-time tasks can be delayed by as much as 10 milliseconds from their normal timing. In most practical applications (such as basic 120 VAC control systems) this isn't a problem. Some combinations of drivers and network cards create worse delays; we have found that the industry standard NE-2000 card and drivers work well.

The Move to PLCs (Programmable Logic Controllers)

About the time that computers were getting good at controlling real-time processes, control engineers realized that you could replace electro-mechanical relays with transistors too. One of them put together some logic gates and magnetic core memory and called it a Programmable Logic Controller. Even though it shared the same internal architecture as a computer, they decided on a new name. Perhaps they had some hidden agenda.

Soon after its introduction, the PLC was put to work in automotive plants on a grand scale, replacing old relay panels that needed changing every year when the new models were introduced. Now all they needed to do was reprogram the PLC when a new model came out. I really doubt that it was quite this simple, but the salesmen tried to explain it this way.

Around the same time, the PLC salesmen and control engineers got together and decided that they didn't need computer programmers anymore. The PLC was touted as the solution to all the problems involved with hiring good real-time computer programmers. Just throw away the computer and the programmer, replace them with a PLC and a general electrician, and you have instant process controls. I remember a wish list from Control Engineering magazine from this time period that called for "virtual elimination of the general purpose computer programmer from the control loop."

While the PLC may have been good as a general relay panel replacement, it really wasn't a general purpose device. It was usually limited to far less memory than most computers and lacked an interrupt system, which is absolutely necessary for real-time applications. Can you imagine trying to write a real-time application as one gigantic machine-language program with no subroutines and only very limited branch instructions? Well, this is what the PLC engineers started doing, and boy, did they create some disasters. Novice PLC programmers wrote giant programs that ran too slowly and would not respond quickly enough on high-speed equipment. Management went along with this mode of thinking and continued to favor PLCs over computers because computer programmers were considered less desirable employees than PLC programmers. Not to be too biased, I will admit that PLC hardware was generally superior to computer hardware when it came to surviving the industrial environment, and this had something to do with their popularity. Just don't tell management that all modern PLCs really have a micro-computer inside them that processes the ladder logic.

The modern PLC has started to address these problems. The Allen-Bradley PLC-5 platform has introduced such features as interrupts and crude multi-tasking. It also provides subroutines and indirect addressing, which are very useful if you want to write small, efficient programs. This is perhaps the reason Allen-Bradley has captured the largest share of the PLC market. The only drawback is that the PLC-5 is still limited to small memory sizes, in the range of 16K to 96K bytes.

Another thing the PLC has failed to do is eliminate the computer programmer from process controls. The first PLCs were programmed from specialized programming terminals, and there was little need to modify the process while it was running. Now that the Wintel PC has become the standard personal computer, it is so low cost that most PLCs are programmed using one. As PLC control programs became larger and more complex, better human interfaces were needed to operate them. Computer programmers are again in favor because the networked Wintel platform has become the standard for operating PLCs. There is an explosion of custom OLE controls going on right now in the PLC industry to provide easy setup of and access to PLCs from Windows programs. My opinion is that while the big software companies will do well with VBX and OCX controls, it is quite easy for a small startup to write such controls for PLCs and do a much better job of it.

Lumber Sorters

Introduction to Lumber Sorter Control Systems

Lumber sorters are large machines that sort the lumber cut in modern sawmills. I have been designing and writing control programs for these machines since about 1978. The first systems I worked with used a Computer Automation LSI-2 mini-computer with 64K bytes of magnetic core memory. …

Sorters Before Computers

Control Systems Before Computers

Controls before computers consisted of relays, switches and wires. When memory was needed, latching relays were generally used. Sorters must keep track of quite a bit of information for every board in the system. Magnetic drum systems were used to record the board information. …

First Computerized Sorters

The First Computer Controls

The first computer to control a lumber sorter was designed around 1975 by Progress Electronics, Co. of Portland, Oregon. It used a National Semiconductor IMP-16 micro-processor. The program was usually written directly in machine code and burned into ROMs. This was before the IBM-PC was available to run cross-compilers. …

Modern Sorter Controls

Modern Lumber Sorter Controls

Mt. Hood Software has designed a wide range of lumber sorter control systems. We use personal computers to control the sorter for cost sensitive applications. When the customer doesn't mind paying the price, we also supply PLC controls based on the Allen-Bradley 5 Series. …

Client Server White Paper

Sorter Network Interface Features

This paper discusses the various features available that allow you to connect the sorter control program to a mill wide network.

Why We Use DOS for Sorter Controls

We are currently using DOS as an operating system for our PC based sorter control programs. The reason for this is simple: Windows doesn't do real-time. …

32-bit Sorter Controls

Our New 32-bit Sorter Interface

We're Going RADical

No, we aren't shipping bombs with our programs. RAD stands for Rapid Application Development, a new way of writing Windows based programs. What once required writing hundreds of lines of program code now can be done in a few seconds by using the mouse. …


© James S. Gibbons 1987-2015