I think I made a grievous error when I originally laid out how the 3B2 system timers would work in SIMH, and last night I started down the path of correcting that.
The 3B2/400 has multiple sources of clocks and interrupts: there's a programmable interval timer with three outputs driven at 100 kHz, a timer on the UART that runs at 235 kHz, and a Time-of-Day clock. They're all driven at different speeds, and they're all important to system functionality.
SIMH offers a timing service that allows the developer to tie any of these clocks to the real wall clock. This is essential for time-of-day clocks or anything else that wants to keep track of time. I used this functionality to drive the UART and programmable interval timers at their correct clock speeds.
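For the curious, the standard SIMH idiom looks something like this. It's just a minimal sketch — the unit, routines, and constants are hypothetical names, not the actual 3B2 simulator source — but sim_rtcn_init, sim_rtcn_calb, and sim_activate are the real SIMH calibration calls:

```c
#include "sim_defs.h"

#define TMR_TOD  0       /* calibrated-timer slot for this clock */
#define TOD_TPS  100     /* desired ticks per wall-clock second */

t_stat tod_svc (UNIT *uptr);

UNIT tod_unit = { UDATA (&tod_svc, 0, 0) };

/* Tick service routine: SIMH measures how many instructions actually
   executed during the previous tick, and hands back a new delay that
   keeps us at TOD_TPS ticks per real second. */
t_stat tod_svc (UNIT *uptr)
{
    int32 t = sim_rtcn_calb (TOD_TPS, TMR_TOD);  /* recalibrate against the wall clock */
    sim_activate (uptr, t);                      /* schedule the next tick */
    /* ... raise the timer interrupt here ... */
    return SCPE_OK;
}

t_stat tod_reset (DEVICE *dptr)
{
    int32 t = sim_rtcn_init (5000, TMR_TOD);     /* initial guess at the delay */
    sim_activate (&tod_unit, t);
    return SCPE_OK;
}
```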
But that’s completely wrong. Of course this is obvious in retrospect, but it seemed like a good idea at the time. The problem is that the main CPU clock is free to run as fast as it can on the SIMH host. On some hosts it will run very fast; on others it will run quite a bit slower. You can’t possibly know how fast the simulated CPU is stepping.
When your timers are tied to the wall clock but your CPU is running as fast as it can, there are going to be all kinds of horrible timing issues. I had lots of unpredictable and non-reproducible behavior.
Last night, I undid all of that. The timers are now counting down in CPU machine cycles. I used the simple power of arithmetic to figure out how many CPU machine cycles each step of each timer would take, and just did that instead.
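The arithmetic works out roughly like this, assuming a 10 MHz simulated CPU clock and treating one SIMH instruction as one machine cycle for the sake of the sketch (again, the names here are hypothetical, not the actual 3B2 source): the 100 kHz interval timer steps once every 10,000,000 / 100,000 = 100 machine cycles, and the 235 kHz UART timer once every ~42.

```c
#define CPU_HZ       10000000                 /* assumed 10 MHz simulated CPU clock */
#define PIT_HZ       100000                   /* programmable interval timer: 100 kHz */
#define UART_HZ      235000                   /* UART timer: 235 kHz */

#define PIT_CYCLES   (CPU_HZ / PIT_HZ)        /* = 100 cycles per step */
#define UART_CYCLES  (CPU_HZ / UART_HZ)       /* ~= 42 cycles per step */

/* Each tick does its work and reschedules itself a fixed number of
   simulated machine cycles out. sim_activate delays are counted in
   executed instructions, not wall-clock time, so the timers keep the
   same ratio to the CPU no matter how fast the host runs. */
t_stat pit_svc (UNIT *uptr)
{
    /* ... step the counters, raise an interrupt if one expired ... */
    sim_activate (uptr, PIT_CYCLES);
    return SCPE_OK;
}
```

The nice property is that the ratio between the CPU and the timers is now fixed, which is what the software under simulation actually cares about.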
Now everything seems a lot more stable and a lot more predictable.