Seth Morabito ∴ A Weblog
Saturday, June 29 2019 at 3:15 PM PDT
After what feels like an inexcusably long time, the WE32106 Math Acceleration Unit (MAU) has been merged into the main SIMH tree, and the 3B2/400 simulator has support for it.
This project was one of the most tedious, boring, and yet educational projects I have worked on. Join me now as I recount the harrowing details.
Sunday, February 3 2019 at 9:09 AM PST
Ever since I got the AT&T 3B2/400 emulator working well enough to share publicly, I've received a lot of requests for it to support the (in-)famous 3B2 Ethernet Network Interface option card, commonly referred to by its EDT name, "NI".
The NI card is based on the Common I/O Hardware (CIO) intelligent peripheral architecture, the same architecture used for the PORTS card and the CTC tape controller card. I've implemented support for both PORTS and CTC cards, so how hard could the NI card be, right?
But, I think at long last, I've finally cracked it.
Monday, January 7 2019 at 5:05 PM PST
I'm writing this entry more for myself than for anyone else, because I always forget what a pain this is to get set up properly. The backstory starts with a kernel fault.
# TRAP proc = 4022EBEC psw = 1472B pc = 400287BD
PANIC: KERNEL MMU FAULT (F_ACCESS)
SYSTEM FAILURE: CONSULT YOUR SYSTEM ADMINISTRATION UTILITIES GUIDE
This is what happens if you install software out of order on the 3B2/310 and 3B2/400. I've encountered it a few times, both using my 3B2/400 emulator and using a real 3B2/310. Here's what's happening.
There are two windowing packages for the 3B2:
- "AT&T Windowing Utilities", a single floppy disk containing a driver for the DMD terminal (the XT driver), a minimal support environment, and not much else. It installs under
- "DMD Core Utilities", a three floppy disk set containing a different version of the XT driver, a full support environment, a bunch of demos, and more. It installs under
You might think the more complete package should be installed, ignoring the other one. But, surprisingly, you'd be wrong. If you install only the "DMD Core Utilities" package, or if you install the "AT&T Windowing Utilities" first followed by "DMD Core Utilities", you'll be left with a driver that simply doesn't work. I haven't dived in and really debugged why it's broken, but clearly it is.
So, instead, always install the "DMD Core Utilities" first to get the
/usr/dmd environment and all the nice demos, but then install
the "AT&T Windowing Utilities" to get a working driver.
You can all thank me later.
Monday, January 7 2019 at 11:29 AM PST
The short version is that my use of ElectronJS to build a fully cross-platform front end did not go quite as smoothly as I'd hoped. It went fairly well on macOS and Windows 10, albeit with performance that wasn't quite what I wanted. But when it came time to build it on Linux, I hit a brick wall: No matter what I tried, I could not get the native Rust NodeJS module and the native Electron NodeJS modules to rebuild cleanly.
See, there's a problem when you use native NodeJS modules in ElectronJS: You have to make sure that both ElectronJS and the native modules have been built with exactly the same version of NodeJS. ElectronJS has some tools to help with this, but they were extremely flakey for me. There is a morass of complexity involving NPM, native compilers, and NodeJS. I spent a few days on it, but eventually I gave up in frustration.
I was upset about not having a functional emulator on Linux, so I dove in and decided to learn some GTK+. If I couldn't have an Electron app, I'd damn well build native front-ends myself, thank you very much.
The result turned out very well. It's a fully native GTK+ app that takes a few arguments on the command line, has good performance, and uses the common dmd_core back end library. It's written in C, and uses a pre-built static library built from
After this success, I wanted to see if I could do the same thing on macOS. I could have just used the same GTK+ code on Mac, but the performance of the Cocoa GTK+ bindings is somewhat poor compared to native frameworks like Metal and CoreAnimation that offload work to the GPU, so I built a native Cocoa app written in Swift instead. It went so well, in fact, that I've distributed it in the Mac App Store.
[A brief aside rant: Apple, you need help. Your developer documentation is terrible. It took far longer for me to build the Cocoa app than it should have. Every step of the way, I had to fight documentation that is embarrassingly sparse at best. Very few documents mention how to use an API or framework. Those that do are in the "Documentation Archive" and haven't been updated in years. Some of those come with a deprecation warning. Most of the currently maintained docs offer nothing more than a function signature with no further discussion. I relied almost entirely on third-party tutorials and examples to help me understand how to use the Apple frameworks. Shameful. Please, do better!]
My next task is going to be a native Windows front end. I'm sure it'll be an adventure, just like the Mac version, but at least I'm learning some useful skills here.
Thursday, December 13 2018 at 1:39 PM PST
Back in the early 1980s, before GUIs were commonplace, Rob Pike and Bart Locanthi Jr. at AT&T Bell Labs created a bitmapped windowing system for use with UNIX. Originally, they called the system "jerq", a play on the name of the Three Rivers PERQ computer. When the technology started to get shown around outside of the lab, however, they changed its name to "Blit", from the name of the "bit blit" operation.
Here's a small demo of the system in action, circa 1982.
Monday, September 10 2018 at 9:50 AM PDT
I recently started on a quest to finish up a loose end on the AT&T 3B2 emulator by finally implementing a simulation of the WE32106 Math Acceleration Unit (MAU). The MAU is an IC that accelerates floating point operations, and it could be fitted onto the 3B2 motherboard as an optional part. I've seen quite a few 3B2 systems that didn't have one; if it's not present, the 3B2 uses software floating point emulation, and gets on just fine without it. This means the 3B2 emulator is totally usable without the MAU. But still, wouldn't it be nice to simulate it?
Lucky for me, one of the critical pieces of documentation I've managed to find over the last few years is the WE32106 Math Acceleration Unit Information Manual. This little book describes the implementation details of the WE32106; without it, it would be hopeless to even try to simulate the chip. (Speaking of which: the book hasn't been scanned yet, and scanning it is a high priority. I have the only copy I've ever seen.)
But even with the book, this is no simple task. So let's dive in a little deeper and look under the hood of the WE32106.
Monday, August 27 2018 at 11:00 AM PDT
This weekend's project was to image all of my AT&T 3B2 hard disks to preserve the bits on them. To do this, I used David Gesswein's MFM Reader and Emulator board, which allows you to either emulate an MFM hard disk, or read MFM data off of a real hard disk.
A nice side effect is that the images you pull off of a hard disk are compatible with my 3B2 Emulator, so once they've been read off, you can just boot the image as if it were running on a real 3B2. Nice!
I've been pretty impressed with the MFM Reader and Emulator board. It's a nice piece of kit to have around, and it's fully open source. If you've got any MFM systems or hard drives lying around, give it a shot. It's sold either in kit form or fully assembled at a very reasonable price.
Friday, August 10 2018 at 5:40 PM PDT
After much hacking on elisp, I'm happy to announce two changes: First, I've finally implemented pagination on my blog, so the entire nine years of archives isn't rendered in one huge page. And second, there's now an RSS feed available at https://loomcom.com/blog/index.xml. Woohoo!
Getting both of these features implemented was a bit of a
challenge. The built-in Org-Mode publishing feature provided by
ox-publish.el is very much geared toward publishing a single
page. Really, it's supposed to be for publishing a site map of your
website layout. Using it to publish a blog is actually kind of a
kludge and an abuse of the feature, to be frank. But here we are,
that's how most Org-Mode bloggers publish their blogs.
If you want to check out the Emacs setup for this site and blog, you
can look here on GitHub. It's a fairly complicated bit of hacking
designed to work around the single-page limitation. I've also altered
the default RSS backend, supplied by
ox-rss.el, to filter out some
unwanted noise from my RSS feed.
The only thing I can't yet figure out is how to force Org Publish to always generate absolute URLs. I actually don't think it's possible, yet.
Thursday, July 12 2018 at 6:30 AM PDT
You may notice something quite different about this blog if you've been here before. The look and feel is different, yes, but it's more than just skin deep.
For the last nine years, I've kept my blog in WordPress, a very capable blogging platform. But, starting today, I've switched my entire website to a technology from the 1970s: Emacs, the venerable text editor.
Am I crazy? No, I just think it suits my workflow better.
Tuesday, May 22 2018 at 2:59 PM PDT
On November 29, 2014, I posted the following message to the Classic Computer Mailing List.
Folks,

For some reason I got it in my head that writing an AT&T 3B2 emulator might be a good idea. That idea has pretty much been derailed by lack of documentation. I have been unable to find any detailed technical description of any 3B2 systems.

Visual inspection of a 3B2 300 main board reveals the following major components:

- WE32100 CPU
- WE32101 MMU
- 4 x D2764A EPROM
- TMS2797NL floppy disk controller
- PD7261A hard disk controller
- SCN2681A dual UART
- AM9517A Multimode DMA Controller

How these are addressed is anybody's guess. To even dream of doing an emulator, I at least need to know the system memory map -- what physical addresses these devices map to. Without that it's pretty pointless to get started.

If anyone has access to this kind of information, please drop me a line. Otherwise, I'll just put this one on the far-back burner!

-Seth
When I sent that message, I had no idea that it would lead to almost four years of effort, and perhaps if I had known I would have given up. But thankfully I didn't, and today I'm happy to say that the effort was, at long last, successful. The 3B2/400 emulator works well enough that I have released it to the world and rolled it back into the parent SIMH source tree.
Because this project required so much reverse engineering, and because documentation about the 3B2 is still so scarce and hard to come by, I wanted to take the time to document how the emulator came about.
Friday, March 24 2017 at 6:19 PM PDT
I had an absolute breakthrough tonight regarding how the WE32101 MMU handles caching. In fact, when I implemented the change, my simulator went from having 108 page miss errors during kernel boot, to 3. The cache is getting hit on almost every virtual address translation, and because of what I learned, the code is more efficient, too.
The key to all this was finally looking up 2-way set associative caching (see here, here, and here), which the WE32101 manual identifies as the caching strategy on the chip. Once I read about it, I was enlightened.
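To illustrate the idea, here's roughly what a 2-way set-associative lookup looks like in C. The set count, tag split, and field names here are invented for the example; the real WE32101 parameters live in the chip manual:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative 2-way set-associative cache. Each set holds two
 * "ways"; an address maps to exactly one set, and can live in
 * either way of that set. */
#define NSETS 32

typedef struct {
    uint32_t tag[2];    /* high address bits identifying each entry */
    uint32_t data[2];   /* the cached descriptor for each way */
    bool     valid[2];
    int      mru;       /* which way was most recently used */
} cache_set;

static cache_set cache[NSETS];

/* The set index comes from low address bits (above the 2KB page
 * offset); the remaining high bits form the tag. */
static bool cache_lookup(uint32_t addr, uint32_t *out) {
    cache_set *s = &cache[(addr >> 11) & (NSETS - 1)];
    uint32_t tag = addr >> 16;
    for (int way = 0; way < 2; way++) {
        if (s->valid[way] && s->tag[way] == tag) {
            s->mru = way;       /* remember the hit for replacement */
            *out = s->data[way];
            return true;
        }
    }
    return false;
}

/* On a miss, fill the way that was NOT most recently used. */
static void cache_fill(uint32_t addr, uint32_t data) {
    cache_set *s = &cache[(addr >> 11) & (NSETS - 1)];
    int way = s->mru ^ 1;
    s->tag[way] = addr >> 16;
    s->data[way] = data;
    s->valid[way] = true;
    s->mru = way;
}
```

The nice property is that a lookup only ever probes two entries, and replacement needs just one bit per set to decide which way to evict.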
Friday, March 24 2017 at 8:26 AM PDT
I think I made a grievous error when I originally laid out how the 3B2 system timers would work in SIMH, and last night I started down the path of correcting that.
The 3B2/400 has multiple sources of clocks and interrupts: a programmable interval timer with three outputs driven at 100 kHz, a timer on the UART that runs at 235 kHz, and a Time-of-Day clock. They're all driven at different speeds, and they're all important to system functionality.
SIMH offers a timing service that allows the developer to tie any of these clocks to the real wall clock. This is essential for time-of-day clocks or anything else that wants to keep track of time. I used this functionality to drive the UART and programmable interval timers at their correct clock speeds.
But that's completely wrong. Of course this is obvious in retrospect, but it seemed like a good idea at the time. The problem is that the main CPU clock is free to run as fast as it can on the SIMH host. On some hosts it will run very fast, on some hosts it will run quite a bit slower. You can't possibly know how fast the simulated CPU is stepping.
When your timers are tied to the wall clock but your CPU is running as fast as it can, there are going to be all kinds of horrible timing issues. I had lots of unpredictable and non-reproducible behavior.
Last night, I undid all of that. The timers are now counting down in CPU machine cycles. I used the simple power of arithmetic to figure out how many CPU machine cycles each step of each timer would take, and just did that instead.
Now, it seems like everything is a lot more stable, and much less unpredictable.