Monday, November 28, 2022 at 11:48 AM PST
Almost every time that I give out my email address and I have to spell loomcom.com, I get a funny look or a little chuckle, like "Wait, seriously? Two coms?" And yeah, to modern ears it sounds pretty funny. Saying it out loud is clunky. Explaining that it's short for "Loom Communications" helps, but it just adds further clunkiness to the conversation.
So, why the heck is it loomcom.com, anyway? Basically, historical reasons.
Way back a long time ago when I first wanted my own domain name for Loom Communications, there was a popular and well known Internet host called Netcom. Their domain name was netcom.com, and I thought they were just the coolest thing ever. Naturally I wanted to be just like them, so I settled on loomcom.com. And now I'm stuck with it.
As an aside, one of my bigger regrets in life is that I was approached by a friend in 1997 who happened to own loom.com. He offered to give it to me, but I said no, it was too much bother, I was already well established at loomcom.com. That was the wrong answer!
Sunday, October 30, 2022 at 11:10 AM PDT
I just deleted my Twitter account, and gosh it felt good. 2009–2022 was a good run.
If you need me on social media, you can find me very occasionally on Mastodon!
Monday, October 24, 2022 at 9:29 AM PDT
I was prompted by a Tweet this morning to dig out my copy of Jacques Vallee's book "The Network Revolution" (And/Or Press, 1982). This passage has haunted me since I first read it:
One lesson which I am not likely to forget took place at the RCA research labs in Princeton, where most of the discoveries in color television had occurred. I had been invited to give a talk on information retrieval. I ventured into English-language interrogation of data bases, and other applications of artificial intelligence used to converse with computers. A man with intense eyes and bright white hair took me aside. We sat on the benches in the lab next to the lecture hall.
He said, "There is a fundamental fallacy in artificial intelligence, and you're falling into it."
"In what respect?" I asked, with the feeling that this discussion was not going to conform to the usual polite exchange of generalities heard at most professional meetings.
"Artificial intelligence is trying to emulate nature, it wants to approximate what Man does."
"What other inspiration is there?"
"Imitation of nature is bad engineering. For centuries inventors tried to fly by emulating birds, and they have killed themselves uselessly. If you want to make something that flies, flapping your wings is not the way to do it. You bolt a 400-horsepower engine to a barn door, that's how you fly. You can look at birds forever and never discover this secret. You see, Mother Nature has never developed the Boeing 747. Why not? Because Nature didn't need anything that would fly at 700 mph at 40,000 feet: how would such an animal feed itself?"
"What does that have to do with artificial intelligence?"
"Simply that it tried to approximate Man. If you take Man as a model and test of artificial intelligence, you're making the same mistake as the old inventors flapping their wings. You don't realize that Mother Nature has never needed an intelligent animal and accordingly, has never bothered to develop one.
"So when an intelligent entity is finally built, it will have evolved on principles different from those of Man's mind, and its level of intelligence will certainly not be measured by the fact that it can beat some chess champion or appear to carry on a conversation in English."
With his piercing eyes on me, I had a brief vision of what an intelligent machine would be. If Nature has never needed an intelligent animal and hasn't evolved one, I kept wondering, then who are we? In our feeble attempts to handle the information we call life, and increase its quality, can we trust the creations of our dreams? Are we perhaps nothing more than the process through which another form of intelligence is itself evolving? And in the end, what measure of control do we really have on the technologies we create?
Friday, October 7, 2022 at 1:04 PM PDT
Over the last couple of days I've been busy ripping out a lot of the Emacs lisp code that I originally wrote to publish my weblog with Emacs and cleaning up the project quite a bit. The most obvious changes are that there's finally an RSS feed again, and that my weblog is all on one page now instead of being spread out across multiple pages with "Next Page" and "Previous Page" links. You might reasonably be asking why I would do such a thing, and it all comes down to complexity and brittleness. I don't like complexity or brittleness!
The complexity came from the hackish code I wrote to split my weblog into multiple pages. A lot of the Emacs publishing pipeline is highly customizable and offers you the option to write your own functions to override default behavior, but there's no way to tell it to use a custom function to change how the blog index gets written to a file (or files). So, until this most recent change, I was actually un-defining one of the core internal publishing functions and replacing it with my own definition.
Of course, doing it that way touched a lot of the internal and private API, and that's really a no-no. Yes, it worked, but it was difficult to maintain, and prone to breakage.
So I gave up on that. Instead I'm going with the flow now and doing things more in line with how Emacs publishing does them by default, and that means the whole blog index on one page. I may eventually take it upon myself to offer org-mode a patch to support custom multi-page publishing, but for now, this is the simplest and cleanest path forward.
Monday, August 1, 2022 at 11:25 AM PDT
I struggled with whether or not I should make this post because I worry that the topic of burnout is taboo. Admitting that I was burned out and that I had to take time to recover from it feels like admitting a kind of failure or a weakness, but the longer I dwelled on it the more I knew that I had to say something, if for no other reason than because I want to help anyone else who's suffering from burnout and wondering what to do about it. You're not alone. During the last few months I have discovered that so many friends and colleagues have gone through something similar, and I want to normalize burnout awareness, especially because my message is positive.
Anyway, hi, I was really burned out. I want to talk about it.
Saturday, November 21, 2020 at 8:47 AM PST
Was my last blog entry really in March? Apparently, yes, it was.
Forgive me. 2020 has been a strange year. I'm sure you'll agree.
I have been lucky enough to retain a job during the global pandemic, and moreover, the job has been busy as hell. In truth, I haven't been able to work on personal projects nearly as often as I wish I had been able to. What's worse, while some people found themselves with lots of extra time after losing a harsh commute, I was already used to working from home. I've been full-time remote since the end of 2014, so working from home is nothing new to me. I already didn't have a commute! I did not suddenly get any new hours in my day.
I do still have some software projects on the back burner right now. Among them:
The 3B2/1000 emulation, which is perhaps 80% complete. A Tektronix 4404 emulator, which is perhaps 20% complete. Various personal information management projects that are scattered here and there. Forever tinkering with Emacs configurations.
And, unrelated to computers, I recently bought a violin with the intention of learning to play old-time fiddle. It's fun, but practicing is quite a chore, especially when the sounds you're making are so very terrible.
Monday, March 30, 2020 at 9:51 AM PDT
It's now been more than three months since my last post, and I think I owe everyone an update on what I've been doing with my life.
I've been working on the 3B2/1000 simulator for a while now, and at the outset, I decided to try to build a single simulator executable that could be configured as either a 3B2/400 or a 3B2/1000. In hindsight, this was a mistake, but sometimes you can't know that until you get deep down into the guts and have to do some unwinding.
So, I've thrown out what I had, and started anew with the right strategy. Instead of a single 3b2 binary, there will be two binaries, 3b2-400 and 3b2-1000 (3B2-400.EXE and 3B2-1000.EXE, respectively, on Windows). This follows the VAX simulator model, and makes a lot of sense given the levels of similarity and difference between the 3B2/400 and 3B2/1000.
Friday, December 27, 2019 at 10:51 AM PST
In the long, dark months since September, I have not really done any work at all on my 3B2 emulator. Since it's Christmas break for me right now, though, I've taken up the mantle once again and started hacking away on it with the aim of implementing Revision 3 system board support that will allow the simulator to run as a 3B2/1000 system.
There's actually quite a bit of work involved in getting this done, and there are a couple of different ways I could go about it. As a bit of background, other simulators in the SIMH platform take a few different approaches, which I'll summarize here.
Monday, September 30, 2019 at 12:56 PM PDT
It's been a minute, hasn't it! I'd like to talk about the state of the 3B2 Simulator and get everyone up to speed with where we are, and where we're going.
Monday, August 5, 2019 at 8:11 AM PDT
Do you like obscure artificial intelligence workstations from the 1980s? Of course you do! So do I. That's why, when I found out about the Tektronix 4400 series of workstations, I was immediately smitten.
The 4400 series (including the 4404, 4405, and 4406) were Motorola 68010 and 68020 based workstations built to run Smalltalk-80 and Lisp, and were marketed toward the artificial intelligence industry of the first half of the 1980s. Those were the heady glory days of companies like Symbolics and Lisp Machines, Inc., and stiff competition from Texas Instruments' "Explorer" workstations, so they were pushing into pretty frothy territory.
For about a year now, I've wanted to emulate one, but I just didn't have all the documentation necessary to make it a reality. This past weekend, though, my friend Josh managed to scrounge up a copy of the Tektronix 4404 Component Level Service Manual, which Al Kossow of the Computer History Museum and Bitsavers scanned and archived. Published in 1987, this juicy little manual details every aspect and detail of the 4404 workstation from the top down and the bottom up. It's absolutely glorious. Combined with the Tektronix 4404 firmware already preserved on Bitsavers, I have everything I need to build an emulator. Now, I just need to actually do it.
Yes, in other words, it's just a small matter of programming.
Saturday, June 29, 2019 at 3:15 PM PDT
After what feels like an inexcusably long time, the WE32106 Math Acceleration Unit (MAU) has been merged into the main SIMH tree, and the 3B2/400 simulator has support for it.
This project was one of the most tedious, boring, and yet educational projects I have worked on. Join me now as I recount the harrowing details.
Sunday, February 3, 2019 at 9:09 AM PST
Ever since I got the AT&T 3B2/400 emulator working well enough to share publicly, I've received a lot of requests for it to support the (in-)famous 3B2 Ethernet Network Interface option card, commonly referred to by its EDT name, "NI".
The NI card is based on the Common I/O Hardware (CIO) intelligent peripheral architecture, the same architecture used for the PORTS card and the CTC tape controller card. I've implemented support for both PORTS and CTC cards, so how hard could the NI card be, right?
But, I think at long last, I've finally cracked it.
Monday, January 7, 2019 at 5:05 PM PST
I'm writing this entry more for myself than for anyone else, because I always forget what a pain this is to get set up properly. The backstory starts with a kernel fault.
# TRAP proc = 4022EBEC psw = 1472B pc = 400287BD
PANIC: KERNEL MMU FAULT (F_ACCESS)
SYSTEM FAILURE: CONSULT YOUR SYSTEM ADMINISTRATION UTILITIES GUIDE
This is what happens if you install software out of order on the 3B2/310 and 3B2/400. I've encountered it a few times, both using my 3B2/400 emulator and using a real 3B2/310. Here's what's happening.
There are two windowing packages for the 3B2:
"AT&T Windowing Utilities", a single floppy disk containing a driver for the DMD terminal (the XT driver), a minimal support environment, and not much else.
"DMD Core Utilities", a three floppy disk set containing a different version of the XT driver, a full support environment, a bunch of demos, and more. It installs under /usr/dmd.
You might think the more complete package should be installed, ignoring the other one. But, surprisingly, you'd be wrong. If you install only the "DMD Core Utilities" package, or if you install the "AT&T Windowing Utilities" first followed by "DMD Core Utilities", you'll be left with a driver that simply doesn't work. I haven't dived in and really debugged why it's broken, but clearly it is.
So, instead, always install the "DMD Core Utilities" first to get the
/usr/dmd environment and all the nice demos, but then install
the "AT&T Windowing Utilities" to get a working driver.
You can all thank me later.
Monday, January 7, 2019 at 11:29 AM PST
The short version is that my use of ElectronJS to build a fully cross-platform front end did not go quite as smoothly as I'd hoped. It went fairly well on macOS and Windows 10, albeit with performance that wasn't quite what I wanted. But when it came time to build it on Linux, I hit a brick wall: No matter what I tried, I could not get the native Rust NodeJS module and the native Electron NodeJS modules to rebuild cleanly.
See, there's a problem when you use native NodeJS modules in ElectronJS: You have to make sure that both ElectronJS and the native modules have been built with exactly the same version of NodeJS. ElectronJS has some tools to help with this, but they were extremely flakey for me. There is a morass of complexity involving NPM, native compilers, and NodeJS. I spent a few days on it, but eventually I gave up in frustration.
I was upset about not having a functional emulator on Linux, so I dove in and decided to learn some GTK+. If I couldn't have an Electron app, I'd damn well build native front-ends myself, thank you very much.
The result turned out very well. It's a fully native GTK+ app that takes a few arguments on the command line and has good performance. It's written in C, and builds dmd_core from a git submodule.
Thursday, December 13, 2018 at 1:39 PM PST
Back in the early 1980s, before GUIs were commonplace, Rob Pike and Bart Locanthi Jr. at AT&T Bell Labs created a bitmapped windowing system for use with UNIX. Originally, they called the system "jerq", a play on the name of the Three Rivers PERQ computer. When the technology started to get shown around outside of the lab, however, they changed its name to "Blit", from the name of the "bit blit" operation.
Here's a small demo of the system in action, circa 1982.
The windowing system was a combination of hardware and software. The hardware was a terminal built around a Motorola 68000, with an 800 by 1024 pixel 1-bit bitmapped display in portrait mode, a keyboard, and a mouse. The software was hosted on a UNIX computer and would be uploaded to the terminal's memory on demand, where it executed on the terminal's CPU itself.
Later still, AT&T and the Teletype Corporation teamed up to commercialize the system. They reimplemented the hardware and based it on a Western Electric WE32100 CPU. They named it the DMD 5620 (DMD for "Dot-Mapped Display").
Because I'm such a fan of emulation, and because I worked so long and hard on the AT&T 3B2/400 Emulator, and because 3B2s often ran DMD 5620 terminals in commercial environments, I knew I had to remedy the situation.
Monday, September 10, 2018 at 9:50 AM PDT
I recently started on a quest to finish up a loose end on the AT&T 3B2 emulator by finally implementing a simulation of the WE32106 Math Acceleration Unit (MAU). The MAU is an IC that accelerates floating point operations, and it could be fitted onto the 3B2 motherboard as an optional part. I've seen quite a few 3B2 systems that didn't have one; if it's not present, the 3B2 uses software floating point emulation, and gets on just fine without it. This means the 3B2 emulator is totally usable without the MAU. But still, wouldn't it be nice to simulate it?
Lucky for me, one of the critical pieces of documentation I've managed to find over the last few years is the WE32106 Math Acceleration Unit Information Manual. This little book describes the implementation details of the WE32106, and without it it would be hopeless to even try to simulate it. (Speaking of which: The book hasn't been scanned yet, which is a high priority. I have the only copy I've ever seen.)
But even with the book, this is no simple task. So let's dive in a little deeper and look under the hood of the WE32106.
Monday, August 27, 2018 at 11:00 AM PDT
This weekend's project was to image all of my AT&T 3B2 hard disks to preserve the bits on them. To do this, I used David Gesswein's MFM Reader and Emulator board, which allows you to either emulate an MFM hard disk, or read MFM data off of a real hard disk.
A nice side effect is that the images you pull off of a hard disk are compatible with my 3B2 Emulator, so once they've been read off, you can just boot the image as if it were running on a real 3B2. Nice!
I've been pretty impressed with the MFM Reader and Emulator board. It's a nice piece of kit to have around, and it's fully open source. If you've got any MFM systems or hard drives lying around, give it a shot. It's sold either in kit form or fully assembled at a very reasonable price.
Friday, August 10, 2018 at 5:40 PM PDT
After much hacking on elisp, I'm happy to announce two changes: First, I've finally implemented pagination on my blog, so the entire nine years of archives isn't rendered in one huge page. And second, there's now an RSS feed available at https://loomcom.com/blog/index.xml. Woohoo!
Getting both of these features implemented was a bit of a
challenge. The built-in Org-Mode publishing feature provided by
ox-publish.el is very much geared toward publishing a single
page. Really, it's supposed to be for publishing a site map of your
website layout. Using it to publish a blog is actually kind of a
kludge and an abuse of the feature, to be frank. But here we are,
that's how most Org-Mode bloggers publish their blogs.
If you want to check out the Emacs setup for this site and blog, you
can look here on my Github.
It's a fairly complicated bit of hacking
designed to work around the single-page limitation. I've also altered
the default RSS backend, supplied by
ox-rss.el, to filter out some
unwanted noise from my RSS feed.
The only thing I can't yet figure out is how to force Org Publish to always generate absolute URLs. I actually don't think it's possible, yet.
Thursday, July 12, 2018 at 6:30 AM PDT
(EDIT: This blog post is a few years old now, and some of the code examples are no longer correct. If you want to see my current blogging setup, I recommend you look at my current homepage source code on Github.)
You may notice something quite different about this blog if you've been here before. The look and feel is different, yes, but it's more than just skin deep.
For the last nine years, I've kept my blog in WordPress, a very capable blogging platform. But, starting today, I've switched my entire website to a technology from the 1970s: Emacs, the venerable text editor.
Am I crazy? No, I just think it suits my workflow better.
Tuesday, May 22, 2018 at 2:59 PM PDT
On November 29, 2014, I posted the following message to the Classic Computer Mailing List.
Folks,

For some reason I got it in my head that writing an AT&T 3B2 emulator might be a good idea. That idea has pretty much been derailed by lack of documentation. I have been unable to find any detailed technical description of any 3B2 systems.

Visual inspection of a 3B2 300 main board reveals the following major components:

- WE32100 CPU
- WE32101 MMU
- 4 x D2764A EPROM
- TMS2797NL floppy disk controller
- PD7261A hard disk controller
- SCN2681A dual UART
- AM9517A Multimode DMA Controller

How these are addressed is anybody's guess. To even dream of doing an emulator, I at least need to know the system memory map -- what physical addresses these devices map to. Without that it's pretty pointless to get started.

If anyone has access to this kind of information, please drop me a line. Otherwise, I'll just put this one on the far-back burner!

-Seth
When I sent that message, I had no idea that it would lead to almost four years of effort, and perhaps if I had known I would have given up. But thankfully I didn't, and today I'm happy to say that the effort was, at long last, successful. The 3B2/400 emulator works well enough that I have released it to the world and rolled it back into the parent SIMH source tree.
Because this project required so much reverse engineering, and because documentation about the 3B2 is still so scarce and hard to come by, I wanted to take the time to document how the emulator came about.
Friday, March 24, 2017 at 6:19 PM PDT
I had an absolute breakthrough tonight regarding how the WE32101 MMU handles caching. In fact, when I implemented the change, my simulator went from having 108 page miss errors during kernel boot, to 3. The cache is getting hit on almost every virtual address translation, and because of what I learned, the code is more efficient, too.
The key to all this was finally looking up 2-way set associative caching (see here, here, and here), which the WE32101 manual identifies as the caching strategy on the chip. Once I read about it, I was enlightened.
Friday, March 24, 2017 at 8:26 AM PDT
I think I made a grievous error when I originally laid out how the 3B2 system timers would work in SIMH, and last night I started down the path of correcting that.
The 3B2/400 has multiple sources of clocks and interrupts: There's a programmable interval timer with three outputs driven at 100 kHz, a timer on the UART that runs at 235 kHz, and a Time-of-Day clock. They're all driven at different speeds, and they're all important to system functionality.
SIMH offers a timing service that allows the developer to tie any of these clocks to the real wall clock. This is essential for time-of-day clocks or anything else that wants to keep track of time. I used this functionality to drive the UART and programmable interval timers at their correct clock speeds.
But that's completely wrong. Of course this is obvious in retrospect, but it seemed like a good idea at the time. The problem is that the main CPU clock is free to run as fast as it can on the SIMH host. On some hosts it will run very fast, on some hosts it will run quite a bit slower. You can't possibly know how fast the simulated CPU is stepping.
When your timers are tied to the wall clock but your CPU is running as fast as it can, there are going to be all kinds of horrible timing issues. I had lots of unpredictable and non-reproducible behavior.
Last night, I undid all of that. The timers are now counting down in CPU machine cycles. I used the simple power of arithmetic to figure out how many CPU machine cycles each step of each timer would take, and just did that instead.
Now, it seems like everything is a lot more stable, and much less unpredictable.
Thursday, March 23, 2017 at 7:48 AM PDT
I spent last night probing my 3B2/310's hard disk controller with a logic analyzer so I can see exactly how it behaves, both with and without a hard disk attached to the system. It proved to be very tricky to get the logic analyzer probes attached because the motherboard is so incredibly dense. In fact, I couldn't get a probe attached to the chip select line no matter how hard I tried. There just wasn't any room to fit a probe between the chip and a nearby resistor array, so I resorted to using a little piece of wire to just touch against the pin. I could have used three hands for that operation.
Wednesday, March 22, 2017 at 8:34 AM PDT
My next mini-project in the 3B2/400 simulator will be emulating the hard disk. The 3B2/400 used a NEC µPD7261A hard disk controller (PDF datasheet here), which has proved to be harder to emulate correctly than I would have liked.
So far, my hard disk controller emulation has been limited to the most minimal functionality needed to get the emulator to pass self-checks at all. Other than that, it's just a skeleton. But I believe that it's actually hanging up the floppy boot process now when UNIX tries to discover what hard drives are attached, so it's time to get serious and fix it.
My progress isn't good. I am following the datasheet to the letter, trying to give the correct status bits at the correct time, but the 3B2 just gets confused. It never even tries to read data off the drive, it just gives up trying to read status bits. So, clearly I'm doing something wrong, but I don't know what it is.
Tonight I will strap a logic analyzer to the PD7261a in my real 3B2 and see exactly what it's doing. I'll report on my findings when I have them.
Tuesday, March 21, 2017 at 7:59 PM PDT
And just like that, it's solved. I figured out the mystery of the Equipped Device Table.
The answer was in some obscure piece of System V Release 3 source code. The 3B2 system board has a 16-bit register called the Control and Status Register (CSR). In the CSR is a bit named TIMEO that I never figured out.
It turns out that I just wasn't reading the disassembled ROM code closely enough. The exception handler checks this status bit whenever it catches a processor exception while filling the EDT. If the bit is set, it skips the device.
So what is TIMEO? It's the System Bus Timeout flag, according to the SVR3 source code.
The correct behavior, then, if nothing is listening at an I/O card's address is to set an External Memory Exception, plus set this bit in the CSR. Once I implemented that in my simulator, the EDT started working exactly the same as it does on my real 3B2/310. Success!
Tuesday, March 21, 2017 at 8:18 AM PDT
There is yet one more puzzling aspect of the 3B2 that I do not yet understand, and that is the equipped device table, or EDT. I've documented the nitty-gritty details on my main 3B2 reverse-engineering page, so I won't bore you with the details. But here's the short version.
The EDT is what tells the system what I/O boards are installed. On startup, the ROM code probes each I/O slot to see what kind of card is in it. Each card has a read-only register that returns the board's 16-bit Option Number, and the ROM fills in a special table in RAM with the Option Number for each card it discovers. It doesn't fill in any other information, just the Option Number. For example, the SCSI expansion card returns option number 0x100, and the EPORTS card returns option number 0x102. That's the only information the ROM gets from the card. Later, the user can run a program called filledt that looks up all kinds of other information about the card in a database held on disk.
So here's today's puzzler: How does the system decide that there's nothing in a slot?
Monday, March 20, 2017 at 10:37 AM PDT
The first and most important thing I learned is that indexing cache entries only by their tags does not work. There are collisions galore, and no way to recover from them. However, if I index SD cache entries by the full SSL, and PD cache entries by the full SSL+PSL, everything seems to work perfectly. This leaves several big questions unanswered, but they are probably unanswerable. After all, I have no way of looking inside the MMU to see how it actually indexes entries in its cache, I can only go on published information and make educated guesses.
Second, I learned that implementing MMU caching is required for the 3B2 simulator to work correctly. Until this weekend, I had not implemented any caching in the simulated MMU because I assumed that caching was only used for performance reasons and could be skipped. But this is not true. In fact, UNIX SVR3 changes page descriptors in memory without flushing the cache and relies on the MMU to use the old values until it requests a flush. Not having implemented caching was a source of several serious bugs in the simulator.
Third, I learned that the "Cacheable" bit in segment descriptors is inverted. When it's a 1, caching is disabled. When it's a 0, caching is enabled.
The 3B2/400 simulator now makes it all the way through booting the SVR3 kernel and starting /etc/init. There are some other bugs preventing init from spawning new processes, but I hope to have these ironed out soon.
Sunday, March 19, 2017 at 8:41 AM PDT
I had a Eureka! moment last night about MMU caching, but it all came tumbling down this morning.
My realization was that the Segment Descriptors are 8 bytes long, and that Page Descriptors are 4 bytes long. So, if we assume that the virtual address encodes the addresses of the SDs and PDs on word-aligned boundaries (and SDs and PDs are indeed word-aligned in memory), then you don't need the bottom three bits for SD addresses, nor do you need the bottom two bits for PD addresses. Voila!
Saturday, March 18, 2017 at 6:37 PM PDT
I'm in the middle of a very long, very drawn out project to try to emulate the AT&T 3B2/400 computer. I should probably have been sharing my progress more frequently than I have been, but it has for the most part been a painful and solitary endeavor.
Today, though, there is something in particular that is bothering me greatly, and I must yell into the void to get this frustration off my chest. And that is, how in the hell does the MMU cache work?
So first, a little background.
Sunday, December 18, 2016 at 1:51 PM PST
This is the story of a short time, a quarter century ago, when a little cluster of computers at Cornell University played a very important part in my life and in the lives of my friends.
It was the fall of 1992, and the Internet was growing up. It would still be another year or two before it became a household name, so for the time being it was our little playground, our special place that you couldn't get to unless you were at a big research company or a reasonably well endowed University. I lived at Mary Donlon Hall, one of two dorm buildings that had recently been wired with Ethernet and therefore offered its residents a mainline into the addictive world of the Internet.
Tuesday, August 4, 2015 at 1:34 PM PDT
It's been a long time since I wrote about my tragic tale of Internet woe. I owe you all an update, and I'm very happy to say that at long last, we have broadband. It's kind of a long story.
Back in February when all hell was breaking loose and we were becoming desperate for Internet connectivity, I reached out to the Kitsap Public Utility Commission (KPUD), who were listed as providing fiber optic Internet service on the Washington State Broadband map (a web site that is unfortunately no longer in service). KPUD was quick to point out that yes, they provide wholesale Internet service over their fiber to ISPs, but not to consumers. That said, they are in charge of running the infrastructure, so they were at least willing to come out and see how much it would cost to run their fiber to our property. Their best guess: $125,000. Well, obviously that was not going to work, so I didn't do any more follow-up with them.
Fast forward to the middle of March, when Consumerist wrote about my situation. The story kind of blew up and went everywhere, and for a few days I was inundated with press contacts and questions. It was surreal, especially because I was out of town on business when it all happened. But then just as I was getting ready to fly back home, I got a call from KPUD. They had read the article too, and were curious if they could come out and do a better estimate for me than the back-of-the-envelope work they'd done earlier. Of course, I said yes.
Monday, June 29, 2015 at 2:28 PM PDT
But first, the WATs.
Saturday, March 28, 2015 at 1:05 PM PDT
My story kind of hit social media hard this week. To say it caught me off guard is an understatement. I never in a million years expected the response to be so big. I'm just some guy, and I don't particularly enjoy all the attention suddenly being paid to my problem. But, that said, I'm grateful for all the support and tips I've received, and I'm investigating all connectivity options.
One of the frequent criticisms I've seen goes something like this: "Well, you moved to a rural area, what did you expect? Even though Comcast said it was serviced, you share some of the blame for even expecting cable out there. You should have known better!"
I disagree pretty strongly with this criticism. Let me tell you why.
Sunday, March 15, 2015 at 4:37 PM PDT
My internet fiasco aside, I think it's time I start getting back into some technical matters here on the blog. So welcome to the first in a series of posts about FPGA development!
Sunday, February 22, 2015 at 2:30 PM PST
UPDATE 1: See the FAQ at the bottom of this post.
UPDATE 2: Be sure to read my follow-up post. Spoiler Alert: It has a happy ending!
This is a very long post, but it needs to be long to properly document all the trouble we've gone through with Comcast. In short: We moved into our new home in January after verifying that Comcast was available. They said no problem, and we ordered their service. After moving in, and only after a month of confusion and miscommunication, we discovered the truth: There's no Comcast service on our street.
Monday, October 13, 2014 at 10:42 AM PDT
I was a freshman at Cornell University in Fall of 1992 when I logged into my first UNIX system.
I’d heard of UNIX before, of course—it was a popular subject in trade magazines of the period, and if you tinkered with computers you’d probably have heard of it—but I’d never actually used it. So one day I marched over to the campus IT department to register for a UNIX account. It took some wrangling, but very shortly I was drinking from the UNIX firehose.
Wednesday, July 30, 2014 at 8:18 PM PDT
Well. Here it is, the final entry for my Summer Retrochallenge project. I wanted to do more, but as so often happens, real life intervened and I didn't have nearly as much time to work on it as I'd wanted. C'est la vie!
But I'm still proud of what I accomplished. I have a working ROM monitor, and I'm happy to report that as a final hurrah, I got it fully integrated with Lee Davison's Enhanced 6502 BASIC.
Friday, July 25, 2014 at 6:52 PM PDT
As I write this, it's early on the evening of July 25th, and I'm staring next Thursday's deadline in the face. I haven't been able to work on my Retrochallenge entry for over a week, and it's in poor shape.
But am I going to give up? No way. I'm going to go down fighting.
My over-enthusiastic early plans called for me to finish up my 6502 ROM monitor so early that I'd have time to work on cleaning and sorting my RL02 packs. That, needless to say, is not going to happen. Instead, I'm going to concentrate on polishing up and documenting my monitor this weekend. Whatever I have ready to go next week will just have to be good enough. It won't be as fully-featured as I originally wanted, but at least it's something, and at least it works.
I have to pick my remaining features pretty carefully now. I want to enhance the Deposit and Examine commands to add syntax that will allow auto-incrementing the address, and then work on tying my monitor into EhBASIC, so I can run it on my home-brew 6502 after Retrochallenge is over.
It's a race to the finish, now. Expect an update from me on Sunday or Monday. Until then, I'm face down in the code!
Thursday, July 10, 2014 at 1:29 PM PDT
I'm fairly happy with my parsing and tokenizing code now. I wanted to give a little breakdown of how it works.
The overall goal here is to take a command from the user in the form:

C NNNN [(NN)NN [NN NN NN NN NN ... NN]]

where C is a command such as "E" for Examine, "D" for Deposit, etc., and store it in memory, tokenized and converted from ASCII to binary.

I wanted to give the user flexibility. For example, numbers do not need to be zero-padded on entry: you should be able to type E 1F and have the monitor know you mean E 001F, or type D 2FF 1A and have it know you mean D 02FF 1A.

I also wanted whitespace between tokens to be ignored, so typing "E 1FF " with stray spaces should work just the same as typing it without them.

And finally, I wanted to support multiple forms of commands with the same tokenizing code. The Examine command can take either one or two 16-bit addresses as arguments; for example, E 100 1FF should dump all memory contents from 0100 to 01FF. But the Deposit command takes one 16-bit address and between one and sixteen 8-bit values; for example, D 1234 1A 2B 3C deposits three bytes starting at address 1234.
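As a sanity check on the rules above, here's how the same tokenizing behavior might look in a few lines of Python. This is purely an illustration of the parsing rules, not the monitor's actual 6502 code:

```python
def tokenize(line):
    """Split a monitor command into a (command, [values]) pair.

    Mirrors the rules described above: the first token is a one-letter
    command, the remaining tokens are hex numbers that need not be
    zero-padded, and runs of whitespace are ignored.
    """
    parts = line.split()          # collapses any amount of whitespace
    if not parts:
        raise ValueError("empty command")
    cmd, args = parts[0].upper(), parts[1:]
    # int(x, 16) accepts "1F" and "001F" alike, so no padding is needed
    return cmd, [int(a, 16) for a in args]

# "E 1F" means the same as "E 001F", and extra spaces make no difference
print(tokenize("E   1f  "))   # → ('E', [31])
print(tokenize("D 2FF 1A"))   # → ('D', [767, 26])
```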
Sunday, July 6, 2014 at 9:53 PM PDT
[Today is kind of a big update, and not all the source code will be presented in-line here in the blog post. As always, you can look at the current source here on my GitHub.]
Good news, everyone! After quite a lot of hacking, I can examine memory with my monitor. It's a very primitive feature, right now: I can only examine one byte at a time, so I can't dump memory regions yet. But hey, it's a good start. Let's dive into the code.
Friday, July 4, 2014 at 2:26 PM PDT
I'm moving kind of fast here because I really want to get to the meaty parts of the code, but I want to cover how I'm getting input from the console into the system. As you might expect, it's all about reading from the ACIA instead of writing to it.
Friday, July 4, 2014 at 1:23 PM PDT
Last night I got string output working. But what if I want to make string output generic? I want to write a STOUT (STring OUT) subroutine that can take the address of any null-terminated string and print it to the console, but there's a problem: The 6502 is an 8-bit machine, so passing a 16-bit address as an argument takes some wrangling.
To get around this, I've designated two locations in the precious Zero Page (arbitrarily choosing $21) to store the low byte and high byte of the string's address.
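Here's the idea of the zero-page pointer in Python terms, modeling memory as a flat 64K bytearray. The $20/$21 pair and the names below are illustrative assumptions for the sketch, not the monitor's actual code:

```python
# A sketch of the zero-page pointer trick: memory is a flat 64K
# bytearray, and the "pointer" is two bytes in the zero page
# (low byte first, as on the 6502).
MEM = bytearray(0x10000)
PTR_LO, PTR_HI = 0x20, 0x21   # illustrative zero-page locations

def set_ptr(addr):
    MEM[PTR_LO] = addr & 0xFF         # low byte
    MEM[PTR_HI] = (addr >> 8) & 0xFF  # high byte

def stout():
    """Return the null-terminated string the zero-page pointer points
    at, the way an STOUT routine would walk it with indirect-indexed
    addressing."""
    addr = MEM[PTR_LO] | (MEM[PTR_HI] << 8)
    out = []
    while MEM[addr] != 0:
        out.append(chr(MEM[addr]))
        addr += 1
    return ''.join(out)

MEM[0x0300:0x0307] = b'Hello!\x00'
set_ptr(0x0300)
print(stout())   # → Hello!
```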
Thursday, July 3, 2014 at 5:20 PM PDT
Now that I have a skeleton 6502 assembly project set up and building, it's time to get going and writing some real code for the ROM monitor. But wait, there's just one more thing I need to get set up, and that's an emulator so I can test code easily. Without an emulator, I'd have to flash a new ROM image to an EPROM and put it into a real 6502 computer every time I wanted to run it, and debugging would be very, very hard. Luckily for me, I wrote a 6502 emulator called Symon a couple of years ago! You can download it from Github if you want to follow along.
Tuesday, July 1, 2014 at 6:48 PM PDT
Before we get our ROM monitor off the ground, we'll need to sort out a few things first. The most important decision will be what assembler to use. I've decided to go with the CC65 tool chain, because I'm already familiar with it and I don't have a lot of time to come up to speed with a new assembler. Now, a word of caution: CC65 is no longer being developed, so its fate is uncertain. There are other assemblers out there, and if I were doing this outside of Retrochallenge and had more time, I would probably look into them. Chief among these seem to be Ophis, a 6502 cross assembler written in Python, and XA, a venerable cross assembler with a long history.
Now that I've picked my assembler, it's time to set up the project. I'm going to get things rolling with a very simple skeleton directory. If you want to follow along in real time, the project is hosted here on my Github.
Monday, June 30, 2014 at 11:27 PM PDT
Here it is, Retrochallenge Summer 2014. It's time to get started. It's time to write a ROM monitor.
But just what is a ROM monitor? In simple terms, it's a program that gives the user direct control over the lowest level of a computer. Typical monitors will, at the very least, allow a user to read and write memory contents directly using a very simple command-line interface. Now, when I say "very simple", I mean primitive. ROM monitors usually don't have such luxuries as user help or warnings or anything of the sort. No. They allow you to shoot yourself squarely in the foot, using whatever gauge you happen to have on hand.
My goal is to build a full-featured but very simple ROM monitor for the 6502 that offers the following features:
- Built-in help
- Simple line editing (backspace, if nothing else)
- Read and write single memory addresses
- Read and write memory ranges
- Show contents of the Accumulator and X and Y registers
- Show contents of the Processor Status register
- Allow starting execution from an arbitrary address
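In spirit, the examine and deposit commands above boil down to something like this Python sketch (a toy model over a bytearray, not the real ROM code; the command syntax shown is my assumption about how such a monitor typically behaves):

```python
MEM = bytearray(0x10000)   # the 6502's 64K address space

def monitor(line):
    """Handle one monitor command: E addr [end] examines memory,
    D addr b1 [b2 ...] deposits bytes."""
    cmd, *args = line.split()
    vals = [int(a, 16) for a in args]
    if cmd.upper() == 'E':
        start = vals[0]
        end = vals[1] if len(vals) > 1 else start
        return ' '.join(f'{MEM[a]:02X}' for a in range(start, end + 1))
    if cmd.upper() == 'D':
        start, data = vals[0], vals[1:]
        MEM[start:start + len(data)] = bytes(data)
        return 'OK'
    return '?'   # real ROM monitors aren't even this helpful

print(monitor('D 1234 1A 2B 3C'))   # → OK
print(monitor('E 1234 1236'))       # → 1A 2B 3C
```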
A secondary goal is for the code to be very clear and well documented. It will likely be much larger and take up more ROM space than it needs to. A superior 6502 programmer may even look at it and wince. But that's OK. As long as it's readable and easy to understand, I will be happy. I'm not a ROM hacker; I don't need to fit my monitor into 128 bytes of code!
By necessity, of course, the monitor will be written in 6502 Assembly. That means my very first step will be picking and setting up a 6502 assembler and development environment. More on that tomorrow.
Friday, June 13, 2014 at 3:53 PM PDT
Well here we are again. It's June of 2014, and I have to come up with a Retrochallenge entry.
Last year I decided to tackle an original hardware design and convert a VT100 keyboard to USB. It was a lot of fun, and a lot harder than I thought it would be. This year, I just haven't been able to come up with a cohesive and innovative idea for something to do, so instead, I'm going to do two smaller projects.
Thursday, October 10, 2013 at 12:38 AM PDT
Things have been quiet around here recently. What have I been working on?
It's a Model 33 ASR Teletype, better known as just an ASR-33. This is the "Before" picture. With the help of some amazing Teletype experts on the GreenKeys mailing list, I've been cleaning it, oiling it, and replacing broken parts.
I'll have more to post in a few days.
Saturday, August 17, 2013 at 7:23 PM PDT
I had hoped that adding Composite Video to my little black and white Panasonic television would be a piece of cake, and in fact it looked like it would be a piece of cake. But it is not. It is not a piece of cake. It is not a piece of any kind of pastry.
After a lot of time spent playing around, poking, prodding, and probing signals here and there, I have successfully gotten composite video to appear on the screen, sort of. I did it by feeding an ordinary 1V peak-to-peak composite video signal into the video driver transistor's base (pin 3 of IC12) and grounding the video coax to the input of the horizontal sync separator (pin 1 of IC12), which is not at all how I expected it to work. But "work" is not really the right term, because it's obviously not really right: the video looks weird and very washed out, and no amount of futzing with the controls gets it looking acceptable.
The problem here is twofold: One, I don't really grok analog TV circuits yet, and two, this TV is a hybrid between discrete logic and ICs. I think the circuit would be a lot simpler for me to understand if it were fully discrete, and I think it would be a lot easier to add composite input if it were either fully discrete or more fully IC based. But since it's a weird in-between thing, some of the functions are separated into ICs in such a way that I don't really "get" it. So, I think this will just be a TV I'm willing to junk so I can learn about how to drive a CRT in general, and not something for any specific project ideas.
That said, if you're curious here's the schematics and the IC details.
Monday, August 12, 2013 at 2:28 PM PDT
Sweet, I just found this YouTube video about adding composite input to a television. It looks much simpler than I anticipated.
I also just got the schematics for my set, so I'm in business.
Saturday, August 10, 2013 at 4:28 PM PDT
While I was in Seattle last weekend I popped into a few thrift shops looking for vintage electronics goodies to pull apart. I found this little 5" television from 1984, for a whopping $2.50:
I know it looks like it's color, but it's actually black and white. I connected a Commodore 64 through a horribly kludged-together RF demodulator just to test it out, since we don't have analog television broadcasting any more.
What I'd like to do first is open it up and add a composite video input, so maybe I can drive it with a Raspberry Pi or something. It has no video input other than antenna right now. I've never really hacked video equipment before, so it'll be fun new territory for me. Don Lancaster's classic book "TV Typewriter Cookbook" (PDF, 13.5MB) has a treasure trove of information about how to interface with an analog television, so it's going to be my bible for TV experiments.
I do need to get my hands on a service manual for the TV. I found a place that sells them online (for $17, almost 7 times what I paid for the TV itself!), so maybe I'll just have to do that.
Monday, July 29, 2013 at 10:58 PM PDT
It's done. I'm happy with it. I managed to squeeze a few final features into the firmware, including persistent storage of the Keyclick / Bell config. I learned a lot, and best of all, I had a lot of fun doing it.
I'm already wondering what my next project should be. Something slightly less useless, maybe, but where's the fun in that?!
Friday, July 19, 2013 at 11:01 PM PDT
I have learned an incredibly valuable lesson, and that lesson is: I do not like soldering prototypes on perfboard. I was at it for about eight hours spread out over the last few days, and in perhaps the fifth hour I paused to reflect on how much time I was putting into cutting, stripping, routing, soldering, and checking connections. I thought about how, these days, it only costs $75 to get a batch of 4 or 5 PCBs made and shipped to your door, with just a week of lead time or less. And finally, I concluded that doing perfboard prototypes is for the birds. Next time, I'm just going to get PCBs made at a board house. Eight hours of my time is worth more than $75.
This time, I powered through my prototype anyway. It's not the prettiest thing in the world, but it works! This is the first (and will be the last) time that I've tried using copper tape as ground and power bus. I'm not convinced the benefits outweigh the complications.
I might get a batch of PCBs made for fun, anyway, but if I do I'm going to shrink the design and use a bare ATmega32U4 instead of a Teensy.
Tuesday, July 16, 2013 at 3:48 PM PDT
When we last spoke, I used the term "Success!" to describe what was going on. That was only partly true. I was not referring to completing the project, but rather to the success of getting raw key scan addresses out of the keyboard. It is a very long way between raw key scan addresses and a usable keyboard. So perhaps I should have titled that post "I can actually write some keyboard firmware now!" Yesterday between obligations I powered through writing rather a lot of firmware, and today I am happy to report that I actually have a better success than the last success. Even more successier than I expected in so short a time!
Friday, July 12, 2013 at 10:13 PM PDT
At last, I'm getting valid keyboard input.
What was wrong? It's so embarrassing. I had a 21.5K 1% resistor out of place. It was supposed to be part of a voltage divider on one of the input pins of the LM311 comparator. Instead, it was just hanging out doing nothing. So, why was I getting ANY input at all? I'm sure that if I dug in I could analyze the circuit and figure out exactly why it was almost-but-not-quite-working without that resistor, but frankly I'm just glad to have sorted it out! I'll leave it as an exercise to the reader.
So, with that taken care of, I can get back to writing the firmware. Again, DEC actually has a pretty good explanation of how their firmware works, so I'm going to try to get mine to behave similarly.
Friday, July 12, 2013 at 2:21 PM PDT
I'm frustratingly close to getting data from the keyboard, but something is clearly not right.
The keyboard protocol gets weirder and weirder the more you look into it. Long story short, the terminal is continuously sending a status word to the keyboard, as previously discussed. When it wants to read the current key (or keys) being pressed, it sets bit 5 in the status word, and the keyboard responds by scanning every single key on the keyboard in sequence. Whenever it finds a key down, it sends the key code to the terminal, and then continues its scan. When it's done with its scan, it sends the character code 0x7F to say "I'm done!"
Not a design decision I would have made, but whatever.
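To make the transaction concrete, here's a toy Python model of the scan exchange described above. The names and the scan ordering are my own illustrative assumptions, not DEC's specification:

```python
SCAN_REQUEST = 1 << 5   # bit 5 of the status word, per the post
END_OF_SCAN = 0x7F      # the keyboard's "I'm done!" marker

def keyboard_response(status_word, keys_down):
    """Model the keyboard's reply to one status word: if a scan was
    requested, every key currently held down is reported in scan
    order, followed by the 0x7F terminator. Otherwise the keyboard
    stays silent."""
    if not (status_word & SCAN_REQUEST):
        return []
    return sorted(keys_down) + [END_OF_SCAN]

# No scan requested: the keyboard says nothing
print(keyboard_response(0b0000_0000, {0x2A}))        # → []
# Scan requested with two keys held down
print(keyboard_response(SCAN_REQUEST, {0x2A, 0x10})) # → [16, 42, 127]
```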
Anyway, I'm trying my best to get the Teensy 2.0 to emulate this behavior. It's constantly sending status bytes to the keyboard. My demo program clearly shows the status bytes are working: I can control all the lights and the speaker. Once every 64 status bytes, I ask for a key scan. The UART receives the data and interrupts the Teensy to let it know it has data available. That's all working fine.
In fact, I can successfully read character codes, but only some of them. A lot of them just don't work at all, and others send back weird (but consistent, at least) values that they shouldn't.
Just to be absolutely sure it's not a bad keyboard, I finally dug my VT101 out of storage and turned it on for the first time (carefully, with a Variac). Hey, good news, it works perfectly. No issues. The keyboard is 100% functional on a real VT100 terminal. That is actually a big relief.
So, I'll keep plugging away. Debugging is the fun part, right?!
Thursday, July 11, 2013 at 1:29 AM PDT
I've reached a great milestone tonight. For the first time, I'm actually sending data to the keyboard using the real communications protocol. I wrote a very simple demo program in AVR C to show off a few LED patterns and beep the speaker a few times. Video embedded below.
Now that I know the circuit works, I'm in full AVR C programming mode. Best of all, because the clock is fully implemented in hardware, I don't have to spend all my time worrying about tight timing conditions. I'll be pushing my code on Github just as soon as I remove some of the more embarrassing comments!
Tuesday, July 9, 2013 at 12:49 AM PDT
Just a quick note tonight about a hack I tried that failed. But first, here's the latest revision of my schematic.
If you compare it to the older schematic there are a few small changes. The one I'd like to write about is the addition of IC8, a 74LS04 inverter that sits between the 74LS93 and the 74LS38 in the "Clock Source" section. It inverts the square wave with the 8 uS period. If you go back and look at the original DEC schematic, you'll see they used one there, too. Silly me, I thought I could get away without it. But no: it turns out that the UART is clocked on the rising edge, while the PWM-encoded data is clocked on the falling edge. It's a critical logic gate; without it, the data get out of sync by half a period.
Sunday, July 7, 2013 at 6:38 PM PDT
I've annotated a diagram from the VT100 technical manual to explain how that clock timing circuit works. It's pretty neat!
There's a lot going on in that diagram.
The clocks corresponding to LBA3 and LBA4 are CLOCK_B and CLOCK_C in my circuit, with periods of 4.0 uS and 8.0 uS, respectively. The intermediate square waves (I, II, and III) show the logical combination of the data and the clocks. Finally, the outputs labeled OUT and /OUT represent the clock output on either side of the 7416 buffer/inverter. (Because the 7416 has an open-collector output, it's safe to pull it up to +12V, which is what the interface expects.)
I'm just thankful that DEC produced such marvelous documentation. Everything is explained in such great detail. It sure saved me from having to do any actual work, that's for sure! :^)
Sunday, July 7, 2013 at 6:28 PM PDT
The other night I went to bed frustrated with AVR programming. I was trying to come up with some perfect scheme that would allow me to generate PWM "the right way" so I wouldn't have to bit-bang, but it was janky at best. I wanted to use interrupts to drive the keyboard decoding, but there was no guarantee they'd get serviced in time. Then I looked into whether I could somehow hijack the AVR's USART to work with an external clock and still do 16-sample encoding/decoding (you can't). It was hard, and I could sense that I was setting myself up for a terrible month of debugging impossible timing issues.
And then it hit me. What if instead of all this software, I do it the old fashioned way? What if I use hardware?
OK, so I know the UART that DEC used, the Western Digital TR1865, is no longer produced and is very hard to find. But it turns out there's an equivalent part made by Intersil! The pin-compatible and software-compatible HD-6402 UART is not only still made, but my favorite local parts shop, Anchor Electronics, has it in stock. Eureka!
Goodbye, Plan A. Hello, Plan B!
Thursday, July 4, 2013 at 8:54 PM PDT
One more thought for tonight before I call it an evening. How am I going to build the interface?
Lucky for me, DEC already built the interface 36 years ago and documented it thoroughly. In fact, here it is, straight from the schematics (enhanced for readability).
I've highlighted the bits I'm interested in stealing with a dashed green line. This is the electrical interface I described previously, the one that compares the output clock signal and data to the input from the keyboard and converts it into a TTL-level input for the UART.
Of course, in the real VT100 the clock is generated by video refresh circuitry (LBA3 and LBA4) and wire-AND'ed together with the data to generate the PWM signal. I won't have that luxury. I'll be using a Teensy 2.0 AVR development board, and while it does have a UART, it's not exactly TR1865 compatible! So I'm going to resort to bit banging. At 16 MHz, I'm hoping I can squeeze enough out of the Teensy to handle keyboard input, keyboard status update, and USB output. If not… well, I guess I can move up to the Teensy 3.0, which is a 48 MHz ARM Cortex monster. But that seems like absurd overkill to me! Let's hope we don't have to go there.
Thursday, July 4, 2013 at 7:22 PM PDT
I've been delving more into the wire protocol the VT100 keyboard uses to talk to the terminal. As you may recall from my first post in the series, communication is bidirectional, asynchronous serial over a single wire. The protocol uses three reference voltage levels (0, 6, and 12 volts) and makes each end sample its own output to see who's talking, which is… well, it seems weird to me, but I'm sure they had a good reason. So that leaves the actual signaling. How did they send the clock and data using those levels?
Well, it turns out that they used PWM (pulse width modulation). Each bit sent across the wire is encoded on top of 16 clock cycles. The exact number 16 is dictated by the UARTs the designers chose, the Western Digital TR1865. The TR1865 expects a square wave clock signal, and samples its serial input on the falling edge of the pulse for sixteen clock cycles per bit. The DEC engineers cleverly decided to use PWM to change the duty cycle of the clock wave form between 25% and 75%, and feed the same signal to the clock input and serial input of the UART. The serial data can then be separated from the clock signal with a very simple RC circuit and a comparator.
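Here's a rough Python model of that PWM scheme. The 25%/75% polarity assignment below is an illustrative assumption rather than something read off the DEC schematic; the point is that the data bit lives in the duty cycle, and a simple threshold (the job of the RC filter and comparator) recovers it:

```python
CYCLES_PER_BIT = 16   # dictated by the TR1865's 16x clock, per the post

def encode_bit(bit):
    """Encode one data bit as 16 clock samples of a PWM waveform:
    25% high for a 0, 75% high for a 1 (polarity is an assumption
    for illustration)."""
    high = 12 if bit else 4   # 75% or 25% of 16 samples
    return [1] * high + [0] * (CYCLES_PER_BIT - high)

def decode_bit(samples):
    """Recover the bit by measuring duty cycle, as the RC circuit
    and comparator do in hardware."""
    return 1 if sum(samples) > CYCLES_PER_BIT // 2 else 0

bits = [0, 1, 1, 0, 1]
wire = [s for b in bits for s in encode_bit(b)]
decoded = [decode_bit(wire[i:i + CYCLES_PER_BIT])
           for i in range(0, len(wire), CYCLES_PER_BIT)]
print(decoded)   # → [0, 1, 1, 0, 1]
```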
Tuesday, July 2, 2013 at 3:01 PM PDT
I've been delving into the internals of the VT100, and it's an impressive piece of engineering. The terminal itself is a fairly sophisticated computer in its own right, built around an Intel 8085 processor. You might expect the keyboard to be a simple ASCII affair with a 7-bit interface, which was popular at the time, but it's not. Instead, it's a serial interface, like the later PC, AT and PS/2 keyboards we're all familiar with. Unlike the AT or PS/2 keyboards, though, the VT100 keyboard is bi-directional. The terminal manages the state of the LEDs and bell on the keyboard by sending status words to it.
Not too hard, right? It kind of sounds like SPI, no big deal. But wait!
Sunday, June 30, 2013 at 5:33 PM PDT
Egads, is it really July already? It's time for Summer Retrochallenge 2013!
Actually, I've left it right up to the last minute this year. So sorry! I was this close to just giving it a pass, but then a project idea literally figuratively fell into my lap.
Long story short, a couple of weeks ago I bought a VT100 keyboard to complete a keyboard-less VT100 terminal I've been trying to get working. The keyboard just arrived at my home, but the rest of the terminal is down at my storage. I'm far too lazy to go get it, especially in this heat. Solution: a way to plug the VT100 keyboard into my Mac and use it without the terminal!
Sunday, February 17, 2013 at 2:14 PM PST
When I was about 16, I fell in love with books. Oh, I don’t mean reading — I was in love with reading long before then. I mean that I fell in love with books as objects, as vessels of information with their own form and function. In college I took a course on bibliography called The History of the Book, which only served to make the love stronger. In 2002 I studied bookbinding at the North Bennet Street School in Boston. So, when the 46th Annual California International Antiquarian Book Fair came to town, of course I had to visit. While I was at the fair, I attended a talk titled “Forging Galileo” by assistant professor Nick Wilding of Georgia State University, and it kind of blew my mind.
Thursday, January 31, 2013 at 9:20 PM PST
And now, the inevitable video.
Wednesday, January 30, 2013 at 11:43 PM PST
Well, the project is done, I've put down my soldering iron, I've stopped writing code, and I'm in the middle of editing a demo video (to be posted later, when I'm a little less tired).
I'm mostly happy with how things turned out this time around. My project was a success, and I've met my goal of having usable tape storage. I should probably put the word usable in quotes, though, like this: "usable". Because, to be honest, it is somewhat flaky. I have about a 70% success rate with saving programs that can be read back later. I think my main problem is just how much noise my circuit injects into the signal. I really did not use best engineering practices: I have no ground plane, my cables are unshielded, and my leads are too long. I'm just begging for noise. So, really, I'd say 70% success is not so bad, all things considered.
Tuesday, January 29, 2013 at 12:39 PM PST
Good news! I tracked down my checksum bug last night, and just in the nick of time, everything is working. I can both LOAD and SAVE from EhBASIC, and my little SBC is happy. Finally, a Retrochallenge that I've completed successfully!
For the technically minded who are interested in the gory details, I'll explain exactly what was going wrong. BASIC programs are stored in tape files that contain two records.
Each record starts with a 770Hz tone as a header, followed by a short sync bit, then the record data, and finally a checksum. The checksum is just a running-XOR of the data.
The first record is just two bytes: it specifies the length of the data in the second record (little endian, which is, as we all know, the only true endianness).
The second record is the BASIC program data. The bug I was seeing on Sunday was that the checksum failed, but only on the DATA record, never on the LENGTH record. I could not for the life of me figure out what was happening. I went so far as to capture the tape data on my oscilloscope, walk over it bit by bit by hand, and calculate a checksum myself. The checksum on the tape was correct; it should have read correctly. And of course the checksum of the LENGTH record was matching correctly.
Then I had a breakthrough, one of those things that should have been painfully obvious but was not. I discovered it by walking through the data that was loaded into memory off the tape. EhBASIC puts the program text at $0301, so I dumped page 3 to see what was in it. It all looked correct, until I noticed that the checksum itself was being loaded as data. I had a simple, very stupid off-by-one error in my arithmetic, so I was slurping in the checksum as part of the program data by mistake. It was then reading garbage (random tape hiss noise) from just after the record and treating that as the checksum, which of course never matched. Aha!

I fixed my arithmetic, burned a new EPROM, and now, at last, I can LOAD just like it's 1977 all over again.
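For the curious, the record-plus-checksum logic, and the nature of that off-by-one, can be sketched in Python. The function names and the example bytes here are hypothetical; only the running-XOR checksum and the record layout come from the post:

```python
def xor_checksum(data):
    """Running XOR of every byte in the record, as described above."""
    c = 0
    for b in data:
        c ^= b
    return c

def read_record(tape, length):
    """Read one record of `length` data bytes plus its checksum byte,
    and verify it. The buggy version effectively read length + 1 data
    bytes, so the checksum byte landed in the program data and the
    byte after it (random tape hiss) got checked instead."""
    data, check = tape[:length], tape[length]
    if xor_checksum(data) != check:
        raise ValueError("checksum mismatch")
    return data

payload = bytes([0x0A, 0x00, 0x99, 0x22])   # hypothetical program bytes
# The record on tape: data, then checksum, then whatever hiss follows
tape = payload + bytes([xor_checksum(payload)]) + bytes([0x5C])
print(list(read_record(tape, len(payload))))   # → [10, 0, 153, 34]
```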
Tomorrow I'll post a quick video, and a final Retrochallenge wrap-up.
Monday, January 28, 2013 at 12:46 AM PST
OK, it turns out this is harder than I expected. I'm fighting a very strange bug in my LOAD code. It's complaining of a checksum mismatch, but there doesn't actually seem to be one. And, weirder, my program loads anyway, despite the fact that the checksum failure branches away from the code that is supposed to reset the BASIC program and variable space.
In short, it's failing where it's supposed to succeed, and it's succeeding where it's supposed to fail. Wait, what?!
Whatever the bug is, I still intend to squash it before the Retrochallenge deadline on Thursday. Plenty of time! (he said, over-confidently)
Sunday, January 27, 2013 at 12:50 PM PST
Golly, I do like waiting until the last minute, don't I?
No, I haven't given up. I'm just finishing up the last few bits. I meant to get everything done yesterday, but got distracted by Real Life Matters. Oh well! But I have plenty of time today.
The very last thing I need to do is patch together my READ and WRITE routines into LOAD and SAVE BASIC commands. Since my little computer uses Enhanced 6502 BASIC, this will not be difficult. EhBASIC provides empty hooks for your own LOAD and SAVE code, so it's very easy to patch into.
By this time tomorrow, I hope to have a finished project ready to demo.
Monday, January 21, 2013 at 7:01 PM PST
Great! I can both save and load from tape now!
I'm not quite done yet. I want to add checksum code and get this integrated into Enhanced BASIC so I don't need to drop down into the monitor to save and load. Gotta make it user friendly, after all!
Friday, January 18, 2013 at 12:26 AM PST
Good progress tonight! I think the code to WRITE data to tape is fully complete. I'm not particularly impressed with my own 6502 assembly, but by golly it works.
Monday, January 14, 2013 at 10:29 PM PST
Alright, it's not much to look at, but it's a start.
This code generates ten seconds of 770Hz square wave, which is the tape header. It is both less compact and less elegant than the code that Woz wrote for the Apple 1 and Apple II, but for that I blame my inexperience (and the fact that Woz was a kind of 6502 savant who had no need for such niceties as assemblers). I hope that it is at least fairly easy to read and understand.
Saturday, January 12, 2013 at 4:54 PM PST
A little while ago I wrote the 6502 simulator pictured above, Symon, and released it as open source software. I wrote it because I was developing a small 6502 computer, and wanted a simulator that matched the hardware's memory map. I've always been interested in learning more about simulation, so it was a natural project for me to gravitate to. I've enjoyed working on it tremendously, but there is only one problem: I wrote it in Java.
Friday, January 11, 2013 at 8:46 PM PST
I've been pretty quiet over the past week. This is primarily because my day job has kept me busier than I'd like, compounded by the fact that I've come down with a cold. But I shall persevere and forge on ahead. The hardware is done, and I'm currently working on the software. But, how exactly does it work? That's what I hope to show in this post.
Saturday, January 5, 2013 at 12:37 AM PST
I put together the cassette input circuit tonight on a teeny-tiny little perfboard. It's not really much to look at, but happily, it works!
Its sole purpose in life is to take the audio from a cassette and turn it into a useful digital signal. Here's a capture from my scope, showing the cassette input on the yellow trace, and the digital output on the blue trace. The input is showing 200mV per division, so it's a peak-to-peak of about 300mV. The output trace is showing 2V per division and it's peaking at about 3.9V, but that's because I forgot a pull-up resistor. I'll add that tomorrow!
I've annotated the screenshot to show how the data is encoded on the tape. This is actually from a copy of APPLE 1 INTEGER BASIC (the original, the one that Woz wrote!) that I downloaded as a digital audio file from Brutal Deluxe Software and then recorded to cassette myself. The Apple II used the same format. A “0” is encoded as one cycle of a 2 kHz signal (about 500µs wide), and a “1” is encoded as one cycle of a 1 kHz signal (about 1000µs wide). This seems like a fine format to use for my own project, so that's what I'll do.
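To make the format concrete, here's a quick Python sketch of how one byte turns into a train of half-cycle widths on tape. The MSB-first bit order is my assumption; I haven't checked which order Woz actually used:

```python
# each bit is one full cycle of tone:
# "0" -> 2 kHz (two 250 us half-cycles), "1" -> 1 kHz (two 500 us half-cycles)
HALF_US = {0: 250, 1: 500}

def encode_byte(value: int) -> list[int]:
    """Return the sequence of half-cycle durations (microseconds)
    for one byte, MSB first (bit order is an assumption)."""
    out = []
    for i in range(7, -1, -1):
        bit = (value >> i) & 1
        out.extend([HALF_US[bit]] * 2)  # one full cycle per bit
    return out

print(encode_byte(0b10100000))
```

Every byte therefore takes sixteen half-cycles, and the byte's value directly determines how long it occupies on the tape: a byte of all ones takes twice as long as a byte of all zeros.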
I have a few minor tweaks to do to the hardware before I mount it into the project box next to my 6502 SBC, but I should have all the hardware done tomorrow. Then it's onto the hard part: the software.
Friday, January 4, 2013 at 12:14 AM PST
I managed to get a bit of experimenting in last night, and the first thing I learned was that my decision to crib the cassette input circuitry from the Apple II was not going to work.
Tuesday, January 1, 2013 at 9:41 PM PST
Happy New Year! It's January 1st, and that means it's the first day of the 2013 Retrochallenge Winter Warmup!
This year I'm going to be tackling a small project, something I know I can finish in a month. I want to add cassette mass storage to a 6502 single-board computer that I built about a month ago. It's a nice little computer, but without any kind of storage system for programs it's kind of useless.
Naturally, the project will require both hardware and software. Today I've been doing research into how cassette storage worked on a few classic systems: the TRS-80 Model I, the SYM-1, the AIM-65, the Apple I, and the Apple II. After briefly evaluating all of these, I think I'm going to take inspiration from the Apple II Cassette Interface. It has two big advantages: it's very well documented, and both the hardware and software are extremely simple. Of course it has some drawbacks, too. In part due to its simplicity, the Apple II interface was finicky and required careful setting of the tape audio level on playback for everything to work right. I think I'm OK with that trade-off.
On the hardware side, the cassette interface should require very little. For I/O, I'll only need one pin on my 6522 VIA. Writing will be done directly, and reading will use an LM741 Op-Amp as a zero-crossing detector, just like the Apple II. Software will use polling and a counter to determine whether two logic level transitions are a one or a zero, just like the Apple II.
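The read side, in other words, just measures the time between logic-level transitions and compares it to a threshold. Here's a Python sketch of the classification logic; the real thing will be a polled counter loop in 6502 assembly, and the 375µs threshold is simply the midpoint between the two half-cycle widths:

```python
THRESHOLD_US = 375  # midway between a 250 us ("0") and 500 us ("1") half-cycle

def decode(half_cycles_us: list[int]) -> list[int]:
    """Turn measured half-cycle widths back into bits.  Each bit is one
    full cycle (two half-cycles); we classify on the first half-cycle of
    each pair.  The 6502 version polls the VIA pin and counts instead."""
    bits = []
    for i in range(0, len(half_cycles_us), 2):  # two half-cycles per bit
        bits.append(1 if half_cycles_us[i] > THRESHOLD_US else 0)
    return bits

print(decode([500, 500, 250, 250, 250, 250]))  # -> [1, 0, 0]
```

The nice property of a midpoint threshold is tolerance: tape speed can drift quite a bit in either direction before a half-cycle gets misclassified.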
I suspect the hardest part of all of this will be getting the tape record and file format right. My goal here is NOT to be 100% Apple-II compatible, so that gives me some leeway.
Wish me luck!
Saturday, December 15, 2012 at 3:59 PM PST
In the spirit of fairness, I'm not going to start work on the Retrochallenge Winter Warm-Up until January 1st, but I did want to start laying the groundwork so I can hit the ground running on the first of the year. I took my first step in preparation today by buying the all-important cassette player, an old-school Panasonic RQ2102.
I chose the RQ2102 specifically because of its historical accuracy. Apple used to recommend this very model as the preferred cassette recorder for the Apple II — and yet, it's still made today. Remarkable!
Of course I don't strictly need a cassette recorder. Sure, I could use a PC or an iPad or something as an MP3 recorder. But honestly, where's the fun in that? So I want to use the real thing.
Friday, December 14, 2012 at 3:34 PM PST
Following the rather disastrous non-completion of my PDP-11 restoration project for the Summer 2012 Retrochallenge, I have decided to bite off a much more realistic and completable project for the 2013 Retrochallenge Winter Warm-Up.
Last month, I built this small 6502-based single board computer to play with. It is nothing special, but I learned a lot while putting it together and had quite a lot of fun.
It's a super minimalist setup: R65C02 CPU, 32KB SRAM, 16KB EPROM, R6522 VIA, R6551 ACIA, a couple of oscillators, and a couple of 74HC00s and a 74HC14 for address decoding and reset. I wire-wrapped it, and I was thrilled when I got it working and running Enhanced 6502 BASIC.
But it's really just a toy; you can't do very much with it. My biggest gripe by far is that there's no mass storage: you can't SAVE or LOAD anything, so you'd better hope your BASIC program isn't very long.
In the spirit of 1970s computer home-brewing, I'd like to fix that problem by adding an audio cassette I/O system for mass storage. I think this would be a good, small Retrochallenge project. And it's a great unknown to me, certainly something I've never done before, so it should be very educational as well!
Sunday, September 16, 2012 at 6:25 PM PDT
It's been forever since I updated here, hasn't it!
Well. This is hard for me to say, but I think it's time I officially give up on the 11/35 restoration. Wait, hear me out.
I've been poking at it over the last month or so, trying to make heads or tails out of what's going on. And, unfortunately, I can't. It has been exceedingly frustrating just trying to understand the problems with the startup status pulses, let alone repairing what looks like a completely flakey timing board, and possibly dead ROMs on the microword board.
When I received them, the board set was covered in rust and mouse piss. The IC legs (with the exception of the gold-plated ones, of course) were black, or red and spotted. I'm shocked I was even able to get the power supply back to life. So it's really no surprise that the logic repair is above my skill level. Would it even be repairable to a DEC engineer? Maybe. I don't know.
So tonight, I will regretfully put it back into the rack, clear off my workbench, and move on to some other projects. Maybe I'll come back to this some day, if I ever find a replacement set of KD11-A boards. I do know someone who has a set. But for now, it's time to move on.
Monday, August 6, 2012 at 1:09 AM PDT
I know I said I was going to take a few days off, but I couldn't help myself. I spent about two hours looking at signals on the PDP-11/35 today.
Saturday, August 4, 2012 at 11:25 PM PDT
The 8881 was a red herring. As Ian pointed out in the comments, the output is an open collector, so it won't be high until BUS DC LO pulls it high. In fact that gate is only pulsed on power-off, not power-on. So that chip is probably just fine.
That said, I'm still completely lost. There is definitely something wrong with the power-on sequence, but I can't for the life of me figure out what it is. I'm going to step back and take a break for a few days and see if inspiration strikes me.
What I really wish I had was a known-good KD11-A to compare against, but that is very unlikely to happen, alas.
Friday, August 3, 2012 at 11:12 PM PDT
It looks like an 8881 is bad on the STATUS module.
Pins 5 and 6 are inputs, and pin 4 is the output of a NAND gate on the 8881. That sure looks like a bad part to me.
I've ordered a few spare Signetics N8881's off of eBay, but… I have to admit, I'm really down and disheartened about the project tonight. I didn't get more than an hour of debugging in tonight before I found this chip, and now I'll be high and dry for another week until my chips come in. And what will I find next?
I really think this might be a lost cause. I hate to think about giving up after all the time and money I've poured into this so far. There are other projects I'd like to be working on, but I've had this PDP-11/35 occupying my entire workbench for a month now and I don't feel like I've actually accomplished very much.
Well, maybe the feeling will pass. I'll give it one more go. If I continue to find as many dead parts after getting past the 8881, I'll reconsider whether it's worth it to continue.
Thursday, August 2, 2012 at 12:09 AM PDT
This is what makes a PDP-11/35 or PDP-11/40 tick. It turns out to be 421 ICs. Impressive!
I needed to know what TTL ICs I might need to order replacements for, so I decided to tally up exactly which ones go into making a KD11-A CPU. This information comes straight from the PDP-11/40 Engineering Drawings, which list a summary of ICs used in each module on the first page of the module's drawings.
Wednesday, August 1, 2012 at 1:24 PM PDT
Obviously, I did not finish my project in time for the conclusion of Retrochallenge. Of course I'm a little disappointed, but in retrospect there's not much that I could have done to make it come together successfully before August 1st.
A few things went wrong: I didn't get my chassis back from the powder coater until over a week into the month already, and then my power supply died just as I was getting started. But even so, given what I've discovered in the CPU so far I doubt that I would have made it by August 1st even without those problems. In fact, I predict this restoration will take several months more at least. I'm fully committed now, I'm not stopping until this thing works, even if I have to replace every single IC.
I'm counting this as a valuable learning experience, and I'll be back with some crazy new idea for the next Retrochallenge.
Wednesday, August 1, 2012 at 10:27 AM PDT
A couple of comments here and over on the Vintage Computer Forum prompted me to pull out my copy of Don Lancaster's "TTL Cookbook" to verify my assumptions. As is so often the case, my assumptions were wrong.
The original TTL family used in the KD11-A, the venerable 7400 series, can drive a maximum load of 16mA per output, and consumes 1.6mA per input. The more modern (and cheap and easy to find) Low-Power Schottky 74LS00 parts on the other hand can drive a maximum load of only 8mA per output.
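The fan-out arithmetic is worth writing down explicitly. With each standard TTL input drawing 1.6mA, an original 7400 output can drive ten of them, but a 74LS output can only drive five, which matters if I start dropping 74LS replacements into a board full of original 7400-series parts:

```python
# output sink capability in microamps, per Don Lancaster's TTL Cookbook
OUT_SINK_UA = {"7400": 16_000, "74LS00": 8_000}
IN_LOAD_UA = 1_600  # one standard-TTL input load

# how many standard-TTL inputs each family's output can safely drive
fanout = {family: ua // IN_LOAD_UA for family, ua in OUT_SINK_UA.items()}
print(fanout)  # {'7400': 10, '74LS00': 5}
```

(74LS inputs also draw far less than 1.6mA, so LS driving LS is fine; the squeeze only happens when an LS output has to drive original-series inputs.)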
Tuesday, July 31, 2012 at 1:06 AM PDT
Now that the power supply is straightened out (at last!), I've been able to start tracing logic and seeing what's up with the CPU.
First things first: It still doesn't run. Last week before the power supply died I mentioned that I thought it may have been due to low voltage. That was wishful thinking. Now that the voltage levels are good, the behavior of the CPU is roughly the same. It is time to go spelunking into the lair of the beast.
I wanted to tackle the HALT/ENABLE switch first. If the CPU is halted (switch set into the HALT position) it should be running the console loop, waiting for commands from the console switches, but I have no evidence that it is. I had already verified that the 7410's on the console were working correctly, so my investigation took me from there to the CPU itself.
Sunday, July 29, 2012 at 11:55 PM PDT
It's remarkable how much spam is directed at this little blog. In a given day, I'd say the Akismet plugin stops something like 5 to 8 spam comments from getting posted here. It's absolutely infuriating and ridiculous, especially for a podunk little no-name blog like this. I can't imagine how many the big ones get!
On the other hand, it would be so much worse if the comments actually made it through. I should be grateful that Akismet is so good at stopping them (knock on wood).
Saturday, July 28, 2012 at 10:03 PM PDT
I replaced MPSA05 and MPSA55 transistors that I suspected of being reversed, and bingo, the power supply is working again. I'm still upset with myself that I put them in backward, but in the end no other damage was done. Thank goodness.
Now I have a good, stable 5V supply, and I've adjusted the output correctly. That means I can get back to the business of debugging the CPU and seeing why the machine won't run.
Friday, July 27, 2012 at 10:37 PM PDT
Oh my God! I know what I did wrong with the power supply. Good heavens!
DEC used GPSA05 (NPN) and GPSA55 (PNP) transistors. I replaced two that were shorted out with MPSA05 and MPSA55's. They have completely identical specifications.
But guess what? THEY USE REVERSED PINS. Pin 1 is the emitter on the MPSA05/55's. It's the collector on the GPSA05/55's.
Lesson learned: ALWAYS CHECK THE TRANSISTOR ORIENTATION. Especially when you're 110% sure the parts are identical.
I will report back after I have fixed my mistake.
Friday, July 27, 2012 at 12:56 PM PDT
Last night before bed I whipped up a circuit simulation of the 5V regulator using iCircuit running on OS X. It's a little buggy, and it's certainly no Spice, but the real-time feedback is fantastic.
The beauty of this is I can play with it and examine possible failure scenarios by shorting or opening various components to see what happens. I think I have a much better idea of how the regulator actually works, now, and I'm going to do a careful part-by-part check, from one end of the circuit to the other, and find out what's failed. My gut tells me I should check R50 (the potentiometer), R47, and the 2.4V reference zener. So I'm not giving up hope yet!
Thursday, July 26, 2012 at 7:28 PM PDT
Things are certainly looking grim for my hopes of completing restoration before July is through. I have replaced every shorted part, and I still don't understand what's going wrong. I am letting my Retrochallenge friends down!
The problem really is that I am simply out of my league with the +5V regulator. I can fake my way through digital electronics quite well because much of it feels natural to me. And since everything I do is low frequency, I have the luxury of more or less ignoring capacitance and inductance, and I can pretend that transistors are nothing more than electrical switches. It makes life so easy.
Tuesday, July 24, 2012 at 5:04 PM PDT
You're probably wondering to yourself, "Self, why hasn't Twylo updated his PDP-11 repair journal lately? It's been days!"
I can sum it up with one schematic:
That's the schematic for the +5V regulator in the 11/35's power supply. It blew up on me (figuratively) just as things were getting good.
Wednesday, July 18, 2012 at 11:44 PM PDT
I've taken the initiative to dig into the KD11-A Maintenance Manual and the Engineering Drawings a little bit. To say I am intimidated would be a bit of an understatement. This is a very complex machine that many smart people with engineering degrees designed. I like to think that I am capable of screwing in a lightbulb without hurting myself, but I do not have an engineering degree. My electronics knowledge is 100% self-taught. So I feel, I think somewhat justifiably, like I may be a little out of my league.
On the other hand, there's a sense of freedom in not really knowing how much I don't know.
Tuesday, July 17, 2012 at 9:13 PM PDT
Last night was the big moment. I fired up the 11/35 for the first time with the CPU installed.
I'm pleased to report that nothing popped, no fireworks went off, and no magic smoke got out.
On the other hand, it did not work correctly, either. The processor starts up, the RUN light comes on, random(-ish) data is displayed on the ADDRESS and DATA lights, and that's about all that I can make happen.
So now begins the really hard part, the logic debugging. I've started a thread over on Erik S. Klein's Vintage Computer Forum to discuss what I'm seeing, with the hopes that the very smart (much, much smarter than I) DEC fiends over there will be able to offer insight into how I should proceed with debugging.
I am armed with the KD11-A Processor Maintenance Manual, the PDP-11/40 System Engineering Drawings, an 8-channel logic analyzer, an oscilloscope, a multimeter, my brain, and the Internet. This will by no means be easy, but it will definitely be educational, and probably fun and frustrating in equal parts. Let's do this!
Friday, July 13, 2012 at 9:47 AM PDT
Things really started to come together last night. I finished installing the cooling fans, dropped the power supply into the chassis, and connected the 9-slot backplane. No cards and no console, but I consider this my first "smoke test" of the fully assembled chassis and power subsystem. It works!
Exciting days ahead. I have to re-assemble the console, and then I'll drop in the cleaned-up CPU cards this weekend and see what happens. I'm a mite skittish about that.
Sunday, July 8, 2012 at 10:57 PM PDT
I spent this weekend getting started on the reassembly of all the little bits and pieces that I pulled out of the 11/35 while cleaning it. I'm super glad that I photographed everything and bagged and tagged all the screws and such when I pulled it apart, because it's made the process of putting it back together a lot easier than it otherwise would have been.
Thursday, July 5, 2012 at 10:59 PM PDT
I picked up the PDP-11/35 chassis parts from the powder coat shop this afternoon, and they are spectacularly well done. Total cost was $100.00, which I consider money well spent.
I'm thrilled at how everything came out. There was some really deep pitting from rust, and of course some of the pitting is still there after the metal was sandblasted and powder coated, but at least now everything is protected from corrosion again. With proper care this chassis should last a very, very long time.
Now that I have everything back I will start piecing it back together. I want to get it all assembled this weekend, so I can start debugging logic by early next week.
Onward and upward!
Thursday, July 5, 2012 at 10:04 AM PDT
The powder coater has my job done, so I'm heading up to South San Francisco this afternoon to pick it up. That means we're back in business, and I'll be able to start putting the 11/35 back together this evening. Woo!
Saturday, June 30, 2012 at 3:06 PM PDT
Now I'm getting worried. Retrochallenge starts tomorrow, and I still don't have my BA11-K back from powder coating. I'll have to call tomorrow and find out when the ETA is. I know the shop was really backed up, but it's been three weeks now, so I hope they'll have it done soon. Every day without it is one less day to get the 11/35 fixed up before Retrochallenge is over!
Tuesday, June 26, 2012 at 8:37 PM PDT
Well, I'm still waiting for my powder coated parts to come back. I sure hope they get here this week, I need to get moving on this project because I've entered the July 2012 Retrochallenge!
If I have any hope of actually making the PDP-11/35 work, I'd better start getting things assembled for testing the first week of July. I don't know for sure, but I suspect I'm going to have to spend a lot of late evenings in the workshop with a logic analyzer and an oscilloscope or two. Forty-year-old TTL has a way of going bad!
Wednesday, June 20, 2012 at 11:09 PM PDT
While I'm waiting (rather impatiently) for the chassis and bits to come back from powder coating so I can start assembling things, I've moved on and finished cleaning up the Unibus backplanes and front panel switches.
Monday, June 11, 2012 at 5:06 PM PDT
Well, I dropped all the chassis bits off at the powder coater today. They'll get sandblasted and powder coated, except the front panel bezel, which is just getting sandblasted. I'll paint that white myself.
It's going to be gone for at least a couple of weeks, I think, since the shop is backed up with work. But that's fine, I'm patient.
Sunday, June 10, 2012 at 6:52 PM PDT
Tonight I moved on to a task I've been dreading: cleaning up the Unibus backplanes.
Electrically, they seem like they were probably in acceptable shape, but they are incredibly filthy. They were exposed to the elements in a shed for many years, face up and just waiting for God knows what to fall or crawl into them. The slots that did not have cards had become embedded with bits of fiberglass, dirt, spider webs, insect droppings, and probably a goodly dose of mouse piss. Some of the individual nooks and crannies (I don't know the technical term here - the individual slot openings where a single finger of one card edge connector is installed) had what I assume were insect or spider egg sacs in them, gripping the sides tightly.
Saturday, June 9, 2012 at 11:16 PM PDT
I started testing the LEDs on the KY11 front panel tonight. The process was simple: apply power to the panel, and ground each pin one by one.
There was only one dead LED, but several others were barely visible so I decided to replace them too. Unfortunately, the new LEDs are noticeably brighter than the old LEDs (though drawing the same current), so I ended up pulling and replacing ALL of them.
Normally I'd hate to do a wholesale replacement like that, but in this case I think it'll be worth it.
Thursday, June 7, 2012 at 5:46 PM PDT
I had hoped to get the BA11-B and H750 chassis in for sandblasting and repainting today, but the painter I called turned down the job. He is not used to dealing with sheet metal. I appreciate him turning it down rather than trying to do it and not doing a good job.
Thursday, June 7, 2012 at 12:41 AM PDT
Now that I have a working power supply, the first thing I'm going to do is put the machine back together, right?
Thursday, June 7, 2012 at 12:31 AM PDT
Success! At long last I am thrilled to report that my DEC H750 power supply is fully operational. All the voltages check out, and both with and without load the waveforms show no ripple. It didn't come easily, of course, but I got really lucky and along the way I learned a lot about DEC's power supplies.
What follows is a bit long. Unless you're really interested in reading too much information about the process, please feel free to skip it and just say "Oh, that's nice, Seth got his power supply working. Jolly good!"
Wednesday, May 30, 2012 at 7:25 PM PDT
As promised, I received my replacement H744 +5V regulator in the mail last Tuesday. I haven't tested it yet, but it looks brand new. Other than that I don't have any more progress to report on the PDP-11, but a big project at work is just winding down, and I'm taking several days off next week. That will give me some time to really start tackling the power supply. I'll post pictures and updates here as I work on it.
Friday, May 18, 2012 at 7:24 PM PDT
Just a quick note. I'm not going to have much time to work on the PDP again until next weekend. But my plan is to disassemble the power supply completely at that point. The replacement H744 is on the way and I'll have it in my hands on Tuesday, that should go a long way to making things functional.
So, no, I'm not giving up. Not yet!
Sunday, May 13, 2012 at 7:23 PM PDT
Today's activities were spent taking the H750 power supply out of the BA11-B chassis. I haven't found a scrap of documentation on the H750 or the BA11-B, so this was a little harder than it sounds. Luckily for me, DEC engineers of 40 years ago put everything together into a package that was fairly easy to figure out, so it didn't prove to be too difficult. Actually, probably the hardest part was getting the BA11-B out of the rack and onto the table without destroying my back. This thing is flippin' heavy.
Wednesday, May 9, 2012 at 7:15 PM PDT
I got some guidance today on the classiccmp mailing list, and finally figured out how the AED 2200 Disk Controller and the Diablo 30 connect to the Unibus. So, with that fresh in mind, tonight I decided to de-rack the Diablo 30 and inspect it.
Tuesday, May 8, 2012 at 7:10 PM PDT
Big day today, restoration wise. I extended the 11/35 out on its rails until they locked, and flipped it up so I could get access to the underside. Like the top cover, the bottom cover of the 11/35 is missing - I assume ATARI got rid of it years before they scrapped the system. But the system modules (as the backplanes are called) seem to be relatively unscathed. The pins show signs of oxidation, but no severe corrosion. Electrical connections are probably OK, but I'll need to verify by tracing, once I've washed out the backplane slots thoroughly. Not tonight, however. For now, all I've done is remove the system modules and set them aside for future cleaning and testing.
Friday, May 4, 2012 at 7:04 PM PDT
More vacuuming, and started using distilled water and clean rags to wipe up surface dirt on the chassis, front panel, and cables.
To switch gears a little, I pulled the DSD-440 dual 8" floppy drive from the rack and cleaned the chassis with a soft damp rag. Upon opening it up I discovered that the interior is almost pristine! Very little dirt inside. This bodes well.
Thursday, May 3, 2012 at 7:00 PM PDT
More card pulling, but no washing. My main goal was just to get the chassis empty of cards, get all the cards photographed, and get them into antistatic bags. So far, only the first four cards have been washed. I think I'll probably end up finishing disassembly before I do any more washing, anyway.
Tuesday, May 1, 2012 at 7:01 PM PDT
My first night of restoration! I started by getting hundreds of photos of everything from every angle, so I could document the original state as best I could. I also made note of the system configuration before I started to touch anything.
Monday, April 30, 2012 at 6:56 PM PDT
Here's a little more information about the PDP-11/35.
It came to me with the following peripherals:
ECCO Paper Tape Reader
Diablo Series 30 disk drive – compatible with DEC RK05
Applied Engineering 2200 Disk Controller for the Diablo Series 30
Data System Designs DSD 440 floppy disks – compatible with DEC RX02
DEC H750 Power Supply
Sunday, April 29, 2012 at 4:00 PM PDT
On Sunday, April 28, 2012, I picked up a PDP-11/35 from a friend who had been storing it in a shed for many years. The system has been home to generations of spiders and mice, and needs more than just a little bit of TLC.
Monday, September 12, 2011 at 3:44 PM PDT
I've been doing this ham radio thing for just over a year now. Even though I don't talk very much, I still really enjoy listening and occasionally making long-distance (DX) contacts. One area of study I've been dragging my feet on for ages is learning Morse code. If you're not into ham radio, you may assume that Morse code died out with the telegraph, but the amateur radio bands are still alive with Morse code. Up until 2007, knowing "the code" was still an FCC requirement to obtain the highest level ham license, and before 1990 it was required of every license holder, regardless of level. Those of us who got our ham radio licenses after 2007 never had to experience the sweaty palms of an FCC Morse code exam!
Friday, July 1, 2011 at 1:29 PM PDT
I've recently been reading Brian Bagnall's book On the Edge: The Spectacular Rise and Fall of Commodore. It's a good read, if a little rough in places and perhaps in need of more judicious editing.
But while reading it, I've also kept some source materials by my side. Thanks to modern technology, I have my iPad with a collection of Byte magazine from the late 1970s sitting next to the book, and it's been fun to go find the original articles that Bagnall sourced for his work.
Wednesday, October 20, 2010 at 10:39 PM PDT
Well, I finally have my new call sign. It showed up in the FCC ULS database on Friday, and I'm still getting used to it. Say hello to NF6Q!
It definitely feels a bit weird to have a new call sign. But I'm pleased to have a call that's easier to say and much more friendly to DX.
Now I'm working on a new QSL card. I got a very stock card for KJ6HZC, so I'd like something a bit nicer for NF6Q.
Thursday, September 23, 2010 at 10:55 PM PDT
My home location is unfortunately very poorly suited to ham radio. That's a subject for another post, but suffice it to say that if I want to get on the air I need to go portable. A lot of the time I adore going out and setting up on a park bench, putting on the headphones, and making QSOs. But other times I'd like to be able to just turn on the radio and get on the air without all that setup and tear-down fuss, you know?
Wednesday, September 22, 2010 at 1:48 PM PDT
I spent a rather silly amount of time agonizing over what vanity call sign to pick. In the end, I chose a 2x1, and some backup 1x3's. I won't say yet what they are, but I should know in a few days which one I'm likely to get. To be honest any of them would be great, and they're all much less tongue-twisty than KJ6HZC is. Will I have a new call sign before Pacificon on October 15th? Maybe, but only three weeks to process a vanity application is pretty optimistic. Wish me luck!
Saturday, September 18, 2010 at 11:08 PM PDT
After a couple of weeks of solid study, I drove to the Saratoga Fire House this morning and took the Amateur Extra upgrade exam. Boy was I nervous! After all this time, I still sweat every test like a Freshman in college. But I didn't have to worry. I got all 50 questions correct, didn't miss a single one. So as of 10:05 AM this morning, I'm KJ6HZC/AE.
Friday, September 10, 2010 at 5:16 PM PDT
I have such a knack for ignoring this blog, don't I? Well, nothing like a new post to help me break out of the habit.
These past few months I've been having more and more fun with amateur radio. Why on Earth didn't I get into this sooner? Well, I'll be honest, when I was younger I didn't have any interest at all in ham radio. It's not that I was uninterested in it. It's more that it simply didn't enter my mind. I gave it no thought. So of course I wasn't a ham. It wasn't until I indulged in a desire to pick up a short-wave radio in 2008 that the idea popped into my head to check out this ham radio thing.
Friday, May 21, 2010 at 11:26 AM PDT
Last Saturday I drove down to the Saratoga Fire Station and took the FCC amateur radio license exam. Well, actually I took two of them; one for the Technician class, and another for the General class. On Wednesday, I got my call sign, KJ6HZC.
Sunday, December 20, 2009 at 4:10 PM PST
It has been almost five years since I maintained a weblog on anything like a regular basis. My attempts since then at making regular observations on day-to-day life have been spotty at best, and all have been aborted before they had a chance to grow.
I hope that this time, finally, I'll achieve the kind of momentum I need to keep writing. Will it work? Only time will tell.