With the recent revival of the thread about an LOS accelerator, I thought that it might be a good idea to have a quicker, easier, and cheaper way to prototype and test new Lisa hardware. And also a way for people to have a 100% authentic Lisa experience (no emulation) without having to buy or build an actual Lisa. So I figured I'd try and implement the entire machine in SystemVerilog on an FPGA.
I'm still pretty early into the project right now, so there's a decent chance of failure or it taking forever, but I figured I'd at least tell everybody about it.
So far, I've written SystemVerilog modules for the 512K RAM board and the CPU board. I'm in the process of testing and debugging them right now, and needless to say, there are still a LOT of problems to work out. But most of the timing logic (Page 2 of the CPU board schematic) seems great, and I just got the CPU to start fetching instructions from the ROM, although it's clear that it gets stuck pretty quickly. The video state machine seems mostly functional too, although I think there's some signal contention messing something up there. So it's just slow incremental progress, chipping away at each failure until I slowly get closer to a working CPU board. I think there are some serious problems with the MMU, so that one might take a little while.
I'm going to wait and do the I/O board once I get the CPU and RAM boards fully-working (or at least as fully-working as I can test without the I/O board). If I remember correctly, a Lisa without an I/O board should boot loop with some sort of error on the screen, so this should be a good enough configuration to validate minimal functionality of most of the CPU board and RAM hardware.
I'll keep you guys posted on my progress, or lack thereof!
Neat! I'm curious enough to press for more details --- answer whatever you feel like answering:
What FPGA are you using?
What 68k core are you using?
As one goal is to support developing new Lisa hardware (I'm imagining the situation where I want to develop a new expansion card), do you expect developers to prototype the hardware inside or outside of the FPGA? If the latter, how will you cross the +5V barrier?
Good luck!
Fantastic!
Please keep us updated!
Quote from: stepleton on September 05, 2025, 03:44:53 AM
What FPGA are you using?
I'm using a Xilinx Pynq-Z2 board with a Zynq 7020 chip on it, just because it's what I already had on hand. No particular reason, although I avoid anything from Altera at all costs because of how horrible Quartus is. Note that I haven't actually run the Lisa on the FPGA yet, short of synthesizing and programming it once just to make sure that my design is synthesizable. All of my testing is being done in simulation to save heaps and heaps of time.
Quote from: stepleton on September 05, 2025, 03:44:53 AM
What 68k core are you using?
The FX68K (https://github.com/ijor/fx68k) because it's supposed to be cycle-accurate, and a lot of the CPU board timings in the Lisa are centered around the timing states of the original processor. It gets clocked differently from the original 68K though, and I don't think its clock is in phase with the rest of the system right now.
Quote from: stepleton on September 05, 2025, 03:44:53 AM
As one goal is to support developing new Lisa hardware (I'm imagining the situation where I want to develop a new expansion card), do you expect developers to prototype the hardware inside or outside of the FPGA? If the latter, how will you cross the +5V barrier?
Right now, I'm picturing people designing their hardware inside the FPGA so that they can get a working prototype going way faster and cheaper than with real hardware. But once it comes time to convert that design into real hardware, I think bidirectional level shifters will be the way to go.
An update on my progress: I can now confirm that the Lisa is getting through the ROM checksum tests and the MMU tests and configuration routines in the boot ROM, all the way up to the point of enabling the MMU for access to RAM. The MMU even seems to translate its first RAM address properly, although it's just address 0, and a 0 could pop out in a variety of failure modes, so that doesn't really confirm a whole lot. But then when it tries to talk to RAM for the first time, it never gets a /DTACK and everything screws up. I've traced the problem down to /CAS getting inhibited when it shouldn't be, thanks to a slight timing discrepancy with the address strobe. Now I'm just having to figure out why the address strobe doesn't quite look the way that it's supposed to. I hope they weren't mistaken when they said the core was cycle-accurate; it's probably just a case of me screwing up the weird 2-phase clock that the core requires.
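For anyone curious what that 2-phase arrangement looks like: if I'm reading the core's interface right, it wants a single master clock plus two alternating one-cycle enables named enPhi1/enPhi2, so the effective CPU speed is half the enable-generator clock. A minimal sketch (not code from my project) of generating them:

    module fx68k_phases (
        input  logic clk,       // master clock feeding the FX68K core
        output logic enPhi1,    // phase-1 enable, high every other clk cycle
        output logic enPhi2     // phase-2 enable, high on the opposite cycles
    );
        logic phase = 1'b0;     // FPGA power-on initial value
        always_ff @(posedge clk) phase <= ~phase;
        assign enPhi1 = ~phase;
        assign enPhi2 =  phase;
    endmodule

Getting the resulting CPU rate to match the Lisa's 5MHz 68000, with the phases landing in the right relationship to the rest of the board's timing logic, is exactly where I suspect my bug is.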
Good news! After much debugging, according to my simulations, the FPGA-based Lisa is making it all the way to the point where it tries to talk to the I/O board, realizes it's not there, and shows an error code on the screen!
This means that much of the logic on the CPU board is confirmed working, including the entirety of the MMU, all the timing stuff, the system control latch, video address latch, bus error circuitry, and so on. There are only a few things on the CPU board that haven't been tested at this point in the boot process, like interrupts, the memory error address latch, and system status latch, but those are all really minor and easy to fix if broken.
And obviously the 512K memory board has to be pretty much fully-working to get to this point too, because the ROM has already done memory sizing, a full test of the 32K video page, and has stored constants in low RAM that it's clearly able to read back. I know that something's wrong with the RAM board's /HDER (hard memory error) signal and it's just stuck asserted all the time, but I've disabled it for now in order to get to the point of hopefully having something on the screen. It should be an easy fix when I get around to doing it.
Now it's just a matter of moving out of simulation and trying this on the actual FPGA, at which point I'll be able to see what's on the screen to confirm that it's actually displaying stuff like the instructions executing in the simulation are indicating. There's one roadblock keeping me from getting it onto the FPGA: Vivado is complaining about a double-driven net, which has me really confused because it's clearly not being double-driven. But once I figure that one out, it should be ready to put on the FPGA and test!
Once that's working, the only big step left before we have a completed Lisa is making and testing the I/O board, which should hopefully be easier given its relative simplicity compared to the CPU board. I believe there are preexisting Verilog cores for the 6522 VIA, COP400 microcontroller series, and 6502 (which can be used in place of the 6504), but I'm not sure about the 8530 SCC. I might have to implement that chip myself.
Wow! Reminds me of when Dr. Chandra revived HAL in 2010: Odyssey Two ;)
Another update: although things worked in the simulation, they don't quite work in real life.
When I hook the FPGA Lisa up to a display, it's clearly syncing properly, so that's at least something. But the rest of the system seems to be pretty dead. After hooking some virtual logic analyzer probes up to the FPGA, I think that most of my problems are stemming from signals being in undefined states at power-on. In the simulation, these mostly work themselves out after the CPU comes to life and starts executing code, but in the actual hardware, they cascade until about 50% of the signals that I'm probing get locked up.
As the fix, I'm working on improving the reset logic so that it puts all these signals into the proper initial states, in addition to the obvious stuff that it was already doing like resetting the CPU and clearing some counters.
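For those who don't write HDL: the pattern is just making sure every state element both gets a power-on initial value (which FPGA registers support, unlike real TTL) and is covered by the reset line, so hardware and simulation start out identical. A sketch of the shape, with hypothetical names:

    module vstate_reg (
        input  logic       clk16,        // master clock
        input  logic       reset_n,      // active-low system reset
        input  logic [3:0] vstate_next,  // next-state from combinational logic
        output logic [3:0] vstate        // e.g. a video state machine register
    );
        initial vstate = 4'd0;           // defined the moment the FPGA configures
        always_ff @(posedge clk16) begin
            if (!reset_n) vstate <= 4'd0;   // and forced again on system reset
            else          vstate <= vstate_next;
        end
    endmodule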
Unfortunately, the synthesis and implementation process in Vivado takes about 35 minutes for this design, so that's how long I have to wait to test things out each time I make a code change. It's not a fun process, so let's hope this reset thing is the only issue that arises...
More progress. After lots of work, we're making it through the ROM checksum test and MMU tests/configuration on the actual FPGA hardware now. So that means that much of the core CPU board logic is alive!
The less great news is that I've been stuck on some RAM problems for several days now. For some reason, the Lisa just isn't capable of writing to the RAM. It always reads back a zero on the actual hardware, despite working fine in simulation, and I've tried tons of things to fix this, all to no avail. It's putting out the right address and asserting all the correct control and selection signals at the right times, but the write just doesn't "stick" for some reason. I tried testing part of the RAM subsystem in a small SystemVerilog module that does nothing but write stuff into memory and read it back, and it worked fine there, so it's got to be something about its integration into the greater Lisa system. It's just a question of what. I'll keep working and let everyone know once I've figured it out!
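For anyone who wants to picture it, that standalone test is shaped roughly like this (a sketch from memory; ram_board and its ports are stand-ins, not my actual module names):

    module ram_smoke_tb;
        logic        clk = 0;
        logic        we  = 0;
        logic [17:0] addr;
        logic [15:0] wdata, rdata;

        always #31 clk = ~clk;   // roughly 16MHz

        ram_board dut (.clk(clk), .we(we), .addr(addr),
                       .wdata(wdata), .rdata(rdata));

        initial begin
            // Write a recognizable pattern to the first 16 words...
            for (int i = 0; i < 16; i++) begin
                @(negedge clk) addr = i[17:0]; wdata = 16'hA500 | i[15:0]; we = 1;
                @(negedge clk) we = 0;
            end
            // ...then read everything back and flag mismatches.
            for (int i = 0; i < 16; i++) begin
                @(negedge clk) addr = i[17:0];
                @(negedge clk) if (rdata !== (16'hA500 | i[15:0]))
                    $display("mismatch at %0d: got %h", i, rdata);
            end
            $finish;
        end
    endmodule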
Thanks for your work!
I wish I could help, but that is way out of my skill sets...
It's not pretty, but that is indisputably a CPU board error 41 on the screen. Which is exactly what we'd expect to get from a fully-functional yet I/O board-less Lisa!
Now I just need to figure out why it looks so terrible...
Wow, this is big news, and thank you for your efforts Alex! I can't wait to own one, and I will want one.
Quote from: AlexTheCat123 on September 19, 2025, 02:51:00 PM
It's not pretty, but that is indisputably a CPU board error 41 on the screen. Which is exactly what we'd expect to get from a fully-functional yet I/O board-less Lisa!
Now I just need to figure out why it looks so terrible...
Incredible! Where do you find the time??
Nice work!
Quote from: classiccomputing on September 19, 2025, 06:37:41 PM
Wow, this is big news, and thank you for your efforts Alex! I can't wait to own one, and I will want one.
Progress has been great so far, but it's probably going to be a while before we get to the point of other people being able to build one! Getting I/O working, and deciding how much I want to modernize that stuff as opposed to requiring original Lisa hardware (like adding native support for USB keyboards/mice versus making the user plug in Lisa originals), is what I anticipate being the hard part.
For instance, I'm working on piping the Lisa's video output straight through HDMI right now, and that's proving to be a bit weird/challenging in several ways. But it'll be nice once I succeed because then we'll no longer have a need for an RGBtoHDMI, and the Lisa's contrast control can be simulated by simply adjusting the intensity of the white sent over the HDMI link.
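The contrast trick itself is conceptually simple; it boils down to something like this sketch (the latch width and the active-high polarity are assumptions, since I haven't nailed down my implementation yet):

    module vid_to_rgb (
        input  logic       vid,        // 1-bit Lisa video, assumed active-high
        input  logic [3:0] contrast,   // contrast latch value (width assumed),
                                       // already flipped so bigger = brighter
        output logic [7:0] r, g, b     // 8-bit channels toward the HDMI encoder
    );
        wire [7:0] white = {contrast, contrast};  // replicate 4 bits up to 8
        assign {r, g, b} = vid ? {3{white}} : 24'h0;
    endmodule

In other words, the contrast control just becomes a scaling of the white level, with black pixels staying at zero.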
Quote from: jamesdenton on September 19, 2025, 08:48:17 PM
Incredible! Where do you find the time??
Just working on it whenever I'm done with school and homework for the day, and as much as I can on weekends. The PhD program keeps me busy, but luckily I've still got enough spare time to fit in a couple things like this!
Thank you so much !
The video problems have been fixed; it turns out that they were being caused by a RAM timing issue where video reads from RAM were occasionally causing addresses adjacent to the current read address to be overwritten with random garbage. This one was really subtle and took a long time to figure out!
I've moved on to the I/O board now, and I've finished the initial design of the whole thing other than Page 2 of the original schematics, which contains the keyboard VIA and the COP. Since the COP core I found is in VHDL, whereas the rest of my code is in SystemVerilog, and I'm a bit nervous about integrating the two, I think I'm going to implement everything but the COP and then come back for that later. We should still get waayyy further in the boot process even in its absence.
Some people might be wondering what cores I'm using for the 6504, 6522s, 8530 SCC, and the (yet to be integrated) COP. Well, no 6504 cores seem to exist, so I'm using this (http://www.aholme.co.uk/6502/Main.htm) transistor-accurate 6502 core instead. As for the 6522 and 8530, I'm using the cores from the NanoMac project (https://github.com/MiSTle-Dev/NanoMac/). The 6522 core seems really accurate and fully-featured, but the SCC core leaves a lot to be desired; it seems to implement just the bare minimum to get the Mac to work and leaves out a lot of I/O lines used by the Lisa. But it's the only 8530 that I could find, and it should at least get us booting. And last but not least, the COP. I'll be using the T400 (https://github.com/devsaurus/t400) core, which as I said is written in VHDL instead of Verilog; hopefully that won't cause too many headaches. It also has a really weird/unpleasant way of reading in the ROM (you have to run a script that generates a VHDL file that's essentially a massive case statement for each ROM address instead of just using $readmemh) and some weird platform-specific scripts that you have to run, so I'm hoping that none of that causes a problem.
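For comparison, this is the kind of ROM description I mean by "just using $readmemh" (a generic sketch sized for a 1K x 8 COP ROM as an assumption, not the T400's actual interface):

    module cop_rom (
        input  logic [9:0] addr,   // 1K x 8 ROM
        output logic [7:0] data
    );
        logic [7:0] mem [0:1023];
        initial $readmemh("cop_rom.hex", mem);  // hex dump, one byte per line
        assign data = mem[addr];
    endmodule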
I should be able to start testing the I/O board in simulation by tonight or tomorrow, and then in actual hardware whenever I get all the simulation kinks worked out!
The 6504 is a 6502 with some pins not connected. Both parts use the same silicon die. So you can use the 6502 core and ignore the upper address lines.
Quote from: patrick on September 25, 2025, 03:15:20 PM
The 6504 is a 6502 with some pins not connected. Both parts use the same silicon die. So you can use the 6502 core and ignore the upper address lines.
Yep, that's exactly what I'm doing!
Time for another progress update!
I'm in the process of testing and working out the kinks on the I/O board, and it's getting pretty far in its series of tests.
The boot ROM actually does a decent bit of testing on the I/O board before it even puts up the "Testing..." screen, with only a relatively small amount of testing happening during the "I/O board test" that you actually see occurring after the RAM test completes. Once it gets through that initial I/O board test, it actually goes back and does some additional CPU board tests (this is what's happening when the CPU icon is highlighted on the Testing... screen), and this exposed a few more minor problems with the CPU board: namely, the "write wrong parity" and vertical sync interrupt circuits weren't working right. Those have now been fixed, and I think it's safe to say that the CPU board is fully functional.
It also completes the long RAM test just fine (the one that you see on the Testing... screen), so that's some additional confirmation that memory addressing and parity checking are working fine!
As for the I/O board itself, both VIAs seem to be working flawlessly, and the COP seems to be executing code and putting out its ready signal like it's supposed to. The boot ROM isn't making it to the in-depth COP test yet (that's the very last test that it does on the I/O board), but the preliminary test at least looks good.
Communications with the SCC seem to work fine too, although the SCC core I found from the NanoMac project is insanely limited, to the point that it doesn't even support the internal loopback mode that the Lisa uses during the self-test. For now, I've just patched the loopback test out of the boot ROM, and I'll come back and improve the SCC core later. Reading and writing all the SCC registers seems to work great though!
The only two tests left in the I/O board phase of testing are the floppy controller and extended COP tests, and it's stuck on the floppy controller right now. Up until yesterday, the floppy controller was actually completely dead, but I've got it in a much better state now. It's now happy enough that it puts its ROM revision in the shared RAM for the 68000 to read, and the 68K reads back the A8 revision just fine. And it seems to be able to address and control all the floppy drive control signals as well. I'm not 100% sure if the LS323/P6A PROM/LS174 state machine is working the way it's supposed to, but it at least seems to be running and doing something.
The current issue with the floppy controller is that it's failing its initial self-test, which the boot ROM notices when it reads the test results from the shared RAM, causing it to abort testing at that point. Luckily, I know why it's failing: for some reason, the floppy controller thinks a drive is connected, even though there isn't one, and so it tries to seek the heads back to track 0. But even after sending the seek command 80 times, the track 0 indicator bit still doesn't turn on (because there's no drive connected to turn it on), so the floppy controller thinks that either the drive or controller is defective and sets an error bit in the self-test result byte. So I just need to figure out why it thinks there's a drive connected when there's actually not.
After that, it's just the COP test, and then the I/O board should be done (aside from the SCC fix, of course). The only thing left in the boot ROM's self-test after that is the scanning of the expansion slots for cards, but I don't have any cards "inserted", so it shouldn't detect anything and will hopefully just breeze through that test.
At that point, we should be at the boot menu, where my next step will be getting a keyboard and mouse hooked up and interfacing with the COP so I can actually control things. I think I might try to hook up a USB keyboard and mouse, or PS/2 at the very least. And after that, I think connecting a ProFile would be a good idea. Not sure if this is even a thing, but if an ESP32 Verilog/VHDL core exists, I might even be able to integrate the functionality of ESProFile straight into the FPGA; no external drive or emulator needed!
Okay, I'm very nearly to the point where I can get to the boot menu consistently!
In fact, the FDC and extended COP tests work great in simulation and I'm able to get to the menu there every time just fine, but I'm still having a minor problem on the actual hardware, although it's proving to be pretty annoying.
When running on the actual FPGA, sometimes the FDC RAM gets corrupted and causes the Lisa to fail the self-test with an Error 57, and other times it passes just fine and makes it to the "no keyboard connected" and then Startup From... menu. It looks like the problem has to do with setup time issues with the addresses going to the floppy controller RAM, which of course only show up in actual hardware where you've got propagation delays to worry about. I've tried some strategies to fix the address issue (like delaying the RAM CS signal so that it wouldn't arrive until a 16MHz clock cycle later after things are stable), which helped, but then that has introduced weird edge cases where things completely break if the 68K tries to access the floppy controller RAM at the wrong time and interrupts the delayed CS pulse from the 6504. So I'm now trying out another solution, which will hopefully do the job a bit better. I'm super close though!
It must be an inspiring sight to see it go through the pre-boot hardware check and then come up with the "no keyboard connected"!
I don't have much to offer to help along this journey, but every time I see that there's another update to this thread I get pretty excited. Go Alex go!
Still getting the "no keyboard connected" icon, but the good news is that the mouse is now working!
I hooked up a Macintosh mouse, and it works! FPGAs aren't 5V-tolerant, but the Mac mouse requires 5V since it's got a TTL logic chip inside, so I was pretty worried about what to do there, given that I'd rather not add any additional complexity in the form of level shifters until I get to the point of designing a custom PCB. But luckily, I discovered that certain mice would actually function fine on 3.3V, although it took about 5 or 6 mice before I found one that would.
I was initially trying to hook up a USB mouse, only to discover that my FPGA dev board doesn't actually have a USB host controller on it, just a USB line driver. Given that my FPGA (a ZYNQ 7020) has an ARM core inside it, I think they were expecting that to handle the USB protocol, but obviously we're not even using that here at all! And I don't feel like implementing the entire USB protocol from the ground up in Verilog, so USB support will probably just need to wait until I make a custom PCB with a host controller.
I've plugged up the keyboard now too, although it's not working yet. Well actually it's a USB to Lisa keyboard adapter because believe it or not, despite having three Lisas, I somehow don't have an actual Lisa keyboard! But regardless, something's keeping it from working. Not sure what though; I've probed the line with a scope and the COP is clearly sending out sync pulses that the keyboard responds to with data whenever I press a key. And the keyboard sends a reset packet too whenever it powers up, so it doesn't look like the issue is anything with the physical line itself.
I'm not sure how tight the timings have to be on the keyboard signals (maybe @patrick would know thanks to his reverse-engineering of the protocol), but my current suspicion is that the COP's clock is slightly off and it's not quite in step with the signals coming from the keyboard. The sync pulses generated by the COP are supposed to be 20us but are more like 13us in my case, so I think there might be something to this theory. The PLL inside the FPGA isn't producing exactly 3.9MHz like the COP expects, although it's only off by a little and I figured that it wouldn't be enough to matter. But to test my theory, I'm going to expose the COP's clock to an I/O pin and feed it from a function generator so I can fine-tune the clock and see if I can get that sync pulse tuned to exactly 20us. Then it should hopefully be perfectly in sync with the timings it expects from the keyboard, and maybe then it'll actually detect it properly!
Still having intermittent issues with the floppy controller, but I want to get the keyboard and mouse working before I go any further with that so I can go into service mode and write bytes straight into the FDC's shared RAM to try and diagnose the problem.
I occasionally would have delays or issues with my USB to keyboard adapter. Typically, right after a power on. Might've been my individual keyboard or equipment though so consider it anecdotal.
Luckily, it wasn't a problem with the keyboard adapter! It was a combination of accidentally having the wrong clock divider value set for the COP (/8 instead of /16) and forgetting to gate the output of the keyboard reset signal with the VIA's DDR so that it's not stuck low the entire time the computer is in reset. So now we have working keyboard and mouse, and I can type stuff into Service Mode!
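In case it helps anyone else doing a VIA in HDL: the reset line has to behave open-drain style, driven only when the data direction register actually configures the pin as an output. A sketch of the gating, with hypothetical signal names:

    module kbd_reset_gate (
        input  logic via_ddr_bit,  // 1 = VIA pin configured as an output
        input  logic via_out_bit,  // the VIA output register bit
        output logic kbd_rst_n     // active-low keyboard reset line
    );
        // Drive low only when the pin is an output with a 0 in the
        // output register; otherwise release the line (it's pulled
        // high externally).
        assign kbd_rst_n = (via_ddr_bit && !via_out_bit) ? 1'b0 : 1'b1;
    endmodule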
I'm tempted to just go ahead and try to hook up an ESProFile to see if I can get anything to boot. I know the floppy controller's not quite healthy, but at least the Selector should start up assuming everything else is working. Luckily, the ESP32 uses 3.3V logic levels to begin with, so no need to worry about any level shifting there!
ESProFile is hooked up and somewhat working! I can get into the Selector just fine, and into BLU on most attempts (although it fails its self-check, presumably because of the SCC or floppy controller), so that's a start. And thanks to the serial number reading feature in BLU, I can finally confirm that the time-sensitive SN logic seems to be working okay!
Unfortunately though, it seems that the Lisa doesn't really like ESProFile for some reason. Or maybe it's a timing thing inside the FPGA, but results seem better with a real Widget, so I tend to think it's an ESProFile thing. Which means that I can't really get any other environments to show any signs of life because of disk read errors before they can load more than a few blocks off the ProFile. MacWorks Plus gets as far as clearing the screen to grey before crashing, but that's the farthest we get, aside from LOS.
It honestly shocked me that LOS made it the furthest of them all, but it did! Not to the "welcome to LOS" screen or anything, but it clearly loaded for a few seconds before disk erroring, and gave a 10726 (boot device read failed) error when it finally did fail. The others just hung or gave an error 75 from the boot ROM after reading a block or two, so LOS got quite far by comparison.
So then I decided to try and plug in a real Widget with LOS 2.0 installed to see if I'd have any better luck with a real drive. And I sure did! This time, it loaded for a good 10-15 seconds before giving a 10727, which means that the loader exhausted all the system's memory and had to give up. This makes a lot of sense; my Lisa currently only has 256K of RAM installed in it, which very likely isn't enough for LOS!
I'm just realizing as I type this that I plugged the Widget into the FPGA without level shifters. Whoops. At least it didn't fry anything.
The only big peripheral left to hook up is the floppy drive, but obviously I need a working floppy controller before I can do that! And given that I designed this thing around the 2/5 I/O board so that people can hook Twiggies to it, I guess I need to do the Lite adapter too, although that should be super easy. So I think I'm going to go back and finally get the FDC fixed up now. I could see this being really perplexing and taking a while, so don't be shocked if the next update isn't for several days or a week.
I'm noticing that (aside from the garbled CPU board error 41 picture) I haven't really provided much proof that anything I've been saying is actually true up until this point, so I've attached some photo evidence of all this too!
Okay, the floppy controller interfacing problem is fixed now! I ended up adding some delay logic that keeps the 6504's clock paused for a little while after a 68K access to shared RAM ends, which gives some other logic enough time to raise and then lower the FDC RAM CS signal again to re-latch the RAM address that the 6504 was accessing before the cycle got interrupted by the 68K. It was a pretty convoluted fix, but no more Error 57 or hanging!
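Roughly, the fix looks like this (names and the hold-off length are illustrative, not my exact code): a small counter holds the 6504's clock enable low for a few 16MHz cycles after a 68K shared-RAM cycle ends.

    module fdc_clk_pause (
        input  logic clk16,            // 16MHz reference clock
        input  logic access_68k_end,   // pulses when a 68K shared-RAM cycle ends
        output logic en_6504           // clock enable for the 6504 core
    );
        logic [2:0] hold = '0;
        always_ff @(posedge clk16) begin
            if (access_68k_end) hold <= 3'd4;        // hold-off length is illustrative
            else if (hold != 0) hold <= hold - 3'd1;
        end
        assign en_6504 = (hold == 0);  // 6504 only steps once the re-latch settles
    endmodule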
I also implemented the Lite Adapter, which was super easy. It just compares a constantly-incrementing counter to the value in a shift register (which represents the PWM duty cycle) and sets or clears the PWM output signal depending on whether the counter is greater than or less than the shift register value.
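In HDL, the whole adapter really is just that comparator; something like this (widths assumed to match the original's period):

    module lite_pwm (
        input  logic       clk,
        input  logic [7:0] duty,     // value from the shift register
        output logic       pwm_out   // speed-control signal to the drive
    );
        logic [7:0] counter = '0;
        always_ff @(posedge clk) counter <= counter + 8'd1;  // free-running, wraps
        assign pwm_out = (counter < duty);  // high for 'duty' counts per period
    endmodule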
Now that the FDC seems to be working and fully implemented, I've hooked up a Floppy Emu, and unfortunately we can't quite boot from floppy yet. But we're really close. It can clearly detect whether or not there's a disk inserted, and I've confirmed that seeking and ejecting the disk work perfectly by sending those commands to the FDC manually in Service Mode. The one thing that's not working is reading and (presumably) writing, which is obviously a pretty big problem! Whenever you try to read a sector (either manually in Service Mode or automatically by booting from the disk), the FDC just sits there trying to read the sector forever. This makes me think that something's wrong with the PROM state machine, especially since the only two things I've ever seen it put onto the 6504's data bus are 0 and 1, which obviously doesn't seem right when reading sectors off a disk. So that's what I'm about to start troubleshooting next; hopefully it's not too hard of a fix!
After reading some more about the theory of the Apple ][ floppy state machine (which is basically identical to the Lisa's) and tracing through the states, I discovered that I had wired one of the pins on the ROM to the QH pin of the shift register instead of the QA pin, causing it to clear the shift register every time it shifted in a bit instead of only after a full byte was shifted in. And after fixing that, I can boot from floppy! Here's the status of booting various things from floppy; I'm assuming anything that's failing is failing because of my tiny 256K of RAM:
BLU - Works
Selector - Works
NeoWidEx - Works
LisaMandelbrot - Works
MacWorks XL - White screen, floppy activity for a while and then hangs
MacWorks Plus - Gets about halfway through the Loading... screen and then hangs
MacWorks Plus II - Gives error about PFG since we don't have a PFG installed, if we choose to continue it loads about half the sectors from the floppy before hanging
UniPlus - Immediate error 75, I think my UniPlus floppy might just be corrupted
Xenix - Gets to the screen where it prints the Lisa system type, expansion slot contents, and free RAM, then kernel panics
GEM - Can boot to the command line, starting the GUI causes it to hang midway through loading
LisaTest - Error 49 (Line 1010 or 1111 trap)
LOS/Workshop - Error 10727 (Memory exhausted)
I don't have enough room in the FPGA to go to 512K of RAM, and I can't move the RAM to my board's external SPI flash because the write speed is too slow. My board has a DDR3 RAM chip on it too, but I really don't feel like interfacing with that, and I'm worried the latency would be too high anyway. So I think it might be time to design a custom board with an external parallel (or maybe SPI) SRAM before I continue. Unless anyone's got any better ideas (which I would love to hear), I think this will be my next step, and so it'll probably be a little while before another progress update. I guess I can go ahead and try to fix the intermittent ProFile read problems that I was having, but that's about all that I can do with the current version of the hardware.
Fantastic work!
LisaMandelbrot? I will have to look into that.
Amazing progress!
LisaMandelbrot is a set of Mandelbrot set plotters that I wrote a while ago. You can find it here: https://codeberg.org/stepleton/LisaMandelbrot
That page looks sparse because there are three varieties: "Port" which runs on the Office System, "Pro" which runs in the Workshop (no extra features, it's just that the workshop seems like the place for "pros"), and "Solo" which is a standalone program (i.e. boots and runs without an OS). Solo is quite small. Anyway, click through to any of the three to see more detailed information.
My standalone programs (including the Selector) don't really exercise all that much of the Lisa's capabilities; for a start, they all leave the MMU in the boot-up "flat" configuration. So it's not a big surprise that they run. For this reason I wouldn't think there's much point in running my stunt standalone Forth port (https://codeberg.org/stepleton/lisa-fig68k) for Lisa, as it is just as gentle on the machine (the Forth part can't even use RAM above 64k!).
Alex's SRAM idea touches on a project I was thinking of attempting but was in my project pile: the Smallest 2Meg RAM Card On Earth. I had a vision of an SRAM-based RAM card that would be comically small inside of the card cage, and I'd even had my eye on this single, somewhat pricy RAM chip (https://www.mouser.co.uk/ProductDetail/727-CY62167ELL-45ZXI), with its 16-bit data bus and (IIUC) +5V tolerance. But I hadn't started to sweat the details yet, and realistically it is a project that would have sat on a back burner for months at least. All of which is to say: Alex, if you wanted to stretch your SRAM plans slightly to make the Smallest 2Meg RAM Card On Earth, it would be pretty cool, and maybe that IC is worth knowing about :)
Ha, it's interesting you mention the 2MB RAM card thing; I was just thinking about doing something like that! All the RAM card logic I've written would easily fit into a small CPLD, so throwing the CPLD, some level shifters, and the SRAM onto a tiny little board would work nicely. And funnily enough, that SRAM chip is the exact one that I've been eyeing for the FPGA!
The COP core I'm using seems to also be small enough to fit into a CPLD, so a CPLD-based COP replacement could be another interesting idea...
(all out of curiosity)
Are the level shifters for the CPLD? That SRAM itself seems to accept and send TTL-compatible signals, though the outputs won't go up to a full +5V.
What kind of things does the CPLD do? I only scanned the Lisa Hardware Manual about this once, but I took away the impression that some of the work of the support logic was dealing with placing the RAM board in the appropriate part of the address space. (As the SRAM is a full 2 MB, the correct positioning is straightforward: it takes up all of it.) Furthermore, there's no need for DRAM refresh, so maybe that simplifies things too.
I suppose I'd hoped therefore that you might be able to reduce necessary logic down to a few surface-mount TTL ICs. You might put them on both sides of the board to achieve more compactness, though it would be better to avoid that so that you can use a hotplate for assembly.
But I assume there is plenty of remaining devil in various timing details for RAM reads/writes, etc.
Meanwhile, I was always wondering if you could replace the COP with a suitably busy Arduino (with enough pins)...
Quote from: stepleton on October 15, 2025, 03:48:47 AM
Amazing progress!
Indeed!
Quote
vision of an SRAM-based RAM card that would be comically small inside of the card cage, and I'd even had my eye on this single, somewhat pricy RAM chip (Infineon CY62167ELL-45ZXI), with its 16-bit data bus
...
But I hadn't started to sweat the details yet
I suspect parity could become a sweaty consideration, so I suggest sorting out how you will handle the parity issue before finalizing your part requirements.
The issue is that the parity test circuitry allows storing bad parity in multiple byte addresses for discovery at an indeterminate time.
IIRC, the POST sets bad parity at only one address, so rather than storing parity, it is possible to record that address and report the parity error when it is read again, but if some software (LisaTest maybe?) does a more complicated parity circuit test, it may discover your secret of circumventing stored parity bits.
Modifying the CPU ROM to remove the parity circuit check may be sufficient for most operation, but some purists prefer LisaTest success, and I don't know if anything else checks the parity circuits, so ymmv etc.
Quote from: AlexTheCat123 on October 14, 2025, 04:42:15 PM
MacWorks Plus II - Gives error about PFG since we don't have a PFG installed
Since you don't have a fully functional SCC module, and possibly don't want to develop one (in particular with the SDLC (or whatever it is) complication that makes LocalTalk possible), you might consider using a real SCC, which then makes plugging in a real PFG an option. And/or we can figure out a strategy for implementing any desirable features of the PFG without the real hardware if the SCC can be adequately emulated.
But this is just one aspect of what the final result might be... now that you've very well established a proof of concept, it might be time to brainstorm the final objectives...
For example, is it, or some version of it, going to:
- completely replace real Lisa hardware including card cage, video, chassis, specific expansion cards, while not supporting arbitrary real expansion cards?
- replace the CPU and memory boards in a real Lisa card cage/chassis that uses real motherboard, video, I/O and expansion cards?
- replace CPU, memory, and I/O boards in motherboard form-factor so real expansion cards can be inserted in a real chassis?
Lots of options to consider that have their benefits and drawbacks... some of the challenges of being the project manager!
Yeah, I know it would be some pretty simple logic, but the purpose of the CPLD would be consolidating the small amount of TTL logic that would be required, as well as handling parity. It could be done in TTL too, but I figured that a CPLD would probably be more compact and inexpensive. I was thinking about doing @sigma7's strategy of remembering the one address that the boot ROM does the "write wrong parity" test on, but it's true that we don't really know what else may use this feature. It would be easy enough to find out though; just boot each Lisa OS with the FPGA-based Lisa set to trigger on accesses to the appropriate address in the system control latch.
Unlike the SRAM card (if and when I get around to making that), I want to actually handle parity the proper way in the FPGA-based Lisa. So I was thinking about using the external SRAM for the main memory and then still keeping the parity inside the FPGA's block RAM.
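Concretely, I'm picturing something like this sketch (widths, depth, and the odd-parity choice are all assumptions): the external SRAM holds the 16-bit data while a narrow block RAM holds one parity bit per byte at the same word address, with a hook for the "write wrong parity" control.

    module parity_ram (
        input  logic        clk,
        input  logic        we,          // asserted on memory writes
        input  logic        force_bad,   // the "write wrong parity" control
        input  logic [19:0] addr,        // 1M x 16 words, matching a 2MB SRAM
        input  logic [15:0] wdata,
        output logic [1:0]  rd_parity    // stored parity bits for the read word
    );
        logic [1:0] mem [0:1048575];     // 2Mbit total
        wire  [1:0] wp = { ~^wdata[15:8], ~^wdata[7:0] };  // odd parity per byte
        always_ff @(posedge clk) begin
            if (we) mem[addr] <= force_bad ? ~wp : wp;
            rd_parity <= mem[addr];      // registered read, infers block RAM
        end
    endmodule

Two bits across 1M words is 2Mbit, which should still fit in the 7020's roughly 4.9Mbit of block RAM.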
Quote from: sigma7 on October 15, 2025, 07:07:08 PM
Since you don't have a fully functional SCC module, and possibly don't want to develop one (in particular with the SDLC (or whatever it is) complication that makes LocalTalk possible), you might consider using a real SCC, which then makes plugging in a real PFG an option. And/or we can figure out a strategy for implementing any desirable features of the PFG without the real hardware if the SCC can be adequately emulated.
Yeah, that's a good idea that I hadn't really thought about before! I had previously planned on (and really, REALLY dreaded) implementing the SCC myself later on, but this might be a better (even if temporary) solution.
I've already implemented the PFG in an ESP32, and I've been imagining that implementing the same state machine inside an FPGA would be even easier, so my current plan is to (at least for my own personal use) make a Verilog version of the PFG. And same for the XLerator and LSAC too. Maybe we could work out a deal where @sigma7 could sell those as IP cores that people could add to their own FPGA Lisas without having access to the source code?
Quote from: sigma7 on October 15, 2025, 07:07:08 PM
But this is just one aspect of what the final result might be... now that you've very well established a proof of concept, it might be time to brainstorm the final objectives...
I'm currently imagining two different versions of the final device:
1. A modernized standalone version of the Lisa. This will be a board that sits on your desk with HDMI video output, USB (or possibly PS/2) keyboard and mouse input, built-in ESProFile hard drive emulation, USB to serial feeding directly into the Lisa's serial ports, and perhaps built-in floppy emulation if I end up writing a floppy emulator. It would also let you plug in original keyboards, mice, hard/floppy drives, and would expose the original Lisa video signal for those who want it.
2. A version in the form factor of the Lisa motherboard. This would be a drop-in replacement that people could stick straight into their Lisas to replace a bad card cage, and it would include expansion slots too, on top of all the original ports that you'd expect.
Both versions would probably have switches that would let you flip between H and 3A as well as 40 and A8 ROMs on the fly.
Option #1 is the first priority that I'm just starting to mess around with now, with #2 coming later.
It's been a while, time for an update!
I'm getting pretty close to finishing the schematic for the PCB, and then I'll be onto the layout and routing phase. The whole schematic is done other than the connections of everything to the FPGA itself. Hopefully that shouldn't be too hard; I just need to look at the datasheet and make sure I connect certain Lisa signals to certain special-purpose I/O pins.
By the way, I've settled on the Xilinx Artix 7-100T as the FPGA of choice for the final board. It's pretty darn cheap at about $20, has plenty of LUTs, and has between 200 and 400 I/O pins depending on which package you get it in. I'm currently using a little over 200 I/O pins, and this board doesn't even have expansion slots on it (which will add even more to that), so the 300-I/O version is probably going to be the chip of choice.
The key features of the board are:
- Power over either USB-C or barrel jack; 12V, -12V, and -5V (along with the 1V, 1.8V, and 3.3V FPGA voltages) are all generated onboard.
- USB port goes into an onboard hub chip, which feeds four devices: JTAG for FPGA programming and debugging, an onboard ESProFile, an onboard (yet-to-be-coded and I'm not even sure if I'll ever get it to work) floppy emulator, and a USB to serial chip that can directly feed the Lisa's Serial B port.
- As just mentioned, an onboard ESProFile and ESP32-based floppy emulator. There's a Floppy Emu-style OLED display and set of buttons for controlling floppy emulation, although once again none of that is implemented yet. And of course there are switches to choose whether you want to use the hard/floppy emulators or actual drives plugged into ports on the board.
- Switches to toggle between H and 3A CPU ROMs (and the corresponding VSROMs) and A8 and 40 I/O ROMs on the fly.
- The first version won't, but future versions will have Twiggy headers for the lucky few of you who happen to own a set of Twiggy drives!
- Onboard speaker (with the Lisa's 3-bit volume control) for Lisa audio, and external speaker header if you want to connect a larger one.
- Onboard contrast latch DAC in case you want to use the analog contrast signal for anything.
- HDMI video output. The VSYNC, HSYNC, and VID signals are also exposed on a header for easy connection of an RGBtoHDMI if you prefer that.
- USB keyboard and mouse input. As with the ProFile and floppy interfaces, these are switchable, so you can still plug in original Lisa keyboards and mice and use those too.
- Per James' suggestion, the Lisa's serial ports are implemented using a real 8530 SCC that you'll have to pop into a socket on the board once you receive it. But all the SCC's serial I/O pins are also bidirectionally level-shifted and run back to the FPGA so that a future version of the firmware can implement the SCC internally without requiring any changes to the PCB. The FPGA would just enable the level shifters, and that would take control from the external SCC.
- And as I mentioned earlier, Serial B is switchable between the actual DB25 port and a direct USB to serial interface with your computer, making it really easy to do things like transferring over all the LOS source code without needing a USB to 9-pin serial adapter and then a 9-pin to 25-pin serial cable.
There's no way that this thing will even come close to working on the first try, but hopefully I'll be able to figure out all the problems reasonably easily after getting a prototype run of boards in the mail.
As with some of my other recent designs, I've made sure that every part I'm sourcing is from JLCPCB/LCSC's parts library, so they'll be able to assemble the whole board for you when you order one. Which is pretty crucial given how much surface-mount stuff there is on here! The parts cost is currently looking to be a little over $100 per board, but I'm hoping to get it a little lower than that on the next board revision when some of the debugging hardware is removed.
It's still probably going to be a couple weeks before the layout and routing are done, but at least finishing up the schematic means that I'm about halfway there now!
That is astonishing. Congratulations, Alex! Looking forward to seeing it in the real world.
I am stunned fantastic work thank you!!
Getting pretty close now!
The whole board is laid out and most stuff is routed too, with the exception of the traces to/from the FPGA itself. All the individual subsystems and power are completely hooked up though, and you might be able to see that a few things (the config flash, JTAG, differential pairs for USB and HDMI, and half the SRAM) are routed to the FPGA at this point. And all the pins on the BGA are fanned out (which was an absolute nightmare), so it's just a matter of connecting everything to them now. Although I'm anticipating that being a pretty big challenge given how dense everything is and the fact that I'm using literally every single I/O pin on the FPGA.
In case anyone's wondering, I decided on a 6-layer board with planes for 3.3V and ground, and the other 4 being signal layers. I didn't bother making power planes for 5V, 1V, 1.8V, or any of the other voltages because they're used so sparsely that it was just more efficient to route them on one of the internal signal layers.
Out of curiosity, I threw the design into JLCPCB to see how much it would cost, and fabricating 5 bare boards comes out to about $60. When you add their assembly service to that and ask them to assemble 2 of the 5 boards, the total price comes up to $438. About $200 of that is parts ($100 per board) and the rest is labor. So people probably won't want to order a set of boards unless they're buying several and selling the ones they don't use themselves. I checked the price for ordering and assembling 100 of them, and it comes out to about $8000, so you really do save a lot when you buy in bulk. That's only $80 per fully-assembled board!
I've attached a rendering of the PCB so you can get an idea of what it'll look like. The layout is completely finalized; the only thing that should change from here is the routing of traces going to/from the FPGA.
Impressive and certainly a project for the advanced home soldering enthusiast.
I can see the floppy emulator you've mentioned before. But I also notice the wall of capacitors to keep out the riff-raff ;-)
It's interesting to see all the DC-DC conversion on the board; I'm guessing that using off-the-shelf switching regulators is pricier?
I've never used an assembly service (but would have to in this case). Would there be some savings possible if you just used them for the surface-mount parts and left through-hole parts for the buyer to sort out?
Really nice. I wonder how easy it would be to replace the innards of a Lisa with this, so that from the outside all the original ports would appear in the normal places. Seems like it would require a bit of snaking some extension cables around inside the case, and a tiny power strip where the power supply usually is to accommodate both the board and whatever is powering the CRT.
Quote from: stepleton on November 05, 2025, 08:13:16 PM
I can see the floppy emulator you've mentioned before. But I also notice the wall of capacitors to keep out the riff-raff ;-)
No guarantees that I'll get the floppy emulation working, but I'll sure try my best! And yeah, there's an insane number of caps on this board. There are another 30 or 40 that you can't see on the bottom underneath the FPGA too.
Quote from: stepleton on November 05, 2025, 08:13:16 PM
It's interesting to see all the DC-DC conversion on the board; I'm guessing that using off-the-shelf switching regulators is pricier?
Yeah, that's the way it seemed. If you're just doing a single voltage rail, then a switching regulator module is cheaper than all the discrete components to build a DC-DC converter from scratch, but when you're doing 5 or 6 of them, then the discrete version is cheaper thanks to how many parts are reused between them and the fact that you're already ordering 20 to 100-ish of each to begin with thanks to minimum order quantities.
Quote from: stepleton on November 05, 2025, 08:13:16 PM
I've never used an assembly service (but would have to in this case). Would there be some savings possible if you just used them for the surface-mount parts and left through-hole parts for the buyer to sort out?
Ha, somebody else asked me that same thing a few days ago! Yeah, it would certainly save a few dollars, but only if we used the cheap Chinese through-hole parts. And somebody told me that you get double-tariffed if you order boards and parts from China separately, so the through-hole parts would need to come from DigiKey or Mouser instead. And by the time you pay the higher DigiKey/Mouser component prices, it comes out to about the same price as getting them to assemble it anyway.
Quote from: andrew on November 06, 2025, 01:15:50 PM
Really nice. I wonder how easy it would be to replace the innards of a Lisa with this, so that from the outside all the original ports would appear in the normal places. Seems like it would require a bit of snaking some extension cables around inside the case, and a tiny power strip where the power supply usually is to accommodate both the board and whatever is powering the CRT.
Not sure when I'll get around to it, but my eventual plan is to design a second version of the board that's in the form factor of the Lisa motherboard, so it would be a drop-in replacement for the original card cage without any adapter cables or extensions or anything. And I could omit some of the circuitry on the board (HDMI port, voltage regulation, keyboard port, and so on) since those facilities would already be provided by the Lisa chassis.
Quote from: AlexTheCat123 on November 06, 2025, 02:28:42 PM
Not sure when I'll get around to it, but my eventual plan is to design a second version of the board that's in the form factor of the Lisa motherboard, so it would be a drop-in replacement for the original card cage without any adapter cables or extensions or anything. And I could omit some of the circuitry on the board (HDMI port, voltage regulation, keyboard port, and so on) since those facilities would already be provided by the Lisa chassis.
You ought to try putting a micro HDMI port in place of the video out port!
And in general, I don't think it hurts to have redundancies here and there. For instance, the USB keyboard and mouse inputs are probably worth keeping in the event you need to troubleshoot a keyboard or mouse issue with the original hardware.
Yeah, I haven't fully decided what to keep and what to take away on the Lisa motherboard version of the board. That's all pretty far out in the future. But I like the idea of micro HDMI and maybe keeping USB too!
If it were me, I'd keep the motherboard ports exactly the same as a /5 and run a second board over to the expansion bays with all the extra goodies on it.
Does MacWorks, XENIX, UniPlus run on this?
Quote from: blusnowkitty on November 07, 2025, 09:00:51 AM
Does MacWorks, XENIX, UniPlus run on this?
Well, nothing (other than BLU, the Selector, NeoWidEx, LisaMandelbrot Solo, and GEM sort of) actually runs on it right now! Everything else tries to boot but hangs or errors out somewhere during the process. The common thread between all the errors is memory exhaustion, which makes a lot of sense given that I've only had room for 256K of system memory up until this point. To get more RAM, I had to take a break from the HDL side of things and design this board (which happens to have a 2MB SRAM on it), and then I can get back to trying to get stuff to boot once I've got enough memory. I'm hoping that everything will just work once it's got more than 256K of RAM to play with, but there could absolutely be more bugs to figure out too!
Finally done with the board! Here's the final rendering of it, which looks basically identical to the previous one other than a bunch of additional traces going to the FPGA. And I've also attached a screenshot of the board layout with the inner layers visible so you can see what's going on under the surface; that's where most of the traces converge toward the FPGA.
I'm about to place the order, and the price has gone up a bit thanks to me not selecting a few important options during the first price estimate. Selecting those options now (double-sided assembly for the bypass caps under the FPGA and vias smaller than 0.4mm) has brought the final price up to $525 for two fully-assembled boards and three unpopulated spares. And then there might be tariffs on top of that, but hopefully not. I haven't actually placed a PCB order since the tariffs went into effect, so I'm not completely sure how that works.
Given the complexity of the boards, it's going to take them 8 or 9 days to fabricate and assemble versus the standard 2-4 days, but hopefully they'll get to me within the next couple weeks!
Edit: Oh my god, the tariff is $289. I could barely afford this before, but I sure can't now. I hate to say it, but this project might have to wait a while until I can save up enough to actually buy these things.
Damn, that's insanity. It's too bad. :'(
Quote from: AlexTheCat123 on November 10, 2025, 12:56:19 AM
Edit: Oh my god, the tariff is $289. I could barely afford this before, but I sure can't now. I hate to say it, but this project might have to wait a while until I can save up enough to actually buy these things.
Community effort here. How much of a donation would you need to continue your work?
Good news! Thanks to an incredibly generous donation by @jamesdenton, I was able to place the order, and I should have the boards on hand within the next 2 weeks. Thank you so much James, I really appreciate it!
Quote from: bmwcyclist on November 10, 2025, 02:07:05 PM
Community effort here. How much of a donation would you need to continue your work?
Thanks for offering to pitch in, but I think I should be good for now!
I would really love to be able to buy one of these, but I would also need it ready to run.
Quote from: classiccomputing on November 14, 2025, 10:44:48 AM
I would really love to be able to buy one of these, but I would also need it ready to run.
I'm definitely not to that point yet, but that's the ultimate goal. This is just the initial round of prototypes, so I'm absolutely expecting problems!
They just finished assembling the boards, and they'll probably be shipped within the next day or two. Hopefully I get them before Thanksgiving. In the meantime, here's a cool X-ray shot they sent me of the area under the FPGA on one of the fully-assembled boards!
that is crazy cool!
Good news, the boards are here and they look great! I've attached some pics in case anyone wants to see.
Now into my current progress with testing them, which is mostly good news so far.
I designed each of the switching regulators such that they could be disconnected from the rest of the board, just in case my initial design was bad and was causing them to put out a bad voltage. So I tested them in the disconnected state first, and everything was 0V! Well, it turns out that I labeled the jumper that selects between USB-C power and barrel jack power backwards, so I just swapped the jumper around and then things started coming to life.
All the voltages were spot-on, most of them to the thousandth of a volt. The only ones that weren't were the 1V and 1.8V rails, which were 0.999V and 1.801V, respectively. Definitely close enough. I was honestly shocked at how spot-on the voltages were, and they stayed this consistent under load. The ripple on all the rails is 40mV or less, with most being below 20mV, so pretty darn good and well within spec for all the components.
Then I hooked the PSUs to the rest of the board, and the USB hub activity LEDs came to life! Plugging it into my laptop, all four USB devices (the FT2232 for JTAG, the CP2102N for serial comms with the Lisa, and both ESP32s) were visible, although the ESPs were repeatedly connecting and disconnecting every second for some reason.
Luckily, after programming both ESP32s, they stopped this weird behavior and are working perfectly. I can't really fully test the CP2102N (or the ESP32s, for that matter) until the Lisa is up and running, but I was able to talk to it and program in some custom name and vendor strings, so I'd say it's working pretty well.
The FT2232 is where I encountered my first major problem. Which sucks because it's how you program the FPGA! Xilinx has a tool that flashes its configuration EEPROM (a 93C46) with a special signature that Vivado looks for to recognize it as a USB to JTAG interface, but the tool kept erroring out whenever I tried to flash it. After some experimentation, I discovered that it was successfully programming the EEPROM, but was running out of space. The 93C46 is only a 1Kbit EEPROM, and it turns out that you need at least a 2Kbit part (93C56 or above) to store all of Xilinx's configuration data. So there's nothing I can do to make this work with the existing chip.
Luckily though, the 93C46 is the only chip on the entire board (aside from the SCC) that's through-hole, so I went ahead and ordered some 93C56's and I'll just solder one in once I get them in the mail next week.
But fortunately, I put a JTAG header on the board just in case something like this were to happen, so I was able to plug in a Digilent USB to JTAG adapter to try and program it that way. Sadly, it didn't work at first, but then I noticed that it was because I'd plugged in TDI and TDO backwards. After swapping those, Vivado was actually able to see and program the FPGA!
Then I tested out the configuration flash connected to the FPGA (in case you're not familiar with FPGA stuff, this is the nonvolatile memory where you store your bitstream if you want the FPGA to automatically load it at boot), and that worked too. I put a bitstream in there, and it clearly loads it and illuminates the "DONE" light once it's finished loading!
I flashed the Lisa bitstream to it to try and see how much life I could get out of the Lisa peripherals, and there certainly is some, but clearly there are still some problems to solve. Hitting the Lisa's power switch causes the power LED (connected to the ON signal) to light up, and I get a good HSYNC out of the video connector, but VSYNC and VID are dead. I'm guessing that it's just a problem in Verilog or my design constraints file though as opposed to an actual board problem. We'll see!
Overall, a really successful test; a good bit better than I was expecting!
Very good news!
Here are the latest LisaFPGA updates, in video form!
https://www.youtube.com/watch?v=zE4fxzj6V4A (https://www.youtube.com/watch?v=zE4fxzj6V4A)
Alex, that is insanely cool! Nice work 8)
Quote from: AlexTheCat123 on December 05, 2025, 11:42:24 PMHere are the latest LisaFPGA updates, in video form!
https://www.youtube.com/watch?v=zE4fxzj6V4A (https://www.youtube.com/watch?v=zE4fxzj6V4A)
Fantastic work, and in such a short time, all things considered! Looking forward to your next update!
@Alex You mention switching over to the dedicated RAM chip but needing a place for the parity bits. I know this is wasteful, but the RAM chip is 2 MiB, so what about an option for emulating a Lisa with the common configuration of 1 MiB of RAM and squeezing the 128 KiB of parity bits somewhere in the space that remains? It would ensure a working system if software ever turns up that uses parity more than the boot ROM does.
Or even 1.5 MiB / 192 KiB to be more space-efficient, though there you will need more than one bit per byte in your parity pool, which could slow things down. This would be like a Lisa with one each of a 1 MiB and a 512 KiB RAM card.
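To make the packing concrete, here's a rough sketch of the address arithmetic for the 1 MiB / 128 KiB case. All the names here are hypothetical, but the idea is that each pool byte holds the parity bits for 8 data bytes, which is also where the read-modify-write cost comes from:

// Hypothetical mapping for a packed parity pool in a 2 MiB SRAM:
// data in 0x000000-0x0FFFFF, parity pool in 0x100000-0x11FFFF.
logic [20:0] data_addr;   // byte address within the 1 MiB data area
logic [20:0] pool_addr;   // where that byte's parity bit lives
logic [2:0]  pool_bit;    // which bit of that pool byte

assign pool_addr = 21'h10_0000 + (data_addr >> 3);  // 8 parity bits per pool byte
assign pool_bit  = data_addr[2:0];
// Updating one parity bit means reading the pool byte, flipping one
// bit, and writing it back, hence the potential slowdown.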
In general, would it be a good idea to allow the amount of RAM to be configurable? I seem to dimly remember hearing here about some OS (maybe one of the UNIXes) or other piece of bootable software that required exactly 1 MiB of RAM.
Quote from: stepleton on December 07, 2025, 12:32:19 AM@Alex You mention switching over to the dedicated RAM chip but needing a place for the parity bits. I know this is wasteful, but the RAM chip is 2 MiB, so what about an option for emulating a Lisa with the common configuration of 1 MiB of RAM and squeezing the 128 KiB of parity bits somewhere in the space that remains? It would ensure a working system if software ever turns up that uses parity more than the boot ROM does.
Or even 1.5 MiB / 192 KiB to be more space-efficient, though there you will need more than one bit per byte in your parity pool, which could slow things down. This would be like a Lisa with one each of a 1 MiB and a 512 KiB RAM card.
Hmm, interesting idea! I had actually considered doing this a while back, but I really wanted the full 2MB to be available, so I decided not to. But maybe it wouldn't be such a bad idea after all. I'd still like to have 2MB of RAM though, so maybe I should just upgrade to a larger RAM chip on the next board revision. Like 4MB maybe. That would give plenty of room for parity! And spoiler: the design meets timing now by a pretty hefty margin, so maybe I would be able to add some parity stuff back into the internal block RAM without destroying timing closure again...
Quote from: stepleton on December 07, 2025, 07:52:03 AMIn general, would it be a good idea to allow the amount of RAM to be configurable? I seem to dimly remember hearing here about some OS (maybe one of the UNIXes) or other piece of bootable software that required exactly 1 MiB of RAM.
Don't worry, that's already part of the plan! I've got 2 jumpers on the board that you'll be able to use to select between 512K, 1M, 1.5M, and 2M of RAM. They don't do anything yet (it's currently hard-coded to 512K), but they will once I get the RAM working reliably.
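The decode itself should be simple enough. A sketch of the plan, with all the signal names made up: turn the two jumpers into a size limit, and inhibit RAS/CAS for anything past it so the region looks like missing RAM.

logic [1:0]  size_sel;               // from the two jumpers
logic [21:0] ram_limit;

always_comb begin
  unique case (size_sel)
    2'b00: ram_limit = 22'h08_0000;  // 512K
    2'b01: ram_limit = 22'h10_0000;  // 1M
    2'b10: ram_limit = 22'h18_0000;  // 1.5M
    2'b11: ram_limit = 22'h20_0000;  // 2M
  endcase
end

wire oob = (mem_addr >= ram_limit);  // out-of-bounds access
assign ras_gated_n = ras_n | oob;    // inhibited RAS/CAS make the rest
assign cas_gated_n = cas_n | oob;    // of the address space look empty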
Speaking of RAM, I've got another update to give! I was surprisingly able to fix all 2000-something of the timing violations yesterday, and now we're down to zero; everything meets timing! Which meant that I was able to proceed to the next step of migrating to the external RAM chip. It went better than I expected given that the Lisa actually tried to boot on the very first attempt, but there are clearly still some problems. It fails the RAM test with error 70 (so actual RAM problems, not just it thinking that parity is wrong) and the entire screen has these weird ghosting artifacts all over it. Anything that's shown on the display will have repeated ghosts of itself off to its right, and there's a bit of noise on certain parts of the screen too. I've also noticed that it now takes the Lisa quite a long time to hunt for a valid chunk of memory to stick the video page in when it first turns on, so I think we've got some sort of RAM timing issue where either certain writes aren't fully sticking or certain reads are being done before the data is ready. Figuring all that out is my job for today!
Quotemaybe I would be able to add some parity stuff back into the internal block RAM
If it is true that nothing important uses/tests the parity circuitry aside from the ROM's self-test, then perhaps modifying the ROM to ignore that part of the self-test would be the best ROI option.
Perhaps even that's not strictly necessary... can one just "Continue" after getting error 71?
Quote from: sigma7 on December 07, 2025, 06:18:25 PMIf it is true that nothing important uses/tests the parity circuitry aside from the ROM's self-test, then perhaps modifying the ROM to ignore that part of the self-test would be the best ROI option.
Perhaps even that's not strictly necessary... can one just "Continue" after getting error 71?
I've already patched the parity test out of the ROM for the sake of testing, so it's certainly an option to just keep that patch in there!
You can indeed Continue after the parity error, so leaving it alone is another option if people are okay with that. Although I'd prefer something more elegant if possible. The solutions, from most to least elegant, are:
1. Get a bigger SRAM and implement parity in there or do parity internally using block RAM.
2. Put some logic on the CPU board that detects the ROM's "write wrong parity" test and asserts HPIR at the appropriate time to simulate the detection of the parity error.
3. Patch the test out of the ROM entirely.
4. Don't do anything, and have the user hit Continue whenever the parity error pops up.
I'll probably try the block RAM strategy, but that'll be a job for later on once much more of the system is fully-functional!
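For the curious, the block RAM version of option 1 might look roughly like this. Names are made up, and a 2M x 1 array is a sizeable chunk of the 7020's block RAM, which is presumably part of what hurt timing before:

// One parity bit per data byte, kept in block RAM.
logic parity_mem [0:2097151];   // 2M x 1

always_ff @(posedge clk) begin
  if (mem_write)
    // The write-wrong-parity circuit just inverts the stored bit.
    parity_mem[byte_addr] <= (^wr_data) ^ write_wrong_parity;
  if (mem_read)
    // A mismatch here would drive HDER low and raise HPIR.
    parity_err <= (parity_mem[byte_addr] != ^rd_data);
end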
On a real Lisa memory board, the "HDER" (Hard Memory Error) signal (pin 49 on each memory slot) "tells" the Lisa that there was a parity error (during read). If you disconnect that signal, the Lisa will never see such errors and should run happily. Perhaps the same can be done in the FPGA? Then you don't need any parity circuitry.
Source: http://www.bitsavers.org/pdf/apple/lisa/hardware/Lisa_Hardware_Manual_Sep82.pdf
Quote from: TorZidan on December 07, 2025, 11:38:59 PMOn a real Lisa memory board, the "HDER" (Hard Memory Error) signal (pin 49 on each memory slot) "tells" the Lisa that there was a parity error (during read). If you disconnect that signal, the Lisa will never see such errors and should run happily. Perhaps the same can be done in the FPGA? Then you don't need any parity circuitry.
That's absolutely right most of the time, but there's one extra detail to HDER that makes this fail under certain circumstances.
The CPU board has a "write wrong parity" circuit on it that forces the memory board to write incorrect parity info to any address that the CPU writes to while this circuit is enabled. As a test of the error detection circuitry, the boot ROM (and maybe LisaTest too, not sure) uses this feature to write invalid parity to an address, and then reads it back to make sure that the memory board detects the bad parity, that it pulls HDER low, and that the CPU receives the corresponding HPIR (high-priority interrupt). So if we tie HDER high all the time (which is what I'm doing temporarily right now), this test will fail. Parity will be fine all the time otherwise, but not during this test! That's the whole problem here.
Back when I was considering the "tiniest 2 MiB RAM card" project (now shelved since multiple people seem to have this idea cooking away on a side burner), I wondered whether you might be able to accomplish something that passes write-wrong-parity checks using something like a lousy LRU cache. If write-wrong-parity is enabled, push the lower N bits of the address into the cache. Then when retrieving memory data, check to see if any address matches any of the cached address pieces and, if so, invert the parity bit. A bit more complexity and for what? Who knows. Maybe the idea can inspire something else.
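Very roughly, I'm imagining something like this (everything here is hypothetical, and a real version would also need to evict an entry whenever its address is rewritten with correct parity):

// A tiny pool of "written with wrong parity" addresses. On readback,
// a hit inverts the computed parity so the ROM's test still passes.
logic [20:0] wwp_addr [0:3];
logic [1:0]  wr_ptr;

always_ff @(posedge clk) begin
  if (mem_write && write_wrong_parity) begin
    wwp_addr[wr_ptr] <= byte_addr;
    wr_ptr           <= wr_ptr + 1'b1;   // crude round-robin replacement
  end
end

logic hit;
always_comb begin
  hit = 1'b0;
  for (int i = 0; i < 4; i++)
    if (wwp_addr[i] == byte_addr) hit = 1'b1;
end

assign parity_out = (^rd_data) ^ hit;    // report bad parity on a hit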
I also meant to learn more about the "write wrong parity" feature to see if I could "unlock" 128 KB of extra (albeit inconvenient) RAM in my Lisa, but I haven't done that either.
Quote from: stepleton on December 08, 2025, 07:20:03 PMBack when I was considering the "tiniest 2 MiB RAM card" project (now shelved since multiple people seem to have this idea cooking away on a side burner), I wondered whether you might be able to accomplish something that passes write-wrong-parity checks using something like a lousy LRU cache. If write-wrong-parity is enabled, push the lower N bits of the address into the cache. Then when retrieving memory data, check to see if any address matches any of the cached address pieces and, if so, invert the parity bit. A bit more complexity and for what? Who knows. Maybe the idea can inspire something else.
That was basically my exact idea! But time will tell if we actually need that level of complexity...
In the meantime, check this out; it actually boots MacWorks Plus now (albeit with barely-visible video)!
https://youtu.be/OBmNUpbnqVc (https://youtu.be/OBmNUpbnqVc)
And it boots LOS as well, but it's nearly impossible to see the desktop given the horrendous-looking video! Ignore what I say about it only booting LOS 2 and not LOS 3; it actually boots 3 just fine and only failed because I tried to boot a corrupted image!
https://youtu.be/HIbJPjjW5ls (https://youtu.be/HIbJPjjW5ls)
According to some logic analyzer traces, it seems like the video issues have something to do with the RAM not liking to be read from immediately after a write has finished. The CPU writes data into RAM during its half of the bus cycle, but then when the video circuitry goes to stick some stuff onto the screen during its half of the cycle, it reads completely wrong data and garbage ends up on the screen in that area. So I'm thinking that the RAM needs a little more turnaround/delay time after the end of a write before it's ready to do a read.
This would totally explain why the Lisa passes the RAM test just fine but still has the corrupted video; a CPU write is always followed by a video read, so the video read will get corrupted, but the next CPU read or write doesn't happen until a whole cycle after the first one, so by then the RAM is ready and reads and writes just fine again. If the video circuitry were somehow able to write to RAM too, then this would be a different story and we'd be getting RAM errors left and right.
Now I just need to figure out how to fix it. Right now I assert the RAM's CE only while RAS and CAS are both asserted, but keep WE asserted for a good bit longer (for as long as MREAD is low). I'm wondering if maybe it doesn't start "recovering" from the write until after WE gets deasserted, regardless of whether or not the chip is selected, so I'm going to try and shorten the WE pulse to be the same width as UDS and LDS instead. Hopefully that clears things up a bit! If not, I guess I can try shortening the CE pulse to not even be as long as the overlap of RAS and CAS, but I have to be careful there because it's already so short that I think we're pretty close to the minimum pulse width that the RAM will detect.
Ha, wild how the major issue is video when so much else works. It's interesting how it's so correlated with what the Lisa is doing. Is there a way to investigate the hypothesis more directly with software? In particular, what if you put the CPU into a tight loop where it wasn't writing to RAM at all, just constantly executing
.lp BRA.S .lp
which you might be able to run in Service Mode "in the blind" by putting 60FE in RAM somewhere and jumping to it. Would the display look OK then?
Meanwhile, I wonder if those bidirectional level shifters you've chosen are the same TXS0108Es I've regretted on Cameo/Aphid...
Quote from: stepleton on December 10, 2025, 04:34:05 AMHa, wild how the major issue is video when so much else works. It's interesting how it's so correlated with what the Lisa is doing. Is there a way to investigate the hypothesis more directly with software? In particular, what if you put the CPU into a tight loop where it wasn't writing to RAM at all, just constantly executing
.lp BRA.S .lp
which you might be able to run in Service Mode "in the blind" by putting 60FE in RAM somewhere and jumping to it. Would the display look OK then?
Yeah, it's insane that the problem is so horrendous while everything else is fine. You'd think that there would be no way that the RAM would be functional with that kind of corruption.
Writing some code to get more info on what was going on was going to be my next step, but I was actually able to fix it without needing to do that!
It turned out to be a problem with the /LDS and /UDS data strobes for the lower and upper bytes of RAM. The stock RAM board uses them for writes (obviously) to determine which byte(s) to write to, but completely disregards them for reads. During a read op, both bytes are always returned no matter their state. During the CPU half of a bus cycle, the CPU always puts the strobes in the "proper" states for reads even though the memory disregards them, but when the video circuitry goes to read from memory, it doesn't set the strobes at all. And my RAM chip, unlike the original board, requires the strobes for BOTH reads AND writes; without them, garbage data will be returned.
So this explains why CPU cycles were fine but video ones weren't: the CPU always set the strobes right, and the video didn't. The only reason that video sometimes worked was that a CPU strobe would occasionally overlap into a video cycle just enough to trigger the RAM, and this happened less when the CPU was accessing RAM vs ROM, hence the worse video performance when there was heavy RAM activity. So the solution was to force my RAM's strobes to be asserted all the time when MREAD is high (read mode), and to only follow the strobes from the CPU when MREAD is low (write mode). Now the picture is perfectly-crisp, as you can see here!
Perfect over RGBtoHDMI, that is. Oddly enough, it's a little weird over the native HDMI output. Not sure why; it was fine before. I've attached a pic of the HDMI output too; you can clearly tell the difference!
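For reference, the strobe fix itself boils down to a two-line mux. These names are from my head rather than the schematic:

// Reads (MREAD high): force both byte strobes so the SRAM always
// returns both bytes, like the stock board. Writes: follow the CPU
// so byte writes still work.
assign ram_lds_n = mread ? 1'b0 : lds_n;   // active-low strobes
assign ram_uds_n = mread ? 1'b0 : uds_n;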
Quote from: stepleton on December 10, 2025, 04:34:05 AMMeanwhile, I wonder if those bidirectional level shifters you've chosen are the same TXS0108Es I've regretted on Cameo/Aphid...
Yep, that's the exact level shifter I'm using! I'm really regretting them here too. They're probably/hopefully fine on the bidirectional lines, but utterly destroy the unidirectional ones. I just get random 5-20MHz oscillations that come and go. Not the end of the world for the ProFile because I've got the internal ESProFile that works fine, but this might prevent me from getting the floppy drive working on this board. I've got the onboard ESFloppy emulator, but I doubt my code for that is functional, and also it can't do writes yet. I really need to test with a real floppy drive first, but if I can't plug one in without oscillations, then I'm not sure what I'm supposed to do. I think the FPGA can actually tolerate 5V even though it's out of spec, so maybe I just remove the shifters and bridge straight across them?
You know the floppy controller error 57 I was getting? Well, it was really confusing me because the floppy controller worked great on the previous version of the project that ran on the PYNQ board, and I finally figured out why it's broken. It turns out it's not broken at all; it was getting confused by the crazy multi-MHz oscillations on its inputs! I checked the status byte at FCC017 and it said the reason it was failing the test is that it couldn't step away from Track 0, but I didn't even have a drive connected, so I figured the oscillations were screwing things up. And sure enough, flipping the "Floppy Drive Source" switch from External to Internal (which isn't even running any code yet) made the error go away entirely. So we now make it through the whole self-test without issue!
The next order of business is going to be to expand the RAM a bit. Right now we're stuck with 512K, which makes LOS painfully-slow. So expanding to the full 2MB and getting the memory size jumpers going is next up.
I also tried booting the Workshop, and it boots just fine, but it's insanely slow. Anytime you type a key at the main menu, it takes about 10 seconds to process what you typed, and it comes back with a filesystem error whenever you try to launch a program. But the FS is clearly fine, because it can list the directory without issue and the Preferences program launches happily from an LOS instance on the same disk. So I'm hoping that the Workshop just hates living in 512K of RAM and that this will fix it.
After that, I'll probably get the H/3A and A8/40 ROM selection switches working. The selection part should be easy, but this also means that I'll need to add extra scaling logic to the HDMI subsystem for when the 3A ROMs are selected. So that'll be a good time to fix the HDMI issue that I'm having too.
Then I'll probably do the speed selection switches, which will control the DOTCK frequency. I'm not sure how high I can go before things break, but the SRAM will probably be the limiting factor. I'm hoping for at least 40MHz (2x the normal DOTCK speed), but maybe we can go even higher. I've already confirmed that I can lower the DOTCK (all the way down to 5MHz) without breaking anything, so hopefully raising it will be the same story.
Ignore the weird artifacts on the "good" picture; that's just thanks to image compression and nothing wrong with the Lisa!
Just got the full 2MB of RAM going. It's crazy how much faster LOS is with that extra RAM. Nearly unusable with 512K, but quite pleasant with 2MB. And those weird issues with the Workshop where programs wouldn't open and everything was insanely slow are all gone now, so it looks like the RAM upgrade fixed that too!
The size selection jumpers don't work because I implemented all the logic for size selection by inhibiting CAS and RAS when the CPU tries to access out-of-bounds memory, but then forgot to send those inhibited versions of CAS/RAS over to the RAM controller. So it's just stuck at 2MB all the time. But that's a super easy fix!
I've implemented that fix now, and I'm currently resynthesizing. I also added code to get the H and 3A CPU board ROM selector switch and the A8 and 40 I/O ROM selector switch working, so we'll see if those go as planned...
At the same time, I also added functionality to the speed selection switches, so that they can alter the speed of the Lisa's DOTCK. I picked 20MHz (stock), 40MHz, 60MHz, and 80MHz, which correspond to CPU clock speeds of 5MHz, 10MHz, 15MHz, and 20MHz, respectively. Given the external SRAM and its 55ns speed limitation, I'm not optimistic about being able to increase the DOTCK very far. I'm hoping to at least get the 40MHz DOTCK to work, but I really only give that about a 50/50 chance of success, and I highly doubt I'll be able to push it past there without having RAM issues.
But I'll find out about all of that in 40-ish minutes when the design is done synthesizing!
Awesome to see good video.
The HDMI issue looks a bit similar to situations I've encountered when making hacky adapters that go from TTL monochrome video to VGA. (Basically you buffer and level-shift the video signal into VGA RGB and then, if needed, massage the sync pulse timing as best you can to match the spec.) Modern LCD monitors do the best they can to cope with your adapted signal, but it's all too common to be off by subpixels, which does a number on single-pixel vertical lines. You find yourself making minute adjustments via on-screen menus to stretch the screen geometry just right, to change the phase of the video signal, and so on.
In the image itself it almost looks to me like the horizontal screen resolution is set to be just a little bit too narrow.
But I can't account for the "ghosts" of vertical contrast edges (e.g. the one just to the right of the window), which seems to me almost like a signal integrity issue!
As for the parallel port issues --- I'm inferring from your remarks that you're running I/O ROM 88, meaning a 2/10 I/O board, meaning a faster parallel port (reflecting earlier discussions). Maybe a 2/5 I/O board would achieve better behaviour? Or maybe the oscillations are more fundamental. For Cameo/Aphid, I achieved some stability improvements by putting some series resistance on the signal lines (on the +5V side).
For future designs, unidirectional ICs are likely the way to go, but if you want a bidirectional approach, the fistful-of-MOSFETs (https://electronics.stackexchange.com/questions/555631/understanding-how-this-bi-directional-logic-level-shift-works) method seems to work pretty well, as James D. can attest.
Eager to see what's next, in any case...
Quote from: stepleton on December 10, 2025, 11:43:03 PMAwesome to see good video.
The HDMI issue looks a bit similar to situations I've encountered when making hacky adapters that go from TTL monochrome video to VGA. (Basically you buffer and level-shift the video signal into VGA RGB and then, if needed, massage the sync pulse timing as best you can to match the spec.) Modern LCD monitors do the best they can to cope with your adapted signal, but it's all too common to be off by subpixels, which does a number on single-pixel vertical lines. You find yourself making minute adjustments via on-screen menus to stretch the screen geometry just right, to change the phase of the video signal, and so on.
In the image itself it almost looks to me like the horizontal screen resolution is set to be just a little bit too narrow.
But I can't account for the "ghosts" of vertical contrast edges (e.g. the one just to the right of the window), which seems to me almost like a signal integrity issue!
Yeah, not completely sure what's going on just yet. I put the image up on my big monitor instead of the HDMI capture device to get a better look at it, and it looks like every 8th column of pixels is a duplicate of the one 8 columns before. So I guess it's something to do with switching between bytes as we either read Lisa pixels into the framebuffer or read HDMI pixels out of the framebuffer. I'm going to try changing the pipelining logic for the scaling a bit to see if that fixes it, but I'm not sure.
The weird thing is that it was working perfectly up until a few days ago, so I'm not sure what changed!
Quote from: stepleton on December 10, 2025, 11:43:03 PMAs for the parallel port issues --- I'm inferring from your remarks that you're running I/O ROM 88, meaning a 2/10 I/O board, meaning a faster parallel port (reflecting earlier discussions). Maybe a 2/5 I/O board would achieve better behaviour? Or maybe the oscillations are more fundamental. For Cameo/Aphid, I achieved some stability improvements by putting some series resistance on the signal lines (on the +5V side).
I'm actually doing the 2/5 I/O board, mainly because I want Twiggy compatibility for the lucky few people (yourself included!) who happen to have some and might want to plug them in. I'm pretty sure the oscillations are more fundamental though, like where the level shifter can't figure out which side is driving it because it's got a signal from the FPGA on one end and a signal from a pullup on the other, and so it oscillates back and forth between the two, leading to a really high-frequency square wave.
For the next board revision, I'll almost certainly switch to unidirectional ICs for everything other than the few things that need to be bidirectional, and I'll probably go with the MOSFET strategy for those unless I come up with anything better. But I've heard that it's pretty reliable like you're saying, and James' board seems to prove it!
Synthesis finished, and I programmed the board with the new bitstream that should enable some of those additional switches. I ended up having to disable the speed switches thanks to some weird design rules Xilinx has when it comes to routing clock networks, so I'll have to come back to that later. But luckily it looks like the RAM sizing jumpers work now, and so does the CPU ROM H/3A selection switch. I obviously need to update the HDMI logic to detect which VSROM is installed and push out the pixels accordingly, but at least I can see that it's booting even if the square pixels make the screen garbled.
The I/O ROM A8/40 switch unfortunately doesn't seem to work though. The A8 position works fine, but the 40 position causes the I/O ROM revision to display as 52 and it hangs on the I/O board test for a while. Which means that the floppy controller never asserted DISK_DIAG and probably isn't making it through its self-test. I don't see anything it could be other than a corrupted ROM, so I'm rebuilding right now with a new ROM image and some tweaks to hopefully fix the HDMI weirdness to see if I have any luck there.
HDMI issues are (mostly) fixed! The weird repeated-8th-column issue was thanks to a mistake in my video pipeline where the first pixel of each new byte was still using the byte index from the previous pixel, so it was actually showing the first pixel of the previous byte. That's all fixed now!
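In other words, the byte index and the bit index have to advance together. A simplified version of the corrected logic, with made-up names and the byte-fetch latency glossed over:

logic [2:0]  bit_idx;
logic [14:0] byte_addr;   // ~32K framebuffer bytes

always_ff @(posedge pix_clk) begin
  if (pixel_en) begin
    if (bit_idx == 3'd7) begin
      bit_idx   <= 3'd0;
      byte_addr <= byte_addr + 1'b1;  // step to the next byte in the same
    end else begin                    // cycle the bit index wraps, not one late
      bit_idx <= bit_idx + 3'd1;
    end
  end
end

assign pixel = fb_byte[3'd7 - bit_idx];  // MSB-first shift-out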
The image is still slightly shifted to the right, but hopefully that'll be easier to track down.
I think all the video issues are fixed at this point!
Now, when you flip the H/3A toggle switch, it not only switches the ROM, but it also switches the HDMI decoding to account for the different screen resolution and square pixels. It's pretty cool to be able to flip the switch and see the screen change while the Lisa is running, although of course it results in corrupted video until you flip the switch back or reboot!
While messing around in LOS, I discovered a really subtle bug with the I/O board. I was trying to change the contrast and noticed that the HDMI signal brightness wasn't changing, whereas it works fine in MacWorks.
After some digging through the LOS source code (LIBHW/MACHINE.TEXT and LIBHW/DRIVERS.TEXT), I discovered that LOS does a check at boot to see whether your I/O board is the "Sept81" or "Feb82" model. I don't think any Sept81 I/O boards are known to exist, so all 2/5 boards are the Feb82 revision. And interestingly enough, the two revisions handle setting the contrast differently.
But LOS is detecting my board as a Sept81 for some reason. And I think I know why. The way that it determines which one you have is by checking that the ProFile parity generator that hooks to PB5 on the parallel port VIA is functioning properly. Apparently the Sept81 board didn't have this parity generator, so if the parity doesn't work, then it knows it's a Sept81 board. So clearly my parity isn't working for some reason! So now I need to figure out why and fix it so it can set the contrast correctly...
It's just really funny that the parity on the ProFile bus being broken causes contrast not to work! And according to the source code, the contrast is the only thing that cares about whether your board is Sept81 or Feb82, so it makes sense that it's the only thing broken!
I was just thinking it would be pretty funny if someone crammed this into an old Macintosh case, like a 128K or Plus. It would give new meaning to the notion of a "Baby Lisa."
I've given up on the speed selection switches for the moment. I was able to get it going so that the Lisa would run at double speed (40MHz DOTCK, 10MHz CPU clock), and you could tell that it was a good bit faster, but it wouldn't boot anything except the Selector because I somehow managed to break the floppy controller pretty badly. And no matter what I did, I couldn't figure out why it was broken. So I've reverted back to the version before the speed switches were added and started working on other things from there.
The good news is that I fixed the I/O board parity issue that was keeping the contrast from working in LOS! I just had the parity backward from what it should be, so inverting it fixed things and now contrast works fine!
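In HDL terms, the fix amounted to one inversion. The signal names here are made up, and I won't swear to which sense (odd or even) is the "correct" one:

// The ProFile-bus parity presented to the VIA's PB5: same
// reduction-XOR as before, opposite sense.
assign via_pb5 = ~^profile_data;   // previously: ^profile_data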
And over the past few days, I've been trying to get USB peripherals working. I've written the full interface modules for both the USB keyboard and mouse, and the mouse one actually works! It's so cool to be able to directly control the Lisa with a modern mouse! The mouse scaling is a little weird and so I'm trying to come up with a good algorithm for making it slower without sacrificing small mouse movements, but that's a minor detail.
The keyboard, on the other hand, is still having some problems. The keyboard module took forever to write thanks to the Lisa's weird keyboard protocol, the challenge of converting USB HID scancodes to the key down/key up codes that the Lisa uses, and the weird way that the USB keyboard handles modifier keys like control and alt, but I think the logic itself is fully-functional at this point. The problem is that it doesn't work! After probing it with a scope and comparing it to a real keyboard (actually a USB to Lisa adapter since I don't have a real Lisa keyboard), I discovered that I'd actually gotten one of the timings waaayyy wrong, so I just changed that and now I'm re-synthesizing to see if that helps at all. It really seems like the keyboard module is doing exactly what it should other than that; hopefully this is the only thing that's wrong with it. Aside from maybe some keys that I mapped incorrectly!
The USB keyboard works too! That timing issue was the whole problem. I've tested every key in LisaWrite and the only ones that don't work are the tilde, left option, numpad +, and numpad enter. Probably just a mistake with the scancode mapping, so an easy fix. And then we'll have a fully-functional USB keyboard interface!
I believe I have experienced some strange mapping issues with my regular Lisa keyboard and tilde vs. option sometimes. Could be a coincidence, but you may want to check what you're seeing against real hardware.
Quote from: stepleton on December 18, 2025, 03:56:37 PMI believe I have experienced some strange mapping issues with my regular Lisa keyboard and tilde vs. option sometimes. Could be a coincidence, but you may want to check what you're seeing against real hardware.
Yep, you were right! The tilde and option weirdness is the same on an actual keyboard. So the only issue was the numpad stuff, which is now fixed!
My next task is getting the USB mouse scaling to feel right, which is proving to be quite a challenge...
I haven't confirmed it, but there might be a difference in the behaviour of tilde/option when booting the Office System directly vs. booting via the Selector.
I base this suspicion on the fact that I can't imagine Apple would have shipped a LOS that swaps the keys like that, and I've observed it happen in basically-fresh LOS installs where the only thing different from 1984 is the presence of the Selector in the boot process.
If the Selector is indeed responsible, then the problem lies somewhere in here (https://codeberg.org/stepleton/lisa_io/src/branch/master/lisa_console_kbmouse.x68), though I have no theory that can account for why it should cause that trouble.
Another little update on what I've been doing over the past few days!
It's been really tough getting the USB mouse scaling right, but I think I'm finally getting close to something that feels decent. The original mouse still feels better and easier to control, and I'm not sure that I can get the USB one to feel much better. It's just really difficult to balance fine control for small movements against reduced speed for larger movements. Luckily Mac mice are plentiful, so if a user wants the better mousing experience, it's not very expensive to get it. Whereas the keyboard (where an original is much more expensive) is just as good over USB as it is with an original one.
The floppy drive wouldn't work thanks to the level shifters, so at the risk of pumping 5V straight into the FPGA and damaging it, I just desoldered the shifters and bridged straight over them. Which doesn't seem to damage anything, at least in the context of short-term testing. Obviously shifters will be the long-term solution though! And fortunately, the floppy drive works, so that's nice!
More recently, I've moved on to converting the HDMI output from 1080p30 to 1080p60. I've been running at 30fps up until now because I have to generate 5x the pixel clock to make the HDMI subsystem work, and that comes out to a rather insane 750-ish MHz for 60fps versus about half that for 30fps. And it was pretty hard to meet timing for the 750MHz clock compared to the 325MHz one. But after some messing around, I'm now getting 60fps video out of it! It still doesn't quite meet timing, and the audio over HDMI sounds weird for some reason, but I'm working on that right now. Most of the timing violations seem to be coming from the propagation delay through the divider circuit that divides the HDMI Y-coordinate by 3 to scale the pixels to the Lisa's 2x3 pixel aspect ratio. So I'm replacing the divider with a LUT and hopefully that'll fix it. If not, there are still some other solutions to try.
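One of those other options is to sidestep the division entirely and just carry a mod-3 counter along with the Lisa row counter, updated once per HDMI line. A sketch with made-up names:

logic [1:0] y_mod3;
logic [8:0] lisa_y;   // 0..363 Lisa rows

always_ff @(posedge pix_clk) begin
  if (frame_start) begin
    y_mod3 <= 2'd0;
    lisa_y <= '0;
  end else if (line_start) begin
    if (y_mod3 == 2'd2) begin
      y_mod3 <= 2'd0;
      lisa_y <= lisa_y + 1'b1;   // one Lisa row per 3 HDMI lines
    end else begin
      y_mod3 <= y_mod3 + 2'd1;
    end
  end
end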
I've also messed around a bit with overclocking the Lisa, to more success than I was expecting. Clock muxing is much harder than I foresaw, so for the sake of testing, I've just been changing the clock speed and resynthesizing instead of making the speed configurable at runtime. It's able to run seemingly perfectly at a 40MHz DOTCK (10MHz CPU clock, twice the normal speed), and GUI operations in LOS feel really snappy at that speed. But disk I/O is still of course a bottleneck, so it's not a true 2x speedup.
It could also (sort of) run with a 60MHz DOTCK (15MHz CPU clock, 3x the normal speed), but there were some things that didn't quite work. For instance, communications with the floppy controller were pretty intermittent, and there were some COP issues too, but only with sending the "power off" command and reading/writing the clock; keyboard and mouse were still fine for some reason. And although the GUI in LOS of course felt really quick, it wasn't a 3x improvement overall thanks to I/O. As a test, I tried compiling LOS (which is a pretty disk-heavy workload) with the stock 20MHz DOTCK and then again with the 60MHz DOTCK, and the tripling of the clock only cut the compile time in half. Still a lot better than the stock compile time though!
I don't see it being very likely that I'll be able to get it working at 80MHz, but there's a chance that I might be able to get 60MHz fully-working, or something between 40 and 60 at the very least. We'll see!
Sorry for the big gap between posts!
Thanks to the help of another LisaList2 user who PM-ed me (they might want to remain anonymous so I won't mention their name, but they can feel free to comment here if they want credit), I was able to implement a much better mouse scaling algorithm that uses accumulators instead of the piecewise function scaling method, and it feels really, really good now. As good as (or maybe even a little better than) the stock Lisa mouse, so I'm going to call that a success!
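For anyone curious, the core of the accumulator approach looks something like this. The Q4 fixed-point format and all the names are just illustrative:

// Scale each USB delta by a fractional gain, hand the Lisa the whole
// pixels, and carry the leftover fraction into the next report so
// tiny movements never get rounded away.
logic signed [7:0]  hid_dx;       // delta from the USB HID report
logic signed [7:0]  gain_q4;      // gain in Q4 fixed point (8'sd6 = 0.375)
logic signed [15:0] acc, sum;     // running total, fraction in the low 4 bits
logic signed [7:0]  lisa_dx;      // whole pixels handed to the Lisa side

assign sum = acc + (hid_dx * gain_q4);

always_ff @(posedge clk) begin
  if (hid_report_valid) begin
    lisa_dx <= sum[11:4];          // integer part moves the pointer
    acc     <= {12'b0, sum[3:0]};  // the fraction carries into the next report
  end
end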
I've put the 1080p60 and overclocking tasks aside for now so that I can get the more annoying task of designing the v2 PCB out of the way. There are several minor problems with the original board that, when all put together, are really bothering me (and some are a little more than minor), so I think it's time to fix all of them. I'm getting pretty close to finishing the board design, and here's a list of all the improvements:
- Swapped the labeling on two jumpers that I labeled backwards.
- Removed the power rail disconnect jumpers now that I know the switching regulators work.
- Swapped the R and B on ESProFile's RGB LED; I got them backwards the first time around.
- Added a BOOT button to both ESP32s; I learned the hard way that if you accidentally put them into USB-OTG mode and don't have a BOOT button, you'll be locked out and can't upload new code.
- Added a boost converter that generates 12V from the 5V USB-C rail instead of requiring 12V from the barrel jack; I'm honestly not sure why I didn't do this from the start...
- No need for the barrel jack anymore, so I deleted it.
- Replaced the FT2232's 93C46 EEPROM with a 93C56 EEPROM (see the posts from when I first got the v1 boards to learn why).
- Changed the interface between the two ESP32s and their SD cards from SPI to full-blown SDIO. This should greatly increase SD data transfer speeds, and will be necessary for the tight timings of the ESP32 floppy emulator. It won't hurt for ESProFile either!
- Added a jumper that lets you add/remove "scanlines" in the HDMI output; scanlines are added by simply blacking out every third line (or second line when using 3A ROMs) on the screen. I haven't actually tested this yet, so we'll see how well this strategy works before I commit to the jumper.
- Adjusted LED brightnesses by changing their series resistor values. They were all REALLY bright before!
- Fixed the issues with the TL074 op-amp and the onboard speaker. The audio output was really faint, caused by a combination of a few mistakes in that circuit.
- MOST IMPORTANTLY: Changed all the level shifters from TXS0108Es to a combination of other things. The bidirectional lines are now shifted by BSS138 MOSFETs, and the unidirectional lines are shifted by 74HCT245s (3.3V -> 5V) and 74LVC245s (5V -> 3.3V). Hopefully this will get rid of the weird oscillations and work a lot better!
Thanks to the level shifter oscillations, it seems like the external SCC wasn't really working at all, so I'll be able to test that for real when I get the new boards. Right now, any attempt to talk to it from LOS or the Workshop causes the system to hang.
I haven't fully settled on how to do this yet, but I'd also like to add Twiggy support without wasting tons of space with 2 huge Twiggy headers. So I was thinking I'd keep the Sony header, and then add a 4-pin header beside it to hold all the extra Twiggy signals. Then you'd hook both the Sony and 4-pin headers up to a separate breakout board that has the Twiggy ports on it. Does that sound like a good solution to the few Twiggy owners out there, or do you guys have some better ideas?
If anyone has any suggestions of other stuff to add/change on the v2 board, I'd be happy to listen!
Alex, I have to say I am absolutely astonished by what you've accomplished here. I've been lurking and following this thread after discovering it in November, and it finally pushed me to join LisaList2 so I could properly express my appreciation.
The Lisa holds a special place in my heart. As a teenager, it was my dream computer. That gorgeous machine with its revolutionary GUI felt like something from the future. Years later, I finally managed to acquire a Lisa 2/5, but it had suffered a NiCd battery explosion and leaking capacitors that led to corrosion and damaged PCB traces. The power supply was also failing with inconsistent voltages. I tried to repair it, but the work required was beyond my skill level at the time, so I regrettably sold it. I've kicked myself over that decision ever since.
Watching you bring the Lisa to life inside an FPGA has been incredible. From those first CPU board errors back in September to booting LOS and MacWorks, the pace of your progress is remarkable, especially considering you're doing this alongside a PhD program!
I'd love to support this project in any way I can. Whether that's helping with testing when you're ready, contributing toward board costs, or anything else that would be useful, please don't hesitate to reach out.
Thank you for giving those of us who couldn't hold onto our Lisas (or never had one at all) hope for experiencing this incredible machine again.
Is the V2 board going to be in the form factor of a 2/5 motherboard? Ideally this should be the end goal so folks with battery bombed systems could replace the original CPU/IO/MEM/MOTHER boards with a single FPGA board.
Quote from: coffeemuse on January 09, 2026, 12:17:35 PMAlex, I have to say I am absolutely astonished by what you've accomplished here. I've been lurking and following this thread after discovering it in November, and it finally pushed me to join LisaList2 so I could properly express my appreciation.
Thank you! Glad you've been enjoying the journey so far, and I really appreciate the kind words!
Quote from: coffeemuse on January 09, 2026, 12:17:35 PMThe Lisa holds a special place in my heart. As a teenager, it was my dream computer. That gorgeous machine with its revolutionary GUI felt like something from the future. Years later, I finally managed to acquire a Lisa 2/5, but it had suffered a NiCd battery explosion and leaking capacitors that led to corrosion and damaged PCB traces. The power supply was also failing with inconsistent voltages. I tried to repair it, but the work required was beyond my skill level at the time, so I regrettably sold it. I've kicked myself over that decision ever since.
It was my dream computer as a teenager too, thanks to the revolutionary OS and fascinating architecture. But thanks to me being a teenager nearly 40 years after the Lisa was released, I was actually able to get one when I was 16 back in 2019. Also battery-bombed like yours, but I was able to fix it with the help of everybody on this forum. I didn't know a whole lot about them back then, but I've learned a lot over the last few years!
Quote from: coffeemuse on January 09, 2026, 12:17:35 PMI'd love to support this project in any way I can. Whether that's helping with testing when you're ready, contributing toward board costs, or anything else that would be useful, please don't hesitate to reach out.
Wow, thank you! Getting these boards fabricated and assembled in small quantities isn't cheap (around 700-something dollars after tariffs if I remember correctly), so I'd be really grateful for a contribution to cover some of that. Thanks for your generosity!
Quote from: Lisa2 on January 09, 2026, 12:40:45 PMIs the V2 board going to be in the form factor of a 2/5 motherboard? Ideally this should be the end goal so folks with battery bombed systems could replace the original CPU/IO/MEM/MOTHER boards with a single FPGA board.
No, that's going to be a little further down the road. My initial goal is to make a board that's a completely standalone Lisa for people who don't have one but still want to experiment with one (or for people who do have one but want a nice and compact solution for messing around that has modern creature comforts like HDMI, USB peripherals, built in hard/floppy emulation, and USB to serial) so that's what I'm working on now. Besides, it's waaayyy easier to test/debug things on the standalone board versus a motherboard replacement version since you don't have to be tethered to a bulky Lisa chassis all the time.
But once the standalone board is fully-functional and up on GitHub, then I absolutely want to do a motherboard replacement version. After all of the FPGA stuff is working properly, that board should actually be easier to design than the standalone one since we won't need voltage regulation, HDMI, USB, onboard hard/floppy emulation, and other things like that.
Of course, the final designs will be open-source like everything else I make, so anybody is free to make and sell them. But to prevent people from having to pay the rather insane $700-ish just to get their hands on 2 boards, I'm thinking about also ordering a bunch of them in bulk (maybe my parents will give me a loan?) and selling them for a much-reduced price (maybe $160-ish each, it would depend on how big the bulk order is) to anyone who wants them. We'll see.
Quote from: AlexTheCat123 on January 09, 2026, 09:24:35 AMI'd also like to add Twiggy support without wasting tons of space with 2 huge Twiggy headers. So I was thinking I'd keep the Sony header, and then add a 4-pin header beside it to hold all the extra Twiggy signals. Then you'd hook both the Sony and 4-pin headers up to a separate breakout board that has the Twiggy ports on it. Does that sound like a good solution to the few Twiggy owners out there, or do you guys have some better ideas?
Don't go out of your way on my account at least! This seems like a good option to me.
Happy to chip in to the development fund.
Quote from: AlexTheCat123 on January 09, 2026, 01:34:49 PMIt was my dream computer as a teenager too, thanks to the revolutionary OS and fascinating architecture. But thanks to me being a teenager nearly 40 years after the Lisa was released, I was actually able to get one when I was 16 back in 2019. Also battery-bombed like yours, but I was able to fix it with the help of everybody on this forum. I didn't know a whole lot about them back then, but I've learned a lot over the last few years!
It was much earlier than 2019 for me, but I'll respectfully decline to post the actual date.
Quote from: AlexTheCat123 on January 09, 2026, 01:34:49 PMWow, thank you! Getting these boards fabricated and assembled in small quantities isn't cheap (around 700-something dollars after tariffs if I remember correctly), so I'd be really grateful for a contribution to cover some of that. Thanks for your generosity!
Hopefully the US Supreme Court sides with some of the lower courts and undoes some of the recent tariff burdens. In the meantime, I would be happy to contribute toward offsetting some of the costs for your next board run. Feel free to send me a PM and we can work out the details.
Quote from: AlexTheCat123 on January 09, 2026, 01:34:49 PMOf course, the final designs will be open-source like everything else I make, so anybody is free to make and sell them. But to prevent people from having to pay the rather insane $700-ish just to get their hands on 2 boards, I'm thinking about also ordering a bunch of them in bulk (maybe my parents will give me a loan?) and selling them for a much-reduced price (maybe $160-ish each, it would depend on how big the bulk order is) to anyone who wants them. We'll see.
If you end up doing a bulk order for the final standalone boards, I would definitely be interested. I would be thrilled to have one of these sitting on my desk someday. Thanks again for all your hard work on this!
Quote from: stepleton on January 09, 2026, 01:55:03 PMDon't go out of your way on my account at least! This seems like a good option to me.
Happy to chip in to the development fund.
Perfect, that's exactly how I'll do it then.
And thanks so much for the offer! Maybe I'll PM you as I get closer to being ready to order the boards.
Quote from: coffeemuse on January 09, 2026, 05:15:55 PMIn the meantime, I would be happy to contribute toward offsetting some of the costs for your next board run. Feel free to send me a PM and we can work out the details.
Thanks again, I sure will once the time comes!
Quote from: coffeemuse on January 09, 2026, 05:15:55 PMIf you end up doing a bulk order for the final standalone boards, I would definitely be interested. I would be thrilled to have one of these sitting on my desk someday. Thanks again for all your hard work on this!
Yeah, hopefully enough people will be interested for the bulk order idea to work out. Obviously it's not very useful if only 10 or 15 people say they want one!
Hi Alex,
Quote from: AlexTheCat123 on January 09, 2026, 09:24:35 AMIf anyone has any suggestions of other stuff to add/change on the v2 board, I'd be happy to listen!
Taking you up on your invitation for v2 board suggestions! This is more of a question than a suggestion, but I'm curious about enclosure options for the standalone board.
I noticed the OLED display, buttons, and switches for floppy emulation are mounted on the top surface of the PCB. I totally get that this makes sense for prototyping and early testing, but for those of us thinking about eventually housing a finished board in some kind of case or shell, are there any thoughts around this?
Specifically:
- Is there any consideration for providing headers to allow these controls to be relocated/extended?
- Are they positioned with any particular mounting orientation in mind?
Totally understand if this isn't a priority right now given everything else on your plate. I'm curious if it's on the radar.
Quote from: coffeemuse on January 12, 2026, 07:23:40 AMI noticed the OLED display, buttons, and switches for floppy emulation are mounted on the top surface of the PCB. I totally get that this makes sense for prototyping and early testing, but for those of us thinking about eventually housing a finished board in some kind of case or shell, are there any thoughts around this?
I honestly haven't thought about that at all! The floppy emulation is still in a really preliminary stage, so I'm not even sure if I'll be able to get it fully-working yet, and if not then those parts won't even be present on the final board. I think I've got all the code written to do floppy disk reads properly, but it hasn't been tested end-to-end; so far I've only verified that the DC42-processing stuff works and that the conversion to GCR and loading of the data into the RMT behave properly. I can't test further until I get the v2 boards since I only have enough RAM to hold 2 sectors at a time (they take up a lot more space when in RMT format, a full 4 bytes of RAM per flux transition), which means that I have to access the SD card a lot, and SPI just isn't fast enough for this. So I need SDIO, which will be present on the v2 board.
All this to say that the relocation of the buttons will probably be an issue for further down the road when I'm sure that they'll actually be on the board to begin with. And if the floppy emulator ends up working out, then I'd probably stick the buttons and OLED (along with the power, reset, and other control switches maybe) on a little breakout board that plugs into the mainboard with a ribbon cable. Then you can just screw that to the top of your (presumably 3D-printed) enclosure. How does that sound?
Quote from: AlexTheCat123 on January 12, 2026, 04:07:24 PMAll this to say that the relocation of the buttons will probably be an issue for further down the road when I'm sure that they'll actually be on the board to begin with. And if the floppy emulator ends up working out, then I'd probably stick the buttons and OLED (along with the power, reset, and other control switches maybe) on a little breakout board that plugs into the mainboard with a ribbon cable. Then you can just screw that to the top of your (presumably 3D-printed) enclosure. How does that sound?
That sounds like a great approach. A breakout board with a ribbon cable would make enclosure design much more flexible. I wasn't trying to request any design changes; just wanted to confirm what enclosure considerations were on the radar.
I've had a 3D-printed case with 1980s-era aesthetics in the back of my mind, so that setup would work really well down the road. Good luck with the SDIO testing on the next iteration of boards. I am looking forward to seeing how the floppy emulation progresses.
Quote from: coffeemuse on January 12, 2026, 04:54:57 PMGood luck with the SDIO testing on the next iteration of boards. I am looking forward to seeing how the floppy emulation progresses.
Thanks! I really hope I can get it working; it would be nice to have an open-source alternative to the Floppy Emu out there. Granted, mine will probably never support 1.44MB disks, but at least it would do 400K, 800K, and maybe/hopefully even Twiggies.
Is there such a thing as a virtual twiggy?
Quote from: Jacexpo on January 14, 2026, 10:16:44 PMIs there such a thing as a virtual twiggy?
As far as I know, there aren't any Twiggy emulators out there right now. LisaEm is capable of using Twiggy images, but it's pretty hit-or-miss, at least in my experience.
And to be clear, my first priority is Sony emulation, with Twiggy coming later. And that's only if I can even get Sony emulation working to begin with! I just want to set expectations low here so that nobody's disappointed if I fail miserably. Making a floppy emulator isn't quite as easy as a ProFile emulator!
Don't want to hijack Alex's thread, just want to add more info:
I added support for Twiggy disk images in LisaEm last year, it wasn't available before that. It works sufficiently well and supports both Twiggy drives, e.g. I am able to install LOS 1.0 from Twiggy floppies which involves using both drives. More info on how to use it: https://github.com/arcanebyte/lisaem/pull/22
Also, there is already an open source alternative to FloppyEmu: https://github.com/vibr77/AppleIIDiskIIStm32F411 , for Apple II only. It is on their roadmap to support Macintosh and Lisa eventually, once Apple II works sufficiently well (it does). You can follow it at https://www.applefritter.com/content/apple-ii-disk-emulator-using-stm32 .
Wow, good to know! I had no clue anybody else was developing an emulator.
I'm getting closer to finishing with the v2 board! I finally got through all the level shifters; upgrading those took forever because it required rerouting pretty significant chunks of the board.
I've also added Twiggy support now, using the breakout idea we discussed earlier. I've attached a pic of what the breakout looks like. I really wish I could've used a shrouded header for the 4-pin connector that carries the extra Twiggy signals, but they just don't have any in stock. And I know it looks really bad, but I decided to just autoroute this board since I don't care about this one that much compared to the main PCB. Maybe I'll go back and manually route it later...
Aside from just checking everything over, I think the v2 board might be done now!
I ended up adding trimpots to all the LEDs instead of fixed series resistors so that I can perfectly dial in the brightness for each one. Right now, some are way too bright and others are on the dimmer side, so this should be a good way to even them out. And then I can measure the pot resistances to swap them out for fixed resistors in the final design.
I also thought that something was wrong with the TL074 audio amp/contrast circuit because it was producing super quiet audio through the built-in speaker, and no contrast signal at all. Not that it mattered a ton in my testing because I'm also doing audio and contrast over HDMI, but many people who will have their board connected to a monitor as opposed to a TV won't be able to get HDMI sound to begin with, so it's still really important for the speaker amp and onboard speaker to work. It turns out that nothing was wrong with it though; I had just forgotten to supply 12V to it and was only sending -12V! The v2 board now has an onboard step-up converter to generate 12V from the 5V USB power instead of requiring 12V to be provided externally from the barrel jack (which has now been removed entirely); not sure why I didn't do it like this before.
Oh yeah, I also added a "scanlines" jumper that will hopefully allow you to enable or disable simulated scanlines in the HDMI output. I accidentally enabled this feature on my RGBtoHDMI and thought it made the Lisa display look pretty cool, so might as well try to add it here. It looks like all it's doing is blacking out every third row (regular Lisa) or every second row (screen-modded Lisa) of pixels in the HDMI framebuffer, so that should be pretty easy to implement!
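Implementation-wise it should be about one line. Conceptually something like this, with made-up names and y_mod3/y_mod2 borrowed from the counters the scaler already maintains:

// Black out every 3rd row (stock ROMs) or every 2nd row (3A ROMs)
// whenever the scanlines jumper is installed.
wire blank_row  = scanlines_en && (rom_3a ? (y_mod2 == 1'b1)
                                          : (y_mod3 == 2'd2));
assign hdmi_pix = blank_row ? 1'b0 : scaled_pix;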
The v2 board will definitely still be in the prototype phase (although a lot more polished than v1), but hopefully v3 will be the final release board. We'll see!
Of course, if anybody sees any issues or areas for improvement in these pictures of the v2 board, let me know and I'll get it changed!
Quote from: AlexTheCat123 on January 15, 2026, 06:55:07 PMAnd I know it looks really bad, but I decided to just autoroute this board since I don't care about this one that much compared to the main PCB.
tbh I kind-of like it when you find the odd little autorouted (or wire-wrapped or perf-boarded) interposer amidst otherwise thoughtfully-designed hardware --- it says to me "there's a story behind this"...
With the disclaimer that free suggestions are worth what you pay for them... here are two user-interface thoughts for V3, both probably pretty obvious and neither one a showstopper:
(1) It would be nice if it didn't matter which USB port received the mouse and which received the keyboard.
(2) The bottom edge of the board as depicted feels like the user-facing side of the machine, so it makes sense for that side to have the stuff the user is most often going to mess with. To me, this would argue for putting the floppy and hard drive SD slots on or near that edge if you can. If you need to free up room along the edge, maybe one of those double-stack USB sockets might help?
QuoteWith the disclaimer that free suggestions are worth what you pay for them...
Ditto
Quote(2) The bottom edge of the board as depicted feels like the user-facing side of the machine, so it makes sense for that side to have the stuff the user is most often going to mess with.
This seems like great advice, and got me thinking: what would I move where... Lisa mouse port to the back perhaps and....
Then it occurred to me that the iterations of the layout for ports and ancillary hardware might be separated from the iterations of the core functionality, reducing the cost for incremental changes to one or the other.
i.e. determine a suitable boundary (e.g. low speed and fewest signals) for separating legacy ports from the FPGA and RAM, and implement a main board/daughterboard. The cost of an interconnect is substantial, but when the cost of new boards is so high.... depends on your confidence level as to how many iterations you expect and if early versions are complete write-offs or still useful. Lowering the cost of changing the legacy ports layout could make it more practical/economical to have different final configurations too.
Interconnects generate problems as well as increased cost, so quite possibly not a good idea (see top paragraph).
For the floppy, consider making the layout compatible with a 26 pin header that has two pins removed so a 20 pin plug will still fit.
... many decisions
Quote from: stepleton on January 16, 2026, 04:28:54 AM(1) It would be nice if it didn't matter which USB port received the mouse and which received the keyboard.
A very good idea! For some reason, I dismissed this as insanely difficult before, but it's really nothing more than muxing a few signals. I'm working on it now.
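For anyone wondering what "muxing a few signals" looks like in practice, here's the general shape of it. The names are entirely hypothetical, and the real detection logic depends on how the USB host cores report device types:

```systemverilog
// Hypothetical port-swap sketch: route whichever port has the keyboard
// to the keyboard logic and the other port to the mouse logic.
module usb_port_swap (
    input  logic [7:0] port_a_data, port_b_data,
    input  logic       port_a_valid, port_b_valid,
    input  logic       kbd_on_port_a,  // e.g. from descriptor parsing
    output logic [7:0] kbd_data,  mouse_data,
    output logic       kbd_valid, mouse_valid
);
    always_comb begin
        if (kbd_on_port_a) begin
            kbd_data   = port_a_data;  kbd_valid   = port_a_valid;
            mouse_data = port_b_data;  mouse_valid = port_b_valid;
        end else begin
            kbd_data   = port_b_data;  kbd_valid   = port_b_valid;
            mouse_data = port_a_data;  mouse_valid = port_a_valid;
        end
    end
endmodule
```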
Quote from: stepleton on January 16, 2026, 04:28:54 AM(2) The bottom edge of the board as depicted feels like the user-facing side of the machine, so it makes sense for that side to have the stuff the user is most often going to mess with. To me, this would argue for putting the floppy and hard drive SD slots on or near that edge if you can. If you need to free up room along the edge, maybe one of those double-stack USB sockets might help?
My only concern there is that, especially once I switch to SDIO, those SD cards could be running at 50+ MHz (I think the ESP can even go up to 100 on SDIO). And I'm worried that the long traces would lead to signal integrity issues. Whereas right now, the slots are just about as close to the ESPs as they can get.
Quote from: sigma7 on January 16, 2026, 01:41:45 PMThen it occurred to me that the iterations of the layout for ports and ancillary hardware might be separated from the iterations of the core functionality, reducing the cost for incremental changes to one or the other.
i.e. determine a suitable boundary (e.g. low speed and fewest signals) for separating legacy ports from the FPGA and RAM, and implement a main board/daughterboard. The cost of an interconnect is substantial, but when the cost of new boards is so high.... depends on your confidence level as to how many iterations you expect and if early versions are complete write-offs or still useful. Lowering the cost of changing the legacy ports layout could make it more practical/economical to have different final configurations too.
I thought about this way earlier on in the design process, and opted against it just because it was sort of hard to separate what would go on the peripheral board versus the core board. The boundary isn't super clear-cut. Maybe I should have done this, but at this point I think it would require such a substantial redesign that I'll probably opt not to for now.
Quote from: sigma7 on January 16, 2026, 01:41:45 PMFor the floppy, consider making the layout compatible with a 26 pin header that has two pins removed so a 20 pin plug will still fit.
I briefly considered this too, instead of the 20 pin + 4 pin strategy, but it would require making the 26-pin connector incompatible with Twiggies, and I was trying to avoid the situation where someone might get confused and plug a Twiggy straight into the board expecting it to work. Any ideas to prevent this?
Also, even if I were to remove 2 pins to allow a 20-pin connector to plug in, wouldn't the keying slot still be a problem? I don't think the slot on a 26-pin header would line up with the notch on a 20-pin connector that's plugged into one side of the header.
Sorry for the lack of updates; some weird things have been happening with the FPGA lately. Everything was working so great, and then it all just started breaking. First the floppy controller started acting up, and every time I would resynthesize to try and fix it, the problem would either change or go away entirely. And then other things started breaking and acting intermittent too.
But I think I've finally realized what the problem is. Over the course of the project, I've marked a bunch of signals for debugging using the MARK_DEBUG attribute, which causes the signals to be preserved during synthesis and prevents certain optimizations from taking place. I thought I had been unmarking signals whenever I didn't need to debug them anymore, but clearly not, because I checked and there were nearly 1,500 signals that were marked for debugging! With that many signals being excluded from optimization, weird timing things are bound to happen, so I've been going back through and unmarking as many of them as possible now. And as I do that, everything seems to be starting to work properly again!
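For anyone who hasn't used it, MARK_DEBUG is just an attribute on the signal declaration, which is exactly why it's so easy to leave behind. A trivial example (not from my actual code):

```systemverilog
module debug_example (
    input  logic       clk,
    input  logic [7:0] d,
    output logic [7:0] q
);
    // Marking a net like this tells Vivado to preserve it for the ILA,
    // which also blocks optimizations on it. Multiply that by ~1,500
    // nets and the timing of the whole design starts shifting around.
    (* mark_debug = "true" *) logic [7:0] q_int;

    always_ff @(posedge clk) q_int <= d;
    assign q = q_int;  // delete the attribute once you're done debugging
endmodule
```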
Clearly there's still something that's not quite right, even as I remove the MARK_DEBUG attributes, because I just tested with a real floppy drive for the first time and it isn't behaving properly. So obviously, there's some big difference between the real drive and the Floppy Emu that I need to track down.
Whenever I hit a dead end on one issue, I always try to pivot to another issue and then come back to that one later in the hopes that I'll have new ideas to resolve it, and that's exactly what I did here. I pivoted from trying to get a real floppy drive working back over to trying to get overclocking working. Last time I tried it, I didn't understand the physical arrangement of clock buffers and multiplexers on the chip as well as I do now, and my better knowledge the second time around allowed me to successfully implement the clock mux between 10MHz, 20MHz, 40MHz, and 60MHz dot clocks.
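The mux itself is just a small tree of the glitch-free BUFGMUX clock primitives. Something along these lines, though the signal names and select encoding here are illustrative rather than copied from my code:

```systemverilog
// Illustrative 4-way dot clock mux built from Xilinx BUFGMUX primitives
// (7-series). The select encoding is a guess, not the actual design's.
module dotclk_mux (
    input  logic       clk10, clk20, clk40, clk60,
    input  logic [1:0] sel,     // from the board switches
    output logic       dotclk
);
    logic mux_lo, mux_hi;
    BUFGMUX mux0 (.I0(clk10),  .I1(clk20),  .S(sel[0]), .O(mux_lo));
    BUFGMUX mux1 (.I0(clk40),  .I1(clk60),  .S(sel[0]), .O(mux_hi));
    BUFGMUX mux2 (.I0(mux_lo), .I1(mux_hi), .S(sel[1]), .O(dotclk));
endmodule
```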
This meant that I could pick between the clocks using the switches on the board instead of having to hard-code one at build time like before, but the weird problems at 40MHz and 60MHz that were present the last time I tried it were unfortunately still there. For anyone who didn't catch the posts where I talked about those issues: basically, the speaker would randomly beep while LOS was running; you couldn't read the clock from the COP (LOS would say the date/time was invalid, and the Workshop would actually tell you that the procedure call to get_time() failed); and the COP would never turn the Lisa off after the screen dimmed to black during shutdown.
Obviously the last two problems point toward the COP, but I wanted to address the speaker issue first since it seemed a bit more perplexing. So I hooked a virtual logic analyzer to the FPGA and started looking at the 68K bus to see if it was actually commanding the speaker to beep or if there was just some kind of signal noise that was inadvertently causing the beeps. This is why I asked Claude to make that disassembler I mentioned in another thread. And it turns out that the 68K was actually commanding the VIA to beep the speaker, so I dug into the LOS source code to figure out how the NOISE routine worked (LIBHW/MACHINE.TEXT). Working backwards from the value the 68K was putting into the VIA timer 2 register, I was able to determine that the wavelength of the tone was 4545, so now it was just a matter of looking through the entirety of the LOS source to see what called the beep routine with that wavelength.
There were two callers using that wavelength: one in LIBHW/KEYBD.TEXT and one in LIBHW/TIMERS.TEXT. The one in KEYBD will beep the speaker if the keyboard input queue ever fills up, but the one in TIMERS made a lot more sense in this context. That one beeps the speaker if the Lisa ever fails to read the clock from the COP, which lines up perfectly with the clock errors that we were getting!
So now the question is: why are we failing to read the clock? Whatever the reason, it's probably the same reason why we're failing to shut the system down. Both of those operations involve writing a command to the COP, whereas getting keyboard/mouse movements (which both worked fine) only requires reading from the COP, so it sounds like we have some kind of write issue. So I looked at more signals and slowly figured out what was going on.
The COP puts out a signal called READY that goes into either CA1 or CA2 (I forget which) of the keyboard VIA. READY is high most of the time, but goes low briefly every once in a while to signify that the COP is ready to receive a command from the VIA. If the 68K wants to send the COP a command, it's expected to wait for READY to go low, and then shove the command out onto the COP data bus as soon as READY drops.
That all makes sense, but this is where it gets weird. You'd think that it would be safe to remove the command from the COP's bus as soon as it pulls READY high again, but no, apparently you have to leave it there for a little while longer (2 or 3 extra READY pulse-lengths) after READY is deasserted. Otherwise, the COP doesn't see your command and won't respond with any data. At the standard 20MHz dot clock, the 68K was leaving the command on there long enough for the COP to see it, but at the higher dot clocks, it was removing the command too fast and the COP would never respond with the clock data, hence the beeps and clock errors! Same goes for the power-off command; the COP never received it and the system never turned off.
Luckily, the solution is simple. The VIA register that determines whether the VIA is outputting over the COP bus is DDRA (Data Direction Register A), and the entire issue here is simply that DDRA is being turned back from an output to an input too quickly. So I just wrote some simple logic that extends the falling edge of DDRA a bit; now DDRA stays in output mode for a little while longer even after the 68K commands it to go back to input mode. I know it sounds weird to say "falling edge" since DDRA is a full register with 8 bits in it, but I'm treating all of DDRA as a single bit since all of the data bits on PORTA will always be going in the same direction.
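The extension logic is really just a reloadable down-counter. A minimal sketch, with a made-up extension length and clock:

```systemverilog
// Sketch of the DDRA falling-edge extender, treating DDRA as one bit as
// described above. EXTEND_CYCLES and the signal names are illustrative.
module ddra_stretch #(parameter int EXTEND_CYCLES = 64) (
    input  logic clk,          // some convenient fast clock
    input  logic ddra_out,     // 1 = VIA port A driving the COP bus
    output logic ddra_out_ext  // stretched version seen by the bus logic
);
    logic [$clog2(EXTEND_CYCLES + 1)-1:0] cnt;

    always_ff @(posedge clk) begin
        if (ddra_out)
            cnt <= EXTEND_CYCLES;  // reload while the 68K holds output mode
        else if (cnt != 0)
            cnt <= cnt - 1;        // keep driving a while after the drop
    end

    assign ddra_out_ext = ddra_out || (cnt != 0);
endmodule
```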
You might wonder why we don't get errors in the boot ROM, given that it also reads the clock and is capable of powering the system off, neither of which caused any problems. Well, the boot ROM uses a slightly different COP communications routine that already keeps the bus set to output mode for a little longer anyway, so even with the faster clock, it's still within tolerance.
Now that I've fixed that COP issue, the 20, 40, and 60MHz DOTCKs all seem to work perfectly. Unfortunately, 10MHz doesn't quite work right, and probably never will. And it's for the opposite reason from the above. Instead of us switching the COP bus from an output to an input too early like before, now the CPU is so slow to react to READY going low that it misses the READY pulse entirely and doesn't set the COP bus to an output until after the pulse is already over. There's not really a way that I can hack around that problem, short of overclocking the COP, but then that introduces more problems related to compatibility at other clock speeds. Not to mention the fact that the real time clock wouldn't be accurate anymore!
So with that in mind, I think I'm going to get rid of the 10MHz DOTCK (not sure why people would really want a 2.5MHz Lisa anyway) and try to replace it with an 80MHz DOTCK. No promises or anything, but I got it going at 60, so perhaps I can get 80 going too! Who knows, maybe even 100MHz is possible...
Thanks for the write-up!
One question comes immediately to mind: does this mean that the clock speed-up is being applied selectively to some components (like the CPU) and not others (like the COP)?
Here's another: what does beeping sound like at 60 MHz? I assume the VIAs are being sped up and so the beep might be very high-pitched!
Quote from: stepleton on January 29, 2026, 04:01:18 AMOne question comes immediately to mind: does this mean that the clock speed-up is being applied selectively to some components (like the CPU) and not others (like the COP)?
Yes! The only thing I'm messing with is the DOTCK. I don't want to speed up the COP because I'd like for the RTC to still be accurate, and I can't really speed up the 16MHz I/O board clock because that would utterly destroy the floppy disk read/write timings.
Quote from: stepleton on January 29, 2026, 04:01:18 AMHere's another: what does beeping sound like at 60 MHz? I assume the VIAs are being sped up and so the beep might be very high-pitched!
Very fast and high-pitched indeed! I guess this would be the one benefit of recreating the 2/10 I/O board (at least as far as pitch is concerned, not speed), but I still think it's better to do the 2/5 I/O board to remain compatible with Twiggies!
I just tried it at 80MHz, and, well, it's at least partially functional!
Initially, it just gave the two low-pitched (although thanks to the overclocking they're actually really high-pitched) beeps indicating that no RAM was detected, so I started looking at some signals and discovered that we're starting to run up against the limits of my RAM chip. Writes were working fine, but reads weren't; by the time the RAM chip had retrieved the data, the 68K had already moved on to the next instruction. But it was really close; if the RAM chip had been selected just a single DOTCK cycle earlier, I figured that the data would probably be ready in time. So I modified the RAM board code to select the chip on the falling edge of RAS instead of waiting for CAS too, which isn't accurate to the original design anymore and breaks things at lower clock speeds, but at least helps me get further along at 80MHz.
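To make the timing change concrete (placeholder names; just illustrating the select logic, not my actual RAM board code):

```systemverilog
// The accurate behavior selects the RAM once both strobes have fallen;
// the 80MHz hack selects on RAS alone, starting the read roughly one
// DOTCK earlier (and breaking accuracy at lower clock speeds).
module ram_select #(parameter bit FAST_80MHZ = 0) (
    input  logic ras_n, cas_n,  // active-low strobes from the timing logic
    output logic ram_cs_n       // active-low chip select to the RAM
);
    assign ram_cs_n = FAST_80MHZ ? ras_n : (ras_n | cas_n);
endmodule
```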
After making that change, the Lisa almost makes it all the way through the self-test. I do get an error 54 though (clock error), indicating that the COP sync issues I talked about in the last post are back again. I guess the clock is getting so fast now that even my (rather long) DDRA delay circuit isn't enough. I'll try making it even longer and we'll see if that revives the COP.
I can boot into the Selector, although I always get an error 84 on my first boot attempt. The second time works fine. At that point, no operating systems fully boot.
LOS crashes out super early, not even giving an error or anything and just resetting the entire machine a few seconds after the boot starts.
MacWorks Plus turns the Lisa off the moment it gets to the blinking question mark screen. It's clearly a controlled power-off (the screen dims nicely and everything), so I'm guessing that COP issues are to blame for this.
MacWorks XL actually boots nearly all the way into Mac OS, but it hangs with the menu bar visible and nothing on the desktop.
Xenix reports hard disk errors; I think we're starting to push up against the limits of what ESProFile can handle too. And Xenix is also really picky about disk timings for some reason. Not sure if a real ProFile would work at these speeds; probably not.
With all of this in mind, I'm guessing that I probably won't be able to get things working at 80MHz. Even if I can get the COP issues fixed, there seem to be deeper problems, and even if all of those could be solved, we'd still have to contend with the hard disk timings and with getting the RAM timings to work across all the DOTCK configs. I'll keep trying a little bit more though!
In the very likely reality where 80MHz doesn't work out, this raises the question: what clock speeds do we want on the selection switches? There are 4 switch combinations, so 4 different clocks we can select. I was hoping to do either 10, 20, 40, and 60MHz or 20, 40, 60, and 80MHz, but we already know that 10 is out and 80 is probably going to be out too, so that just leaves us with 20, 40, and 60. We need to pick one more speed that's within those bounds for the extra switch position. Any preferences? And do we keep 40 or replace it with some other speed between 20 and 60?
One thing I can say with near certainty: 100MHz is ABSOLUTELY NOT going to happen!!!
42, of course. The answer to the Ultimate Question of Life, the Universe and Everything.
Can the switch control something else? Square pixels vs. tall pixels would be very handy, though I'm guessing you're already accommodating that in one way or another. Otherwise I can't really think of anything. Choose a frequency that makes it easiest to bit-bang an address line so that you can play a tune on an AM radio?
(For v3 move from switches to a rotary encoder that goes from 5 MHz for the 68k to the upper limit...)
If there's any chance, I'd love to see a cheap-and-cheerful video of LisaMandelbrot Solo running on the 80 MHz dot clock Lisa...
Quote from: stepleton on January 30, 2026, 04:15:59 AMCan the switch control something else? Square pixels vs. tall pixels would be very handy, though I'm guessing you're already accommodating that in one way or another. Otherwise I can't really think of anything. Choose a frequency that makes it easiest to bit-bang an address line so that you can play a tune on an AM radio?
Yep, I already have an H/3A switch for that! Playing music sounds like a good idea though...
Quote from: stepleton on January 30, 2026, 04:15:59 AM(For v3 move from switches to a rotary encoder that goes from 5 MHz for the 68k to the upper limit...)
I wish I could, but unfortunately that's not as easy to implement as it sounds. You can't easily sweep the output frequency of an MMCM across a range, and I guess I could use a variable clock enable on a single high-speed clock to accomplish the same thing, but that would require rewriting significant portions of the code. And it would probably break a lot of stuff, so I'm not sure it would be worth it!
Quote from: stepleton on January 30, 2026, 04:15:59 AMIf there's any chance, I'd love to see a cheap-and-cheerful video of LisaMandelbrot Solo running on the 80 MHz dot clock Lisa...
Here you go!
https://www.youtube.com/watch?v=o00bQq0lx0A
I haven't given up on getting 80MHz working just yet. I've tracked down the COP issue; it's no longer a sync problem with sending the command. That part works fine now. The new issue is that the COP takes "so long" to return the data that the 68K times out before it's ready. At the lower (but still overclocked) clock speeds, the timeout loops were tolerant enough that the CPU was still willing to wait long enough to retrieve the data, but not anymore. Remember, we're running at 4x the stock clock speed now, so the timeout periods are essentially 4x shorter than they should be.
There may not be anything I can do about this. Obviously I can't patch the ROM (and every other piece of Lisa software) to extend the timeout because that's just a really dumb solution, and I also can't overclock the COP because that would mess up the RTC and break the timing of the keyboard interface.
Unless anyone else has any suggestions, my only remaining idea is to try and shorten my "extended" pulse on DDRA that feeds the COP its command. Obviously it needs to be extended some because otherwise the COP will never receive its command at the higher clock speeds, but perhaps (and this is just a total guess) the COP doesn't start processing the command until DDRA goes back to an input again and takes the command off the bus. And given that I'm probably extending the pulse for longer than I need to, maybe I'm wasting some time during which the COP could be processing the command. So what if I just shortened the pulse a bit, in the hopes that the COP starts processing the command quicker and returns the data faster, hopefully within the timeout period? That's what I'm about to try, and hopefully it works! If not, then I think we might be stuck with 60MHz, or maybe I can try 70 and see if I have any better luck there...
Yeah, shortening the DDRA pulse made no change whatsoever, so I don't think there's any way to fix this. I'm going to try stepping the clock down in increments of 5MHz until I can find the highest point at which the COP works, and we'll call that the max speed. Hopefully it's above 60MHz!
By the way, it takes about an hour and 15 minutes to resynthesize the design at this point, so if anyone's wondering why progress is slower now than it used to be, that's your answer. Just changing a single line of code necessitates a full resynthesis, so every little change takes that long to test. And if the new design struggles to reach timing closure, it can take even longer than that!
Wow, thanks for recording that video. 80 MHz is quick! It's also cool to see that you can change the clock while the machine runs.
Maybe it really is a little too soon to give up on 80 MHz. If it's the CPU timing out, are there ways to suspend it or slow it down around critical access to the COP (and maybe other components too)? It looks like you would want to specifically detect clock queries as the mouse seems fine in your video. Maybe there's a pin you can keep (de-)asserted that holds the 68000 in place, or if nothing else you could have the hardware send `foo: BRA.S foo` instructions to the CPU on instruction fetch (you'll need to hold off delivering interrupts too in that case, I suppose).
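A sketch of the branch-to-self idea, with invented signal names --- 16'h60FE is the 68000 opcode for a BRA.S that branches to itself:

```systemverilog
// Invented names; the point is just substituting the opcode on fetch.
// While stalled, the CPU spins in place on BRA.S *; interrupts would
// need to be held off too, as noted above.
module stall_feeder (
    input  logic        stall,        // e.g. COP transaction pending
    input  logic        instr_fetch,  // 68000 is fetching an instruction
    input  logic [15:0] mem_dout,     // normal memory read data
    output logic [15:0] cpu_din       // data bus as seen by the 68000
);
    assign cpu_din = (stall && instr_fetch) ? 16'h60FE : mem_dout;
endmodule
```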
Or just leave things as they are and say: well, it's not for the software we have now, but maybe people designing new software can write code to take advantage of it! Put a skull and crossbones on the silkscreen around the 80 MHz setting so that people don't make service enquiries to you when flipping the switch crashes the machine ("what did you expect?"). Then one day much later someone who does have the urge to hack the software can say "I unlocked pirate mode!!"
In re "dial-a-clock", my inspiration: if you ever find yourself in Cambridge, England, there is a computer museum there that has a computer made from discrete surface-mount transistors and so on. I believe the whole computer (and not just the processor) is made this way; in any case, it is huge, filling up multiple panels the size of tall cubicle walls and stretching out along much of the length of the room. They usually have it set up so you can play Tetris on it. But my favourite part of it is a gigantic forearm-sized rheostat that looks like it came out of a locomotive --- I'm sure it's handling only a couple milliamps its current role, which is letting you select the processor clock speed at any point from 0 to a few tens or hundreds of KHz if memory serves.
Quote from: AlexTheCat123 on January 30, 2026, 10:56:23 PMQuote from: stepleton on January 30, 2026, 04:15:59 AM(For v3 move from switches to a rotary encoder that goes from 5 MHz for the 68k to the upper limit...)
I wish I could, but unfortunately that's not as easy to implement as it sounds. You can't easily sweep the output frequency of an MMCM across a range, and I guess I could use a variable clock enable on a single high-speed clock to accomplish the same thing, but that would require rewriting significant portions of the code. And it would probably break a lot of stuff, so I'm not sure it would be worth it!
I may be missing something here, so please take this as a naïve observation rather than a proposal.
I completely agree that a potentiometer or swept clock runs into real MMCM and architectural issues. What occurred to me while reading the thread was that some of the "knob" UX might be achievable with a much simpler mechanism: a 2-pole, 4-position detented rotary switch, rather than a pot or encoder.
Electrically it would just be selecting among the existing discrete speed presets (the same thing the two speed-select switches already do today), so there's no clock sweeping or refactoring involved. Mechanically it feels like a single "speed knob," which is very period-appropriate, but logically it's still just fixed presets.
If the speed-select signals were ever exposed on a small header, this could even be entirely optional and external (case-mounted), leaving the current on-board switches and hot-switching behavior untouched. I mention it only because v3 is being finalized now and it seemed like a very low-impact way to split the difference between UX and complexity.
Happy to be corrected if I'm overlooking something.
Quote from: stepleton on January 31, 2026, 07:23:43 AMMaybe it really is a little too soon to give up on 80 MHz. If it's the CPU timing out, are there ways to suspend it or slow it down around critical access to the COP (and maybe other components too)? It looks like you would want to specifically detect clock queries as the mouse seems fine in your video. Maybe there's a pin you can keep (de-)asserted that holds the 68000 in place, or if nothing else you could have the hardware send `foo: BRA.S foo` instructions to the CPU on instruction fetch (you'll need to hold off delivering interrupts too in that case, I suppose).
I didn't really want to do anything like that for the sake of remaining completely cycle-accurate to the original Lisa. Sure, the extension of DDRA isn't accurate, but that's completely invisible to the programmer and to all the hardware except the COP, so it doesn't really matter. But stalling the 68K until the COP data is ready definitely hurts that accuracy, which doesn't quite sit right with me.
But of course, this project is for everybody, not just me, so if it's what other people want then I'll absolutely give it a shot! I'm not super familiar with how the synchronous 6800-style bus cycles work and whether you're allowed to delay VPA like you can with DTACK without the 68K immediately bus erroring, but maybe I could make it so that, after the CPU sends a command to the COP through the VIA, it doesn't actually see VPA get asserted until the data is ready. That way, whenever it's time to read the data, it's guaranteed to be there!
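If I do try it, I imagine the gist would be something like this. Completely hypothetical names, and whether the 68K (or the Lisa's bus error watchdog) tolerates an arbitrarily-delayed VPA is exactly the open question:

```systemverilog
// Hypothetical sketch of the VPA-stalling idea: hold VPA deasserted for
// VIA accesses while a COP transaction is still in flight, inserting
// wait states on the 6800-style cycle until the data is ready.
module vpa_stall (
    input  logic via_selected,  // 68K is accessing the keyboard VIA
    input  logic cop_busy,      // a COP transaction is still in flight
    input  logic vpa_n_normal,  // VPA from the existing address decode
    output logic vpa_n          // VPA as seen by the 68000 (active-low)
);
    assign vpa_n = (via_selected && cop_busy) ? 1'b1 : vpa_n_normal;
endmodule
```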
Some pretty good news: I just tested it at 75MHz and it seems to work just fine, with no COP weirdness or RAM mods needed, so at least we have that to fall back on if 80 doesn't work.
Quote from: coffeemuse on January 31, 2026, 09:34:34 AMI completely agree that a potentiometer or swept clock runs into real MMCM and architectural issues. What occurred to me while reading the thread was that some of the "knob" UX might be achievable with a much simpler mechanism: a 2-pole, 4-position detented rotary switch, rather than a pot or encoder.
Ahhh, I see! That's a really excellent idea. I'll look and see what LCSC has in stock and stick one on the board. That would be a much better interface than two switches.
Quote from: coffeemuse on January 31, 2026, 09:34:34 AMI mention it only because v3 is being finalized now
Just to be clear to everybody, v2 is the one that's being finalized right now, and v3 is probably going to be the first release version. There might still be some really minor issues with v2 that need to be cleared up, and labeling that needs to be changed as the design changes, so I don't want to commit to anything with it.
> But of course, this project is for everybody, not just me, so if it's what other people want then I'll absolutely give it a shot!
That's generous! I'm for it, but sparingly. I think this kind of trick is not uncommon with accelerators (though others on this board can say much more authoritatively). My recollection is that the "fast" Apple IIs inside of the IIGS and maybe also the IIc+ and the Apple II expansion card for Macs might do it this way too. So it's straying a bit from the Lisa tradition but still applying a time-honoured technique.
Here's one way to look at it: it might be handy to have a facility that can delay or suspend the CPU temporarily for a few reasons. There's debugging, and also if there's ever a plan to make a version of this apparatus that has an expansion port or that's used to help make expansion cards, then it could be that existing cards and new cards under development may simply be unable to cope with such a high clock rate. Working out a way to selectively slow the Lisa when it carries out certain accesses could be helpful, especially if you've got a kind of bug where it's helpful for the rest of the system to be fast.
Made-up example: you're working on a networked disk driver for the Office System for a Lisa Fujinet expansion board. The hardware can only go a certain speed, so the slowdown feature is handy. But since it's an OS driver, crashes are frequent, and you're often having to reboot the entire machine and start over. Thankfully it doesn't take all that long to get you back to where you were!
Another thing that might be handy for hardware hacking --- and perhaps an alternative to an 80 MHz switch. What about a 0 MHz switch? Some parts of the Lisa might not like it, but if not, if you could freeze it in its tracks, it could be handy. Maybe even have a pin on the board that's ordinarily pulled down to 0V, but make it logic-high and the Lisa stops right where it is... with another pin right next to it for stepping. If triggerable by a logic analyser then that could be very helpful indeed. OK, this is getting a bit elaborate, but it's fun to think about :-)
So even if not part of the "main" Lisa FPGA programming, it could still be handy to know how to do this well.
Quote from: AlexTheCat123 on January 30, 2026, 10:56:23 PMUnless anyone else has any suggestions
Since you've covered the maximum compatibility option, I suggest aiming for a maximum performance option. eg. a setting for maximum workable clock rate, overclock the COPS by say 2x or 4x so it generally 'works' with the caveats that the RTC is wrong and Lisa keyboard/mouse artifacts will occur until someone tweaks the COPS code to account for the faster COPS clock. I suppose that means having a duplicate COPS ROM that is switched in with the expectation that there will someday be two versions. [shrug]
Quote from: stepleton on January 31, 2026, 01:20:00 PMWhat about a 0 MHz switch? Some parts of the Lisa might not like it, but if not, if you could freeze it in its tracks, it could be handy.
That would be a great way to troubleshoot real Lisa CPU boards. Regrettably, the original NMOS 68000 has a minimum clock speed, so one would need to swap in the CMOS version eg. MC68C000 which will operate down to 0 MHz.
Quote from: stepleton on January 31, 2026, 01:20:00 PMAnother thing that might be handy for hardware hacking --- and perhaps an alternative to an 80 MHz switch. What about a 0 MHz switch? Some parts of the Lisa might not like it, but if not, if you could freeze it in its tracks, it could be handy. Maybe even have a pin on the board that's ordinarily pulled down to 0V, but make it logic-high and the Lisa stops right where it is... with another pin right next to it for stepping. If triggerable by a logic analyser then that could be very helpful indeed. OK, this is getting a bit elaborate, but it's fun to think about :-)
Quote from: sigma7 on January 31, 2026, 02:32:10 PMSince you've covered the maximum compatibility option, I suggest aiming for a maximum performance option. eg. a setting for maximum workable clock rate, overclock the COPS by say 2x or 4x so it generally 'works' with the caveats that the RTC is wrong and Lisa keyboard/mouse artifacts will occur until someone tweaks the COPS code to account for the faster COPS clock. I suppose that means having a duplicate COPS ROM that is switched in with the expectation that there will someday be two versions. [shrug]
Wow, these two answers couldn't be any more different! We've got "so fast that it breaks things" and "literally no clock at all", and between the two, I'm probably going to lean towards the 0MHz clock. Pushing the clock even higher at the expense of the COP worries me a bit because I'm not even sure how to test certain things without a keyboard, and I bet a lot of other things will begin to break as I go past 80MHz. Plus, there's no guarantee that anyone will ever fix the COP, so there's a good chance that most people would never end up using this mode. Whereas 0MHz is pretty darn easy to do and could have a wide set of use cases; just wire a switch to the OE pin on the clock mux, which is currently tied low (always enabled).
Given that 75MHz seems to work okay, I'll probably try and find a 5-position rotary switch, with 0, 20, 40, 60, and 75MHz positions. And then I could add a single step pin too; just run an edge detector off the master 125MHz sysclk, look for rising edges on that pin, and let a single cycle of the DOTCK through the clock mux whenever an edge is detected.
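Something like this is what I have in mind for the freeze/step logic (placeholder names, and a real version would need to make sure the enable lines up with a whole DOTCK period):

```systemverilog
// Rough sketch of the freeze + single-step idea: a BUFGCE gates the dot
// clock, the freeze switch holds the enable low, and a rising edge on
// the step pin lets (roughly) one DOTCK period through.
module dotclk_step (
    input  logic sysclk,      // 125MHz master clock
    input  logic dotclk_raw,  // output of the speed-select mux
    input  logic freeze,      // the proposed 0MHz switch
    input  logic step,        // single-step pin
    output logic dotclk       // gated dot clock to the rest of the Lisa
);
    logic step_d, step_pulse, ce;

    // Edge detector on the step pin, clocked by sysclk
    always_ff @(posedge sysclk) begin
        step_d     <= step;
        step_pulse <= step & ~step_d;
    end

    // Run freely when not frozen; otherwise enable for one request.
    // (A real version would synchronize ce into the DOTCK domain and
    // hold it for exactly one DOTCK period.)
    always_ff @(posedge sysclk) begin
        if (!freeze)         ce <= 1'b1;
        else if (step_pulse) ce <= 1'b1;
        else                 ce <= 1'b0;
    end

    BUFGCE gate (.I(dotclk_raw), .CE(ce), .O(dotclk));
endmodule
```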
I'm compiling LOS at 75MHz right now to see how long it takes, and so far it's made it through all the apps (including the Desktop Manager) in a little over an hour. A significant improvement from the stock Lisa!
Alright, I think I'm ready to order the v2 boards! I made quite a few changes, and I've attached renderings of the new design. If you're not super familiar with the v1 board (and even if you are), it might be kind of tough to spot the changes visually, but trust me, there are quite a few of them! If you're curious exactly what they all are, go check one of my posts from a couple weeks back where I lay them all out.
The price for the minimum quantity of two fully-assembled boards is about $575, so not horrible, but then shipping (about $100) and tariffs (about $275) come in and raise the price to over $900, so it ends up being a lot. If anyone's still willing to donate anything to cover part of the cost, I would be very grateful! You can find me on PayPal at paypal.me/alexthecat123 if you're interested in contributing anything. Don't feel obligated to though; I wouldn't even be asking if people hadn't already expressed interest in giving some money in the past!
And following up from the previous post: the entirety of LOS (including LisaGuide and running PACKSEG to pack all the files for the installer disks) compiled in 3 hours and 10 minutes, which felt lightning fast to someone who's used to the stock Lisa's compile speeds.
Also, remember how I said that the Lisa would work with the Floppy Emu, but not real floppy drives? Well, I figured out why! My real floppy drive (which I thought was functional) was shorting 12V to the RDA (read data) line, and obviously the Lisa had no idea what to do with this. It's a miracle that it didn't fry the electronics on the floppy drive, and even more of a miracle that it didn't kill the FPGA. 12V is muuuuch higher than the 3.3V limit that the FPGA's datasheet specifies!!! After fixing the floppy drive, real floppies now work perfectly fine!
The main things left to do at this point are:
- Get the serial ports working. I think the SCC is completely dead on the current PCB; some of the changes on the v2 board are really going to help with this!
- Improve floppy drive reliability and get external ProFiles working. Both of these problems can be mostly or entirely blamed on the v1 board's terrible level shifters.
- Upgrade the HDMI subsystem from 1080p30 to 1080p60. This may not be possible; the pixel clocks required for 1080p60 are higher than the maximum supported clock speeds of the OSERDES primitives inside the Artix-7 FPGA, so it's a genuine hardware limitation. Whenever I try it, the design obviously fails timing, but sometimes I'm actually able to get a stable image on the screen. It's really flaky though and will randomly start tearing the picture and glitching out the sound with weird clicks and pops, so we're truly pushing the chip to its limits. I'll mess with it a bit more, but we might be stuck with 30FPS video. Hopefully people don't mind too much; it really bugs me, but it's likely the best we can reliably do!
- Fix a few reliability issues at higher clocks. MacWorks Plus sometimes randomly shuts down at higher clock speeds (60MHz and above), so we might still have intermittent COP problems of some kind that only show up there.
- Get Xenix working. Xenix will sometimes boot, but it's super picky about ProFile timings and fails ProFile handshakes so often that it frequently hangs. I've gotten to the login prompt once or twice though. And don't even think about running it higher than 20MHz; then the ProFile code completely breaks! My upgrade of the SD card slots from SPI to SDIO on the v2 boards will likely help with this too.
- Get UniPlus working. It always seems to initially boot just fine, but kernel panics right after it shows the "welcome" screen that displays your system's configuration. I'm hoping that this is because of the broken SCC, so I'm not going to worry too much until we get the SCC issues fixed and can try again.
- Get GEM working. I haven't tried booting it from hard disk, but at least when you boot it from floppy, it reads a bunch of tracks and then hangs. It never displays the fish icon or anything, so I guess it's failing pretty early.
- Figure out a better solution to the parity issue. I'm torn between implementing the full parity RAM using the FPGA's internal block RAM versus hard-coding the RAM board to only report bad parity when it detects the boot ROM (or LisaTest) performing the "write wrong parity" test. We'll see.
- Get my ESP32-based floppy emulator working! But this is something I can worry about after I've released the LisaFPGA boards; even if I fail, the boards are still perfectly useful as-is.
I think that's it; let me know if there's anything I've promised or mentioned in the past that I'm forgetting. And if you're willing to donate to the PCB order, then thanks again; I really appreciate it!
Wow, I've gotten over $700 in donations since making my post last night. That's way more than I could've ever imagined; thank you so much to everyone who pitched in!
I just placed the order a couple hours ago, but I have some slightly unfortunate news about the timeline of things. Unbeknownst to me until I was about to place the order, JLC just closed down their 6-layer production line for the Chinese Spring Festival, and they won't even start fabricating my board until the 24th. So it probably won't be here until March 10th or so. If only I had placed my order two days earlier; then they would've put it into production now instead of waiting. But I just didn't know at the time. This is also a really big problem because I ordered a 6-layer board that I need for one of my classes at the same time, and I'm not going to be able to get my assignment done before the deadline if they wait that long!
Having to wait on the LisaFPGA boards is just an inconvenience, but the board I need for school is actually really important, so I paid extra for expedited manufacturing on both boards to get my school one here fast because their site said that it would go into production before the 24th if I did that. But then on my order history page, it ended up saying that it wouldn't go into production until the 24th again!
I reached out to their customer service (who are really good by the way) and it turns out that this was a bug in the site; you weren't supposed to be able to pay extra to get an order in before the holiday if you placed your order after the 2nd. Given that it was a bug on their end, they're going to try their best to honor what the site said and squeeze me in before the factory closes, but if not then they'll just refund me the expedited fee. So we might end up getting the LisaFPGA boards earlier than March, but it's all just going to depend on whether they're able to get me into production quickly.
If only it was a 2-layer or 4-layer board; those production lines are only closed from the 16th to the 19th and if I placed the order now, it would probably be done well before then anyway! But I don't even want to imagine having to design something around a super-dense FPGA with only 4 layers to work with...
Quote from: AlexTheCat123 on Today at 05:30:43 PMHaving to wait on the LisaFPGA boards is just an inconvenience, but the board I need for school is actually really important, so I paid extra for expedited manufacturing on both boards to get my school one here fast because their site said that it would go into production before the 24th if I did that. But then on my order history page, it ended up saying that it wouldn't go into production until the 24th again!
Quote from: AlexTheCat123 on Today at 05:30:43 PMI reached out to their customer service (who are really good by the way) and it turns out that this was a bug in the site; you weren't supposed to be able to pay extra to get an order in before the holiday if you placed your order after the 2nd. Given that it was a bug on their end, they're going to try their best to honor what the site said and squeeze me in before the factory closes, but if not then they'll just refund me the expedited fee. So we might end up getting the LisaFPGA boards earlier than March, but it's all just going to depend on whether they're able to get me into production quickly.
Fingers crossed that JLC's customer service comes through and gets you squeezed in before the holiday shutdown. That's a rough spot to be in with your school project deadline on the line. Hopefully the fact that it was a bug on their end works in your favor.
At least waiting on the LisaFPGA boards is just a patience test. Here's hoping good luck is on your side and the boards make it out before the factory closes.