A Lisa Inside An FPGA

Started by AlexTheCat123, September 04, 2025, 05:20:35 PM

coffeemuse

Quote from: AlexTheCat123 on January 12, 2026, 04:07:24 PM
All this to say that the relocation of the buttons will probably be an issue for further down the road when I'm sure that they'll actually be on the board to begin with. And if the floppy emulator ends up working out, then I'd probably stick the buttons and OLED (along with the power, reset, and other control switches maybe) on a little breakout board that plugs into the mainboard with a ribbon cable. Then you can just screw that to the top of your (presumably 3D-printed) enclosure. How does that sound?

That sounds like a great approach. A breakout board with a ribbon cable would make enclosure design much more flexible. I wasn't trying to request any design changes; just wanted to confirm what enclosure considerations were on the radar.

I've had a 3D-printed case with 1980s-era aesthetics in the back of my mind, so that setup would work really well down the road. Good luck with the SDIO testing on the next iteration of boards. I am looking forward to seeing how the floppy emulation progresses.

AlexTheCat123

Quote from: coffeemuse on January 12, 2026, 04:54:57 PM
Good luck with the SDIO testing on the next iteration of boards. I am looking forward to seeing how the floppy emulation progresses.

Thanks! I really hope I can get it working; it would be nice to have an open-source alternative to the Floppy Emu out there. Granted, mine will probably never support 1.44MB disks, but at least it would do 400K, 800K, and maybe/hopefully even Twiggies.

Jacexpo

Is there such a thing as a virtual twiggy?

AlexTheCat123

Quote from: Jacexpo on January 14, 2026, 10:16:44 PM
Is there such a thing as a virtual twiggy?

As far as I know, there aren't any Twiggy emulators out there right now. LisaEm is capable of using Twiggy images, but it's pretty hit-or-miss, at least in my experience.

And to be clear, my first priority is Sony emulation, with Twiggy coming later. And that's only if I can even get Sony emulation working to begin with! I just want to set expectations low here so that nobody's disappointed if I fail miserably. Making a floppy emulator isn't quite as easy as a ProFile emulator!

TorZidan

Don't want to hijack Alex's thread, just want to add more info:

I added support for Twiggy disk images in LisaEm last year; it wasn't available before that. It works sufficiently well and supports both Twiggy drives: for example, I am able to install LOS 1.0 from Twiggy floppies, which involves using both drives. More info on how to use it: https://github.com/arcanebyte/lisaem/pull/22

Also, there is already an open-source alternative to the FloppyEmu, though it's for the Apple II only: https://github.com/vibr77/AppleIIDiskIIStm32F411 . Supporting the Macintosh and Lisa eventually is on their roadmap once the Apple II support works sufficiently well (it does). You can follow it at https://www.applefritter.com/content/apple-ii-disk-emulator-using-stm32 .


AlexTheCat123

Wow, good to know! I had no clue anybody else was developing an emulator.

AlexTheCat123

I'm getting closer to finishing the v2 board! I finally got through all the level shifters; upgrading those took forever because it required rerouting pretty significant chunks of the board.

I've also added Twiggy support now, using the breakout idea we discussed earlier. I've attached a pic of what the breakout looks like. I really wish I could've used a shrouded header for the 4-pin connector that carries the extra Twiggy signals, but they just don't have any in stock. And I know it looks really bad, but I decided to just autoroute this board since I don't care about this one that much compared to the main PCB. Maybe I'll go back and manually route it later...


AlexTheCat123

Aside from just checking everything over, I think the v2 board might be done now!

I ended up adding trimpots to all the LEDs instead of fixed series resistors so that I can perfectly dial in the brightness for each one. Right now, some are way too bright and others are on the dimmer side, so this should be a good way to even them out. And then I can measure the pot resistances to swap them out for fixed resistors in the final design.

I also thought that something was wrong with the TL074 audio amp/contrast circuit because it was producing super quiet audio through the built-in speaker, and no contrast signal at all. Not that it mattered a ton in my testing because I'm also doing audio and contrast over HDMI, but many people who will have their board connected to a monitor as opposed to a TV won't be able to get HDMI sound to begin with, so it's still really important for the speaker amp and onboard speaker to work. It turns out that nothing was wrong with it though; I had just forgotten to supply 12V to it and was only sending -12V! The v2 board now has an onboard step-up converter to generate 12V from the 5V USB power instead of requiring 12V to be provided externally from the barrel jack (which has now been removed entirely); not sure why I didn't do it like this before.

Oh yeah, I also added a "scanlines" jumper that will hopefully allow you to enable or disable simulated scanlines in the HDMI output. I accidentally enabled this feature on my RGBtoHDMI and thought it made the Lisa display look pretty cool, so might as well try to add it here. It looks like all it's doing is blacking out every third row (regular Lisa) or every second row (screen-modded Lisa) of pixels in the HDMI framebuffer, so that should be pretty easy to implement!
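
In case anyone wants to see what I mean, here's roughly the shape of that logic. This is just a sketch with made-up signal names rather than the actual RTL from the core, but the idea is the same: count rows and black out every third (or every second) one when the jumper is set.

    module scanline_fx (
        input  wire        pixel_clk,
        input  wire        new_frame,     // pulses at the start of each HDMI frame
        input  wire        new_line,      // pulses at the start of each HDMI row
        input  wire        scanlines_en,  // the new jumper
        input  wire        screen_mod,    // 1 = screen-modded timing
        input  wire [23:0] fb_pixel,      // pixel read out of the framebuffer
        output wire [23:0] hdmi_pixel
    );
        reg [1:0] row_mod3 = 2'd0;  // counts 0, 1, 2 down the frame
        reg       row_odd  = 1'b0;

        always @(posedge pixel_clk) begin
            if (new_frame) begin
                row_mod3 <= 2'd0;
                row_odd  <= 1'b0;
            end else if (new_line) begin
                row_mod3 <= (row_mod3 == 2'd2) ? 2'd0 : row_mod3 + 2'd1;
                row_odd  <= ~row_odd;
            end
        end

        // Black out every 3rd row normally, every 2nd row with the screen mod.
        wire blank = scanlines_en && (screen_mod ? row_odd : (row_mod3 == 2'd2));
        assign hdmi_pixel = blank ? 24'h000000 : fb_pixel;
    endmodule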

The v2 board will definitely still be in the prototype phase (although a lot more polished than v1), but hopefully v3 will be the final release board. We'll see!

Of course, if anybody sees any issues or areas for improvement in these pictures of the v2 board, let me know and I'll get it changed!

stepleton

Quote from: AlexTheCat123 on January 15, 2026, 06:55:07 PM
And I know it looks really bad, but I decided to just autoroute this board since I don't care about this one that much compared to the main PCB.

tbh I kind-of like it when you find the odd little autorouted (or wire-wrapped or perf-boarded) interposer amidst otherwise thoughtfully-designed hardware --- it says to me "there's a story behind this"...

With the disclaimer that free suggestions are worth what you pay for them... here are two user-interface thoughts for V3, both probably pretty obvious and neither one a showstopper:

(1) It would be nice if it didn't matter which USB port received the mouse and which received the keyboard.
 
(2) The bottom edge of the board as depicted feels like the user-facing side of the machine, so it makes sense for that side to have the stuff the user is most often going to mess with. To me, this would argue for putting the floppy and hard drive SD slots on or near that edge if you can. If you need to free up room along the edge, maybe one of those double-stack USB sockets might help?

sigma7

Quote
With the disclaimer that free suggestions are worth what you pay for them...
Ditto

Quote
(2) The bottom edge of the board as depicted feels like the user-facing side of the machine, so it makes sense for that side to have the stuff the user is most often going to mess with.

This seems like great advice, and got me thinking, what would I move where... Lisa mouse port to back perhaps and....

Then it occurred to me that the iterations of the layout for ports and ancillary hardware might be separated from the iterations of the core functionality, reducing the cost for incremental changes to one or the other.

ie. determine a suitable boundary (eg. low speed and fewest signals) for separating legacy ports from the FPGA and RAM, and implement a main board/daughterboard. The cost of an interconnect is substantial, but when the cost of new boards is so high.... depends on your confidence level as to how many iterations you expect and if early versions are complete write-offs or still useful. Lowering the cost of changing the legacy ports layout could make it more practical/economical to have different final configurations too.

Interconnects generate problems as well as increased cost, so quite possibly not a good idea (see top paragraph).

For the floppy, consider making the layout compatible with a 26 pin header that has two pins removed so a 20 pin plug will still fit.

... many decisions

AlexTheCat123

Quote from: stepleton on January 16, 2026, 04:28:54 AM
(1) It would be nice if it didn't matter which USB port received the mouse and which received the keyboard.

A very good idea! For some reason, I dismissed this as insanely difficult before, but it's really nothing more than muxing a few signals. I'm working on it now.
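
For anybody curious what "muxing a few signals" ends up looking like, it's basically this. Just a rough sketch; I'm making up the interface between the two USB host cores and the rest of the design here, but the real version is the same idea: figure out which port enumerated the keyboard and swap the two report streams accordingly.

    module usb_port_swap (
        input  wire       portA_is_kbd,  // from port A's descriptor parsing
        input  wire [7:0] portA_data,    // HID report bytes from port A
        input  wire       portA_valid,
        input  wire [7:0] portB_data,    // HID report bytes from port B
        input  wire       portB_valid,
        output wire [7:0] kbd_data,      // whichever port has the keyboard
        output wire       kbd_valid,
        output wire [7:0] mouse_data,    // and the other one gets the mouse
        output wire       mouse_valid
    );
        assign kbd_data    = portA_is_kbd ? portA_data  : portB_data;
        assign kbd_valid   = portA_is_kbd ? portA_valid : portB_valid;
        assign mouse_data  = portA_is_kbd ? portB_data  : portA_data;
        assign mouse_valid = portA_is_kbd ? portB_valid : portA_valid;
    endmodule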

Quote from: stepleton on January 16, 2026, 04:28:54 AM
(2) The bottom edge of the board as depicted feels like the user-facing side of the machine, so it makes sense for that side to have the stuff the user is most often going to mess with. To me, this would argue for putting the floppy and hard drive SD slots on or near that edge if you can. If you need to free up room along the edge, maybe one of those double-stack USB sockets might help?

My only concern there is that, especially once I switch to SDIO, those SD cards could be running at 50+ MHz (I think the ESP can even go up to 100 on SDIO). And I'm worried that the long traces would lead to signal integrity issues. Whereas right now, the slots are just about as close to the ESPs as they can get.

Quote from: sigma7 on January 16, 2026, 01:41:45 PM
Then it occurred to me that the iterations of the layout for ports and ancillary hardware might be separated from the iterations of the core functionality, reducing the cost for incremental changes to one or the other.

ie. determine a suitable boundary (eg. low speed and fewest signals) for separating legacy ports from the FPGA and RAM, and implement a main board/daughterboard. The cost of an interconnect is substantial, but when the cost of new boards is so high.... depends on your confidence level as to how many iterations you expect and if early versions are complete write-offs or still useful. Lowering the cost of changing the legacy ports layout could make it more practical/economical to have different final configurations too.

I thought about this way earlier on in the design process, and opted against it just because it was sort of hard to separate what would go on the peripheral board versus the core board. The boundary isn't super clear-cut. Maybe I should have done this, but at this point I think it would require such a substantial redesign that I'll probably opt not to for now.

Quote from: sigma7 on January 16, 2026, 01:41:45 PM
For the floppy, consider making the layout compatible with a 26 pin header that has two pins removed so a 20 pin plug will still fit.

I briefly considered this too, instead of the 20 pin + 4 pin strategy, but it would require making the 26-pin connector incompatible with Twiggies, and I was trying to avoid the situation where someone might get confused and plug a Twiggy straight into the board expecting it to work. Any ideas to prevent this?

Also, even if I were to remove 2 pins to allow a 20-pin connector to plug in, wouldn't the keying slot still be a problem? I don't think the slot on a 26-pin header would line up with the notch on a 20-pin connector that's plugged into one side of the header.

AlexTheCat123

Sorry for the lack of updates; some weird things have been happening with the FPGA lately. Everything was working so great, and then it all just started breaking. First the floppy controller started acting up, and every time I would resynthesize to try and fix it, the problem would either change or go away entirely. And then other things started breaking and behaving intermittently too.

But I think I've finally realized what the problem is. Over the course of the project, I've marked a bunch of signals for debugging using the MARK_DEBUG attribute, which causes the signals to be preserved during synthesis and prevents certain optimizations from taking place. I thought I had been unmarking signals whenever I didn't need to debug them anymore, but clearly not, because I checked and there were nearly 1,500 signals that were marked for debugging! With that many signals being excluded from optimization, weird timing things are bound to happen, so I've been going back through and unmarking as many of them as possible now. And as I do that, everything seems to be starting to work properly again!
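
For anyone who hasn't used it, MARK_DEBUG is just an attribute that you stick on a net in the RTL so that Vivado preserves it through synthesis for the logic analyzer. Here's what that looks like (with an invented signal name); unmarking a signal is nothing more than deleting the attribute again.

    // Vivado keeps this net through synthesis so the ILA can probe it,
    // which also blocks a bunch of optimizations on everything driving it.
    (* mark_debug = "true" *) wire [15:0] fdc_debug_state;

    // Unmarked: just an ordinary net that the tools are free to optimize.
    wire [15:0] some_other_state;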

Clearly something still isn't quite right, even as I remove the MARK_DEBUG attributes, because I just tested with a real floppy drive for the first time and it isn't behaving properly. So obviously, there's some big difference between the real drive and the Floppy Emu that I need to track down.

Whenever I hit a dead end on one issue, I always try to pivot to another issue and then come back to that one later in the hopes that I'll have new ideas to resolve it, and that's exactly what I did here. I pivoted from trying to get a real floppy drive working back over to trying to get overclocking working. Last time I tried it, I didn't understand the physical arrangement of clock buffers and multiplexers on the chip as well as I do now, and my better knowledge the second time around allowed me to successfully implement the clock mux between 10MHz, 20MHz, 40MHz, and 60MHz dot clocks.
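
For the FPGA people: the mux itself ends up being a little tree of glitch-free clock buffers. Something along these lines, assuming 7-series-style BUFGMUX primitives and with approximate clock and switch names (the real version has a bit more around it):

    module dotclk_mux (
        input  wire       clk_10mhz,   // all four generated by an MMCM
        input  wire       clk_20mhz,
        input  wire       clk_40mhz,
        input  wire       clk_60mhz,
        input  wire [1:0] speed_sw,    // the speed switches on the board
        output wire       dotclk
    );
        wire dotclk_lo, dotclk_hi;

        // Cascaded glitch-free 2:1 clock muxes to make a 4:1.
        BUFGMUX mux_lo  (.I0(clk_10mhz), .I1(clk_20mhz), .S(speed_sw[0]), .O(dotclk_lo));
        BUFGMUX mux_hi  (.I0(clk_40mhz), .I1(clk_60mhz), .S(speed_sw[0]), .O(dotclk_hi));
        BUFGMUX mux_top (.I0(dotclk_lo), .I1(dotclk_hi), .S(speed_sw[1]), .O(dotclk));
    endmodule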

This meant that I could pick between the clocks using the switches on the board instead of having to hard-code one at build time like before, but the weird problems at 40MHz and 60MHz that were present the last time I tried it were unfortunately still there. For anyone who didn't catch the posts where I talked about those issues, basically the speaker would just randomly beep while LOS was running, you couldn't read the clock from the COP (LOS would say the date/time was invalid and the Workshop would actually tell you that the procedure call to get_time() failed), and the COP would never turn the Lisa off after the screen dimmed to black during shutdown.

Obviously the last two problems point toward the COP, but I wanted to address the speaker issue first since it seemed a bit more perplexing. So I hooked a virtual logic analyzer to the FPGA and started looking at the 68K bus to see if it was actually commanding the speaker to beep or if there was just some kind of signal noise that was inadvertently causing the beeps. This is why I asked Claude to make that disassembler I mentioned in another thread. And it turns out that the 68K was actually commanding the VIA to beep the speaker, so I dug into the LOS source code to figure out how the NOISE routine worked (LIBHW/MACHINE.TEXT). Working backwards from the value the 68K was putting into the VIA timer 2 register, I was able to determine that the wavelength of the tone was 4545, so now it was just a matter of looking through the entirety of the LOS source to see what called the beep routine with that wavelength.

There were two callers using that wavelength: one in LIBHW/KEYBD.TEXT and one in LIBHW/TIMERS.TEXT. The one in KEYBD will beep the speaker if the keyboard input queue ever fills up, but the one in TIMERS made a lot more sense in this context. That one beeps the speaker if the Lisa ever fails to read the clock from the COP, which lines up perfectly with the clock errors that we were getting!

So now the question is: why are we failing to read the clock? Whatever the reason, it's probably the same reason why we're failing to shut the system down. Both of those operations involve writing a command to the COP, whereas getting keyboard/mouse movements (which both worked fine) only requires reading from the COP, so it sounds like we have some kind of write issue. So I looked at more signals and slowly figured out what was going on.

The COP puts out a signal called READY that goes into either CA1 or CA2 (I forget which) of the keyboard VIA. READY is high most of the time, but goes low briefly every once in a while to signify that the COP is ready to receive a command from the VIA. If the 68K wants to send the COP a command, it's expected to wait for READY to go low, and then shove the command out onto the COP data bus as soon as READY drops.

That all makes sense, but this is where it gets weird. You'd think that it would be safe to remove the command from the COP's bus as soon as it pulls READY high again, but no, apparently you have to leave it there for a little while longer (2 or 3 extra READY-lengths) after READY is deasserted. Otherwise, the COP doesn't see your command and won't respond with any data. At the standard 20MHz dot clock, the 68K was leaving the command on there for long enough for the COP to see it, but at the higher dot clocks, it was removing the command too fast and the COP would never respond with the clock data, hence the beeps and clock errors! Same goes for the power-off command; the COP never received it and the system never turned off.

Luckily, the solution is simple. The VIA register that determines whether the VIA is outputting over the COP bus is DDRA (Data Direction Register A), and the entire issue here is simply that DDRA is being turned back from an output to an input too quickly. So I just wrote some simple logic that extends the falling edge of DDRA a bit; now DDRA stays in output mode for a little while longer even after the 68K commands it to go back to input mode. I know it sounds weird to say "falling edge" since DDRA is a full register with 8 bits in it, but I'm treating all of DDRA as a single bit since all of the data bits on PORTA will always be going in the same direction.
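
If anyone wants specifics, the fix boils down to something like this. The signal names are invented and the hold time is just a placeholder (the real value is tuned to a few READY periods), but this is the whole trick: stretch the "port A is an output" state for a while after the 68K clears it.

    module ddra_stretch #(
        parameter HOLD_CYCLES = 2000   // placeholder: a few READY periods' worth
    )(
        input  wire clk,
        input  wire ddra_out,           // 1 = 68K has set port A to output
        output wire ddra_out_stretched  // what actually drives the COP bus direction
    );
        reg [11:0] hold_cnt   = 12'd0;
        reg        ddra_out_q = 1'b0;
        wire       falling    = ddra_out_q & ~ddra_out;  // 68K just switched back to input

        always @(posedge clk) begin
            ddra_out_q <= ddra_out;
            if (falling)
                hold_cnt <= HOLD_CYCLES;
            else if (hold_cnt != 12'd0)
                hold_cnt <= hold_cnt - 12'd1;
        end

        // Keep driving the COP bus until the counter runs out.
        assign ddra_out_stretched = ddra_out | (hold_cnt != 12'd0);
    endmodule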

You might wonder why we don't get errors in the boot ROM, given that it also reads the clock and is capable of powering the system off, neither of which caused any problems. Well, the boot ROM uses a slightly different COP communications routine that already keeps the bus set to output mode for a little longer anyway, so even with the faster clock, it's still within tolerance.

Now that I've fixed that COP issue, the 20, 40, and 60MHz DOTCKs all seem to work perfectly. Unfortunately, 10MHz doesn't quite work right, and probably never will. And it's for the opposite reason from the above. Instead of us switching the COP bus from an output to an input too early like before, now the CPU is so slow to react to READY going low that it misses the READY pulse entirely and doesn't set the COP bus to an output until after the pulse is already over. There's not really a way that I can hack around that problem, short of overclocking the COP, but then that introduces more problems related to compatibility at other clock speeds. Not to mention the fact that the real time clock wouldn't be accurate anymore!

So with that in mind, I think I'm going to get rid of the 10MHz DOTCK (not sure why people would really want a 2.5MHz Lisa anyway) and try to replace it with an 80MHz DOTCK. No promises or anything, but I got it going at 60, so perhaps I can get 80 going too! Who knows, maybe even 100MHz is possible...