Designing a CPU in VHDL, Part 15: Introducing RPU

This is part of a series of posts detailing the steps and learning undertaken to design and implement a CPU in VHDL. Previous parts are available here, and I’d recommend they are read before continuing.

It’s been a while. Despite the length of time and lack of posts, rest assured a significant amount of progress has been made on my VHDL CPU over the last year. I’ve hinted at that fact multiple times on twitter, as various project milestones have been hit. But what’s changed?

First and foremost: the CPU now executes RISC-V. Its decoder, ALU and datapaths have been updated, and with that the data width is now 32 bits. The decoder accepts the RV32I base integer instruction set.

I’d been putting off multiple side-projects with my existing TPU implementation for a while. Its 16-bit nature made integrating the 32MB SDRAM into the system a rather pointless affair. I’m all for reminiscing, but I did not want to go down the rabbit hole of memory bank switching and the baggage that would entail. The toolchain for creating software was already at the limit – not the limit in terms of what could be done, but the limit of what I’m prepared to do in order to perform basic tasks. We all love bare-metal assembly, but for the sake of my own free time I wanted to just drop some C in, and I was not going to make my own compiler. I looked into creating a backend for LLVM, but that’s really just another distraction.

As a reminder, here is where we left off.

Block diagram of TPU Core

So, what’s actually changed from the old 16-bit TPU?

  • Many more registers
  • Datapath widened to 32 bits
  • Decoder and ALU updated for the RV32I ISA
  • Glue logic/datapath updated for new functions

The register file is essentially the same as in the 16-bit TPU, extended to 32 entries of 32 bits, named x0-x31. In RV32I, x0 is hardwired to 0x00000000. I’ve left it as a real register entry, but never allow a write to x0 to progress, keeping the entry always zero. The decoder and ALU are fairly standard – there are a small number of instruction formats, and where immediates are concerned there is some bit swapping here and there, but the sign bit is always in the same place, which keeps the decode logic easy to understand.
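
A minimal C sketch of that x0 write rule – a model of the behaviour described above, not the actual VHDL:

#include <stdint.h>

/* Model of the register file write rule: the x0 entry physically
   exists, but a write targeting it is never allowed to progress,
   so x0 always reads back as zero. */
static uint32_t regs[32];

static void regfile_write(unsigned rd, uint32_t value)
{
    if (rd != 0)            /* silently drop writes to x0 */
        regs[rd] = value;
}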

The last item, about datapath changes, follows on from how RISC-V branch instructions operate. TPU was always incredibly simple, which meant branching was quite an involved process when it came to function calls and attempting to get a standardized calling convention. There was no call instruction, and the number of operations required for saving a calculated return address, setting up call arguments on the stack and eventually calling via an indirect register was rather irritating. With the limited number of registers available on TPU, simplification via register parameter passing didn’t solve much, as you’d still have to save/restore register contents to the stack regardless.

RISC-V JAL instruction definition, from the ISA specification

RISC-V has several branch instructions with significant changes to dataflow compared to TPU. In addition to calculating the branch target – which is all TPU was able to do – RISC-V calculates a return address at PC+4, which is then written to a register. This means our new ALU needs two outputs: the standard result of a calculation, and the branch target. The branch target, along with the shouldBranch status output, feeds into our existing PC unit, while the result feeds the register file/memory address logic as normal.
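
In C terms, the JAL dataflow amounts to the sketch below – a model of the instruction semantics, not of the hardware:

#include <stdint.h>

/* The two ALU outputs for a JAL: the result (the return address,
   PC+4) heads to the register file, while the branch target and the
   shouldBranch flag feed the PC unit. imm is the sign-extended,
   already-decoded J-type immediate. */
typedef struct {
    uint32_t result;        /* -> register file (rd) */
    uint32_t branch_target; /* -> PC unit            */
    int      should_branch; /* -> PC unit            */
} alu_out_t;

static alu_out_t exec_jal(uint32_t pc, int32_t imm)
{
    alu_out_t o;
    o.result        = pc + 4;             /* return address, written to rd */
    o.branch_target = pc + (uint32_t)imm; /* taken unconditionally */
    o.should_branch = 1;
    return o;
}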

The new connections are shown below in yellow.

RPU block diagram

In terms of the old TPU systems: the interrupt system needs significant updating, so it is disabled for now. The design still has block ram storage and UART, and relies on the same underlying memory subsystem – which, in all honesty, is bloated and pretty poor. That memory system is currently my main focus. It’s still a single large process running at the CPU core clock, and it reads like an imperative-language function – not at all suitable for the CPU moving forward. The CPU needs to interface with various components, from the block rams and UART to SPI and SDRAM controllers. These can run at different clocks, and the signals all need to remain valid across these systems. With everything accessed as memory-mapped I/O, the memory system is hugely important, and I’ve already run into several gotchas when extending it with an SDRAM controller. More on that later.

Toolchain

As I mentioned earlier, one of the main reasons for moving to RISC-V was toolchain considerations. I have been developing the software for my ‘SoC’ with GCC.

With Windows 10, you can now use the Windows Subsystem for Linux to build Linux tools within Windows. This makes compiling the RISC-V toolchain for your particular ISA variant a very simple process.

https://riscv.org/software-tools/ has some details of how to build relevant toolchains, and Microsoft have details of how to install the Linux Subsystem.

Data Storage

With the RISC-V GCC toolchain built, targeting RV32I, I was able to write a decent amount of code to create a bootloader which lived in FPGA block ram. This bootloader looked for an SD card in the miniSpartan6+ SD card slot, and initialized the card into slow SPI transfer mode so we could get at its contents.

Actually getting at data stored on an SD card is incredibly simple, and has been written up nicely here. In terms of how the CPU accessed the SD card: I found an SPI master VHDL module, threw it into my project, and memory-mapped it so it could be used from the CPU.

#include <stdint.h>

// Register addresses for the memory-mapped SPI master. The config bit
// positions (e.g. SPI_CONFIG_REG_BIT_RESET) are defined alongside
// these, omitted here.
#define SPI_M1_CONFIG_REG_ADDR 0x00009300
#define SPI_M1_BUSY_REG_ADDR   0x00009304
#define SPI_M1_DATA_REG_ADDR   0x00009308

void spi_sd_reset()
{
	volatile uint32_t* spi_config_reg = (volatile uint32_t*)SPI_M1_CONFIG_REG_ADDR;

	uint32_t current_reg = *spi_config_reg;
	uint32_t reset_bit = 1U << SPI_CONFIG_REG_BIT_RESET;
	uint32_t reset_bitmask = ~reset_bit;

	// Multiple writes give the controller time to reset - its clock
	// can be slower than this CPU's
	*spi_config_reg = current_reg & reset_bitmask;
	*spi_config_reg = current_reg & reset_bitmask;
	*spi_config_reg = current_reg & reset_bitmask;
	*spi_config_reg = current_reg & reset_bitmask;

	// return to original, but ensure the reset bit is set (active low), 
	// in case the register was previously clobbered by some other operation
	*spi_config_reg = current_reg | reset_bit;
}

uint8_t spi_sd_xchg(uint8_t dat)
{
	volatile uint32_t* spi_busy_reg = (volatile uint32_t*)SPI_M1_BUSY_REG_ADDR;
	volatile uint32_t* spi_data_reg = (volatile uint32_t*)SPI_M1_DATA_REG_ADDR;

	*spi_data_reg = (uint32_t)dat;

	while (*spi_busy_reg != 0);

	return *spi_data_reg;
}
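
As a usage sketch: entering SPI mode follows the standard SD handshake – 74+ clocks with the card idle, then CMD0 with its fixed CRC. Chip-select handling (assumed to live in the config register) and the polling bounds here are illustrative:

// Put the card into SPI mode using the helpers above.
int sd_enter_spi_mode(void)
{
    // >= 74 clocks with MOSI high so the card wakes up ready for SPI
    for (int i = 0; i < 10; ++i)
        spi_sd_xchg(0xFF);

    // CMD0 (GO_IDLE_STATE), with its fixed, precomputed CRC7 of 0x95
    const uint8_t cmd0[6] = { 0x40, 0x00, 0x00, 0x00, 0x00, 0x95 };
    for (int i = 0; i < 6; ++i)
        spi_sd_xchg(cmd0[i]);

    // Poll for the R1 response: 0x01 means idle state, i.e. success
    for (int i = 0; i < 8; ++i) {
        if (spi_sd_xchg(0xFF) == 0x01)
            return 0;
    }
    return -1; // no (or unexpected) response
}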

A few helper functions later, the bootloader did a very simple FAT32 root search for a file called BOOT, and copied it into ram before jumping to it. It also copied a file called BIOS to memory location 0x00000000, which held a table of I/O functions, so I could fix/extend functionality without needing to recompile my “user” code.

typedef struct {
    FN_sys_console_put_stringn    sys_console_put_stringn;
    FN_sys_console_put_string     sys_console_put_string;
    FN_sys_console_setcursor      sys_console_setcursor;
    FN_sys_console_setpen         sys_console_setpen;
    FN_sys_console_clear          sys_console_clear;
    FN_sys_console_getcursor_col  sys_console_getcursor_col;
    FN_sys_console_getcursor_row  sys_console_getcursor_row;
    FN_sys_console_getpen         sys_console_getpen;
    FN_sys_clk_get_cyclecount     sys_clk_get_cyclecount;
    
    FN_spi_sd_set_clkdivider      spi_sd_set_clkdivider;
    FN_spi_sd_set_deviceaddr      spi_sd_set_deviceaddr;
    FN_spi_sd_xchg                spi_sd_xchg;
    FN_spi_sd_reset               spi_sd_reset;
    
    FN_uart_tx1_send              uart_tx1_send;
    FN_uart_rx1_recv              uart_rx1_recv;
    FN_uart_trx1_set_baud_divisor uart_trx1_set_baud_divisor;
    FN_uart_trx1_get_baud_divisor uart_trx1_get_baud_divisor;
    
//...
    
// Above functions are basic bios and provided by BIOS.c/BIN
// Below are extensions, so null checks must be performed to check 
// initialization state
    FN_pf_open    pf_open;
    FN_pf_read    pf_read;
    FN_pf_opendir pf_opendir;
    FN_pf_readdir pf_readdir;
} SYSCALL_TABLE;
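
User code then reaches the BIOS through that table at address zero. A hypothetical example of a call through it – the FN_ typedef shown is an assumed signature, purely for illustration:

typedef void (*FN_sys_console_put_string)(const char* str); // assumed

#define SYSCALLS ((const SYSCALL_TABLE*)0x00000000)

void say_hello(void)
{
    SYSCALLS->sys_console_put_string("hello from user code\r\n");

    // Extension entries may be unpopulated - null check before use,
    // as the comment in the table says.
    if (SYSCALLS->pf_open)
        SYSCALLS->pf_open("BOOT");
}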

With the FPGA set to look for this BOOT file on an SD card, testing various items became a whole lot easier. Not having to change the block ram and rebuild the FPGA configuration to run a set of tests saved lots of time, and I had a decent little setup going. However, I was severely limited by RAM – many block rams were now in use, and I still did not have enough. I could have optimized for space here and there, but with a 32MB SDRAM chip sitting on the FPGA dev board that seemed rather pointless. It was finally time to get the SDRAM integrated.

SDRAM

Getting an SDRAM controller for the chip on the miniSpartan6+ was very easy. Mike Field had already done this and made his code available. The problem was that whatever I tried, the data coming out of the SDRAM was garbage.

Or, should I say, out by one. Everything to do with computers has an out by one at some point.

Anyway, data was always arriving delayed by one request. If I asked to read data at 0x00000100, I’d get 0. If I then asked for the data at 0x00000104, I’d get the data at 0x00000100. I could tell that writing seemed fine – but reading was broken. Thankfully, discovering this didn’t take much time, due to being able to boot code off the SD card. I wrote little memory-checking binaries and executed them via a UART command line, with the debug data displayed on the HDMI out. It was pretty cool, despite me getting wound up at how broken reading from the SDRAM was.
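
Those checkers were along these lines – a minimal sketch of an address-derived pattern test; the pattern and the reporting of results are illustrative:

#include <stdint.h>

// Fill a range with a pattern derived from each word's index, read it
// all back, and count mismatches.
uint32_t mem_check(uint32_t* base, uint32_t words)
{
    volatile uint32_t* p = base;
    uint32_t errors = 0;

    for (uint32_t i = 0; i < words; ++i)
        p[i] = 0xA5A50000u ^ i;          // write address-derived pattern

    for (uint32_t i = 0; i < words; ++i)
        if (p[i] != (0xA5A50000u ^ i))   // verify it reads back intact
            ++errors;

    return errors;
}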

memory test

It took a long time to track down the actual cause of this read corruption. For most of that time I thought it was timing-based, inside the SDRAM controller. I edited the whole memory system multiple times, with no change in behavior. This should have been a red flag for me, as the issue was being caused by the one constant throughout all of these tests – the RPU core!

In RPU, the signals that comprise the input and output of each pipeline stage get passed around and moved from unit to unit each core clock. This is where my issue was – not the “external” memory controller!

Despite my CPU being okay with waiting for the results of memory reads from a slower device, it was not set up to correctly forward that data on when the device eventually signalled that the request had completed. A fairly basic error on my part, made harder to track down by the fact that the old TPU design – and thus RPU – had memory requests going through the ALU, then the control unit, and then a secondary, smaller memory controller which didn’t really do anything apart from add further latency to the request.

I fixed this, which immediately made all the SDRAM memory space available. Hurrah.

It’s rather annoying this issue did not present itself sooner. There are already many “slow” memory-mapped devices connected to the CPU core, but they generally work off of FIFOs – so despite the devices being slow, they are accessed via status lines saying whether there is data in the FIFO, and data registers holding that data. These reads are actually pretty fast; the latency is eaten elsewhere, in software spinning on a memory read. The SDRAM read latency here, however, was in the 20-cycle range at 100MHz – far more than anything previously tested.
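
In C, that FIFO-backed pattern looks roughly like the following – the register addresses here are hypothetical stand-ins, not the real memory map:

#include <stdint.h>

#define UART_RX1_AVAIL_ADDR 0x00009200  // hypothetical status register
#define UART_RX1_DATA_ADDR  0x00009204  // hypothetical data register

uint8_t uart_rx1_blocking(void)
{
    volatile uint32_t* avail = (volatile uint32_t*)UART_RX1_AVAIL_ADDR;
    volatile uint32_t* data  = (volatile uint32_t*)UART_RX1_DATA_ADDR;

    // Each status read completes quickly; the waiting happens here in
    // software, one short read at a time - unlike a single ~20-cycle
    // SDRAM transaction.
    while (*avail == 0)
        ;

    return (uint8_t)*data;
}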

Having the additional memory was great. And now that we have a full GCC toolchain, I could grab the FatFs library and have a fully usable filesystem mounted from the SD card. Listing directories and printing the contents of files felt like quite a milestone.
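
A directory listing with FatFs comes out roughly as below, assuming the usual diskio.c glue is wired up to the SPI SD functions:

#include "ff.h"  // FatFs

// List the root directory, printing each name via the supplied console
// function. Error handling is trimmed for brevity.
void list_root(void (*print_string)(const char*))
{
    FATFS fs;
    DIR dir;
    FILINFO fno;

    if (f_mount(&fs, "", 1) != FR_OK)    // mount the SD card volume
        return;

    if (f_opendir(&dir, "/") != FR_OK)
        return;

    // f_readdir returns an empty name when the directory is exhausted
    while (f_readdir(&dir, &fno) == FR_OK && fno.fname[0] != '\0') {
        print_string(fno.fname);
        print_string("\r\n");
    }

    f_closedir(&dir);
}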

Next steps

So that’s where we are at. However, things are changing. I have switched over to the Xilinx Spartan 7, using a Digilent Arty S7 board (the larger XC7S50 variant), and I’m currently porting my work over to it.

If you saw my last blog post, you’ll have seen that HDMI out works, at least, and now I’m porting the CPU over, using block rams and the SD card for memory/storage. Last will be a DDR3 memory controller. 256MB of 650MHz memory gives much more scope for messing up my timing. It should be great fun!

I hope to document progress more frequently than over the past year. These articles are my project’s documentation, and provide an outlet to confirm I actually understand my solutions, so the lack of posts has made things all the more disjointed. The next post shall be soon!

Thanks for reading. Let me know of any queries on twitter @domipheus.

HDMI over Pmod using the Arty Spartan 7 FPGA board

tl;dr: This post shows that driving DVI-D over an HDMI cable, directly connected to the high-speed Pmod connector of Digilent’s Arty S7 board, is very much possible – even at high resolution.

I’ve been working away on my RISC-V FPGA-based computer ‘kit’, which is based on my VHDL CPU, ported to RISC-V. I wanted a new development board with faster RAM, and found it hard to find boards with DDR3 memory, a large enough FPGA, an SD card interface, and HDMI out.

The SD card was not really a problem – it’s low speed, and you can just connect it with slow SPI I/O. HDMI is certainly considered high speed – bandwidth across the four serial channels tops 4.4Gbps – but is driving HDMI through basic I/O interfaces possible?

It most certainly is!

A caveat, though: there is no protection circuitry. Do this at your own risk 🙂

The Digilent Arty S7 board can be had for sub-£100, and my XC7S50 variant cost £119 shipped. It has DDR3, but no on-board HDMI. However, it turns out you can get perfectly acceptable (for my needs, anyway) output from the high-speed Pmod I/O ports. This will go very nicely with the small USB-powered HDMI screens you can find, which are very handy.

Image showing the development board connected via breadboard wires to an HDMI connector breakout board, which is driving a small HDMI display.

I have put a 720p output test on github. It should load in Xilinx Vivado out of the box. You need to connect an HDMI cable to the Pmod, by either splicing a cable, or getting something like the Adafruit breakout board (seen above). It connects to Pmod JA, circled.

Image showing which I/O port to use.

In the constraints file for the project, the Pmod pins are specified to use the TMDS_33 standard. The pinout is defined as follows:

A diagram showing which Pmod pins serve which HDMI purpose.

The VHDL code in the example uses a simplified 1280x720x60 pixel clock of 75MHz – not the 74.25MHz the standard requires. Because of this, some TVs/monitors may not accept the signal. It runs fine on my Acer XF240H and Samsung LU28E590DS using a 1m cable. I have not tried a television – they tend to be pickier. You can get much closer to 74.25MHz by chaining clock generators, but I have not done that in this instance. The refresh rate reports as 61Hz with this clock, a direct result of it being out of spec (the arithmetic is below).

If you want to learn more about how my example project generates an HDMI/DVI-D signal, you can find details here, as part of my Designing a CPU in VHDL blog series.

Picture showing a monitor displaying a 720p signal.

A video of it in action, using timings applicable to my 800×480 module.

You can change the pixel clock and the VGA timings (sync starts, ends, porches) in the code to generate different resolutions. For 1080p60 the pixel rate is supposed to be 148.5MHz, but again my monitor will accept a rate of 150MHz, and shows 61Hz.
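
The reported refresh rates fall straight out of the arithmetic: pixel clock divided by the total raster size, counting blanking. A quick check in C, using the standard CEA raster totals:

// Refresh rate = pixel clock / (clocks per line * lines per frame).
// Standard totals: 720p is 1650 x 750, 1080p is 2200 x 1125.
double refresh_hz(double pixel_clock_hz, int h_total, int v_total)
{
    return pixel_clock_hz / ((double)h_total * (double)v_total);
}

// refresh_hz(74.25e6, 1650,  750) = 60.0Hz - the spec rate
// refresh_hz(75.0e6,  1650,  750) = 60.6Hz - reports as 61Hz
// refresh_hz(150.0e6, 2200, 1125) = 60.6Hz - again, 61Hz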

Even 1440p30 is possible. 1440p60 did not work, but I didn’t try hard to get a more accurate pixel clock in that instance. When you get to higher clocks, you can start to hit timing constraint issues: at 1080p and 1440p I had some timing failures listed in the implementation report, but they did run. If you were using this in a real system, you’d have to fix those timing issues. That’s out of the scope of this blog, though 🙂

So there you have it. You really can bodge HDMI/DVI-D output directly through a Pmod!

Thanks for reading. Let me know what you think on twitter @domipheus.

Porting my VHDL Character Generator to Spartan3: Reducing clock speeds and pipelining

This is an article on porting my VHDL character generator from a Xilinx Spartan6 device to a Spartan3. It starts off as a simple port, analyzing device primitive differences and accounting for them in the design. Along the way, there were considerations of how clocks were generated, the timing characteristics of block rams, and general algorithmic design. I’ll assume you’ve read the sections of my Designing a CPU in VHDL series detailing the implementation of the character generator.

Reading time: 10 minutes

When I first attempted to synthesize my TPU CPU core design on the miniSpartan3 developer board (made by the great folks at Scarab Hardware), the bulk of the code went without a hitch. The processor core itself contains no primitives specific to a single vendor. The rest, however – block rams, clock generators – used instantiations of specific device primitives. These differ from family to family of FPGAs, and they are where the most thought and investigation is needed, as changes can have knock-on impacts further along the design.

High-level device differences are fairly minimal. On the board itself, we have a 32MHz clock input on the miniSpartan3 instead of the 50MHz input on the miniSpartan6 setup, so we will need to change the ratios used to generate the pixel and ram clocks for the DVI-D/HDMI video output. The FTDI chip for serial communication is similar, with a communications channel connected to the FPGA. We will need to change the constraints file for the pin definitions as well, but that’s always expected.

Clocks

The Spartan6 TPU design utilizes multiple clocks:

  • Base Clock 50MHz
  • CPU Core 100MHz
  • Pixel 25MHz
  • 5x Pixel 125MHz
  • 5x Pixel Inverted 125MHz
  • Char/Text Clock 250MHz

The CPU core and the read/write Port A of each block ram use the CPU Core clock. The UART baud generator uses the Base Clock. The VGA/graphics signal generator uses the Pixel clock. The TMDS/DVI-D/HDMI encoders and output buffers use the 5x Pixel and 5x Pixel Inverted clocks. The character generator, and the relevant Port B block rams (Text and Font Rams), use the Char/Text Clock.

When porting to Spartan3, we need to use Digital Clock Managers (DCMs) instead of the Phase-Locked Loops (PLLs) on Spartan6. The interface to DCMs is considerably different, but the basic terms remain, and you can understand what needs to change in the design without much thought.

s6_pll

One of the main issues is that DCMs have far fewer outputs than the PLLs. On the Spartan6 implementation, a single PLL primitive is used to drive all of the different clocks required. On Spartan3, we will need a DCM for each frequency.

s3_dcm

Because of this, we will require 3 DCM objects. Our Spartan3 chip, the XC3S200A, only has 4 in total, so we are using a significant amount of resources just to generate these clocks. However, we do have the DCMs available, so we can get started immediately.

The DCMs themselves have multiple configurations to set up. We use the frequency synthesizer (DFS) to get our ~25MHz pixel clock from the 32MHz input. The maximum ranges for the DFS are outlined in the Spartan-3A datasheet.

dfs_limits

To generate the pixel clock, we multiply our 32MHz input by 15 to get 480MHz, then divide by 19 to get ~25.26MHz.

-- 32MHz -> ~25MHz
DCM_SP_inst : DCM_SP
generic map (
  CLKDV_DIVIDE => 2.0, --  Divide by: 1.5,2.0,2.5,3.0,3.5,4.0,4.5,5.0,5.5,6.0,6.5
                       --     7.0,7.5,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0 or 16.0
  CLKFX_DIVIDE => 19,         --  Can be any integer from 1 to 32
  CLKFX_MULTIPLY => 15,       --  Can be any integer from 1 to 32
  CLKIN_DIVIDE_BY_2 => FALSE, --  TRUE/FALSE to enable CLKIN divide by two feature
  CLKIN_PERIOD => 31.25,      --  Specify period of input clock in ns (32MHz)
  CLKOUT_PHASE_SHIFT => "NONE", --  Specify phase shift of "NONE", "FIXED" or "VARIABLE" 
  CLK_FEEDBACK => "1X",         --  Specify clock feedback of "NONE", "1X" or "2X" 
  DESKEW_ADJUST => "SYSTEM_SYNCHRONOUS", -- "SOURCE_SYNCHRONOUS", "SYSTEM_SYNCHRONOUS" or
                                         --     an integer from 0 to 15
  DLL_FREQUENCY_MODE => "LOW",     -- "HIGH" or "LOW" frequency mode for DLL
  DUTY_CYCLE_CORRECTION => TRUE,   --  Duty cycle correction, TRUE or FALSE
  PHASE_SHIFT => 0,        --  Amount of fixed phase shift from -255 to 255
  STARTUP_WAIT => FALSE)   --  Delay configuration DONE until DCM_SP LOCK, TRUE/FALSE
port map (
  CLK0 => CLK0,     -- 0 degree DCM CLK output
  CLK180 => CLK180, -- 180 degree DCM CLK output
  CLK270 => CLK270, -- 270 degree DCM CLK output
  CLK2X => open,    -- 2X DCM CLK output
  CLK2X180 => open, -- 2X, 180 degree DCM CLK out
  CLK90 => open,    -- 90 degree DCM CLK output
  CLKDV => open,    -- Divided DCM CLK out (CLKDV_DIVIDE)
  CLKFX => clock_pixel_unbuffered,   -- DCM CLK synthesis out (M/D)
  CLKFX180 => CLKFX180, -- 180 degree CLK synthesis out
  LOCKED => LOCKED, -- DCM LOCK status output
  PSDONE => PSDONE, -- Dynamic phase adjust done output
  STATUS => open,   -- 8-bit DCM status bits output
  CLKFB => CLKFB,   -- DCM clock feedback
  CLKIN => clk32_buffered,   -- Clock input (from IBUFG, BUFG or DCM)
  PSCLK => open,    -- Dynamic phase adjust clock input
  PSEN => open,     -- Dynamic phase adjust enable input
  PSINCDEC => open, -- Dynamic phase adjust increment/decrement
  RST => '0'        -- DCM asynchronous reset input
);

Block Rams

The block rams on Spartan3 are very similar to their Spartan6 counterparts. They do have different timing characteristics, and therefore a different maximum operating frequency. The block ram primitives on my Spartan3 are not rated for the ~260MHz that those on the Spartan6 run at – so changes will be required in the character generator to account for the additional latency in memory operations.

UART

Thankfully, Xilinx provides the PicoBlaze UART objects for Spartan3 as it does for Spartan6, so there was very little work required in porting these over, apart from using different library objects. The baud clock routine was changed to strobe correctly using the 32MHz base clock instead of the 50MHz one on the miniSpartan6. That was the only significant change here.

Differential Signalling Buffers

The OBUFDS output buffers used before can also be used on Spartan3, along with the ODDR2 double data rate registers for generating the 10x HDMI signalling.

Character Generator

Most of the work was on the character generator, due to the base algorithm needing slight amendments to account for the increased memory latencies and slower clocks. However, I think it’s useful to see what happens if we ignore all of that for a second and simply ‘blind port’ the system, ignoring the rated maximum frequencies.

In fact, the blind port was my first attempt. And this was the result:

There are a few points of interest that you can take from this footage. I’ve singled out a frame to identify them more easily.

corruption

  1. The colours are correct for the areas they should be in.
  2. The glyphs seem to be correct.
  3. The corruption, while random, occurs in the X direction, as there are bars which are consistent across character locations.

If we look again at the state diagram for the character generator:

text_mode_diagram

As the character and the colour are correct, we know it’s not data corruption in transfer. However, the vertical banding occurs at the start of each character, indicating the glyph row data is not getting to the system in time.

blockrammhz

In this situation I went to the datasheets and application notes to find the maximum frequency ratings for the clock synthesizers and block rams. The 250MHz char/pixel clock is well within specification for generation, but the block rams are only rated for 200MHz. Instead of attempting to redesign the character generator to run off a new, slightly slower clock (200MHz), I started modifying it to operate correctly at the 5x pixel serialization clock – as this would free up another DCM object and reduce our utilization from 3 to 2.

The way I started on this problem was to delay the character generator by a single pixel, allowing the memory requests to be pipelined over two pixels instead of one. This then gives us the 10 sub-cycles needed.

pipeline_diagram

Table A) shows how 10 sub-cycles are required, table B) shows how they fit together into a pipelined 5 sub-cycle state machine, and C) shows that optimized, as certain stages need only occur when you cross over into a new character. The latching and fetching of the glyph data is idempotent and does not incur additional cost, as the text ram (tram) data which derives the addresses for glyph rows is only fetched on each 8-pixel character transition.

My first implementation of this seemed to work well enough, apart from a duplicated pixel in each glyph.

lastorfirstduplicated

It is harder than you’d think to tell whether this duplicated pixel was at the start or the end of glyph processing, so I forced the background colour of the screen to flip-flop between character locations as they were output, making visible the specific zone each glyph should reside within.

bars_debug

From here, I could tell it was the first pixel causing the problem. The first pixel encapsulates all the memory requests – the further 7 pixels in a glyph row only utilize a cached version of the data – so it was time to go into the simulator and look at some internal signals.

In the Simulator

The first thing to notice was a disparity between the pixel X coordinate and the actual pixel/5x pixel clocks. With the pixel operations being driven from the 5x clock, there could be instances where the coordinates changed mid-request. The fix was to add a process driven off the pixel clock which latched the X and Y coordinates, for the various other logic driven from the 5x clock to then utilize.

latching_coords_2

You can see the issue clearly in the waveform above. At (a) we kick off a new initial glyph row request (state 1 is only ever entered on the first pixel of a character row). If we do not latch the coordinate, halfway through the request at (b) the coordinate could flip.

Since we already had a process running from the pixel clock to manage the blinker flags, this was a simple addition.

  -- This process latches the X and Y on the pixel clock
  -- Also manages the blinker.
  process(I_clk_pixel)
  begin
    if rising_edge(I_clk_pixel) then
      blinker_count <= blinker_count + 1;
      x_latched <= I_x;
      y_latched <= I_y;
    end if;
  end process;

The last issue was a really irritating one – irritating because it was a very basic bug in my code. A simple state < 4 check should have been <= 4, meaning state 4 was prolonged for an additional cycle, throwing the first pixel off. Easily fixed, and easily spotted in the simulator.

good

The last thing to do was to also try it on my miniSpartan6+ project, and it worked first time – which is great 🙂

Wrap Up

We now have the character generator running off a 5x pixel clock, with the font/text ram read ports also running at that slower clock. As well as letting us run within the specs of the Spartan3 device I have, this will also allow for higher resolutions in the future – especially on the Spartan6 variant.

Thanks for reading, let me know what you think on Twitter @domipheus.

Designing a CPU in VHDL, Part 14: ISA changes, software interrupts and bugfixing that BIOS code

This is part of a series of posts detailing the steps and learning undertaken to design and implement a CPU in VHDL. Previous parts are available here, and I’d recommend they are read before continuing.

It’s finally that time! I have committed the latest TPU VHDL, assembler and ISA to my github repository. Fair warning: the assembler is _horrid_. The VHDL contains an ISE project for the LX25 miniSpartan6+ variant, but the font ROM is empty. The font ROM I’ve been testing with doesn’t have a license attached, and I don’t want to blindly add it here. You can, however, simulate with ISim, and directly inspect the Text Ram blocks in ASCII mode to see any text displayed. I will explain that more later in this part.

The ‘BIOS’ code

So just now we have a TPU program which prints a splash message, checks the size of the contiguous memory area from 0x0000 (slowly), and then waits for a command from the input UART. This UART is connected to the FTDI chip on port B, so it appears as a COM port on a computer when the USB cable is connected to the miniSpartan6+. It only accepts single-character commands just now, mainly because I have chosen the path of progress here rather than the path of lovely command-line words, which involves several string functions that honestly nobody should need to waste time writing (yes, looking at you, tech interviewers).

Getting to this point, without even writing significant code to handle a command, I realized that the code was so big (~1.5KB) that I’d have trouble fitting it into a single block ram. TASM, the Tpu ASseMbler, currently only outputs a single block ram initializer, and the VHDL duplicates that single initializer across each memory bank, so it would be a lot of work to fix all of that. Instead, I wanted to look at why exactly there is so much code for such little functionality.

I edited TASM to output instruction metrics – simply a count for each operation. I then checked which counts were the biggest, ignoring the define word (dw) operation, as string constants were significant but not really relevant. This was the result:

before_instcount_pie

Scanning that for counts that don’t make any sense, a few jump right out. For instance, Branch Immediate (bi) used only twice in all this code? 1400 lines of assembly and only 2 branch immediates?

When investigating the code, I realized why. The 16-bit opcodes don’t leave a lot of room for immediate values, even when they are shifted to account for the 2-byte alignment of instructions. To play it safe, I was instead loading immediates into the high and low bytes of registers, and branching to the register. So 4 instructions (load.l, load.h, or, br) instead of a single branch immediate. There were also lots of function calls, and those branches were performed using Branch Register.

More puzzling was the difference between the write.w and read.w counts: 111 write words versus 38 read words? Given that most of the reads and writes in the code were stack manipulation for the various function calls, it seemed there was significant overspill, saving registers that were then never read back (basically, me being over-cautious).

So, I decided to perform two changes to the ISA:

First, I would remove the Branch Immediate instruction, replacing it with Branch Immediate Relative Offset. This allows for the largest possible range, and helps with eventual position-independent code, as the immediate is a signed offset from the current program counter. Perfect for jumping around inside functions.

Second, I will introduce an interrupt instruction, int. This allows for the software-defined invocation of the interrupt handler, with an immediate value provided in the Interrupt Event Field. With it, I will be able to replicate the old IBM/DOS BIOS routines – where interrupts were signaled to perform low-level tasks.

So, here are the two new instructions. They are part of ISA 1.5, which is on github:

int_biro_defns

Calling conventions

Currently, my code sets up a stack at a known high address. This could be reset to another location after the memory test, but that isn’t done just yet. The stack is pointed to by r7, and I’ve currently defined all other registers, r0-r6, to be volatile for the purposes of the calling convention. This simply means that if you need the value of a register preserved across a function call, you must save the value onto the stack before you call. I mentioned function calling and the instructions used in a previous part, but here is a reminder of how it’s done:

# Call setcursor (9, 2)

subi    r7, r7, 2             #reserve stack for preserve
write.w r7, r0, 0             #we want to preserve r0
  
subi    r7, r7, 6             #reserve stack for call
load.l  r1, 9                 #2 word arguments and a return address = 6 bytes
load.l  r2, 2
write.w r7, r1, 2             #arg0 x=9
write.w r7, r2, 4             #arg1 y=2

spc     r6                    #get the current PC value
addi    r6, r6, 14            #offset that value past the branch below
write.w r7, r6, 0             #save the return pc at offset 0

load.h  r0, $setcursor.h      
load.l  r1, $setcursor.l
or      r0, r0, r1            #build a register containing function location
br      r0                    #branch to the function label
   
addi    r7, r7, 6             #restore stack 

read.w  r0, r7, 0             #read the old r0 back
addi    r7, r7, 2             #restore stack (preserve)

So, lots of writes and bloat for every call, but it works well enough. Return values are passed back in registers. By implementing BIOS routines for these I/O functions using software interrupts, we should reduce the code size considerably.

BIOS Routines

The new int instruction allows user code to jump into the interrupt handler with a user-defined Interrupt Event Field. The space in the immediate area of the opcode allows for 64 values, so we define Interrupt Event Field values less than 64 to be applicable only to software routines.

In the interrupt handler, all registers except r7 are immediately saved to the stack. Of course, this means the stack must always have enough space for this – assume that is true for now. If the Event Field is < 64, we use the value to fetch a function address from a table, then set up a function call and branch. On return, we save r0 back to the stack, directly where it will be restored from. This allows data to be returned from BIOS function calls. Currently, the BIOS table is as follows:

interrupts
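
In C-like terms, the dispatch amounts to the following. This is a model only – the real handler is TPU assembly, and the table layout is illustrative:

#include <stdint.h>

// Event fields below 64 index a table of BIOS routines. The r0 slot
// saved on the stack is overwritten with the routine's return value,
// so the caller sees it in r0 once registers are restored.
typedef uint16_t (*bios_fn)(uint16_t* saved_regs);

extern bios_fn bios_table[64];

void interrupt_handler(uint16_t event_field, uint16_t* saved_regs)
{
    if (event_field < 64 && bios_table[event_field] != 0)
        saved_regs[0] = bios_table[event_field](saved_regs);

    // Event fields >= 64 are hardware interrupt sources.
}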

So the basics are there just now – enough to make a test with input from the UART and output to the text-mode display. Note the division-by-zero entry: there is no hardware divide. I have a naive division function, which throws int 0 when division by zero is encountered.

There are obviously restrictions to this; for example, you really should not use the int instruction when inside an interrupt handler – but it can happen; nothing stops you from doing it. Keeping to these restrictions will be tricky. I have started looking at how the hardware can disable/restore interrupt enable states, and also handle an invalid int instruction when one is encountered.

I went and moved all the code of my simple BIOS example over to this new system based on software interrupts. I also changed Branch Immediates to Branch Immediate Relative Offsets, though I didn’t fish out every other opportunity to use them.

The output on screen was corrupt: data was being clobbered somewhere, so I needed to debug.

Debugging the issue

TPU does not have debugging functionality for the software running on it, but you can use the ISim simulator, and edit the assembly code to at least extract some ‘what are the registers at point X’ information.

memory__regs

It is incredibly time-consuming, but invaluable when things go wrong – which they inevitably did when I tried out the int instruction. I’ve mentioned in previous parts that in ISim you can inspect the contents of the internal block ram memories and other signals. This allows you to see register values, and the contents of the text ram. You can set the data view to show ASCII, and get a representation of the characters you’d want displayed.

memory_tram

What I tend to do is place a branch-to-self, such as “biro 0”, where I want to inspect registers, then run the code in the simulator. I could also use the TPU emulator, but I’ve yet to implement the full set of opcodes in it. Doing this around various places led me to discover that the r0 register was being overwritten on execution of int instructions. The value written was always 0x0008 – which those who have read all previous TPU parts may recognize as the interrupt vector.

The way the ALU works is that various signals are always calculated and made available, but then the relevant one is selected, usually by a signal from the decoder.

when OPCODE_SPEC_F2_INT =>
    s_result(15 downto 0) <= ADDR_INTVEC;
    s_interrupt_rpc <= std_logic_vector(unsigned(I_PC) + 2);
    s_interrupt_register <= X"00" & "00" & I_dataIMM(7 downto 2);
    
    s_prev_interrupt_enable <= s_interrupt_enable;
    s_interrupt_enable <= '0';
    s_shouldBranch <= '1';	

The instruction is implemented in the ALU simply, like a standard branch. If s_shouldBranch is ‘1’, the PC unit takes the next PC from the output of the ALU, which is s_result(15 downto 0) – the interrupt vector ADDR_INTVEC (0x0008).

Looking back at the int instruction layout:

int_inst

You can see that the bits where a destination register would be (11-8) in an Rd-form instruction are all 0. The issue is that register r0 was having 0x0008 written to it after an int instruction: in the same way a branch uses the output of the ALU, if a register write is requested, the same value – the output of the ALU – is used as the source data.

Looking at the decoder, I saw that I’d forgotten to add the new instruction – I’d only added it to the ALU block. The decoder sets the write enable signal for the destination register:

case I_dataInst(IFO_F2_BEGIN downto IFO_F2_END) is
  when OPCODE_SPEC_F2_GETPC =>
    O_regDwe <= '1';
  when OPCODE_SPEC_F2_GETSTATUS =>
    O_regDwe <= '1';
  when others =>
end case;

With O_regDwe not being set in the OPCODE_SPEC_F2_INT case, it would preserve its previous state. Before the int instruction there was usually an immediate load into a register – meaning the regDwe signal would be left enabled, producing exactly the incorrect behavior I saw.

case I_dataInst(IFO_F2_BEGIN downto IFO_F2_END) is
  when OPCODE_SPEC_F2_GETPC =>
    O_regDwe <= '1';
  when OPCODE_SPEC_F2_GETSTATUS =>
    O_regDwe <= '1';
  when OPCODE_SPEC_F2_INT =>
    O_regDwe <= '0';
  when others =>
end case;

With the case for our new INT instruction quickly added, things worked perfectly.

New instruction stats

Now that we have the int instruction, I collated the instruction counts for the same functionality, and got the expected reduction in code size.

after_instcount_pie

instruction_count

Finding more bugs

It’s expected that, as the amount of TPU code I write grows, I’ll find more issues in the various parts of its implementation. Some are obvious bugs, whilst others may be hazards in the CPU itself. One such bug was in the memory interrupt process, only introduced in the last part. It only ever fired once, due to a check of the state-machine state being in the wrong place:

...
if rising_edge(cEng_clk_core) and MEM_access_int_state = 0 then
    if MEM_Access_error = '1' then
      I_int <= '1';
...

The check for MEM_access_int_state should have been after the clock edge check:

...
if rising_edge(cEng_clk_core) then
    if MEM_Access_error = '1' and MEM_access_int_state = 0 then
      I_int <= '1';
...

With this edit in place, multiple memory violation interrupt requests can take place. This is required by some code I wrote which displays the ‘memory map’ of the system: a read is performed at each 2KB boundary, and if an interrupt violation is encountered, it is assumed the whole bank/block is unmapped. This continues until all 32 banks are checked, displaying the results to the screen as it goes.

memmap

Explanation:

  • 0x0000 – 0x3FFF: RAM
  • 0x9000 – 0x97FF: Memory mapped I/O
  • 0xA000 – 0xAFFF: Font Ram
  • 0xB000 – 0xBFFF: Text Ram
  • 0xC000 – 0xEFFF: VRAM

Getting back to the start of this post, where we discussed simple command execution: this memory map prints out if the ‘m’ command is entered through the UART. There are two other commands currently – ‘l’ for lighting all the LEDs and ‘d’ for darkening them.

There is also a current issue where UART input seems to generate multiple instances of the same character from my read function. I’ve yet to look at that problem in detail.

Wrap Up

So in this part, we’ve investigated some ‘BIOS’ code, modified the branch immediate instruction, added a software interrupt instruction, and fixed some issues in their implementation.

At the moment, I’m finding the hardest part of TPU to be the tooling. TASM was good and served its purpose when I needed a quick fix for generating small tests. However, now that I’m thinking of sending program code over UART, ‘compiled’ as position-independent code, TASM is showing significant weaknesses. I’ll need to get around that somehow.

For now, that’s the end of this part. Many thanks for reading. As always, if you have any comments or queries, please grab me on twitter @domipheus.