Designing a CPU in VHDL, Part 12: Text mode video output

This is part of a series of posts detailing the steps and learning undertaken to design and implement a CPU in VHDL. Previous parts are available here, and I’d recommend they are read before continuing.

Whilst having a pixel-based video output on TPU is great, there are fundamental limitations with regard to resolution and memory. It’s very hard to convey real information at such a resolution, and really what I need is the old-style text modes of the past. Think 80×25 characters, DOS/BIOS POST screens. What is needed to implement that sort of output?

First of all, we need to settle on our ‘text resolution’. That is, the number of columns/rows, and the size in pixels of the glyphs we will draw. For this, I’m going to continue with 80 columns by 25 rows. This means, if our glyphs are 8×16 pixels, a screen resolution of 640×400 is needed. That fits nicely into 640×480, if you don’t mind a border on the bottom edge – 640x400x70Hz is an option too.

In addition to this, I want to be able to set colours for the text – foreground and background. I also want to make blinking of specific characters possible.

Text RAM

The area of memory where the ASCII characters to render are stored is called TRAM in my design, standing for text RAM (not to be confused with the .text executable sections in binaries!). For each character tile on our 80×25 character grid, we will have two bytes – the ASCII character, along with an attribute byte. This attribute byte defines the foreground and background colours for the glyph tile – along with whether or not the tile should blink.

80×25 2-byte characters comes out as 4,000 bytes. That will nearly fill two 2KB Xilinx block rams. I could have used 80×30, perfectly filling the whole 640×480 screen resolution, but I couldn’t bring myself to add a third block ram – despite having plenty of them available on the miniSpartan6+ board. My LX25 variant has 52 available, for a total of 104KB of storage. These block rams are integrated into my top-level TPU design in the same way as the existing VRAM, so they are both readable and writable by TPU, and readable (at a differing clock rate) by our new VHDL module, which will generate the pixel stream required to represent our text characters.
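A quick sanity check of that sizing arithmetic, as a Python sketch:

```python
# Text RAM sizing: 80x25 tiles, 2 bytes per tile (ASCII code + attribute byte)
tiles = 80 * 25
tram_bytes = tiles * 2        # 4000 bytes
bram_bytes = 2 * 2048         # two 2KB Xilinx block rams give 4096 bytes

# the 80x30 alternative would overflow into a third block ram
tram_bytes_80x30 = 80 * 30 * 2

assert tram_bytes == 4000 and tram_bytes <= bram_bytes
assert tram_bytes_80x30 > bram_bytes
```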

Font RAM

The glyphs themselves are stored as 16 bytes, with 1 bit corresponding to a pixel in the output. A 1 value indicates foreground shading, whilst 0 is unsurprisingly background.

With the glyphs organized linearly as a packed array of 16-byte elements, for the full 256-character range, we’ll need exactly two 2KB block rams. This storage could also be implemented as a ROM, but I’m going to go ahead and use the same module I use for the text RAM (and VRAM) so that the user can edit the storage to implement custom glyphs.

The character generator – text_gen

In the last part, I introduced a VGA signal generator. This module takes a pixel clock, and generates sync, blanking and an X and Y coordinate for the pixel that is being output. This X and Y information is used to generate a memory address, at which VRAM contains the 16-bit 565 colour to output for that pixel. The RGB value then goes to the encoders, and is serialized out as DVI-D.

With this signal generator, we will first change the timings to output a 640x480x60Hz set of signals. The X and Y outputs will no longer form an address into VRAM, but will be passed into a text_gen module. This new module, for a given X and Y, will generate addresses into the text and font RAMs, marshal the data returned from those RAMs, and eventually output a pixel value. This text_gen module needs to operate at a faster clock, as for any pixel there could be two dependent memory reads issued which need to be serviced before output is provided.

For each pixel, the 8×16 text ’tile’ index is calculated. From this, the location in TRAM is known – a basic tile_y*80+tile_x calculation. In the VHDL, we use the unsigned type, which has the multiplication operator defined.

tram_addr_us <= (unsigned( I_y(11 downto 4)) * 80) + unsigned(I_x(11 downto 3));

This synthesizes to a DSP48A1. There are timing considerations here that I need to take into account – more on that later.
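In software terms, the tile lookup is just this arithmetic – a Python sketch for clarity (function name illustrative; the shift amounts mirror the VHDL slices, since glyphs are 8 pixels wide and 16 tall):

```python
def tram_word_addr(x: int, y: int) -> int:
    """Word address of the tile covering pixel (x, y): tile_y*80 + tile_x."""
    tile_x = x >> 3   # glyphs are 8 pixels wide:  I_x(11 downto 3)
    tile_y = y >> 4   # glyphs are 16 pixels tall: I_y(11 downto 4)
    return tile_y * 80 + tile_x

# pixel (0,0) is tile 0; pixel (16,0) is tile 2; pixel (0,16) starts row 1
assert tram_word_addr(0, 0) == 0
assert tram_word_addr(16, 0) == 2
assert tram_word_addr(0, 16) == 80
# last visible pixel lands on the last of the 2000 tiles
assert tram_word_addr(639, 399) == 80 * 25 - 1
```

The byte address on the TPU bus is this word address doubled, which is what the `tram_addr(14 downto 0) & '0'` in the module output achieves.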

The 16-bit data word from TRAM is captured after several cycles of latency. This data is latched within text_gen, and the ASCII character code part of it is used to calculate a further address into the font RAM. This calculation is easier due to the 16-byte layout, so it can be implemented with shifts. After a further few cycles of latency to allow the external memory to respond, we get a single byte equating to a row within the glyph. Using our input X pixel coordinate, we look up the relevant bit in the glyph row – which is then used to select a foreground or background colour. The colours themselves are selected using the other byte obtained from text RAM – the attribute byte.
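In the same spirit, the font-RAM lookup and bit select can be sketched in Python (function names illustrative; the shifts match the 16-bytes-per-glyph layout described above):

```python
def fram_byte_addr(ascii_code: int, y: int) -> int:
    """Each glyph is 16 consecutive bytes; add the row within the glyph."""
    return (ascii_code << 4) + (y & 0xF)   # ascii*16 + I_y(3 downto 0)

def glyph_pixel(row_byte: int, x: int) -> bool:
    """Bit 7 of the row byte is the leftmost pixel of the glyph row."""
    return (row_byte >> (7 - (x & 7))) & 1 == 1

assert fram_byte_addr(ord('A'), 0) == 65 * 16
assert fram_byte_addr(ord('A'), 18) == 65 * 16 + 2   # y wraps per glyph
assert glyph_pixel(0b10000001, 0) and glyph_pixel(0b10000001, 7)
assert not glyph_pixel(0b10000001, 1)
```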

Attribute Byte

The attribute byte layout is the same as in other text modes: a single blink bit, 3 bits of background colour and 4 bits of foreground colour. These could be interpreted in other ways (for example, disabling blinking can allow for more background colours) but at the moment they simply index into one of 8 available background colours or 16 available foreground colours. I’ve fixed the colours themselves, but there is no reason why they could not be memory mapped so that the palette can be changed programmatically.

Blinking is achieved by checking an internal counter, along with the blink attribute bit. If the blink bit is set, and the counter is in a non-blink state, the background colour is chosen regardless of the glyph properties.
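Putting the attribute rules together, here is a Python model of the decode (the bit positions match the VHDL: blink in bit 7, background in bits 6-4, foreground in bits 3-0 of the attribute byte; function names are illustrative):

```python
def decode_attribute(attr: int):
    """Split the attribute byte into (blink, background index, foreground index)."""
    blink = (attr >> 7) & 1 == 1
    bg = (attr >> 4) & 0x7    # one of 8 background colours
    fg = attr & 0xF           # one of 16 foreground colours
    return blink, bg, fg

def pixel_is_foreground(glyph_bit: bool, blink: bool, blink_phase_on: bool) -> bool:
    """A blinking character shows background whenever the blink counter is 'off'."""
    if blink and not blink_phase_on:
        return False
    return glyph_bit

# 0x1F: non-blinking white-on-blue; 0x9F: the same attribute but blinking
assert decode_attribute(0x1F) == (False, 1, 15)
assert decode_attribute(0x9F) == (True, 1, 15)
assert pixel_is_foreground(True, blink=True, blink_phase_on=False) is False
assert pixel_is_foreground(True, blink=True, blink_phase_on=True) is True
```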

text_gen states

entity text_gen is
  Port ( 
    I_clk_pixel : in  STD_LOGIC;
    I_clk_pixel10x : in  STD_LOGIC;

    -- Inputs from VGA signal generator
    -- defines the 'next pixel' 
    I_blank : in  STD_LOGIC;
    I_x : in  STD_LOGIC_VECTOR (11 downto 0);
    I_y : in  STD_LOGIC_VECTOR (11 downto 0);

    -- Request data for a glyph row from FRAM
    O_FRAM_ADDR : out STD_LOGIC_VECTOR (15 downto 0);
    I_FRAM_DATA : in STD_LOGIC_VECTOR (15 downto 0);

    -- Request data from textual memory TRAM
    O_TRAM_ADDR : out STD_LOGIC_VECTOR (15 downto 0);
    I_TRAM_DATA : in STD_LOGIC_VECTOR (15 downto 0);

    -- The data for the relevant requested pixel
    O_R : out STD_LOGIC_VECTOR (7 downto 0);
    O_G : out STD_LOGIC_VECTOR (7 downto 0);
    O_B : out STD_LOGIC_VECTOR (7 downto 0)
  );
end text_gen;

I have the text generator currently running at 10x the pixel clock. This is probably overly safe, and I could bring it down to 5x. I’ll have to check the timing constraints more thoroughly.

The module assumes pixels are scanned across rows, just like VGA. Each time a pixel X offset is input which we know is the start of a new glyph, a 2-byte TRAM fetch is initiated. The result of that fetch is used to latch colours from the attribute byte, and then fetch a 1-byte glyph row. That row is latched, and used by the next 8 pixels which are input to the generator. For those pixels, the states are short-circuited to the last stage.

I’ve attached full source of the module below.


The first run of the text_gen module was actually very successful. I initialized the text block rams with some characters, and used a font ROM that I found which implemented an ASCII character set. The display worked, albeit with characters in the wrong place. The character I expected to be at position 0,0 was actually at 2,0.

I think there is a timing issue in terms of how much latency the DSP48 slice needs to perform the multiplication required for calculating the TRAM location. One of the changes needed from the previous part is that the next pixel locations should be used, rather than the current pixel, which is what was used before. To get around this, I implemented a simple FIFO in the VGA signal generator.

The length of the FIFO can be changed, and the module now outputs a set of signals for the current pixel, which is sent to the TMDS encoders, as well as a set of prefetch signals, which are currently 8 pixels early. These prefetch signals are sent to the text_gen and allow for plenty of time for memory and other latencies to be accounted for. With this change, the output was correct. The expected character in 0,0 was rendered at that location.
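The prefetch arrangement amounts to a fixed-length delay line; a minimal Python sketch, assuming the 8-pixel depth mentioned above (class and signal names are illustrative, not from the VHDL):

```python
from collections import deque

class PixelDelayLine:
    """Outputs coordinates N pixel clocks behind those fed in, so the
    'prefetch' copy leads the 'current' copy by N pixels."""
    def __init__(self, depth: int = 8):
        self.fifo = deque([(0, 0)] * depth, maxlen=depth)

    def clock(self, x: int, y: int):
        current = self.fifo[0]       # oldest entry: goes to the TMDS encoders
        self.fifo.append((x, y))     # newest entry: the prefetch coordinates
        return current

delay = PixelDelayLine(depth=8)
outputs = [delay.clock(x, 0) for x in range(10)]
# the coordinate fed in at time 1 emerges 8 clocks later, at time 9
assert outputs[9] == (1, 0)
```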

Another issue was that the colour of every character was incorrect. For example, the character at position 2,0 had the colour of the character at 1,0. Moving the point in the state machine where the attribute byte and colours were latched fixed this. I had been doing lots of asynchronous operations, and performing a latching operation on the RGB pixel data made it much more stable.

Testing out custom glyphs

One of the things I wanted was the ability to edit the font RAM, and you can do that. Above you will see an image with some odd icon on the right, made up of 4 characters. I don’t really know what it is supposed to look like 🙂

Blinking in action

Wrap up

So text mode works, fairly well. This was a lot easier to get working than I thought it would. I hope to get a small demo together where input from the UART is echoed to this command prompt, and get some simple test commands working.

Thanks for reading, as always let me know your thoughts via twitter @domipheus.


-- Company: Domipheus Labs
-- Engineer: Colin Riley
-- Create Date:    16:27:52 05/01/2016 
-- Design Name:    Text-mode output generator
-- Module Name:    text_gen - Behavioral 
-- Project Name:   
-- Target Devices: Tested on Spartan6
-- Tool versions: 
-- Description: 
--   For a 640x480 resolution set of input pixel locations an 80x25 text-mode 
--   representation is generated. It is assumed the x direction pixels are
--   scanned linearly.
--   Glyphs are stored in a font ram as 16 bytes, each bit selecting a foreground
--   or background colour to display for a given pixel in an 8x16 glyph.
--   A clock faster than the pixel clock is needed to account for latency from 
--   worst-case two dependent memory reads per pixel. It is advised that pixel 
--   locations are inputted early to the text_gen so data can be prefetched.
-- Dependencies: 
-- Revision: 
-- Revision 0.01 - File Created
-- Additional Comments: 
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

entity text_gen is
   Port ( 
     I_clk_pixel : in  STD_LOGIC;
     I_clk_pixel10x : in  STD_LOGIC;
     -- Inputs from VGA signal generator
     -- defines the 'next pixel' 
     I_blank : in  STD_LOGIC;
     I_x : in  STD_LOGIC_VECTOR (11 downto 0);
     I_y : in  STD_LOGIC_VECTOR (11 downto 0);
     -- Request data for a glyph row from FRAM
     O_FRAM_ADDR : out STD_LOGIC_VECTOR (15 downto 0);
     I_FRAM_DATA : in STD_LOGIC_VECTOR (15 downto 0);
     -- Request data from textual memory TRAM
     O_TRAM_ADDR : out STD_LOGIC_VECTOR (15 downto 0);
     I_TRAM_DATA : in STD_LOGIC_VECTOR (15 downto 0);
     -- The data for the relevant requested pixel
     O_R : out STD_LOGIC_VECTOR (7 downto 0);
     O_G : out STD_LOGIC_VECTOR (7 downto 0);
     O_B : out STD_LOGIC_VECTOR (7 downto 0)
   );
end text_gen;

architecture Behavioral of text_gen is
   -- state tracks the location in our state machine
   signal state: integer := 0;

   -- The blinking speed of characters is controlled by locations 
   -- in this counter
   signal blinker_count: unsigned(31 downto 0) := X"00000000";

   -- _us is the result of the address computation,
   -- whereas the logic_vector is the latched output to memory
   signal fram_addr_us: unsigned(15 downto 0):= X"0000";
   signal fram_addr: std_logic_vector( 15 downto 0) := X"0000";
   signal fram_data_latched: std_logic_vector(15 downto 0);

   -- Font ram addresses for glyphs above, text ram for ascii and
   -- attributes below.
   signal tram_addr_us: unsigned(15 downto 0):= X"0000";
   signal tram_addr: std_logic_vector( 15 downto 0) := X"0000";
   signal tram_data_latched: std_logic_vector(15 downto 0);

   -- the latched current_x value we are computing
   signal current_x: std_logic_vector( 11 downto 0) := X"FFF";

   -- Current fg and bg colours
   signal colour_fg: std_logic_vector(23 downto 0) := X"FFFFFF"; 
   signal colour_bg: std_logic_vector(23 downto 0) := X"FFFFFF"; 
   signal blink: std_logic := '1';

   -- outputs for our pixel colour
   signal r: std_logic_vector(7 downto 0) := X"00";
   signal g: std_logic_vector(7 downto 0) := X"00";
   signal b: std_logic_vector(7 downto 0) := X"00";

   type colour_rom_t is array (0 to 15) of std_logic_vector(23 downto 0);
   -- ROM definition
   constant colours: colour_rom_t:=(  
   X"000000", -- 0 Black
   X"0000AA", -- 1 Blue
   X"00AA00", -- 2 Green
   X"00AAAA", -- 3 Cyan
   X"AA0000", -- 4 Red
   X"AA00AA", -- 5 Magenta
   X"AA5500", -- 6 Brown
   X"AAAAAA", -- 7 Light Gray
   X"555555", -- 8 Dark Gray
   X"5555FF", -- 9 Light Blue
   X"55FF55", -- a Light Green
   X"55FFFF", -- b Light Cyan
   X"FF5555", -- c Light Red
   X"FF55FF", -- d Light Magenta
   X"FFFF00", -- e Yellow
   X"FFFFFF"  -- f White
   );

begin

   tram_addr <= std_logic_vector(tram_addr_us);
   O_TRAM_ADDR <= tram_addr(14 downto 0) & '0';
   fram_addr <= std_logic_vector(fram_addr_us);
   O_FRAM_ADDR <= fram_addr(15 downto 0);
   process (I_clk_pixel)
   begin
      if rising_edge(I_clk_pixel) then
         blinker_count <= blinker_count + 1;
      end if;
   end process;
   process (I_clk_pixel10x)
   begin
      if rising_edge(I_clk_pixel10x) then
         if state < 8 then
            -- each clock either stay in a state, or move to the next one
            state <= state + 1;
         end if;
         if state = 3 then
            -- latch the data from TRAM and kick off FRAM read
            tram_data_latched <= I_TRAM_DATA;
            fram_addr_us <= (unsigned(tram_data_latched(7 downto 0)) * 16 ) + unsigned(I_y(3 downto 0));
            blink <= tram_data_latched(15);
            colour_fg <= colours( to_integer(unsigned( tram_data_latched(11 downto 8))));
            colour_bg <= colours( to_integer(unsigned( tram_data_latched(14 downto 12))));
         elsif state = 6 then	
            -- latch the data from FRAM
            fram_data_latched <= I_FRAM_DATA;
            state <= 8;
         elsif current_x /= I_x then
            if (I_x(2 downto 0) = "000") then
               -- At each 8-pixel glyph start, set the state and kick off a TRAM fetch
               state <= 1;
               -- this multiply becomes a DSP slice
               tram_addr_us <= (unsigned( I_y(11 downto 4)) * 80) + unsigned(I_x(11 downto 3));
            else
               -- mid-glyph: short circuit straight to the shade state
               state <= 7;
            end if;
            current_x <= I_x;
         elsif state >= 8 then
            -- shade a pixel
            -- If the current pixel should be foreground, and is not in a blink state, shade it foreground
            if (fram_data_latched(7 - to_integer(unsigned(I_x(2 downto 0)))) = '1')
               and (blinker_count(24) = '1' or (blink = '0')) then
              r <= colour_fg(23 downto 16); 
              g <= colour_fg(15 downto 8);
              b <= colour_fg(7 downto 0);
            else
              r <= colour_bg(23 downto 16);
              g <= colour_bg(15 downto 8);
              b <= colour_bg(7 downto 0);
            end if;
         end if;
      end if;
   end process;
   -- When we are outside of our text area, have black pixels
   O_r <= r when unsigned(I_y) < 400 else X"00";
   O_g <= g when unsigned(I_y) < 400 else X"00";
   O_b <= b when unsigned(I_y) < 400 else X"00";

end Behavioral;

Designing a CPU in VHDL, Part 11: VRAM and HDMI output

This is part of a series of posts detailing the steps and learning undertaken to design and implement a CPU in VHDL. Previous parts are available here, and I’d recommend they are read before continuing.

I’ve been working towards HDMI output on my TPU SOC, and this week I managed to get enough of something to get pixels (very large pixels!) output to the screen.

The plan was to map an area of memory to a VRAM block, which could be read and written from the TPU, and also read by the graphics subsystem that would generate the video signals to be output.


The current RAM used in TPU is a block ram primitive on Spartan6 – RAMB16BWER. This 16Kbit RAM has two ports, which can be run at different clock rates. At the moment, we map this primitive into an ‘ebram’ component, which disables the second port, and services the block ram via the bus signals on TPU. I made a new component, handily named ‘ebram2port’, to expose the second port of the RAMB16BWER instance for read-only use.

entity ebram2port is
  Port (
    -- existing 'ebram' TPU interface
    I_clk : in  STD_LOGIC;
    I_cs : in STD_LOGIC;
    I_we : in  STD_LOGIC;
    I_addr : in  STD_LOGIC_VECTOR (15 downto 0);
    I_data : in  STD_LOGIC_VECTOR (15 downto 0);
    I_size : in STD_LOGIC;
    O_data : out  STD_LOGIC_VECTOR (15 downto 0);

    -- new read-only secondary port interface
    I_p2_clk : in STD_LOGIC;
    I_p2_cs : in STD_LOGIC;
    I_p2_addr : in  STD_LOGIC_VECTOR (15 downto 0);
    O_p2_data : out  STD_LOGIC_VECTOR (15 downto 0)
  );
end ebram2port;

An instance of eram2port is created in my TPU top-level design. The chip select signals (I_cs and I_p2_cs) are driven via some conditionals which check for address ranges on the TPU output bus.

CS_VRAM <= '1' when ((MEM_O_addr >= X"C000") and (MEM_O_addr < X"C200") and (O_int_ack = '0')) else '0';
CS_ERAM <= '1' when ((MEM_O_addr < X"1000") and (O_int_ack = '0')) else '0';

CS_ERAM being the chip select for the actual embedded ram instance, with the TPU bootloader code.

The input to TPU from our data sources, such as RAM and peripherals, also needs to change.

MEM_I_data <= INT_DATA when O_int_ack = '1'
else ram_output_data when CS_ERAM = '1'
else vram_output_data when CS_VRAM = '1'
else IO_DATA ;

INT_DATA and IO_DATA busses are controlled by other external processes, and thus don’t matter much here. This code is what I’d like auto-generated from my emulator, temu – it’s the sort of code where, duplicated to the extent I’ll need (tens of block rams integrated), human error comes into play. Everyone makes copy and paste errors. Everyone.

The last real item is the address, which is fed into I_addr. This must be remapped from the 0xC000 – 0xC200 range that TPU sees to 0x0000 – 0x0200. This is done as you’d expect, by simply chopping off the high 4 bits.
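In other words, the remap is just a mask of the top four address bits; a Python equivalent (function name illustrative):

```python
def vram_local_addr(bus_addr: int) -> int:
    """Map the TPU bus window 0xC000-0xC1FF onto the block ram's own
    0x0000-0x01FF range by dropping the high 4 bits."""
    assert 0xC000 <= bus_addr < 0xC200, "outside the VRAM chip-select window"
    return bus_addr & 0x0FFF

assert vram_local_addr(0xC000) == 0x0000
assert vram_local_addr(0xC1FE) == 0x01FE
```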

Now we have a VRAM block integrated into the TPU top-level module, which TPU programs can read and write to via standard memory instructions to our mapped area of memory, but which also has a second port, which can read the same memory but at a different clock rate. The difference in clock rate is the important part in this.

Graphics Output

I have been using Michael Field’s excellent DVID projects for quite a while now, trying to get my head around how HDMI signals are formed. There are four main aspects:

  • DVI is a subset of HDMI.
  • The pixel signals and timing are essentially the same as VGA.
  • The data is encoded as TMDS serial.
  • The data is then sent along 3 differential signal pairs, with a 4th pair for a clock.

My code uses the DVID test project from Michael’s Hamsterworks Wiki. I’ve edited some areas of the code for my own requirements.

VGA timing

VGA timing generally works along the lines of a pixel clock, which is set specifically to allow for the number of pixels required for your resolution to be transmitted within tolerances, along with horizontal and vertical sync signals, and a blanking flag. The pixel data itself can be thought of as a sub-image of a larger set of data which is transmitted, origin in the top left hand corner. The area to the right and bottom which is not part of the original data is ‘blank’.

The timings and the durations of these blanking periods all depend on figures defined by standards. For example, for an 800×600, 60Hz image, the pixel clock is 40MHz. Essentially, each row can be thought of as having around 1056 pixels, with the additional pixels accounting for blanking and sync periods, where the actual pixel value doesn’t matter – it exists only for timing. An example for the resolution above lays out the exact number of pixels in each area, along with time representations.

I have a VGA signal generator, which takes the pixel clocks and counts through the pixels, outputting pixel offsets, sync and blanking bits. Within this VHDL module, the constants for our 800×600 image are as follows:

constant h_rez        : natural := 800;
constant h_sync_start : natural := 800+40;
constant h_sync_end   : natural := 800+40+128;
constant h_max        : natural := 1056;

constant v_rez        : natural := 600;
constant v_sync_start : natural := 600+1;
constant v_sync_end   : natural := 600+1+4;
constant v_max        : natural := 628;
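These constants also pin down the pixel clock: 1056 total columns by 628 total rows, refreshed 60 times a second, is just under 40MHz. A quick check in Python:

```python
h_max, v_max, refresh = 1056, 628, 60
pixel_clock = h_max * v_max * refresh   # 39,790,080 Hz, nominally "40MHz"

assert pixel_clock == 39_790_080
assert abs(pixel_clock - 40_000_000) / 40_000_000 < 0.01  # within 1% of 40MHz

# the sync pulse sits inside the blanking region, as the constants require
h_rez, h_sync_start, h_sync_end = 800, 800 + 40, 800 + 40 + 128
assert h_rez <= h_sync_start < h_sync_end <= h_max
```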

The VGA signal generator is exposed in my TPU design as the following entity.

entity vga_gen is
Port (
pixel_clock     : in STD_LOGIC;

pixel_h : out STD_LOGIC_VECTOR(11 downto 0);
pixel_v : out STD_LOGIC_VECTOR(11 downto 0);

blank   : out STD_LOGIC := '0';
hsync   : out STD_LOGIC := '0';
vsync   : out STD_LOGIC := '0'
);
end vga_gen;

The pixel_h and pixel_v offsets then combine to form an address which can be looked up in VRAM, which holds the pixel data.

Generating the TMDS data

The image data we’ll send over the HDMI cable is actually DVI. The way HDMI and DVI send image data is pretty much the same. HDMI can carry more varied data, such as sound – but that’s really just hidden in the blanking periods of the communicated image.

TMDS (or Transition-minimized differential signalling if you want the full name!) is a method for transmitting serial data at high clock rates over varying length cables. It has methods for reducing the effects of electromagnetic interference. You can read more about it over at Wikipedia.

The main understanding required is that it’s a form of 8b/10b encoding. 8 bits of data are encoded as 10 bits in such a way that the number of transitions to 1 or 0 states are balanced. This allows the DC voltage to be at a sustained average level – which has various benefits.

Michael has a few TMDS encoder modules available on his various projects, going from basic ones which match low-end 3-bit per pixel input to fixed outputs, to a real encoder capable of the full range of 8bit per pixel RGB. I use the full encoder without modifications. A simple flow of how it works is as follows (again, from Wikipedia):

A two-stage process converts an input of 8 bits into a 10 bit code.

    1. In the first stage, the first bit is transformed and each subsequent bit is either XOR or XNOR transformed against the previous bit.
      The encoder chooses between XOR and XNOR by determining which will result in the fewest transitions. The ninth bit encodes which operation was used.
    2. In the second stage, the first eight bits are optionally inverted to even out the balance of ones and zeros and therefore the sustained average DC level; the tenth bit encodes whether this inversion took place.

With this encoder, we can get the 10 bits we then need to serialize across the cable to our monitor.
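As a concrete illustration, here is a Python sketch of the first, transition-minimising stage described above; the second, DC-balancing stage additionally tracks a running disparity and is omitted for brevity (function names are illustrative):

```python
def tmds_stage1(d: int) -> list[int]:
    """Transform an 8-bit value into the 9-bit q_m code: bit 0 is copied,
    bits 1-7 XOR (or XNOR) each input bit with the previous output bit,
    and bit 8 records which operation was chosen."""
    bits = [(d >> i) & 1 for i in range(8)]
    # XNOR is chosen when the input is ones-heavy, minimising transitions
    use_xnor = sum(bits) > 4 or (sum(bits) == 4 and bits[0] == 0)
    q = [bits[0]]
    for i in range(1, 8):
        b = q[i - 1] ^ bits[i]
        q.append(1 - b if use_xnor else b)
    q.append(0 if use_xnor else 1)
    return q

def transitions(bits: list[int]) -> int:
    return sum(a != b for a, b in zip(bits, bits[1:]))

# 0x55 alternates on every bit; after stage 1 the transitions drop from 7 to 3
raw = [(0x55 >> i) & 1 for i in range(8)]
assert transitions(raw) == 7
assert transitions(tmds_stage1(0x55)[:8]) == 3
```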

Serializing the TMDS data

To serialize the TMDS data to our differential output pairs, we use Double Data rate Registers (ODDR2). These registers are implemented as primitives in the VHDL. Using these DDR registers, we only need a serialization clock 5x that of the pixel clock, rather than 10x. There are ‘true’ serialization primitives available on Spartan6, which I may look at later (there is a SERDES example on Hamsterworks for those interested).


ODDR2_red   : ODDR2
generic map (
INIT => '0'
)
port map (
Q => red_s,
D0 => shift_red(0),
D1 => shift_red(1),
C0 => clk,
C1 => clk_n,
CE => '1',
R => '0',
S => '0'
);

Each pixel clock, the 10-bit TMDS value for each pixel is latched. Each subsequent cycle of the 5x pixel clock, the TMDS value is shifted 2 bits to the right. The low 2 bits are then fed into the D0 and D1 inputs of our DDR2 register. The clock inputs C0 and C1 are both 5x pixel, so 200MHz, but the C1 clock input is 180 degrees out of phase.
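The shifting scheme can be modelled in a few lines of Python (illustrative only; in hardware the ODDR2 flip-flops do this, clocked by C0 and C1):

```python
def ddr_serialize(tmds_word: int) -> list[int]:
    """Emit a latched 10-bit TMDS word as 5 pairs of bits, two per 5x clock:
    D0 on the C0 edge, D1 on the C1 edge, shifting right 2 bits each cycle."""
    out = []
    shift = tmds_word
    for _ in range(5):
        out.append(shift & 1)         # D0: low bit
        out.append((shift >> 1) & 1)  # D1: next bit
        shift >>= 2                   # shift 2 bits right per 5x-clock cycle
    return out

word = 0b1100110010
serial = ddr_serialize(word)
# bits come out LSB first; reassembling them gives back the original word
assert len(serial) == 10
assert sum(bit << i for i, bit in enumerate(serial)) == word
```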

oddr2_waveThe output of this register, red_s, is then fed into an OBUFDS primitive which drives the TMDS pair output, which is connected to the HDMI socket pins on the miniSpartan6+ board.


OBUFDS_red   : OBUFDS port map (
O  => hdmi_out_p(2),
OB => hdmi_out_n(2),
I  => red_s   );

There is similar logic for the other 3 channels. They go in the order 0:Blue, 1:Green, 2:Red, 3:Clock.


At the moment my clocking system needs work, but it’s fixed just now to my needs for 800x600x60Hz. For this, the 50MHz miniSpartan6+ input clock is buffered, then input into a PLL which multiplies it by 16 to 800MHz, before dividing it to 40MHz for the pixel clock, and 200MHz for the serial drivers. There is also a second 200MHz output, 180 degrees out of phase, used in the ODDR registers as clk_n.

generic map (
CLKFBOUT_MULT => 16,           --800MHz
CLKOUT0_DIVIDE => 20,          --40MHz

CLKOUT1_DIVIDE => 4,           --200MHz

CLKOUT2_DIVIDE => 4,           --200MHz
CLKOUT2_PHASE => 180.0,

CLK_FEEDBACK => "CLKFBOUT",    -- Clock source to drive CLKFBIN 
CLKIN_PERIOD => 20.0,          -- IMPORTANT! 20.00 = 50MHz
DIVCLK_DIVIDE => 1             -- Division value for all output clocks (1-52)
)
port map (
CLKFBOUT => clk_feedback,
CLKOUT0  => clock_x1_unbuffered,
CLKOUT1  => clock_x5_unbuffered,
CLKOUT2  => clock_x5_180_unbuffered,
CLKOUT3  => open,
CLKOUT4  => open,
CLKOUT5  => open,
LOCKED   => pll_locked,
CLKFBIN  => clk_feedback,
CLKIN    => clk50_buffered,
RST      => '0'
);

As with the 50MHz input, the 3 clock outputs are buffered before being used in the various subsystems. For this the BUFG primitive is used.


VRAM Interface

At the moment I have the second port on my ‘vram’ instance clocked at 200MHz. The first port, which TPU uses, is clocked at 50MHz. 200MHz is within the allowable operating range for the device I’m using, and it seems to work well. At the moment, I’m pretty sure I am 1 pixel out of phase, but I can fix that later. The address that the VRAM sees is the following:

-- generate the vram scan address, forcing reads at 2 byte boundaries
vram_addr <= X"0" &"000" & pixel_v(8 downto 5) & pixel_h(8 downto 5) & '0';

-- Only show 512x512 of the display with our expanded virtual pixels
vram_data <= vram_output_2 when ((pixel_h(11 downto 9) = "000") and (pixel_v(11 downto 9) = "000"))
else X"0000";

The ‘VRAM’ is currently set up to contain a 16×16 image. Tiny, but perfect for what I need just now. The 16-bit pixels are in 565 format, and I trivially expand that to 8-bit for the TMDS encoders.
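The post doesn’t show the expansion itself; one common approach is bit replication, sketched here in Python (a plain left shift would also work, at the cost of slightly darker whites):

```python
def rgb565_to_888(p: int):
    """Expand a 16-bit 565 pixel to 8-bit-per-channel RGB by replicating
    the top bits of each field into the vacated low bits."""
    r5 = (p >> 11) & 0x1F
    g6 = (p >> 5) & 0x3F
    b5 = p & 0x1F
    return ((r5 << 3) | (r5 >> 2),
            (g6 << 2) | (g6 >> 4),
            (b5 << 3) | (b5 >> 2))

assert rgb565_to_888(0xFFFF) == (255, 255, 255)  # full white stays full white
assert rgb565_to_888(0x0000) == (0, 0, 0)
```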

Now we have an integrated graphics subsystem, albeit one that is very rigid (for now).


I currently need to have the following definition in my constraints file for the clock:


Without it, the VHDL doesn’t route. It compiles and works fine (seemingly) with it included, but I’ve still to nail down exactly what it means, and how to fix it. Currently, my understanding is that when my VHDL is built, the compilers can’t generate a clock placement which satisfies all the rules set. It’s something I want to understand further. It could be as simple as missing out some buffers.

There is also a line of pixels to the far right of the displayed screen, which suggests I’m out of phase by one pixel with the memory read results and the VGA signals. This isn’t too bad, so I’ll look at fixing that along when I increase the VRAM size for higher resolution.


This brings this part to a close. We have HDMI output which is the representation of a small VRAM that TPU controls. It’s pretty neat. I hope to increase the resolution of the image from its current 16×16 to something more manageable.

The emulator was very useful during this, as it validated my output for me. Ignore the 2 bright green pixels in the superimposed emulator output 😉


(The HDMI output to the left is actually 16×16; a combination of lighting and a bad camera gives the impression of 8×8)

Thanks for reading, as always let me know your thoughts via twitter @domipheus. Also, many thanks to Michael Field and his DVID project, from which the bulk of this post is derived.


Dear ImGui, Thanks, From TEMU – The TPU Emulator

This is part of a series of posts detailing the steps and learning undertaken to design and implement a CPU in VHDL. Previous parts are available here, and I’d recommend they are read before continuing.

A few weeks ago I was in San Francisco for the Game Developers Conference (GDC). I decided not to take my MiniSpartan6+ board with me, despite wanting to get more work on TPU completed. Bare circuit boards don’t look good in luggage, etc.

I did however have an idea on the flight over from London Heathrow, so created a new Visual Studio project: temu, the TPU Emulator. I’ve been working on HDMI output of a small framebuffer in the VHDL, so thought I could make a compact emulator which would draw the framebuffer to a window whilst executing some TPU code. This would allow me to get some demos up and running quickly, so that when I went back to the VHDL, I could hit the ground running.

Annoyingly, I forgot to bring the latest TPU ISA document with me on my trip, and the version on GitHub is hilariously out of date. I did, however, have the latest HDL and the TPU assembler, tasm, so I worked backwards from there. The emulator itself is basic – I implemented the most important instructions, ignoring some of the smaller flags (such as signed addition; I don’t need any of the status flags of that for now). The emulator used stdout and just printed the current PC with the instruction it emulated, and in the case of memory operations output the values written along with the locations. I used SDL to open a window, and each frame processed the data in my ‘vram’ memory, writing pixels directly to the surface.

The main function simply had the standard SDL loop, with a WriteSurface() call which trivially wrote pixel values to the window, converting from my 565 16-bit pixel format to what is required of the SDL surface. The emulation happened in another thread, which loops until exit, emulating the instruction at the current PC and then setting the next PC. There is no synchronization required, as the main thread only reads the fake VRAM (simply a char array). The thread is spawned, and allowed to emulate alongside the SDL loop.

It worked well, until it didn’t. It wasn’t a threading issue – it worked great. The issue is I wanted more. I needed a tool, more than simply an emulator. I wanted single-step, memory inspection. I wanted a debugger inside my emulator.

Dear ImGui

I had the emulator working by the time GDC began proper, and was wondering what the next steps would be to make it more usable. I knew I needed some sort of real-time interaction, but didn’t want all the hassle of implementing that myself. I was at a GDC breakfast gathering of developers, and Omar (@ocornut) was there, who created a library I’d heard of but never used before: Dear ImGui. It is a cross-platform Immediate Mode GUI library. The answer I was looking for was staring me in the face. Switch SDL to OpenGL mode, and use Dear ImGui to add an interactive interface to my emulator.

Later in the day when I had some spare time I tried to get things working. Within 15 minutes (and I really do mean only 15 minutes) I’d downloaded the sources required, integrated them into my temu project, and got a window showing FPS and the representation of my VRAM. Most of those 15 minutes, may I add, were spent looking up how to create and re-upload texture data in OpenGL – it’s been so long since I’d done any OpenGL coding I’d forgotten the basics.

The architecture of the emulator (if you can call it that) is still as before: The SDL OpenGL loop and ‘vram’ texture update remains in the main thread, with the emulation happening in another. I keep to the single producer single consumer with no timing or order constraints, so no synchronization is needed.

The code for my VRAM window is trivial. It’s ever so slightly changed from a sample used to show drawing an image, and it fits my requirements perfectly.

void DrawWindow_Vram()
{
  if (window_vram) {
    ImGui::Begin("VRAM Representation", &window_vram);
    ImGui::Text("Application average %.3f ms/frame (%.1f FPS)", 1000.0f / ImGui::GetIO().Framerate, ImGui::GetIO().Framerate);
    // ... draw the vram texture ...
    ImGui::End();
  }
}

This code is called each frame. All I need to do is set window_vram to true, and the window is shown. Pressing the close window button will set the bool pointed to in the second argument of Begin() to false, which closes the window. A button elsewhere can set window_vram back to true to show the window again. All ‘widgets’ within the Begin and End calls are ordered sequentially, and there are constructs for ordering things in columns, frames, lines etc. It’s all very straightforward. There is a large demo here which shows the basics and advanced layouts and how they fit together.

This library is brilliant. As someone who has written GUI and UI system code in the past, this has the concept nailed.

Once I realized how easy it was to add different windows, a control window popped up, allowing me to stop execution, resume it, and single-step. It also controlled an arbitrary millisecond delay before each instruction was emulated.


void DrawWindow_Control()
{
  if (window_control) {
    ImGui::Begin("TPU control", &window_control, ImGuiWindowFlags_MenuBar);
    ImGui::Text("Status: %s", gStatePaused ? (gStateSingleStepping ? "Single Stepping" : "Stopped") : "Running...");
    if (!gStatePaused) {
      if (ImGui::Button("Stop"))
        gStatePaused = true;
    } else {
      if (ImGui::Button("Continue"))
        gStatePaused = false;
    }
    if (ImGui::Button("Single Step"))
      gStateSingleStepping = true;

    ImGui::SliderInt("Cycle Delay (ms)", &gSeepTime, 0, 100);
    ImGui::Checkbox("Status Prints (stdout)", &gStatusPrints);
    ImGui::End();
  }
}

This rocks.

After this, I thought about what else to do, and added quite a lot of extra functionality. I’m just going to go through the windows that now exist in the TPU emulator, giving a bit of blurb about each.


Registers

The registers window is pretty obvious. I’d like to add to it, including some internal registers – like status, which tracks overflow and carry style information.

Memory Map

The memory map/layout is essential information when developing TPU programs, since it can change so often as the top-level VHDL is edited. Things like block RAMs and memory-mapped registers/signals, such as those used for button inputs and LED output, need to be placed at the correct memory-mapped offsets. This window serves that purpose. At the moment it’s fixed, but it’s designed to be extensible: you can add another block RAM at a certain place, define whether it’s read-only, write-only, or read/write, and then view how it fits into the whole scheme of things. A further feature I’d like to implement is a generation button which outputs top-level VHDL for a memory management unit, taking the memory bus signals from TPU and mapping them to the various areas, providing the relevant clock enables and other state.

Memory Viewer

From the Memory Layout window, clicking view on any mapped element opens a memory viewer. This widget is available on the ImGui wiki on GitHub.

Interactive Assembler

The interactive assembler allows the TPU code to be updated within the emulator, at runtime. There is a small multi-line text box containing the contents of an assembly file, and an assemble button which, when clicked, invokes TASM on the file, opting to output a binary blob rather than the VHDL block-ram initializer format I usually use. This binary is then loaded into RAM starting at address 0x0000. This address is (at present) fixed, more due to limitations in TASM than anything else.
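The load step amounts to copying the assembled blob over the start of emulated RAM. A hedged sketch is below; `tpu_ram` and `load_binary_at_zero` are illustrative names, not the real TEMU code.

```cpp
#include <cstdint>
#include <cstdio>

// Emulated RAM; the assembled binary lands at address 0x0000
static uint8_t tpu_ram[64 * 1024];

// Copy a raw binary blob (e.g. TASM's output) into RAM starting at 0x0000.
// Returns the number of bytes loaded, or 0 on failure.
size_t load_binary_at_zero(const char* path) {
  FILE* f = std::fopen(path, "rb");
  if (!f) return 0;
  size_t n = std::fread(tpu_ram, 1, sizeof(tpu_ram), f);
  std::fclose(f);
  return n;
}
```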

This allows some really easy development. I can change constants and swap instructions on the fly without even resetting the emulator state, so in the case of drawing colours to the screen the results are instantly visible. For other changes it’s wise to either pause execution or reset the TPU (set the PC to 0 and re-initialize all memories), but it’s still miles away from my previous workflow, which was insanely tedious, involving VHDL synthesis steps that take minutes.


Buttons

Buttons! There needed to be input somewhere, and there are 4 input switches on my miniSpartan6+ board. I decided to expand that for the emulator, simply because I can add more I/O to the hardware device anyway. The buttons are memory mapped to a single 16-bit word, and there is a window showing a checkbox for each bit.
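The mapping between the 16 checkboxes and the memory-mapped word is plain bit packing. A hedged sketch, with illustrative names:

```cpp
#include <cstdint>

// Pack 16 checkbox states into the memory-mapped button word (bit i = button i)
uint16_t pack_buttons(const bool pressed[16]) {
  uint16_t w = 0;
  for (int i = 0; i < 16; i++)
    if (pressed[i])
      w |= (uint16_t)(1u << i);
  return w;
}

// Expand the button word back into booleans for the ImGui checkboxes
void unpack_buttons(uint16_t w, bool pressed[16]) {
  for (int i = 0; i < 16; i++)
    pressed[i] = ((w >> i) & 1) != 0;
}
```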


LEDs

As with the buttons, LEDs provide quick and easy visible status output on my development board, and so they have a window in the emulator. 8 LEDs map to a single byte in the memory space. The code for it, again, is trivial.

void DrawWindow_OutputLeds()
{
  // Storage for the led status values, ImGui expects bools
  static bool ledvals[8];

  if (window_outputleds) {
    // Expand the bit values in the memory_leds mapped area to booleans
    for (uint32_t i = 0; i < 8; i++)
      ledvals[i] = (memory_leds & (1 << i)) ? true : false;
    ImGui::Begin("Output Leds", &window_outputleds);
    ImGui::Text("Mapping: 0x%04X", memory_mapping_leds);
    // Arrange the checkboxes and text labels in 8 columns
    ImGui::Columns(8, 0, false);
    // Make the check colour green, a much better LED colour
    ImGui::PushStyleColor(ImGuiCol_CheckMark, ImVec4(0, 1, 0, 1));
    for (int i = 0; i < 8; i++) {
      ImGui::Text("%d", 7 - i);
      ImGui::Checkbox("", &ledvals[7 - i]);
      ImGui::NextColumn();
    }
    // Undo the change to the checkmark colour
    ImGui::PopStyleColor();
    ImGui::Columns(1);
    ImGui::End();
  }
}

Interrupt Firer

The next window is for testing interrupt routines. You can set what the Interrupt Event Field value will be (what is read externally off the data bus in the hardware on an interrupt ACK) and then signal to the emulator that an interrupt is waiting to be requested. If interrupts are disabled in the current emulator state, they can be forced to enabled from here, and a small status console shows the stage of the interrupt, such as ‘Waiting for ACK’.
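The handshake the window drives can be modelled as a tiny state machine: the firer raises a request carrying the event-field value, and the emulated CPU later acknowledges and reads that field off the (modelled) data bus. This is a hedged sketch with illustrative names, not the real TEMU state machine:

```cpp
#include <cstdint>

// Stages shown in the window's small status console
enum class IntState { Idle, WaitingForAck, Acked };

struct IntCtrl {
  IntState state = IntState::Idle;
  uint16_t event_field = 0;

  // Called by the Interrupt Firer window: latch the event field, raise request
  void request(uint16_t field) {
    event_field = field;
    state = IntState::WaitingForAck;
  }

  // Called by the emulated CPU on interrupt ACK; returns the event field
  // as it would be read externally off the data bus
  uint16_t ack() {
    state = IntState::Acked;
    return event_field;
  }
};
```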

Breakpoint Set

I mentioned debuggers earlier, and this window brings the emulator closer to that goal. A single breakpoint can be set by PC value and enabled. There is nothing more than that just now, although it can easily be expanded to multiple breakpoints. An issue in the emulator at the moment is one which crops up in real debuggers: to continue from a breakpoint you need to disable the breakpoint, single step, re-enable the breakpoint, and only then continue. Something to fix later, when it causes me more of a headache than it does now.

PC Hit Counts

Another window came about from me looking over the ImGui demo, trying to see what built-in widgets could add some simple functionality. The histogram seemed a winner almost instantly, and it proved itself when I came to try out the interrupt firer feature. The emulator, when executing instructions, grows a std::vector to accommodate the location of the PC, then increments the value in the bin located at that PC. With this, the histogram is implemented with a single library function:

if (pc_hitcounts.size() > 0) {
  ImGui::PlotHistogram("##histo", &pc_hitcounts[0], (int)pc_hitcounts.size(), 0, nullptr, 0.0f, pc_maxhits / hist_zoom, ImVec2(0, 140));
}

It’s basically a PC hit counter, so it’s a very simple profiler. Your hot code shows up as the high bars, but you can also use it to identify code coverage issues – for instance, ensuring that the exception handler code you’ve written actually gets executed when you fire an interrupt.

I made a simple video of me using TEMU, showing some of the features and how they work and help things. It’s really helped speed up how quickly I can write TPU assembly, which in turn means I can develop new hardware features quicker. I aim to fix the emulator up (things like project files don’t really exist just now) and upload it to GitHub eventually.

Thanks for reading! Let me know what you think by messaging me on twitter @domipheus, and generally pour praise in the direction of @ocornut for his wonderful library – you can support him via Patreon 🙂

Designing a CPU in VHDL, Part 10b: A very irritating issue, resolved.

This is part of a series of posts detailing the steps and learning undertaken to design and implement a CPU in VHDL. Previous parts are available here, and I’d recommend they are read before continuing.

It’s been a significant amount of time between this post and my last TPU article. A variety of things caused this – mainly working on a few other projects – but also due to an issue I had with TPU itself.

I had been hoping to interface TPU with an ESP8266 Wifi module, using the UART. For those not aware, the ESP8266 is a nifty little device comprising of a chipset containing the wifi radio but also a microcontroller handling the full TCP/IP stack. They are very simple to interface over a UART, working with simple AT commands. You can connect to a wifi network and spawn a server listening for requests in a few string operations – very powerful.

I started writing, in TPU assembly, the various code areas I’d need to interface with the ESP8266. They were mainly string operations over the UART: things like SendString, RecieveString, and ExpectString, where the function waits until it matches a string sequence received from the UART. The code and data strings needed for a very simple test came to well over 1KB, which was a lot of code for this. I created various test benches, and the code eventually passed those tests in the simulator.

At this point, I thought things were working well. However, on flashing the miniSpartan6+ board with my new programming file, nothing worked. I could tell the CPU was running something, but it was not the behavior I expected from the code I had integrated into the embedded block ram for execution.

When this happens, I usually suspect that I’ve done something stupid in the VHDL, and so I create post-translate simulation models of my design. This basically spits out further VHDL source which represents TPU’s design using general FPGA constructs, after compilation steps. In the software sense, imagine compiling C++ to C source: you’d see the internals of how virtual function calls work, and how the code increases in size with various operations. You can then simulate these post-translate models, which (in theory) give more accurate results.

The simulation completed (taking a lot longer than normal simulation), and the odd behavior persisted. So standard behavioral simulation worked, and post-translate model simulation failed – just like on the device. This is good: we can reproduce the issue in the simulator.

Looking at the waveforms in the simulator, I could see what was going on: my code in the block ram was being corrupted somehow. When simulating my normal code, the waveform was as follows:

The important part of the waveform is the value 0x8012 on mem_i_data, bottom right of the image. That is the value at location 0x001C, as set in the block ram. However, when running my post-translate model, the following result occurred:

The 0x8012 data is now 0x0012. The high byte has been reset/written/removed. The code itself was setting the high-order byte of the memory-mapped UART address, so that failing explains why the TPU never worked with the ESP8266 chip.

  write.w r0, r1, 5
  ##Send command to uart 0x12
  load.h  r0, 0x12

The code above performs a write before loading 0x12 into the high byte of r0. You can see from the simulation waveform that the write enable goes high – this is from the write.w instruction. The instructions following a write were having their memory destination locations overwritten.

It took far too long to realise what was causing it. Looking at the simulation waveforms for both behavioral and post-translate runs, the actions leading up to the corruption seemed identical. The memory bus signals were all changing in the same cycle and being sampled at the same time, but as the issue always happened with a write operation, attention was drawn to the write enable line. I tried various things at this point, spending an embarrassing amount of time on it.

The issue was staring me in the face, and present in both waveforms, albeit not completely surfacing. With my previous embedded RAM implementation (before I moved to RAMB16BWER block rams) I sampled all inputs differently, only when the CMD line went active. Now the block rams were connected directly. The chip select signals are generated from the address, which is active longer than it needs to be, and, more importantly, the write enable remains active for a whole memory ‘cycle’. The remedy was to feed the block ram write enable with (WE and CMD) from the TPU output. This means WE only goes active for a single cycle, as with CMD.
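The failure mode can be illustrated with a C-level model (a hedged analogy only; the real fault is in VHDL timing, and `Bram::cycle` is an invented stand-in). The block RAM commits a write on every clocked cycle where its write-enable input is high, so a WE held high across a two-cycle memory transaction clobbers whatever address is on the bus in the second cycle. Gating with CMD restricts the write to one cycle:

```cpp
#include <cstdint>

// Toy model of a synchronous block RAM: on each clock cycle, if the
// write-enable input is high, whatever is on the address/data bus is written.
struct Bram {
  uint16_t mem[32] = {0};
  void cycle(bool we, uint8_t addr, uint16_t data) {
    if (we)
      mem[addr] = data;
  }
};
```

With an ungated WE, the cycle after an intended write still commits, corrupting the next bus address; feeding the RAM `we && cmd` instead means the second cycle (CMD low) performs no write.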

Don’t keep your write enables active longer than needed, folks!

TPU is continuing very slowly, but progress is being made. Whilst trying to fix this issue I also integrated the UARTs that Xilinx provide, which have 16-entry FIFO buffers on both receive and transmit. This may help when interfacing with various other devices, like the ESP8266. I’m still interested in why the simulator didn’t show any differences in signal timing in post-translate mode, despite being able to reproduce the issue. If anyone knows hints and tips for figuring out issues in this area, please let me know! This issue should really have been noticed by me sooner, though.

Thanks for reading, as always.