Designing a RISC-V CPU in VHDL, Part 16: Arty S7 RPU SoC, Block Rams, 720p HDMI

This is part of a series of posts detailing the steps and learning undertaken to design and implement a CPU in VHDL. Previous parts are available here, and I’d recommend they are read before continuing.

It’s finally time – the big deploy onto Digilent’s Arty S7 board.

In my previous part, I went over, at a high level, the changes made to my TPU CPU core in order to make it consume RISC-V. The CPU itself is still very simple, and I removed some of the more interesting features from TPU, such as interrupts. Interrupts as implemented on TPU would not comply with the RISC-V spec, so it was best they were stripped out for the time being.

After I got my RISC-V SoC up and running on MiniSpartan6+, I was looking to develop my own Spartan 7 FPGA board to use as a programmable computer kit – FPGA for Soft CPU, another for Soft GPU, a microcontroller for system management – maybe even a 3rd FPGA for chipset I/O. However, I quickly came to my senses and realised just how much of an endeavour that is. Spinning my own PCBs, soldering those big BGA chips (and ultimately failing to solder those £40-a-piece chips) would be a very costly affair. It is still a long term goal, but in the meantime I wanted to get an off-the-shelf Spartan 7 FPGA board, essentially to bring up the FPGA side of an eventual move to my own development board. When I saw the Digilent Arty S7 announced last year, I kept an eye on it knowing it would be a contender for my upgrade path to the 7-series chips. The Arty S7 I ended up purchasing is the S7-50 variant, sporting the XC7S50 Spartan 7 FPGA with significantly more resources than my previous Spartan 6 board. It also has 256MB of DDR3 RAM, but lacks HDMI connectivity. Thankfully, the HDMI/DVI-D output issue has been solved and you can read about that in a previous article.

I’m based in the UK, and was able to purchase the Arty S7-50 from digikey.co.uk for £119 delivered. Checking just now, it looks like the price has increased and you’d now be paying in the region of £125. It’s still a very nice board for that! So, this post deals with porting my existing RISC-V “SoC” to this new FPGA board.

The SoC consists of my RPU CPU, fast internal FPGA Block RAM storage, external (and slow!) DDR3 memory, my HDMI output with its legacy-style text mode, and finally, access to SD card storage via SPI. First, we have to tackle a new development environment.

Xilinx Vivado

With the new 7-series FPGAs comes a new set of design tools for authoring HDL and deploying to devices. As with ISE, which we used for Spartan 6 FPGAs, Vivado has a free “webpack” edition which is compatible with Arty S7. You can grab the download from Xilinx here. For clarity, I use the 2018.1 version. 2018.2 is the latest version as this is written.

The UI has changed quite a bit from ISE. In my opinion, Vivado is harder to navigate and parts of the interface seem very clunky. The general areas of interest remain: a Project Manager “flow” holds the source hierarchy, and the Simulation, Synthesis, and Implementation flows hold the respective details and commands for those aspects of the project.

To create projects for the board, we will need the board definition files which are provided by Digilent. There is a small guide on how to download and install them, available here. If you already have Vivado installed, you can skip straight to section 3 of that guide. Digilent also have a demo project available with your usual flashy LED hello world functionality, if you want to start very simple. For the rest of this article however, I’m jumping straight into the project.

Vivado base Arty S7 project

With our RPU core interface already defined, we just need to import the VHDL source files into our project and begin a new top level design which incorporates the core. As a reminder, the shape of the RPU core entity is sketched below, and it still heavily resembles the old TPU core.
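
The exact entity is in the RPU sources on GitHub; the sketch below only gives the general shape of it – a clock and reset, plus a single memory-request interface. The signal names here are illustrative for this post, not necessarily the ones used in the repository.

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

-- Illustrative sketch of the RPU core interface; names are not the real ones.
entity rpu_core_sketch is
    Port (
        I_clk       : in STD_LOGIC;
        I_reset     : in STD_LOGIC;

        -- memory request interface, driven by the fetch/load/store logic
        O_mem_addr  : out STD_LOGIC_VECTOR (31 downto 0);
        O_mem_wdata : out STD_LOGIC_VECTOR (31 downto 0);
        I_mem_rdata : in STD_LOGIC_VECTOR (31 downto 0);
        O_mem_we    : out STD_LOGIC;
        O_mem_req   : out STD_LOGIC;
        I_mem_ready : in STD_LOGIC
    );
end rpu_core_sketch;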

The top level component will have various input and output definitions. We require the following:

  • Clock input
  • Switch input
  • LED output
  • TMDS DVI-D HDMI output
  • SPI input/output
  • DDR3 memory input/output

We will leave out SPI and DDR3 definitions for now. With this, our definition is as follows:

entity rpu_top is
    Port (
        -- Input 100MHz clock
        CLK100MHZ : in STD_LOGIC;

        -- Input switches from board
        sw : in STD_LOGIC_VECTOR (3 downto 0);

        -- Output leds to board
        led : out STD_LOGIC_VECTOR (3 downto 0);

        -- HDMI (DVI-D) video output
        hdmi_out_p : out STD_LOGIC_VECTOR(3 downto 0);
        hdmi_out_n : out STD_LOGIC_VECTOR(3 downto 0)
    );
end rpu_top;

We need to ensure these signals are mapped to the pin constraints of the Arty S7. My board is a Rev. B board, so I made a copy of the respective .xdc file provided with the Digilent board files and edited it to point to my named signals. If you’re familiar with Xilinx ISE, this is like the old .ucf constraints file from my TPU CPU project. An example is below, with the LEDs, clock and HDMI output defined. With HDMI we need to ensure the signal standard is TMDS_33. This is the definition required to map to my simple HDMI Pmod connectors.
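
A cut-down sketch of what those constraint lines look like follows. The PACKAGE_PIN values are deliberately left as placeholders here – take the real pin names from Digilent’s master .xdc for the Arty S7 – but the shape of each line, and the TMDS_33 I/O standard on the HDMI differential pairs, is the important part.

# 100MHz input clock (PACKAGE_PIN values below are placeholders - use Digilent's master XDC)
set_property -dict { PACKAGE_PIN <clk_pin> IOSTANDARD LVCMOS33 } [get_ports { CLK100MHZ }]
create_clock -add -name sys_clk_pin -period 10.000 [get_ports { CLK100MHZ }]

# Switches and LEDs
set_property -dict { PACKAGE_PIN <sw0_pin>  IOSTANDARD LVCMOS33 } [get_ports { sw[0] }]
set_property -dict { PACKAGE_PIN <led0_pin> IOSTANDARD LVCMOS33 } [get_ports { led[0] }]

# HDMI (DVI-D) over Pmod - note the TMDS_33 signalling standard
set_property -dict { PACKAGE_PIN <tmds_p0_pin> IOSTANDARD TMDS_33 } [get_ports { hdmi_out_p[0] }]
set_property -dict { PACKAGE_PIN <tmds_n0_pin> IOSTANDARD TMDS_33 } [get_ports { hdmi_out_n[0] }]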

Block RAM

The next thing we need to do is figure out some block ram memory. The block ram primitives have changed in the Spartan 7 series FPGAs and are larger. Unlike the MiniSpartan6 implementation, where I manually initialized the block rams and added additional switching logic, I am now using the Xilinx Block Memory Generator IP. This is available in the free Webpack version of Vivado and allows for generation of a block ram object of your own data width and memory size. Internally, multiple block ram primitives will be combined into a single interface. I want 64Kbyte block rams, which are made up of 16 smaller 4Kbyte hardware block rams. With a 32 bit data interface, this Block Memory Generator saves us a lot of work. It also allows for an initialization file to be provided, so the block ram can have defined contents at reset. This allows us to have our bootloader present in memory for bootstrapping the SoC.

Another option in the block memory generator which we must ensure is unselected is “common clock”. We want our block ram to be a true dual-port ram, with separate clocks for each port. This allows the rams to be connected both to the CPU core for read/write, and also to another system, such as our HDMI character generator for use as text console storage – running at the pixel clock rate, instead of the CPU clock rate.

With this, you can create the interface via the GUI and generate an HDL wrapper.

I created a wrapper so I could edit the source and ensure the data out ports were tri-stated when the block ram was disabled, for easier plumbing of multiple block rams together.

With our block ram wrapper available, we can connect this to the memory interface of the RPU core. This is fairly simple, and we can attach multiple block rams by enabling them and muxing data lines depending on address bits. Because we tri-state the data output bus when a block ram is not selected, we should be able to use a single output for all block rams, but for now I assign a signal to each output and explicitly select the one we need.
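
As a rough sketch of that decoding (the signal names and address windows here are examples only, not the SoC’s real memory map), the enables come from the upper address bits and the CPU read data is selected from whichever BRAM is active:

-- illustrative fragment; each 64Kbyte BRAM gets its own 64Kbyte address window
bram0_en <= '1' when mem_addr(31 downto 16) = x"0000" else '0';
bram1_en <= '1' when mem_addr(31 downto 16) = x"0001" else '0';

-- explicitly select the active BRAM's output; with tri-stated wrapper outputs
-- these could instead all drive a single shared read-data signal
mem_rdata <= bram0_dout when bram0_en = '1' else
             bram1_dout when bram1_en = '1' else
             (others => '0');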

With the above, we should have 192Kbyte of total addressable RAM. However, there is a significant issue which needs attention before any code will run from these BRAMs. RPU currently expects data presented to it to already be arranged into the format it expects internally. The memory interface does not have byte enables or suchlike, as you would typically expect. So memory requests need to be swizzled for endianness and size prior to being given to RPU. This system will be getting a significant overhaul soon, which will change all of this so that RPU is presented with simply a raw view of memory – but for now, we need to swizzle data from the BRAM before it enters the CPU. To do this, there are two additional processes, and the signals assigned by these processes are what is read from or written to the CPU interface.
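
To give an idea of what the read-path swizzle looks like, here is a simplified sketch. The signal names, the 2-bit size code and the exact byte-lane mapping are all illustrative – the real mapping depends on how the BRAM words were laid out – but the shape is the same: pick the addressed lanes out of the 32-bit BRAM word and rearrange them into the order RPU expects.

-- simplified read-path swizzle sketch; mem_size here is a made-up 2-bit access
-- size code ("00" byte, "01" halfword, otherwise word)
process(bram_rdata, mem_addr, mem_size)
begin
    case mem_size is
        when "00" =>
            -- byte access: pick the addressed lane into the low byte
            case mem_addr(1 downto 0) is
                when "00"   => cpu_rdata <= x"000000" & bram_rdata(31 downto 24);
                when "01"   => cpu_rdata <= x"000000" & bram_rdata(23 downto 16);
                when "10"   => cpu_rdata <= x"000000" & bram_rdata(15 downto 8);
                when others => cpu_rdata <= x"000000" & bram_rdata(7 downto 0);
            end case;
        when "01" =>
            -- halfword access: swap the two bytes of the addressed half
            if mem_addr(1) = '0' then
                cpu_rdata <= x"0000" & bram_rdata(23 downto 16) & bram_rdata(31 downto 24);
            else
                cpu_rdata <= x"0000" & bram_rdata(7 downto 0) & bram_rdata(15 downto 8);
            end if;
        when others =>
            -- word access: swap all four byte lanes
            cpu_rdata <= bram_rdata(7 downto 0)   & bram_rdata(15 downto 8) &
                         bram_rdata(23 downto 16) & bram_rdata(31 downto 24);
    end case;
end process;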

There is one more process associated with memory, and it implements the state machine that handles requests from the CPU. It also assigns the various I/O data – it simply maps addresses to signals. I think this process can be implemented better, but for ease of tinkering it works well for now.
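
A heavily trimmed sketch of that idea is below. Only the LED address (0xf0009000) is a real one from my SoC; the BRAM window and the signal names are illustrative.

-- trimmed illustration of the decode inside the clocked memory process (MEM_proc)
if rising_edge(I_clk) then
    if mem_req = '1' then
        if mem_we = '1' and mem_addr = x"F0009000" then
            -- memory-mapped LED register: the 4 least significant bits drive the LEDs
            led <= cpu_wdata(3 downto 0);
        elsif mem_addr(31 downto 16) = x"0000" then
            -- the request falls in a block RAM window: hand it to the BRAM wrapper
            -- and wait for the data to come back before signalling completion
            bram_request <= '1';
            bram_we      <= mem_we;
        end if;
    end if;
end if;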

Getting some code running

At this point, you should be able to write your “chasing LED” program and have it as the BRAM initial contents, so when the Arty S7 board is flashed with the FPGA bitcode you will see the onboard LEDs flash.

The code for this is just as simple as you’d expect.

#define IO_ADDR_LEDS 0xf0009000

int main(void)
{
    unsigned int i = 0u;
    volatile unsigned int* ui_addr_leds = (unsigned int *)IO_ADDR_LEDS;

    while (1)
    {
        // only the upper bits of the counter change slowly enough to see
        *ui_addr_leds = (i++) >> 18;
    }
}

IO_ADDR_LEDS above is defined to be 0xf0009000, so the MEM_proc process sketched earlier picks up this memory write and redirects the 4 least significant bits to the external LED I/O pins.

I previously mentioned that I’d build my own RISC-V toolchain using Windows Subsystem for Linux. I attempted to build the latest version of riscv-tools, however I kept running into build issues this time around. I have instead switched to using the GNU MCU Eclipse RISC-V Embedded GCC toolchain, which is very handily released as a full Windows binary package. It can be obtained from the project's GitHub releases page. A basic main() function with the above code is compiled with a linker script to place it at location 0x00000000, with no standard libraries or start files. The resulting elf binary is tiny, and you can use objdump with the -s argument to get a hex dump output which you can then transform into the .coe file required by the Xilinx Block Memory Generator to use as the initial BRAM contents. The .coe file format is simple, and consists of two declarations – the radix of the data to follow, and a comma separated vector containing the data itself.

memory_initialization_radix=16;
memory_initialization_vector=
37810000, 1301c1ff, ef005074, 6f000000,
13060500, 13050000, 93f61500, 63840600,
...

I use this method to create the real bootstrap firmware, which initializes the system and ends up copying code from the SD card into the DDR3 RAM for execution – but more on that in the next post! I was also able to use the new simulator in Vivado to check internal signals while some code ran.

One thing that caught me out is that changing the .coe initial BRAM contents file and rebuilding the project will not bring the changes from that file into the new BRAM IP. You need to right click the BRAM in the designer, select reset output products, and then generate them again for the updated .coe to be integrated. A rather annoying, slow and unnecessary step in my opinion – but maybe there is a reason for this that I do not understand as yet.

HDMI output

Flashing LEDs are cool, but we have a character generator to port! My previous miniSpartan6 design ran HDMI out at 640×480, using a widely available USB powered HDMI panel targeted at Raspberry Pi use. With the Arty S7, I wanted more resolution, and have defaulted to outputting 720p60. The changes to allow this on the DVI-D side of things are minimal – the pixel clock is updated, and the VGA timing signals for 1280×720 at 60Hz are used.

The pixel clock for 720p should be 74.25MHz, but the much easier to obtain 75MHz will generally still work. The previous character generator (discussed in this blog post) works off of a 5x pixel clock – which in this case would be 375MHz. This is too high – the Spartan 7 block rams of the Arty S7 are rated for around 350MHz – so we need to rework the character generator to run off the raw 75MHz pixel clock. This is actually fairly simple to do. The character generator works by being fed the X and Y position of the current VGA timing location as the frame is scanned out. If we offset the timing locations, by putting the various sync signals through an 8-deep FIFO, we can feed the character generator pixel locations which are 8 pixels early – allowing the generator 8 cycles of latency in which to perform the necessary text and font lookups from BRAM. As the font glyphs are 8 pixels wide, we can prefetch the next glyph data. By the time the sync signals are at the end of the FIFO, the character generator will be providing the correct pixel colour values for the glyph row required.
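
A sketch of that delay line is below (the signal names are illustrative). The raw hsync/vsync/blank from the VGA timing generator are pushed through an 8-stage shift register before they reach the DVI-D encoder, while the character generator is fed the undelayed X/Y counters – so it effectively sees positions 8 pixels early and has 8 cycles to perform its BRAM lookups.

-- illustrative fragment; declarations (in the architecture declarative part):
--   signal hsync_dly, vsync_dly, blank_dly : std_logic_vector(7 downto 0) := (others => '0');

process(pixel_clk)
begin
    if rising_edge(pixel_clk) then
        -- push the raw timing signals through an 8-deep delay line
        hsync_dly <= hsync_dly(6 downto 0) & hsync_raw;
        vsync_dly <= vsync_dly(6 downto 0) & vsync_raw;
        blank_dly <= blank_dly(6 downto 0) & blank_raw;
    end if;
end process;

-- the delayed signals are what actually reach the DVI-D encoder
hsync_out <= hsync_dly(7);
vsync_out <= vsync_dly(7);
blank_out <= blank_dly(7);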

The new character generator pipeline provides a 160×45 text console display with 8×16 glyphs. To connect the FPGA dev board to an HDMI monitor, I took what I learned from my previous HDMI Pmod post and made my first PCB using PCBWay. Then I soldered a surface-mount HDMI connector and the Pmod 0.1″ angled header terminals. I may post a video of this in the future. I tinned the surface pads with a soldering iron and used a hot air gun with lots of flux to reflow the connector to the pads.

It’s a very useful converter! And for my first attempt at soldering 0.5mm pins, a successful first PCB 🙂

I am using an Atrix Lapdock as my HDMI sink – it’s a laptop form factor with HDMI input for the screen, and a USB hub with integrated keyboard, trackpad and battery. Again, this is usually used to make Raspberry Pi laptops, as the Lapdock itself can provide 5V power to devices as well as powering the screen. So with this, I have a RISC-V laptop!

That is it for now – I intended to discuss the DDR3 memory in this post, but it just got too long. That post will follow shortly.

I have put the RPU CPU Core HDL on Github, as well as the Arty SoC project. This code is ahead of these blog posts and includes the DDR3 implementation, so if you are impatient you can go and look now. There are timing constraint violations introduced with the HDMI output and DDR3 IP implementation, but I have yet to look into them – and the built FPGA bitfile flashes to my board and runs at a lower speed.

Thanks for reading! As always, I am available on twitter @domipheus for any queries. If you try out the SoC from github, let me know!

Designing a CPU in VHDL, Part 15: Introducing RPU

This is part of a series of posts detailing the steps and learning undertaken to design and implement a CPU in VHDL. Previous parts are available here, and I’d recommend they are read before continuing.

It’s been a while. Despite the length of time and lack of posts, rest assured a significant amount of progress has been made on my VHDL CPU over the last year. I’ve hinted at that fact multiple times on twitter, as various project milestones have been hit. But what’s changed?

First and foremost: the CPU now consumes RISC-V. Its decoder, ALU and datapaths have been updated. With that, the data width is now 32 bits. The decoder accepts the RV32I base integer instruction set.

I’d been putting off multiple side-projects with my existing TPU implementation for a while. Its 16-bit nature really made integrating the 32MB SDRAM into the system a rather pointless affair. I’m all for reminiscing, but I did not want to go down the rabbit hole of memory bank switching and the baggage that would entail. The toolchain for creating software was already at the limit – not the limit in terms of what could be done, but the limit of what I’m prepared to do in order to perform basic tasks. We all love bare metal assembly, but for the sake of my own free time, I wanted to just drop some C in, and I was not going to make my own compiler. I looked into creating a backend for LLVM, but it’s really just another distraction.

As a reminder, here is where we left off.

[Image: Block diagram of the TPU core]

So, what’s actually changed from the old 16-bit TPU?

  • Many more registers,
  • Datapath widened to 32-bit
  • Decoder and ALU updates for RV32I ISA
  • Glue logic/datapath updated for new functions

The register file is basically the same as before with the 16-bit TPU, extended to 32 entries of 32 bits, named x0-x31. In RV32I, x0 is hardwired to 0x00000000. I’ve left it a real register entry, but just never allow a write to x0 to progress, keeping the entry always zero. The decoder and ALU are fairly standard really – there are a small number of instruction formats, and where immediates are concerned there is some bit swapping here and there, but the sign bit is always in the same place, which makes the decode logic a bit easier to understand.
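
As a minimal stand-alone sketch of how that x0 write guard falls out in VHDL (the port and signal names here are mine, not RPU's):

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

-- Sketch of an RV32I register file with x0 hardwired to zero.
entity regfile_sketch is
    Port (
        clk     : in STD_LOGIC;
        we      : in STD_LOGIC;
        rd_idx  : in STD_LOGIC_VECTOR(4 downto 0);
        wdata   : in STD_LOGIC_VECTOR(31 downto 0);
        rs1_idx : in STD_LOGIC_VECTOR(4 downto 0);
        rs2_idx : in STD_LOGIC_VECTOR(4 downto 0);
        rs1_dat : out STD_LOGIC_VECTOR(31 downto 0);
        rs2_dat : out STD_LOGIC_VECTOR(31 downto 0)
    );
end regfile_sketch;

architecture rtl of regfile_sketch is
    type regfile_t is array (0 to 31) of STD_LOGIC_VECTOR(31 downto 0);
    signal regs : regfile_t := (others => (others => '0'));
begin
    process(clk)
    begin
        if rising_edge(clk) then
            -- never let a write to x0 land, so entry 0 stays permanently zero
            if we = '1' and rd_idx /= "00000" then
                regs(to_integer(unsigned(rd_idx))) <= wdata;
            end if;
        end if;
    end process;

    rs1_dat <= regs(to_integer(unsigned(rs1_idx)));
    rs2_dat <= regs(to_integer(unsigned(rs2_idx)));
end rtl;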

The last item about datapath changes follows on from how RISC-V branch instructions operate. TPU was always incredibly simple, which meant branching was quite an involved process when it came to function calling and attempting to get a standardized calling convention. There was no call instruction, and the number of operations required for saving a calculated return-from-function address, setting up call arguments on the stack and eventually calling via an indirect register was rather irritating. With the limited number of registers available on TPU, simplification via register parameter passing didn’t solve much, as you would be required to save/restore register contents to the stack regardless.

[Image: RISC-V JAL instruction definition, from the ISA specification]

RISC-V has several branch instructions with significant changes to dataflow from that of TPU. In addition to calculating the branch target – which is all TPU was able to do – RISC-V calculates a return address at PC+4, which is then written to a register. This means our new ALU needs two outputs: the standard result of a calculation, and the branch target. The branch target, along with the shouldBranch status output, feeds into our existing PC unit, with the result feeding the register file/memory address logic as normal. The new connections are shown below in yellow.
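
In RTL terms, the JAL case inside the ALU ends up looking something like this rough fragment (the signal and constant names are illustrative, not the ones in the RPU source):

-- illustrative fragment from the ALU's instruction case statement
when OP_JAL =>
    -- the link value (PC+4) goes out on the normal result path, to be written to rd
    s_result       <= std_logic_vector(unsigned(s_pc) + 4);
    -- the branch target and the taken flag feed the PC unit directly
    s_branchtarget <= std_logic_vector(unsigned(s_pc) + unsigned(s_imm));
    s_shouldbranch <= '1';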

[Image: RPU block diagram]

In terms of old TPU systems, the old interrupt system needs significant updating and so is disabled. It’s still got block ram storage, UART, and relies on the same underlying memory subsystem, which in all honesty is super bloated and pretty poor. The memory system is currently my main focus – it’s still made up of a single large process running at the CPU core clock. It reads like an imperative language function, which is not at all suitable for the CPU moving forward. The CPU needs to interface with various different components, from the block rams and UART, to SPI and SDRAM controllers. These can be running at different clocks and the signals all need to remain valid across these systems. With everything being accessed as memory-mapped IO, the memory system is super important, and I’ve already run into several gotchas when extending it with an SDRAM controller. More information on that later.

Toolchain

As I mentioned earlier, one of the main reasons for moving to RISC-V was toolchain considerations. I have been developing the software for my ‘SoC’ with GCC.

With Windows 10, you can now use the Windows Subsystem for Linux to build Linux tools within Windows. This makes compiling the RISC-V toolchain for your particular ISA variant a super simple process.

https://riscv.org/software-tools/ has some details of how to build relevant toolchains, and Microsoft have details of how to install the Windows Subsystem for Linux.

Data Storage

With the RISC-V GCC toolchain built, targeting RV32I, I was able to write a decent amount of code in order to create a bootloader which existed inside the FPGA block ram. This bootloader looked for an SD card in the miniSpartan6+ SD card slot. It initialized the SD card into slow SPI transfer mode so we could get at its contents.

Actually getting to data stored on an SD card is incredibly simple, and has been written up nicely here. In terms of how the CPU accessed the SD card, I found an SPI master VHDL module, threw it into my project, and memory mapped it so it could be used from the CPU.

#define SPI_M1_CONFIG_REG_ADDR 0x00009300
#define SPI_M1_BUSY_REG_ADDR   0x00009304
#define SPI_M1_DATA_REG_ADDR   0x00009308

void spi_sd_reset()
{
	volatile uint32_t* spi_config_reg = (uint32_t*)SPI_M1_CONFIG_REG_ADDR;

	uint32_t current_reg = *spi_config_reg;
	uint32_t reset_bit = 1U << SPI_CONFIG_REG_BIT_RESET;
	uint32_t reset_bitmask = ~reset_bit;

	// Multiple writes give the controller time to reset - its clock
	// can be slower than this CPU
	*spi_config_reg = current_reg & reset_bitmask;
	*spi_config_reg = current_reg & reset_bitmask;
	*spi_config_reg = current_reg & reset_bitmask;
	*spi_config_reg = current_reg & reset_bitmask;

	// return to original, but ensure the reset bit is set (active low), 
	// in case the register was previously clobbered by some other operation
	*spi_config_reg = current_reg | reset_bit;
}

uint8_t spi_sd_xchg(uint8_t dat)
{
	volatile uint32_t* spi_busy_reg = (uint32_t*)SPI_M1_BUSY_REG_ADDR;
	volatile uint32_t* spi_data_reg = (uint32_t*)SPI_M1_DATA_REG_ADDR;

	*spi_data_reg = (uint32_t)dat;

	while (*spi_busy_reg != 0);

	return *spi_data_reg;
}

A few helper functions later, the bootloader did a very simple FAT32 root search for a file called BOOT, and copied it into RAM before jumping to it. It also copied a file called BIOS to memory location 0x00000000 – which had a table of I/O functions, so I could fix/extend functionality without needing to recompile my “user” code.

typedef struct {
    FN_sys_console_put_stringn    sys_console_put_stringn;
    FN_sys_console_put_string     sys_console_put_string;
    FN_sys_console_setcursor      sys_console_setcursor;
    FN_sys_console_setpen         sys_console_setpen;
    FN_sys_console_clear          sys_console_clear;
    FN_sys_console_getcursor_col  sys_console_getcursor_col;
    FN_sys_console_getcursor_row  sys_console_getcursor_row;
    FN_sys_console_getpen         sys_console_getpen;
    FN_sys_clk_get_cyclecount     sys_clk_get_cyclecount;
    
    FN_spi_sd_set_clkdivider      spi_sd_set_clkdivider;
    FN_spi_sd_set_deviceaddr      spi_sd_set_deviceaddr;
    FN_spi_sd_xchg                spi_sd_xchg;
    FN_spi_sd_reset               spi_sd_reset;
    
    FN_uart_tx1_send              uart_tx1_send;
    FN_uart_rx1_recv              uart_rx1_recv;
    FN_uart_trx1_set_baud_divisor uart_trx1_set_baud_divisor;
    FN_uart_trx1_get_baud_divisor uart_trx1_get_baud_divisor;
    
//...
    
// Above functions are basic bios and provided by BIOS.c/BIN
// Below are extensions, so null checks must be performed to check 
// initialization state
    FN_pf_open    pf_open;
    FN_pf_read    pf_read;
    FN_pf_opendir pf_opendir;
    FN_pf_readdir pf_readdir;
} SYSCALL_TABLE;

With the FPGA set to look for this BOOT file off an SD card, testing various items became a whole lot easier. Not having to change the block ram and rebuild the FPGA configuration to run a set of tests saved lots of time, and I had a decent little setup going. However, I was severely limited by RAM – many block rams were now in use, and yet I still did not have enough. I could have optimized for space here and there, but when I have a 32MB SDRAM chip on the FPGA dev board it seemed rather pointless. It was finally time to get the SDRAM integrated.

SDRAM

Getting an SDRAM interface controller for the chip on the miniSpartan6+ was very easy. Mike Field had done this already and made his code available. The problem was that whatever I tried, the data out from the SDRAM was garbage.

Or, should I say, out by one. Everything to do with computers has an out by one at some point.

Anyway, data was always arriving delayed by one request. If I asked to read data at 0x00000100, I’d get 0. If I asked for data at 0x00000104, I’d get the data at 0x00000100. I could tell that writing seemed to be fine – but reading was broken. Thankfully, discovering this didn’t take too much time, due to being able to boot code off the SD card. I wrote little memory checking binaries and executed them via a UART command line, with the debug data being displayed on the HDMI out. It was pretty cool, despite me getting wound up at how broken reading from the SDRAM was.

[Image: memory test output]

It took a long time to track down the actual cause of this read corruption. For the most part, I thought it was timing-based in the SDRAM controller. I edited the whole memory system multiple times, with no changes to the behavior. This should have been a red flag for me, as the issue was being caused by the one constant throughout all of these tests – the RPU core!

In RPU, the signals that comprise the input and output of each pipeline stage get passed around and moved from unit to unit each core clock. This is where my issue was – not the “external” memory controller!

Despite my CPU being okay with waiting for the results of memory reads from a slower device, it was not set up for correctly forwarding that data on when the device eventually said the request had completed. A fairly basic error on my part, made harder to track down by the fact the old TPU CPU design and thus RPU had memory requests going through the ALU, then control unit, and some secondary smaller memory controller which didn’t really do anything apart from add further latency to the request.

I fixed this, which immediately made all the SDRAM memory space available. Hurrah.

It’s rather annoying this issue did not present itself sooner. There are already many “slow” memory mapped devices connected to the CPU core, but they generally work off of FIFOs – so despite them being slow, they are accessed via queries to lines to say whether there is data in the FIFO, and what the data is. These reads are actually pretty fast, the latency is eaten elsewhere – like spinning on a memory read. Basic SDRAM read latency however for this was in the 20 cycle range at 100MHz, way more than previously tested.

Having the additional memory was great. Now that I had a full GCC toolchain, I could grab the FatFs library and have a fully usable filesystem mounted from the SD card. Listing directories and printing the contents of files seemed like such a milestone.

Next steps

So that’s where we are at. However, things be changing. I have switched over to Xilinx Spartan 7, using a Digilent Arty S7 board (the larger XC7S50 variant). I’m currently porting my work over to it.

If you saw my last blog post, you’d have seen that HDMI out at least works, and now I’m working on porting the CPU over, using block rams and SD card for memory/storage. Lastly will be a DDR3 memory controller. 256MB of 650MHz memory gives much more scope for messing up my timing even more. It should be great fun!

I hope to document progress more frequently than over the past year. These articles are my project's documentation, and provide an outlet to confirm I actually understand my solutions, so the lack of posts has made things all the more disjointed. The next post shall be soon!

Thanks for reading. Let me know of any queries on twitter @domipheus.

HDMI over Pmod using the Arty Spartan 7 FPGA board

tl;dr: This post shows that driving DVI-D over an HDMI cable, directly connected to the High Speed Pmod connector of Digilent's Arty S7 board, is very much possible – even at high resolution.

I’ve been working away on my RISC-V FPGA based computer ‘kit’, which is based on my VHDL CPU, ported to RISC-V. I wanted to get a new development board with faster RAM, and found it hard to find boards with DDR3 memory, a large enough FPGA, an SD card interface, and HDMI out.

The SD card was not really a problem – it’s low speed, and you can just connect it with slow SPI I/O. HDMI is certainly considered high speed – bandwidth across the 4 serial channels tops 4.4Gbps – but is driving HDMI through basic I/O interfaces possible?

It most certainly is!

A caveat, though: There is no protection circuitry. Do this at your own risk 🙂

The Digilent Arty S7 board can be had for sub £100, and my XC7S50 variant cost £119 shipped. It has DDR3, but no on-board HDMI. However, it turns out you can get perfectly acceptable (for my needs, anyway) output from the high-speed Pmod I/O ports. This will go very nicely with the small USB powered HDMI screens you can find, which are very handy.

[Image: development board connected via breadboard wires to an HDMI connector breakout board, driving a small HDMI display]

I have put a 720p output test on GitHub. It should load in Xilinx Vivado out of the box. You need to connect an HDMI cable to the Pmod, by either splicing a cable, or getting something like the Adafruit breakout board (seen above). It connects to Pmod JA, circled in the image below.

[Image: the Pmod port to use, circled]

In the constraints file for the project, the Pmod pins are specified to use the TMDS_33 standard. The pinout is defined as follows:

[Image: Pmod pin to HDMI signal mapping]

The VHDL code in the example uses a simplified 1280x720x60 pixel clock of 75MHz – not the required 74.25MHz as per the standard. Due to this, some TVs/monitors may not accept the signal. It runs fine on my Acer XF240H and Samsung LU28E590DS using a 1m cable. I have not tried a television – they tend to be more picky. You can get much closer to 74.25MHz by chaining clock generators, but I have not done that in this instance. The refresh rate reports as 61Hz with this clock, likely due to it being out of spec.
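
For reference, the 1280×720 at 60Hz mode uses the standard CEA-861 timing parameters; a sketch of them as VHDL constants is below (the constant names are mine, not necessarily those used in the example project):

-- 1280x720p60 timing parameters (CEA-861); at the nominal 74.25MHz pixel clock
-- this gives exactly 60Hz, while the 75MHz used here gives roughly 61Hz
constant H_ACTIVE : integer := 1280;
constant H_FP     : integer := 110;  -- horizontal front porch
constant H_SYNC   : integer := 40;   -- hsync pulse width
constant H_BP     : integer := 220;  -- horizontal back porch (1650 clocks per line)
constant V_ACTIVE : integer := 720;
constant V_FP     : integer := 5;    -- vertical front porch
constant V_SYNC   : integer := 5;    -- vsync pulse width
constant V_BP     : integer := 20;   -- vertical back porch (750 lines per frame)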

If you want to learn more about how my example project generates a HDMI/DVI-D signal, you can find details here, which is part of my Designing a CPU in VHDL blog series.

[Image: a monitor displaying the 720p test signal]

[Video: the output in action, using timings applicable for my 800×480 module]

You can change the pixel clock and the VGA timings (sync starts, ends, porches) in the code to generate different resolutions. For 1080p60Hz the pixel rate is supposed to be 148.5MHz, but again my monitor will accept a rate of 150MHz, and show 61Hz.

Even 1440p30 is possible. 1440p60 did not work, but I didn’t try hard to get a more accurate pixel clock in that instance. When you get to higher clocks, you can start to get timing constraint issues. At 1080p and 1440p I had some timing failures listed in the implementation report, but they did run. If you were using this in a real system, you’d have to fix those timing issues. That’s out of the scope of this blog, though 🙂

So there you have it. You really can bodge HDMI/DVI-D output directly through a Pmod!

Thanks for reading. Let me know what you think on twitter @domipheus.

The Boat PC – a marine based Raspberry Pi project

Motivation

In late 2015 I was doing my usual head-scratching about what gifts to get various family members for the holiday season. My wife mentioned making something electronic for my father-in-law’s boat, and after a few hours of collecting thoughts I came up with an idea:

  • A Raspberry Pi computer, which could be powered off the boat’s 12V batteries.
  • This computer would have sensors which made sense on a boat. Certainly GPS.
  • I’d have some software which collated the sensor data and displayed it nicely.
  • This could plug into the onboard TV using HDMI.
  • It would all be put into a suitable enclosure.

Excellent – a plan. I expected the hardware part to be easy, the enclosure part fairly straightforward, and the software part to be an absolute disaster. I started searching for an already-existing project to take care of the software side of things.

That’s when I came upon a project called OpenPlotter. It’s a fully-featured Linux distribution for the Raspberry Pi, specifically for use on a boat, and includes the relevant software for calibrating, collating and transforming data from various sensors into a form that can be used practically. I’ve got to be honest here – OpenPlotter is solid, does exactly what it advertises, and is very simple for someone familiar with RPi/Linux to set up and use.

After firmly deciding on OpenPlotter for the software, and knowing I’d be using an old Raspberry Pi 2 I had collecting dust, I looked at what hardware OpenPlotter supported. The list is fairly long, and gave me ideas I had not thought of previously – for example using a USB DVB-T television dongle as an AIS receiver with Software Defined Radio (SDR), allowing real-time data of nearby ships to be displayed. MarineTraffic uses this AIS data, but of course on a boat you can’t rely on an internet connection to pull data from – it’s much better to get the data directly from the VHF signals.

In addition to AIS and GPS, I’d add an Inertial Measurement Unit (IMU – basically an accelerometer, gyroscope and magnetometer in one) in the form of an InvenSense MPU-9150, and also a USB to RS422 converter. RS422 is specified as part of the protocol standard for NMEA 0183, which in turn is the communication specification used in marine electronics. Supporting input and output of direct NMEA using RS422 would allow for some extendibility, for example depth sensors that are already present can feed data into OpenPlotter using this port.

After going and purchasing all of these sensors, I realised that actually using the TV inside the boat isn’t going to be useful, as it’s not visible from the helm. Thankfully, OpenPlotter allows for headless operation, and will automatically set up a WiFi hotspot so you can connect a phone/tablet to the Raspberry Pi and control it using VNC or other software.

The Build

So, to clarify, all the hardware gubbins required:

  • Raspberry Pi 2
  • Invensense MPU9150 board
  • RTL2832U DVB-T USB
  • USB to RS422 Converter
  • USB GPS module
  • USB wifi module

Of course, we need some associated utility to make this into an actual device;

  • 12V to 5V power converter
  • Power switch & connector
  • Status LED
  • Enclosure

When I’ve done projects in the past (the biggest one being PiOnTheWall from years ago), I spent a significant amount of time searching for the right enclosure to put the hardware in. It’s not just a case of going and getting something that’s big enough to fit the contents; you need to know how thick the sides are, what kind of plastic it is, whether there are PCB standoffs included, and whether there are vent holes.

After several days, I came up with the following which I got off ebay.

[Image: enclosure]

I knew already the RTL2832U SDR dongle could run quite hot – so ventilation holes were a must. It’s the hottest part of this hardware, easily 60C+, whilst the Broadcom SoC of the Raspberry Pi will have to be working fairly hard to hit 45C. I did not plan to heatsink anything, and in the end it works fine without. I did make a conscious choice though to have the SDR board at the highest point in the enclosure, closest to the vents.

The design was simple – switch and status LED at the front; RS422, SDR antenna, power in and the Raspberry Pi micro USB/HDMI/audio out at the back. I removed all plastic covers from any USB devices, as they just bloated the inside, and I knew removing USB connectors would be a requirement. Laying out the components, I found an arrangement which worked well.

[Image: layout_annotated]

The Raspberry Pi would be put on metal standoffs – I used some spares I had from various PC motherboards and cases. I just drilled straight through the bottom of the plastic case with a bit size such that the thread would drive into the plastic.

In my previous Raspberry Pi project I butchered the board, and I’m pleased to say the only thing I had to do in this instance was make the fixing holes on the PCB slightly larger to accommodate the screws for standoffs.

[Image: rpi_drill]

[Image: standoffs]

The GPS and WiFi modules remained as dongles, simply connected into one dual header on the Raspberry Pi. To aid fitting all the boards into the enclosure, the male USB connector of the RTL2832U SDR dongle was placed on a ribbon cable. Additionally, the miniUSB cable for the RS422 converter was made small enough to fit in the limited space available. These two boards were physically fixed to the rear panel via bolts, and in the SDR board’s case, a little shelf made from spare plastic.

[Image: 422_sdr_cables]

[Image: 422_sdr_affixed]

I’m not very good at making good panel openings, so sadly my HDMI and microUSB ports are very poor. At least they are at the back, where nobody should be able to see them 😉

Internally, all that was left was to connect the 12V->5V DC-DC converter to the Pi, put a power switch inline with the input 12v Power jack, attach the LED to 3v3 (there is a resistor in the LED leg heat-shrink), and fix the rest of it down with the same standoffs. It ended up looking fairly neat and tidy.

[Image: complete_internals]

For those wondering, I connected the 5V output from the DC-DC converter direct to the 5V rail of the Pi. It bypasses some input protection which exists on the microUSB power input. For me this is okay; I hoped it would allow the SDR USB dongle to draw more power than is ‘technically’ allowed from the onboard USB ports. I knew that was an issue back in the Raspberry Pi 1 days, and couldn’t remember if that was still the case with the RPi 2.

The final rear panel:

[Image: complete_rear]

The front of the enclosure, unit powered and closed.

[Image: complete_front]

You will notice the USB socket on the front; I thought it could be useful to trickle charge phones or the tablet that would connect through WiFi to offer controls. I connected the unit to an HDMI monitor to do first-time OpenPlotter setup, making sure the sensors worked, and then switched it into headless mode, with VNC and NMEA 0183 output over its own ad-hoc WiFi hotspot.

Testing on the boat!

One thing that I could not test at home and needed to do on the boat was calibrate and test the AIS Receiver. There was a long gap between the hardware being “complete” in summer 2016, and testing it on-board in spring 2017.

AIS runs off VHF frequencies of around 162MHz, a wavelength of 1.85 meters. The boat already has a marine antenna which would work fine, but when I brought the device for testing I did not have the correct connector to interface with the SDR dongle.

[Image: antenna]

Because of this, I made a quick and dirty 1-wire, quarter wavelength antenna. I used a good quality coax, with one end exposing only the inner core to a length of 46 centimeters. I then hooked this around a bit of the boat outside. It wouldn’t get long range, but I hoped I’d get some ship signatures in the marina – and it did! After following the calibration instructions in the OpenPlotter guide, I rebooted, and after a few minutes the tablet (now connected to the RPi using WiFi) displayed the following:

[Image: tablet_ais]

We used an Android app called SailTracker which takes the collated NMEA datastream and displays the data in an appropriate format. There are several paid apps that come complete with nautical maps, which is neat.

[Image: installed]

And that’s it! All installed, wired into the 12V supply, and also now using the VHF antenna at the top of the mast. I’m quite proud of how this one turned out, and I’m very impressed with the OpenPlotter distribution for allowing this project to work as well as it did.

What I’d change

There are 3 things I’d change if I was to do this again:

  1. Changing the front panel LED to RGB, and having it be a real status LED rather than just power. For example:
    • solid blue: OS booting,
    • flashing green: OpenPlotter starting services,
    • solid green: WiFi hotspot up,
    • red would be an error condition.
  2. Mounting the SDR dongle further in, allowing me to wire up the antenna input from the onboard mini MCX to a PL259 VHF connector on the rear panel. This would have eliminated some of the external complexity of needing various converters.
  3. I’d have a large cover over the microUSB/HDMI/audio raspberry pi connectors, as they are really only needed for debug, and it would have stopped me from making the messy cuts I did 🙂

Thanks for reading. If you have any questions or queries feel free to contact me at @domipheus.