Tuesday, February 7, 2017

SystemVerilog and Python

Design patterns in programming arise when engineers find themselves writing the same code over and over to solve the same problems. Design patterns for statically typed object-oriented languages (C++ and Java) were cataloged and enshrined in the famous book, "Design Patterns: Elements of Reusable Object-Oriented Software" by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. The authors are lovingly called the Gang of Four, or the GOF, and the book is often referred to as the GOF book.

The subset of SystemVerilog used in writing testbenches is a statically typed object-oriented language (it's most similar to Java). As people started using SystemVerilog for verification, frameworks for writing testbenches quickly became popular. These frameworks all provide code that implements design patterns from the GOF book, and the various frameworks were similar because they were all essentially implementing the same patterns. Eventually the various frameworks all coalesced into one: the humbly named Universal Verification Methodology, or UVM.

Below is a table that matches up GOF design patterns with their UVM implementations. This was adapted from this presentation:

GOF Pattern Name                 | UVM Name
Factory Method, Abstract Factory | uvm_factory, inheriting from UVM base classes
Singleton                        | UVM Pool, UVM global report server, etc.
Composite                        | UVM Component Hierarchy, UVM Sequence Hierarchy
Facade                           | TLM Ports, UVM scoreboards
Adapter                          | UVM Reg Adapter
Bridge                           | UVM sequencer, UVM driver
Observer                         | UVM Subscriber, UVM Monitor, UVM Coverage
Template Method                  | UVM Transaction (do_copy, do_compare), UVM Phase
Command                          | UVM Sequence Item
Strategy                         | UVM Sequence, UVM Driver
Mediator                         | UVM Virtual Sequencer

If we switched from SystemVerilog to Python for writing our testbenches, would we need to re-implement each of those parts of the UVM? Python is not a statically typed object-oriented language like Java and SystemVerilog; it is a dynamically typed language. The prominent and well-respected computer scientist Peter Norvig has already explored this topic for us. He did so when Python was still a very young language, so he examined other dynamic languages instead (Dylan and Lisp), and he concluded that of the 23 design patterns from the GOF book, 16 are either invisible or greatly simplified due to the nature of dynamic languages and their built-in features. As an example of how this could be, he points out that defining a function and calling it used to be design patterns. Higher-level languages came along and made defining and calling a function part of the language.

This is essentially what has happened with dynamic languages. Many design patterns from the GOF book are now simply part of the language. According to Dr. Norvig, the patterns that dynamic languages make obsolete are listed below (a short Python sketch after the list shows the idea):

  • Abstract-Factory
  • Flyweight
  • Factory-Method
  • State
  • Proxy
  • Chain-Of-Responsibility
  • Command
  • Strategy
  • Template-Method
  • Visitor
  • Interpreter
  • Iterator
  • Mediator
  • Observer
  • Builder
  • Facade
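
To make that concrete, here is a small Python sketch (the names are made up for illustration, not taken from any real framework) of how a couple of these patterns dissolve. Strategy and Command collapse into passing plain functions around, and Factory-Method collapses into the fact that classes are themselves objects you can store in a dictionary and call.

# Strategy / Command without any pattern machinery: functions are objects,
# so we pass them around instead of wrapping each one in a class.
def send_legal_frames(driver):
    print("driving legal frames on", driver)

def send_error_frames(driver):
    print("driving corrupted frames on", driver)

def run_test(stimulus, driver="bus0"):
    stimulus(driver)   # 'stimulus' plays the role of a Strategy object

run_test(send_legal_frames)
run_test(send_error_frames)

# Factory-Method without a factory class: classes are first-class objects,
# so a plain dictionary acts as the registry.
class EthPacket:
    pass

class UsbPacket:
    pass

packet_classes = {"eth": EthPacket, "usb": UsbPacket}

def make_packet(kind):
    return packet_classes[kind]()   # look up the class and instantiate it

print(type(make_packet("eth")).__name__)   # EthPacket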

That reduces the above table to:

GOF Pattern Name | UVM Name
Singleton        | UVM Pool, UVM global report server, etc.
Composite        | UVM Component Hierarchy, UVM Sequence Hierarchy
Adapter          | UVM Reg Adapter
Bridge           | UVM sequencer, UVM driver

Trusting that, if we were to write a pure Python testbench, we can see that we would still likely implement a few design patterns. Thinking about it, it makes sense that we'd still have classes dedicated to transforming high-level sequence items into pin wiggles, just as the sequencer and driver work together to do in the UVM. It also makes sense that we'd have a class hierarchy to organize and relate components (such as the sequencer and driver equivalents) and sequences (high-level stimulus generation). Things like that.
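
To give a feel for what that might look like, here is a minimal sketch (hypothetical names, not a real framework API) of a Composite-style component hierarchy and a driver that turns abstract sequence items into pin-level calls, roughly the two jobs described above.

class Component:
    """Composite pattern: every component knows its parent and children."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def full_name(self):
        if self.parent is None:
            return self.name
        return self.parent.full_name() + "." + self.name

class BusDriver(Component):
    """Bridge-like pairing: accepts abstract items, drives concrete pins."""
    def __init__(self, name, parent, drive_pins):
        super().__init__(name, parent)
        self.drive_pins = drive_pins   # any callable that wiggles pins

    def send(self, item):
        # 'item' is just a dict here; no base transaction class required
        self.drive_pins(item["addr"], item["data"])

env = Component("env")
drv = BusDriver("bus_drv", env,
                drive_pins=lambda addr, data: print("write 0x%x -> 0x%x" % (data, addr)))
drv.send({"addr": 0x10, "data": 0xDEADBEEF})
print(drv.full_name())   # env.bus_drv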

The more striking thing is the amount of code we would not need.

Saturday, December 31, 2016

Adventures in Arch Linux

I just installed Arch Linux on my 4th machine. It has been fun and painful. Painfully fun? I have learned a lot and that is always fun. There.

I have loved using Ubuntu over the last several (eight, I think) years. Ubuntu is easy to install and most things Just Work. It's easy to find packages for most software I want to run, and there is lots of help on the internet for accomplishing whatever you want to accomplish with Ubuntu. My frustrations have been that even though you can find instructions for getting whatever software you want to run, it's not always a simple apt-get install. Sometimes it's configuring a PPA source and sometimes it's compiling from source. Sometimes a PPA works and serves you well for years and then suddenly it disappears. Another frustration is out-of-date packages, and full system upgrades in general. Keeping up with the latest emacs was a chore. Going from one release of Ubuntu to another works surprisingly well, but it's a big enough chore that I keep putting it off. One of my desktop machines at home was still running 12.04 up until yesterday. That's nearly five years old now!

These concerns led me to Arch, and it seems to be addressing them beautifully. Every package I have wanted is either in the main repositories, where I can install it with a simple pacman command, or in the AUR, where I can install it with a simple yaourt command. There are no releases of Arch; the packages in the repositories are just continually updated. Staying up to date is always the same pacman command to upgrade your installed packages. There are times when you have to take some manual steps to fix an interaction between two packages, or to switch from a package that has been made obsolete by a newer one, but that's fairly rare, well documented, and you deal with it a little at a time as the situations come up. With Ubuntu dist-upgrades you had to deal with many of those scenarios all at once, every six months if you were keeping fully up to date. With Arch, keeping up with the latest emacs happens without me even realizing it.

Where Arch is not as nice as Ubuntu is in the installation. With Arch it's all manual. What you should do is pretty well documented, but you have to type all the commands yourself and make decisions about alternative ways of doing various things. It's a fun learning experience, as I mentioned at the beginning of this post, but not a process that I really enjoyed repeating over and over. This is really where Ubuntu made its name. The nice package system and repositories came straight from Debian originally, but the installer and default configurations are what made Ubuntu so accessible. There was a joke in the early Ubuntu days that Ubuntu was an African word meaning, "can't install Debian."

It turns out that there's a distribution with a name meaning, "can't install Arch." It's Antergos. It really is just a graphical installer on top of Arch. Once it's done you are running Arch with some Antergos-chosen configuration, which is exactly what I wanted. It does feel like it's still early days for this project. On one laptop I tried Antergos on, the installer didn't have the wifi drivers I needed, and I had to go back to plain Arch and figure out how to load the driver by hand in order to complete the installation (that should be a blog post of its own). On another machine, once the Antergos install was done, the display manager would crash with a WebKitWebProcess coredump. The Antergos forums told me how to switch to lxdm and that fixed my problem (probably should be another blog post). I don't think a Linux beginner would have enjoyed that process, but overall Antergos looks promising. Mostly I'm looking forward to never needing to do a fresh install on any of those machines ever again.

Tuesday, July 19, 2016

Another SystemVerilog Streaming Example: Size Mismatch

I had a packed struct whose size was not evenly divisible by 8 (it was one bit short, in fact) and I had an array of bytes that I needed to stream into it. The extra bit in the array of bytes was not relevant, so I tried just doing this:

my_struct = {>>byte{my_array_of_bytes}};

But my simulator complained that my_array_of_bytes was bigger than the destination (my_struct). It took me longer than I'd like to admit to figure out that I just needed to do this:

bit extra_bit;
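// extra_bit absorbs the one leftover bit so the left-hand side exactly matches the width of my_array_of_bytes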
{my_struct, extra_bit} = {>>byte{my_array_of_bytes}};

That did the trick.

Friday, April 15, 2016

Get xpra to work on Ubuntu 14.04

Xpra is like screen or tmux for X apps.  There is a commercial app called Exceed on Demand and xpra seems to work very similarly to that.  Xpra is a very nice alternative to VNC and performs a lot better than forwarding X over ssh.  Here's how I got it to work using Ubuntu 14.04 as a server and windoze (is that joke getting old yet?) as a client.  Xpra says you can run Mac and Linux clients as well, but I haven't tried that yet.

To get it installed and running, dig down from the main Xpra site to the trusty download area, or just click here for 64-bit. Download the highest-versioned python-rencode and xpra packages there, then do this on your command line:

sudo dpkg -i ~/Downloads/python-rencode_1.0.3-1_amd64.deb
sudo apt-get install python-numpy python-opengl python-lzo python-appindicator python-gtkglext1 xvfb
sudo dpkg -i ~/Downloads/xpra_0.15.10-1_amd64.deb

When you try to install either .deb package it might report other dependencies that are missing. Just sudo apt-get install those and then try the sudo dpkg -i command again. After it's all installed you can run an xpra server like so (the options were suggested to me by xpra the first time I tried to run it):

xpra start :1234 --start-child=xterm --no-notifications --no-pulseaudio

On the Windows side, download the xpra installer by clicking the link on the main xpra page. After running that it will offer to launch xpra; go ahead and do that. Choose ssh in the Mode dropdown. Leave all the other fields as they are and enter your ssh login information (what you would use to ssh to the Ubuntu machine you just started the server on) and the display number (we used 1234 above when we started the server). You can leave the password field blank; it will prompt you for your ssh password after you click Connect. Once you do that, an xterm will open up on your Windows desktop and you can start any other Linux apps you want from there. There will be an xpra tray icon you can use to change settings and disconnect. After you disconnect you can reconnect and all the windows you had open will come right back just like they were when you disconnected. It also saves your state if you are disconnected from the network unexpectedly (like maybe your laptop goes to sleep or something). It's very nice.

One other thing I noticed is that the apps xpra was showing me were a little fuzzy (the text in emacs was hard to read). I had to click on the xpra tray icon and change the desktop scaling option (it was making the windows larger for some reason). You can also edit the C:\Program Files (x86)\Xpra\xpra.conf file and change the desktop scaling option there (along with many other settings, for example, I turned off sound and microphone because I don't need that and I figured it might save some CPU and bandwidth).

I'm glad I found xpra and got it working.   It works so well, I'm really surprised I haven't heard more people talking about it.  Go try it out!

Thursday, March 3, 2016

Best Part of Distributed Version Control


I switched jobs recently and I am now using git on a day-to-day basis.  My previous jobs had used either subversion (boo) or mercurial (which I really liked).  Transitioning to git has been relatively easy.  I've created several aliases to do things I used to do in mercurial (well, as close as I can get for some of them) and to make certain common git operations one command instead of command --option --option argument [argument], and it's not too bad.  Once I learned how to "bring back" "lost" commits (aka move branch pointers around with git reset) I lost my fear of losing work.  I do still have some fear when I interact with our "central" git repo, because it's not always clear to me what exactly git push is going to do to the remote repo, but it's becoming clearer as I do it more and more.

In all my googling to learn how to do the things I want with git I came across "Unorthodocs: Abandon your DVCS and Return to Sanity."  I have to agree with some of what Benjamin says there.  For me, sane branching and merging was the number one reason I was first attracted to distributed version control, and Benjamin is right, good branching and merging could be provided by a centralized tool.  In fact, most people seem to be using decentralized tools just like they used their centralized tools in the past (see: github, gitlab, bitbucket, even hgweb).

I have found, however, that the longer I've used mercurial (and now git), the thing I love most about them is local commits.  I'm pretty sure that local commits are really the thing people want when they talk about needing good branching and merging.  99% of the time, people just want a way to commit their work but not inflict it on the rest of the team.  Then they would like to do some testing, commit and checkpoint their work some more, and repeat that until they are sure it's ready to share.  With old centralized tools the only way to do that is with branches and merges (it's actually the only way with a DVCS tool too, but they have the ability to mostly hide that from you).

The longer we used mercurial at my last job, the less and less we used branches.  The workflow was basically, do some work, commit it, post the changes to review board for review, and then once you have tested and had your code reviewed, rebase it onto the main branch (after folding all the work-in-progress intermediate commits together) and push.  The history in our main repo was one straight line.  Easy to look at and find the changes in the history you cared about.

The more advanced workflow might have involved downloading a patch from reviewboard and importing it as a local commit to test it out in your local clone, or sending a patch directly to someone else for them to import as a local commit in their local clone to test.  In either case you could then push that new commit (imported from the patch) or strip it if you didn't like it.  You could also make modifications, amend the commit with those modifications, etc., etc.

The cognitive load of that workflow was so small, and nothing you did in the experimental development stage could affect anyone else.  Your own work was safe, your co-workers' work was safe, yet you could share work with each other very easily too.  The commands you had to know were literally:

hg log            # -G was sometimes nice
hg commit         # maybe with --amend
hg incoming       # to preview a pull
hg pull --rebase
rbt post          # code review
hg outgoing       # to preview a push
hg push

That's it!  Advanced commands were:

hg update         # to jump to another revision
hg export > patch-name
hg import patch-name
hg strip

Notice the lack of HEAD^ and reset --hard and checkout -b --track.  Man, those were the days.  Despite the more obtuse commands, you can use that workflow with git too, and I'll probably learn how, because right now everything we do is create a branch (which includes inventing a name for it), push to the central server, pull (or should I fetch?) from the central server, and merge on top of merge on top of merge.  It's a lot more to think about and keep straight in your mind, even without git's complex and unintuitive commands.

Having the ability to have those local commits, commits that are essentially in a draft state (not intended to be inflicted on the whole team), is the real killer feature of distributed version control tools.  Yes, you can have that draft state even in a centralized tool by committing to branches, but the amazing thing about DVCSs is you don't *need* to use an explicit branch.  You just commit, right on to trunk/master/default (whatever you call it), and it's local.  A draft.  A work-in-progress.  That's the default mode of operation.  And isn't that how it should be?  The default, no-effort, no-cognitive-load mode of operation should be: create a private, draft commit.  When you are ready to put that commit into production, then a little cognitive load is OK.

When you use git and the Very Branchy development model, you keep much of the cognitive load of centralized systems and of using branches to maintain your work in progress.  The trick with DVCS tools is that you don't have to think about branches at all.  Just commit.  A simple pull --rebase is all it takes to integrate your changes with others, still privately, still preserving your original commits in case you need to go back.  Do the simplest thing that could possibly work.  I think I've heard that somewhere before.