Tuesday, July 19, 2016

Another SystemVerilog Streaming Example: Size Mismatch

I had a packed struct whose size was not evenly divisible by 8 (it was one bit short, in fact), and I had an array of bytes that I needed to stream into it. The extra bits in the array of bytes were not relevant, so I tried just doing this:

my_struct = {>>byte{my_array_of_bytes}};

But my simulator complained that my_array_of_bytes was bigger than the destination (my_struct). It took me longer to figure out than I'd like to admit that I just needed to do this:

bit extra_bit;
{my_struct, extra_bit} = {>>byte{my_array_of_bytes}};

That did the trick.
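For context, here's a minimal self-contained sketch of the situation. The struct fields and data values are invented for illustration; the point is that the destination struct is one bit short of a whole number of bytes:

module streaming_example;
  // hypothetical struct for illustration: 15 bits, one short of two bytes
  typedef struct packed {
    bit [6:0] id;    // 7 bits
    bit [7:0] data;  // 8 bits
  } my_struct_t;

  my_struct_t my_struct;
  bit         extra_bit;                                // soaks up the leftover bit
  byte        my_array_of_bytes[2] = '{8'hAB, 8'hCD};   // 16 source bits

  initial begin
    // 16 source bits stream into the 15-bit struct plus the 1 spare bit
    {my_struct, extra_bit} = {>>byte{my_array_of_bytes}};
    $display("my_struct = %h, extra_bit = %b", my_struct, extra_bit);
  end
endmodule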

Friday, April 15, 2016

Get xpra to work on Ubuntu 14.04

Xpra is like screen or tmux for X apps.  There is a commercial app called Exceed on Demand and xpra seems to work very similarly to that.  Xpra is a very nice alternative to VNC and performs a lot better than forwarding X over ssh.  Here's how I got it to work using Ubuntu 14.04 as a server and windoze (is that joke getting old yet?) as a client.  Xpra says you can run Mac and Linux clients as well, but I haven't tried that yet.

To get it installed and running, dig down from the main Xpra site to the trusty download area, or just click here for 64-bit.  Download the highest-version-numbered python-rencode and xpra packages there, then do this on your command line:

sudo dpkg -i ~/Downloads/python-rencode_1.0.3-1_amd64.deb
sudo apt-get install python-numpy python-opengl python-lzo python-appindicator python-gtkglext1 xvfb
sudo dpkg -i ~/Downloads/xpra_0.15.10-1_amd64.deb

When you try to install either .deb package it might report other dependencies that are missing. Just sudo apt-get install those and then try the sudo dpkg -i command again. After it's all installed you can run an xpra server like so (the options were suggested to me by xpra the first time I tried to run it):

xpra start :1234 --start-child=xterm --no-notifications --no-pulseaudio

On the Windows side, download the xpra installer by clicking the link on the main xpra page. After running it, the installer will offer to launch xpra; go ahead and do that. Choose ssh in the Mode dropdown. Leave all the other fields as they are and enter your ssh login information (what you would use to ssh to the Ubuntu machine you just started the server on) and the display number (we used 1234 above when we started the server). You can leave the password field blank; it will prompt you for your ssh password after you click connect.

Once you do that, an xterm will open up on your Windows desktop and you can start any other Linux apps you want from there. There will be an xpra tray icon you can use to change settings and disconnect. After you disconnect you can reconnect and all the windows you had open will come right back, just like they were when you disconnected. It also saves your state if you are disconnected from the network unexpectedly (like maybe your laptop goes to sleep or something). It's very nice.
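I haven't tried the Linux or Mac clients myself, but from the xpra documentation, attaching from one of those should look something like this (substitute your own ssh login and the display number from above):

xpra attach ssh:your-username@your-server:1234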

One other thing I noticed is that the apps xpra was showing me were a little fuzzy (the text in emacs was hard to read). I had to click on the xpra tray icon and change the desktop scaling option (it was making the windows larger for some reason). You can also edit the C:\Program Files (x86)\Xpra\xpra.conf file and change the desktop scaling option there (along with many other settings, for example, I turned off sound and microphone because I don't need that and I figured it might save some CPU and bandwidth).
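The exact option names vary between xpra versions, so treat these as approximate rather than definitive, but the relevant xpra.conf lines ended up looking roughly like this for me (these values are my choices, not the defaults):

desktop-scaling = off
speaker = off
microphone = off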

I'm glad I found xpra and got it working.   It works so well, I'm really surprised I haven't heard more people talking about it.  Go try it out!

Thursday, March 3, 2016

Best Part of Distributed Version Control


I switched jobs recently and I am now using git on a day-to-day basis.  My previous jobs used either subversion (boo) or mercurial (which I really liked).  Transitioning to git has been relatively easy.  I've created several aliases to do things I used to do in mercurial (well, as close as I can get for some of them) and to make certain common git operations one command instead of command --option --option argument [argument], and it's not too bad.  Once I learned how to "bring back" "lost" commits (aka move branch pointers around with git reset) I lost my fear of losing work.  I do still have some fear when I interact with our "central" git repo, because it's not always clear to me what exactly git push is going to do to the remote repo, but it's becoming clearer as I do it more and more.
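To make that concrete, here's roughly what I mean (the alias names are my own inventions, not anything standard):

# a couple of the aliases I set up
git config --global alias.lg "log --graph --oneline --decorate --all"
git config --global alias.amend "commit --amend --no-edit"

# "bringing back" a lost commit is just moving a branch pointer
git reflog                 # find the hash of the "lost" commit
git reset --hard <hash>    # point the current branch back at it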

In all my googling to learn how to do the things I want with git I came across "Unorthodocs: Abandon your DVCS and Return to Sanity."  I have to agree with some of what Benjamin says there.  For me, sane branching and merging was the number one reason I was first attracted to distributed version control, and Benjamin is right: good branching and merging could be provided by a centralized tool.  In fact, most people seem to be using decentralized tools just like they used their centralized tools in the past (see: github, gitlab, bitbucket, even hgweb).

I have found, however, that the longer I've used mercurial (and now git), the thing I love most about them is local commits.  I'm pretty sure that local commits are really the thing people want when they talk about needing good branching and merging.  99% of the time, people just want a way to commit their work but not inflict it on the rest of the team.  Then they would like to do some testing, commit and checkpoint their work some more, and repeat that until they are sure it's ready to share.  With old centralized tools the only way to do that is with branches and merges (it's actually the only way with DVCS tools too, but they have the ability to mostly hide that from you).

The longer we used mercurial at my last job, the less and less we used branches.  The workflow was basically: do some work, commit it, post the changes to Review Board for review, and then once you have tested and had your code reviewed, rebase it onto the main branch (after folding all the work-in-progress intermediate commits together) and push.  The history in our main repo was one straight line.  Easy to look at, and easy to find the changes in the history you cared about.

The more advanced workflow might have involved downloading a patch from Review Board and importing it as a local commit to test it out in your local clone, or sending a patch directly to someone else for them to import as a local commit in their local clone to test.  In either case you could then push that new commit (imported from the patch) or strip it if you didn't like it.  You could also make modifications, amend the commit with those modifications, etc., etc.

The cognitive load of that workflow was so small, and nothing you did in the experimental development stage could affect anyone else.  Your own work was safe, your co-workers' work was safe, yet you could share work with each other very easily too.  The commands you had to know were literally:

hg log # -G was sometimes nice
hg commit # maybe with --amend
hg incoming # to preview a pull
hg pull --rebase
rbt post # code review
hg outgoing # to preview a push
hg push

That's it!  Advanced commands were:

hg update # to jump to another revision
hg export > patch-name
hg import patch-name
hg strip

Notice the lack of HEAD^ and reset --hard and checkout -b --track.  Man, those were the days.  Despite the more obtuse commands, you can use that workflow with git too, and I'll probably learn how, because right now everything we do is create a branch (which includes inventing a name for it), push to the central server, pull (or should I fetch?) from the central server, and merge on top of merge on top of merge.  It's a lot more to think about and keep straight in your mind, even without git's complex and unintuitive commands.
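For my own reference, here's a rough git translation of that mercurial workflow.  This is my own mapping, and some of the equivalences are only approximate:

git log --graph --oneline      # hg log -G
git commit                     # maybe with --amend
git fetch && git log ..@{u}    # roughly hg incoming
git pull --rebase              # hg pull --rebase
git log @{u}..                 # roughly hg outgoing
git push

# and the "advanced" commands:
git checkout <rev>             # hg update
git format-patch -1 <rev>      # roughly hg export
git am patch-name              # roughly hg import
git reset --hard <rev>         # the closest thing to hg strip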

Having the ability to have those local commits, commits that are essentially in a draft state (not intended to be inflicted on the whole team), is the real killer feature of distributed version control tools.  Yes, you can have that draft state even in a centralized tool by committing to branches, but the amazing thing about DVCSs is you don't *need* to use an explicit branch.  You just commit, right on to trunk/master/default (whatever you call it), and it's local.  A draft.  A work-in-progress.  That's the default mode of operation.  And isn't that how it should be?  The default, no-effort, no-cognitive-load mode of operation should be: create a private, draft commit.  When you are ready to put that commit into production, then a little cognitive load is OK.

When you use git and the Very Branchy development model, you keep much of the cognitive load of centralized systems, where branches are how you maintain your work in progress.  The trick with DVCS tools is that you don't have to think about branches at all.  Just commit.  A simple pull --rebase is all it takes to integrate your changes with others', still privately, still preserving your original commit in case you need to go back.  Do the simplest thing that could possibly work.  I think I've heard that somewhere before.

Thursday, December 10, 2015

Linux Environment Management

Linux Environment Considerations

The Linux environments that ASIC and SoC (chip) design teams use are often messy and confusing. When team members work on multiple ASIC projects that each require different sets of tools, the problem is even worse. When engineers spend time fighting the environment, the development of our chips slows down little by little each day. This doesn't need to be the case. This post explains:
  • What a Linux Environment is
  • Why it's important, especially for ASIC projects
  • Techniques to configure and manage the environment
What is an Environment?
In Linux, every process runs with a set of environment variables available to it. This set of environment variables is often referred to simply as the environment. Here are some examples of how programs use the environment:

  • The command-line shell uses the PATH environment variable to find the programs you ask it to run
  • Programs use LD_LIBRARY_PATH to find compiled libraries that they rely on
  • ASIC design and verification tools use the LM_LICENSE_FILE environment variable to determine how to contact their required license servers.
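You can see your own environment with standard shell commands, for example:

env | sort    # dump the whole environment
echo $PATH    # inspect one variable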

For most Linux users the environment isn't much of a concern. When they log in it gets configured by shell initialization files for the common programs and libraries that they use, and they are good to go. We ASIC engineers are much more demanding of our environment. We generally use a wide array of software tools that are not included in our Linux distribution. We also keep multiple versions of each of those tools installed so we can try new versions out and revert back to using old versions when needed. Our environment needs to be configured for each of these tools and reconfigured when we want to switch which version of a tool we are using. Making matters worse, most of these ASIC tools require more than just the PATH and LM_LICENSE_FILE environment variables; they have a wide assortment of other variables they expect to be set in your environment for proper operation.
Managing The Environment
There are several ways to manage your Linux shell environment. Let's take a look at them.
Default Shell Initialization Files
This was mentioned in the introduction. With this technique you simply put all configuration in the shell initialization files (~/.profile, ~/.bashrc, ~/.cshrc, etc.).
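For example, a tool setup in ~/.bashrc might look something like this (the tool name, version, and paths are all invented for illustration):

# ~/.bashrc (hypothetical ASIC tool setup)
export PATH=/tools/acme-sim/2015.4/bin:$PATH
export LD_LIBRARY_PATH=/tools/acme-sim/2015.4/lib:$LD_LIBRARY_PATH
export LM_LICENSE_FILE=1717@license-server
export ACME_SIM_HOME=/tools/acme-sim/2015.4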

Benefits:
  • This is the standard way of managing your environment in linux
  • Simple, easy to understand for everyone
  • You can use standard shell commands to inspect your environment: env, echo $VARIABLE_NAME, etc.
Downsides:
  • Only supports one version of any given tool. To use a different version of a tool you have to edit your shell initialization files, then start a new shell for the changes to take effect.
  • Everyone has their own initialization files, which makes it hard to ensure everyone is using the same environment
Explicit Environment Files
Another approach is to have explicit environment configuration files. When you log in or start a new terminal session, a minimal environment is configured by the normal shell initialization files, and then you issue a command to configure that shell instance with the desired environment. The environment configuration files can be simple shell-syntax files (and the command as simple as . environment-init or source environment-init). Alternatively, there is an open source tool named Environment Modules that teams often use for this.
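A hypothetical environment configuration file for this approach could be as simple as this (names and paths invented), loaded with . projectA-env or source projectA-env:

# projectA-env (hypothetical, plain sh syntax)
export PROJECT=projectA
export PATH=/tools/acme-sim/2015.4/bin:$PATH
export LM_LICENSE_FILE=1717@license-server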

Benefits:
  • Environment configuration files can be centralized so there is one file that everyone uses
  • You can maintain multiple configuration files as needed: one per tool version, one per project, one per engineering role, etc.
  • Project leads and/or tool administrators can easily create and maintain the configuration files so individual engineers don't have to
  • You can use standard shell commands to inspect your environment: env, echo $VARIABLE_NAME, etc.
Downsides:
  • Once an environment configuration is loaded it's difficult to unload (you need to start a fresh shell to be sure) [1]
  • If you use Environment Modules for this, environment configuration files have to be written in Tcl [2]
Per-command Environment Files
Another approach is to prefix every command that needs a special environment with a command that spawns a subshell, sets up the necessary environment, and then runs the intended command. It looks sort of like this:
envA simulation-command
This way your interactive shell is never polluted with project- or tool-specific settings (just the subshell is) and you can easily switch to a different environment on a per-command basis:
envA simulation-command
cd <another-project-area>
envB synthesis-command
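The envA command itself can be a tiny wrapper script. Here is one plausible sketch (the environment file path is invented; your site's version may differ):

#!/bin/sh
# envA (hypothetical wrapper): configure projectA's environment in this
# subshell, then run the requested command
. /site/env/projectA-env
exec "$@"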
Benefits:
  • All the same benefits of Explicit Environment Files mentioned above
  • Easy to use different environment configurations, even on a per-command basis (you don't have to start a fresh shell for each new environment)
Downsides:
  • Sometimes hard to remember to prefix every command
  • If you don't often use different environments the prefix feels like unnecessary awkwardness
  • Inspecting the per-command shell environment is not as simple as typing echo $VARIABLE_NAME or env; you have to do something like envA sh -c 'echo $PATH' or envA env
Smart Environment Manager Tool
This is basically the same as using Explicit Environment Files above, but instead of a simple source command or Environment Modules, you can use a tool that has the ability to load an environment and to safely and completely undo (unload) an environment configuration when you want to switch from one environment to another. An open source tool that does this is named albion (full disclosure: I wrote albion). Using it looks like this:
albion env projectA
simulation-command
albion env projectB
synthesis-command
Benefits:
  • All the same benefits of Explicit Environment Files mentioned above
  • Easy to use different environment configurations and switch between them
  • Environment configuration files use sh syntax, not Tcl
Downsides:
  • You can't switch environments in a single command like Per-command Environment Files allows you to, but a future version of albion could support this
  • albion is still somewhat new and might need a little work or customization to fit your specific needs
Conclusion
A messy Linux environment can be confusing to engineers and can slow down a project. With some thought and the use of a good tool, the Linux environment can be tamed. A tame environment will make your engineers happier, and your project will go more smoothly and more quickly.

Footnotes:

1. Environment Modules claims it can cleanly undo (unload, in their terminology) an environment by simply inverting every command in the modulefile (e.g., setting a variable becomes unsetting the variable). If someone has removed or changed a command in the modulefile, or deleted the file altogether, in the time since you loaded it, this technique obviously does not work.

2. If this seems OK to you, consider that most ASIC tools provide you with an environment configuration file in csh or sh syntax that you will then have to translate into Tcl, for every version of each tool you install.

Tuesday, April 14, 2015

Mercurial Offers More Choice Than Git

I know, I know, this is the emacs vs. vi debate of our age, never to be settled and pointless to continue arguing.  I'm going to write just a little more about mercurial as compared to git (or git as compared to mercurial, whichever is less inflammatory to say) because today I realized something and I need to write about it.

I've been using mercurial daily at work for a good 5 years now, and not much git.  I recently had cause to use git a little more and I realized something.  The common belief is that git is Freedom and mercurial is tightly constrained Bondage.  I think that is mostly based on a very outdated understanding of how mercurial works and what you can do with it.  Today it has all the same commit and history editing functionality as git, with record, amend, graft, rebase, histedit, and so forth built in.  Mercurial is not missing flexibility and freedom enhancing functionality that git has, as far as I know.

The thing I realized is that today's mercurial actually has more freedom and flexibility than git.  Using git I felt like I was being constrained to work in the way Linus does.  For example, I really don't feel the need for the index, especially when you are able to amend commits or do all sorts of other editing of commits with rebase; yet there's really no way around it, with git you have to use the index.  With mercurial you can choose to have index-like functionality, or not.  With git you have one way to keep track of branches, using the reference-like things that git calls branches.  These come with all the complexity and confusion of remote-tracking branches and local branches and fast-forward merges (that are not really merges) and so forth.  With mercurial you have three choices for keeping track of branches (named branches, bookmarks, or plain anonymous heads), a couple of which are much simpler and easier to use than git branches.  I like having these options.
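To make the branching comparison concrete, here's a quick sketch of the options as I understand them (commands only; workflows vary):

# git: one model, with naming and tracking ceremony up front
git checkout -b feature        # invent a branch name before you start
git push -u origin feature     # plus remote-tracking setup to share it

# mercurial: pick your level of ceremony
hg commit                      # anonymous head, no branch name needed at all
hg bookmark feature            # a git-style movable pointer, if you want one
hg branch feature              # a permanent named branch, recorded in history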

I guess I can't think of anything else right now; both tools are very similar in functionality and both offer far, far more freedom and power than any other version control tool that I know of.  I'm sure git users will comment and tell us where git is more flexible than mercurial.