Friday, September 15, 2017

Not Leaky, Just Wrong

Intel recently announced new tools for FPGA design. I should probably try to understand OpenCL better before bagging on it, but when I read that "[OpenCL] allows users to abstract away hardware-specific development and use a higher-level software development flow," I cringe. I don't think that's how we get to a productive, higher level of abstraction in FPGA design. When you look at the progress of software from low-level detailed design to high-level abstract design, you see assembly to C to Java to Python (to pick one line of progression among many). The thing that happened every time a new higher-level language gained traction is that people recognized patterns developers were using over and over in one language and added features to a new language that made those patterns one-liners to implement.

Examples of design patterns turning into language features abound. In assembly, people developed the function-call pattern: push arguments onto the stack, save the program counter, jump to the code that implements the function; the function code pops arguments off the stack, does its thing, then jumps back to the code that called it. C abstracted away all that tedium by giving you syntax to define a function, pass it arguments, and just call return at the end. In C, people then started building structs containing data and function pointers for operating on that data, which became classes and objects in Java. Java also abstracted away memory management with a garbage collector. Patterns in Java (Visitor, State, etc.) are no longer needed in Python because of features in that language (related discussion here).
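To make that concrete, here's a small Python sketch of a Java-era pattern, Strategy (choosing a behavior at runtime), collapsing into a plain function argument. The names are made up for illustration:

```python
# In Java, choosing a sort order at runtime once meant a Strategy
# interface, concrete strategy classes, and a context object.
# In Python a function is already a value you can pass around,
# so the whole pattern is just an argument.

def by_length(word):
    return len(word)

words = ["pear", "fig", "banana"]
shortest_first = sorted(words, key=by_length)  # the "strategy" is the key argument
```

The same swap works for Command, Template Method, and friends: anywhere a pattern existed to smuggle behavior into an object, a first-class function does the smuggling natively.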

This is the path that makes the most sense to me for logic design as well. Right now in RTL Verilog people use patterns like registers (an always block triggered on posedge clk, with reset, inputs, outputs, etc.), state machines (a case statement, state registers, next_state logic...), interfaces (SystemVerilog actually attempted to add syntax for these), and so on. The next step in raising the abstraction level seems to be a language with those sorts of constructs built in. Then let people use that for a while, see what new patterns develop, and encapsulate those patterns in new language features. Maybe OpenCL does this? I kind of doubt it if it's a "software development flow." It's probably still abstracting away CPU instructions.
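As a rough illustration of how much boilerplate the state-machine pattern carries, here's a Python model of the Verilog idiom: an explicit state register plus case-statement-style next_state logic. The state names and signals are made up; the point is that a language with state machines built in could reduce all of this to a declaration:

```python
# A Python model of the RTL state-machine idiom: a combinational
# next_state function (the case statement) plus a clocked update
# with reset (the always @(posedge clk) block).

IDLE, BUSY, DONE = "IDLE", "BUSY", "DONE"

def next_state(state, start, finished):
    # Mirrors the combinational next_state case statement.
    if state == IDLE:
        return BUSY if start else IDLE
    if state == BUSY:
        return DONE if finished else BUSY
    if state == DONE:
        return IDLE
    return IDLE  # default arm, like a Verilog default case

def clock(state, start, finished, reset=False):
    # Mirrors the registered update: reset wins, otherwise latch next_state.
    return IDLE if reset else next_state(state, start, finished)
```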

Wednesday, May 24, 2017

Facebook Should Split In Two

Facebook has done wonders to get people creating and consuming content on the internet. However, Facebook has grown to the point where it has no real competition and is no longer innovating in ways that benefit us. Facebook should split into Facebook the aggregator and Facebook the content hoster. You could talk about a third piece, Facebook the content provider, which supplies things like gifs, templates, memes, emoji, games, and other stuff like that. Because Facebook hasn't completely broken from open web standards, those types of content providers already exist today.

Aggregators would be where you go to set up your friend list and see your feed. They could look and feel like Facebook does now. They would use an open standard protocol that content hosters would implement if they wanted to be aggregated. This could still be an ad-driven business, but subscription, self-hosted, and DIY solutions could exist too.

Content hosters could either charge a monthly hosting fee or serve up their own ads. Self-hosted and DIY solutions could exist here too.

The big benefit to this would of course be the competition. Since it's an open standard, anyone could be a content hoster and anyone could be an aggregator.

To make extra sure there is competition (and this could come in a phase two, after the initial splitting up of Facebook), there should be open standards for exporting and importing friends, follows, likes, etc. to and from aggregators, and open standards for importing and exporting content from the hosters.

Speaking of follows and likes, there could also be aggregator aggregators (AAs). People could opt in to publicly and anonymously share their likes and follows, and the AAs would consume those and report on trends that cross aggregator boundaries. Anonymity could be much better protected this way while still giving us interesting information about what is trending.

One tricky part of this: how do I, as a content author, allow only my friends to see certain posts of mine? It would have to be with encryption. My content hoster could keep my friends' public keys and encrypt each post to those keys, so that only my friends (well, their aggregators) could decrypt it using their private keys. I can see some challenges and holes in this, but it doesn't seem any worse overall than how Facebook protects privacy now. Open implementations and peer review could get us to better-than-Facebook privacy quickly.
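A minimal sketch of how that could work, using envelope encryption: one symmetric content key encrypts the post, and that key is wrapped once per friend. The cipher and keypair functions below are toy stand-ins, not real cryptography; a real system would use something like X25519 or RSA key wrapping plus an authenticated cipher:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR data against a SHA-256-derived keystream.
    # Stand-in only; a real system would use an authenticated cipher.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

def make_keypair():
    # Toy stand-in for an asymmetric keypair (both halves identical here).
    # A real system would generate X25519 or RSA keys.
    k = secrets.token_bytes(32)
    return k, k  # (public, private)

def encrypt_post(post: bytes, friend_public_keys: dict):
    # Envelope encryption: one random content key encrypts the post,
    # then the content key is wrapped once per friend.
    content_key = secrets.token_bytes(32)
    wrapped_keys = {name: keystream_xor(pub, content_key)
                    for name, pub in friend_public_keys.items()}
    return keystream_xor(content_key, post), wrapped_keys

def decrypt_post(ciphertext: bytes, wrapped_keys: dict,
                 name: str, private_key: bytes) -> bytes:
    # A friend unwraps the content key with their private key,
    # then decrypts the post itself.
    content_key = keystream_xor(private_key, wrapped_keys[name])
    return keystream_xor(content_key, ciphertext)
```

The nice property is that the post is encrypted only once no matter how many friends can read it; adding or removing a reader only touches the per-friend wrapped keys.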

Facebook would ideally recognize their stagnation and initiate this split themselves. We as their user base can and should help them understand the importance of this. Hopefully it doesn't have to come down to government enforcement of antitrust laws, but that could be a useful tool to apply here as well.

Monday, March 13, 2017

Quick Thoughts on Creating Coding Standards


No team says, "write your code however the heck you want." Unless you are coding alone, it generally helps to have an agreed-upon coding standard. Agreeing upon a coding standard, however, can be a painful process full of heated arguments and hurt feelings. This morning I thought it might be useful to categorize coding standard items before starting the arguments. My hope is that with categories in hand we can use better decision criteria for each category and cut down on the arguing. Below are the categories I came up with, along with descriptions, examples, and decision criteria for each. Feedback is welcome in the comments.

Categories of Things in Coding Standards

Language Specific Pitfalls


Characteristics:

  • not subjective; an easy-to-recognize pattern
  • well recognized in the industry as dangerous
  • people have war stories about these, with associated scars to prove it


Examples:

  • no multiple declarations on one line in C
  • Cliff Cummings' rules for blocking vs. non-blocking assignments in Verilog
  • no willy-nilly gotos in C
  • no omitting braces for one-liner blocks (or begin-end in Verilog)
  • no compiler warnings allowed

How to resolve disputes on which of these should be in The Coding Standard?

Defer to the engineers with the best war stories. If nobody has a war story for one, you can probably omit it (or can you?).

General Readability/Maintainability

"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." –Martin Fowler


Characteristics:

  • things that help humans quickly read, understand, and safely modify code
  • usually not language specific
  • the path from these items to bugs is not as clear as with the pitfalls above, but a path does exist


Examples:

  • no magic numbers
  • no single-letter variable names
  • keep functions short
  • indicators in names (_t for typedefs, p for pointers, etc.)
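As a tiny example of the first item, here's a magic number replaced by a named constant (the names are made up):

```python
# "No magic numbers" in practice: a named constant tells the reader
# what the value means and gives you one place to change it.

SECONDS_PER_DAY = 24 * 60 * 60   # instead of a bare 86400 scattered around

def days_to_seconds(days):
    return days * SECONDS_PER_DAY
```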

How to resolve disputes on which of these should be in The Coding Standard?

If someone says, "this really helps me" then the team should suck it up and do it. This is essentially the "put the slowest hiker at the front of the group" principle.

Alternatively these can be discussed on a case by case basis during code reviews instead of being codified in The Coding Standard. Be prepared for more "lively" code reviews if you go this route.

Code Formatting

The biggest wars often erupt over these because they are so subjective. This doesn't have to be the case.


Characteristics:

  • these probably aren't preventing any bugs
  • most can easily be automatically corrected
  • largely a matter of taste
  • only important for consistency (which is important!)


Examples:

  • amount of indent
  • brace style
  • camelCase vs. underscore_names
  • the 80-column rule
  • dare I even mention it? tabs vs. spaces

How to resolve disputes on which of these should be in The Coding Standard?

Don't spend a long time arguing about these. Because they are so subjective and not likely to cause or reduce bugs one way or the other, nobody should get bent out of shape if their preference is not chosen by the team. Give everyone two minutes to make their case for their favorite, have a vote, majority wins, end of discussion. Use an existing tool (astyle, autopep8, an emacs mode, whatever is available for the language) to help people follow these rules.

Tuesday, February 7, 2017

SystemVerilog and Python

Design patterns arise when engineers find themselves writing the same code over and over to solve the same problems. Design patterns for statically typed object-oriented languages (C++ and Java) were cataloged and enshrined in the famous book "Design Patterns: Elements of Reusable Object-Oriented Software" by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. The authors are lovingly called the Gang of Four, or the GOF, and the book is often referred to as the GOF book.

The subset of SystemVerilog used in writing testbenches is a statically typed object-oriented language (it's most similar to Java). As people started using SystemVerilog to write testbenches, frameworks for writing them quickly became popular. These frameworks all provide code that implements design patterns from the GOF book, and the various frameworks were similar because they were all essentially implementing the same patterns. Eventually they coalesced into one: the humbly named Universal Verification Methodology, or UVM.

Below is a table matching GOF design patterns with their UVM implementations, adapted from this presentation:

GOF pattern → UVM implementation
Factory Method, Abstract Factory → uvm_factory, inheriting from UVM base classes
Singleton → UVM Pool, UVM Global report server, etc.
Composite → UVM Component Hierarchy, UVM Sequence Hierarchy
Facade → TLM Ports, UVM scoreboards
Adapter → UVM Reg Adapter
Bridge → UVM sequencer, UVM driver
Observer → UVM Subscriber, UVM Monitor, UVM Coverage
Template Method → UVM Transaction (do_copy, do_compare), UVM Phase
Command → UVM Sequence Item
Strategy → UVM Sequence, UVM Driver
Mediator → UVM Virtual Sequencer

If we switched from SystemVerilog to Python for writing our testbenches, would we need to re-implement each of those parts of the UVM? Python is not a statically typed object-oriented language like Java and SystemVerilog; it is a dynamically typed language. Prominent and well-respected computer scientist Peter Norvig explored this topic for us already. He did this when Python was still a very young language, so he examined other dynamic languages instead (Dylan and Lisp), and he concluded that of the 23 design patterns in the GOF book, 16 are either invisible or greatly simplified by the nature of dynamic languages and their built-in features. As an example of how this could be, he points out that defining a function and calling it used to be design patterns. Higher-level languages came along and made defining and calling a function part of the language.

This is essentially what has happened with dynamic languages. Many design patterns from GOF are now simply part of the language. According to Dr. Norvig, the patterns that dynamic languages obsolete are:

  • Abstract-Factory
  • Flyweight
  • Factory-Method
  • State
  • Proxy
  • Chain-Of-Responsibility
  • Command
  • Strategy
  • Template-Method
  • Visitor
  • Interpreter
  • Iterator
  • Mediator
  • Observer
  • Builder
  • Facade

That reduces the above table to:

GOF pattern → UVM implementation
Singleton → UVM Pool, UVM Global report server, etc.
Composite → UVM Component Hierarchy, UVM Sequence Hierarchy
Adapter → UVM Reg Adapter
Bridge → UVM sequencer, UVM driver

Trusting that analysis, if we were to write a pure Python testbench we would still likely implement a few design patterns. Thinking about it, it makes sense that we'd still have classes dedicated to transforming high-level sequence items into pin wiggles, just as the sequencer and driver work together to do in the UVM. It also makes sense that we'd have a class hierarchy to organize and relate components (such as the sequencer and driver equivalents) and sequences (high-level stimulus generation). Things like that.

The more striking thing is the amount of code we would not need.
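To make Norvig's point concrete, here's a sketch of what the Observer pattern, the role the UVM monitor/subscriber machinery plays, shrinks to in Python. The class and field names here are made up:

```python
# Observer in Python can be just a list of callables: no subject
# interface, no observer interface, no subscriber base class.

class Monitor:
    def __init__(self):
        self.subscribers = []            # anyone interested registers a callable

    def publish(self, transaction):
        for write in self.subscribers:   # broadcast to every subscriber
            write(transaction)

coverage_log = []
mon = Monitor()
mon.subscribers.append(coverage_log.append)  # a stand-in coverage collector
mon.publish({"addr": 4, "data": 7})
```

Compare that to the uvm_analysis_port, uvm_subscriber, and uvm_monitor classes needed to express the same idea in SystemVerilog.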

Saturday, December 31, 2016

Adventures in Arch Linux

I just installed Arch Linux on my 4th machine. It has been fun and painful. Painfully fun? I have learned a lot and that is always fun. There.

I have loved using Ubuntu over the last several (eight, I think) years. Ubuntu is easy to install and most things Just Work. It's easy to find packages for most software I want to run, and there is lots of help on the internet for accomplishing whatever you want to accomplish with Ubuntu. My frustration has been that even though you can find instructions for getting whatever software you want to run, it's not always a simple apt-get install. Sometimes it's configuring a PPA source and sometimes it's compiling from source. Sometimes a PPA works and serves you well for years and then suddenly disappears. Another frustration is out-of-date packages, and full system upgrades in general. Keeping up with the latest emacs was a chore. Going from one release of Ubuntu to another works surprisingly well, but it's a big enough chore that I keep putting it off. One of my desktop machines at home was still running 12.04 up until yesterday. That release is almost five years old now!

These concerns led me to Arch, and it addresses them beautifully. Every package I have wanted is either in the main repositories, where I can install it with a simple pacman command, or in the AUR, where I can install it with a simple yaourt command. There are no releases of Arch; the packages in the repositories are just continually updated. Staying up to date is always the same pacman command to upgrade your installed packages. There are times when you have to take some manual steps to fix an interaction between two packages, or to switch from a package that has been made obsolete by a newer one, but that's fairly rare, well documented, and you deal with it a little at a time as the situations come up. With Ubuntu dist-upgrades you had to deal with many of those scenarios all at once, every six months if you were keeping fully up to date. With Arch, keeping up with the latest emacs happens without me even realizing it.

Where Arch is not as nice as Ubuntu is installation. With Arch it's all manual. What you should do is pretty well documented, but you have to type all the commands yourself and make decisions about alternative ways of doing various things. It's a fun learning experience, as I mentioned at the beginning of this post, but not a process I really enjoyed repeating over and over. This is really where Ubuntu made its name. The nice package system and repositories came straight from Debian originally, but the installer and default configurations are what made Ubuntu so accessible. There was a joke in the early Ubuntu days that Ubuntu was an African word meaning "can't install Debian."

It turns out that there's a distribution with a name meaning "can't install Arch." It's Antergos. It really is just a graphical installer on top of Arch. Once it's done you are running Arch with some Antergos-chosen configuration, which is exactly what I wanted. It does feel like it's still early days for this project. On one laptop I tried Antergos on, it didn't have the wifi drivers I needed; I had to go back to plain Arch and figure out how to load the driver by hand in order to complete the installation (that should be a blog post of its own). On another machine, once the Antergos install was done the display manager would crash with a WebKitWebProcess coredump. The Antergos forums told me how to switch to lxdm and that fixed my problem (probably another blog post). I don't think a Linux beginner would have enjoyed that process, but overall Antergos looks promising. Mostly I'm looking forward to never needing to do a fresh install on any of those machines ever again.