tag:blogger.com,1999:blog-36698097521726830972024-03-12T21:13:45.873-07:00Cyclopedia SquareBryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.comBlogger206125tag:blogger.com,1999:blog-3669809752172683097.post-9267855687752955632023-02-12T05:42:00.000-08:002023-02-12T05:42:18.316-08:00My New Favorite Phone Game: termux<p> A lot of people, when bored, pull out their phone and play games. Games just don't do it for me for some reason. I mean, I played chess for a while, and recently I tried Shoot Bubble and 5 Dice (generic Yahtzee), but for some reason I always forget I even have them available and I end up just scrolling Twitter :-/ My feed is pretty good, if someone gets overly political or negative (usually it's a combination of the two) I block them, but stuff still leaks in. Usually if Twitter is holding my attention for a long time it's because it's making me angry. I don't like that. I think I've found a better solution for when I'm bored and pull out my phone: <a href="https://termux.dev/en/">termux</a>.</p><p>Termux is a linux command-line environment for Android. It's kinda like wsl (or cygwin if you are old like me) or Terminal.app, but for your phone. It's almost linux but not quite. Normally I find that very frustrating, especially if I'm trying to get real work done (you may have seen my past blog posts <a href="http://bryan-murdock.blogspot.com/2009/07/if-you-have-to-run-windows.html">here</a> or <a href="http://bryan-murdock.blogspot.com/2010/08/mac-nfs-client-linux-nfs-server.html">here</a> or <a href="http://bryan-murdock.blogspot.com/2009/05/early-thoughts-on-developing-for.html">here</a> or <a href="http://bryan-murdock.blogspot.com/2011/12/make-terminalapp-useful-for-emacs.html">here</a>, for example), but I'm actually enjoying just playing with termux and seeing what it can do. Apparently Frustration Games is a popular genre now (I'm old enough to remember when they were all Frustration Games), and I'm liking this one!</p><p>To be completely honest, I've actually been surprised at how not frustrating it is. I'd say it's better than trying to get a linux-like environment on Windows or Mac, except for the minor detail that it's on a phone with a tiny screen and no physical keyboard. I've actually been able to install emacs and python and write a <a href="https://github.com/krupan/vpiler">very (very!) basic little SystemVerilog parser</a>, and test it! And use git to version it!
I'm kind of amazed.<br /></p><p>Some things I've learned about termux:</p><ul style="text-align: left;"><li>The app store version is old and broken, get <a href="https://github.com/termux/termux-app/releases">the apk from github</a> and install that</li><li>You can use apt to install packages, but the built-in termux pkg command is more convenient (you can type <code>pkg in</code> instead of <code>apt install</code>, which makes a difference using your phone keyboard)</li><li>It has built-in scripts that do nice things:</li><ul><li>termux-setup-storage to make it easy to access your files</li><li>termux-change-repo to easily pick a package repo or group of mirrors</li><li>termux-backup and termux-restore to backup and restore your environment</li><li>probably others I wish I knew</li></ul><li>You can install dropbear for ssh because you are thinking this is a phone and slim simple dropbear would be efficient and cool, but dropbear doesn't support a <code>~/.ssh/config</code> file</li><li>When you do <code>pkg remove dropbear</code> it removes dropbear and conveniently installs openssh in its place</li><li>I use gnu screen (not tmux. Again, I'm old) and I've changed the default <code>ctrl-a</code> to <code>ctrl-j</code> instead, but for some reason <code>ctrl-j j</code> is treated like <code>ctrl-j ctrl-j</code> on termux</li><li>Files are owned by root and I can't change that</li><li>I can't make a script executable, it seems. Hmm, I wonder about a compiled binary?</li><li>The filesystem is pretty slow, something like <code>git status</code> takes a loooong time</li><li>Termux hard-codes the DNS server to be 8.8.8.8, it doesn't use whatever the rest of Android is using</li></ul><p>I'm sure there's more that I'll come across, but I figured this was enough to fill out a blog post at this point. If you are looking for a new activity for your phone, give termux a try.<br /></p>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-89490070801968854202022-12-09T12:20:00.004-08:002023-07-20T16:59:13.532-07:00Git Rebase Explained<div class="content" id="content" style="margin: auto; max-width: 60em;"><h1 class="title" style="margin-bottom: 0.2em; text-align: center;"><span style="text-align: left;">Introduction</span></h1><div class="outline-2" id="outline-container-org8934fdd"><div class="outline-text-2" id="text-org8934fdd"><p>Rebase is a helpful git command that can be used to turn a branchy two-headed version history into an easy to follow linear one. A linear history is much easier for a human eye to follow and a human brain to understand. Rebase is a very safe command to use, despite some fear that people have about it.</p><p>This post should explain how rebase works and clear up any misunderstandings about it.</p></div></div><div class="outline-2" id="outline-container-org5e96b5b"><h2 id="org5e96b5b">Rebase Basics</h2><div class="outline-text-2" id="text-org5e96b5b"><p>The short summary of rebase is that it cuts commits out of the git history and reconnects them somewhere else.</p><p>Let's say the common scenario comes up: you commit a change to the main branch of a repository, you then run <code>git fetch</code>, and in come a couple changes that other people made on the main branch. 
If your commit is version 46f and the others that just came in on the fetch are versions ef9 and 4a2, your version history might look like this:</p><div class="figure" id="orgc2f3eab" style="padding: 1em;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUS5-JarhRaJAS9-3BqL33l-I-zbcIngKHkBK36wWwfvPjTr0GkKobmIRhLgGKFqTiGHXuocnS0q2qXbzoE8ynyS_vCU7HXm6K5qerwzcklZyfODE_1rQ25ipYS0S4JYNltXYYxEco90zIFj0EnVSmtlAgcoXrx5NHxjfPkMOYZCLmFnissIsq2wJm7Q/s563/diagram-before-rebase.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="131" data-original-width="563" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUS5-JarhRaJAS9-3BqL33l-I-zbcIngKHkBK36wWwfvPjTr0GkKobmIRhLgGKFqTiGHXuocnS0q2qXbzoE8ynyS_vCU7HXm6K5qerwzcklZyfODE_1rQ25ipYS0S4JYNltXYYxEco90zIFj0EnVSmtlAgcoXrx5NHxjfPkMOYZCLmFnissIsq2wJm7Q/s16000/diagram-before-rebase.png" /></a></div><br /></div><p>The main branch now has two heads, 46f and 4a2. You can run <code>git rebase</code> to disconnect your commit from version d88 and connect it to 4a2. In one simple command the above version history morphs into this nice linear version history:</p><div class="figure" id="orgdbb48ba" style="padding: 1em;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZJQrssykZ5GFRtRqP2n8V1f5n9fENY0UQtVVqtByDtfwGaYBqF9cQSoOHWM33yCKLG-fVQ3-DNK2m7A4NvMsgzAho_zPk8i8npZzzdQLy3O95Fa6H7KR-cObG7gmkua_8y2SP1TyjrVmlbw94lfhrujFLH5sbwTRh_SkioquM2gaxPUZvz3YeBD_YYQ/s672/diagram-after-rebase.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="58" data-original-width="672" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZJQrssykZ5GFRtRqP2n8V1f5n9fENY0UQtVVqtByDtfwGaYBqF9cQSoOHWM33yCKLG-fVQ3-DNK2m7A4NvMsgzAho_zPk8i8npZzzdQLy3O95Fa6H7KR-cObG7gmkua_8y2SP1TyjrVmlbw94lfhrujFLH5sbwTRh_SkioquM2gaxPUZvz3YeBD_YYQ/s16000/diagram-after-rebase.png" /></a></div><br /></div><p>It's called rebase because 46f was based on d88 and now it is based on 4a2. Simple. Multiple head problem solved. That's all you need to know. If you really want to know more then read on.</p></div></div><div class="outline-2" id="outline-container-org8b96ba7"><h2 id="org8b96ba7">Automating rebase</h2><div class="outline-text-2" id="text-org8b96ba7"><p>Most git users have been trained to run <code>git pull</code> instead of <code>git fetch</code>. That's because <code>git pull</code> does a git fetch and then usually follows that by a git merge if needed. You can tell <code>git pull</code> to do a rebase instead of a merge like so:</p><div class="org-src-container"><pre class="src src-sh" style="background-color: #f2f2f2; border-radius: 3px; border: 1px solid rgb(230, 230, 230); margin: 1.2em; overflow: auto; padding: 8pt; position: relative;">git pull --rebase
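# roughly equivalent to this two-step form (an illustration; substitute
# your own remote and branch names):
#   git fetch origin
#   git rebase origin/main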
</pre></div><p>Or, you can configure git to always use rebase for <code>git pull</code> by running this command, which is highly recommended:</p><div class="org-src-container"><pre class="src src-sh" style="background-color: #f2f2f2; border-radius: 3px; border: 1px solid rgb(230, 230, 230); margin: 1.2em; overflow: auto; padding: 8pt; position: relative;">git config pull.rebase true
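# the above applies to the current repository only; a couple of variations
# (shown for illustration):
#   git config --global pull.rebase true   # set it for every repo you use
#   git config --get pull.rebase           # check the current value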
</pre></div><p>Then you don't need to add <code>--rebase</code> to your <code>git pull</code> command.</p></div></div><div class="outline-2" id="outline-container-org55b36eb"><h2 id="org55b36eb">More Detail</h2><div class="outline-text-2" id="text-org55b36eb"><p>You might be wondering <b>how</b> the rebase command does what it does. You might also have other questions like:</p><ul class="org-ul"><li>Does rebase change your commit?</li><li>Does it change the other commits (ef9 and 4a2, in our example)?</li><li>Can anything go wrong?</li><li>If so, what if something does go wrong?</li></ul><p>If so, read on.</p></div><div class="outline-3" id="outline-container-org40bbb4c"><h3 id="org40bbb4c">How It Works</h3><div class="outline-text-3" id="text-org40bbb4c"><p>I'm sure there are some gory details of how it works that I'm leaving out, but essentially rebase just does a merge and then deletes part of the history of the merge so that you get a simple linear history. If we start again with this:</p><div class="figure" id="orgbdc9e60" style="padding: 1em;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAmuMqvgijwww0Fx32SaNzTB3LsNEjbCOkhBjtj0-o8Rq2yeCZtUKHEagnm365Cz4W8rHyD1AyZ0AnB3tVFujOXfREOpp9Wb5tFn2Jo6bNnI-9plS8bZFv-zaaue20KibCqlakueJMi3wOfH-NjT_UMWvDBWLpH3o-e_JtwuLlk2iZn1HGoVHTFgV-Iw/s563/diagram-before-rebase.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="131" data-original-width="563" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAmuMqvgijwww0Fx32SaNzTB3LsNEjbCOkhBjtj0-o8Rq2yeCZtUKHEagnm365Cz4W8rHyD1AyZ0AnB3tVFujOXfREOpp9Wb5tFn2Jo6bNnI-9plS8bZFv-zaaue20KibCqlakueJMi3wOfH-NjT_UMWvDBWLpH3o-e_JtwuLlk2iZn1HGoVHTFgV-Iw/s16000/diagram-before-rebase.png" /></a></div><br /></div><p>And run rebase, it first does a merge to get this:</p><div class="figure" id="org96b4d29" style="padding: 1em;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEvO5kIlaOougsZ4Yz1PGDsIGrQSN3Nb0e4B0CJbVl4qLrWpav_hbEPdJ2S3mg7sXC712tpN7Qz40esynxawMGvnQE8J65qh4omJpuxYlIQ6X4DCvNy3imz1tsYcnDZu81TZY7IWufARh6BJSb8yzXhwidyiV1iMcr3GNI_2BZiZ5g-C8hX_L2pz_W1Q/s672/diagram-after-merge.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="129" data-original-width="672" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEvO5kIlaOougsZ4Yz1PGDsIGrQSN3Nb0e4B0CJbVl4qLrWpav_hbEPdJ2S3mg7sXC712tpN7Qz40esynxawMGvnQE8J65qh4omJpuxYlIQ6X4DCvNy3imz1tsYcnDZu81TZY7IWufARh6BJSb8yzXhwidyiV1iMcr3GNI_2BZiZ5g-C8hX_L2pz_W1Q/s16000/diagram-after-merge.png" /></a></div><br /></div><p>Then it removes the original 46f and any pointers to it. 
That's how you end up with this:</p><div class="figure" id="org5b3106d" style="padding: 1em;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhe9mvI4-oqJssHg7APJ6fQlYUEcalxmDDArhgcGeYNrScfPIH2PseNYZOvuW3UNDlU8TAkbAIFDs2ftDi4xy46tt5H7zEcL-b-5c5-3taNA1-hwoYSrljTqt4LgVmPgoahbo5cIeeQDRWy1cYFf3niC4w1x08bmnlJ77vQKnaIWCSEqk_z_WXE5bS5gQ/s672/diagram-after-rebase.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="58" data-original-width="672" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhe9mvI4-oqJssHg7APJ6fQlYUEcalxmDDArhgcGeYNrScfPIH2PseNYZOvuW3UNDlU8TAkbAIFDs2ftDi4xy46tt5H7zEcL-b-5c5-3taNA1-hwoYSrljTqt4LgVmPgoahbo5cIeeQDRWy1cYFf3niC4w1x08bmnlJ77vQKnaIWCSEqk_z_WXE5bS5gQ/s16000/diagram-after-rebase.png" /></a></div><br /></div><p>Knowing that, you can see that rebase does in fact change your original commit. If nothing else, your commit gets a new parent pointer. If 46f changed some of the same files that ef9 or 4a2 changed then those changes will have to be merged and there can even be merge conflicts. If that happens you can resolve the merge conflicts however you are used to doing that (I prefer kdiff3). Most rebases (like most merges) don't have any conflicts and it's very nearly a no-op. Just the parent pointer changes.</p><p>One thing to note is that the parent pointer change is still significant and that's why there is an asterisk on the new 46f commit. The commit ID will actually not stay the same after a rebase. Git computes the commit id based on the diff, your comments, and the parent(s) of the commit. The rebase command changed the parent of 46f so it will calculate a new commit ID. I left it the same in the diagram so it'd be easy to follow where the commit actually went.</p><p>Also note that only the commit(s) being rebased get changed. In our example, ef9 and 4a2 are the destination of the rebase but they aren't touched themselves at all.</p></div></div><div class="outline-3" id="outline-container-org3e28670"><h3 id="org3e28670">If Things Go Wrong</h3><div class="outline-text-3" id="text-org3e28670"><p>Just like with some merges, some rebases can result in a multitude of merge conflicts or broken compiles or failing tests due to subtle code bugs. Sometimes when that happens you just want to get back to where you were before the merge or rebase. If a rebase goes horribly wrong and you want to abort, your old commits are all still there, they are just hidden by git. If you haven't finished resolving conflicts you can simply type <code>git rebase --abort</code> and everything will be restored. If you have completed the rebase but you remember the commit id of your original commit, you can run <code>git checkout <commit-id></code> to get git to show you the original commit. If you don't remember the commit id, you can use the <code>git reflog</code> command to find the commit id and use git checkout to make git show it to you again.</p></div></div></div><div class="outline-2" id="outline-container-org6c57aec"><h2 id="org6c57aec">Conclusion</h2><div class="outline-text-2" id="text-org6c57aec"><p>Rebase is a very useful and safe git tool that keeps our version history clean and easy to understand. 
You can automatically use rebase for every <code>git pull</code> that you do by configuring git with this command:</p><div class="org-src-container"><pre class="src src-sh" style="background-color: #f2f2f2; border-radius: 3px; border: 1px solid rgb(230, 230, 230); margin: 1.2em; overflow: auto; padding: 8pt; position: relative;">git config pull.rebase true</pre></div></div></div></div><div class="status" id="postamble"></div>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-41535620633016954242022-06-21T10:16:00.005-07:002022-06-21T10:16:57.308-07:00My 2013 DVCon Paper and Poster<p>I just noticed that my 2013 DVCon paper and poster are no longer archived on the DVCon website. So, for my records at least, here they are:</p><p>The poster: <a href="https://drive.google.com/file/d/1T_NL_ijdvCzdOhKN_BkW6NgTM6GxxEQI/view?usp=sharing" target="_blank">Poster: ASIC-Strength Verification in a Fast-Moving FPGA World</a></p><p>The paper: <a href="https://drive.google.com/file/d/1hXcdx1XSEDcFdRSzgtDv3yZhX1uNxJW1/view?usp=sharing" target="_blank">ASIC-Strength Verification in a Fast-Moving FPGA World</a></p><p>Apologies for the PDF of the paper, but converting it to html requires more time than I have at the moment.</p>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-73012533968813546642022-04-21T12:02:00.003-07:002022-04-21T12:02:29.132-07:00Zmodem File Transfer With GNU Screen<p>I've been using <a href="https://bryan-murdock.blogspot.com/2008/02/screen-as-serial-terminal.html">screen as my serial terminal</a> (as opposed to minicom or picocom) for a long time now. Today I had the need to transfer a file from my PC to the embedded device I was connected to with screen. Networking was not working, and a co-worker reminded me that zmodem was a way to transfer a file over the serial port. I googled for instructions on exactly how to use zmodem and I found stack overflow answers and blog entries all mentioning GNU screen for doing so, but none of them really explained it right. Here's what worked for me today.</p><p>To transfer a file from my PC to the embedded device:</p><ol style="text-align: left;"><li>in screen hit ctrl-a : zmodem catch</li><li>in screen, at the linux command-line on your device type: rz</li><li>screen will prompt with an sz command, just append the filename to that command and hit enter<br /></li></ol><p>Note that this works best if the file is in the same directory as you ran screen from.</p><p> To send a file from the embedded device to your PC:</p><ol style="text-align: left;"><li>in screen hit ctrl-a : zmodem catch</li><li>in screen, at the linux command-line on your device type: sz <filename></li><li>screen will prompt with an rz command, just hit enter <br /></li></ol><p>You only need to run the zmodem catch command once in your screen session. In fact, you can just put this at the end of your .screenrc and then you will never have to do step 1 at all:</p><p>zmodem catch <br /></p>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-48025026797343100782022-04-16T07:20:00.001-07:002022-04-16T07:20:40.419-07:00Giving Bitcoin as a Gift<p> If you've been following along, you know <a href="https://bryan-murdock.blogspot.com/2021/12/giving-bitcoin-as-gift-initial-thoughts.html">I was considering giving bitcoin as a Christmas gift</a>. 
I did end up giving bitcoin to friends and family for Christmas. I'm pretty happy with how it turned out. I went with the HD Paper Wallet solution, and so I <a href="https://github.com/krupan/bitcoin_gifts">wrote some python code to generate paper wallets</a>. The code is open source, and instructions are in the README there on github. You'll probably want to tweak the template to customize it for you instead of me. Hopefully someday I can modify the code to make that easy, but I'm pretty new at generating images and PDFs. The code I write for my day job doesn't involve pretty pictures, or words even :-)<br /></p>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-57914459452762000252021-12-03T22:23:00.004-08:002021-12-03T22:23:50.374-08:00Giving Bitcoin as a Gift, Initial Thoughts<div><p> 'Tis the season and I'm thinking about how to give bitcoin as a gift, to non-technical people, without requiring them to do anything like create an account or download software in order to accept it.</p><h3 style="text-align: left;">Paper Wallet <br /></h3><p>Bitcoin paper wallets have been around a long time. The concept is simple. Generate a send/receive address pair (private/public key pair) and print them on a piece of paper. Basic operations:</p><ul style="text-align: left;"><li>To add bitcoin to the wallet, send some bitcoin to the receive address</li><li>To check your balance, type or scan the public key into any blockchain explorer </li><li>To gift that bitcoin, just hand over the piece of paper</li><li>To send the bitcoin on the blockchain, type or scan the private key into the bitcoin wallet software of your choice</li></ul><p>Pros:</p><ul style="text-align: left;"><li>simple, no fancy hardware or software required for them to receive the bitcoin <br /></li></ul>Cons:<ul style="text-align: left;"><li>No guarantee that the giver didn't keep a copy of the private key</li><li>Private key could be easy for someone else to see/copy</li><li>Private key could be easily lost <br /></li><li>If you want to add more funds, you have to reuse the same receive address, which is bad for privacy</li></ul><h3 style="text-align: left;">Hardware "paper" Wallet <br /></h3><p>There is a hardware solution to the first problem, the <a href="https://opendime.com/">Opendime</a>. It's essentially the same thing as the paper wallet, except it generates the key pair and keeps the private key hidden until you mechanically alter the device. If someone gives you an Opendime, it's easy to see that they have not altered the device and seen the private key. It has the same address re-use downside of the paper wallet. It's also pretty expensive if you just want to give a kid $5 worth of bitcoin. Even more so if you want to give a bunch of nieces and nephews bitcoin!</p><p>I thought more about keeping the private key private and realized that some niece or nephew is likely to lose either a piece of paper or an Opendime and then come to me and ask what they can do to recover their bitcoin. I think that in this scenario of mine, it would be best if I did keep a copy of the private key someplace safe. That turns the first con into a pro!</p><h3 style="text-align: left;">Full Hardware Wallet </h3><p>For my own private keys, I use a <a href="https://trezor.io/">Trezor hardware wallet</a>. 
It generates the private key on the device and never lets anyone see it, except once at setup time in the form of a <a href="https://learnmeabitcoin.com/technical/mnemonic">backup seed phrase</a>. It uses the <a href="https://learnmeabitcoin.com/technical/hd-wallets">Hierarchical Deterministic Wallet</a> (HD Wallet) structure, which is really cool. You can generate a sequence of private keys and corresponding public keys from the main private key that comes from the seed phrase. You can also generate a sequence of keys under each of those keys, making them parent keys of a bunch of other keys (thus, hierarchical). But the really cool part is you can generate just the sequence of public keys from a given public key, no private keys have to be involved in that calculation.</p><p>A practical example of why this is great. I use <a href="https://www.swanbitcoin.com/bdmurdock/">Swan Bitcoin</a> (affiliate link, we each get $10 if you sign up with it) to buy my bitcoin. Swan automatically transfers the bitcoin from their account to mine on a regular basis. I could give them a single public-key (address) that they always use for those transfers, or, I can give them a public-key from my hierarchical wallet and they can generate a series of public keys to send the bitcoin to, using a new key each time. This eliminates address reuse and I don't have to manually give them a new key for each transaction. If I keep that public key private between me and them, then nobody knows that each of those transactions from Swan are going to me.</p><p>Back to giving bitcoin as a gift. I could give each person a parent public key from my Trezor. There is a nice bitcoin wallet app called <a href="https://bluewallet.io/">BlueWallet</a> that also knows how to do the HD Wallet thing that they can use to manage the public keys. That would keep the private keys totally safe. I could still print the master public key onto a piece of paper. The basic operations become:<br /></p><ul style="text-align: left;"><li>To add bitcoin to the wallet, type or scan the main public key into BlueWallet, get the next public key (receive address), send some bitcoin to the receive address</li><li>To check your balance, type or scan the public key into BlueWallet</li><li>If you want to gift that bitcoin, just hand over the piece of paper</li><li>To send the bitcoin on the blockchain, they have to call me up and I have to use the Trezor to send it<br /></li></ul></div><p>Pros:</p><ul style="text-align: left;"><li>Simple, no fancy hardware or software required for them to receive the bitcoin<br /></li><li>No address reuse</li><li>Private key is safely backed up with me<br /></li></ul><div><p>The downsides to this are:</p><ul style="text-align: left;"><li>They have no control over the private key<br /></li></ul><p>That is a pretty big downside, they really don't own the bitcoin.</p><h3 style="text-align: left;">HD Paper Wallet<br /></h3><div>Once I figured out this whole seed phrase and HD wallet thing, I came up with another idea. I could generate a seed phrase and the corresponding master public and private key pairs and give them those. I'd keep a copy of the seed phrase myself just in case they lose it. Now the operations become:<div><ul style="text-align: left;"><li>To add bitcoin to the wallet,
type or scan the main public key into BlueWallet, get the next public
key (receive address), send some bitcoin to the receive address</li><li>To check your balance, type or scan the public key into BlueWallet</li><li>If you want to gift that bitcoin, just hand over the piece of paper</li><li>To send the bitcoin on the blockchain, type or scan the main private key into BlueWallet<br /></li></ul><p>Pros:</p><ul style="text-align: left;"><li>Simple, no fancy hardware or software required for them to receive the bitcoin<br /></li><li>No address reuse</li><li>Private key is backed up with me</li></ul><p>The downsides to this are:</p><ul style="text-align: left;"><li>Private key could be easy for someone else to see/copy</li><li>Private key could be easily lost</li></ul><p>I feel like this is the best compromise. There is one more downside. No websites or tools exist to make a pretty paper wallet out of HD wallet master public and private keys. I'm going to have to make that on my own.</p><p>Any thoughts? Please let me know in the comments. I sometimes think I'm overcomplicating this with the HD wallet. Who cares that much about a little address reuse? I like the tech (math, really) of the HD wallet though. Or maybe the Trezor is the best way? That keeps the private keys safest. They can check their balance with their public keys and feel like they own it. Is that good enough?<br /></p></div></div></div>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-30005762700587282722021-06-30T08:27:00.003-07:002021-06-30T08:27:26.951-07:00Traffic in Little Cottonwood CanyonThis is my comment on the <a href="https://littlecottonwoodeis.udot.utah.gov/">Utah Department of Transportation's plans</a> "to provide an integrated transportation system that improves the reliability, mobility and safety for residents, visitors, and commuters who use S.R. 210."<div><br /></div><div>This is long, but I have tried to order it in such a way that the most important points come first, so don't give up now. At least read the first 3 paragraphs, please.</div><div><br /></div><div>First and foremost I'd like to ask, what problem are we really trying to solve? Roughly 355 days a year there are no reliability, mobility, or safety problems on S.R. 210. The weather is good, the roads are clean and clear, and traffic flows at or above the speed limit of the road. We all need to understand that the problems with reliability, mobility, and safety only happen about 10 days a year, if the skiers are lucky and we get that many big snow storms.<br /><div><br /></div><div><div>Mobility</div><div><br /></div><div>Congestion on roads is annoying, but we need to seek to understand it before we try to fix it. Congestion on a road happens because it leads to a popular place. Lots of people want to get to that place, so they get on that road. The road gets congested and nobody can get to the popular place as fast as they could if there was no congestion. This is what bothers us. We have a road that could allow travel at a given speed, but because of the overcrowding on the road, we all have to go slower than that speed.</div><div><br /></div><div>Solutions to congestion are all temporary. When a road is congested, there are some number of people that will simply choose not to go to the popular destination.
If you widen the road or add alternative means to get to the popular destination, at first the congestion will be alleviated, but before too long the people that were avoiding the popular place because of congestion will see that there is no congestion and they will start traveling to the popular place again. Before too long you will have congestion again. Anyone who has seen the progression of I-15 over the years here in Utah can understand this. There will be more people getting to the popular destination than there were before, but there will still be congestion.</div><div><br /></div><div>Understanding all that, we can better talk about what we are really doing. We are not alleviating congestion (increasing mobility) long-term. We are alleviating it short-term only, and we are providing the means for more people to reach the popular destination. Is that really what we want in Little Cottonwood Canyon? Can the ski resorts, hiking trails, picnic areas, climbing routes, etc. handle more people? Or will they become congested too?</div><div><br /></div><div>Reliability and Safety</div><div><br /></div><div>These are essentially the same concern. When it snows, cars and busses are less reliable because they might get stuck or slide off the road. In extreme cases they might slide into each other or off the road which is a safety issue. This is where I would like to point out how strange it is that UDOT has recently stopped talking about these concerns in Big Cottonwood Canyon (S.R. 190) and is now only talking about Little Cottonwood Canyon (S.R. 210). I would really like to see data on reliability and safety in both canyons because in my following of the two it appears that S.R. 190 has far more accidents and slide offs than S.R. 210. S.R. 190 is a much longer, windier road with areas of very steep drop-offs down to the creek. I have noticed that S.R. 190 gets closed to deal with accidents (stranding skiers on the road or at the resorts for hours on end) far, far more often than S.R. 210. Is any of this plan really concerned with reliability and safety? If so, it should consider both canyons.</div><div><br /></div><div>Bus Lanes vs. Gondola</div><div><br /></div><div>Now, all that being said, let's address this specific plan which seems to assume that yes, the canyon can and should accommodate more people and is in dire need of more reliability and safety. Considering all the above, I believe neither solution is a good idea. Both will be incredibly costly and have very real negative impacts on the environment. Neither will make a difference on the 355 good traffic days a year, and in the long run, neither will solve the congestion problems on the 10 bad days a year. 
The one thing the gondola plan has going for it is the increased reliability and safety on those 10 bad days, but I see no data that justifies the extreme cost for what is likely to be only a very small increase in reliability and safety in the one canyon that doesn't have that big of a reliability and safety problem anyway, while we ignore the other canyon that does have real reliability and safety problems (on those 10 days a year).</div></div></div><div><br /></div><div>My belief is we should look for more cost-effective ways to address the reliability and safety issues only, in both canyons(!), and not proceed with either a road widening or gondola project.</div>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com1tag:blogger.com,1999:blog-3669809752172683097.post-86751825920050645752021-04-30T19:10:00.000-07:002021-04-30T19:10:10.997-07:00Fix for Cura's Ender 3 gcode<p>A child of mine finally asked for a 3d printer. I knew that if I tried to push it, no kid would be interested, so I didn't. But finally, one of them asked for one. We ordered the <a href="https://www.creality.com/goods-detail/ender-3-3d-printer">Creality Ender 3</a> that night from their website and some filament from Amazon. It all arrived a couple days later and we enjoyed the process of assembling it and then finally printing the gcode files that were on the SD card that came with the printer. Including those was a really nice touch. Once we got the bed close enough to the nozzle it all worked great.</p><p>After the initial success we found some models on thingiverse, sliced them with <a href="https://ultimaker.com/software/ultimaker-cura">Cura</a>, and then saw the printer <a href="https://www.youtube.com/watch?v=jgqMpexHQ5k">do something like this</a> (not my video) over and over. Too much filament in the wrong place, no filament in the right place. It was a strange and bewildering start-up sequence to watch. I searched the internet for advice and didn't find much. I finally just opened up the gcode file that Cura created and compared it to the gcode files that came with the printer. There was definitely a more complicated start-up sequence in the Cura file. I deleted it, replaced it with what came with the printer, and prints are all working again.</p><p>You can make this change permanent in Cura by clicking Settings->Printer->manage printer. Then click on your printer and click the Machine Settings button. In the text box on the left labeled "Start G-code" delete all the gcode there and replace it with this:</p><p><br /></p><pre><code>; Ender 3 Custom Start G-code
G28 ; home all axes
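; G29: bed leveling (only has an effect if the firmware has a leveling feature enabled)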
G29
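; G92 E0: reset the extruder's position counter to zero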
G92 E0
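; G1 E-10 F6000: retract 10 mm of filament at 6000 mm/min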
G1 E-10.0000 F6000
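; G1 Z0.750 F1002: lift the nozzle to 0.75 mm above the bed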
G1 Z0.750 F1002
; process Process1
; layer 1, Z = 0.450
T0 ; select the first extruder (the only tool on an Ender 3)</code></pre><p>Now your Cura-sliced prints will work nicely on your Ender 3.</p>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-83174208487960255392021-03-15T09:45:00.001-07:002021-03-15T09:46:04.972-07:00Linux Environment Management: direnv does it all<div class="wiki-content" id="main-content"><h2>Linux Environment Management: direnv does it all</h2><p>A few years back I wrote about different options for <a href="https://bryan-murdock.blogspot.com/2015/12/linux-environment-management.html">linux environment management</a>. I recently learned about another option, direnv. I think I'm convinced that it is the only tool you need. Read this as if it's another section added to that previous post.</p><h2>Use direnv</h2><p>Straight from <a href="https://direnv.net/">the direnv website</a>: "direnv is an extension for your shell. It augments existing shells with a new feature that can load and unload environment variables depending on the current directory. Before each prompt, direnv checks for the existence of a .envrc file in the current and parent directories. If the file exists (and is authorized), it is loaded"</p><p>This happens automatically, so it solves the problem of the "Explicit Environment Files" solution above in a way that is much more convenient than the "Per-command Environment Files" solution. The .envrc files are in standard shell syntax and it properly unloads environments like the "Smart Environment Manager Tool" mentioned above as well. It has the downside that it is not easy to share the same environment setup in multiple directories.</p><p>I'm not sure if there is a simple solution that gives us both of those things, but we have the option with direnv to choose any of the three discussed environment setup solutions in any given terminal.</p><h2>Why not all three?</h2><p>direnv is powerful enough to allow all three techniques for shell configuration described above.</p><h3>Standard direnv</h3><p>The default automatic direnv behavior is enabled by putting this in your .bashrc file:</p><pre><code>eval "$(direnv hook bash)"</code></pre><p>If you want to easily choose between standard direnv and the below option when you start a new shell you could encapsulate this in a shell function named direnvenable. When your terminal starts up, you would run that function if you want standard direnv behavior.</p><h3>The equivalent of Shell Initialization Files</h3><p>To "source" a given .envrc file you can just spawn a subshell using the direnv exec command, passing it the path to a project and its .envrc file:</p><pre><code>direnv exec $(readlink -f /path/to/git/clone) $SHELL -i</code></pre><p>I would suggest wrapping this in a shell function to make it easier.</p><h3>The equivalent of Per-command Environment Files</h3><p>This can work in conjunction with the automatic direnv behavior (you can run direnvenable and still use this for commands outside of any project directory). This is a good way to run commands in scripts using the correct shell environment. It's the same direnv exec call above prefixing any shell command:</p><pre><code>direnv exec $(readlink -f /path/to/git/clone) <command></code></pre><p>I would also suggest wrapping this in a shell function to make it easier.</p><h2>Conclusion</h2><p>direnv gives you the power of never needing to manually source an environment setup script when you are working in a git clone of a project.
<h2>Conclusion</h2><p>direnv gives you the power of never needing to manually source an environment setup script when you are working in a git clone of a project. It also gives you the ability to use project settings from a git clone in other directories if needed.</p></div>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-2744214186238449752020-10-17T08:30:00.004-07:002020-12-06T08:38:53.807-08:00Effectively Internet Filtering in 2020<p>(To skip my rambling intro and get to the nitty gritties, search this page for, "After that long introduction")</p>
<p>In college, back when the internet was young, I hated the clumsy ineffective
internet filtering that was in place on campus. It often blocked sites that were
perfectly fine, and did not catch all the sites of the type that they were
trying to block. Fast forward 10 years or so and I saw my children stumbling
upon some content that I didn't want them to see on my unfiltered home internet
and my attitude changed a bit. Back then web filtering was pretty easy. Nothing
was encrypted and
<a href="https://en.wikipedia.org/wiki/DansGuardian">DansGuardian</a> was the
go-to tool. You set up a transparent web proxy and DansGuardian would scan the
entire content of every website that you downloaded in your home. Incriminating
words and phrases would trigger its blocking and it would replace the website
you were downloading with an explanatory message. The beauty was that there was
no need to scour the web, categorize every website in the world, and maintain lists. It still had its false positives and if a website had
objectionable images but otherwise benign text there was nothing it could do,
but it took the edge off the raw internet.</p>
<p>Today, it's not so easy. The HTTPS
Everywhere campaign bothered me at first. It felt unnecessary, and it most
definitely broke my DansGuardian filtering. I have since come to understand the
importance and necessity of HTTPS and I'm very glad that Let's Encrypt has made it easy for all of us to use it. But I do still have kids.</p>
<p>DNS filtering came to
the rescue, first with <a href="https://www.opendns.com/">OpenDNS</a>, and now I
use <a href="https://cleanbrowsing.org/">CleanBrowsing</a>. It's pretty good,
but sometimes I want more control. One night our school had a parent-night
presentation about internet safety for kids and they had invited some vendors to
pitch their wares. One of them was
<a href="https://routerlimits.com/">RouterLimits</a>. They had a small box that
you simply connected to your network and it would filter internet traffic based
on categories or individual sites you listed. No software or configuration of
other hosts or your router required. It could also enforce time windows with no
internet. "How is this possible when this box is just another client on the
LAN?" I pressed their salesman. It was a small company and I think he was also
an engineer because he realized I was one, and he slyly said to me, "ARP
spoofing."</div><div>"That's evil!" I instinctively replied. And then I thought a little
more about it and realized it was evil. Evil genius! I bought one right there.
Their model was great. Pay $80 for $5 worth of hardware and you get their
service for life. Plug the little box into your LAN and connect to their web
service. The little box collects a list of hosts on the LAN by paying attention
to broadcast traffic, then it floods each host with ARP replies to tell them all
that it is the gateway and begins its Man in the Middle attack. If a kid tries
to visit badsite.example.com, or any site during the time window when internet
is configured to be off for their device, the RouterLimits box sees that and
just drops the packet. If a kid tries to visit goodsite.example.com, the
RouterLimits box simply forwards the packet along to the actual gateway. Simple
and very effective.</p>
<p>The schedule was the thing I loved the most. I had never had
that with DansGuardian or CleanBrowsing alone. Sadly, RouterLimits was bought by
a bigger company that changed the business model to a yearly subscription. Also,
right about the same time, the RouterLimits box lost its ability to block my
Roku for some reason. Kids were watching Netflix late into the night on school
nights again, dang it. I worked with the RouterLimits support team a bit, but
they couldn't figure out what was going on. I wasn't super motivated to debug it
myself, because I didn't want to start paying a regular fee for this service
anyway.</p>
<p>I still wanted my kids kicked off the internet at a decent time on
school nights, though, so I started looking for solutions. The first thing I
tried was a pi-hole. It doesn't have scheduling built-in, but I was able to hack
together a script that modified the pi-hole database directly to put my kids'
devices into a group that had a blocklist that filtered everything. That mostly
worked, but it was really a hack. And then my raspberry pi's SD card died and I
didn't have a backup. I started looking for another solution. I remembered ARP
spoofing and did a little research. Sure enough, there is a tool called
<a href="https://www.ettercap-project.org/">ettercap</a> that make it pretty
easy, especially if you just want to block everything.</p>
<p>After that long introduction, some nitty gritties. To run ettercap in text-mode and see what it can do, run this command:</p>
<pre><code>sudo ettercap -Tq</code></pre>
<p>Play around with it a bit, it's pretty cool.</p>
<p>To filter (perform a Man in the Middle Attack), you'll want to scan and save a list of hosts on the LAN, like so (change the 1000 to your user ID):</p>
<pre><code>sudo env EC_UID=1000 ettercap -Tqk lan-hosts</code></pre>
<p>To man-in-the-middle a host with IP 192.168.1.193, and if your gateway is 192.168.1.1, run this:</p>
<pre><code>sudo ettercap -Tq -M arp:remote /192.168.1.1// /192.168.1.193//</code></pre>
<p>For me that didn't really do anything because it simply forwarded the packets it was intercepting on to the gateway. To do something with the packets ettercap is intercepting, you need to create a filter. My filter is simple, just drop every packet:</p>
<pre><code>drop();</code></pre>
<p>Put that in a text file named drop-all.ecf and run this to compile the filter</p>
<pre><code>etterfilter drop-all.ecf -o drop-all.ef</code></pre>
<p>You can read the etterfilter man page for more information about what you can do. I imagine the RouterLimits box had some more interesting filters (assuming they were using ettercap).</p>
<p>Once you have your filter compiled, add it to the above ettercap command like so:</p>
<pre><code>sudo ettercap -Tq --filter drop-all.ef -M arp:remote /192.168.1.1// /192.168.1.193//</code></pre>
<p>You have successfully performed a Denial of Service attack against 192.168.1.193. If you have, for example, two kids' devices you want to block, you need the lan-hosts file you made earlier, and you do this:</p>
<pre><code>sudo ettercap -Tz -j lan-hosts --filter drop-all.ef -M arp:remote /192.168.1.1// /192.168.1.193\;192.168.1.221//</code></pre>
<p>You can add as many IP addresses as you like to the list, separated by semi-colons. As far as I can tell, they all need to be listed in lan-hosts too. I believe you could use MAC addresses instead of IP addresses, but I have my router giving out fixed IP addresses to all my kids' devices (that was to make the pi-hole hack work), so I just use the IP addresses.</p>
<p>All that's left is to run ettercap with --daemon, make a cron job or systemd timer to start and stop it at the times you want to block your kids' internet access, and you are done! It just so happens that I have written <a href="https://github.com/krupan/internet-block">an ansible playbook</a> that does all this for you. You'll have to modify lan-hosts and the internet-stop.service to use your own devices MAC and IP addresses, then run ansible-playbook to deploy this to a raspberry pi (or some other linux box on your LAN that you leave on all the time) and you are good to go.</p>
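<p>For reference, the cron flavor of that schedule could look roughly like this in root's crontab (<code>sudo crontab -e</code>) on the always-on box. Everything here is illustrative: the <code>/etc/internet-block</code> paths, the 9:00pm/6:30am times, and the IP addresses are placeholders, and the ansible playbook linked above handles this with systemd units (the internet-stop.service mentioned there) instead:</p>
<pre><code># block the kids' devices at 9:00pm on school nights (Sun-Thu)
0 21 * * 0-4 ettercap --daemon -z -j /etc/internet-block/lan-hosts --filter /etc/internet-block/drop-all.ef -M arp:remote /192.168.1.1// /192.168.1.193\;192.168.1.221//
# restore internet at 6:30am on school mornings (Mon-Fri)
30 6 * * 1-5 pkill ettercap</code></pre>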
<p>P.S. This even blocks the Roku. ettercap couldn't detect the Roku on my LAN like it could other hosts for some reason, so that's probably why Router Limits couldn't block it, but once I manually entered the Roku's IP and MAC into the lan-hosts file, ettercap was able to DoS it just like all the other hosts.</p>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com2tag:blogger.com,1999:blog-3669809752172683097.post-5438247315625759942020-03-24T20:55:00.000-07:002020-03-24T20:55:27.616-07:00How To Retroactively Annex Files Already in a Git Repo<style type="text/css">
<!--/*--><![CDATA[/*><!--*/
.title { text-align: center;
margin-bottom: .2em; }
.subtitle { text-align: center;
font-size: medium;
font-weight: bold;
margin-top:0; }
.todo { font-family: monospace; color: red; }
.done { font-family: monospace; color: green; }
.priority { font-family: monospace; color: orange; }
.tag { background-color: #eee; font-family: monospace;
padding: 2px; font-size: 80%; font-weight: normal; }
.timestamp { color: #bebebe; }
.timestamp-kwd { color: #5f9ea0; }
.org-right { margin-left: auto; margin-right: 0px; text-align: right; }
.org-left { margin-left: 0px; margin-right: auto; text-align: left; }
.org-center { margin-left: auto; margin-right: auto; text-align: center; }
.underline { text-decoration: underline; }
#postamble p, #preamble p { font-size: 90%; margin: .2em; }
p.verse { margin-left: 3%; }
pre {
border: 1px solid #ccc;
box-shadow: 3px 3px 3px #eee;
padding: 8pt;
font-family: monospace;
overflow: auto;
margin: 1.2em;
}
pre.src {
position: relative;
overflow: visible;
padding-top: 1.2em;
color: white;
background-color: black;
}
pre.src:before {
display: none;
position: absolute;
background-color: white;
top: -10px;
right: 10px;
padding: 3px;
border: 1px solid black;
}
pre.src:hover:before { display: inline;}
/* Languages per Org manual */
pre.src-asymptote:before { content: 'Asymptote'; }
pre.src-awk:before { content: 'Awk'; }
pre.src-C:before { content: 'C'; }
/* pre.src-C++ doesn't work in CSS */
pre.src-clojure:before { content: 'Clojure'; }
pre.src-css:before { content: 'CSS'; }
pre.src-D:before { content: 'D'; }
pre.src-ditaa:before { content: 'ditaa'; }
pre.src-dot:before { content: 'Graphviz'; }
pre.src-calc:before { content: 'Emacs Calc'; }
pre.src-emacs-lisp:before { content: 'Emacs Lisp'; }
pre.src-fortran:before { content: 'Fortran'; }
pre.src-gnuplot:before { content: 'gnuplot'; }
pre.src-haskell:before { content: 'Haskell'; }
pre.src-hledger:before { content: 'hledger'; }
pre.src-java:before { content: 'Java'; }
pre.src-js:before { content: 'Javascript'; }
pre.src-latex:before { content: 'LaTeX'; }
pre.src-ledger:before { content: 'Ledger'; }
pre.src-lisp:before { content: 'Lisp'; }
pre.src-lilypond:before { content: 'Lilypond'; }
pre.src-lua:before { content: 'Lua'; }
pre.src-matlab:before { content: 'MATLAB'; }
pre.src-mscgen:before { content: 'Mscgen'; }
pre.src-ocaml:before { content: 'Objective Caml'; }
pre.src-octave:before { content: 'Octave'; }
pre.src-org:before { content: 'Org mode'; }
pre.src-oz:before { content: 'OZ'; }
pre.src-plantuml:before { content: 'Plantuml'; }
pre.src-processing:before { content: 'Processing.js'; }
pre.src-python:before { content: 'Python'; }
pre.src-R:before { content: 'R'; }
pre.src-ruby:before { content: 'Ruby'; }
pre.src-sass:before { content: 'Sass'; }
pre.src-scheme:before { content: 'Scheme'; }
pre.src-screen:before { content: 'Gnu Screen'; }
pre.src-sed:before { content: 'Sed'; }
pre.src-sh:before { content: 'shell'; }
pre.src-sql:before { content: 'SQL'; }
pre.src-sqlite:before { content: 'SQLite'; }
/* additional languages in org.el's org-babel-load-languages alist */
pre.src-forth:before { content: 'Forth'; }
pre.src-io:before { content: 'IO'; }
pre.src-J:before { content: 'J'; }
pre.src-makefile:before { content: 'Makefile'; }
pre.src-maxima:before { content: 'Maxima'; }
pre.src-perl:before { content: 'Perl'; }
pre.src-picolisp:before { content: 'Pico Lisp'; }
pre.src-scala:before { content: 'Scala'; }
pre.src-shell:before { content: 'Shell Script'; }
pre.src-ebnf2ps:before { content: 'ebfn2ps'; }
/* additional language identifiers per "defun org-babel-execute"
in ob-*.el */
pre.src-cpp:before { content: 'C++'; }
pre.src-abc:before { content: 'ABC'; }
pre.src-coq:before { content: 'Coq'; }
pre.src-groovy:before { content: 'Groovy'; }
/* additional language identifiers from org-babel-shell-names in
ob-shell.el: ob-shell is the only babel language using a lambda to put
the execution function name together. */
pre.src-bash:before { content: 'bash'; }
pre.src-csh:before { content: 'csh'; }
pre.src-ash:before { content: 'ash'; }
pre.src-dash:before { content: 'dash'; }
pre.src-ksh:before { content: 'ksh'; }
pre.src-mksh:before { content: 'mksh'; }
pre.src-posh:before { content: 'posh'; }
/* Additional Emacs modes also supported by the LaTeX listings package */
pre.src-ada:before { content: 'Ada'; }
pre.src-asm:before { content: 'Assembler'; }
pre.src-caml:before { content: 'Caml'; }
pre.src-delphi:before { content: 'Delphi'; }
pre.src-html:before { content: 'HTML'; }
pre.src-idl:before { content: 'IDL'; }
pre.src-mercury:before { content: 'Mercury'; }
pre.src-metapost:before { content: 'MetaPost'; }
pre.src-modula-2:before { content: 'Modula-2'; }
pre.src-pascal:before { content: 'Pascal'; }
pre.src-ps:before { content: 'PostScript'; }
pre.src-prolog:before { content: 'Prolog'; }
pre.src-simula:before { content: 'Simula'; }
pre.src-tcl:before { content: 'tcl'; }
pre.src-tex:before { content: 'TeX'; }
pre.src-plain-tex:before { content: 'Plain TeX'; }
pre.src-verilog:before { content: 'Verilog'; }
pre.src-vhdl:before { content: 'VHDL'; }
pre.src-xml:before { content: 'XML'; }
pre.src-nxml:before { content: 'XML'; }
/* add a generic configuration mode; LaTeX export needs an additional
(add-to-list 'org-latex-listings-langs '(conf " ")) in .emacs */
pre.src-conf:before { content: 'Configuration File'; }
table { border-collapse:collapse; }
caption.t-above { caption-side: top; }
caption.t-bottom { caption-side: bottom; }
td, th { vertical-align:top; }
th.org-right { text-align: center; }
th.org-left { text-align: center; }
th.org-center { text-align: center; }
td.org-right { text-align: right; }
td.org-left { text-align: left; }
td.org-center { text-align: center; }
dt { font-weight: bold; }
.footpara { display: inline; }
.footdef { margin-bottom: 1em; }
.figure { padding: 1em; }
.figure p { text-align: center; }
.inlinetask {
padding: 10px;
border: 2px solid gray;
margin: 10px;
background: #ffffcc;
}
#org-div-home-and-up
{ text-align: right; font-size: 70%; white-space: nowrap; }
textarea { overflow-x: auto; }
.linenr { font-size: smaller }
.code-highlighted { background-color: #ffff00; }
.org-info-js_info-navigation { border-style: none; }
#org-info-js_console-label
{ font-size: 10px; font-weight: bold; white-space: nowrap; }
.org-info-js_search-highlight
{ background-color: #ffff00; color: #000000; font-weight: bold; }
.org-svg { width: 90%; }
/*]]>*/-->
</style><br />
<h2>Table of Contents</h2><ul><li><a href="#orgb046fe1">How To Retroactively Annex Files Already in a Git Repo</a><br />
<ul><li><a href="#org13a141c">First Tries: filter-branch, filter-repo</a></li>
<li><a href="#org15d7f99">Success with git rebase –interactive</a><br />
<ul><li><a href="#orga29b890">Added binary files</a></li>
<li><a href="#org5e93a48">Deleted binary files</a></li>
<li><a href="#orge2bae77">Modified binary files</a></li>
<li><a href="#org8ee708e">Moved binary files</a></li>
</ul></li>
<li><a href="#orgf8af56e">Dealing with Tags</a></li>
<li><a href="#orgd54882c">Clean Up and Results</a></li>
</ul></li>
</ul><h2 id="orgb046fe1">How To Retroactively Annex Files Already in a Git Repo</h2><p>In my <a href="/2020/03/git-annex-is-great.html">last post</a> I talked about how surprisingly easy it is to use <a href="https://git-annex.branchable.com/">git annex</a> to manage your large binary files (or even small ones). In this post, I'm going to show how hard it is to go back and fix the mistake you made when you decided not to learn and use git annex at the start of your project. Learn from my mistake!<br />
</p><p>When I started developing <a href="https://millenniumsoftwaredesign.com/">the website for my business</a>, I figured that editing history in git is easy, and I could just check in binary files (like the images) for now and fix it later. Well, it was starting to get a little sluggish, and I had some bigger binary files that I wanted to start keeping with the website code, so I figured the time had come. Once I decided on git annex, it was time to go edit that history.<br />
</p><h3 id="org13a141c">First Tries: filter-branch, filter-repo</h3><p>There is <a href="https://git-annex.branchable.com/tips/How_to_retroactively_annex_a_file_already_in_a_git_repo/">a very old page of instructions for doing this using <code>git filter-branch</code></a>. The first thing I noticed when I tried that was this message from git:<br />
</p><pre class="example">WARNING: git-filter-branch has a glut of gotchas generating mangled history
rewrites. Hit Ctrl-C before proceeding to abort, then use an
alternative filtering tool such as 'git filter-repo'
(https://github.com/newren/git-filter-repo/) instead. See the
filter-branch manual page for more details; to squelch this warning,
set FILTER_BRANCH_SQUELCH_WARNING=1.
</pre><p>Yikes! A warning like that from a tool (git) that is already known for its gotchas is one I decided to take seriously. Besides, I'm always down to try the new hotness, so I started reading about <a href="https://github.com/newren/git-filter-repo"><code>git-filter-repo</code></a>. The more I read and experimented, even dug into the source code, the more I came to understand that it could not do what I needed, sadly. Maybe someone will read this and correct me.<br />
</p><h3 id="org15d7f99">Success with git rebase –interactive</h3><p>Not seeing a nice pre-built tool or command that could do this for me, I set out to manually edit the repository history using good ol' <code>git rebase --interactive</code>. First, I had to find the all the binary files that are in the repo (not just the ones in the current revision). Here's how I did it:<br />
</p><div class="org-src-container"><pre class="src src-sh"><span style="color: #888a85;"># </span><span style="color: #888a85;">The --stat=1000 is so it doesn't truncate anything</span>
git log --stat=1000 | grep Bin | sort | uniq > binary-files
</pre></div><p>Note the comment. Isn't it cute that <code>git log</code> truncates long lines even when stdout is not connected to your terminal? There are lots of little annoying gotchas like that throughout this process. Makes me miss mercurial, but don't worry, I will try not to mention mercurial again.<br />
</p><p>Now, you'll still have duplicates in <code>binary-files</code> because of the other stuff that <code>git log --stat</code> spits out on each line. I personally used some emacs commands to remove everything but the filename from each line of the <code>binary-files</code> file, and then did a sort and uniq again.<br />
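</p><p>If you'd rather stay in the shell for that cleanup, something like this could produce the same list in one go (a sketch that leans on the <code>path | Bin old -&gt; new bytes</code> layout of the <code>git log --stat</code> lines):<br />
</p><div class="org-src-container"><pre class="src src-sh"># keep only the path part of each "Bin" stat line, then de-duplicate
git log --stat=1000 | grep ' Bin ' | sed 's/ *|.*$//; s/^ *//' | sort -u > binary-files
</pre></div><p>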
</p><p>Next, I had to find each commit that modified any of these binary files. Here's how I did that:<br />
</p><div class="org-src-container"><pre class="src src-sh"><span style="color: #fcaf3e;">for</span> file<span style="color: #fcaf3e;"> in</span> $(<span style="color: #fa8072;">cat</span> binary-files); <span style="color: #fcaf3e;">do</span>
git log --pretty=oneline --follow -- $<span style="color: #fce94f;">file</span> >> commits;
<span style="color: #fcaf3e;">done</span>
</pre></div><p>Then I did another <code>sort</code> and <code>uniq</code> on that. Luckily there were only about 15 commits. Phew.<br />
</p><p>Next I tried to find the earliest commit in the list I had, but that was a pain (<i>don't…mention…mercurial…</i>), so I just ran <code>git rebase --interactive</code> and gave it one of the first commits I made in the repository. I actually used <a href="https://magit.vc/">emacs magit</a> to start the rebase, but the surgery required throughout the process made me drop to the command-line for most of it. magit did make it really easy to mark the 15 commits from my <code>commits</code> file with an <code>e</code> though.<br />
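</p><p>For the record, the non-magit way to kick that off is roughly the following, where <code>&lt;early-commit&gt;</code> is a stand-in for whichever commit from the <code>commits</code> file is oldest:<br />
</p><div class="org-src-container"><pre class="src src-sh"># every commit after &lt;early-commit&gt; shows up in the todo list;
# change "pick" to "edit" for the ~15 commits that touch binary files
git rebase --interactive &lt;early-commit&gt;
</pre></div><p>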
</p><p>OK, once the rebase got rolling I ran into a few different scenarios. Commits that added a new binary file, commits that deleted binary files, commits that modified binary files, and a commit that moved binary files.<br />
</p><h4 id="orga29b890">Added binary files</h4><p>When a binary file was added, git would act like I have always seen rebase interactive work, it would show the normal thing:<br />
</p><pre class="example">Stopped at 53fc550... some commit message here
You can amend the commit now, with
git commit --amend
Once you are satisfied with your changes, run
git rebase --continue
</pre><p>In that case I did this:<br />
</p><div class="org-src-container"><pre class="src src-sh">git show --stat=1000 <span style="color: #888a85;"># </span><span style="color: #888a85;">to see binary (Bin) files</span>
git rm --cached <the-binary-files>
git add <the-binary-files> <span style="color: #888a85;"># </span><span style="color: #888a85;">git annex will annex them</span>
git commit --amend
git rebase --continue
</pre></div><p>Easy peasy, as long as you have <a href="/2020/03/git-annex-is-great.html">set up annex like my previous post explains</a> so that annexing happens automatically.<br />
</p><h4 id="org5e93a48">Deleted binary files</h4><p>When a binary file was deleted, git would throw up a message like this up:<br />
</p><pre class="example">$ git rebase --continue
[detached HEAD 130bcc4] banner on each page now
21 files changed, 190 insertions(+), 42 deletions(-)
create mode 100644 msd/webshop/static/webshop/img/common/adi-goldstein-EUsVwEOsblE-unsplash.jpg
create mode 100644 msd/webshop/static/webshop/img/common/alexandre-debieve-FO7JIlwjOtU-unsplash.jpg
delete mode 100644 msd/webshop/static/webshop/img/common/file-icons.png
create mode 100644 msd/webshop/static/webshop/img/common/kevin-ku-w7ZyuGYNpRQ-unsplash.jpg
create mode 100644 msd/webshop/static/webshop/img/common/levi-saunders-1nz-KjRdg-s-unsplash.jpg
create mode 100644 msd/webshop/static/webshop/img/common/max-duzij-qAjJk-un3BI-unsplash.jpg
create mode 100644 msd/webshop/static/webshop/img/common/nick-fewings-ZJAnGFg-rM4-unsplash.jpg
create mode 100644 msd/webshop/static/webshop/img/common/umberto-jXd2FSvcRr8-unsplash.jpg
create mode 100644 msd/webshop/static/webshop/img/common/yogesh-phuyal-mjwGKmwkDDA-unsplash.jpg
CONFLICT (modify/delete): msd/webshop/static/webshop/img/common/nick-fewings-ZJAnGFg-rM4-unsplash.jpg deleted in 90d71fb... refactored banners in pricing.css to reduce code duplication and modified in HEAD. Version HEAD of msd/webshop/static/webshop/img/common/nick-fewings-ZJAnGFg-rM4-unsplash.jpg left in tree.
error: could not apply 90d71fb... refactored banners in pricing.css to reduce code duplication
Resolve all conflicts manually, mark them as resolved with
"git add/rm <conflicted_files>", then run "git rebase --continue".
You can instead skip this commit: run "git rebase --skip".
To abort and get back to the state before "git rebase", run "git rebase --abort".
Could not apply 90d71fb... refactored banners in pricing.css to reduce code duplication
</pre><p>I guess in this case it was that I had added some new files too, so the message was extra verbose. The key message in all that was: "msd/webshop/static/webshop/img/common/nick-fewings-ZJAnGFg-rM4-unsplash.jpg deleted…" Here's what you do in this case:<br />
</p><div class="org-src-container"><pre class="src src-sh">git rm msd/webshop/static/webshop/img/common/nick-fewings-ZJAnGFg-rM4-unsplash.jpg
git diff --stat=1000 --staged <span style="color: #888a85;"># </span><span style="color: #888a85;">to find full paths for any Bin files</span>
git restore --staged <binary-files>
git add <binary-files>
git diff --stat --staged <span style="color: #888a85;"># </span><span style="color: #888a85;">just to double check there are no Bin files now</span>
git rebase --continue
</pre></div><p>Looks so simple (heh), but it took me a decent amount of web searching and experimentation to figure it out. All for you, dear reader, all for you.<br />
</p><h4 id="orge2bae77">Modified binary files</h4><p>Here's one where I resized several images, git helpfully uttered:<br />
</p><pre class="example">$ git rebase --continue
[detached HEAD 7dfb28c] refactored banners in pricing.css to reduce code duplication
4 files changed, 28 insertions(+), 75 deletions(-)
create mode 100644 msd/webshop/static/webshop/img/common/connor-betts-QK6Iwzd5MhE-unsplash.jpg
delete mode 100644 msd/webshop/static/webshop/img/common/nick-fewings-ZJAnGFg-rM4-unsplash.jpg
warning: Cannot merge binary files: msd/webshop/static/webshop/img/common/yogesh-phuyal-mjwGKmwkDDA-unsplash.jpg (HEAD vs. a90710f... scaled images down to max width of 1920 pixels)
warning: Cannot merge binary files: msd/webshop/static/webshop/img/common/umberto-jXd2FSvcRr8-unsplash.jpg (HEAD vs. a90710f... scaled images down to max width of 1920 pixels)
warning: Cannot merge binary files: msd/webshop/static/webshop/img/common/max-duzij-qAjJk-un3BI-unsplash.jpg (HEAD vs. a90710f... scaled images down to max width of 1920 pixels)
warning: Cannot merge binary files: msd/webshop/static/webshop/img/common/levi-saunders-1nz-KjRdg-s-unsplash.jpg (HEAD vs. a90710f... scaled images down to max width of 1920 pixels)
warning: Cannot merge binary files: msd/webshop/static/webshop/img/common/kevin-ku-w7ZyuGYNpRQ-unsplash.jpg (HEAD vs. a90710f... scaled images down to max width of 1920 pixels)
warning: Cannot merge binary files: msd/webshop/static/webshop/img/common/connor-betts-QK6Iwzd5MhE-unsplash.jpg (HEAD vs. a90710f... scaled images down to max width of 1920 pixels)
warning: Cannot merge binary files: msd/webshop/static/webshop/img/common/alexandre-debieve-FO7JIlwjOtU-unsplash.jpg (HEAD vs. a90710f... scaled images down to max width of 1920 pixels)
warning: Cannot merge binary files: msd/webshop/static/webshop/img/common/adi-goldstein-EUsVwEOsblE-unsplash.jpg (HEAD vs. a90710f... scaled images down to max width of 1920 pixels)
Auto-merging msd/webshop/static/webshop/img/common/yogesh-phuyal-mjwGKmwkDDA-unsplash.jpg
CONFLICT (content): Merge conflict in msd/webshop/static/webshop/img/common/yogesh-phuyal-mjwGKmwkDDA-unsplash.jpg
Auto-merging msd/webshop/static/webshop/img/common/umberto-jXd2FSvcRr8-unsplash.jpg
CONFLICT (content): Merge conflict in msd/webshop/static/webshop/img/common/umberto-jXd2FSvcRr8-unsplash.jpg
Auto-merging msd/webshop/static/webshop/img/common/max-duzij-qAjJk-un3BI-unsplash.jpg
CONFLICT (content): Merge conflict in msd/webshop/static/webshop/img/common/max-duzij-qAjJk-un3BI-unsplash.jpg
Auto-merging msd/webshop/static/webshop/img/common/levi-saunders-1nz-KjRdg-s-unsplash.jpg
CONFLICT (content): Merge conflict in msd/webshop/static/webshop/img/common/levi-saunders-1nz-KjRdg-s-unsplash.jpg
Auto-merging msd/webshop/static/webshop/img/common/kevin-ku-w7ZyuGYNpRQ-unsplash.jpg
CONFLICT (content): Merge conflict in msd/webshop/static/webshop/img/common/kevin-ku-w7ZyuGYNpRQ-unsplash.jpg
Auto-merging msd/webshop/static/webshop/img/common/connor-betts-QK6Iwzd5MhE-unsplash.jpg
CONFLICT (content): Merge conflict in msd/webshop/static/webshop/img/common/connor-betts-QK6Iwzd5MhE-unsplash.jpg
Auto-merging msd/webshop/static/webshop/img/common/alexandre-debieve-FO7JIlwjOtU-unsplash.jpg
CONFLICT (content): Merge conflict in msd/webshop/static/webshop/img/common/alexandre-debieve-FO7JIlwjOtU-unsplash.jpg
Auto-merging msd/webshop/static/webshop/img/common/adi-goldstein-EUsVwEOsblE-unsplash.jpg
CONFLICT (content): Merge conflict in msd/webshop/static/webshop/img/common/adi-goldstein-EUsVwEOsblE-unsplash.jpg
error: could not apply a90710f... scaled images down to max width of 1920 pixels
Resolve all conflicts manually, mark them as resolved with
"git add/rm <conflicted_files>", then run "git rebase --continue".
You can instead skip this commit: run "git rebase --skip".
To abort and get back to the state before "git rebase", run "git rebase --abort".
Could not apply a90710f... scaled images down to max width of 1920 pixels
</pre><p>The trick to fixing this is to notice which commit it's trying to let you edit, which is in the last line of that message, and then checkout that version of each of the unmerged binary files it mentions, like so:<br />
</p><div class="org-src-container"><pre class="src src-sh">git status <span style="color: #888a85;"># </span><span style="color: #888a85;">to get the names of the unmerged binary files</span>
git checkout a90710f <filenames>
</pre></div><p>Now you can do the same thing you did for the deleted file:<br />
</p><div class="org-src-container"><pre class="src src-sh">git restore --staged <filenames>
git add <filenames>
git diff --stat --staged <span style="color: #888a85;"># </span><span style="color: #888a85;">just to double check there are no Bin files now</span>
git rebase --continue
</pre></div><h4 id="org8ee708e">Moved binary files</h4><p>When I ran <code>git log --follow</code> to find all the commits that modified binary files, it flagged one where I had moved them. I'm not sure I actually had to edit that commit and I wonder if I would not have had this weird situation if I had not edited it. But for completeness, here's what I saw. Git rebase stopped to let me edit the commit and git annex printed out this message for every file that was moved:<br />
</p><pre class="example">git-annex: git status will show <filename> to be modified, since content availability has changed and git-annex was unable to update the index. This is only a cosmetic problem affecting git status; git add, git commit, etc won't be affected. To fix the git status display, you can run: git update-index -q --refresh <filename>
</pre><p>Sounds…quite weird. But git rebase would not continue until I did run the suggested command:<br />
</p><div class="org-src-container"><pre class="src src-sh">git update-index -q --refresh <filenames>
git rebase --continue
</pre></div><h3 id="orgf8af56e">Dealing with Tags</h3><p>Once the rebase was done I noticed that the tags I had all still pointed to the original commits. Oops. A quick internet search led me to <a href="https://ownyourbits.com/2017/08/14/rebasing-in-git-without-losing-tags/">this post about rebasing and moving tags to the new commits</a> (written by a former co-worker, it just so happens). Too bad I didn't look for that before I rebased. I thought about redoing the whole rebase, but in the end I just wrote my own quick python script (using snippets from Nacho's) to take care of my specific situation. Here it is:<br />
</p><div class="org-src-container"><pre class="src src-python"><span style="color: #888a85;">#</span><span style="color: #888a85;">! /usr/bin/env python</span>
<span style="color: #fcaf3e;">from</span> subprocess <span style="color: #fcaf3e;">import</span> run, PIPE
<span style="color: #fce94f;">tags</span> = run([<span style="color: #73d216;">'git'</span>, <span style="color: #73d216;">'show-ref'</span>, <span style="color: #73d216;">'--tags'</span>],
stdout=PIPE).stdout.decode(<span style="color: #73d216;">'utf-8'</span>).splitlines()
<span style="color: #fce94f;">tags_with_comments</span> = {}
<span style="color: #fcaf3e;">for</span> tag <span style="color: #fcaf3e;">in</span> tags:
<span style="color: #fce94f;">tag_hash</span>, <span style="color: #fce94f;">tag_name</span> = tag.split(<span style="color: #73d216;">' '</span>)
<span style="color: #fce94f;">tag_name</span> = tag_name.split(<span style="color: #73d216;">'/'</span>)[-1]
<span style="color: #fce94f;">comment</span> = run([<span style="color: #73d216;">'git'</span>, <span style="color: #73d216;">'--no-pager'</span>, <span style="color: #73d216;">'show'</span>, <span style="color: #73d216;">'-s'</span>,
<span style="color: #73d216;">'--format=%s'</span>, tag_hash],
stdout=PIPE).stdout.decode(<span style="color: #73d216;">'utf-8'</span>).splitlines()[-1]
<span style="color: #fcaf3e;">print</span>(f<span style="color: #73d216;">'{tag_name}: {comment}'</span>)
<span style="color: #fce94f;">tags_with_comments</span>[tag_name] = comment
<span style="color: #fce94f;">commits</span> = run([<span style="color: #73d216;">'git'</span>, <span style="color: #73d216;">'log'</span>, <span style="color: #73d216;">'--oneline'</span>],
stdout=PIPE).stdout.decode(<span style="color: #73d216;">'utf-8'</span>).splitlines()
<span style="color: #fcaf3e;">for</span> tag_name <span style="color: #fcaf3e;">in</span> tags_with_comments:
<span style="color: #fcaf3e;">for</span> c <span style="color: #fcaf3e;">in</span> commits:
<span style="color: #fce94f;">commit_hash</span> = c.split(<span style="color: #73d216;">' '</span>)[0]
<span style="color: #fce94f;">comment</span> = c.split(<span style="color: #73d216;">' '</span>)[1:]
<span style="color: #fce94f;">comment</span> = <span style="color: #73d216;">' '</span>.join(comment)
<span style="color: #fcaf3e;">if</span> comment == tags_with_comments[tag_name]:
run([<span style="color: #73d216;">'git'</span>, <span style="color: #73d216;">'tag'</span>, <span style="color: #73d216;">'--force'</span>, tag_name, commit_hash])
</pre></div><h3 id="orgd54882c">Clean Up and Results</h3><p>Well, with all that done, it was time to see how it all turned out. My original git repo was sitting at about 1.4 GB. This new repo was…3 GB!? Something wasn't right. Here are some steps I took to clean it up after making sure there weren't any old branches or remotes laying around:<br />
</p><div class="org-src-container"><pre class="src src-sh">git clean -fdx
git annex fsck
git fsck
git reflog expire --verbose --expire=0 --all
git gc --prune=0
</pre></div><p>The <code>git clean</code> command showed that I had a weird leftover <code>.git</code> directory in another directory somehow, so I deleted that. I don't think the <code>fsck</code> commands really did anything, but the <code>gc</code> definitely did. Size was now down to 985 MB. Much better. Wait a minute, what if I did a <code>git gc</code> on the original repo? Its size went down to 984 MB. Oh shoot. I guess it makes sense though, if both git and git annex are storing full versions of each binary file they would end up the same size. The real win is the faster git operations, especially clones.<br />
</p><p>A local git clone now happens in the blink of an eye, and its size is only 153 MB. Now, that's a little unfair because it doesn't have any of the binary files. After a <code>git annex get</code> to get the binary files for the current checkout it goes up to 943 MB. Not a huge savings, but it only gets better as time goes on and more edits happen. Right? This was all worth it, wasn't it?!<br />
</p><p>Let me know in the comments if this is helpful, hurtful, or if I did this totally wrong.<br />
</p>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-78654292118941458382020-03-24T20:19:00.000-07:002020-03-24T20:57:59.341-07:00Git Annex is Great<style type="text/css">
<!--/*--><![CDATA[/*><!--*/
.title { text-align: center;
margin-bottom: .2em; }
.subtitle { text-align: center;
font-size: medium;
font-weight: bold;
margin-top:0; }
.todo { font-family: monospace; color: red; }
.done { font-family: monospace; color: green; }
.priority { font-family: monospace; color: orange; }
.tag { background-color: #eee; font-family: monospace;
padding: 2px; font-size: 80%; font-weight: normal; }
.timestamp { color: #bebebe; }
.timestamp-kwd { color: #5f9ea0; }
.org-right { margin-left: auto; margin-right: 0px; text-align: right; }
.org-left { margin-left: 0px; margin-right: auto; text-align: left; }
.org-center { margin-left: auto; margin-right: auto; text-align: center; }
.underline { text-decoration: underline; }
#postamble p, #preamble p { font-size: 90%; margin: .2em; }
p.verse { margin-left: 3%; }
pre {
border: 1px solid #ccc;
box-shadow: 3px 3px 3px #eee;
padding: 8pt;
font-family: monospace;
overflow: auto;
margin: 1.2em;
}
pre.src {
position: relative;
overflow: visible;
padding-top: 1.2em;
}
pre.src:before {
display: none;
position: absolute;
background-color: white;
top: -10px;
right: 10px;
padding: 3px;
border: 1px solid black;
}
pre.src:hover:before { display: inline;}
/* Languages per Org manual */
pre.src-asymptote:before { content: 'Asymptote'; }
pre.src-awk:before { content: 'Awk'; }
pre.src-C:before { content: 'C'; }
/* pre.src-C++ doesn't work in CSS */
pre.src-clojure:before { content: 'Clojure'; }
pre.src-css:before { content: 'CSS'; }
pre.src-D:before { content: 'D'; }
pre.src-ditaa:before { content: 'ditaa'; }
pre.src-dot:before { content: 'Graphviz'; }
pre.src-calc:before { content: 'Emacs Calc'; }
pre.src-emacs-lisp:before { content: 'Emacs Lisp'; }
pre.src-fortran:before { content: 'Fortran'; }
pre.src-gnuplot:before { content: 'gnuplot'; }
pre.src-haskell:before { content: 'Haskell'; }
pre.src-hledger:before { content: 'hledger'; }
pre.src-java:before { content: 'Java'; }
pre.src-js:before { content: 'Javascript'; }
pre.src-latex:before { content: 'LaTeX'; }
pre.src-ledger:before { content: 'Ledger'; }
pre.src-lisp:before { content: 'Lisp'; }
pre.src-lilypond:before { content: 'Lilypond'; }
pre.src-lua:before { content: 'Lua'; }
pre.src-matlab:before { content: 'MATLAB'; }
pre.src-mscgen:before { content: 'Mscgen'; }
pre.src-ocaml:before { content: 'Objective Caml'; }
pre.src-octave:before { content: 'Octave'; }
pre.src-org:before { content: 'Org mode'; }
pre.src-oz:before { content: 'OZ'; }
pre.src-plantuml:before { content: 'Plantuml'; }
pre.src-processing:before { content: 'Processing.js'; }
pre.src-python:before { content: 'Python'; }
pre.src-R:before { content: 'R'; }
pre.src-ruby:before { content: 'Ruby'; }
pre.src-sass:before { content: 'Sass'; }
pre.src-scheme:before { content: 'Scheme'; }
pre.src-screen:before { content: 'Gnu Screen'; }
pre.src-sed:before { content: 'Sed'; }
pre.src-sh:before { content: 'shell'; }
pre.src-sql:before { content: 'SQL'; }
pre.src-sqlite:before { content: 'SQLite'; }
/* additional languages in org.el's org-babel-load-languages alist */
pre.src-forth:before { content: 'Forth'; }
pre.src-io:before { content: 'IO'; }
pre.src-J:before { content: 'J'; }
pre.src-makefile:before { content: 'Makefile'; }
pre.src-maxima:before { content: 'Maxima'; }
pre.src-perl:before { content: 'Perl'; }
pre.src-picolisp:before { content: 'Pico Lisp'; }
pre.src-scala:before { content: 'Scala'; }
pre.src-shell:before { content: 'Shell Script'; }
pre.src-ebnf2ps:before { content: 'ebfn2ps'; }
/* additional language identifiers per "defun org-babel-execute"
in ob-*.el */
pre.src-cpp:before { content: 'C++'; }
pre.src-abc:before { content: 'ABC'; }
pre.src-coq:before { content: 'Coq'; }
pre.src-groovy:before { content: 'Groovy'; }
/* additional language identifiers from org-babel-shell-names in
ob-shell.el: ob-shell is the only babel language using a lambda to put
the execution function name together. */
pre.src-bash:before { content: 'bash'; }
pre.src-csh:before { content: 'csh'; }
pre.src-ash:before { content: 'ash'; }
pre.src-dash:before { content: 'dash'; }
pre.src-ksh:before { content: 'ksh'; }
pre.src-mksh:before { content: 'mksh'; }
pre.src-posh:before { content: 'posh'; }
/* Additional Emacs modes also supported by the LaTeX listings package */
pre.src-ada:before { content: 'Ada'; }
pre.src-asm:before { content: 'Assembler'; }
pre.src-caml:before { content: 'Caml'; }
pre.src-delphi:before { content: 'Delphi'; }
pre.src-html:before { content: 'HTML'; }
pre.src-idl:before { content: 'IDL'; }
pre.src-mercury:before { content: 'Mercury'; }
pre.src-metapost:before { content: 'MetaPost'; }
pre.src-modula-2:before { content: 'Modula-2'; }
pre.src-pascal:before { content: 'Pascal'; }
pre.src-ps:before { content: 'PostScript'; }
pre.src-prolog:before { content: 'Prolog'; }
pre.src-simula:before { content: 'Simula'; }
pre.src-tcl:before { content: 'tcl'; }
pre.src-tex:before { content: 'TeX'; }
pre.src-plain-tex:before { content: 'Plain TeX'; }
pre.src-verilog:before { content: 'Verilog'; }
pre.src-vhdl:before { content: 'VHDL'; }
pre.src-xml:before { content: 'XML'; }
pre.src-nxml:before { content: 'XML'; }
/* add a generic configuration mode; LaTeX export needs an additional
(add-to-list 'org-latex-listings-langs '(conf " ")) in .emacs */
pre.src-conf:before { content: 'Configuration File'; }
table { border-collapse:collapse; }
caption.t-above { caption-side: top; }
caption.t-bottom { caption-side: bottom; }
td, th { vertical-align:top; }
th.org-right { text-align: center; }
th.org-left { text-align: center; }
th.org-center { text-align: center; }
td.org-right { text-align: right; }
td.org-left { text-align: left; }
td.org-center { text-align: center; }
dt { font-weight: bold; }
.footpara { display: inline; }
.footdef { margin-bottom: 1em; }
.figure { padding: 1em; }
.figure p { text-align: center; }
.inlinetask {
padding: 10px;
border: 2px solid gray;
margin: 10px;
background: #ffffcc;
}
#org-div-home-and-up
{ text-align: right; font-size: 70%; white-space: nowrap; }
textarea { overflow-x: auto; }
.linenr { font-size: smaller }
.code-highlighted { background-color: #ffff00; }
.org-info-js_info-navigation { border-style: none; }
#org-info-js_console-label
{ font-size: 10px; font-weight: bold; white-space: nowrap; }
.org-info-js_search-highlight
{ background-color: #ffff00; color: #000000; font-weight: bold; }
.org-svg { width: 90%; }
/*]]>*/-->
</style><br />
<p>I'm developing <a href="https://millenniumsoftwaredesign.com/">the website for my business</a> and I have a mix of code and images in my git repository. Since everyone seems to know that you shouldn't keep large binary files in your git repo, I decided to see what the current solutions to that problem are.<br />
</p><p>After doing a little bit of searching, I narrowed things down to <a href="https://git-lfs.github.com/">git lfs</a> and <a href="https://git-annex.branchable.com/">git annex</a>. Git lfs looks so nice and simple, except I'm not using github. Sure, you can set up your own central git lfs server yourself, but that sounded suddenly not so nice and simple.<br />
</p><p>The website for git annex immediately hits you with all the power and flexibility that it has, and I was turned off by that complexity. I did like that it doesn't require any kind of central server for me to set up, so I didn't reject it outright. After some digging I found out that it does actually support a usage that looks a lot like git lfs, where you can configure it to automatically manage certain sets of files and then you just use git commands like normal. This is very nice. Here's how you set it up in your git repo (hopefully before you have committed any binary files to git, see my <a href="/2020/03/how-to-retroactively-annex-files.html">next blog post about fixing that</a>):<br />
</p><div class="org-src-container"><pre class="src src-sh">git annex init
git annex config --set annex.largefiles <span style="color: #73d216;">'mimeencoding=binary and largerthan=1b'</span>
</pre></div><p>That's it! Now just use git commands like normal and annex will take care of binary files for you. The only tricky part to setting this up was figuring out that empty files, like those <code>__init__.py</code> files that Django creates, were considered binary files. That's why I had to add the <code>and largerthan</code> clause.<br />
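</p><p>One way to double-check what the expression actually caught (the image path here is just an example) is to list what annex is managing and look for the tell-tale symlinks:<br />
</p><div class="org-src-container"><pre class="src src-sh">git annex list                 # annexed files, and which repositories have their content
ls -l path/to/some-image.jpg   # annexed files appear as symlinks into .git/annex/objects
</pre></div><p>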
</p><p>There are a couple of other things you might need to be aware of:<br />
</p><ul class="org-ul"><li>When you clone, none of the binary files will get copied to your clone until you run <code>git annex get</code>, and that will only copy over the files for the current commit. If you checkout an older commit or another branch, you might need to run <code>git annex get</code> again.</li>
<li>If you do start collaborating with others you'll have to make sure that their <code>git annex get</code> command can access the binary files. That's where you have many many options for setting up network communication that can work for your team. I have not delved into the details of that yet.</li>
<li>If you try to delete a clone, you'll discover that the annex files down under <code>.git</code> are read-only. Using <code>sudo</code>, or changing permissions with <code>chmod</code> will fix that.</li>
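</ul><p>For that last one, a minimal sketch (assuming the clone you want to delete is in <code>old-clone/</code>):<br />
</p><div class="org-src-container"><pre class="src src-sh"># annexed objects (and their directories) are made read-only, so add write
# permission back before removing the clone
chmod -R u+w old-clone/.git/annex
rm -rf old-clone
</pre></div><ul class="org-ul">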
</ul><p>Aside from those things I haven't noticed any trickiness with git annex. It seems like a great tool.<br />
</p>Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-74729307826802537862020-01-22T06:01:00.000-08:002020-01-22T06:09:37.891-08:00Millennium Discount CodeI promise I'm not turning this into a spam blog for my business (see the <a href="https://twitter.com/llc_millennium" target="_blank">business twitter account</a> for all my self-promotion), but I want to get the word out about early-adopter discount codes that I have made available. I'll start an MSD specific blog and continue promoting things there as well.<br />
<br />
The discount code is for $99 off, which is a free Self-support subscription. There are only 20 purchases available with this code:<br />
<br />
a5b2758959<br />
<br />
Please give it a try, download and install the product, go through the <a href="https://millenniumsoftwaredesign.com/documentation/how-to-hello-world/" target="_blank">Hello World tutorial</a>, let me know how it all works. Thank you!Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-60797409576289777422020-01-21T11:03:00.000-08:002020-01-21T16:46:06.468-08:00MSD: A Red Hat-like business for Open Source EDA (Verilog, VHDL, etc.)I have been working with commercial EDA tools (Verilog, VHDL, etc.) for years and always found them to be quite overpriced and frustrating. Whenever I bring up the idea of using open source tools I get responses just like I got when first bringing up Linux in the workplace back in 2001. Things like, "you get what you pay for" and "it's only free if you don't value your time." Red Hat (and SUSE/Novell) addressed those concerns for business people and made Linux mainstream (and put UNIX out of business and made a lot of money for themselves in the process). Maybe a similar business could do the same for open source EDA.<br />
<br />
To that end, I have quit my job and I've spent the past few weeks putting this business together. What do you think?<br />
<br />
<a href="https://millenniumsoftwaredesign.com/">https://millenniumsoftwaredesign.com</a> (AKA, <a href="https://msd.llc/">https://msd.llc/</a>)Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com2tag:blogger.com,1999:blog-3669809752172683097.post-83866487936815584052019-10-28T20:10:00.001-07:002019-10-28T20:11:53.346-07:00Playing With PijulI tried <a href="https://pijul.org/">pijul</a> out a while back, I can't remember how long exactly, and it was not ready for prime time. Today I tried it out again and I have to say, it feels pretty polished. I have a couple problems though.<br />
<br />
One is, how do I un-apply patches? What I really want is the git equivalent of doing a <code>git checkout &lt;commit&gt;</code> to go to an older revision (and from there build and test, or even just look around at the files). You could do a series of <code>pijul unrecord</code> and keep answering yes to all the patches you want to remove, and then do a <code>pijul revert</code> but that seems tedious and weird. I don't want to delete the patches, I just want to un-apply them for a bit. Is that possible?<br />
<br />
The other problem I have is similar to this one. pijul promises better cherry picking than git, since it just records patches not snapshots of the repo. However, it has no cherry-pick command. It does have a <code>patch</code> command and an <code>apply</code> command that I've been trying to use to manually cherry-pick, but it's not working. For example, if I make 3 simple changes to a file (add three new lines, one after another), and record a patch for each change, is there a way to cherry pick a patch from the middle? It seems not because no matter what I try it complains that the patch in the middle is missing a dependency. I can't cherry pick just the first and third patches either. I can't even do it manually by creating traditional patches and using patch(1) because pijul doesn't output diffs/patches in normal patch/diff format.<br />
<br />
I feel like maybe I'm missing something here.Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com1tag:blogger.com,1999:blog-3669809752172683097.post-31620322939646207342018-03-29T16:17:00.000-07:002018-03-30T10:49:50.398-07:00Fixing xref-find-referencesI was annoyed that <code>xref-find-references</code> searched more than the files in the TAGS file (it seems to search <em>everything</em> below the top level directory of the TAGS file?) so I went looking for a fix. I found that apparently my emacs knowledge is out of date (the fact that it's xref now and not tags was my first clue). I couldn't find any way to customize <code>xref-find-references</code>. Instead I found people referring to project, ivy, helm, confirm, The Silver Searcher (ag), ripgrep (rg), and dumb jump. I didn't go all the way and get into project or ivy or any of the others, but I did download ag and rg and tried them from a command-line outside of emacs and saw exactly what I was expecting <code>xref-find-references</code> to do. I figured all I needed was to replace xref-find-references with one of those. I got ag.el installed and working before any of the ripgrep packages (there's both rg.el and ripgrep.el) and then struggled to remap M-? to call <code>ag-project</code> instead of xref-find-references. The thing that finally worked was <a href="https://www.gnu.org/software/emacs/manual/html_node/elisp/Remapping-Commands.html#Remapping-Commands">remapping commands</a>. Here's the magic:<br />
<br />
<pre><code>(define-key global-map [remap xref-find-references] 'ag-project)</code></pre><br />
And actually, to work completely like xref-find-references I added one option to the ag command, <code>--word-regexp</code>, like so (oh, I also removed <code>--stats</code> which is there by default in ag.el):<br />
<br />
<pre><code>(setq ag-arguments (list "--word-regexp" "--smart-case"))</code></pre><br />
Much better. Are all those other packages worth digging into? I'm not particularly unhappy with ido.Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com4tag:blogger.com,1999:blog-3669809752172683097.post-73446469843855879852018-03-22T21:40:00.002-07:002018-03-22T21:46:59.976-07:00It Is Time To Replace Passwords With KeysIt is time to stop using passwords. The Troy Hunt article, <a href="https://www.troyhunt.com/passwords-evolved-authentication-guidance-for-the-modern-era/">Passwords Evolved: Authentication Guidance for the Modern Era</a> just got passed around the office again and wow, what a mess we are in. Instead of passwords we should switch to using <a href="https://en.wikipedia.org/wiki/Public-key_cryptography">public-key cryptography</a> like <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security">TLS</a> and <a href="https://en.wikipedia.org/wiki/Secure_Shell">ssh</a> use.<br />
<br />
"But people can barely manage passwords, there's no way they can manage keys!"<br />
<br />
Wrong. People can't barely manage passwords, they can't manage them at all. Read the Troy Hunt article where he says, "Embrace Password Managers." We've given up on humans managing passwords and we are now relying on software to do it. We use software that securely stores our passwords and synchronizes them over the internet so that we have those passwords on all our devices. Often the software generates the passwords for us too. Guess what we could do with keys? The exact same thing.<br />
<br />
Now read the Troy Hunt article again and this time think about how it's not just end-users that can't manage passwords, but how developers also cannot manage passwords. There are UI problems, there are concerns with code injection attacks, there are problems with hashing, salting, and storing passwords and with protecting those stored passwords from thieves. Those problems all go away if developers only have to store public keys. No hashing, no salting, no secrets. Think about it. As a user the only thing you'll give to a website to authenticate is your <em>public</em> key. You don't have to trust them with any secrets at all. From the point of view of a developer, you don't have to worry about keeping your customers secrets safe anymore. What a relief! As for UI, if we do it right websites and apps don't need any UI at all for authentication. What is the UI for TLS? The lock symbol that the browser displays. That's it! Websites you visit authenticate themselves to you using strong public-key cryptography behind the scenes, under the covers. It could be the same with user authentication.<br />
<br />
Now read the article one more time and think about how not only are users and developers unable to deal with passwords, but security experts can't either. They can't agree on what makes a good secure password, what format it should be in. Special characters? Random strings of characters? Long passphrases of regular words? What's easier to create? Easier to type? Easier to memorize? Should a website show the password as users type it or not? How often should we change passwords? If we are giving up on memorizing and using password managers, does any of that matter? Maybe?<br />
<br />
Now think about public-key cryptography. Security experts generally all agree on what makes good public-key cryptography, what format the keys should be in and what length. That was all hashed out years ago. True, there might be disagreement on when stronger keys should be used, whether to use RSA or ECC, and if ECC which curves to use and so on, but regular users relying on key manager software don't have to be involved in those discussions. They don't have to worry anymore about whether they should use a special character or how long of a password to use or if passwords made up of song lyrics are a good idea or not. The discussions on key size or which ECC curve to use raises the debate way beyond trying to figure out what is user friendly but still defeats rainbow tables and script kiddies. It takes the debate up to the level of wondering which nation state might attack you. If we eliminate human involvement in coming up with authentication tokens and remove script kiddies from the attack surface altogether that's a *huge* improvement over passwords.<br />
<br />
"OK, but if we use public-key cryptography we also need to use full <a href="http://https://en.wikipedia.org/wiki/Public_key_infrastructure">PKI</a> like TLS."<br />
<br />
Do we? PKI provides identity confirmation and key revocation. Do we have identity confirmation for account passwords today? When I create an account with, say, Amazon do they verify that I really am who I say I am? Nope. Not at all. They don't care one bit about who I really am. They just want my money. What about key revocation? Do we have password revocation today? Again, other than manually logging into a website and changing your password, we don't, and we can easily duplicate that manual process with keys in order to get us started. If we don't have full PKI with our public-key authentication we are no worse off than we are today.<br />
<br />
The great thing about switching to public-key cryptography is that someday we could add in some sort of Let's Encrypt-like easy-to-use PKI if we want, which would take us light-years beyond where we are with passwords today. We aren't going to get there though if we don't take the first step of ditching passwords.<br />
<br />
Getting rid of password authentication and using public-key cryptography instead will make user authentication easier for users, developers, and security experts, and it will make us all more secure.<br />
Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-14521451550328168062017-09-15T08:04:00.000-07:002017-09-15T08:04:33.995-07:00Not Leaky, Just Wrong<a href="https://builders.intel.com/blog/fpga-in-the-data-center-programming-for-all/">Intel recently announced</a> new tools for FPGA design. I should probably try to understand OpenCL better before bagging on it, but when I read, "[OpenCL] allows users to abstract away hardware-specific development and use a higher-level software development flow." I cringe. I don't think that's how we get to a productive, higher-level of abstraction in FPGA design. When you look at the progress of software from low-level detailed design to high-level abstract design you see assembly to C to Java to Python (to pick one line of progression among many). The thing that happened every time a new higher-level language gained traction is people recognized patterns that developers were using over and over in one language and made language features in a new language that made those patterns one-liners to implement.<br />
<br />
Consider some examples of design patterns turning into language features. In assembly, people developed the pattern of function calls: push arguments onto the stack, save the program counter, jump to the code that implements the function; the function code pops arguments off the stack, does its thing, then jumps back to the code that called it. In C the tedium of all that was abstracted away by the language providing you with syntax to define a function, pass it arguments, and just call return at the end. In C people then started developing patterns of structs containing data and function pointers for operating on that data, which turned into classes and objects in Java. Java also abstracted away memory management with a garbage collector. Patterns in Java (Visitor, State, etc.) are no longer needed in Python because of features in that language (<a href="http://bryan-murdock.blogspot.com/2017/02/systemverilog-and-python.html">related discussion here</a>).<br />
<br />
This is the path that makes most sense to me for logic design as well. Right now in RTL Verilog people use patterns like registers (always block that activates on posedge clk, has reset, inputs, outputs, etc.), state machines (case statement and state registers, next_state logic...), interfaces (SV actually attempted to add syntax for this), and so on. It seems like the next step in raising the abstraction level is to have a language with those sorts of constructs built-in. Then let people use that for a while and see what new patterns develop and encapsulate those patterns in new language features. Maybe OpenCL does this? I kind of doubt it if it's a "software development flow." It's probably still abstracting away CPU instructions.<br />
<br />
Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com3tag:blogger.com,1999:blog-3669809752172683097.post-53526313603646452732017-05-24T03:58:00.000-07:002017-05-24T03:58:59.760-07:00Facebook Should Split In TwoFacebook has done wonders to get people creating and consuming content on the internet. However, Facebook has grown to the point where it has no competition and is no longer innovating in ways that benefit us. Facebook should split into Facebook the aggregator and Facebook the content hoster. You could talk about a third piece that is Facebook the content provider, which is for providing things like gifs, templates, memes, emoji, games, and other stuff like that. Because Facebook hasn't completely broken from open web standards those types of content providers already exist today.<br />
<br />
Aggregators would be where you go to set up your friend list and see your feed. It could look and feel like Facebook does now. It would have an open standard protocol that content hosters would use if they wanted to be aggregated. This could still be an ad-driven business, but subscription, self-hosted, and DIY solutions could exist too.<br />
<br />
Content hosters could either charge a monthly hosting fee, or they could serve up their own ads. Self-hosted and DIY solutions could also exist.<br />
<br />
The big benefit to this would of course be the competition. Since it's an open standard anyone could be a content host, and anyone could be an aggregator.<br />
<br />
To make extra sure there is competition, and this could come in a phase two after the initial splitting up of Facebook, there should be open standards for exporting and importing friends, follows, likes, etc. to and from aggregators, and open standards for importing and exporting content from the hosters.<br />
<br />
Speaking of follows and likes, there could also be aggregator aggregators (AAs). People could opt in to publicly and anonymously share their likes and follows and the AAs would consume those and report on trends that cross aggregator boundaries. Anonymity could be much more protected this way while still giving us that interesting information about what is trending. <br />
<br />
One tricky part of this is: how do I, as a content author, only allow my friends to see certain posts of mine? It would have to be with encryption. My content hoster could keep public keys of my friends, and only my friends (well, their aggregators) would be able to decrypt my posts using their private keys. I can see some challenges and holes in this, but it doesn't seem any worse overall than how Facebook protects privacy now. Open implementations and peer review could get us to better-than-Facebook privacy quickly.<br />
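<br />
To show the shape of that idea, here is a small Python sketch: encrypt the post once with a throwaway symmetric key, then wrap that key with each friend's public key. The library choice, the names, and the use of RSA for key wrapping are all assumptions for illustration, not a proposal for the actual protocol:<br />
<br />
<pre><code># Hypothetical friend-only posts: hybrid encryption with per-friend key wrapping.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Friends' private keys live with their own aggregators; the content hoster
# would only ever see the public halves.
friend_keys = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
               for name in ("alice", "bob")}
friend_publics = {name: key.public_key() for name, key in friend_keys.items()}

def publish(post, publics):
    """Encrypt a post once, and wrap the post key for each friend."""
    post_key = Fernet.generate_key()
    ciphertext = Fernet(post_key).encrypt(post)
    wrapped = {name: pub.encrypt(post_key, oaep) for name, pub in publics.items()}
    return ciphertext, wrapped

def read(ciphertext, wrapped_key, friend_private):
    """A friend (or their aggregator) unwraps the post key and decrypts."""
    post_key = friend_private.decrypt(wrapped_key, oaep)
    return Fernet(post_key).decrypt(ciphertext)

ciphertext, wrapped = publish(b"friends-only status update", friend_publics)
print(read(ciphertext, wrapped["alice"], friend_keys["alice"]))</code></pre>
<br />
Challenges like key distribution and revoking access for an ex-friend are real, but they are engineering problems rather than reasons the model can't work.<br />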
<br />
Facebook would ideally recognize their stagnation and initiate this split themselves. We as their user base can and should help them understand the importance of this. Hopefully it doesn't have to come down to government enforcement of anti-trust laws, but that could be a useful tool to apply here as well.Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com1tag:blogger.com,1999:blog-3669809752172683097.post-15672110782544497752017-03-13T12:35:00.001-07:002017-04-18T09:49:52.218-07:00Quick Thoughts on Creating Coding Standards<h2>
Introduction</h2>
No team says, "write your code however the heck you want." Unless you are coding alone, it generally helps to have an agreed-upon coding standard. Agreeing upon a coding standard, however, can be a painful process full of heated arguments and hurt feelings. This morning I thought it might be useful to first categorize coding standard items before starting the arguments. My hope is that once we categorize coding standard items we can use better decision criteria for each category of items and cut down on arguing. Below are the categories I quickly came up with, along with descriptions, examples, and decision criteria for each. Feedback is welcome in the comments.<br />
<br />
<h2 id="sec-2">
Categories of Things in Coding Standards</h2>
<h3 id="sec-2-1">
Language Specific Pitfalls</h3>
<h4 id="sec-2-1-1">
Characteristics</h4>
<ul class="org-ul">
<li>not subjective, easy to recognize pattern<br />
</li>
<li>well recognized in the industry as dangerous<br />
</li>
<li>people have war stories about these, with associated scars to prove it<br />
</li>
</ul>
<h4 id="sec-2-1-2">
Examples</h4>
<ul class="org-ul">
<li>no multiple declarations on one line in C<br />
</li>
<li>Cliff Cummings' rules for blocking vs. non-blocking assignments in Verilog<br />
</li>
<li>no willy nilly gotos in C<br />
</li>
<li>no omitting braces for one liner blocks (or begin-end in Verilog)<br />
</li>
<li>no compiler warnings allowed<br />
</li>
</ul>
<h4 id="sec-2-1-3">
How to resolve disputes on which of these should be in The Coding Standard?</h4>
Defer to the engineers with the best war stories. If nobody has a war story for one, you can probably omit it (or can you?).<br />
<br />
<h3 id="sec-2-2">
General Readability/Maintainability</h3>
"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." –Martin Fowler<br />
<br />
<h4 id="sec-2-2-1">
Characteristics</h4>
<ul class="org-ul">
<li>things that help humans quickly read, understand, and safely modify code<br />
</li>
<li>usually not language specific<br />
</li>
<li>the path from these items to bugs is probably not as clear as with the above items, but a path does exist<br />
</li>
</ul>
<h4 id="sec-2-2-2">
Examples</h4>
<ul class="org-ul">
<li>no magic numbers<br />
</li>
<li>no single letter variable names<br />
</li>
<li>keep functions short<br />
</li>
<li>indicators in names (_t for typedefs, p for pointers, etc.)<br />
</li>
</ul>
<h4 id="sec-2-2-3">
How to resolve disputes on which of these should be in The Coding Standard?</h4>
If someone says, "this really helps me" then the team should suck it up and do it. This is essentially the "put the slowest hiker at the front of the group" principle.<br />
<br />
<br />
Alternatively these can be discussed on a case by case basis during code reviews instead of being codified in The Coding Standard. Be prepared for more "lively" code reviews if you go this route.<br />
<br />
<h3 id="sec-2-3">
Code Formatting</h3>
<div class="outline-text-3" id="text-2-3">
The biggest wars often erupt over these because they are so subjective. This doesn't have to be the case.<br />
<br />
<h4 id="sec-2-3-1">
Characteristics</h4>
<ul class="org-ul">
<li>these probably aren't really preventing any bugs<br />
</li>
<li>most can easily be automatically corrected<br />
</li>
<li>are largely a matter of taste<br />
</li>
<li>only important for consistency (which is important!)<br />
</li>
</ul>
<h4 id="sec-2-3-2">
Examples</h4>
<div class="outline-text-4" id="text-2-3-2">
<ul class="org-ul">
<li>amount of indent<br />
</li>
<li>brace style<br />
</li>
<li>camelCase vs. underscore_names
</li>
<li>80 column rule<br />
</li>
<li>dare I even mention it? tabs vs. spaces<br />
</li>
</ul>
<h4 id="sec-2-3-3">
How to resolve disputes on which of these should be in The Coding Standard?</h4>
Don't spend a long time arguing about these. Because they are so subjective and not likely to cause or reduce bugs one way or the other, nobody should get bent out of shape if their preference is not chosen by the team. Give everyone two minutes to make their case for their favorite, have a vote, majority wins, end of discussion. Use an existing tool (astyle, autopep8, an emacs mode, whatever is available for the language) to help people follow these rules.</div>
</div>
Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com2tag:blogger.com,1999:blog-3669809752172683097.post-31590610103668405282017-02-07T11:42:00.000-08:002017-02-07T11:53:52.262-08:00SystemVerilog and PythonDesign patterns in programming are when engineers find themselves writing the same code over and over to solve the same problems. Design patterns for statically typed object oriented languages (C++ and Java) were cataloged and enshrined in the famous book, "<span class="wikiexternallink"><a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns: Elements of Reusable Object-Oriented Software</a></span>" by Erich Gamma, John Vlissides, Ralph Johnson, and Richard Helm. The authors are lovingly called, <a href="http://c2.com/cgi/wiki?GangOfFour">The Gang of Four</a>, or the GOF and the book is often referred to as the GOF book.<br />
<br />
The subset of SystemVerilog used in writing testbenches is a statically typed object oriented language (it's most similar to Java). As people started using SystemVerilog to write testbenches, frameworks for writing testbenches quickly became popular. These frameworks all provide code that implements design patterns from the GOF book. The various frameworks were similar because they were all essentially implementing the same design patterns. Eventually the various frameworks all coalesced into one, the humbly named, Universal Verification Methodology, or UVM.<br />
<br />
Below is a table that matches up GOF design patterns with their UVM implementation. This was adapted from <a href="http://www.ensilica.com/wordpress_live/wp-content/uploads/DesignPatterns-DVCON.pdf">this presentation</a>:<br />
<br />
<table><tbody>
<tr><th scope="col">GOF Pattern Name </th><th scope="col">UVM name </th></tr>
<tr><td>Factory Method, Abstract Factory </td><td>uvm_factory, inheriting from uvm base classes </td></tr>
<tr><td>Singleton </td><td>UVM Pool, UVM Global report server, etc. </td></tr>
<tr><td>Composite </td><td>UVM Component Hierarchy, UVM Sequence Hierarchy </td></tr>
<tr><td>Facade </td><td>TLM Ports, UVM scoreboards </td></tr>
<tr><td>Adapter </td><td>UVM Reg Adapter </td></tr>
<tr><td>Bridge </td><td>UVM sequencer, UVM driver </td></tr>
<tr><td>Observer </td><td>UVM Subscriber, UVM Monitor, UVM Coverage </td></tr>
<tr><td>Template Method </td><td>UVM Transaction (do_copy, do_compare), UVM Phase </td></tr>
<tr><td>Command </td><td>UVM Sequence Item </td></tr>
<tr><td>Strategy </td><td>UVM Sequence, UVM Driver </td></tr>
<tr><td>Mediator </td><td>UVM Virtual Sequencer </td></tr>
</tbody></table>
<br />
If we switched from SystemVerilog to Python for writing our testbenches, would we need to re-implement each of those parts of the UVM? Python is not a statically typed object oriented language like Java and SystemVerilog. It is a dynamically typed language. Prominent and well-respected computer scientist Peter Norvig <a href="http://www.norvig.com/design-patterns/design-patterns.pdf">explored this topic for us already</a>. He did this when Python was still a very young language, so he examined other dynamic languages instead (Dylan and Lisp) and he concluded that of the 23 design patterns from the GOF book, 16 of them are either invisible or greatly simplified due to the nature of dynamic languages and their built-in features. As an example to explain how this could be, he points out that defining a function and calling it used to be design patterns. Higher-level languages came along and made the pattern of defining and calling a function a part of the language.<br />
<br />
This is essentially what has happened with dynamic languages. Many design patterns from GOF are now simply part of the language. According to Dr. Norvig, the patterns that dynamic languages obsolete are:<br />
<br />
<ul>
<li>Abstract-Factory</li>
<li>Flyweight</li>
<li>Factory-Method</li>
<li>State</li>
<li>Proxy</li>
<li>Chain-Of-Responsibility</li>
<li>Command</li>
<li>Strategy</li>
<li>Template-Method</li>
<li>Visitor</li>
<li>Interpreter</li>
<li>Iterator</li>
<li>Mediator</li>
<li>Observer</li>
<li>Builder</li>
<li>Facade</li>
</ul>
<br />
That reduces the above table to:<br />
<br />
<table><tbody>
<tr><th scope="col">GOF Pattern Name </th><th scope="col">UVM name </th></tr>
<tr><td>Singleton </td><td>UVM Pool, UVM Global report server, etc. </td></tr>
<tr><td>Composite </td><td>UVM Component Hierarchy, UVM Sequence Hierarchy </td></tr>
<tr><td>Adapter </td><td>UVM Reg Adapter </td></tr>
<tr><td>Bridge </td><td>UVM sequencer, UVM driver </td></tr>
</tbody></table>
<br />
Trusting that analysis, if we were to write a pure Python testbench we would still likely implement a few design patterns. Thinking about this, it makes sense that we'd still probably have classes dedicated to transforming high-level sequence items to pin wiggles, just like the sequencer and driver work together to do in the UVM. It also makes sense that we'd have a class hierarchy to organize and relate components (such as the sequencer and driver equivalents) and sequences (high-level stimulus generation). Things like that.<br />
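<br />
As a toy illustration of why so many of those patterns evaporate, here is a hypothetical (and decidedly not UVM) Python testbench fragment. Because classes and functions are first-class values in Python, Factory Method and Strategy reduce to simply passing a class or a function as an argument; all of the names below are made up for the example:<br />
<br />
<pre><code># Hypothetical Python testbench fragment: Factory Method and Strategy
# collapse into ordinary arguments because classes and functions are values.
import random

class AxiDriver:
    def send(self, item):
        print(f"wiggling AXI pins for {item}")

class ApbDriver:
    def send(self, item):
        print(f"wiggling APB pins for {item}")

def random_data(size):
    return [random.randint(0, 255) for _ in range(size)]

def all_zeros(size):
    return [0] * size

class Agent:
    # driver_cls plays the role of a factory: just pass the class itself.
    # gen_strategy plays the role of a strategy: just pass a function.
    def __init__(self, driver_cls, gen_strategy):
        self.driver = driver_cls()
        self.generate = gen_strategy

    def run(self, count):
        for _ in range(count):
            self.driver.send(self.generate(4))

# Swapping the driver or the stimulus is one argument change; no factory
# registry, no strategy class hierarchy, no extra framework code.
Agent(AxiDriver, random_data).run(2)
Agent(ApbDriver, all_zeros).run(1)</code></pre>
<br />
That is the kind of machinery the UVM has to supply as library code (uvm_factory, the sequence and driver plumbing), and that a dynamic language gives you nearly for free.<br />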
<br />
The more striking thing is the amount of code we would <b>not</b> need.Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com7tag:blogger.com,1999:blog-3669809752172683097.post-22061073522624724892016-12-31T09:23:00.000-08:002016-12-31T09:23:42.712-08:00Adventures in Arch LinuxI just installed <a href="https://www.archlinux.org">Arch Linux</a> on my 4th machine. It has been fun and painful. Painfully fun? I have learned a lot and that is always fun. There.<br />
<br />
I have loved using Ubuntu over the last several (eight, I think) years. Ubuntu is easy to install and most things Just Work. It's easy to find packages for most software I want to run, and there is lots of help on the internet for accomplishing whatever you want to accomplish with Ubuntu. My frustrations have been that even though you can find instructions for getting whatever software you want to run, it's not always a simple apt-get install. Sometimes it's configuring a PPA source and sometimes it's compiling from source. Sometimes a PPA works and serves you well for years and then suddenly it disappears. Another frustration is out of date packages, and full system upgrades in general. Keeping up with the latest emacs was a chore. Going from one release of Ubuntu to another works surprisingly well, but it's a big enough chore that I keep putting it off. One of my desktop machines at home was still running 12.04 up until yesterday. That's 5 and a half years old now!<br />
<br />
These concerns led me to Arch. It seems to be addressing them beautifully. Every package I have wanted is either in the main repositories and I can install it with a simple pacman command, or it's in the AUR and I can install it with a simple yaourt command. There are no releases of Arch, they just continually update the packages that are in the repositories. Staying up to date is always the same pacman command to upgrade your installed packages. There are times where you have to take some manual steps to fix an interaction between two packages, or to switch from a package that has been made obsolete by another newer package, but that's fairly rare, well documented, and you just deal with it a little at a time when the situations come up. With Ubuntu dist-upgrades you had to deal with many of those scenarios all at once, every 6 months if you were keeping fully up to date. With Arch, keeping up with the latest emacs happens without me even realizing it.<br />
<br />
Where Arch is not as nice as Ubuntu is in the installation. With Arch it's all manual. What you should do is pretty <a href="https://wiki.archlinux.org/index.php/Beginners'_guide">well</a> <a href="https://wiki.archlinux.org/index.php/Installation_guide">documented</a>, but you have to type all the commands yourself and make decisions about alternative ways of doing various things. It's a fun learning experience as I mentioned at the beginning of this post, but not a process that I really enjoyed repeating over and over. This is really where Ubuntu made its name. The nice package system and repositories were straight from Debian originally, but the installer and default configurations are what made Ubuntu so accessible. There was a joke in the early Ubuntu days that Ubuntu was an African word meaning, "<a href="http://web.archive.org/web/20080208114333/http://diveintomark.org/archives/2006/06/26/essentials-2006">can't install Debian.</a>"<br />
<br />
It turns out that there's a distribution with a name meaning, "<a href="https://antergos.com/wiki/miscellaneous/frequently-asked-questions/">can't install Arch</a>." It's <a href="https://antergos.com/">Antergos</a>. It really is just a graphical installer on top of Arch. Once it's done you are running Arch with some Antergos-chosen configuration, which is exactly what I wanted. It does feel like it's still early days for this project. On one laptop I tried Antergos on, it didn't have the wifi drivers I needed. I had to go back to plain Arch and figure out how to load the driver by hand in order to complete the installation (that should be a blog post of its own). On another machine, once the Antergos install was done, the display manager would crash with a webkitwebprocess coredump. The Antergos forums told me how to switch to lxdm and that fixed my problem (probably should be another blog post). I don't think a linux beginner would have enjoyed that process, but overall Antergos looks promising. Mostly I'm looking forward to never needing to do a fresh install on any of those machines ever again.<br />
<br />
Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com2tag:blogger.com,1999:blog-3669809752172683097.post-46158388879785245762016-07-19T09:44:00.000-07:002016-07-19T09:44:19.486-07:00Another SystemVerilog Streaming Example: Size MismatchI had a packed struct whose size was not evenly divisible by 8 (it was one bit short, in fact) and I had an array of bytes that I needed to stream into it. The extra bits in the array of bytes were not relevant, so I tried just doing this:<br />
<br />
<pre><code>my_struct = {>>byte{my_array_of_bytes}};</code></pre><br />
But my simulator complained that my_array_of_bytes was bigger than the destination (my_struct). It took me longer to figure out than I'd like to admit that I just needed to do this:<br />
<br />
<pre><code>bit extra_bit;
{my_struct, extra_bit} = {>>byte{my_array_of_bytes}};</code></pre><br />
That did the trick.Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com0tag:blogger.com,1999:blog-3669809752172683097.post-78724081819000880552016-04-15T09:25:00.000-07:002016-04-15T09:25:22.741-07:00Get xpra to work on Ubuntu 14.04<a href="https://xpra.org/">Xpra </a>is like screen or tmux for X apps. There is a commercial app called Exceed on Demand and xpra seems to work very similarly to that. Xpra is a very nice alternative to VNC and performs a lot better than forwarding X over ssh. Here's how I got it to work using Ubuntu 14.04 as a server and windoze (is that joke getting old yet?) as a client. Xpra says you can run Mac and Linux clients as well, but I haven't tried that yet.<br />
<br />
To get it installed and running, dig down from the main Xpra site to the trusty download area, or just <a href="https://xpra.org/dists/trusty/main/binary-amd64/">click here for 64-bit</a>. Download the highest version numbered python-rencode and xpra packages there, then do this on your command line:<br />
<br />
<pre>sudo dpkg -i ~/Downloads/python-rencode_1.0.3-1_amd64.deb
sudo apt-get install python-numpy python-opengl python-lzo python-appindicator python-gtkglext1 xvfb
sudo dpkg -i ~/Downloads/xpra_0.15.10-1_amd64.deb</pre>
<br />
When you try to install either .deb package it might report other dependencies that are missing. Just sudo apt-get install those and then try the sudo dpkg -i command again. After it's all installed you can run an xpra server like so (the options were suggested to me by xpra the first time I tried to run it):<br />
<br />
<pre>xpra start :1234 --start-child=xterm --no-notifications --no-pulseaudio</pre>
<br />
On the windows side, download the xpra installer by clicking the link on the main xpra page. After running that it will offer to run xpra. Go ahead and do that. Choose ssh in the Mode dropdown. Leave all the other fields as they are and enter your ssh login information (what you would use to ssh to the ubuntu machine you just started the server on) and the display number (we used 1234 above when we started the server). You can leave the password field blank, it will prompt you for your ssh password after you click connect. Once you do that an xterm will open up on your windows desktop and you can start any other linux apps you want from there. There will be an xpra tray icon you can use to change settings and disconnect. After you disconnect you can reconnect and all the windows you had open will come right back just like they were when you disconnected. It also saves your state if you are disconnected from the network unexpectedly (like maybe your laptop goes to sleep or something). It's very nice. <br />
<br />
One other thing I noticed is that the apps xpra was showing me were a little fuzzy (the text in emacs was hard to read). I had to click on the xpra tray icon and change the desktop scaling option (it was making the windows larger for some reason). You can also edit the C:\Program Files (x86)\Xpra\xpra.conf file and change the desktop scaling option there (along with many other settings, for example, I turned off sound and microphone because I don't need that and I figured it might save some CPU and bandwidth).<br />
<br />
I'm glad I found xpra and got it working. It works so well, I'm really surprised I haven't heard more people talking about it. Go try it out!Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com2tag:blogger.com,1999:blog-3669809752172683097.post-24622356801896789062016-03-03T17:39:00.003-08:002016-03-03T17:39:49.646-08:00Best Part of Distributed Version Control<br />
I switched jobs recently and I am now using git on a day-to-day basis. My previous jobs had used either subversion (boo) or mercurial (which I really liked). Transitioning to git has been relatively easy. I've created several aliases to do things I used to do in mercurial (well, as close as I can get for some of them) and to make certain common git operations one command instead of command --option --option argument [argument], and it's not too bad. Once I learned how to "bring back" "lost" commits (aka move branch pointers around with git reset) I lost my fear of losing work. I do still have some fear when I interact with our "central" git repo, because it's not always clear to me what exactly git push is going to do to the remote repo, but it's becoming more clear as I do it more and more.<br />
<br />
In all my googling to learn how to do the things I want with git I came across "<a href="https://bitquabit.com/post/unorthodocs-abandon-your-dvcs-and-return-to-sanity/">Unorthodocs: Abandon your DVCS and Return to Sanity</a>." I have to agree with some of what Benjamin says there. For me, sane branching and merging was the number one reason I was first attracted to distributed version control and Benjamin is right, good branching and merging could be provided by a centralized tool. In fact, most people seem to be using decentralized tools just like they used their centralized tools in the past (see: github, gitlab, bitbucket, even hgweb).<br />
<br />
I have found, however, that the longer I've used mercurial (and now git), the more I realize that the thing I love most about them is local commits. I'm pretty sure that local commits are really the thing people want when they talk about needing good branching and merging. 99% of the time, people just want a way to commit their work but not inflict it on the rest of the team. Then they would like to do some testing, commit and checkpoint their work some more, and repeat that until they are sure it's ready to share. With old centralized tools the only way to do that is with branches and merges (it's actually the only way with DVCS tools too, but they have the ability to mostly hide that from you).<br />
<br />
The longer we used mercurial at my last job, the less and less we used branches. The workflow was basically, do some work, commit it, post the changes to review board for review, and then once you have tested and had your code reviewed, rebase it onto the main branch (after folding all the work-in-progress intermediate commits together) and push. The history in our main repo was one straight line. Easy to look at and find the changes in the history you cared about.<br />
<br />
The more advanced workflow might have involved downloading a patch from reviewboard and importing it as a local commit to test it out in your local clone, or sending a patch directly to someone else for them to import as a local commit in their local clone to test. In either case you could then push that new commit (imported from the patch) or strip it if you didn't like it. You could also make modifications, amend the commit with those modification, etc., etc.<br />
<br />
The cognitive load of that workflow was so small and nothing you did in the experimental development stage could affect anyone else. Your own work was safe, your co-workers work was safe, yet you could share work with each other very easily too. The commands you had to know were literally:<br />
<br />
<pre>hg log            # -G was sometimes nice
hg commit         # maybe with --amend
hg incoming       # to preview a pull
hg pull --rebase
rbt post          # code review
hg outgoing       # to preview a push
hg push</pre>
<br />
<b>That's it!</b> Advanced commands were:<br />
<br />
<pre>hg update         # to jump to another revision
hg export > patch-name
hg import patch-name
hg strip</pre>
<br />
Notice the lack of HEAD^ and reset --hard and checkout -b --track. Man, those were the days. Despite the more obtuse commands, you can use that workflow with git too, and I'll probably learn how because right now everything we do is create a branch (which includes inventing a name for it), push to the central server, pull (or should I fetch?) from the central server, and merge on top of merge on top of merge. It's a lot more to think about and keep straight in your mind, even without git's complex and unintuitive commands.<br />
<br />
Having those local commits, commits that are essentially in a draft state (not intended to be inflicted on the whole team), is the real killer feature of distributed version control tools. Yes, you can have that draft state even in a centralized tool by committing to branches, but the amazing thing about DVCSs is you don't <b>need</b> to use an explicit branch. You just commit, right on to trunk/master/default (whatever you call it), and it's local. A draft. A work-in-progress. That's the default mode of operation. And isn't that how it should be? The default, no-effort, no-cognitive-load mode of operation should be: create a private, draft commit. When you are ready to put that commit into production, then a little cognitive load is OK.<br />
<br />
When you use git and the Very Branchy development model, you keep much of the cognitive load of centralized systems and using branches to maintain your work in progress. The trick with DVCS tools is that you don't have to think about branches at all. Just commit. A simple pull --rebase is all it takes to integrate your changes with others, still privately, still preserving your original commit in case you need to go back. Do the simplest thing that could possibly work. I think I've heard that somewhere before.Bryanhttp://www.blogger.com/profile/11394436715172971234noreply@blogger.com3