This is a blargg

There are many blarggs like it, but this one is mine...

Seriously though, this is where I put random bits of information. I'm way too lazy to set up a real blog somewhere so this will have to do for now. It is held together by RCS and probably invalid XHTML 1.0. I hope you can enjoy it anyway. It serves as a reminder for myself, because I tend to forget things more quickly than people my age should, but what can you do...

Normally, when people start a blog, they start introducing themselves. I won't do that, because somebody who lands on this page probably already knows me. Those who don't, rest assured that there's not much to say anyway.

Where normal blogs have a comment form, I'd rather take E-Mail, because it already exists, works, and doesn't require yet another signup. Please say so if you would like me to include your real name or mail address with the comment; otherwise I'll add it anonymously. Information on how to contact me can be found here.


UFS file system snapshots on NetBSD

2012-01-02 19:09

NetBSD has ZFS in -current, but it's a little-known fact that it's also possible to take snapshots of UFS volumes (well, as long as they have a UFS2 superblock). This feature has been present since NetBSD 2, but unfortunately doesn't seem quite stable yet. I currently have three kernel crashdumps lying around which I intend to debug, so beware and back up any data before playing around with it. Taking a snapshot works like this (let's say we store all our snapshots in /var/snap):

# fssconfig -c fss0 / /var/snap/`date +%F-%R`

Afterwards, the snapshot has been created; it can be shown with fssconfig -vl and unconfigured again with fssconfig -u fss0. Those are the examples also listed in the manpage, but of course you can also mount the snapshot (read-only, naturally): mount /dev/fss0 /mnt and poke around in it.

What also wasn't clear (to me) from the manpage alone is that you can open old snapshots the same way you create new ones, e.g. with fssconfig -c fss1 / /var/snap/`date -d yesterday +%F-%R`.

This might come in handy before updates... But I haven't found out how to replay a snapshot back to the filesystem, in case the update fails. Maybe with a kludge using rsync, but I'd prefer a native method.

Update (2012-02-11): the native method one's supposed to use is apparently dump/restore(8). I wonder whether it works in-place (i.e. piping dump into restore in order to put it back onto the main filesystem) or whether I need to temporarily dump it to another location.
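For reference, the classic dump-piped-into-restore form would look roughly like this; it restores into a separate target rather than in-place, and both the raw snapshot device name and the target path are illustrative here, not something I have tested:

# dump -0 -f - /dev/rfss0 | ( cd /mnt/restoretarget && restore -rf - )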

Update (2012-05-22): The snapshot experiment made the filesystem die. Due to this, the machine is pending reinstallation. Actually, the bug I hit is rather funny, because executing urxvt made the machine spontaneously reboot without any prior error message. Without urxvt, the machine seems stable-ish (and reboots much less frequently). Of course, this could also mean hardware failure, but let's not assume the worst right from the start. Unfortunately, I currently don't have enough time left for NetBSD experiments, so the debugging part is deferred (possibly until forever, or at least until it's no longer relevant).

Scratchpad

2012-01-03 15:37

I'll note some things in this "post", which may not be of general interest, but worth knowing and likely to be forgotten unless written down somewhere. This is work in progress and will be expanded whenever I think of something.

Why RCS is still awesome

2012-01-03 19:02

There's about a bazillion blog posts about how to use git, but none about why RCS is still awesome.

I don't really know why, but I'm a bit obsessed with RCS. Some people call me backwards, but one can get used to that, and it should be noted that technology isn't bad just because it's old. For me, RCS is an example of something from the stone age that is still useful today and has no real counterpart among more "modern" alternatives.

As already mentioned in the description at the beginning of this document, I use RCS to manage changes to this page. You can download the complete history with all changes here. Feel free to check it out.

But first things first. RCS can only manage single files, while pretty much everything else (except for SCCS, which I never used) manages multiple files in a repository/directory and subdirectories. While this sounds like a horrible drawback from the dawn of time when dinosaurs walked the earth, it can come in handy when the files in the directory where you want to add revision control are completely unrelated, the most prominent example being /etc on a UNIX system. With git, one would have a bunch of unrelated commits cluttering the timeline, while with RCS it all stays clean and single-file, the way I consider optimal for this particular use case.

Other than that, RCS has pretty much everything you want. It even supports branches. Branches are a feature I rarely need for config files, but they can come in handy, for instance when merging changes from the official upstream configuration. I use this for instance on my exim4 configuration. My changes go to the trunk (the main branch), upstream changes go to a branch named OFFICIAL_CONFIG, which starts at the beginning of the timeline (for people who know a bit about RCS or CVS, revision number 1.1, which makes the branch number 1.1.1). Whenever upstream releases a new version of the config file, it is committed into this branch, and the changes are merged into the top of the trunk.
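Spelled out in commands, the dance looks roughly like this (revision numbers and the path of the new upstream file are purely illustrative):

$ co -l -r1.1.1 exim4.conf                  # lock the tip of the vendor branch
$ cp /path/to/new-upstream-exim4.conf exim4.conf
$ ci -u -r1.1.1 exim4.conf                  # commit the new upstream version on the branch
$ co -l exim4.conf                          # back to the trunk head, locked
$ rcsmerge -r1.1.1.1 -r1.1.1.2 exim4.conf   # fold the upstream delta into the working file
$ ci -u exim4.conf                          # commit the merged result on the trunk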

What's also nice about RCS is that one can use it for files where one wants to track changes only temporarily and dispose of the history completely a week later. For instance, I often make small but nontrivial changes (bugfixes or patches, usually to a release tarball, so there's no SCM already present that I could use) to projects I'm not directly affiliated with; these minor changes often touch only one or two files. Before saving the fix, I suspend my editor (^Z, shell job control, another unjustly ignored feature) and check the current version of the file into an RCS file via ci -l filename, then resume my editor, write the changes and continue, checking in more revisions as I make progress. Once done with the file, I can send a complete diff (rcsdiff -r1.1 -u filename, -u meaning unified diff, remember that RCS is old ;) of all the changes I made to the developers/mailing list.
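In concrete commands (the file name is invented for the example), such a session might look like this:

$ ci -l main.c                            # initial check-in, creates main.c,v as revision 1.1
$ ci -l -m'fix an off-by-one' main.c      # revision 1.2, file stays checked out and locked
$ rcsdiff -r1.1 -u main.c > main.c.diff   # unified diff of everything since the pristine 1.1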

For a blog or wiki (another use case where revisions are better managed per file than as changesets), the built-in keyword expansion is nice (see the bottom of the page for an example of the Id keyword). I can see directly on the page whether it's the current version of my website or some random old one that I never really wanted but that's stuck in the firefox cache for miraculous reasons. The Id keyword in particular is useful because it saves me from updating the timestamp in my HTML pages by hand. I have come to the conclusion that managing my blog by hand, without the help of any fancy CGI magic, is far easier, and so far it seems I've not been proved wrong.

Another great plus is that RCS is usually already there as part of the base system of whatever Unix flavour one uses, even on older releases, for instance on that old FreeBSD 5.x machine you still have hidden somewhere in your closet. That means you get instant version control even on machines where you can't install packages (no administrative privileges) and don't want to start compiling your own tools (example: nasty, outdated university machines).

If you have any questions or ideas regarding RCS and where else it can be used, don't hesitate to send me lots of E-Mails. :)

I might write a short tutorial with general usage instructions for RCS at some point in the future, if enough people report their interest, although the RCS manpages are an excellent source of documentation. For anyone interested, I suggest starting with rcsintro(1).

Atom feed

2012-01-04 18:24

I spent another day procrastinating on an exercise sheet, so I decided to work on an Atom feed for the blargg. I created a totally awful XSLT stylesheet that generates a feed after checking out the RCS file when I'm done working on it. You can find the result of this rather cumbersome operation here.

Some things I found out during the process:

Nasty factory defaults on WD Caviar Green harddisks

2012-02-17 17:07

I have two or three WD Caviar Green harddisks. If you intend to buy one, this is a reminder to turn the "intellipark" feature off. By factory default, the drives park their heads after 8 seconds without access. Then, after 9 seconds, something tries to read or write again, and the heads are unparked. This quickly drives up the load cycle count. It is marketed as a power-saving feature, but people have reported that it only makes the disk die faster. (Note: I turned the feature off in advance to prevent this right away.)

There is a DOS tool called wdidle3.exe to keep the heads from being parked before any nasty surprises occur (e.g. premature drive death). According to the ubuntuusers wiki (German), one should be careful with wdidle3 because it might break the firmware while flashing. To me, this sounds like bullshit, since the Linux port only executes a vendor-specific ATA command on the drive, so why would wdidle3 need to do anything different? But then again, I don't know what these vendors put in their tools. In any case, you're messing with your harddisk here, at a very low level. Be careful. You can't say I didn't warn you ;)

Links: port of wdidle3 to linux (tried it, worked for me, might work/be portable enough to be ported to *BSD), some other blog post about the problem. There were some more references, but I lost them.
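For illustration, with the Linux port linked above (idle3-tools, if I recall the name correctly; treat the exact flags as assumptions and check the tool's help output first), disabling the timer goes roughly like this:

# idle3ctl -g /dev/sdX    # show the current idle3 (intellipark) timer value
# idle3ctl -d /dev/sdX    # disable the timer entirely

Reportedly the drive needs a complete power-off (not just a reboot) before the new setting takes effect.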

Keeping pkgsrc clean with unionfs

2012-03-19 01:09

The goal is to mount pkgsrc read-only (in my case via NFS), with a unionfs on top of it so that files written into the pkgsrc tree land elsewhere. This is supposed to keep the tree clean: I want a pristine, untainted pkgsrc tree without keeping and updating multiple checkouts. There are two different implementations to choose from: Aufs and Unionfs. For reasons still not entirely clear to me, all of this will happen using Unionfs on Slackware 13.37. There is some kernel patching involved, which is boring to write about and hence left as an exercise for the reader.

In theory, pkgsrc already supports mounting the tree read-only, but various variables default to putting things into ${_PKGSRCDIR}, and I want to save myself the trouble of finding and replacing them all (note that figuring out how to use unionfs will probably take much longer, but don't even try to apply reason when talking to crazy people).

The read-only pkgsrc tree is mounted on /mnt/pkgsrc. What happens next is mostly self-explanatory after reading the documentation. Who would have thought it could be that easy?

# mount -t unionfs -o dirs=/usr/pkgsrc=rw:/mnt/pkgsrc=ro none /usr/pkgsrc
# touch /usr/pkgsrc/foo1234
# ls /usr/pkgsrc | grep foo
foo1234
# umount /usr/pkgsrc
# ls /usr/pkgsrc
foo1234
Yay! This looks promising. What happens is this: it mounts /usr/pkgsrc (a previously empty directory) as an overlay over the read-only pkgsrc tree. Any files not present in the overlay are looked up in the read-only pkgsrc tree. When I create a file foo1234, it is written to the writable location. Essentially, this allows me to change parts of the pkgsrc tree locally. After unmounting the union, we can take a look at how it manages this internally.

But what about whiteouts, i.e. what happens when you remove a file in the writable union? This is documented in detail here, but I just want to take a quick look (tl;dr syndrome). Let's mount it again...

# mount -t unionfs -o dirs=/usr/pkgsrc=rw:/mnt/pkgsrc=ro none /usr/pkgsrc
# rm /usr/pkgsrc/README
# ls /usr/pkgsrc | grep README | wc -l
# ls /usr/pkgsrc/README
/bin/ls: cannot access /usr/pkgsrc/README: No such file or directory
# umount /usr/pkgsrc
# ls /usr/pkgsrc/ -la
total 8
drwxr-xr-x  2 root root 4096 Mar 19 15:59 ./
drwxr-xr-x 18 root root 4096 Mar 19 01:35 ../
----------  1 root root    0 Mar 19 15:59 .wh.README
Aha! This is actually a bit dirty, but there's probably no better way to do it, at least not a straightforward one. Maybe with extended attributes, but not every file system supports them. Let's restore the file by removing the whiteout:
# rm /usr/pkgsrc/.wh.README

Now I want to add the wip repository, which is mounted read-only in /mnt/wip. Adding it as another branch won't work, because it has to be mounted in a sub-directory. Of course, I could just move it in the tree, but that'd be boring :-)

# mount -t unionfs -o dirs=/usr/pkgsrc=rw:/mnt/pkgsrc=ro none /usr/pkgsrc
# mkdir /usr/pkgsrc/wip
# mount -t unionfs -o dirs=/usr/pkgsrc/wip=rw:/mnt/wip=ro none /usr/pkgsrc/wip
mount: wrong fs type, bad option, bad superblock on none,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
Interesting, the "intuitive" way doesn't work.

# umount /usr/pkgsrc
# mount -t unionfs -o dirs=/usr/pkgsrc/wip=rw:/mnt/wip=ro none /usr/pkgsrc/wip
# mount -t unionfs -o dirs=/usr/pkgsrc=rw:/mnt/pkgsrc=ro none /usr/pkgsrc
# ls /usr/pkgsrc/wip 
/bin/ls: cannot access /usr/pkgsrc/wip: Operation not permitted
It doesn't work the other way around either, so there's a shortcoming here. Maybe Aufs does it better, but I'm currently not in the mood for cloning three git repositories. Because I like dirty hacks, here's a workaround:
# umount -t unionfs -a
# mkdir /mnt/wip-local
# mount -t unionfs -o dirs=/mnt/wip-local=rw:/mnt/wip=ro none /mnt/wip-local
# mount -t unionfs -o dirs=/usr/pkgsrc=rw:/mnt/pkgsrc=ro none /usr/pkgsrc
# mount -o bind /mnt/wip-local /usr/pkgsrc/wip
# ls /usr/pkgsrc/wip | wc -l
3276
# touch /usr/pkgsrc/wip/foo1234
# umount /usr/pkgsrc/wip
# umount /mnt/wip-local
# ls /mnt/wip-local
foo1234
Have fun!

Update (2012-05-22): It should be noted that at least Debian and Ubuntu (and Arch Linux, and probably others, but these were the ones I could quickly check) patch aufs into their stock kernels. Why you would want to use pkgsrc on these isn't entirely clear to me, though.
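For the curious, the aufs equivalent of the union mount above should look something like this; untested by me, and the br= branch syntax is from the aufs documentation as I remember it:

# mount -t aufs -o br=/usr/pkgsrc=rw:/mnt/pkgsrc=ro none /usr/pkgsrc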

Avoiding getline(3) collisions

2012-05-02 10:47

Recently, there has been a new wave of mailing list posts about getline(3) double declarations. getline(3), which was previously a GNU extension, was accepted into POSIX.1-2008, which means software shipping its own getline function will no longer compile properly. (Usually, the type signature of the POSIX getline and the project-specific one will also differ slightly.)

Software that's no longer really maintained is unlikely to put fixes online. In order to compile such software anyway, there are some options (just to state the obvious):

Known affected software: CVS, ldapvi, thttpd.
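One of those options, spelled out as a sketch: rename the project's own function on the compiler command line via the preprocessor (my_getline is an arbitrary name I made up):

$ make CFLAGS='-Dgetline=my_getline'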

Announcing feedreader project

2012-05-07 16:05

I've written a feedreader, and I'm not certain where else to announce it if not on the blargg. It is a rather small python program (using feedparser) which stuffs newsfeeds into mailboxes so that I can read them with mutt, and which runs best as a cronjob. I heard there were certain issues with newsbeuter, which I personally never used, and I wanted a feedreader that doesn't require learning yet another set of keybindings for yet another application (I also believe that the mutt of feedreaders is mutt), so here it is, released to the public, for the greater good of all of us.
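A matching crontab entry would look something like this (the path and the interval are made up for the example):

*/30 * * * * $HOME/bin/feedreader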

Note that as of today, the official version number is the one in the RCS-Id; even if releasing the first version as 1.x doesn't really seem sensible, it makes even less sense to introduce yet another independent version number.

Thanks to s4msung, without whom I would probably have lost interest in the project along the way. In case anyone ever discovers this, I'd like your feedback. The code is currently rather ugly-ish (well, I've seen worse), but I don't expect its lifespan to be very long anyway, given that mozilla already removed the Newsfeed button from firefox's address bar and most people don't care about feeds anyway. (I mean, how long do you imagine us being stuck with RSS and Atom for syndication under these conditions? Then again, software lifespans have always been mis-estimated in the past, so let's not get into this.)

Please note that it makes absolutely no sense to use it with anything but a console mail reader, because only a terminal will ever highlight a link in a "To" header.

The code is placed in the public domain. As usual, patches and criticism are welcome.

Update (2012-05-08): The project has moved to a git repository on bitbucket because this eases collaboration with other people.

Avoiding Google Search redirection

2012-05-24 15:09

One thing that never ceases to annoy me is google obfuscating links so I can't copy them out. When just clicking on links, it's mostly irrelevant. But it quickly becomes very annoying when you try to download other file types, or when you want to copy parts of a longer URL to use with one of google's advanced features (like an "inurl:" search), because google also replaces parts of the real URL with dots in the non-clickable green URL it shows at the very bottom of each result.

Anyway, this Greasemonkey script finally allows me to copy links directly out of google without the redirect. I tried two other Greasemonkey scripts, but they didn't work as promised (which is the main reason for writing this entry). I hope you can enjoy a rewrite-free google as much as I do.

Thanks to dangoakachan for this script. You made my life more awesome, therefore you are awesome. :-)

Fuckings to google for being so hungry for every tiniest bit of data that they decided it's a good idea to route all result URLs through their own servers. I'd have switched to Duckduckgo ages ago, but it finds what I'm looking for even more rarely than google (which already gives me useless Ubuntu results for most things I'm interested in).

Minimising browser attack surface by user separation

2012-06-20 09:57

Well, today during breakfast, while reading jwz's pages about old browsers, I decided that browsers are a large attack surface. I point mine at different sites on the internet all day long, some of which might host browser exploits, so I decided it might be a good idea to let firefox run under a separate UNIX UID. I had become increasingly concerned with old software and vulnerabilities ever since I noticed that the software running on the CIP pools at my university is very ancient. (Don't ask me how that concerned me, I really don't know.)

It turned out that running firefox as another UID is actually fairly easy; I came up with the following hack.

I chose to use ssh (but not X forwarding, which would probably break MIT-SHM and other X11 features, or at least add another layer of indirection), because I can easily authenticate with my SSH key. With su, I'd have to dig into configuring PAM to allow me to run programs as "browser" without entering my password every time. An alternative would have been to just use sudo, but I wanted to minimise attack surface, not maximise it. After creating the user, I just copied and chowned ~/.mozilla into browser's home directory.

The script basically allows the user browser to access my X11 display every time it's started (because xhost, unlike xauth, does not seem to keep this data persistently between restarts; xauth has a ~/.Xauthority file for that purpose; I'm rather unfamiliar with both, though, and this attempt worked just fine, so I decided to go with it). Beyond that it's actually pretty trivial: everything else it does is fork (-f) an ssh connection to browser@localhost, set the DISPLAY variable to my local X11 display so that firefox can connect to it, and invoke firefox with all the parameters I passed it. Since I sometimes change my browsers, all I had to do was replace ~/.bin/www-browser (which originally was a symlink to my favourite browser) with this script.
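A sketch of what such a wrapper can look like; the user name browser matches the setup above, but the exact xhost rule and the quoting are reconstructions, not a verbatim copy of my script:

#!/bin/sh
# grant the browser user access to the local X display; xhost does not persist
# this, hence it has to happen on every start
xhost +SI:localuser:browser >/dev/null
# fork an ssh connection to the browser user and start firefox on my display,
# passing along whatever arguments the wrapper was called with
ssh -f browser@localhost "env DISPLAY=$DISPLAY firefox $*"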

Although this is trivial, most people haven't done this before, so I thought writing something about it might be useful.

Update (same day): Of course this works for all other untrustworthy applications you may happen to run: XMPP clients, OpenOffice, you name it. The obvious downside is that you need to be root in order to create n accounts for the n things you usually do with your computer. These programs won't be able to read/write ~/.ssh and similar important files. I also recommend creating one account per application instead of reusing the browser account for e.g. OpenOffice just because it's more convenient. If you need shared files, create a shared directory. You need to be root on your machine for this to work anyway.

Further ideas:

Examples are currently not included, since I don't trust my iptables/pf-fu enough to be absolutely certain I didn't screw up, so that's currently left as an exercise for the reader. I'm looking forward to hearing from you regarding this whole thing, though.

Annoying firefox features

2012-06-22 15:32

So this is about the various annoying Firefox features and how to turn them off.

Feature | Workaround | Version
Download window is suboptimal | There are pretty download manager extensions, for instance Download Statusbar. | n.a.
Firefox searches non-FQDN text entered to the URL bar using the default search engine | In about:config, set keyword.enable to false. | 2.0[1]
Gopher support was dropped | It was never much good to begin with, use the overbite extension instead. | 4.0
RSS/Atom feed button was removed from the "location" bar | There are several addons to add the feature back in. However, this was a really cool feature and it's a pity it was removed for the stupidest possible reasons. | 4.0
Version number inflation | none known | 5.0
Parts of the URL other than the domain are displayed in a grey shade | Set browser.urlbar.formatting.enable to false in about:config. | 6.0[1]
Parts of the URL are hidden by firefox | Set browser.urlbar.trimURLs to false in about:config. | 7.0
Firefox hides the forward button if there's nothing to go forward to, rather than graying it out | ffextensionguru mentions that enabling small icons circumvents the problem. This thread on the mozilla support forum mentions that putting a button between the forward and backward button also unhides the forward button. This is a theme thing, so using a different theme also solves the problem. | 10.0
"Smooth Scrolling" is turned on by default, resulting in eye cancer | Set general.smoothScroll to false in about:config. | 13.0
Firefox shows a "Home screen" with recently accessed URLs | This can be toggled by a small icon in the upper right corner or by setting the start page to an empty document. | 13.0

To get a decent-looking firefox back, it's also possible to use the Firefox 2 or Firefox 3 themes, which solves the problem of the hidden forward button as well. Plus, it makes me feel young again :-)
I might have forgotten some entries. The table will be updated whenever I think of anything; I'm pretty sure there is (or at least will be) more.

mkpatches, vim and pkgsrc

2012-08-17 11:02

I recently found out about vim's patchmode option, which greatly decreases my pain when patching software for pkgsrc. The option is set to an extension which is appended to the original file name; vim then keeps a copy of the original file under that name and leaves it unchanged afterwards (e.g. across subsequent vim restarts).

After I'm done making changes, I change directory to the package directory and run mkpatches (from pkgtools/pkgdiff), which diffs all files ending in .orig against the modified versions and puts a patch into the patches subdirectory of the package directory.
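The whole round trip then is roughly this (the package path is a placeholder):

$ cd /usr/pkgsrc/category/package    # whichever package I'm fixing
$ mkpatches                          # turn the *.orig copies under work/ into patches/patch-*
$ make makepatchsum                  # refresh the patch checksums in distinfo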

Vimrc snippet:

augroup pkgsrc
au!
au BufRead /usr/pkgsrc/*/*/work/*       set patchmode=.orig
au BufRead /usr/pkgsrc/*/*/patches/*	set syntax=diff
augroup END

lynx -dump

2012-08-18 21:59

In this article, I'm going to compare several text browsers by their -dump output. The dump option is used to dump the HTML to plain text after rendering, which is useful e.g. for reading HTML mails with a text-based mail reader, or for dumping the HTML content of RSS/Atom feeds to text without having to implement my own renderer in the feedreader. Because I decided my descriptions are rather useless without a screenshot, I've dumped http://www.google.com/ncr in each of the browsers and included the results in a <pre> tag. Based on my IP, w3m's result is in German, while the other browsers seem to default to sending Accept-Language: en.
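For reference, the basic invocations being compared are these (modulo any extra options):

$ lynx -dump http://www.google.com/ncr
$ elinks -dump http://www.google.com/ncr
$ w3m -dump http://www.google.com/ncr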

Conclusion: Only lynx and elinks have a -dump option that works for me. Since I like elinks's curses UI better than lynx's, and I sometimes use a text browser for actually browsing websites, not dumping them, I choose elinks as my favourite. However, that's a matter of personal preference. A big plus for lynx is that it outputs UTF-8 characters as such.

Cloning the NetBSD version history

2012-09-22 09:48

I'm going to explain how to clone the NetBSD CVS repo, because it's not completely obvious. Apparently, the methods you can use are cvsup (written in Modula 3, so you need to install an old version of the Modula 3 environment just to do this), csup (cvsup reimplementation in C) or rsync.

The problem here is that you have to pick a mirror and hope (or know) it works, since the mirror list for mirroring the entire cvsroot (du -sh: 1.5G) instead of just checking it out via anoncvs (du -sh: 6.5G) is not published along with the other mirrors.

Your best bet is picking one of the CVSWeb CVSup mirrors, because running a CVSWeb mirror implies having a copy of the entire history; cvsweb.se.netbsd.org, which is the only European option, was so horribly slow that I had to switch to the American mirror. The official supfile for CVSWeb CVSup mirrors can be used, but some modules listed in there, e.g. netbsd-syssrc, seem not to exist. Instead, I used netbsd-src, which worked.
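From memory, the supfile ended up looking roughly like this; the host and the local paths are placeholders, and details may well be off, so compare it against the official supfile mentioned above:

*default host=cvsup.example.org
*default base=/var/db
*default prefix=/data/netbsd-cvsroot
*default release=cvs delete use-rel-suffix compress
netbsd-src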

For rsync, it's even worse, because the mirrors (are there any?) aren't listed at all. As phadthai pointed out to me on IRC,

rsync -rltHz --delete rsync://anoncvs.netbsd.org/cvsroot/ /data/rsync/netbsd-cvs/
should do the trick. The usual rsync mirrors don't have the "cvsroot" module, though.

Some developers are considering switching to some other VCS, which might in fact make this sort of thing easier. FreeBSD recently switched to SVN entirely, which doesn't help here at all but rather makes it harder, since the SVN repository format doesn't consist of plaintext RCS files, so they apparently still mirror their SVN in a CVS repo. DragonflyBSD has been on git for a while, although voices in the NetBSD community claim that no DVCS at this point handles the huge commit history of the project very well, which is apparently why they haven't switched to anything else yet. I think this decision is right, because breaking consistency just to break consistency again in five years does not sound beneficial.

The key problem, though, is that for ordinary users or infrequent contributors, CVS has the advantage of not having to pull in the entire version history, which is often unneeded and unwanted. Now, if there were a VCS (other than CVS) that offered a distributed mode while also supporting a checkout mode (not git clone --depth=1, which isn't a real checkout mode), and that could handle the huge backlog of an entire operating system over some 20 years, maybe that would be an option.

Granted, git would probably shrink the space required for an entire repository using its built-in compression, but at the end of the day, you would need to check it out again, which would add another 1.5G.

Update (same day): sjamaan mentioned in a conversation on IRC that bzr supports plain checkouts in addition to its distributed default mode of operation. It's nice to see someone else already thought of this and implemented it. Unfortunately, bzr would probably barf over the repository size, though I doubt anyone has tried it yet.

Update (2012-12-13): Because of a security breach, FreeBSD's CVS and CVSup (and of course also CVSweb) has been permanently taken offline. Good riddance.

Changing pkgsrc mirrors

2012-10-01 11:03

It's as simple as

cd /usr/pkgsrc
find . -type d -name CVS -exec gsed -i 's%anoncvs@anoncvs3.de.NetBSD.org:/cvsroot%anoncvs@anoncvs.fr.NetBSD.org:/pub/NetBSD-CVS%' '{}/Root' \;
Too bad NetBSD sed doesn't have -i. Maybe %w {}/Root can be used instead, but I didn't feel like trying. That said, the only remaining German mirror (were there more?) is rather slow.

Update (2023-02-16): I was pointed to the fact that the German mirror went down long ago, and in fact it's been 10 years since the last time I touched this file :)

Regardless, I wrote a little rant about sed -i. NetBSD-6 apparently had no -i, so I guess in 2012 this wasn't even wrong. And although the NetBSD sed(1) manual page nowadays seems to be more or less identical to the one on FreeBSD, people on NetBSD-9 have reported that both sed -i and sed -i '' work for them? Maybe NetBSD sed has diverged from FreeBSD's more than the manpage makes it seem... which adds some extra confusion for me. The more you know you don't know! (thanks to Burnhard and logix on IRCnet #netbsd)

About sed -i

As I learned some time later but never corrected here, FreeBSD sed (as well as possibly any other sed descended from FreeBSD's, which includes modern NetBSD) does have -i, but I hadn't understood the difference in how the option behaves on GNU vs. BSD sed.

GNU sed -i takes an optional parameter with no space between -i and the argument, such as -i.orig; in Linux shell scripts it is most often used with no argument at all, which makes sed keep no backup of the original file.
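That is, on a GNU system both of these forms are accepted, and only the first keeps a backup:

gnu$ sed -i.orig s/foo/bar/ go.mod    # edits go.mod, keeps the original as go.mod.orig
gnu$ sed -i s/foo/bar/ go.mod         # edits go.mod in place, no backup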

Compatibility pitfall

If you see something like sed -i s/bar/foo/ filename in a shell script originating on a GNU system, you'll get the following cryptic error message trying to execute it on FreeBSD 12:
bsd$ sed -i s/bar/foo/ .ssh/authorized_keys
sed: 1: ".ssh/authorized_keys": invalid command code .
bsd$
and the fix on FreeBSD is to use instead:
bsd$ sed -i '' s/bar/foo/ .ssh/authorized_keys
bsd$
Likewise with GNU sed, the FreeBSD command fails with a cryptic error message:
gnu$ sed -i '' -e s/foo/bar/ go.mod
sed: can't read : No such file or directory
gnu$ sed -i '' s/foo/bar/ go.mod
sed: can't read : No such file or directory
gnu$
while this works:
gnu$ sed -i -e s/foo/bar/ go.mod
gnu$ sed -i s/foo/bar/ go.mod
gnu$

On FreeBSD sed, the parameter is not optional, but because an empty string cannot be passed to an option via getopt(3) unless it is a separate command argument, the GNU way of passing a plain -i to keep no backup file does not work on FreeBSD. I'm not sure what GNU sed does to get this special behaviour out of getopt; declaring the parameter as optional rather than as a flag probably isn't enough...

What this means for the original problem

Using only FreeBSD/NetBSD sed, a command to switch pkgsrc mirrors (with a bit more status information thanks to -print) would be:

bsd$ cd /usr/pkgsrc
bsd$ find . -type d -name CVS -print -exec sed -i '' 's%anoncvs@anoncvs3.de.NetBSD.org:/cvsroot%anoncvs@anoncvs.fr.NetBSD.org:/pub/NetBSD-CVS%' '{}/Root' \;

If you're not bothered by having a bunch of .orig files lying around, you can also use -i.orig which I guess should work everywhere.

OpenBSD

For the sake of completeness, OpenBSD sed, if implemented according to its manual, seems to mimic GNU sed with regard to the handling of the space after -i; if someone can verify that, please let me know. I don't currently have an OpenBSD machine.

Other *NIX

You tell me.

Font madness

2012-10-02 07:10

Does anyone reading this have any idea about fonts, fontconfig and Xft? Currently, I am facing the following problem: Using urxvt -fn 'xft:Adobe Source Sans Pro:' renders extremely wide spacing between letters (Screenshot). I think there is some fontconfig-xml-fu needed to get rid of this. Any help is appreciated! (But maybe I just entered the wrong font string.)

Explanation: My laptop has a 1024x768 screen. I am running a tiling wm, and it would be super awesome to be able to actually use the tiling feature to place two terminal windows next to each other and have both be at least 80 characters wide. I consider 80x25 characters the absolute minimum size for a terminal emulator window, and so does probably most software.

Unfortunately, my understanding of fonts and how Xft handles them is very poor at the moment. I'd like to learn about it, but the documentation is usually a bit sparse.

Edit (2012-12-13): I realise that the "lazyweb" approach does not really work when nobody else actually reads your blog...

Edit (2013-02-27): Since Source Sans Pro works on a different machine with a different font string, I assume I indeed entered the wrong one. Sparse documentation aside this particular post was complete bogus, so you may wish to ignore it. It's still here because I dislike removing something I already published. Just be warned that I'm stupid and make mistakes.

kopimi Public Domain

$Id: blargg.xhtml,v 1.74 2023/02/16 15:57:33 mw Exp mw $