This is a blargg
There are many blarggs like it, but this one is mine...
Seriously though, this is where I put random bits of information. I'm way
too lazy to set up a real blog somewhere so this will have to do for now.
It is held together by RCS and probably invalid XHTML 1.0. I hope you can enjoy
it anyway. It serves as a reminder for myself, because I tend to forget things
more quickly than people my age should, but what can you do...
Normally, when people start a blog, they introduce themselves.
I won't do that, because somebody who lands on this page probably already knows
me. Those who don't, rest assured that there's not much to say anyway.
Where normal blogs have a comment form, I'd rather have e-mail, because it
already exists, works, and doesn't require yet another signup.
Please say so if you would like me to include your real name or mail address in
the comment; otherwise I'll add it anonymously. Information on how to contact me
can be found here.
UFS file system snapshots on NetBSD
2012-01-02 19:09
NetBSD has ZFS in -current, but it's a little-known fact that it's also
possible to take snapshots of UFS volumes (well, as long as they have a UFS2
superblock). This feature has been present since NetBSD 2, but
unfortunately doesn't seem quite stable yet. I currently have three kernel
crash dumps lying around which I intend to debug, so beware and back up any data
before playing around with it. Taking a snapshot works like this (let's say we
store all our snapshots in /var/snap):
# fssconfig -c fss0 / /var/snap/`date +%F-%R`
Afterwards, the snapshot has been created; it can be listed with
fssconfig -vl and unconfigured again with fssconfig -u fss0.
So far, these are the examples also listed in the manpage, but of course you can
also mount the snapshot (read-only, of course) with mount /dev/fss0 /mnt
and poke around in it.
What also wasn't clear (to me) from the manpage alone is that you can open
old snapshots the same way you create new ones, e.g. with
fssconfig -c fss1 / /var/snap/`date -d yesterday +%F-%R`
This might come in handy before updates... But I haven't found out how to replay
a snapshot back to the filesystem, in case the update fails. Maybe with a
kludge using rsync, but I'd prefer a native method.
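The date-stamped naming can be wrapped in a small helper script. This is only a
sketch of the idea: since fssconfig needs root and a NetBSD kernel, it just
prints the command it would run; the snapshot directory is the /var/snap from
above, and the device/filesystem names are the ones used in the examples.

```shell
#!/bin/sh
# Sketch of a snapshot helper around the fssconfig call shown above.
# Prints the command instead of running it (dry run for illustration).
SNAPDIR=/var/snap
DEV=fss0
FS=/

SNAP="$SNAPDIR/$(date +%F-%R)"       # e.g. /var/snap/2012-01-02-19:09
echo "fssconfig -c $DEV $FS $SNAP"   # run this part as root on NetBSD
```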
Update (2012-02-11): the native method one's supposed to use is
apparently dump/restore(8). I wonder whether it works in-place (i.e. piping
dump into restore in order to put it back to the main filesystem) or whether I
need to temporarily dump it to another location.
Update (2012-05-22): The snapshot experiment made the filesystem die.
Due to this, the machine is pending reinstallation. Actually, the bug I hit is
rather funny, because executing urxvt made the machine spontaneously reboot
without any prior error message. Without urxvt, the machine seems stable-ish
(and reboots much less frequently). Of course, this could also mean hardware
failure, but let's not assume the worst right from the start.
Unfortunately, I currently don't have enough time left for NetBSD experiments,
so the debugging part is deferred (possibly until forever, or at least until
it's no longer relevant).
Scratchpad
2012-01-03 15:37
I'll note some things in this "post", which may not be of general interest, but
worth knowing and likely to be forgotten unless written down somewhere.
This is work in progress and will be expanded whenever I think of something.
- Opening a pipe in Tcl (similar to Perl): set p [open "|process" r]
- RCS can be made to use local time in keyword expansions instead of UTC by
adding -zLT to its parameters, possibly by setting the RCSINIT
environment variable.
- One can "fake" the name of the person who committed a certain RCS version by
adding -wName to the ci(1) parameters (useful for multiple
sysadmins working as root, managing config files with RCS).
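The last two items combine nicely when several admins share root. A small
sketch (the name "alice", the log message and the path are made up, and the ci
line is commented out since it needs an RCS-controlled file to act on):

```shell
# Make all RCS commands use local time in keyword expansions:
RCSINIT=-zLT
export RCSINIT

# Commit a config file as root while recording who really made the change
# ("alice" and the path are invented for illustration):
# ci -l -walice -m"tighten sshd config" /etc/ssh/sshd_config
echo "$RCSINIT"
```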
Why RCS is still awesome
2012-01-03 19:02
There's about a bazillion blog posts about how to use git, but none about
why RCS is still awesome.
I don't really know why, but I'm a bit obsessed with RCS. Some people call me
backwards, but one can get used to that, and it can certainly be noted that
technology isn't bad just because it's old. For me, RCS is an example of
stone-age technology that is still useful today and has qualities without
counterpart in more "modern" alternatives.
As already mentioned in the description at the beginning of this document, I use
RCS to manage changes to this page. You can download the complete history with
all changes here. Feel free to check it out.
But first things first. RCS can only manage single files, while pretty much
everything else (except for SCCS, which I never used) manages multiple files in
a repository/directory and subdirectories. While this sounds like a horrible
drawback from the dawn of time when dinosaurs walked the earth, it can
come in handy when the files in the directory where you want to add revision
control are completely unrelated, the most prominent example being /etc
on a UNIX system. With git, one would have a bunch of
unrelated commits cluttering the timeline, while with RCS it all stays clean and
single-file, which I consider optimal for this particular use case.
Other than that, RCS has pretty much everything you want. It even supports
branches. Branches are a feature I rarely need for config files, but they can
come in handy, for instance when merging changes from the official upstream
configuration. I use this on my exim4 configuration: my changes go
to the trunk (the main branch); upstream changes go to a branch named
OFFICIAL_CONFIG, which starts at the beginning of the timeline (for
people who know a bit about RCS or CVS: revision 1.1, which makes the
branch number 1.1.1). Whenever upstream releases a new version of the config
file, it is committed into this branch, and the changes are merged into the top
of the trunk.
What's also nice about RCS is that one can use it for files where one wants
to track changes temporarily, and dispose of the history completely a
week later. For instance, I often make small but nontrivial changes (bugfixes,
patches; from a release tarball, so there's no SCM already present that I could
use) to projects I'm not directly affiliated with; these minor changes often
touch only one or two files. Then, before saving the fix, I suspend my
editor (^Z, shell job control, another widely and unjustly
ignored feature) and check the working revision of the file into an RCS file via
ci -l filename, resume my editor, write the changes and
continue, checking in more revisions as I make progress. Once done with the
file, I can send a complete diff (rcsdiff -r1.1 -u filename,
-u meaning unified diff; remember that
RCS is old ;) of all the changes I made to the developers/mailing list.
For a blog or wiki (another use case where revisions are better managed
per-file than as changesets), the built-in keyword expansion (see the bottom of
the page for an example of the Id keyword) is nice. I can see
directly on the page whether it's the current version of my website or some
random old one that I never really wanted but that's stuck in the firefox cache
for miraculous reasons. The Id keyword is also particularly useful
because it saves me from updating the timestamp in my HTML
pages by hand. I have come to the conclusion that managing my blog by hand
without the help of any fancy CGI magic is far easier, and so far, it seems I've
not been proved wrong.
Another great plus is that RCS is usually already there as part of the base
system of whatever Unix flavour one uses, even on older releases, for instance
on that old FreeBSD 5.x machine you still have hidden somewhere in your closet.
This means you get instant version control even on machines where you can't
install packages (no administrative privileges) and don't want to start
compiling your own tools (example: nasty, outdated university machines).
If you have any questions or ideas regarding RCS and where else it can be used,
don't hesitate to send me lots of E-Mails. :)
I might write a short tutorial with general usage instructions for RCS at
some point in the future, if enough people report their interest,
although the RCS manpage is an excellent source of documentation. For anyone
interested, I suggest starting with
rcsintro(1).
Atom feed
2012-01-04 18:24
I spent another day procrastinating over an exercise sheet, so I decided to work
on an Atom feed for the blargg. I created a totally awful XSLT stylesheet
that generates a feed after checking out the RCS file when I'm done working on
it. You can find the result of this rather cumbersome operation here.
Some things I found out during the process:
- XPath is an amazing notation for describing paths in a tree
- I still don't really like XML, but it's starting to make sense (in a twisted way)
- XSLT is useful, but I don't enjoy writing it
- XML can't be avoided because it's everywhere
- Tcl's tDOM library looks like a way to handle XML while retaining one's sanity
- I wasted 10 hours on this; I wonder where the time went, and I'm almost
certain it wasn't worth it.
Nasty factory defaults on WD Caviar Green harddisks
2012-02-17 17:07
I have two or three WD Caviar Green harddisks. If you intend to buy one, this is
a reminder to turn the "IntelliPark" feature off. By factory default, the drives
park their heads after 8 seconds without access. Then, a second later,
something tries to read or write again, and the heads are moved back over
the platters. This quickly increases the load cycle count. It is marketed as
a power-saving feature, but people have reported that it only makes the disk
die faster. (Note: I turned it off right away to prevent this.)
There is a DOS tool called wdidle3.exe
to keep the heads from being parked before any nasty surprises occur (e.g.
premature drive death). According to the
ubuntuusers wiki (German),
one should be careful with wdidle3 because it might break the firmware while
flashing. To me, this sounds like bullshit, since the linux port only executes a
vendor-specific ATA command on the drive, so why would wdidle3 need to do
anything different? But then again, I don't know what these vendors put in their
tools. In any case, you're messing with your harddisk here, at a very low level.
Be careful. Don't say I haven't warned you ;)
Links:
port of wdidle3 to linux
(tried it, worked for me, and it might be portable enough to be ported to *BSD),
some other blog post about the problem.
There were some more references, but I lost them.
Keeping pkgsrc clean with unionfs
2012-03-19 01:09
The goal is to mount pkgsrc read-only (in my case via NFS), with a unionfs on
top of it so that files written into the pkgsrc tree land elsewhere. This
is supposed to keep the tree clean.
There are two different implementations to choose from:
Aufs and
Unionfs.
Due to reasons still not entirely clear to me, all of this will happen using
Unionfs on Slackware 13.37. There is some kernel patching involved, which is
boring to write about and hence left as an exercise for the reader.
The reason for all this is that I want to keep a pristine, untainted pkgsrc tree
without keeping and updating multiple checkouts.
In theory, pkgsrc already supports mounting the tree read-only, but various
variables default to putting things into ${_PKGSRCDIR},
and I want to save myself the trouble of finding and replacing them all (note
that finding out how to use unionfs will probably take much longer, but don't
even try to apply reason when talking to crazy people).
The read-only pkgsrc tree is mounted in /mnt/pkgsrc. What happens next is
mostly self-explanatory after reading the
documentation.
Who would have thought it could be that easy?
# mount -t unionfs -o dirs=/usr/pkgsrc=rw:/mnt/pkgsrc=ro none /usr/pkgsrc
# touch /usr/pkgsrc/foo1234
# ls /usr/pkgsrc | grep foo
foo1234
# umount /usr/pkgsrc
# ls /usr/pkgsrc
foo1234
Yay! This looks promising. What it does is this: it mounts /usr/pkgsrc (a
previously empty directory) as an overlay over the read-only pkgsrc tree.
Any files not present in the overlay will be looked up in the read-only pkgsrc
tree. Then, when I create a file foo1234, it is written into the writable
location. Essentially, this allows me to change parts of the pkgsrc tree
locally. After unmounting the union, we can take a look at how it manages this
internally.
But what about whiteouts, i.e. what happens when you remove a file in the
writable union? This is documented in detail
here,
but I just want to take a quick look (tl;dr syndrome). Let's mount it again:
# mount -t unionfs -o dirs=/usr/pkgsrc=rw:/mnt/pkgsrc=ro none /usr/pkgsrc
# rm /usr/pkgsrc/README
# ls /usr/pkgsrc | grep README | wc -l
# ls /usr/pkgsrc/README
/bin/ls: cannot access /usr/pkgsrc/README: No such file or directory
# umount /usr/pkgsrc
# ls /usr/pkgsrc/ -la
total 8
drwxr-xr-x 2 root root 4096 Mar 19 15:59 ./
drwxr-xr-x 18 root root 4096 Mar 19 01:35 ../
---------- 1 root root 0 Mar 19 15:59 .wh.README
Aha! This is actually a bit dirty, but there's probably no better way to do
it, at least no straightforward one. Maybe with extended attributes, but not
every file system supports them. Let's restore the file by removing the
whiteout file:
# rm /usr/pkgsrc/.wh.README
Now I want to add the wip repository, which is mounted read-only in /mnt/wip.
Adding it as another branch won't work, because it has to be mounted in a
sub-directory. Of course, I could just move it into the tree, but that'd be
boring :-)
# mount -t unionfs -o dirs=/usr/pkgsrc=rw:/mnt/pkgsrc=ro none /usr/pkgsrc
# mkdir /usr/pkgsrc/wip
# mount -t unionfs -o dirs=/usr/pkgsrc/wip=rw:/mnt/wip=ro none /usr/pkgsrc/wip
mount: wrong fs type, bad option, bad superblock on none,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so
Interesting, the "intuitive" way doesn't work.
# umount /usr/pkgsrc
# mount -t unionfs -o dirs=/usr/pkgsrc/wip=rw:/mnt/wip=ro none /usr/pkgsrc/wip
# mount -t unionfs -o dirs=/usr/pkgsrc=rw:/mnt/pkgsrc=ro none /usr/pkgsrc
# ls /usr/pkgsrc/wip
/bin/ls: cannot access /usr/pkgsrc/wip: Operation not permitted
It doesn't work the other way around either, so there's a shortcoming here.
Maybe Aufs does it better, but I'm currently not in the mood for cloning three
git repositories. Because I like dirty hacks, here's a workaround:
# umount -t unionfs -a
# mkdir /mnt/wip-local
# mount -t unionfs -o dirs=/mnt/wip-local=rw:/mnt/wip=ro none /mnt/wip-local
# mount -t unionfs -o dirs=/usr/pkgsrc=rw:/mnt/pkgsrc=ro none /usr/pkgsrc
# mount -o bind /mnt/wip-local /usr/pkgsrc/wip
# ls /usr/pkgsrc/wip | wc -l
3276
# touch /usr/pkgsrc/wip/foo1234
# umount /usr/pkgsrc/wip
# umount /mnt/wip-local
# ls /mnt/wip-local
foo1234
Have fun!
Update (2012-05-22): It should be noted that on at least Debian and
Ubuntu (and Archlinux and probably others, but these were the ones I could
quickly check), aufs is patched into the distribution kernel. Why you would want
to use pkgsrc on these isn't entirely clear to me, though.
Avoiding getline(3) collisions
2012-05-02 10:47
Recently, there has been a new wave of getline(3) double-declaration-related
mailing list posts. getline(3), which was previously a GNU extension, was
accepted into POSIX.1-2008, which means software with its own getline function
will no longer compile properly. (Usually, the signatures of the POSIX and
the project-specific getline function also differ slightly.)
Software that's no longer really maintained is unlikely to put fixes online. In
order to compile such software anyway, there are some options (just to state the
obvious):
- fix the code to use the new getline function
- add -D_POSIX_C_SOURCE=200112L to CFLAGS or define it in the
source file before including stdio.h
- rename the project specific getline function so that the identifiers do not
collide
Known affected software: CVS, ldapvi, thttpd.
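To illustrate the renaming option: a quick sed pass over a toy source file.
The file name and the my_getline identifier are invented for this example, and
real projects of course also need the declaration in their headers renamed.

```shell
# A toy source file with a project-local getline() (illustration only):
cat > /tmp/old_project.c <<'EOF'
static char *getline(void);
int main(void) { getline(); return 0; }
static char *getline(void) { return 0; }
EOF

# Rename every whole-word occurrence of getline so it no longer collides
# with the POSIX.1-2008 declaration in <stdio.h>.
# (\b word boundaries are a GNU sed feature; BSD sed spells them [[:<:]]/[[:>:]].)
sed 's/\bgetline\b/my_getline/g' /tmp/old_project.c > /tmp/old_project_fixed.c

grep -c my_getline /tmp/old_project_fixed.c   # prints 3
```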
Announcing feedreader project
2012-05-07 16:05
I've written a feedreader and I'm not certain
where else to announce it if not on the blargg. It is a rather small python
program (using feedparser) which stuffs newsfeeds into mailboxes so that I can
read them with mutt, and which runs best as a cronjob. I heard there were
certain issues with newsbeuter (which I personally never used), and I wanted
a newsreader that doesn't require learning yet another set of keybindings for
yet another application (I also believe that the mutt of feedreaders is mutt),
so here it is, released to the public, for the greater good of all of us.
Note that as of today, the official version number is the one in the RCS Id;
even if releasing the first version as 1.x doesn't really seem sensible, it
makes even less sense to introduce yet another independent version number.
Thanks to s4msung, without whom I would probably have lost interest in the
project along the way. In case anyone ever discovers this, I'd like your
feedback. The code is currently rather ugly-ish (well, I've seen worse), but I
don't estimate its lifespan to be very long anyway, given that mozilla already
removed the newsfeed button from firefox's address bar and most people don't
care about feeds anyway. (I mean, how long do you imagine us being stuck with
RSS and Atom for syndication under these conditions? Then again, software
lifespan has always been mis-estimated in the past, so let's not get into this.)
Please note that it makes absolutely no sense to use it with anything but a
console mail reader, because only a terminal will ever highlight a link in a
"To" header.
The code is placed in the public domain. As usual, patches and criticism are
welcome.
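Running it from cron could look like this (the path and schedule are made up,
adjust to taste):

```crontab
# Fetch feeds every 30 minutes (hypothetical install path):
*/30 * * * * $HOME/bin/feedreader
```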
Update (2012-05-08):
The project has moved to a git repository on
bitbucket
because this eases collaboration with other people.
Avoiding Google Search redirection
2012-05-24 15:09
One thing that never ceases to annoy me is google obfuscating links so I can't
copy them out. When just clicking on links, it's mostly irrelevant, but when you
try to download other file types, or to copy parts of a longer URL because you
want to use it for one of google's advanced features (like "inurl:" search), it
quickly becomes very annoying, because google also replaces parts of the real
URL with dots in the non-clickable, green URL shown at the very bottom of each
result.
Anyway, this
Greasemonkey
script finally allows me to directly copy links out of google without the
redirect. I tried two other Greasemonkey scripts, but they didn't work as
promised (which is the main reason for me to write this entry). I hope you
can enjoy a rewrite-free google as much as I do.
Thanks to dangoakachan for this script. You made my life more awesome,
therefore you are awesome. :-)
Fuckings to google for being so hungry for every tiniest bit of data that they
decided it's a good idea to rewrite all URLs over their own servers. I'd have
switched to Duckduckgo ages ago, but it finds what I'm looking for even more
rarely than google (it gives me useless Ubuntu results for most things I'm
interested in).
Minimising browser attack surface by user separation
2012-06-20 09:57
Well, today during breakfast, while reading jwz's pages about old browsers, I
decided that browsers are a large attack surface. I point mine at different
sites on the internet all day long, which might include browser exploits, so I
figured it might be a good idea to let firefox run under a separate UNIX UID.
I had been becoming increasingly concerned with old software and vulnerabilities
ever since I noticed that the software running on the CIP pools at my university
is very ancient. (Don't ask me how that concerned me, I really don't know.)
It turned out that running firefox as another UID is actually fairly easy; I
came up with the following hack.
I chose to use ssh (but not X forwarding, which would probably break MIT-SHM and
other X11 features, or at least add another layer of indirection), because I can
easily authenticate with my SSH key. Using su, I'd have to dig into configuring
pam to allow me to run programs as "browser" without entering my password every
time. An alternative would have been to just use sudo, but I wanted to minimise
attack surface, not maximise it. After creating the user, I just copied and
chowned ~/.mozilla into browser's home directory.
The script basically allows the user browser to access my X11 display every time
it's started (because xhost, unlike xauth, does not seem to keep this data
persistently between restarts; xauth has a ~/.Xauthority file for
that purpose; I'm rather unfamiliar with both though, and this attempt worked
just fine, so I decided to go with it).
It's actually pretty trivial: everything else it does is fork
(-f) an ssh connection to browser@localhost, set
the DISPLAY variable to my local X11 display so that firefox can
connect to it, and invoke firefox with all the parameters I passed it.
Since I sometimes change my browsers, all I had to do was replace
~/.bin/www-browser (which originally was a symlink to my favourite
browser) with this script.
Although this is trivial, most people haven't done this before, so I thought
writing something about it might be useful.
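For illustration, here is a minimal sketch of how such a wrapper might look.
The user name browser and the display :0 are assumptions on my part, and the
script is written to /tmp here so it can be inspected without touching ~/.bin:

```shell
# Minimal sketch of a www-browser wrapper (user "browser" and DISPLAY=:0
# are assumptions for illustration; written to /tmp for inspection).
cat > /tmp/www-browser <<'EOF'
#!/bin/sh
# Grant the browser user access to the local X display on every start,
# since xhost (unlike xauth) does not keep this between X restarts.
xhost +SI:localuser:browser
# Fork an ssh connection to browser@localhost, point firefox at the
# local display, and pass along whatever parameters we were given.
exec ssh -f browser@localhost env DISPLAY=:0 firefox "$@"
EOF
chmod +x /tmp/www-browser
sh -n /tmp/www-browser   # syntax check only; actually running it needs X and ssh keys
```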
Update (same day): Of course this works for all other untrustworthy applications
you may happen to run: XMPP clients, OpenOffice, you name it. The obvious
downside is that you need to be root in order to create n accounts for the n
things you usually do with your computer. These programs won't be able to
read/write ~/.ssh and similar important files. I also recommend
creating one account per application instead of reusing the browser account for
e.g. OpenOffice just because that's more convenient. If you need shared files,
create a shared directory. You need to be root on your machine for this to
work, anyway.
Further ideas:
- Use iptables/pf/whatever to limit the ports the browser user is allowed to
open. However, limiting outgoing connections to ports 53, 80 and 443
cripples too much: HTTP servers running on nonstandard ports would not be
accessible, and there is no cure for that except omitting the output
restriction rule. Blocking incoming traffic should not hurt, nor should
blocking outgoing traffic to known hosts and ports which are none of the
browser user's business.
- Run it in a jail or chroot. A "real VM" would be even more secure, but also
way more painful, not only speed-wise on an eeepc, but also usability-wise. In
any case, there is more overhead involved here.
Examples are currently not included, since I don't trust my iptables/pf-fu
enough to be absolutely certain I didn't screw up, so that's currently left as
an exercise for the reader. I'm looking forward to hearing from you regarding
this whole thing, though.
Annoying firefox features
2012-06-22 15:32
So this is about the various annoying Firefox features and how to turn them off.
Feature: Download window is suboptimal (version: n.a.)
Workaround: There are pretty download manager extensions, for instance
Download Statusbar.

Feature: Firefox searches non-FQDN text entered into the URL bar using the
default search engine (version: 2.0[1])
Workaround: In about:config, set keyword.enabled to false.

Feature: Gopher support was dropped (version: 4.0)
Workaround: It was never much good to begin with; use the
overbite extension instead.

Feature: RSS/Atom feed button was removed from the "location" bar (version: 4.0)
Workaround: There are several addons to add the feature back in. However, this
was a really cool feature and it's a pity it was removed for the stupidest
possible reasons.

Feature: Version number inflation (version: 5.0)
Workaround: none known.

Feature: Parts of the URL other than the domain are displayed in a grey shade
(version: 6.0[1])
Workaround: Set browser.urlbar.formatting.enabled to false in about:config.

Feature: Parts of the URL are hidden by firefox (version: 7.0)
Workaround: Set browser.urlbar.trimURLs to false in about:config.

Feature: Firefox hides the forward button if there's nothing to go forward to,
rather than graying it out (version: 10.0)
Workaround: ffextensionguru mentions that enabling small icons circumvents the
problem. A thread on the mozilla support forum mentions that putting a button
between the forward and backward buttons also unhides the forward button. This
is a theme thing, so using a different theme also solves the problem.

Feature: "Smooth Scrolling" is turned on by default, resulting in eye cancer
(version: 13.0)
Workaround: Set general.smoothScroll to false in about:config.

Feature: Firefox shows a "Home screen" with recently accessed URLs
(version: 13.0)
Workaround: This can be toggled by a small icon in the upper right corner or by
setting the start page to an empty document.
To get a decent-looking firefox back, it's also possible to use the
Firefox 2 or
Firefox 3 themes,
which also solves the problem of the hidden forward button. Also it makes me feel young again :-)
I might have forgotten some. The table will be updated whenever I can think of
anything. I'm pretty sure there is (or at least will be) more.
mkpatches, vim and pkgsrc
2012-08-17 11:02
I recently found out about vim's patchmode option, which greatly
decreases my pain when patching software for pkgsrc. The option is set to an
extension which is appended to the original file name; vim saves a copy of the
original file under that name and leaves it unchanged afterwards (even across
subsequent vim restarts).
After I'm done making changes, I change directory to the package directory
and run mkpatches (from pkgtools/pkgdiff), which diffs
all files ending in .orig against the modified versions and puts a patch
in the patches subdirectory of the package directory.
Vimrc snippet:
augroup pkgsrc
au!
au BufRead /usr/pkgsrc/*/*/work/* set patchmode=.orig
au BufRead /usr/pkgsrc/*/*/patches/* set syntax=diff
augroup END
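What mkpatches then does with those .orig copies is essentially a unified diff
per file. A rough simulation with plain diff, using toy files under /tmp whose
names are invented for the demonstration:

```shell
# Simulate patchmode: vim keeps the pristine copy under the .orig suffix
# while the "live" file gets edited (toy files for illustration).
printf 'line one\nline two\n' > /tmp/work-file.c.orig
printf 'line one\nline 2\n'   > /tmp/work-file.c

# Roughly what mkpatches does for each *.orig it finds
# (diff exits non-zero when the files differ, hence the || true):
diff -u /tmp/work-file.c.orig /tmp/work-file.c > /tmp/patch-ab || true
grep '^+line 2' /tmp/patch-ab    # the changed line shows up as an addition
```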
lynx -dump
2012-08-18 21:59
In this article, I'm going to compare several text browsers by their
-dump output. The dump option renders the HTML and dumps it as plain
text, which is e.g. useful for reading HTML mails with a text-based mail
reader, or for rendering the HTML in RSS/Atom feeds without having to
implement my own renderer in the
feedreader.
Because I decided my descriptions are rather useless without a screenshot, I've
dumped http://www.google.com/ncr in each of the browsers and
included the results in a <pre> tag. Based on my IP, w3m's result is
in German, while the other browsers seem to default to sending
Accept-Language: en.
- w3m (0.5.3) dumps websites just the way they would have been rendered
in a terminal, without handling URLs in any special way. However, it's probably
enough for dumping HTML mail if you already have it installed anyway. The
obvious benefit of this method is that it's slightly more readable than the
other output method, which includes URLs in the output.
[ ]
Suche Bilder Videos Maps News Shopping Gmail Mehr >>
Webprotokoll | Einstellungen | Anmelden
Deutschland
[ ] Erweiterte Suche
Sprachoptionen
[Google-Suche][Auf gut Glück!]
Werben mit GoogleUnternehmensangeboteGoogle+Über GoogleGoogle.com
(C) 2012 - Datenschutzerklärung & Nutzungsbedingungen
- elinks (0.11.7) dumps <a> tags with a number next
to them, and then prints all the numbers and associated links at the end of
the page as references. This seems to be the only sensible way to
"export" HTML to plain text while retaining the context in which a URL
originally occurred, without cluttering the output too much.
[1]
_____________________
Search [2]Images [3]Videos [4]Maps [5]News [6]Shopping [7]Gmail [8]More >>
[9]Web History | [10]Settings | [11]Sign in
[12]Google
[13]__________________________________________________________ [16]Advanced
[14][ Google Search ] [15][ I'm Feeling Lucky ] search[17]Language
tools
[18]Advertising Programs[19]Business Solutions[20]+Google[21]About
Google[22]Google.de
(c) 2012 - [23]Privacy & Terms
References
Visible links
2. http://www.google.com/imghp?hl=en&tab=wi
3. http://video.google.com/?hl=en&tab=wv
4. http://maps.google.com/maps?hl=en&tab=wl
5. http://news.google.com/nwshp?hl=en&tab=wn
6. http://www.google.com/shopping?hl=en&tab=wf
7. https://mail.google.com/mail/?tab=wm
8. http://www.google.com/intl/en/options/
9. http://www.google.com/history/optout?hl=en
10. http://www.google.com/preferences?hl=en
11. https://accounts.google.com/ServiceLogin?hl=en&continue=http://www.google.com/
16. http://www.google.com/advanced_search?hl=en&authuser=0
17. http://www.google.com/language_tools?hl=en&authuser=0
18. http://www.google.com/intl/en/ads/
19. http://www.google.com/services/
20. https://plus.google.com/116899029375914044550
21. http://www.google.com/intl/en/about.html
22. http://www.google.com/setprefdomain?prefdom=DE&prev=http://www.google.de/&sig=0_X8reba58b4My9W_LY3WkyHDxBuM%3D
23. http://www.google.com/intl/en/policies/
- links (2.7) does the same thing as w3m.
Search Images Videos Maps News Shopping Gmail More >>
Web History | Settings | Sign in
Google
__________________________________________________________ Advanced
[ Google Search ] [ I'm Feeling Lucky ] searchLanguage
tools
Advertising ProgramsBusiness Solutions+GoogleAbout GoogleGoogle.de
(c) 2012 - Privacy & Terms
- lynx (2.8.7rel.2) is the only text-based browser other than elinks
that actually adds numbers, and it furthermore outputs UTF-8 characters
(support for this seems to be a compile-time option).
Search [1]Images [2]Videos [3]Maps [4]News [5]Shopping [6]Gmail [7]More
»
[8]Web History | [9]Settings | [10]Sign in
Google
_______________________________________________________
Google Search I'm Feeling Lucky [11]Advanced search
[12]Language tools
[13]Advertising Programs [14]Business Solutions [15]+Google
[16]About Google [17]Google.de
© 2012 - [18]Privacy & Terms
References
1. http://www.google.com/imghp?hl=en&tab=wi
2. http://video.google.com/?hl=en&tab=wv
3. http://maps.google.com/maps?hl=en&tab=wl
4. http://news.google.com/nwshp?hl=en&tab=wn
5. http://www.google.com/shopping?hl=en&tab=wf
6. https://mail.google.com/mail/?tab=wm
7. http://www.google.com/intl/en/options/
8. http://www.google.com/history/optout?hl=en
9. http://www.google.com/preferences?hl=en
10. https://accounts.google.com/ServiceLogin?hl=en&continue=http://www.google.com/
11. http://www.google.com/advanced_search?hl=en&authuser=0
12. http://www.google.com/language_tools?hl=en&authuser=0
13. http://www.google.com/intl/en/ads/
14. http://www.google.com/services/
15. https://plus.google.com/116899029375914044550
16. http://www.google.com/intl/en/about.html
17. http://www.google.com/setprefdomain?prefdom=DE&prev=http://www.google.de/&sig=0_as31yZm8oj3mZ_rsi73-rQlRfKo%3D
18. http://www.google.com/intl/en/policies/
Conclusion: Only lynx and elinks have a -dump option that works for me. Since
I like elinks's curses UI better than lynx's, and I sometimes use a text browser
for actually browsing websites, not dumping them, I choose elinks as my
favourite. However, that's a matter of personal preference. A big plus for lynx
is that it outputs UTF-8 characters as such.
Cloning the NetBSD version history
2012-09-22 09:48
I'm going to explain how to clone the NetBSD CVS repo, because it's not
completely obvious. Apparently, the methods you can use are cvsup (written in
Modula-3, so you need to install an old version of the Modula-3 environment
just to do this), csup (a cvsup reimplementation in C), or rsync.
The problem here is that you have to pick a mirror and hope (or know) it
works, since the mirror list for mirroring the entire cvsroot (du -sh:
1.5G) instead of just checking it out via anoncvs (du -sh:
6.5G) is not published along with the
other mirrors.
Your best bet is picking one of the CVSWeb CVSup mirrors, because running a
CVSWeb mirror implies having a copy of the entire history. Unfortunately,
cvsweb.se.netbsd.org, the only European option, was so horribly slow that I
had to switch to the American mirror. The official supfile for CVSWeb CVSup
mirrors can be used, but some modules listed in there, e.g. netbsd-syssrc,
seem not to exist. Instead, I used netbsd-src, which worked.
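For reference, a minimal supfile along those lines might look like the sketch below. The host name and local paths are placeholders, not real mirrors; the collection name is the netbsd-src one that worked for me:

```
# Hypothetical minimal supfile; replace host and paths with your own.
*default host=cvsup.example.netbsd.org
*default base=/var/db
*default prefix=/data/netbsd-cvsroot
*default release=cvs
*default delete use-rel-suffix compress
netbsd-src
```

which would then be fetched with something like csup -L 2 supfile.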
For rsync, it's even worse, because the mirrors (are there any?) aren't listed
at all. As phadthai pointed out to me on IRC,
rsync -rltHz --delete rsync://anoncvs.netbsd.org/cvsroot/ /data/rsync/netbsd-cvs/
should do the trick. The usual rsync mirrors don't have the "cvsroot" module,
though.
Some developers are considering switching to some other VCS, which might in
fact make this sort of thing easier.
FreeBSD recently switched to SVN entirely, which doesn't help here at all but
rather makes it harder, since the SVN repository format does not consist of
plain-text RCS files; apparently that is why they still mirror their SVN into
a CVS repo.
DragonflyBSD has been on git for a while, although voices in the NetBSD
community claim that no DVCS at this point handles the huge commit history of
the project very well, which is apparently why they haven't switched to
anything else yet.
I think this decision is right, because breaking consistency now just to break
it again in five years does not sound beneficial.
The key problem, though, is that for ordinary users or infrequent
contributors, CVS has the advantage of not pulling in the entire version
history, which is often unneeded and unwanted. Now, if there were a VCS
(other than CVS) that offered a distributed mode while also supporting a
checkout mode (not git clone --depth=1, which isn't a real checkout mode),
and that could handle the huge backlog of an entire operating system over
some 20 years, maybe that would be an option.
Granted, git would probably shrink the space required for the entire
repository using its built-in compression, but at the end of the day, you
would still need to check it out, which would add another 1.5G.
Update (same day):
sjamaan mentioned in a conversation on IRC that bzr supports plain checkouts
in addition to its distributed default mode of operation.
It's nice to see someone else already thought of this and implemented it.
Unfortunately, bzr would probably barf at the repository size, but I doubt
anyone has tried it yet.
Update (2012-12-13): Because of a security breach,
FreeBSD's CVS and CVSup (and of course also CVSweb) has been permanently taken
offline. Good riddance.
Changing pkgsrc mirrors
2012-10-01 11:03
It's as simple as
cd /usr/pkgsrc
find . -type d -name CVS -exec gsed -i 's%anoncvs@anoncvs3.de.NetBSD.org:/cvsroot%anoncvs@anoncvs.fr.NetBSD.org:/pub/NetBSD-CVS%' '{}/Root' \;
Too bad NetBSD sed doesn't have -i. Maybe %w {}/Root can be used instead, but
I didn't feel like trying. That said, the only remaining German mirror (were
there more?) is rather slow.
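A variant of the same rewrite that needs only POSIX sed, writing through a temporary file instead of any -i flavour, can be sketched like this (the demo tree below is made up purely for illustration):

```shell
# Demo tree: one fake pkgsrc checkout with a CVS/Root file.
mkdir -p demo/pkg/CVS
printf ':ext:anoncvs@anoncvs3.de.NetBSD.org:/cvsroot\n' > demo/pkg/CVS/Root

old='anoncvs@anoncvs3.de.NetBSD.org:/cvsroot'
new='anoncvs@anoncvs.fr.NetBSD.org:/pub/NetBSD-CVS'

# Rewrite every CVS/Root via a temp file; works with any POSIX sed.
find demo -type d -name CVS | while read -r d; do
    sed "s%$old%$new%" "$d/Root" > "$d/Root.tmp" &&
        mv "$d/Root.tmp" "$d/Root"
done

cat demo/pkg/CVS/Root
```

In a real pkgsrc tree you would run the find over /usr/pkgsrc instead of the demo directory.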
Update (2023-02-16):
I was pointed to the fact that the German mirror went down long ago, and in
fact it's been 10 years since the last time I touched this file :)
Regardless, I wrote a little rant about sed -i.
NetBSD-6 apparently had no -i, so I guess in 2012 this wasn't even wrong.
And although the NetBSD sed(1) manual page nowadays seems to be more or less
identical to the one on FreeBSD, people on NetBSD-9 have reported that both
sed -i and sed -i '' work for them?
Maybe NetBSD sed has diverged further from FreeBSD than the manpage makes it
seem... which adds some extra confusion for me. The more you know you don't
know! (thanks to Burnhard and logix on IRCnet #netbsd)
About sed -i
As I learned some time later but never corrected here, FreeBSD sed (as well
as possibly any other sed descended from FreeBSD's, which includes modern
NetBSD) does have -i; I just didn't understand how the option behaves
differently on GNU vs. BSD sed.
GNU sed's -i takes an optional parameter with no space between -i and the
argument, as in -i.orig. In Linux shell scripts it is most often used with no
argument at all, which makes sed keep no backup of the original file.
Compatibility pitfall
If you see something like
sed -i s/bar/foo/ filename
in a shell script originating on a GNU system,
you'll get the following cryptic error message trying to execute it on FreeBSD 12:
bsd$ sed -i s/bar/foo/ .ssh/authorized_keys
sed: 1: ".ssh/authorized_keys": invalid command code .
bsd$
and the fix on FreeBSD is to use instead:
bsd$ sed -i '' s/bar/foo/ .ssh/authorized_keys
bsd$
Likewise, the FreeBSD-style command fails on GNU sed with an equally cryptic error message:
gnu$ sed -i '' -e s/foo/bar/ go.mod
sed: can't read : No such file or directory
gnu$ sed -i '' s/foo/bar/ go.mod
sed: can't read : No such file or directory
gnu$
while this works:
gnu$ sed -i -e s/foo/bar/ go.mod
gnu$ sed -i s/foo/bar/ go.mod
gnu$
On FreeBSD sed, the parameter is not optional, and because an empty string
cannot be passed to an option via getopt(3) unless it is a separate command
argument, the GNU shorthand of passing a plain -i to make sed keep no backup
file does not work on FreeBSD.
I'm not sure what GNU sed does to get this special behaviour out of getopt,
because merely declaring -i as an option taking an argument instead of a flag
probably isn't enough...
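For what it's worth, glibc's getopt does support exactly this: a double colon in the optstring ("i::") marks the option's argument as optional, and an optional argument is only recognized when it is attached to the flag. Assuming the util-linux getopt(1) tool is installed (it exposes the same parser), the behaviour can be sketched from the shell:

```shell
# "i::" declares -i as taking an OPTIONAL argument, which must be
# attached. A suffix glued to the flag is picked up as its argument...
getopt -o 'i::' -- -i.orig 's/a/b/' file
# ...while a detached word stays a plain operand and -i gets an empty
# argument, mirroring GNU sed's no-backup mode.
getopt -o 'i::' -- -i 's/a/b/' file
```

So the trick is in the optstring syntax, not in any post-processing of the flag.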
What this means for the original problem
Using only FreeBSD/NetBSD sed, a command to switch pkgsrc mirrors, with a bit
more progress information thanks to -print, would be:
bsd$ cd /usr/pkgsrc
bsd$ find . -type d -name CVS -print -exec sed -i '' 's%anoncvs@anoncvs3.de.NetBSD.org:/cvsroot%anoncvs@anoncvs.fr.NetBSD.org:/pub/NetBSD-CVS%' '{}/Root' \;
If you're not bothered by having a bunch of .orig files lying around, you can
also use -i.orig, which I guess should work everywhere.
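Building on that, a small helper that should run unchanged under both GNU and BSD sed is to always pass an attached backup suffix and delete the backup afterwards. The function name and demo file below are made up for illustration:

```shell
# sed_inplace: portable in-place edit. "-i.bak" (no space) is valid
# for both GNU sed (optional attached argument) and BSD sed
# (mandatory attached argument); the backup is removed afterwards.
sed_inplace() {
    expr=$1 file=$2
    sed -i.bak "$expr" "$file" && rm -f "$file.bak"
}

printf 'hello foo\n' > demo.txt
sed_inplace 's/foo/bar/' demo.txt
cat demo.txt
```

The obvious cost is a brief window where a .bak file exists; the helper edits one file per call, which matches how find -exec would invoke it anyway.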
OpenBSD
For the sake of completeness: OpenBSD sed, if it behaves as its manual
describes, seems to mimic GNU sed's handling of the space after -i. If
someone can verify that, please let me know; I don't currently have an
OpenBSD machine.
Other *NIX
You tell me.
Font madness
2012-10-02 07:10
Does anyone reading this have any idea about fonts, fontconfig and Xft?
Currently, I am facing the following problem:
Using urxvt -fn 'xft:Adobe Source Sans Pro:' renders extremely wide spacing
between letters (Screenshot).
I think some fontconfig XML fu is needed to get rid of this. Any help is
appreciated! (But maybe I just entered the wrong font string.)
Explanation: My laptop has a 1024x768 screen. I am running a tiling WM, and
it would be super awesome to actually use the tiling feature to place two
terminal windows next to each other and have both be at least 80 characters
wide. I consider 80x25 characters the absolute minimum size for a terminal
emulator window, and so, probably, does most software.
Unfortunately, my understanding of fonts and how Xft handles them is very
poor at the moment. I'd like to learn about it, but the documentation is usually
a bit sparse.
Edit (2012-12-13): I realise that the "lazyweb" approach does not
really work when nobody else actually reads your blog...
Edit (2013-02-27): Since Source Sans Pro works on a different machine
with a different font string, I assume I indeed entered the wrong one. Sparse
documentation aside this particular post was complete bogus, so you may wish to
ignore it. It's still here because I dislike removing something I already
published. Just be warned that I'm stupid and make mistakes.
$Id: blargg.xhtml,v 1.74 2023/02/16 15:57:33 mw Exp mw $