

April 10, 2014

Watching a ted talk on my phone this morning, it occurred to me: maybe we can fix software patents by arguing they shouldn't apply to public domain software?

Establishing that public domain source is not appropriate material for patent coverage is a narrow exception, and legally that _was_ the case before the Apple vs Franklin decision applied copyright to software in 1983. The revival of the public domain should mean a return to unpatented status for that category of software. This used to be normal: by copyrighting software, people opened themselves up to other IP claims on it. Removing copyright should also remove the patent attack surface.

We can of course push to get a law passed to that effect (and should try), but there are also historical and constitutional arguments which can be made through the courts. The constitutional purpose of patents, to promote the progress of science and the useful arts, is better served by open source software than by the patent system. The tradeoff for patents was documenting the invention in exchange for a limited monopoly; open sourcing the code is the best possible documentation of how to do it, and by doing so the authors give up any proprietary interest in the code. How do you have grounds to sue them over code they have no proprietary interest in?

The reason to say "public domain" instead of "open source" is partly that open source is difficult to legally define: the Open Source Initiative couldn't even defend a trademark on it. Microsoft released the old dos source with a clickthrough agreement preventing reposting, is that "open source"? GPLv2 and GPLv3 are incompatible, and neither can contribute code to an LGPL project, so how much "project X can't use my code" is allowed while still being open source? Does the GPL's "or later" clause invalidate the defense if a hijacked FSF could release a future version with who knows what clauses in it? Does the Artistic license qualify? What about the licenses where anybody can use it but this one company gets special privileges, ala the first draft of the Mozilla license?

Public domain hasn't got that problem. It avoids the whole can of worms of what is and isn't: the code is out there with zero restrictions. The price for freedom from patents should be zero restrictions: if the authors have no control over what people can do with it, why should uninvolved third parties have a say? Ideally the smooth, frictionless legal surface of the public domain should go both ways.

That's the constitutional argument: freely redistributable, infinitely replicable code serves the stated constitutional purpose of copyrights and patents better than patents do. Releasing your copyrights into the public domain should also prevent patent claims on that code.

The historical reason to say "public domain" instead of "open source license" is possible legal precedent: back when software was unpatentable, it was also uncopyrightable. An awful lot of public domain software used to exist, and when people added copyrights to it, they opened it to patents as well. Software that _isn't_ copyrighted, historically, also wasn't patented. If somebody tries to enforce patents against public domain software, we can make a clear distinction and ask a judge to opine.

(The Apple vs Franklin decision went in Apple's favor because Franklin looked bad. There was clear and obvious copying going on, Franklin took Apple's work and profited from it. If a patent troll or large for-profit company sues a public domain open source project, they'll look bad. If we can say "these other patent suits were against copyrighted software, this is public domain software, we _can't_ profit directly from this", it might be a legally significant distinction.)

A few obvious objections:

The old "math isn't patentable" arguments once held sway. They got abandoned about the time the public domain was abandoned (with the apple vs franklin decision and the rise of proprietary software), because once math was _copyrightable_ it was a short step to making it patentable. We can position this as a return to a historical position, possibly even an unlitigated corner case that was ignored by those original decisions in the rush to resolve _competing_ IP claims, not jurisdiction over a _lack_ of IP claims.

Doing so doesn't threaten the business model of anybody actually _doing_ anything. Microsoft sucked in the BSD network stack and happily profited off it for years. Despite this, BSD survived long enough for Apple to suck in the WHOLE of BSD a decade later. (And yet BSD still exists...)


April 2, 2014

Hmmm, glitch with toybox when cross compiling: it builds an a.out with the cross compiler then tries to run it (to probe the value of O_NOFOLLOW). I think the reason for this is that some recent-ish variant of Ubuntu was refusing to #define it without the _GNU_DAMMIT micromanagement symbol, or some such? I forget, and the checkin comments aren't particularly revealing. I haven't been blogging as much as I used to, and it's hard to search your notes-to-self if you didn't write them down in an obvious place. Hopefully now that I've gotten kernel documentation maintainership (and the corresponding frustration) handed off, I'll get back to that.

Anyway, put it back in portability.h with an #ifndef to set it to 0 if the headers haven't got it. (That's also the failure mode of the cross compiling case: if O_NOFOLLOW isn't there, we can't use it.)
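
I.E. something like this (a sketch; toybox's actual header might differ slightly):

/* If the libc headers don't provide O_NOFOLLOW, define it to 0 so the
   open() flags still compile everywhere. The cost is silently losing
   the "don't chase symlinks" check on libcs that haven't got it. */
#ifndef O_NOFOLLOW
#define O_NOFOLLOW 0
#endif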

But first, I need to fix the date command, which broke when I added checking for invalid inputs: sscanf() returns number of entries read, not number of bytes consumed. If you're reading an unsigned 2 digit number (%2u) and get "1Q", the number of elements read won't help (it got a 1 and set it, it's happy), you need %n to say the position it ended parsing at...
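
A standalone demo of the gotcha (sketch code, not the actual date.c):

#include <stdio.h>

int main(void)
{
  unsigned u;
  int len = 0;

  // Parsing "1Q" as a 2 digit number: sscanf() returns 1 because %2u
  // happily consumed the "1" and stopped. Only %n (which records the
  // offset where parsing ended) reveals we didn't get two digits.
  if (sscanf("1Q", "%2u%n", &u, &len) == 1 && len == 2)
    printf("valid: %u\n", u);
  else printf("invalid input, parsing stopped at offset %d\n", len);

  return 0;
}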


April 1, 2014

The next Linux Luddites is up, with replies to my interview last time, and one of the questions was about BSD kernels. My own flirting with PCBSD was ambiguous (got it installed, but the package management system didn't want to work, and without that the toolchain was missing Important Bits). Still better than FreeBSD, NetBSD, or OpenBSD managed.

So I pinged a BSD developer on twitter about kernel building, and she pointed me to the BSD source and instructions.

Something to poke at...


March 31, 2014

Dalias pointed me at tinyssh, which has no source or mailing list yet, but is somebody poking at building ssh out of the NaCl public domain crypto library. So I looked into NaCl. (Pronounced "sodium chloride".)

It's _hilaribad_. Dan Bernstein is involved, so you know this library hates you, personally. But I expected basic competence. (Yeah, I dunno why either.)

The library download is an http URL. If I change that to https, it's a self-signed key. There are no signatures for the tarball on the website (md5, sha1, sha2, gpg, nothing).

I complained about this on twitter, and Dan Bernstein replied that anybody wanting to inject flaws into the tarball would have no trouble subverting the https registrars, and that's why they don't even bother trying.

That's presumably also why there are no signatures on the website so you can verify the tarball after download is the one the developers think they wrote. Further exchanges with other NaCl users were about how "delegated trust" is bad in the absolute sense, so what you must do is read the code and become such an expert that you yourself can detect things like the recent subtle iOS and gnutls flaws that the maintainers of the relevant projects themselves didn't spot for years. And if you can't do that, you have no business using Dan Bernstein's code.

This is why I don't use code written by Dan Bernstein. I'm sure he's an excellent crypto researcher and/or ivory tower academic, but as a software project maintainer he's deeply annoying. And why I've gone back to poking at libtomcrypt, which is also public domain, and which I can get a known copy of through dropbear to compare against other versions. (Maybe dropbear was compromised years ago, but a lot more people have looked at that, and I can diff against a known base to see what changed. And the maintainer hasn't expressed incredulity about why I might want to do that, or suggested that only people capable of writing this code are ever qualified to use it.)


March 27, 2014

Finally emailed Randy Dunlap asking if he wants kernel documentation maintainership back. Off-list, because I don't need the drama. It's not "Mom, James Bottomley was clueless at me!" and it's not that the kernel.org guys might as well be cardboard cutouts. It's that I have too many other things to do with my time, which are more important and _way_ more fun.

If I want to engage with a bureaucracy, I _am_ still subscribed to the posix standards committee mailing list and there are Things They Should Do. Both tar and cpio _used_ to be standardized (last appearing in the 1997 version, SUSv2) and they need to come back and be modernized; admit "pax" was a mistake nobody uses. Add "truncate", which Linux picked up from FreeBSD over 5 years ago. Explain what happens to getline()'s lineptr field when you pass in NULL requesting it allocate memory but the read fails: does it reliably stay NULL, does it return a memory allocation you need to free even in the failure case, or does it change the pointer but free the memory itself so you DON'T free it on failure? The last one seems unlikely if it's doing realloc() but I can't quite rule it out...

The posix committee at least never _claimed_ to be a hobbyist endeavor, so if nothing else they're not being hypocritical about it.

Looking back, I can see Linux development "selling out" at least as far back as 2007, where IBM's needs trumped Linux on the Desktop's needs because IBM was just coming off its billion-dollar-a-year investment in Linux, and they wanted to follow the money. Red Hat had retreated up to the "Enterprise" market, eating Sun's lunch, so who cared about something silly like Knoppix? What was important was who paid developer salaries! I found that creepy.

But I thought there was at least a remaining _role_ for hobbyists until now. Live and learn: it's corporate all the way down now. Forms and procedures so they can categorize your submission's progress through the kernel development process. It's all tracking and delegation and risk management assessment now, collecting required approvals. They have procedure update procedures (discussed at the kernel summit). Multiple submission policy documents (in triplicate, and that's before you get to semipractical details like security or coding style). There's even a checklist (currently 26 steps).

The bureaucracy isn't paralyzing yet. But if you're wondering why there are no more hobbyists joining in...


March 26, 2014

I got cpio.c cleaned up and promoted out of "pending", but haven't done a proper test suite yet, and only really tested the extraction side (as non-root). I have a longish todo list for it, including teaching it to understand the kernel's initramfs creation syntax.

The kernel's scripts/gen_initramfs_list.sh and usr/gen_init_cpio.c respectively create and consume a file list where each line has not just a filename but type, ownership information, and so on: all the stat info supplied inline, so the files themselves only need to be read for their contents. This is the same general idea behind squashfs -p or genext2fs -D, both of which let you specify filesystem entries (such as device nodes) that you can't actually create without root access. This ability to supply extra data lets you create root filesystem images without running as root.

This is really useful, and the data format's been stable for just under a decade (symlink, pipe, and socket support was added in January 2005).
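
From memory, the file list format looks something like this (the comment block in usr/gen_init_cpio.c is the authoritative syntax):

# dir <name> <mode> <uid> <gid>
# nod <name> <mode> <uid> <gid> <dev_type> <major> <minor>
# slink <name> <target> <mode> <uid> <gid>
# file <name> <location> <mode> <uid> <gid>
dir /dev 755 0 0
nod /dev/console 600 0 0 c 5 1
slink /bin/sh busybox 777 0 0
file /bin/busybox initramfs/busybox 755 0 0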


March 13, 2014

Linux Weekly News's article about development statistics for Linux Kernel version 3.14 says "The number of contributions from volunteers is back to its long-term decline." and later "Here, too, the presence of volunteer developers has been slowly declining over time."

Gee, whoda thunk?


March 12, 2014

Got Isaac's cpio -d patch handled, and now I'm cleaning up the rest of cpio. The vast majority of Isaac's patch factored some common code out of mkdir (albeit in a way that subtly broke mkdir so I had to reimplement it), but as long as we're touching cpio, it's not actually that big...

I'm making an executive decision that cpio belongs in the posix directory, because it _was_ in posix. Just not posix-2008. It was in the standard through SUSv2, and posix-2001 removed it just about the time that RPM and initramfs and such started heavily using the format. (The same thing happened to "tar", although that was even more widely used for longer.) Both were deprecated in favor of Sun Microsystems' "pax" command, which nobody uses for anything, and which I have no interest in implementing.

I am a bit concerned that cpio has 8 hexadecimal digits for date: that's a 32 bit value and thus the 2038 problem. Ok, interpreting it as unsigned gives us until 2106, about 68 more years, so it's not an immediate problem. But still. I should poke the initramfs guys and go "huh"?

Unspecified posix question du jour: if I feed a pointer containing NULL as the first argument to getline() (which posix 2008 says tells it to allocate its own buffer), and the read fails (function return value -1), does it still write non-NULL into the pointer in that case, and if so is it a still-valid memory allocation I'm responsible for freeing?
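
Until the standard pins that down, the defensive call pattern assumes the worst of both readings (a sketch, not a statement about what any particular libc does):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

// Paper over the ambiguity: if the read fails, free whatever non-NULL
// pointer getline() may have written back (glibc can leave a live
// allocation there on failure, and free(NULL) is a harmless no-op).
ssize_t getline_paranoid(char **line, FILE *fp)
{
  size_t size = 0;
  ssize_t len;

  *line = NULL;
  len = getline(line, &size, fp);
  if (len == -1) {
    free(*line);
    *line = NULL;
  }

  return len;
}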


March 8, 2014

This morning a little programmed phone alarm reminded me, one hour before the fact, that I was on a podcast! (Ok, that's an oversimplification, but we'll run with that, shall we? I did eventually remember what it was for.)

Alas, I _meant_ to set up skype and garageband on Fade's computer a week ago, when she was still here. Of course doing a skype password reset meant downloading my 5000 pending emails (I apparently hadn't checked my email since _last_ weekend), but it didn't quite take the full hour to get to the end of it through pop3. (Thunderbird and gmail have conflicting hardwired assumptions about folder layout, using pop bypasses these irreconcilable differences.)

Anyway, we got it to work and we talked for an hour and change, so presumably when Linux Luddites episode 11 comes out, I should be on it. Woo! (I'd trust them to edit me down to something coherent, but they say that their editorial policy is just to cut out pauses. Not sure that's ideal in my case, but oh well.)

Meanwhile, in that giant pile of email, amongst the endless flood of "every kernel patch series that includes a documentation component, plus all ensuing discussion of said patches" (which the kernel's find maintainer script says I should be cc'd on, and then the kernel social norm is to reply all), there were actually interesting things!

It turns out there are prebuilt ellcc binaries. (Somebody emailed me about it on Tuesday; I'd say who, but my current email reading solution is X11 forwarding over ssh from a machine that isn't world accessible, so having internet on my phone doesn't help when I'm out. Forwarding an ssh port to it is a todo item, not helped by the fact that the box with the working mail client is on dhcp and its address changes weekly. You probably did not need to know this.)

Anyway, I downloaded these tarballs, tested one to see what its file layout looked like (I have _learned_ that even though the unix/linux norm is "tarballs extract into a directory with the same name as the tarball" nothing actually _enforces_ this, and indeed this tarball didn't do that), found out it was creating a "bin" and "libecc" directory, and went "hmmm", because how do the files in "bin" find that libecc? Do they find the directory their binary is in and look in ../libecc?

The answer, from experimentally building hello world, is "no, they don't":

$ bin/ecc -v hello.c
clang version 3.5 (trunk)
Target: x86_64-unknown-linux-gnu
Thread model: posix
Found candidate GCC installation: /usr/lib/gcc/i686-linux-gnu/4.6
Found candidate GCC installation: /usr/lib/gcc/i686-linux-gnu/4.6.3
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.6
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.6.3
Selected GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.6
"/home/landley/ellcc/x86_64/bin/ecc" -cc1 -triple x86_64-unknown-linux-gnu -emit-obj -mrelax-all -disable-free -main-file-name hello.c -mrelocation-model static -mdisable-fp-elim -fmath-errno -masm-verbose -mconstructor-aliases -munwind-tables -target-cpu x86-64 -target-linker-version 2.23.2 -v -resource-dir /home/landley/ellcc/x86_64/bin/../libecc -internal-isystem /usr/local/include -internal-isystem /home/landley/ellcc/x86_64/bin/../libecc/clang -internal-externc-isystem /usr/include/x86_64-linux-gnu -internal-externc-isystem /include -internal-externc-isystem /usr/include -fdebug-compilation-dir /home/landley/ellcc/x86_64 -ferror-limit 19 -fmessage-length 79 -mstackrealign -fobjc-runtime=gcc -fdiagnostics-show-option -vectorize-slp -o /tmp/hello-dd5e79.o -x c hello.c
clang -cc1 version 3.5 based upon LLVM 3.5svn default target x86_64-unknown-linux-gnu
ignoring nonexistent directory "/include"
#include "..." search starts here:
#include <...> search starts here:
/usr/local/include
/home/landley/ellcc/x86_64/bin/../libecc/clang
/usr/include/x86_64-linux-gnu
/usr/include
End of search list.
"/usr/bin/ld" -z relro --hash-style=gnu --build-id --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o a.out /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../x86_64-linux-gnu/crt1.o /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../x86_64-linux-gnu/crti.o /usr/lib/gcc/x86_64-linux-gnu/4.6/crtbegin.o -L/usr/lib/gcc/x86_64-linux-gnu/4.6 -L/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../x86_64-linux-gnu -L/lib/x86_64-linux-gnu -L/lib/../lib64 -L/usr/lib/x86_64-linux-gnu -L/usr/lib/gcc/x86_64-linux-gnu/4.6/../../.. -L/lib -L/usr/lib /tmp/hello-dd5e79.o -lgcc --as-needed -lgcc_s --no-as-needed -lc -lgcc --as-needed -lgcc_s --no-as-needed /usr/lib/gcc/x86_64-linux-gnu/4.6/crtend.o /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../x86_64-linux-gnu/crtn.o

What the New Jersey?

It's... finding gcc on the host. Why does it _care_? It's then using /usr/bin/ld (which is not the bin/ecc-ld linker) and calling it against host headers and libraries. So that's totally derailed at that point.

I tried adding --sysroot both with $PWD and with $PWD/libecc, and both times it died saying it couldn't find stdio.h. Looking _into_ libecc there's a Makefile in there (?) but it seems like there are headers and library binaries in there too? Sort of? (It's a multilib setup, which I generally avoid, but this is a compiler that supports every target in one binary. How, I'm not quite sure. What you set CROSS_COMPILE= to when building with this, I dunno. But I have to get it to work on the host before worrying about that...)

This worked for somebody, so having it is progress. It's just not as much progress as I'd hoped.

The ironic part is the obvious way forward here is for me to finish the ccwrap rewrite, and task it with wrapping THIS compiler so I can tell it where to find its darn headers and libraries and give it the --nostdlib --nostdinc stuff so it ignores the ones on the host. :)

So... back to what I was doing then.


March 7, 2014

I meant to add many, many links to the previous two blog entries, but the problem with "furious apathy" is oscillating between doing WAY TOO MUCH and not being sure I care at all.

As Zaphod Beeblebrox said, "Nuts to your white mice."


March 6, 2014

Work's picking up. When I interviewed for this job I actually interviewed for a QA position on the theory it was something I hadn't really done before, and now they've set up a spare BVT (Build Verification and Test) machine, and I'm slowly picking through a giant python 2.4 test suite on top of my ongoing poking at buildroot.

This means work isn't just taking up most of my time and energy, it's actually taking up a certain amount of headspace because I'm learning stuff and doing designs and plans, which the previous janitorial work didn't stress so much.

On the one hand, this is sort of energizing for my open source work. (Nothing quite so draining as enforced boredom.) On the other hand, it's making me LESS interested in trying to make nice with James Bottomley. His day job is doing Linux kernel stuff. Mine is not, and he's made sure I won't find it _fun_ either.

I've got plenty of other hobbyist programming that IS fun. Wasting time on kernel documentation stuff is drudgery I don't get paid for, and now that Bottomley was kind enough to clarify that _nobody_cares_, and in fact the kernel guys can't comprehend the idea of anyone NOT having a day job doing kernel development (I have yet to _look_ at kernel code for this job, it's all userspace; Jose does the initial board bringup and the broadcom or San Jose guys handle driver issues), it's probably time to hand it off.

Yes, I _can_ persist through things like the 5 years necessary to get the perl removal patches in, or the giant gap between this and the initmpfs patches going in. But mostly I don't care enough to bother. I only submitted miniconfig upstream three times, and even though other people still find it useful enough to namecheck in the docs today (No, I didn't add that) and a few years later it was the obvious solution to Linus's arm defconfig woes... all I really care about is that it works for me. Sure, I'll tell other people how to use it when it seems useful, but if the kconfig maintainer says no to it and tells me to go off and do an order of magnitude more work before he'll even pay attention again, upstream can go hang. (I'm aware that guy is no longer maintainer. Just as Paul Mundt is no longer sh4 maintainer. They "got over it", and I'm still using the solutions that they rejected. And I'm off to do other things...)

(Yes, I gave a talk a few years ago explaining bounceback negotiation in free software projects. I understand what they're trying to do. But the threshold of dedication they expect from people is way beyond hobbyist and somewhere between "this is my day job" and "cult status".)

My kernel documentation todo list is all things like try yet again to collate the arch directories, because after it got bikeshedded repeatedly it fell down my todo list. Moving that many files does require a git tree, because the gnu/dammit patch doesn't understand the move file syntax git introduced, but after the kernel.org guys went "of course you upload websites in git, just like you browse the web using git" I decided not to set up a public git tree until I get rsync back. Since they've made it clear the current kernel.org admins aren't actually capable of doing that, even if they wanted to... (Despite me pointing them at the appropriate ssh passthrough syntax for restricted rsync from an Ohio LinuxFest security presentation...)

Heck, even trying to filter out the device tree stuff in the Documentation maintainers entry got bikeshedded enough that I wandered away and lost interest. (I was also trying to filter out the internationalization directories, which I argued against including but was overruled by somebody who doesn't speak Chinese either. Endless todo items...)

My non-kernel todo list hasn't gotten _shorter_ since the busybox days. I have tons of things other than kernel development that I'd like to do. Making time for linux-kernel documentation was public service, one that makes it significantly harder to read my email.

In that context, James questioning my commitment to sparkle motion because I'm not putting in as much time as _he_ does (with his full-time job at Parallels working on the kernel), calling me weird for being too stubborn to persist with this Linux thing in the face of Windows or iPhone's continuing market dominance (I.E. weird for not getting over things and moving on)... fundamentally being offended that this is _not_ my day job and might compete with other things for my hobby time? How dare I _volunteer_? Nobody does that anymore, they're all PAID to do it...

If that's what kernel development's come to, he's right. I do have other things to do with my time.

(P.S. I thought it was a bad sign when the kernel guys did a whole "Is there still such a thing as a kernel hobbyist? Let's find such a unicorn and sponsor their trip to LCA!" And then Philip Lougher's horror story where he took a year off from work to finally get squashfs merged did _not_ win him the trip; as far as I can tell nobody noticed it. No wonder Con Kolivas flounced (although he seems to be back lately). No wonder the average age of kernel developers is Linus's age and rising: no new hobbyists in the past decade, and people like me are replaced by people working on Oracle's fork of Red Hat Enterprise...)

Anyway, if you wonder why I haven't been able to politely reply to James Bottomley's questioning my commitment to sparkle motion... this is the tip of the iceberg of the _anger_ that comes out on the topic.

I wouldn't be angry if I didn't care, but I'm working on that.


March 5, 2014

Musl-libc is in feature freeze for 1.0, meaning I spent most of last night on irc with the maintainer working out the wording of a press release to announce it to the world. (I'm pretty sure the phrase "for immediate release" is in Linux Weekly News' spam filter, but Rich insisted.) I learned the difference between marketing and sales many years ago (and that I can do marketing pretty well, but can't close to save my life), so I worked out their initial marketing plan and now we're digging it up for 1.0.

My todo items from this are to bring the wiki's current events page up to date (basically another kernel-traffic variant like the qemu weekly news I tried to do for a while), and rewriting ccwrap.c to work with musl so I can port Aboriginal Linux from uClibc to musl.

Over on the aboriginal side, I'm way behind on releases (largely due to the sparkle motion thing making me not want to look at the new kernel). I decided to skip a release, but the next one's coming up, and I still need to fix powerpc. A week or so back the musl guys asked me for an sh4-strace binary, which needs to build natively. The sh4 emulated board is crap (64 megs ram, one disk, and if you hit ctrl-c it kills the _emulator_ rather than passing it through). I made an ext2 sh4 root filesystem with 2 gigs of extra space to combine my /dev/hda and /dev/hdb into one disk, then added a 256 meg swap file to overcome the insufficient memory thing, then wget'd the static-tools.hdc image and loopback mounted it. At that point the build failed because the board doesn't emulate a battery backed up clock so the clock thinks it's 1990, meaning make complains all the file dates are in the future. When I tried to set the date by hand I found a bug in the toybox date command, so I need to fix that. (Meanwhile the musl guys got their sh4 port largely complete without me, using the last aboriginal sh4 release image. But still: I should finish that up. Oh, and the sh4 kernel config forces EXPERT, which causes collateral damage I need to fix up too, by ripping "select EXPERT" out of the sh4 kconfig.)

The big aboriginal todo item is the ccwrap rewrite so I can port aboriginal's toolchain to building everything against musl. (Yes, ellcc remains a todo item, but the build breakage there goes pretty deep.)

Meanwhile over in toybox I'm working on the deflate compression side, because I don't want to ship with half a gzip implementation (sflate) I'm not going to keep. The Japanese guys have shown they'll happily use and become dependent on code out of "pending" that's default n in the config, so if I'm going to swap implementations I want to do it before the release. (I'm also partway through adding grep -ABC, need to rewrite the cpio -d patch, and so on. Figure out which of those go in the toybox release I need to cut to include in the Aboriginal Linux release.)

Oh, and one of the busybox guys emailed me to ask me to update the busybox binary images for the current busybox release, which is also sort of blocked on getting an aboriginal release out. (New aboriginal uses new busybox, I usually build binaries with that. But I might just do a one-off by hand with the old release images to get it off my plate.)

I'm probably behind on the toybox and aboriginal mailing lists again, but since Sparkle Motion I've only been able to stomach reading my email once or twice a week, because 95% of what I have to wade through there is irrelevant kernel documentation crap that I can't just _ignore_ but have to filter for bits to go upstream. Any patch series that includes a documentation component cc's me personally on the entire series AND the ensuing discussion, and that's something you brace yourself to wade through at the best of times. And of course getting documentation on a topic you know nothing about and having to _evaluate_ it requires more focus and study time than I usually have when I'm so tired I can't do anything more productive than catch up on email...

I also need to renew my domain registration (expires on the 11th) but I don't want to just renew it, I want to move it to dreamhost (which throws in a free domain with web hosting anyway) and that _also_ involves reading documentation (on both the old and new services) to unlock and transfer the domain without bricking my website and email. Might wind up just paying the old guys another year to not have to deal with it right now, but I'm trying not to do that.

Oh, and I have to set up skype and a recording thing on Fade's macintosh because some guys in... ireland? want me on a podcast this weekend.

Anyway, that's the immediate, time-critical stuff. I think.


March 4, 2014

Today, I remembered my netbook's power cord. And getting logs on my netbook turned out to be approximately as time consuming as I expected, not just because it's slow to build (a full target build on the netbook takes most of an hour). No, it's because development involves iteratively answering the question "_now_ what have I screwed up?"

Forgetting to pass a mode to the open() of the log file so all opens after the first fail because the stack trash it used as permissions for the newly created file didn't have the write bit set. Doing a build for an architecture that doesn't currently compile because I'm in the process of redoing its config to not force "EXPERT" and it turns out there's kernel version skew in the patch that applies that. Logging just the after without logging the before command lines. And a typo in ccwrap that breaks the build didn't get noticed until the end of simple-cross-compile.sh, _twice_, and then I had to redo it with CPUS=1 because the before and after sequences aren't stable otherwise and it's kinda important to match them up...
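
The first of those in miniature (hypothetical log path; the varargs gotcha is generic to open()):

#include <fcntl.h>

// open() only reads a third (mode) argument when O_CREAT is set, and
// nothing forces you to pass one: leave it off and the new file's
// permissions come from stack/register garbage. If that garbage lacks
// the write bit, every later O_WRONLY open of the same file fails.
int open_log(const char *path)
{
  return open(path, O_WRONLY|O_CREAT|O_APPEND, 0644);  // explicit mode
}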

Three minutes of fixing the last bug, start the build over from the beginning, go do dayjob for an hour or more until i get a break, check the log, three minutes of fixing the next bug, rinse repeat...


March 3, 2014

Banging on ccwrap, actually debugging the build in place is kinda horrible (especially on the netbook), so I've come up with the idea of logging the before and after gcc command lines, and running the 'before' through the new ccwrap and having it print out the new 'after' instead of running it, and then I can compare the files. There's a gazillion other fiddly bits (such as environment variables), but it's a start.

At least that's what I _would_ have worked on today if I hadn't forgotten to bring my netbook's charger. (Enough battery for the bus ride(s) in and the bus ride(s) home, but not enough to leave it running a long build while I'm actually at work...)



March 2, 2014

My new phone has netflix, which presents the same problem as the nightly netflix watching with Fade: if I'm programming, I want some background noise but not something hugely distracting.

Which is why I'm currently re-watching the Disney tinkerbell movies (which I have in fact already seen with the niecephews).

You've gotta wonder about the ecological catastrophe that required all these manual fixups of that parallel earth's biosphere. Luckily there was a friendly alien race around to leave a colony to do just that. (Presumably out of guilt from having contaminated our biosphere in the first place? Dunno, they seem to have regressed a bit, maintaining some very user-friendly nanotechnology but not a whole lot of actual records...)

(In other news, George Carlin's 1978 HBO special wasn't as funny as his later HBO specials. Presumably it's something HBO acquired later rather than something that existed there in-situ...)

(Possibly I'm not being as un-distracted as was the intent...)


March 1, 2014

Deflate compression side is eerily familiar. I've written this code before. (In Java! In 1996. Ported from info-zip.)

Corner cases I need to add tests for: gunzip -S "" blah, gunzip .gz, gunzip ..gz, touch ook && gunzip ook.gz...

That first one, gunzip prompts to overwrite and if you say y it deletes the file. That's nice of it. I notice that -f doesn't force it to decompress an unknown suffix.

I'm sort of tempted for the "gunzip .gz" case to produce "a.out", on general principles.


February 22, 2014

Chipping away at the email backlog. Still not coming up with a civil answer to James Bottomley's sparkle motion thing. I _am_ coming up with a long list of other things I want to do that's convincing me bothering at all with kernel documentation is a complete waste of time.

Started to send a message to the list describing my solution to the "multiple commands with different command line option parsing in a single C file" problem, and during the writeup realized I hadn't solved the entire problem and I have to redo more of the header file generation. (Disabled FLAG_x symbols need to be present but defined to 0 to avoid build breaks. Right now if the command is disabled the flag block isn't present in the header, so the clean happens but the new command's definitions don't.)

It would also be nice if all CLEANUP_ blocks preceded all FOR_ blocks, because otherwise if they've both got a FLAG_b it clashes. My original idea was that command code goes in alphabetical order within a file, because that's the order the stanzas occur in, so a CLEANUP_A FOR_B pair will work if A comes before B alphabetically (see the sketch below).
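
To illustrate the ordering constraint, a sketch with two hypothetical commands sharing one file (macro names per the scheme above; not compilable outside toybox's generated headers):

#define FOR_alpha
#include "generated/flags.h"  /* defines FLAG_b et al for alpha */

/* ...alpha's code using its FLAG_b... */

#define CLEANUP_alpha
#define FOR_beta
#include "generated/flags.h"  /* #undefs alpha's flags, defines beta's */

/* ...beta's code, whose FLAG_b means something else... */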

On the one hand, that's annoyingly subtle. On the other, it's a pain to teach mkflags.c to cache output. On a third hand (or possibly tail), I'm not sure if complicating the build infrastructure or complicating the code is worse, it's one of them tradeoff things without an obviously superior way to go...

It's always the near ties, where it probably doesn't hugely matter which one you pick because both suck indistinguishably equally, that are hardest to decide. Precisely because neither _is_ clearly better...


February 21, 2014

Blah, what have I done this month.

So after the last entry before the big gap, where a kernel developer questioned my commitment to sparkle motion (how _dare_ I not have a day job working on this stuff, and have multiple other things competing for my hobby time), I pretty much stuck my fingers in my ears on email and did other things.

One such thing was writing that new inflate implementation from scratch. Took a week to work out some details (at one point there's a huffman code table used to decode two other huffman code tables, and there doesn't seem to be an obvious _name_ for this meta-table, so the documentation talking about it is unnecessarily vague) and then another couple weeks to debug it (fun fencepost error in my input bit buffer, got the static huffman literal and distance tables swapped, the usual teething troubles). But I could do that on my netbook without even internet access.

The largest gzip file I had lying around was the gnu/dammit patch source tarball I needed to reproduce a bug last year, and wow is the deflate encoding in gnu stuff crazy. Almost every encoding block is followed by an unnecessary zero length literal block, for NO REASON.

I have two wild guesses about the reason behind this crazy:

1) The pipe through gzip works as a streaming protocol, and it needs to flush the output data promptly when the input stops writing (so if you pipe ssh through gzip, when you type a command and hit enter you want it to execute immediately, not when you've typed another 64k of commands. Is there some sort of Nagle thing in there?) And when tar pipes data through gzip, it's in this mode, treating each short read as an explicit flush, which is marked by these zero length literal blocks to make sure the far end knows.

Of course this is a horrible thing to do for PERSISTENT storage, you want a tarball to be optimized and explicit blocks that store no data are clearly wasted bytes. And you can probably tell this mode is not appropriate when the input isn't a terminal or similar...

2) It's some setup to let you decompress in parallel? Scan ahead in the data to find the start of the next block? You'll have a byte aligned "0x0000FFFF" at each literal block. (That's the wire format of an empty stored block: a 3 bit header, padding to the byte boundary, then a 16 bit length of 0x0000 and its one's complement 0xFFFF.) In theory you could have that as a false positive in the data, and there aren't per-block checksums to show the next block is valid, but there are a couple of ways to deal with that: 1) when you've finished decompressing the previous block, check whether it ends where you thought the next block started, and discard any speculative decompression that another block overruns. So output's a bit serialized, but that's I/O bound anyway. 2) There are a number of ways decoding can error out, and that shows it's not a real block. The huffman table symbol decoding needs to add up to a specified length (it's sort of a "hit me/bust/blackjack" thing that should always match the number and not exceed it), and with most huffman tables an encoding can lead off the end of the table. Either way, that's not a valid block.

Next I need to do the compression side...


February 20, 2014

Finally got a doctor's appointment about the way the endless cold went into my lungs a few weeks back. (Might have something to do with inhaling a chunk of potato, but when your chest HURTS on both sides, coughing feels like something snapped in your chest every time, and you're kept awake at night by the crackling noises your breathing is making... yeah, time to talk to a professional. At least by week two...)

They gave me a prescription for ten days of Cipro. I remember when that was the superdrug they gave to all the people mailed weaponized anthrax in 2001. Now it's apparently a first line antibiotic they hand out like candy, because everything's immune to the older stuff. (We've been giving 80% of all antibiotics to animals for decades, they tend to stop working after that...)

The list of side effect warnings on Cipro is nuts. It apparently eats your tendons, and if you exercise while on this stuff they snap. It also makes you sensitive to sunlight (in Texas, that should end well). Oh and it can give you peripheral neuropathy, and trigger depression, because it eats your brain too.

Also, if you complain about the side effects on twitter, the antivax people come out of the woodwork and insist you're somehow just recreationally using antibiotics, and would be better off with the pneumonia, because not dying of simple infections at a young age is unnatural. (Um, yes? Not getting eaten by lions is unnatural. I'm all for it.)


February 6, 2014

I've been sitting on a reply to James Bottomley until I can answer him in a civil manner.

Don't hold your breath.


February 5, 2014

Yesterday's comment about busybox wasn't because I was looking at their deflate implementation (when I did the "bruce didn't build that" analysis back in 2007, it was just a stale version of gzip). It was to see what command line options busybox users decided were an essential subset.

The sflate approach of doing gzip, zlib, zip, and raw deflate in a single binary is clever, but using "-z" to mean zlib and "-p" to mean zip is strange, and "-l" has an existing meaning in the gnu/dammit version of gzip, and "-L" means output the complete copy of the GPL text stored in the gzip binary because the FSF thinks that's a good idea...

The posix-but-dead "compress" command has more freedom of command line options. Compress fell out of use because some idiot asserted a patent on the compression algorithm it used, thus causing users to flee the protocol. In fact Phil Katz released the "deflate" algorithm he did for his original zip implementation gratis after a lawsuit with the guy behind the ARC algorithm. That's why deflate took over, and where the "appnote.txt" file I mentioned earlier comes from. It was Phil's "by all means, clone this, make it a standard and make ARC a historical footnote" writeup of his own algorithm, which took out unix compress for the same reasons.

Zip itself is a combination archiver and compressor, but unix already had an archiver (tar) that glues files together, and then it ran the result through compress to create *.tar.Z files. So unix needed a streaming compressor that _didn't_ try to map the content to a directory of files, which is where both gzip and zlib came from. Those were two independent implementations of Phil's deflate algorithm with different wrappers: gzip using crc32 checksumming and zlib using adler32, with different magic bytes at the start to identify which format it was. (Zip checksummed each file, and the checksum was part of the directory metadata it stored.) So, three formats, and the fourth is just raw deflate output with no wrapper. The magic bytes identifying each format are that zip files start with the ascii characters "PK" (Phil Katz's initials), gzip starts with the 3 bytes 0x1f, 0x8b, and 0x08, and zlib is crazy (first byte & 0x8f = 0x08, second byte can't have bit 5 set because a "preset dictionary" of data you feed into the compressor WITHOUT PRODUCING ANY OUTPUT is just nonsense in a DATA COMPRESSION PROTOCOL and we're not even trying to support that, and the first two bytes viewed as a _big_ endian number must be divisible by 31 even though everything else deflate does is _little_ endian, Because Reasons. When compressing just use 0x78 0xda, but we can't trust zlib itself to produce that because "crazy", above.)
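
Collecting the zlib rules into code, a sketch of the RFC 1950 checks just described (not anybody's shipping implementation):

#include <stdint.h>

// Sanity check the two zlib header bytes: deflate with a <=32k window,
// no preset dictionary, and the header viewed as a big endian 16 bit
// number divisible by 31 (the FCHECK bits are chosen to make it so).
int zlib_header_ok(uint8_t cmf, uint8_t flg)
{
  if ((cmf & 0x8f) != 0x08) return 0;    // CM=8 (deflate), CINFO<=7
  if (flg & 0x20) return 0;              // FDICT: preset dictionary, no
  return ((cmf << 8) | flg) % 31 == 0;   // 0x78 0xda passes all three
}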

So when I inevitably write my own from scratch rather than trying to clean the external submission up some more (I try not to dissuade contributors, but this one wasn't contributed, the contributor instead requested I prioritize adding the functionality, without specifying how)... anyway, having "compress" be the deflate multiplexer probably makes sense.

Which sort of implies I should teach it -Z, since the patent's expired now. Hmmm...


February 4, 2014

The more I read the sflate code, the more I just want to write a new deflate implementation from scratch. It's doing the "switch/case into the middle of loops" thing that the original bunzip did.

I also want to reuse the bunzip bit buffers, but reading the deflate spec everything there is little endian and bzip is big endian. Not just the byte order, the _bit_ order. Adding a test for that to the hot path would not be fun. Haven't looked at xz yet, because it's time to go sit in a cubicle again...
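
The mismatch in miniature (sketch code; reads capped at 24 bits so a 32 bit accumulator never overflows):

#include <stddef.h>
#include <stdint.h>

struct bits { const uint8_t *data; size_t pos; uint32_t buf; int count; };

// deflate (RFC 1951) is LSB first: new bytes land above the bits we
// already have, and codes come off the bottom of the accumulator.
uint32_t get_bits_lsb(struct bits *b, int n)
{
  uint32_t r;

  while (b->count < n) {
    b->buf |= (uint32_t)b->data[b->pos++] << b->count;
    b->count += 8;
  }
  r = b->buf & ((1u << n) - 1);
  b->buf >>= n;
  b->count -= n;

  return r;
}

// bzip2 is MSB first: everything shifts the other way, so sharing one
// buffer between the two means a direction test in the hot path.
uint32_t get_bits_msb(struct bits *b, int n)
{
  while (b->count < n) {
    b->buf = (b->buf << 8) | b->data[b->pos++];
    b->count += 8;
  }
  b->count -= n;

  return (b->buf >> b->count) & ((1u << n) - 1);
}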

Heh. The busybox binary I have on my system (same one I uploaded to the busybox website, I should do a new one but that's in the aboriginal release todo heap) implements gzip and its help doesn't mention -9 but it supports it anyway. (Then again the gnu/dammit version supports -3 even though --help just mentions -1 and -9.) Tricksy hobbitses.


February 2, 2014

So kernel releases require aboriginal releases which require toybox releases, pretty much driving my open source development schedule based on an external calendar.

This time, the hiccup is that the powerpc target broke. I bisected it to commit ef1313deafb7 and got back a "works for me, what toolchain are you using", meaning they almost certainly leaked a new toolchain feature into their build that gcc 4.2.1 (the last GPLv2 release) doesn't have.

And checking my email, that's exactly what happened. And it's not just an extra #define, it's "altivec support", which was a large elaborate patch that I can't use under GPLv2.

The long-term fix is to switch toolchains to ellcc, although they're of the opinion that supporting the 2013 C++ "standard" means rewriting the compiler in that dialect so it only builds with compilers less than 18 months old. This is the sound of a project disappearing up its own ass, but it's that or the FSF so no contest really. (You can get away with a lot and still be less crazy than the FSF.)

The short term solution is #ifdeffery in the kernel headers. I should work on that, but haven't got the heart for it right now. Banging on sflate/gzip instead.


February 1, 2014

Happy birthday to me. I am now The Answer years old.

For dinner we went to "Emerald Tavern", which is a newish place right next door to that Sherlock Holmes themed bar. It's a combination game store, coffee shop, and bar, which sounds like they designed the place with Fade in mind. I had a peanut butter and jelly sandwich run through a panini press. Inexplicably, the place does not sell energy drinks.

I got a new phone for christmas. Nexus 5, and a switch back to T-mobile, this time with the "we acknowledge that tethering your phone is something you will be doing" plan. Did you know Dungeon Keeper is now a free download in the app store? (They try very hard to sell you gems for real money.)


January 31, 2014

In celebration of the fact we now have enough pieces of paper to file taxes, I made rice pudding. (As with so much in my life, it's a Hitchhiker's Guide to the Galaxy reference.)

The stuff's pretty easy to make: 4 cups milk, 1 cup dry white rice, 6 heaping tablespoons sugar, pinch of salt. Boil the lot of it slowly, stirring enough to keep it from sticking to the bottom and dissolving the skin back in (basically every couple minutes), until you've run out of liquid (maybe 20 minutes). Add a shot of vanilla extract (it boils out if you do it at the start), and maybe some raisins if you feel like quoting Better Off Dead.

Went over well with both Fade and Camine, and since making the two of them happy is one of my major life goals, I'm calling it a good day.


January 30, 2014

Finally glued together Szabolcs Nagy's "sflate" deflate/inflate code into a single file I can nail onto toybox and then spend forever cleaning up.

Way back in the dark ages (1997, back when I was working on OS/2 for IBM) I ported the info-zip deflate code from C to Java 1.0, by which I mean I read the info-zip code (and the pkzip appnote.txt file) to understand the algorithm and then wrote a java implementation of said algorithm. It worked, in that the C version could extract the compressed output my Java code produced.

But before I got around to implementing the decompression side, java 1.1 came out with inflate/deflate added to the java standard library (implemented in C and thus several times faster than a native Java implementation), so I abandoned it and went on to do other things. But the important thing is that at one point I did wrap my head around how deflate works, so I wasn't too worried about doing one for toybox. It's just one of those "need to get around to it" things like mount, mdev, toysh, or updating the initmpfs code in the kernel so automounting devtmpfs works for that. (The hard part is working up the energy to do more programming when I'm not sitting in my cubicle at work. The hard part at work isn't the programming, it's SITTING IN A CUBICLE. Those things suck the life out of me for some reason.)

Anyway, nsz (his irc handle, he hangs out in the freenode #musl channel) wrote a perfectly serviceable implementation of this stuff with gzip, zlib, and zip wrappers, and one of the Japanese companies using toybox that wishes to remain anonymous after that whole "tempest in a toybox" nonsense (yup, there's more than one, and no they've still never given me a dime) reminded me I said I had plans for data compression stuff and asked me to prioritize adding it. Since I try to be responsive to my users (whether or not they're deploying this stuff to millions of people), it's time to check in what I've got of the help parsing code and switch my Chinese-water-torture level of development effort (drip, drip, drip) to deflate.

Step 1: glue everything together like I did for xzcat and ifconfig. Step 2: make it run as a toybox command. My first impulse was to make it "zcat" (ala bzcat and xzcat), but I guess "gzip" is the logical name for said command since that's the command line streaming version and can do both inflate and deflate (ala the -d switch). Historical accuracy says it should be zip, since Phil Katz invented the algorithm for pkzip (and documented it, which is why there are so many independent clones ala info-zip and zlib and gzip and so on), but zip is an archiver that handles multiple files and that's still a todo item here. (Note to self: dig up appnote.txt again when back on the internet, maybe archive.org has it. Actually these days there's almost certainly a wikipedia article on deflate, which there wasn't last time I messed with this.)

There's probably about as much cleanup to do here as there was for ifconfig. Oh well. I need to get the command line option parsing behavior (including the OLDTOY aliases) done before I can cut a release, because people _do_ use stuff out of pending and I don't want them getting too used to "gzip -z" or similar...

(Step 3: remove the camel case.)


January 25, 2014

Eh, what am I working on... The ellcc build broke in binutils, because I haven't got makeinfo installed on the host. I applied the aboriginal patch, and then A) ellcc rebuilt everything from the start again, B) binutils broke for a second makeinfo reason. (The binutils build has a "missing" script that its configure substitutes for makeinfo when it's not available, but it doesn't work. I hacked it to work, and then one of the binutils subdirectories doesn't use it and calls the nonexistent makeinfo directly. Wheee. The FSF, ladies and gentlemen!) So I need more tries.

Re-poked the kernel guys about powerpc not building in the new kernel, and they said it builds for them with my config and want to blame my toolchain. So I need to debug it myself, and that's blocking an aboriginal release.

I wanted to do a patch to make initmpfs auto-mount devtmpfs when the config option's enabled. If it's to go in this merge window, I should do/submit that this weekend.

The toybox help parsing is being weird, I've tracked it down to one of the strings in the to-sort array having the value of its pointer written into the string. (I.E. there's a redundant write that's doing an extra dereference). Even though I wrote this, I'm boggling a bit. (How did I manage to screw it up _that_ way?) Areas of the code currently have as many fprintf(stderr) lines as actual code.

Need to merge Szabolcs Nagy's flate (deflate/inflate) into toybox because some of the project's japanese users need it. This can share code with the bunzip implementation, which needs a proper bunzip2 front end and not just bzcat...

(Note: the help parsing glitch was a malloc(clen+tlen) needing to have its size multiplied by sizeof(char *). Not the last bug, but a weird one to track down because the effect and cause were separated by a few steps. I should do some screencap podcasts on debugging as part of the cleanup.html series.)
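
For the record, the general shape of that bug (hypothetical wrapper around the clen+tlen expression from the note above):

#include <stdlib.h>

// The bug: malloc(clen + tlen) sizes the array in BYTES, but it holds
// POINTERS, so writes past the first few entries land in adjacent heap
// data (here the strings themselves, hence a pointer value showing up
// inside a string). The fix is scaling by the element size:
char **alloc_sort_array(int clen, int tlen)
{
  return malloc((clen + tlen) * sizeof(char *));
}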


January 20, 2014

Thread on the toybox mailing list about building toybox with llvm (from BSD guys) made me dig up ellcc again, and it's still... annoying. Checking in all the extracted source instead of building from proper tarballs and patches means A) the endless svn checkout of doom, B) I can't easily see what versions they're using, what they've changed locally, or try swapping in my own stuff.

Still ignoring all that, the _real_ annoyance is the way llvm/configure barfs because Ubuntu 12.04.3 LTS has gcc 4.6 instead of 4.7. It's specifically checking for gcc 4.7 at configure time and refusing to build if it's older than that.

Obvious step 1: find that test and COMMENT IT OUT. Because refusing to support last year's toolchain is just dumb.

In other news: 3.13 kernel is out. Muchly todo items in the near future. (One of which should probably be a patch to make initmpfs mount devtmpfs on /dev when that config symbol is enabled. Because putting /dev/console in your initramfs cpio file is just sad. I should do a doc patch noting that while I'm at it.)


January 19, 2014

Torn about kernel documentation stuff. There's so much I want to DO, but it's hard to care until the kernel.org guys give me rsync back to update kernel.org/doc, and it doesn't look like they're capable of that anymore.

Possibly I should hand it off to somebody else. But who? I basically do monthly roundups of patches that fell through the cracks, and the occasional reply. Even my attempt at updating the maintainers entry to exclude the devicetree stuff (which is well maintained by multiple other people and just clogs both my inbox and the kernel doc mailing list with enough noise to render it useless) got bikeshedded enough that I lost interest in pushing it.

That's Documentation/ for you. Even when I grab pieces quickly, it's all about stuff that's got other maintainers so other people put it in through their trees (without telling me) resulting in collisions. Plenty of patches go in to Documentation that never went to linux-kernel or me anyway. I've taken to ignoring anything that's part of a patch series, because it'll go in through somebody else's tree. (Maintaining this directory is janitorial work at best.)

What I really want to do is reorder it all, such as putting the arch directories in an arch/ directory. But last time I tried that, oh the bikeshedding...

I suppose I should make another attempt to care.


January 17, 2014

The internet's a weird place. Thirty years ago, millions of people had strange side projects in dusty notebooks in the back of a closet or under a pile of papers on a desk. Something they spent hundreds of hours on at one point, and then got buried in day to day rush. Maybe they occasionally came back to it, showed a couple friends, but mostly nobody else remembered it existed.

Now that sort of thing tends to go on the net, where it can sit fallow for years before somebody else bumps into it and goes "hey, look at this thing somebody put hundreds of hours into, lemme reference it in this new thing I'm doing."

This is the real power of the internet. Harnessing people's junk drawers. It's still horribly indexed, but not _impossibly_ indexed. A file in somebody's house they forgot they even did isn't something I can stumble across. A five year old web page comes in handy all the time.


January 15, 2014

So we want to collate help entries that are: 1) enabled, 2) have the same usage: name, 3) in declaration order. (This means we don't have to parse depends.)

If the first entry in each usage: string is an [-option] block (single dash, no space), collate the blocks and alphabetize. For the remainder, put the later config symbols' usage first on the collated usage line, because the non-option arguments tend to be in the first bit and should remain at the end of the usage line (worked example below).
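
A worked example, with hypothetical config symbols CMD and CMD_EXTRA providing help text for the same command:

usage: cmd [-a] FILE       (CMD)
usage: cmd [-c] [-b SIZE]  (CMD_EXTRA)

collates to:

usage: cmd [-ac] [-b SIZE] FILE

The [-a] and [-c] blocks merge and alphabetize, and CMD_EXTRA's remainder goes first so CMD's trailing FILE argument stays at the end.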

Getting the spacing right's a bit of a challenge, but then string parsing is always horrible in C. (It's not really suited for it. Beats shell, though.)


January 14, 2014

Huh, looks like Howard Tayler (the Schlock Mercenary guy) has swapped Penguicon for ConFusion. Makes sense.

Probably takes a bit of backstory to explain _why_ it makes sense.

Many moons ago, Tracy Worcester and I created a convention called Penguicon. The third year we were both distracted (her with thyroid cancer, me by replicating the whole thing in Austin as Linucon), but the first, second, fourth, and fifth years involved a lot of hard work from both of us. I recruited all the guests of honor those four years, and each year I tried to introduce something new. It was a dumping ground for "wouldn't it be cool" ideas. The "food track" was inspired by the Minnesota Munching Movement my sister's involved in (she's been behind the scenes at ConClave since before Minicon became Microcon; less so with 4 kids but I went with her to her convention during my stay in Minnesota). Liquid nitrogen ice cream was something I saw in the General Technics party at Millennium Philcon, and Mirell and I worked out how to reproduce it for Linucon. For panel recording I bought 5 mp3 lecture recorders and taped 'em down to tables in the panel rooms (presumably they have something better now).

Of course there were lots of things Tracy's crowd took and ran with, such as when I posted a youtube video of local Austin guys with musical Tesla coils to the penguicon-general mailing list and they found their own local version to have a concert... And of course there was plenty of stuff I had no hand in at all: the swordfighting workshops, Turkish coffee, scotch tasting, chocolate tasting, Brazilian beef, whatever that 'brick panel' was... Tracy and her friends poured in tons of ideas, and they were the locals who actually _ran_ the convention. The only reason I could build Penguicon higher each year is that she and her friends were holding it up. (I regret that I _didn't_ jump on the "we should invite the Mythbusters" suggestion back around year 2; at the time I'd never heard of them. My "why don't you go do that" was sincere and heartfelt: I wasn't against anybody else pulling in more stuff they found cool, I just wasn't motivated to go research people I'd never heard of. I haven't had cable since last century and they didn't have a big net presence yet.)

The local Michigan convention scene had 3 conventions, one of which had only died recently (ConTraption, due to political infighting among its con staff), and Tracy used their timeslot and mailing list to launch Penguicon. Tracy had previously been a con chair of the larger of the other two conventions, chairing "ConFusion 19100" (the Y2K version of the convention Howard's going to now). Why does it make sense that Howard rebased from Penguicon to ConFusion? Because of Mr. Penguicon.

Unfortunately a guy named Matt Arnold went to Penguicon 1 (his first ever convention) and got fixated on Penguicon (to the point he lost more than one job over it), and in the best Igor from Dork Tower tradition went "It must be mine!", i.e. the convention had to become all ABOUT HIM. For example, I did the website for the first year and made sure there was a "heartbeat blog" letting everybody know that we were still hard at work and cool things were coming. The second year he created the "minister of communications" position so every public communication from the convention had his name attached to it, and he was the one getting interviewed on local television about it (although he wasn't the con chair until after I stopped attending).

The real problem is that in the process of taking it over, taking complete credit for it, making it about himself, and turning himself into Mr. Penguicon... he had to eliminate all competition for the title, starting with the actual founders. During the third year, when Tracy was sick and I was busy back in Austin, he started a whispering campaign against us and we didn't even notice. I wasn't local and had largely moved on to other things, but introducing LN2 ice cream and panel recording were both things I had to do with no help from the concom (I just showed up and did them), because any idea I proposed was automatically blocked. And I only bothered through year 5; the following year (the last year I attended) a group of Matt's friends stood around in a circle chanting mocking rhymes about one of Tracy's proposals. (Tracy and I weren't the only ones; everybody who might conceivably overshadow Mr. Penguicon had to be written out of history so Mr. Penguicon could stand alone as the creator-god.)

The reason _that_ was a problem for the convention is that in the entire time I had to deal with him, Matt never actually had an idea. He did extensive social engineering and took credit for other people's ideas, but I think a big part of what impressed him about Penguicon was that it fundamentally wasn't something he could have done himself. Tracy and I never viewed Penguicon as irreplaceable: if all else failed, we could do it again from scratch. (And I did, with Linucon, but couldn't sustain it without the support network Tracy had in Michigan. I needed to go work at an existing convention for a couple years to recruit concom. Moving to Pittsburgh during what would have been year 3 didn't help. But those were learning experiences, not blockers. I haven't done it again because I'm busy with aboriginal and toybox and being married and staying properly employed to pay for an actual house... but mostly because I already _did_ it. Been there, done that, twice. Moving on...)

But to Matt, Penguicon was magic. Combine that with some deep psychological need for the spotlight, and it meant he wanted to take credit for this thing that had impressed him, so Tracy and I had to go, as did anyone else who might conceivably take the spotlight away from his starring role in Matt Arnold's Penguicon by Matt Arnold. This incessant politicking took all the fun out of it for me, so I stopped pushing new content in after Penguicon 5, and just tried attending for a year: the year where Matt wasn't technically chairing but invented the "assistant con chair" position for himself (we hadn't had such a thing before; year 1 I stepped back ~3 months before the event so Tracy could chair, because you need one point to the wedge). Matt organized opening and closing ceremonies so he had twice as many lines as the actual con chair, and when he announced he would be con chair the following year, I didn't bother coming back, and haven't been back since.

Since then I've mostly tried to ignore it. Bruce Perens trolling busybox may have helped dissuade me from caring _too_ much what happened to it after I left: if other people are having fun knocking down a sand castle I helped build after I went home, it's none of my business. I wasn't boycotting it or anything, I didn't stop Fade from going on her own the year Matt chaired to have fun hanging out with her friends in the area. I vented about it a bit while stuck in an airport with nothing better to do (that link has lots of links to sources for things I've mentioned here, because I was bored and sleep deprived with internet access; haven't bothered this time around). But mostly, Penguicon just hasn't come up much in the past 5 years.

Sure, I heard rumors of trouble from other convention organizers the next time I passed through the area, but that was a "Don't you want to fix this? No? Oh, ok then." professional courtesy sort of thing. I actually got such rumors from multiple angles: one of our guests of honor in one of the last years I attended was Randy Milholland of Something Positive. (Randy did the "GURPS Marriage" book cover that Steve Jackson used when he officiated at our wedding at P5. I still have it, in a box, signed by both of them. Yes, I abused the fact I was still helping arrange the panel schedules to get us a private panel room for an hour. We had to move it twice: once because it was scheduled opposite an Elizabeth Bear panel Fade wanted to go to, and once because it was scheduled opposite a Charlie Stross panel that Steve and Eric Raymond (best man) wanted to go to.)

Anyway, Randy returned to the convention as a vendor in later years, and the first I heard that Penguicon might be going stale from an attendee point of view was when he tweeted that it had become "just another con". (We had a tradition of trying to give our guests such a good time they'd come back on their own time. Hence the "nifty guest" designation, a lot of whom were previous Guests of Honor who got lifetime free admission to the con if they came back to attend.)

Howard Tayler was another perennial Nifty. I was a fan of his comic from early on, and back when he still worked at Novell we tried to use the technical nature of my half of the convention to convince his employer to fly him in on their dime, since Novell had _just_ bought SuSE. (Year 2 I think? Year 1 was in the hotel with the leaky ceiling, the Dick Van Dyke or some such. I don't remember him wandering around that building. Year 2 was the year Eric Flint came to talk about the Baen Free Library, and Howard gave me a Novell Linux shirt that I think I was wearing when I went to Flint's panel, so that sounds about right? It's been a while...)

Howard made a bunch of friends and new fans in Michigan, and came back to visit them each year. (The first convention he attended as a full-time web cartoonist was my Linucon 1 in Austin, which was a few months before Penguicon 3, I think?)

So that's the context in which Howard going to ConFusion instead of Penguicon is a "huh, makes sense". Given that ConFusion is in the same city 3 months earlier, it's not a big stretch to go to that instead of going to Penguicon. When Penguicon started we were pulling in lots of new people who'd never attended a science fiction convention before. (Aegis Consulting, the swordfighting people, originally found out about us because of their Linux dayjobs.) But now? If you go to ConFusion, you can see all the same people. Skipping Penguicon makes sense.

Maybe I should pencil in ConFusion next year. Sounds like fun.


January 12, 2014

Toybox uses the menuconfig help text for its --help output. For the longest time a python script has been harvesting the kconfig data to produce generated/help.h, but A) python should not be a build-time dependency (and the hacks to work around that are brittle and crappy), B) lots of commands have more than one config symbol, and it hasn't been collating them.

A while back I decided to rewrite it in C, but haven't had time to actually do it. I'm too tired when I get home to get much done, so I'm back to getting up at 5am to try to steal a couple hours before work.

The first step is just writing a C parser so I can discard the python. That bit's pretty much done now.
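The shape of it is simple line-oriented scanning. Something like this toy version (a minimal sketch written from scratch for this entry, not the parser that actually went in) gets the idea across:

#include <stdio.h>
#include <string.h>

// Scan stdin for kconfig "config SYMBOL" lines and dump the help text
// that follows each "help" line, until the next config entry starts.
int main(void)
{
  char line[1024];
  int inhelp = 0;

  while (fgets(line, sizeof(line), stdin)) {
    char *s = line;

    // Skip indentation.
    while (*s == ' ' || *s == '\t') s++;
    if (!strncmp(s, "config ", 7)) {
      printf("\nsymbol: %s", s+7);
      inhelp = 0;
    } else if (!strncmp(s, "help", 4)) inhelp = 1;
    else if (inhelp) fputs(s, stdout);
  }

  return 0;
}

The real version also has to track which symbols are enabled and write out generated/help.h, but the parsing itself is about that boring.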

The second step is to come up with the list of commands with config stuff to merge, which can be found via:

$ grep -h usage: toys/*/*.c | awk '{print $2}' | sort | uniq -d

Giving cd, cp, df, ls, mke2fs, mv, netcat, and sort.

Step 3: look at the config entries of that and work out rules by which their text can be merged.

(Note: I want to delete everything but the README out of generated/, and right now it's listing each file to delete. Possibly I should just move the README to the code walkthrough on the web page? Or mark the README read only. There don't seem to be a lot of exclusionary wildcards...)


January 11, 2014

My current email workflow involves starting thunderbird via ssh -X, and every time I do so it goes:

(process:30776): GLib-CRITICAL **: g_slice_set_config: assertion `sys_page_size == 0' failed

Doesn't seem to hurt anything. There's a reason I'm against asserts. After a few years of python programming, I've concluded the correct approach to C coding (other than code inspection) is to have a good regression test harness, to show that the result actually does what you think it does.

I need to fluff out the test suite for toybox. But I still haven't caught up on the cleanup writeups...


January 4, 2014

The weekend!

Catching up on the cleanup writeups for toybox.


January 2, 2014

I was just getting over the cough that's been plaguing me since December 2, and then today the Cedar pollen started up.

By our powers combined, we are... seriously annoying!

*cough*


Back to 2013