Rob's Blog


December 31, 2017

Trying to figure out how less handles certain things, so I did:

X=0; while true; do X=$(($X+1)); echo -ne "$X\r"; done | less

That makes ubuntu's less unhappy, but for a different reason than I expected: it considers the input one endless line, so even though it wraps it for display you can't cursor up and down. I'm pretty sure "we wrapped this to display 5 lines" should count as 5 lines.

Hmmm, does this mean my linestack stuff should break long lines in its internal data representation? Or do some fancy indexing? I dunno what the proper data representation for this is. Screen needs to basically get _everything_ right, but less can work up to it in stages:

1) filter out everything but a few known special characters (like newline), and print the rest as escapes.

2) get all the low ascii stuff right, including backspace and vertical tab and such.

3) utf8/unicode: combining characters, multicolumn characters, and the right-to-left escape sequence.

4) ansi escape sequences for movement.

5) color changes! Can of worms there.

The problem with color changes is A) they stack, B) they combine strangely with movement. Which means I need to parse them enough to unroll the foreground/background/intensity changes to get the 8 bit number for the current color (16 foreground, 16 background, modulo the newer more granular color change stuff that's seldom used) and then store it either for every character or at every transition. (Which gets us back to storing line breaks above, which would suck if you resize the terminal and it has to recalculate them. Wheee...)
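Something like this, maybe (a quick sketch to think it through, not anything checked in; the one-byte layout is just for illustration):

/* Fold SGR parameters (the numbers in ESC [ 31 ; 1 m) into one byte of
   color state: low 4 bits foreground, high 4 bits background. Ignores
   the newer 256-color/truecolor sequences mentioned above. */
unsigned char sgr_fold(unsigned char state, int *param, int count)
{
  int i;

  for (i = 0; i < count; i++) {
    int p = param[i];

    if (!p) state = 7;                    // 0 = reset to gray on black
    else if (p == 1) state |= 8;          // 1 = bold/bright foreground
    else if (p >= 30 && p <= 37) state = (state&0xf8)|(p-30);      // foreground
    else if (p >= 40 && p <= 47) state = (state&0x8f)|((p-40)<<4); // background
  }
  return state;
}

Then movement escapes just carry that byte along, which is the "store it at every transition" part.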

So, pull out my ls.test todo item and see what Ubuntu's existing less does with it:

echo -e "$(X=0;while [ $X -lt 255 ]; do X=$(($X+1)); [ $X -eq 47 ] && continue; printf '\\x%02x' $X; done)" | less

And the answer is it escapes all the low ascii stuff but CTRL-I and CTRL-J, and escapes 27 as ESC. And when I less tests/files/utf8/*.txt it renders the unicode properly. bad.txt becomes ^A (color inverted) and the one with the direction reversing commands turns them into and .

Sigh. I should really replace my netbook keyboard before doing too much more on this command: I can _page_ down easily, but right now to cursor down I have to stick my finger in the empty socket the missing down-arrow key left and jiggle it around to try to short out the wires, and the problem is it doesn't reliably STOP when I do that: it sometimes stays down-arrowing for several seconds longer than I'm pressing it, occasionally stuck until I jiggle it enough to get it to stop again. It generally doesn't spontaneously trigger itself when I'm not trying to, but I don't want to push it.

Anyway, it looks like my idea of less is a lot more ambitious than ubuntu's idea of less. Presumably I should implement that baseline first, promote it, then worry about the fancier stuff.


December 29, 2017

Ok, got iconv checked in. (Punted on -l, not a clue how to do that.)

Next up in ls -loS toys/pending is probably "watch.c". (Because vi.c is a stub for a can of worms, and groupdel.c is a cluster with groupadd.c, userdel.c, and useradd.c, plus login/su/sudo/sulogin, for a subsystem android does a completely different way in libc because they assigned a different uid to each app way back when as their first security thing, and still do.)

Doing watch.c requires the same "not curses" windowing as vi, screen, less, and so on. I have a lot of infrastructure in lib/linestack.c but there's still stuff missing.

The hard part is tracking the cursor position, because I don't want to accidentally scroll the screen, which means I never want to accidentally write off the end of a line and wrap to start a new one (because it could be the bottom line). So before I output stuff, I need to know where the cursor will be afterwards.

There are multiple categories of issues here:

1) When you output characters below space (ascii 0-31), the cursor does weird things. For some of them (tab, backspace) the result depends on your current position: tab jumps to the next column divisible by 8, and backspace won't go past the left edge up to the previous line. Others are handled differently by different terminals: xfce's Terminal prints a single-column square box with a hexadecimal number in it for unknown characters (ascii 1, 2, 3, 4, 6, and 16-31), but other "known" characters produce no obvious output (such as the NUL byte), so the cursor doesn't advance. Meanwhile, the kernel's text mode doesn't do the mark-unknowns thing, so the cursor never advances for unknown characters, and its handling of SO (ascii 14) is just weird (it breaks output until you run "reset", some sort of codepage switch). There's a rough sketch of the easy cases after this list.

2) UTF8 is a whole can of worms. There are combining characters, double and triple width characters, and left-to-right/right-to-left gearshift sequences that reverse the direction the cursor advances (and where newline puts it).

2B) UTF8 invalid sequences: what's the terminal going to do when it prints them? I presumably have to escape them myself because otherwise I don't know what printing them does to the cursor on terminal du jour.

3) ANSI escapes! (escape left bracket blah letter) We already output a standard set of ansi escapes, and have code to parse the ones we recognize on input because they're fed in to the terminal for things like cursor keys. But the full set of parseable escapes is much larger than we're currently dealing with, and what do we do with unknown sequences? How do we figure out they've _ended_?

3B) Another fun thing is ANSI escapes can change the color of the text! (They used to be able to blink and underline and such too.) So when we're recording output to replay it we need to record state, and you can have "change state, jump to new location, continue output", so if you break the output up into lines the state change has to be recorded and replayed later, which means it has to be recognized. And you can have multiple sequences that stack rather than canceling each other out (foreground and background changed independently, for example).

What I probably have to do is parse the sequences I understand and stomp the ESC for the ones I don't (so it's output as "<27>[blah" or similar).
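For the easy cases it's something like this (a sketch, not what's in lib/linestack.c; combining and multicolumn characters would layer wcwidth() and friends on top):

/* Where does the cursor land after writing one character at column x on a
   width-column terminal? Tab and backspace depend on the current position,
   other low ascii gets escaped by us so we never depend on terminal-specific
   behavior, and we refuse to advance past the last column so we can never
   trigger a wrap (which could scroll if we're on the bottom line). */
int advance(int x, int width, unsigned c)
{
  if (c == '\t') x = (x|7)+1;   // tab: next column divisible by 8
  else if (c == '\b') x -= !!x; // backspace: stops at the left edge
  else if (c == '\r') x = 0;
  else if (c < 32) x += 4;      // escaped as "<27>" etc (real code measures it)
  else x++;                     // ordinary single column character
  if (x >= width) x = width-1;  // never wrap

  return x;
}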

Another fun thing is you can resize your terminal, which makes your terminal send you SIGWINCH. If you make the terminal smaller, then larger again, did it erase bits that become black, or do you re-wrap your stored text, or what?

Answer: depends what you're drawing! If it's vi, or less, or watch, you have a buffer of output you redraw from, wrapping at the current width. But if it's screen, you probably just have a snapshot of output that has to be filled in with spaces when you make it bigger? Except when the terminal is in "cooked" mode...
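At least the "notice it happened and learn the new size" part is well-trodden; a minimal sketch:

#include <signal.h>
#include <sys/ioctl.h>
#include <unistd.h>

static volatile sig_atomic_t resized;

static void winch(int sig)
{
  resized = 1; // just set a flag, redraw from the main loop
}

int main(void)
{
  struct winsize ws;

  signal(SIGWINCH, winch);
  for (;;) {
    pause();
    if (resized) {
      resized = 0;
      ioctl(0, TIOCGWINSZ, &ws);
      // redraw at ws.ws_col x ws.ws_row from whatever buffer we kept
    }
  }
}

The hard part is the question above: what "whatever buffer we kept" means for each command.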


December 27, 2017

Going through pending looking for low hanging fruit, hit iconv.c. I implemented -c, fixed the endless loop when illegal char happens with !outleft (don't need to check errno, just in == toybuf), made it refill buffer each time (less efficient handling of illegal chars but never have to worry about how long constitutes a valid sequence in unknown encoding).

It doesn't look like this was ever tested with input longer than 2k: the memmove() has src and dest switched, and then the second time through the loop "in" starts at offset inleft? Not sure what that was trying to do, so I made it just start at the beginning every time instead.
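The loop shape I was going for looks something like this (a from-memory sketch of the idea, not the checked-in code):

#include <errno.h>
#include <iconv.h>
#include <string.h>
#include <unistd.h>

void convert(iconv_t ic, int infd, int outfd)
{
  char in[2048], out[8192], *inp, *outp;
  size_t inleft = 0, outleft;
  ssize_t len;
  int eof = 0;

  for (;;) {
    // refill the input buffer each pass
    if (!eof && inleft < sizeof(in)) {
      len = read(infd, in+inleft, sizeof(in)-inleft);
      if (len <= 0) eof = 1;
      else inleft += len;
    }
    if (!inleft) break;

    inp = in;
    outp = out;
    outleft = sizeof(out);
    if (iconv(ic, &inp, &inleft, &outp, &outleft) == (size_t)-1
        && (errno == EILSEQ || (eof && errno == EINVAL)) && inleft)
    {
      inp++;    // the -c behavior: skip one bad input byte and continue
      inleft--;
    }
    if (outp > out) write(outfd, out, outp-out);
    memmove(in, inp, inleft); // note the argument order: dest first
  }
}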

I changed iconv_open() error msg to show to/from and errno (rather than hardwired english text).

But the part I _can't_ figure out how to do is implement iconv --list because the libc iconv stuff doesn't seem to have a "list known locales" option? The "man 3 iconv" page says to run iconv --list from the command line to get this list WHILE DISCUSSING A C API. (Um, layering violation much?)

I looked at the musl source and src/locale/iconv.c has a hardwired charmaps[] list which is never exported in a usable fashion. I ran strace against iconv --list on ubuntu and it read a file which is binary salad and:

$ file /usr/lib/locale/locale-archive
/usr/lib/locale/locale-archive: PDP-11 separate I&D executable not stripped

Good guess, thanks for playing.

I had to pull updated bionic source to get an iconv implementation to check, but in there it's in a cpp file, so I'm guessing with the name mangling that's only available to C++ programs not C.

(Last decade I worked on a project where they had a shared library that did an "extern C" function that exported a pointer to a C++ class instance. The build system was stuck running a version of Red Hat Enterprise that had gone out of support because the compiler version skew had broken some ABI detail in the name mangling and they had to build everything with the old compiler version forever to keep binary compatibility with the deployed library versions. All because they'd passed C++ pointers across an "extern C" and bought into the "C++ has anything to do with C" marketing hype. Eventually they did a flag day redeploy of everything to get away from the obsolete RHEL build version, it broke all the customer apps, and they wound up losing all the customers for that project.)


December 26, 2017

So at the end of the Dr. Who christmas special, Capaldi lectures himself for several minutes mansplaining how to be The Doctor, then the regeneration is a big crucifixion cross, then the new female doctor gets to press exactly one button before the Tardis freaks out, opens its doors, turns sideways, and shakes her out from a great height.

Perhaps I'm reading too much into this.


December 25, 2017

Yay, Elliott fixed that xargs -0 issue. Merry christmas!


December 24, 2017

Jeff reminded me of Eben Moglen's excellent writeup on the Linux Foundation, where he explained that it's the same kind of Trade Association as the Tobacco Institute and Microsoft's Software Publisher's Association. It exists entirely to serve its members, and has no mandate to help anyone else. In fact helping the in-group at the expense of everyone else is what this sort of association generally exists to do, and in this case the in-group is for-profit companies and the out-group is hobbyist developers. (I've mentioned this before, but again "I wrote about that once 7 years ago" doesn't mean everybody knows about it. I suppose the marketing "rule of seven" applies here somewhere.)

Moglen was the lawyer who co-authored GPLv2 and GPLv3, and is the half of that team that hasn't gone obviously crazy. He's smart and knowledgeable and his article is well worth reading.

That said, Moglen still thinks copyleft is a good thing. Possibly a little too close to the issue to re-evaluate a changed landscape. Forest for the trees sort of thing.


December 23, 2017

Fade got home on Wednesday, I've been a bit distracted. :)

I got the pidof/killall plumbing redo checked in, and am trying to come up with a proper test suite entry for it. There are several things to test: executable vs script, name vs path, the path logic distinguishing between different executables running with the same name...

Elliott sent a patch trying to make killall behave like "upstream", which is a can of worms. It turns out the bug report he got was that our killall had a behavior difference vs busybox, so he tested against busybox and assumed that's how ubuntu works. I haven't looked at busybox killall recently but I ran a lot of tests against ubuntu to see what it's doing.

The other fun thing is toybox's killall and pidof use the same logic (but pkill/pgrep don't, they use the ps/top logic). Specifically, Ubuntu's pidof treats relative and absolute paths differently (it matches "dir/name" against a command's argv[0] but "/dir/name" against the inode) and killall treats any path as "check the inode", so "pidof ../dir/name; killall ../dir/name" can have pidof find nothing and killall find something.

Ubuntu's killall doesn't distinguish between different paths to the same inode, which means hardlinks get killed together. The pidof behavior of grouping hardlinks together for absolute paths but not relative paths is crazy and inconsistent and I'm not doing it.


December 20, 2017

I got to introduce somebody to the cp -rs make trick today, something I first optimized _away_ almost ten years ago.

Something that I need to internalize is that successful speakers and educators and such repeat the same darn thing a zillion times, because their audience won't have heard it. I've watched successful speakers give basically the same talk ten times to different audiences. I myself keep thinking "I already talked about that, everybody's sick of it, move on"... but that doesn't seem to be how people work.

The cp -rs trick is where you use cp's symlink or hardlink option to snapshot a tree of source code, then build in the snapshot. This is how Aboriginal Linux's package cache worked.

This has a bunch of advantages: it uses less disk space, snapshots are nearly instantaneous (since you're just copying metadata, not file contents), and if you're doing multiple parallel builds of the same package (for different architectures) you have only one copy of the extracted data in the page cache so you thrash the CPU memory bus a lot less.

It also means that you never really need to implement out-of-tree build infrastructure, you can just cp -s your source tree and build in that. (You can use "cp -l" to make hardlinks instead, but that can't cross filesystem boundaries. You can cp -s from a read-only filesystem (or an NFS mount) to some local build scratch space, build in there, and then rm -rf the symlink tree without disturbing the original. And yes this even works around the NFS weirdness of "rm -rf failed on nfs because some process still had a file open and this created an invisible dot-file that means you can't delete the directory, and that broke the build".)

It's a neat trick, more people should know it. It _can_ break: the old aboriginal FAQ described an old zlib screwup (shipping a generated file and then modifying it in place, so the shared copy got modified), and once upon a time bunzip2's "make install" used cp to copy a shipped file under /bin and wound up installing a symlink-to-nothing there. (This is why the "install" command exists. I poked them and they fixed it.) But almost all builds just work (he says, having used it to build Linux From Scratch), and you can do a "find" for symlinks on new packages and check for broken ones (somewhere I have a script that did that, some variant of "find . -type l | while read i; do [ -e "$i" ] || echo "$i"; done", although a find -print0 you could feed into read would be nice), and it's way less effort debugging that than implementing "make O=".


December 12, 2017

A report came in via github that xargs -0 is skipping arguments between each line break. Sigh. (Is this a regression or a corner case I never noticed/tested to begin with?)

Throw it on the todo heap...


December 11, 2017

Got a little cleanup done on stty, although testing changes for regressions is an open question. (I don't have any non-usb serial hardware anymore, and qemu's serial port emulation is just copying bytes to/from a host filehandle; nothing about baud rate, parity or stop bits.)

Cycling back to mkroot, the arm64 defconfig has /dev/vda and eth0 but my miniconfig does not. So, let's run through the isolation protocol again:

What I do is take my existing miniconfig, and append all the new symbols the other .config file enables that aren't already in the miniconfig. Here's a script to do this:

(cat $MINI && egrep -v "^($(sed -n 's/=y//p' $MINI | grep -v '^#' | grep -v '^$' | tr '\n' '|')nope)=." .config ) > config2

That script turns all the NAME=y lines from the miniconfig into a big "^(NAME|NAME|NAME)=" regex, then feeds it to egrep -v to filter out all those lines from the .config, and appends the result to a copy of the miniconfig. So config2 starts with a verbatim copy of the miniconfig, followed by all the new symbols that _weren't_ in the miniconfig.

(Implementation detail: it's really using NAME|NAME|nope because it's easier to append a string I don't expect to find than strip off the trailing | from the tr '\n' '|' replacing the ending newline.)

Since the new file starts out exactly the same as the miniconfig, as long as I don't touch that part my base system functionality should still work like it used to. The new symbols at the end enable some new feature I want, but only a few of those new symbols are relevant, so I comment out a chunk of the new stuff at the end of the file and rebuild, and if the behavior changed one of those symbols I commented out was important. Rinse repeat until I'm down to the miniconfig and whatever new lines switched on the feature I want.

I usually start by commenting out a couple dozen symbols I recognize that probably _aren't_ important, then do a test build to check that the new stuff I want is still there. The balance is "remove lots of trash so I'm not doing lots of test builds" vs "something I didn't expect turned out to be important and I have no idea which symbol it was because I just switched off buckets of them", I.E. between doing more builds and backing up a long way when a test fails. Generally I comment out a batch, do a test build, and then delete the previous batch of commented-out symbols before commenting out more, so I can tell where to back up _to_. (They only get deleted once I've confirmed they're not needed.)

Eventually I either guess wrong or run out of symbols I recognize. Then it's testing smaller numbers of symbols I'm unsure about, often one at a time, until I find all the symbols that enable the new behavior. (Then I look them up in menuconfig to read the help text and see what they mean, sometimes even digging into the Makefile or C file plumbing that uses them to see what they actually do.)


December 8, 2017

Still waking up at 1am and being awake until morning, then going to bed around 6pm. This wouldn't be so bad if Peejee didn't climb up into my arms every time I sit down at the computer at home (preventing me from getting any work done), and if Austin's once rich array of 24-hour places to program hadn't thinned considerably over the course of "George Dubyah"'s tenure. (I miss Metro.)

I am waiting for my cats to die of old age. A bit like the baby boomers. One of the things I've enjoyed most about trips to tokyo was quiet time alone in a hotel room with no cats, where I could catch up on toybox. (Didn't manage it this most recent trip because there was never a time when we weren't working on crisis du jour; gps debugging, helping Jeff prepare for investor meetings, etc. No time/energy to spare for toybox. Maybe it'd have been different if Jen had gotten on any of the half-dozen flights she promised to be on and then wasn't with a new excuse each time (a pattern apparently stretching back months), but she didn't so I got sucked into shouldering her load.)

It snowed outside this morning. Austin only does this about every 5 years. (It's like a quarter inch and only sticking on half the ground, melting on the other bits. But still quite pretty, and we had to drag Fuzzy's plants inside in rather a hurry.)

One of the recruiters is waving a telecommuting position at me, up in Colorado. Highly tempted. I haven't been to Colorado since I visited Kirsten at her college in something like 2002.


December 7, 2017

Got email from dreamhost the other day about processes getting killed due to excess resource usage, which read like an attempt to upsell me on some sort of cloud service. I went "huh", and today I got back clarification that it's some site spidering the old mercurial repositories, and launching enough python instances to peg the CPU long enough for dreamhost's monitor thing to start killing them.

Long as the site's not hacked and the static pages load within 15 seconds or so, it's not ideal but eh. Other fish to fry.

Mmmm, fish. I could cook salmon for dinner.


December 6, 2017

The downside of not following ttwwiitteerr is I bump into mention of "the problems with Patreon" in a couple different places and have no idea what anybody's talking about. Google news is useless, and google for "patreon problems" brings up a dozen different issues going back to 2014 or so on the first page. (Adding "december 2017" doesn't help.) So I wind up digging through the twitter feed of one of the mentioners to find a link to the issue. (Basically patreon is pretending that they charge your credit card individually for every creator you support instead of once a month, and adding a 40 cent fee per pledge, meaning $1 pledges to 30 people are no longer a thing. Since they still DO charge your card just once per month, they pocket these new fees and presumably use them to persecute women.)

And yes, even with 280 the proper description is a link to a screenshot. This 280 thing was stupid and useless and should not have happened and I'm treating it like I did Horrible Retweets which means I miss twitter but do not read ttwwiitteerr.

I haven't got a proper replacement for this yet. Livejournal _used_ to provide my Random Link Farm, and before that slashdot. Tumblr is closest now, but the signal to noise ratio is at least as bad as twitter's ever was, with posts that take up to 30 seconds to scroll past (once you've posted 3 pictures of the same staircase, I'm not interested in the other 30) instead of 3 seconds each.

I should set up an RSS reader. Google Reader famously joined the Google Graveyard in 2013, but I still provide a feed here and Fade uses one for her webcomics and such. (I just type webcomics in from memory, which is the modern equivalent of remembering phone numbers instead of having speed dial.)


December 5, 2017

Airplane back to Austin.


December 4, 2017

We are so not getting everything done this trip. Not even close.


December 1, 2017

Emailed the help address at the bottom of my old kernel doc directory to point out they're not linking to the Documentation subdirectory (with the copy of the kernel source's Documentation directory) or linking to the lki mirror. I know they're there because I _put_ them there, but I haven't had write access to that directory for years, because if all you have is git, everything looks like a nail. Or some such.

Sigh. The Linux Foundation wants to "be" Linux the way Facebook wants to "be" the internet. They want to control it, and own it, and convince as many people as possible their walled garden is all there is. I'm not a member of either, and feel dirty every time I have to interact with either one.

But I shouldn't blame the kernel.org maintainer for his employer making me sad.


November 29, 2017

Hanging out at a tokyo coffee shop called ePronto. It's quite nice. They have a big long desk full of outlets, and in _theory_ they have internet here but in practice I haven't been able to navigate the japanese login screen. (I can recognize the Terms of Service page even if I can't read it, but it goes around in circles. I think it wants me to create an account, maybe?)

I have internet back at the hotel, but the cleaners kick you out at 10:30am (they open every door on a floor and clean all the rooms at once), and you can't get back in until around 3. I could go to the office but I don't have keys to get in, and Jeff isn't in today (he was on the phone with US people until 3am; time zones).

I've fallen way behind on toybox again. And it's slow going poking at mkroot on the netbook because each kernel compile takes half an hour and there's a dozen targets to build. (I didn't bring the big machine this time.)


November 28, 2017

We walked through the scilab code for GPS fixes and confirmed it's doing more or less what Andrew's version is doing (in a MUCH CLEANER way, although part of that is no coordinate conversions because the units of all the data it uses are consistently light-seconds as double precision floating point numbers).

As with all code review, this turned into a tangent from a tangent and we wound up chasing obscure bugs in the hardware side where we worked out that our fixes were becoming inaccurate when we dropped/added satellites because codephase adjustments made to those dropped satellites aren't being properly discarded but instead applied to the wrong satellite, which means the codephase we think the satellite is at and the codephase the hardware is using don't match. (The hardware doesn't let us read the value back _out_, so the driver has to add up the adjustments it asked for.)

The thing about GPS is once you've read a satellite's packets once (and the most interesting bits repeat every 30 seconds), all you care about after that is timing information.

Subframes 1-3 contain constants you plug into orbit calculations, describing this satellite's orbit very precisely. These are updated at most once a week (at midnight sunday UTC, by ground stations transmitting new info to the satellite's onboard computer if solar wind or the planet's magnetic field or something caused the orbit to drift enough they have to fire the thrusters, but usually they don't). So once you've successfully parsed a given satellite's packets, you don't need to do it again for a week.

You use these equations to calculate where the satellite was at a given time, so the X, Y and Z coordinates for the satellite are calculated from the T (time) coordinate. So every time you get a fix on your position, you have "this week's info" about the satellites and 4 new timestamps, one per satellite, of exactly when you heard each one's signal. You use those 4 timestamps to calculate 4 corresponding sets of X, Y, Z positions, and then plug them all into matrix math to solve 4 equations with 4 unknowns using a Leibniz determinant.

Getting that accurate timestamp is the hard part. The information you have about each satellite clock has 3 parts: seconds, milliseconds, and code phase.

Seconds come from the parsed packet data, although there's 3 parts to it. The first word of each satellite subframe (the "handoff word") contains a timestamp value, so merely by parsing one packet you can set your clock about as accurately as NTP would. The timestamp is basically a monotonically increasing packet number (which wraps sunday at midnight GPS time, which is UTC but not adjusted for leap seconds since 1980), and then there's a GPS week value that only goes 0-1023, so it rolls over every couple decades and knowing where "zero" is is left as an exercise for the reader (although we have to care, because the next rollover is April 6, 2019).

Each packet is 6 seconds apart from the previous packets (measured by ATOMIC CLOCKS), so once you've parsed one (from any satellite) you only need to work out where packet edges are, and don't have to be able to _read_ any of the packet data: the next packet will be exactly 6 seconds after the previous packet. If any packet's too garbled to read you can just count bits: the next packet starts 300 bits after the previous packet, and each bit is 20 codephase rollovers (I.E. milliseconds). Since the satellites send the packets all at the same time (arrival's a bit skewed by the distance, something like 70 milliseconds at the speed of light, but again _transmission_ is measured by atomic clocks), you just need a reasonably set PC clock to figure out which group of 6 seconds you're in.

We have to line the codephase up to read the signal at all (it acts like a one time pad XORed with the signal, it's all just noise until you XOR it back, and since half the bits in the codephase are high and half are low and the thermal noise is randomly distributed, when you get it right this cancels out the noise while amplifying the signal), and that alignment tells you _fraction_ of a millisecond (relative to the other satellites).

The codephase changes 1023 times each millisecond, and when the sequence restarts is exactly (via atomic clock) the start of a new millisecond. So you can count the number of times the codephase has rolled over since the last time a new packet started, and that gives you the current millisecond.

We sample the signal 16 times faster than the codephase changes to find the clock edges more accurately, and 1023*16 gives us 1/16368 of a millisecond, which is 61 nanoseconds. Multiply by the speed of light and that works out to a fix granularity of about 18.3 meters.

So the pieces of timing information you assemble are: timestamp in the packet data (week number and time of week), which codephase reset the first bit of a new packet started transmitting on (indicates exact start of a 6-second period), number of codephase resets counted since then (milliseconds), and the current codephase offset (1/16368th of a millisecond). Of course all those times are at some point in the past: you need to work out your exact distance from the satellite and subtract how long it took the signal to get here at the speed of light.
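In code form that assembly step is tiny; a toy illustration (the field names are made up):

/* One satellite's observed transmit time within the GPS week, in seconds:
   packet = handoff word packet count, ms = codephase rollovers since that
   packet's edge, phase16 = current 16x-oversampled codephase offset.
   Keeping it as time-of-week also keeps the interesting digits within
   what a double can represent. */
double sat_time(int packet, int ms, int phase16)
{
  return packet*6.             // packets are exactly 6 seconds apart
    + ms*.001                  // milliseconds since the packet edge
    + (phase16/16368.)*.001;   // 1/16368th of a millisecond
}

// granularity: (.001/16368) seconds * 299792458 m/s = about 18.3 meters

All the cleverness lives in getting those three integers right.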

When you request a fix (which can happen at any time, but to make the math easier we do it at local second boundaries), you snapshot the time you have for all 4 satellites (which is a historical reading because nanoseconds tick by as you process it so you're working out where you were at the snapshot time in the past, not where you are now; the important thing is that the satellite times match each other and correspond to a timestamp out of your local high resolution clock). For each satellite you have the time you saw the last packet edge, the number of correlator turnovers (milliseconds) since then, and the correlator codephase of that satellite (which passes zero at each bit edge, so the codephase gives you the fraction of a millisecond from the last bit edge that satellite is at when the snapshot happens).

Codephase advances each 1/1023 of a millisecond, but we sample the signal 16 times faster to get 1/16368th of a millisecond. The speed of light is around 300,000 kilometers per second, which is 300 kilometers per millisecond, which is around 293 meters per codephase, or about 18.3 meters per oversampled codephase. So a 4 satellite fix should have a granularity of somewhere around 20 meters, but if you're not moving you can average a bunch of readings together to get more accuracy (this is called a "survey"). If you work out the rate of change of the codephase (first and second derivatives) you can basically use Bresenham's line drawing algorithm to go "our codephase should be _here_ at the snapshot time" and not just have pixel granularity but the remainder. After all, the satellite's moving at a constant rate through a curved orbit that approximates a straight line in any single digit number of seconds.

So, adding it all up: seconds + milliseconds + 1/16368ths of a millisecond (plus predicted fraction of codephase if you're feeling posh) gives you a very exact time for each satellite. Plug that time into the orbit calculations to get equally accurate X, Y, and Z coordinates to go along with that T (using the many-decimal-place constants out of the satellite packet data), then fling matrix math at the results to solve for where _you_ are if you're seeing those satellite signals at the same instant.

The problems we've been having all boil down to timing mistakes and OH there are so many of them, and THIS is why the hamsternz github code doesn't track real world signals accurately for more than a few seconds at a time. For example, codephase drift (due to the satellite's relative velocity towards or away from you changing as it orbits and affecting its doppler shift) can wrap around zero, which moves the satellite's reporting period from one millisecond to the next, meaning the satellite can report twice in the same period or skip a period depending which direction it's going (ascending in the sky is moving towards you but decelerating, descending is moving away and accelerating; when the satellite's directly overhead its doppler offset is zero), meaning that although you NORMALLY observe exactly 1000 codephase wraps per nominal second, when the codephase wraps you can expect 999 or 1001 depending which way it wrapped. You have to make your codephase tracking code notice the wrap and adjust your expectations (999 milliseconds per nominal second is weird but sometimes right), and we had to add code to do that.

Hardware clock skew's another. We're counting signal samples and saying it's 4 signal samples per codephase (well, per 1/16th of a codephase; our clock isn't 16x the codephase it's 64x the codephase, but we need 4 samples for each sine wave cycle so we can spot the ups and downs), but the clock that drives the receiver isn't the clock that drives the correlator which isn't the clock that drives the satellite. And the same "multiply a slow clock up to a fast one" error magnification that screwed up doppler happens here too. So we have to correct for skew between THOSE clocks. (The atomic clock in the satellite is really expensive and works in a vacuum, the ones in the board are under a styrofoam pad yet still skew noticeably when the air conditioner turns on or somebody opens a door.)

And since you can't get a crystal that vibrates at a gigahertz (and couldn't accurately measure the vibrations if you could), what you do is get a crystal that vibrates at a few megahertz and feed the signal through a series of phase locked loops to double the signal rate (the new clock has a rising edge on each of the previous clock's rising and falling edges, so it goes twice as fast). But this multiplies any error in your original clock, so the crystal being off by one tick per million means the derived signal from several consecutive phase locked loops is off by a thousand ticks per billion, and they add up.

And any of these clocks can slide past each other and wrap around and sort of introduce moire patterns into our data, which causes more millisecond slips. And these you can't count directly because you can't predict a gust of air screwing with your temperature a fraction of a degree, you just have to spot when it happens and compensate.

Milliseconds turn out to be the hard thing, that's what all the clock slips tend to screw up. Codephase we can detect when we're off and correct for, in fact the phased locked loops do so dynamically (assuming the darn hardware makes the adjustments we request without losing track of where it IS). But if you're off by a millisecond that's 300 kilometers at the speed of light. Either the matrix math will return "not a number" because there's no point where you could have seen all 4 readings at once, or it'll give you a fix way the heck away from where you really are (both in distance and in time).

Which is why our GPS generally either knows RIGHT where we are or thinks we're in tokyo bay.


November 27, 2017

Why is the readelf bug back? Because the build overnight used 4.11, not 4.14. Why did it do that? Because modules/kernel didn't get updated. Why not? Because I have two vi instances editing the same file. Lovely.

I still have an urge to _post_ to twitter, but very little interest in reading ttwwiitteerr. I should tack those lines onto here, I suppose.

I bought earbuds for 100 yen in Akihabara, out of a bucket of them. Surprisingly, they are NOT the worst earbuds I've ever bought, although they're not in the top third, either. Still, at the rate I go through 'em I should go back and buy a dozen.


November 26, 2017

I'm taking a day off, trying to catch up on toybox stuff, which diverges pretty quickly into trying to catch up on mkroot stuff. (Aboriginal Linux is what drove what I was doing on buildroot, mkroot is driving what I'm doing on toybox until I can get to the point where AOSP does.)

Along the way, I had to download a new kernel because it's still building 4.10 and 4.14 is current, so I should do that upgrade and test everything, and the wget went boing in a way I'd never seen before: it complained https couldn't decrypt one of the blocks. (Whaaa....?)

It responded by reconnecting and resuming the download (wget --continue), but of course I had to extract the equivalent tarball from git (git archive --prefix=linux-4.14/ v4.14 > file.tar, and zcat the downloaded tarball) and see if they differ (cmp says they don't). Which means I didn't actually have to download it in the first place, but I wanted to make sure the sha1sum matches the downloaded one (did they gzip or gzip -9)...

Since Jeff talked me out of taking the Big Machine with me (being sure that if I was bringing three computers to tokyo, what with the mac and all, I'd be stopped by customs... The thing is, I telecommute and do open source, so wherever I am I'm working. Kinda hard to answer the 'are you coming to tokyo for work' question when it's kinda quantum. To be honest, I'm coming to tokyo to give Jeff moral support while he finishes the GPS software, almost none of it's my code. Some if it's my _design_ ideas, but I've been arguing with him about that over the phone for the past half-year. (I pulled up a file with a half-finished thing I'd last touched in August.) In person is faster, but definitely not _cheaper_ than just spending six hours a day on voip or skype or something. (I generally pull up talky.io, but haven't managed to get it to work on my current phone and netbook. Worked on the phone before the factory reset...)

Where was I... Oh right: ) compiling kernels for all the musl-cross-make targets on my netbook is SLOW, so I have to leave it running overnight to build qemu-bootable versions of all the mkroot targets.


November 25, 2017

Exhausted. Jeff found a scilab implementation of the gps position calculation algorithm in Appendix B (page 77) of an old college thesis from 1998 (this is the version that can use more than 4 satellites to increase precision) and we're replacing a GPLv3 function we've had as a placeholder (the hamsternz thing had one file copied from Andrew's code, which we obviously couldn't ship, but this thesis is the same math a decade older, so it's either where that code came from in the first place or independently worked out). Jeff's writing a matrix math library because scilab has functions like determinant() you just call, which we need C implementations of. This means I need to understand something called Heap's algorithm for permutation, so we can use it to calculate the Leibniz determinant. (Which is more accurate, and only really slower when your matrix size is larger than like 7x7.)
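For flavor, the shape of the thing (a sketch I wrote to convince myself I understood it, not Jeff's library):

/* Leibniz determinant: det(A) = sum over all permutations p of sign(p)
   times the product of a[i][p[i]]. Heap's algorithm produces each next
   permutation with a single swap, so the sign just alternates with each
   transposition. Flat n*n array, n <= 8 for this sketch. */
double det_leibniz(double *a, int n)
{
  int perm[8], c[8] = {0}, i, j, t, sign = 1;
  double sum = 0, prod;

  if (n < 1 || n > 8) return 0;
  for (i = 0; i < n; i++) perm[i] = i;

  for (;;) {
    // add this permutation's term to the sum
    for (prod = sign, j = 0; j < n; j++) prod *= a[j*n + perm[j]];
    sum += prod;

    // Heap's algorithm: find the next permutation, one swap away
    for (i = 1; i < n && c[i] >= i; i++) c[i] = 0;
    if (i == n) break;            // visited all n! permutations
    j = (i&1) ? c[i] : 0;         // odd positions swap with c[i], even with 0
    t = perm[j], perm[j] = perm[i], perm[i] = t;
    sign = -sign;                 // one transposition flips the parity
    c[i]++;
  }

  return sum;
}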

Or at least that's where I ran out of brain for the evening.


November 24, 2017

And Pakistan joins the list of countries where homeowners have the right to sell rooftop solar power back into the national grid. Jeff is convinced that if we can't get our synchrophasor tech deployed soon, grids are going to start collapsing. (Me, I expect a couple of Katrina-style disasters and then it's a crisis which people deal with at the last minute. He'd prefer to avoid people's appliances getting destroyed and thousands without power for months, but given the Enron-inspired rolling blackouts in California and the ongoing mess in Puerto Rico, it'll probably take multiple cities going dark repeatedly for anyone to notice...)

Jeff is also writing a new matrix math library, which means he's sending me links to matrix math primers. Because my brain wasn't already full before that.

In theory both the fixed point library he made and the new matrix math library should be open sourced. We're just too busy trying to get everything to _work_ right now to do the releases yet. (I suppose we could throw 'em up on github, but without a web page and a mailing list it's not a real project. Ok, my mkroot project is using github's readme as its web page, but it has a mailing list!)


November 22, 2017

After I get toybox+musl+llvm+lld+linux not just rebuilding itself under itself but building Linux From Scratch under the result (as last time did for busybox+uclibc+gcc), my next goal is breaking AOSP into orthogonal stages maybe with kconfig.

That's a base layer I've prototyped at mkroot, and other obvious targets are minijail's android container as used in chromeos, the android init/jvm/selinux complex. Plus a "hermetic build" host airlock with toybox+ninja+toolchain and such (possibly hijacking the NDK for parts)...

Alas nobody else has been interested, and $DAYJOB's eaten every spare second for ~18 months now doing things like GPS software. (It's a startup. They do that.)

*shrug* This _is_ fixable, and Google is not opposed to any of it so far. But nobody ever wants to fund cleanup, and I've grown tired of giving "here's what I _want_ to do" talks at conferences. :)


November 21, 2017

A Federal Aviation Administration official just emailed me to ask if I want to work as a linux kernel developer on realtime radar data processing for an Air Traffic Control automation system. In other news, the FAA is automating air traffic control. (Good to know? Drones I guess. Hail Skynet.)

Today was the long meeting with investors about potentially funding j-core work going forward. Very exciting, I'm interested in most of the things they want to do with the architecture, things I've been agitating for us to do for _years_... but they haven't signed a check yet.

Alas, in business it's not real until the check clears. They were going to do a small Statement of Work in the run up to this meeting, while trying to get the real funding underway to get us started down their path (very aggressive schedule they want towards their product), but that SoW didn't happen, and there's no deadline by which it's expected to happen. I.E. Maybe Money, Jam Tomorrow. It's _exciting_ Jam Tomorrow, but it's not Jam Today.

Sigh. Jen was supposed to be here and do this. She was supposed to be on a plane from canada to tokyo the same time as me, but didn't get on the plane. She's since scheduled SIX flights that she wasn't on. Something is up there, but life goes on. And thus I got unexpectedly drafted to do her job.

Me, I'm out of my depth here: the lead investor's a kernel developer I vaguely know from way back, so I was happy to say hi, but I'm neither a business guy nor management. I can tell them how great our open source stuff is, but they know that (it's open, the video of our ELC talk is what caught their attention). I enthused at them a bit about synchrophasors, but Jeff's the one who taught _me_ all that. I only got sucked into these meetings because nobody else was available. (Well, Niishi-san and Arakawa-san came over from the other building for an hour or so, but they don't speak english very well and get really flustered when asked questions in it.)


November 16, 2017

The Python 2->3 conversion continues to suck. And it turns out Python had a 1->2 conversion that sucked almost as much! Backstory time:

So I'm looking through old mailing list postings from the late 90's, trying to figure out if my old blueberry.py script is the reason the kernel developers changed the default kernel output. ("Don't ask questions, post errors.") My quick hack was covered by lwn.net at the time and the commit that added it to the kernel was a few months later, but that commit comment mentioned the "dancing makefiles" kbuild rewrite as its inspiration, and I wondered if that fork copied the idea from blueberry.py between my January 10 post to lkml and the June 5 merge of a different implementation? (Or did they come up with it independently, because it seemed to _me_ really obvious it needed to happen, but lots of "really obvious, needs to happen" stuff doesn't for years until I break down and do such a terrible half-assed incompetent implementation that people go to _great_effort_ to nip it in the bud and prevent its widespread adoption.)

This dancing makefiles thing doesn't seem to have been in source control (this was back before git, which meant the open source state of the art was CVS (or an explicit CVS derivative like subversion), and most people just didn't bother). It was so long ago its mailing list was on sourceforge. Which has a horrible, horrible archive interface that makes stuff really tedious to find and read, and you basically have to read every message.

While slogging through that, one of the big repeated objections to Eric Raymond's CML2 (the "make menuconfig" rewrite submitted at the same time as the dancing-makefiles build plumbing redo) was CML2 required Python 2. Not that it required Python, but specifically Python _2_.

(This was back before Eric went obviously crazy and started spouting climate change denialism, thinking the book The Bell Curve made good points, that women programmers were trying to "honeypot" Linus Torvalds, and whatever crazy he's gotten into _since_ 2010. I'm told he's worse now, but haven't been paying attention.)

The point is, about 15 years ago when the existing kernel config system was hitting painful limits and a replacement was proposed written in python, there was a painful "Python 1.0 -> Python 2.0" transition going on in the python community, which seems largely lost to history at this point.

Meaning if Python 3 ever does manage to kill off Python 2, we can expect Python 4 to happen and be _just_as_painful_ as soon as the last switch goes down the memory hole and they forget why not to do that.

Meanwhile, the C 2011 standard was a non-event (nothing broke, all the old C99 code still worked, you could ignore it as long as you liked), and C99 offered candy (_finally_ you could explicitly specify 8, 16, 32, or 64 bit integer sizes, why was that so hard?) but you could still compile your ANSI C 89 programs just fine and ignore it as long as you liked, and of course ANSI C grandfathered in K&R syntax. C hasn't even quite gotten to the point where K&R syntax from the 1970's generates _warnings_ by default (although there's rumblings, and turning on the "warn about functions without prototypes" flag is generally considered a good idea because the 32->64 bit transition around 2005 assumed you had function prototypes rather than expanding the default type promotion of arguments from 32 to 64 bits everywhere; I still think that last part was probably a mistake, but it did save stack space so I can see their point...)

Anyway: Python 3 sucks, turns out python 2 previously sucked but wasn't a _pattern_ of sucking yet, and I'm not waiting for Python 4. Ten years of forcing something the userbase does not _want_ down the userbase's throat is not how open source development should work.


November 15, 2017

Sigh. I'm helping Jeff prepare for an investor meeting (which should be Jen's job but she's still in Canada for an ever-changing list of reasons, in _theory_ I'm doing the same stuff here I was in Austin just hanging out with Jeff in coffee shops rather than on long phone calls with him), and he keeps trying to tell me about Risc-V (as a "what's the competition doing" thing), and... I cannot BRING myself to care.

I've reached the point where I don't care about Risc-V the same way I don't have a Facebook account or run a Windows system. It may be wildly successful for a long time (I still have my doubts), but I'm not onboard and can wait for it to go away. (The sun only has so much hydrogen.)

Part of it's that I followed Transmeta, I followed OpenMoko, I waited for OS/2 and Linux on the Desktop to crack 2% market share. I don't see how their strategy of attacking the high end makes them all that much different from everybody ELSE who attacked the high end, from Sparc to the DEC Alpha. There was OpenSparc and Leon Sparc, there was powerpc.org. The Steam guys decided not to do their own game console because the unit volume wasn't there, and cost in this space is almost entirely a question of unit volume. Your design being free but your manufacturing being 1/5 the volume is optimizing for the wrong thing.

And part of it is the incessant hype. I saw a powerpoint where they created several different instruction sets each to win a different benchmark, while repeatedly touting how simple they are. They've taped out a dozen different times and are looking for their first actual user. That's "infrastructure in search of a user", clearly a bad thing in other contexts.

C and Unix were created by people who were using them as tools _while_ creating them, and their first big deployment was a contract with AT&T's patent and trademark office to build a typesetting system which baked the "everything is text" philosophy into the OS. Their development was guided by real use and an immediate testing feedback cycle driving the design. Meanwhile Pascal came out of academia as a "teaching language" that never bothered to standardize how you specify the filename when you open a file because it just didn't come _up_. Sure it was taught to a generation of students as their starting language, but its uptake beyond that? Turbo Pascal and Delphi have sort of wandered off. C is declared dead annually, the same way the Death of the Internet is regularly predicted.

J-core started by trying to improve upon a 20 year old processor design that iterated several times (we're not doing the first version, we're doing the second version incorporating end-user feedback and backporting bits of the third version), and we've got a specific niche (synchrophasors) we're designing products for already incorporating customer feedback from customers in three different countries. I understand the path to success here.

The Risc-V guys meanwhile are going "inevitability, pedigree, and funding", just like Itanium did 20 years ago. "I don't care if you want this, you have no choice." I'm sorry, I went into the booth, held my nose, and voted for that campaign slogan last year. I wasn't exactly excited about it at the time, am now kinda burned out on the concept. And the "get off your butt and vote _against_ if you can't vote for" aspect isn't there: arm64 has a stupid name but isn't torturing puppies.

Meh, I'm too close to this to have anything like an unbiased opinion, but if I wasn't doing J-core I still wouldn't be doing Risc-V. It's just not interesting to me. "I'm going to make an X that will displace Y" is an old, old story I've written about before. Intel couldn't do it with the Itanium, OpenOffice has spent 20 years trying to replace Word... Woody Allen said 80% of success is showing up but the other 20% can be kind of important sometimes. And the inevitability argument has always rubbed me the wrong way even when they pull it off.

(For the record, Mrs. Clinton lost my support in 2014 when she came out against Snowden in a radio interview I was listening to. That's a showstopper for me. I still flew back from tokyo early and pulled the lever on her behalf anyway, but that was a vote against not for, but I'm not surprised so many other people didn't. I flew back because I did _not_ take her election for granted, and didn't want to spent the next four years feeling guilt over not having done my part if the bad thing happened. These clowns implying it is my Duty To Open Source to support their chip? No. No it is not.)


November 14, 2017

Have we screwed up the climate? Let's see. In 2008, the Taklamakan Desert in Northwest china, the world's second largest shifting sand desert, was covered with an inch and a half of snow for the first time in recorded history. In _China's_ recorded history. Me, I'd call that unusual.

(I'm rereading the Temeraire books on my phone, that's the current setting the plot's wandered through.)

[I was in a hotel room in Japan when I wrote that. I'm editing it from an apartment in Milwaukee. Yes, still registered to vote in Texas, where my house is. The miracle of Per Diem consulting contracts.]


November 13, 2017

I think I've given up on ttwwiitteerr. I miss twitter, but this isn't it. Everybody's gone to 280, and it's not worth the effort to unfollow enough people to bring my feed back down to a reasonable volume.

I have no interest in reading 4 line tweets. A 2 line tweet is zero effort because I start at the first line and when I hit the right edge there's only one other line. It's read-at-a-glance, low-eyestrain. Four line tweets take more attention, and reading a zillion snippets in twitter's web interface just isn't worth it. It's not a _break_ from staring at code; if I want more eyestrain I'll just fire up the kindle reader or news.google.com or a blog like this one.


November 12, 2017

Hah. A recent discussion brought up an old linuxtoday comment which was a story.

[And that's the whole of the entry I left myself for that day. The obvious todo being "tell that story". I was referring to the "back when I worked at Boxxtech I convinced my employer to convince their supplier to create the first SMP Athlon motherboard", but between last entry and this one I've been blocked on editing and uploading these for a bit, and I'm tempted to just move on. The comment tells the basics, but... ok, new paragraph or two:]

Boxx saved Tyan's political bacon by buying a bunch of rambus motherboards, which Tyan had done the engineering work to design but nobody wanted to _buy_ because of toxic patent troll shenanigans on the part of rambus: inserting their technology into the DDR DRAM specification they were on the committee of, and then demanding patent licenses _after_ the standard was approved, in a blatant attempt to hobble their main competition. Everybody else went "ew" and refused to touch rambus' main product. But Boxx ordered just enough Tyan rambus boards to justify doing a small production run, so Tyan _didn't_ have to write off the rambus design work as a loss but could instead categorize it as a legitimate R&D expense that resulted in a product sale, which made Tyan's investor numbers look much better. And thus meant Tyan owed Boxx a big favor.

So I convinced the manager in charge of speccing future products that Athlons were kicking Intel's ass while Pentium 3 was stuck at 1ghz (they had to recall the slightly faster one because they'd overclocked their own chip and it was unstable [maybe _this_ was the story I wanted to tell last post?]) and Pentium 4's quest for clock speed _over_ performance (because marketing!) meant they had such an insanely long pipeline that every time they had a cache stall or mispredicted a jump they had to wait 40 clock cycles for the bubbles to work through the pipeline stages and that sucker was terrible... And we all knew Itanium wasn't ready for prime time, that didn't take convincing.

The showstopper was Boxx needed SMP and Athlon "didn't do SMP", but I pointed out (with URLs to contemporary tom's hardware articles) that Athlon was designed to fit in the DEC Alpha's EV6 bus, and the only reason you couldn't just stick Athlons in an Alpha motherboard was the boot rom had the wrong type of assembly in it, and swapping out the ROM chips couldn't be that hard. Athlon _was_ designed to do SMP, and the only thing preventing it in its first few years was lack of motherboards.

I convinced the guy that this was the way to go (I'd already researched the issue pretty thoroughly for Motley Fool articles I was writing, and could linkbomb him with All The Sources), and he put together a presentation for senior management, and they called in that big favor Tyan owed them due to ordering the Rambus boards, and commissioned a new board that became the Tyan Thunder (which is why boxxtech was mentioned on the press release announcing the board, which was the link from my comment in the old linuxtoday thing that started us off here).

(Yes, I wrote motley fool articles for 3 years on topics like this. They lost my author entry in a database migration so my articles just say "by Motley Fool Staff" now, but some of them mention me by name in the summary the editors typed in or similar, and some even have duplicate bylines with old (link to a 404 page) and new (wrong) info. And a few of the articles that come up on a google search of fool.com for my name aren't ones I wrote. Altogether I wrote almost two hundred articles, I keep meaning to index and mirror them properly, I have a couple series pulled out at the end of the writing page, but... todo list. [Seriously. Todo list.])

So in 2001 boxx went from SMP Pentium III to SMP Athlon (the "Boxxtech ServerBOXX R1"). And that's also why the first AMD SMP system ran Linux. (Boxx was an Irix shop that moved to Linux about when it hired me; senior management hated Linux and my boss told me to my face it was "communist", but Silicon Graphics ended Irix development to become a Linux shop so Boxx didn't have much choice.) There's more on that motherboard and its status as the first SMP Athlon in this linuxjournal article.

And of course once there was _one_ SMP AMD board, other companies came out with competing products that weren't so high-end and complicated. Tyan got a proper SMP support chipset out of AMD (which was happy to do it; I don't remember if they'd already announced it and there were no board vendors willing to touch it without a proven market for the thing, or if Tyan commissioned it from them. I left Boxxtech long before this new hardware shipped. Being head of Linux development in a company where senior management despises Linux was not fun. The dot-com bust hit not long afterwards and within a year they'd laid off everybody, and the senior management that hated Linux was on the factory floor turning screws to keep the company going. They managed, still around today as far as I know.) And once they'd started, Tyan itself went on to do a lot of SMP AMD motherboards. There _was_ demand, and everybody followed the money.

People were waiting for this product to "inevitably exist" for years but nobody wanted to be first, and the reason it finally _did_ happen is I convinced the right people it should happen, leveraging political backroom deals and investment journalism to give the technical arguments weight.

*shrug* A fun little historical footnote, more or less lost to history now. Thing I'm proud of nobody else even remembers, another instance of me setting off an avalanche in the Linux world, although that one went in the direction I'd hoped.

So if you're wondering why I'm trying to turn Android into a self-hosting development environment... it's a thing that should exist/happen and nobody ELSE seems to be doing it, so... grab the snowshoes and dog with the cask and head up into the mountains and start shoveling.

[And writing that up ate over an hour. You wonder why I'm so far behind editing and posting these? Yes, I just wrote all that text now, in April of the following year, even though it isn't in square brackets. This is meta enough as it is, moving on...]


November 11, 2017

Aha! I _did_ write down that processor rant. Well, bits of it anyway. (This is the one I couldn't find last month.)

There was a Motley Fool part too, let's see... (dig dig...)

[Editorial note: as I come to edit this so I can post it months later [I write this on April 15], I don't remember _which_ story I was referring to. How the Israeli design team took the India design team's head while Queen played and fireworks went off [it's a highlander reference] and thus Pentium M unseated Pentium 4? How when Compaq bought the corpse of DEC in 1998 they didn't do chip design, so AMD snapped up the Alpha chip designers and went "if you were to make an x86 chip what would it look like" and thus got the Athlon and Opteron and made Intel play catch up for 10 years, something Intel is very very bad at? Since the next few paragraphs are "oh look, I wrote about _this_ too" I'm not entirely sure what topic I was aiming for. Oh well, back to the text I wrote before...]

Heh, these two old point/counterpoint pieces (where somebody would write a "bull" argument on a stock and the other would write a "bear" argument, each side got 2 passes at it) turned out to predict wrong because AMD had a huge pile of debt that hamstrung its moment in the spotlight (all its success went to pay down student loans), because Intel finally broke down and accepted x86-64 into its life (Dell gave it an ultimatum in december 2004 ala "we _will_ ship x86-64 next year and if we can't get them from you we'll get them from AMD"), and because Pentium-M saved its bacon [as described in last paragraph's rant... this entry got kinda meta during editing].

However, the second of those had a list of articles on the topic, starting with the three article series intro to microprocessor fabrication I did way back when, and followed by the series I was thinking of (intel processor evolution from 386 to itanium). [Yes, but which aspect of it?] I've added mirror copies to my local index there. [Did I remember to do that?]

As for the others, the x86-64 launch party and a follow-up on Merced vs Hammer (I.E. Itanium vs x86-64) still stand up as basic technology comparisons. And another follow up did describe the problems Intel was suffering from until the Pentium-M resolved the issue. (I wrote stock market _analysis_ columns, not prediction. Intel was in trouble 17 years ago, it just got over it 12 years ago.)

[And it stops there. I clearly remember working my way _to_ a point, but have no idea what it was. Ordinarily I go through and _resolve_ comments I leave myself in square brackets, but I'm just gonna add more, put it up, and move on.]


November 7, 2017

When did the phrase "Conservative" replace the phrase "Old Fogey"?

A century and change back John Stuart Mill famously said he didn't think conservatives were stupid, just that stupid people are generally conservative. But add a layer of plutocrats and con artists milking the stupid people and you describe modern conservatism pretty thoroughly.

They spent fifty years screaming "better dead than red", and now they're the red states. McCarthyism was all about the looming spectre of communism, pinko meant a little _tinge_ of red was all-the-way bad, there could be no compromise. And now they're rallying behind an obvious Russian puppet who defies congress _not_ to sanction Russia, which still hasn't given Crimea back. (Wasn't the _second_ Iraq war still at least nominally punishment for invading Kuwait almost thirty years ago now?)

There are no principles here. Just tribalism.


November 6, 2017

Jen still isn't here. She's rescheduled her flight from Canada to Tokyo more than once already. The reasons keep changing.

Oh hey, there's more on that SFLC vs SFC thing. Still happy to be on the other side of the planet from it.


November 5, 2017

I've been having an email conversation with Linus Torvalds (this is somewhat unusual for me) about xargs.

Way back when the kernel limited "environment space" to 131072 bytes, I.E. 32 4k pages. The full prototype of a C program's main() actually has 3 arguments:

int main(int argc, char **argv, char **envp)

Usually people just grab the first two, but the _start code (in crt1.o, try compiling hello world with the -v option to see the implicit command line options) copies envp into the global variable char **environ; (there's a man page on it, "man 7 environ"). So all 3 get used, even if you write the function prototype to grab the first 2 arguments and ignore the third.

The argv[] and envp[] arguments are basically the same sort of thing, there's an array of pointers with a NULL entry at the end, and a bunch of string data. Each environment variable is a null terminated "keyword=value" string where the first = indicates the end of the name. The arrays and the string data for both arguments and environment variables have to fit in the space limit.
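
A quick way to watch all three arguments show up (just a demo, nothing toybox-specific):

#include <stdio.h>

extern char **environ;

int main(int argc, char **argv, char **envp)
{
  int i;

  // The startup code already copied envp for us, so these match.
  printf("envp=%p environ=%p\n", (void *)envp, (void *)environ);

  // Both arrays end with a NULL entry, so count envp the same way
  // the kernel counted argv.
  for (i = 0; envp[i]; i++);
  printf("%d args, %d environment variables\n", argc, i);

  return 0;
}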

Back when I first implemented xargs, I worked out the limit and very carefully measured the available space... but then a later commit changed the limit in 2.6.22, so each individual entry could be up to 131072 bytes (I.E. 131071 plus a null terminator) and you could have 2 billion entries per process.

Well now it's changed again: it's arbitrarily capped at 10 megabytes per process. And I don't really care what the limit is, I'm mad it KEEPS CHANGING and there's no way to probe it. (The posix APIs for asking this just return constants hardwired into libc which are different in glibc, musl, and bionic, AND there's just one size API that doesn't tell you size of entry vs cumulative size of all entries.) And if you make it too big, your exec fails. But if you get it too small, there's stuff you can run from the command line as one command that gets split into multiple commands through xargs, which is an unexpected behavior difference that's gonna hit somebody.
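
For reference, the probing API that _does_ exist looks like this (a demo; what it prints depends on which libc you built against, which is the problem):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
  // One number for the cumulative limit, nothing for the per-entry
  // limit, and each libc computes this value its own way.
  printf("_SC_ARG_MAX = %ld\n", sysconf(_SC_ARG_MAX));

  return 0;
}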

Hence the thread. I dunno what the right thing to do here is. Nobody does, so I need a design recommendation from the people who keep changing it.


November 4, 2017

Joe Ressington emailed me a link to the latest SFC craziness, and I'm so glad not to be involved anymore.

My impression (from a distance, having left before it all went down) was the split between SFLC and conservancy was basically Eben and Bradley not getting along because Eben had moved on from the FSF's shenanigans (hence founding the SFLC instead of staying to run FSF's legal arm) and Bradley remained a true believer (following Eben because sensei, but still wanting to steer the new thing towards militant lunacy; hence the SFLC legal settlements involving appointing a compliance officer who reported quarterly to the FSF despite the FSF having NOTHING TO DO with the enforcement effort).

They managed to stay amicably divorced and not fight for a longish time after that by carefully ignoring each other. That seems to have broken down.

I keep saying "dying business models explode into a cloud of intellectual property litigation" (because it's true, as is the corollary to Moore's Law where 50% of what you know is obsolete every 18 months and the great thing about Unix being it's mostly the same 50% cycling out over and over leaving C99 and posix-2008 still relevant decades later).

But part of what I mean by that is suing people over IP law is a sign of weakness. Like drowning swimmers climbing on top of the ones that can still keep their heads above water, all the splashing and noise signifying exhaustion. It's not a defensive weapon if you initiate an attack, and throwing the first punch means you couldn't cope. Russia didn't invade Crimea (or Georgia back in 2008) because they're in good shape: the soviet union collapsed because it couldn't _feed_ itself and that was before a full decade of hysteresis. Picking fights with their neighbors distracts from the fact Russia's pathetic modern GDP puts it somewhere between Italy and Mexico, and that's _with_ all the oil they can pump allowing them to buy food from abroad. Their current attempts to weaponize internet trolling are trying to drag everyone else down to their level.

So when I see the FSF lawyering up (sucker-punching Metis, reopening a war with linksys/cisco 5 years after OpenWRT happened, trying to hijack my busybox license enforcement experiment ten years ago into a permanent crusade) I don't see strength. I see weakness. I see an organization that lost relevance back in the dot-com days trying desperately to drag the spotlight back onto itself. It makes me sad.

I should do a write up on the egcs and glibc forks, grub and the gold linker getting crushed and mangled, the ftp site crack after 5 years of stagnation (including debian squashing gentoo), the lwn articles on multiple projects leaving gnu... Meanwhile the Linux world doesn't need any of this crap to reverse engineer hardware (forcedeth etc), run on the most locked-down game console hardware (xbox and the ccc video on the ps3)... I had notes on this before my "rise and fall of copyleft" talk years ago, but I think it was in the 2/3 of the material I didn't have time to cover. :)


November 3, 2017

Tried to do a video chat with Fade. Both talky.io and google hangouts refuse to produce audio output on my phone. (All four types of volume are up, there's microphone permissions but no speaker permissions. The shop game is producing audio just fine when I run it.) My netbook produces audio just fine but hasn't got a microphone (the lapel mic's back in austin). If I have them on at the same time they do the feedback whistle.

I replied to an email from Linus and attempted to cc: the bionic maintainer, but I used @gmail instead of @google so it bounced. Sigh.

I haven't even left the hotel yet this morning.


November 2, 2017

Tokyo!

I love the quiet hotel mornings when I haven't fully adjusted to the timezone yet, and am thus waking up at 5am and having 4 productive hours banging away on my netbook before it's time to leave the hotel. (So far mostly stuff like writing down the reason I have open terminal windows in a todo.txt file so I can close the tab. I still need to shut down this machine and replace the keyboard, I've gotten 3 of the 6 desktops cleared off so halfway there. Lots of open email reply windows dealt with too. It's the administrative version of technical debt.)

Jen missed her flight, but Jeff's here. I dug up some of the big todo lists I've emailed them and wrote it all down on a whiteboard, then started collating it into buckets. There's so much half-finished stuff we've dropped on the floor over the past year, just trying to remember what it all was is a big task...

In theory nothing I'm doing here couldn't be done via skype, but I like tokyo, and it's nice to hang out with Jeff. I help _him_ get stuff done, just by having somebody else to bounce ideas off of. (Half the reason I'm here is to counteract his burnout. He really _really_ needs an assistant. And to me it's vacation-ish because of the change of view.)


October 31, 2017

Flying to Tokyo on Halloween. My international flight has an electrical outlet! I'm sleep deprived (4:20am shuttle pickup) but not completely dysfunctionally so. Yay getting work done.

I meant to get my netbook shut down and the keyboard swapped out before the flight (and maybe the hard drive swapped for the terabyte SSD I have lying around), but I only managed to close about half my open windows/tabs. (Three desktops had no windows in them, five still did.) Oh well, maybe I can close some more on the flight, and the replacement keyboard's in my luggage, so maybe I can swap it out on the trip.

The cut.c rewrite turns out to be buggier than I thought, but a lot of it's stuff like "I never implemented -s this time around". That's what test suites are for.

The kernel build's grown calls to getconf, doing "getconf LFS_CFLAGS LFS_LDFLAGS LFS_LIBS". On my ubuntu 14.04 those all return nothing so the fact getconf isn't in the mkroot $PATH doesn't seem to be bothering anything, but I have a 90% finished getconf.c in toybox that doesn't have any of these symbols either.

Let's see, it was added by a commit which enabled large file support for hostprogs because fixdep had to deal with files larger than 4 gigabytes. Namely scripts/basic/fixdep.d was bigger than 4 gigabytes. I want to point them at Code's Worst Enemy and go "Are you sure making an IDE that can load 10 million line source files is solving the right problem? No human can read 10 million line source files, at one line per second it would take 4 months nonstop without sleep to do one pass over it..."

But ok, getconf goes along with dd in the "tools the kernel shouldn't need to build, but now does" pile.


October 29, 2017

Prepping for Japan trip. Soup and/or Shuttle arrives at 4:20 am for my 7:40am flight (because security theatre is ever vigilant against getting their budget cut) so I'm pretty much staying up packing and trying to get stuff done.

The oddest things wind up being useful, and I should save them somewhere. Oh hey, I have a blog. (I write stuff down because _I_ won't remember what I did in six months, not in sufficient detail to be useful...)

I'm building new musl-cross-make toolchains (with the new musl release) on the fast machine, and rsync-ing the output directory over as it fills up. I don't have enough space for the old toolchains _and_ the new toolchains (it's well over a dozen gigabytes each) so I'm tarring up the old ones while the new ones rsync into another directory. But tarring up the old ones doesn't delete the expanded versions, so I've been running rm -rf on directories as they complete. Except I missed a bunch, so I did:

tar tvzf ../oldbin.tar.gz | sed 's@.*oldbin/\([^/]*\)/.*@\1@' | while read i; do [ "$i" == "$OLD" ] && continue; echo "$i"; ls "$i"; OLD="$i"; done

That shows me which directories still exist, and lets me delete them. (Modulo the last one in the list may still be compressing.) I only want the tarball of old toolchains until I've proven the new ones basically work.


October 26, 2017

So Uncle Ben gave Peter the speech about great responsibility, but did _not_ leave him the rice business. Just checking.


October 25, 2017

I fly to Tokyo on tuesday! That's... halloween morning. I wonder if I should wear a costume for my flight? (That'll annoy the security theatre guys...) And my return flight's a full 5 weeks later. Making up for lost time, I see.

The Potential New Investor should be visiting Tokyo next month, so my trip overlaps with his. In theory I'm just helping Jeff prepare some material (same way I've been doing here) and then he and Jen do the actual meeting. She's supposed to fly there the same day I do.


October 24, 2017

Spent a lot of last night working with Jeff on Business Plan things, trying to put together a Statement of Work for a potential new investor, which might fund some of the stuff we've been SAYING we're gonna do for 18 months now. That would be really, really nice. Fingers crossed.

Cycling back to utf8: there are two ways of dealing with utf8/unicode characters that let you keep track of where the cursor is and not accidentally scroll the screen: 1) you can filter out invalid sequences, 2) you can squash them to a known representation (I.E. escape them). The stuff I've implemented so far does the second, but wc -m does the first, and that means cut -C should too. (It would be so nice if there were standards for this, but this is an obvious missing _feature_ I'm trying to add.)

A failure mode I've gotten into recently (I blame GPS) is wanting to make The Perfect Infrastructure, which manifests here as me first wasting several days trying to get a single codepath to handle _both_ of the above modes in the same function (it's a mess of conflicting assumptions), then writing a perfectly good unicolumns() function that takes a char * and number of columns, and returns the stopping point of the last display character that fits in that space. (Treating low ascii characters as "filtered out" because what does a newline _mean_ in this context, and tab needs to know where you _started_ to know where to pad to, and if backspace happens after a 2 column character does that subtract 2 or 1... There's a _reason_ I did the "escape it all" version first.)
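
The core loop is roughly this shape (a sketch of the idea with a made-up name, not the actual lib/ code; assumes setlocale() already selected utf8 and punts escape handling back to the caller):

#include <stdlib.h>
#include <wchar.h>

// How many bytes of utf8 string str fit in width columns. Combining
// characters (wcwidth()==0) never trigger the stop, so they stay
// attached to the display character before them. (A leading combining
// character with nothing to combine with is one of the open questions.)
size_t fit_columns(char *str, int width)
{
  size_t used = 0, len;
  int cols = 0, w;
  wchar_t wc;

  for (;;) {
    len = mbrtowc(&wc, str+used, MB_CUR_MAX, 0);
    if (!len || len>(size_t)-3) break;  // NUL terminator or bad sequence
    if ((w = wcwidth(wc)) < 0) break;   // unprintable: caller escapes it
    if (w && cols+w > width) break;     // next display char wouldn't fit
    used += len;
    cols += w;
  }

  return used;
}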

But now that I have unicolumns, I'm going "well I _could_ write a generic function that could measure columns or bytes and return how many fit in..." Nope, write two of 'em, and wait until you NEED the second. Don't try to make that share code.


October 23, 2017

Went to T-mobile yesterday. They had zero clue what to do about my phone (possibly negative clue), and the only phone they have with Android Olestra is that "Pixel" thing that doesn't have a headphone jack. (I'm aware Apple wants to eliminate all non-DRMed ways of producing audio output, I just don't understand why Google would go along with that. Google doesn't have its own iTunes-style proprietary media distribution channel where it can give away the razor and screw you over on the blades? Ok, they're trying to "Youtube Red", but I'm already subscribed to netflix and hulu and then we've got prime as a side effect of giving money to a third organization. I refuse to subscribe to a 4th service, Google can buy netflix if it wants my streaming money.)

There are several other phone repair places I could visit, but how many of them just do hardware and how many can actually fix a (relatively simple!) software issue? Sigh, I should fiddle with adb and try to get a remote shell on the phone... (In theory I need to _become_ the kind of software expert I'm seeking. Not currently there and dunno how to get there, but... *shrug*.)

Being out with netbook but not having phone tethering for internet doesn't exactly crimp my working style, but it does accumulate todo items for when I have net again. Then again, fewer distractions... (On the drive back from Minneapolis I had a night in a hotel room with no cats! It was lovely. I miss it.)


October 22, 2017

Redoing the crunch_str() infrastructure and this is a HARD PROBLEM SPACE. There are so many different possible failure modes! For example, I'm doing the complement of utf8len() that measures columns instead (returning number of columns consumed by next display stopping point, and also measuring the number of bytes consumed by advancing the char **), which means skipping over combining characters to find the next display stopping point. It uses the same escape info as crunch_str(), a function to receive data that needs to be displayed escaped, and an "escmore" string listing additional characters to pass through to the escape function.

So question: what happens if you have valid combining character(s) followed by an invalid character you need to display escaped? What happens with a combining character followed by a newline, or string-ending NUL?

The last pass on this punted by declaring combining characters as always escaped. I _think_ what I should do is say that combining characters are only valid if the sequence includes a display character to apply them to.

Speaking of which, if I have a utf8-salad mess ala tests/files/utf8/test1.txt and I then cut and paste it back out of the terminal window, is the _order_ of the combining characters preserved? (Is there a standard on this? Do I need to manually test xfce's Terminal and kterm and gnome terminal and xterm and that one Rich Felker wrote? Who would be a domain expert on this? I have to match the order to access a file named thusly; it has to be byte-exact.)

I'd say "this is why ls -b exists" except it _doesn't_. Try this and weep:

cd toybox
mkdir sub
cd sub
touch "$(cat ../tests/files/utf8/test1.txt)"
diff -u <(ls -b) <(ls)

Two slightly different utf8 salads. Joy. Oh well, capturing them in environment variables should theoretically work. (Modulo if you ever _do_ create a file with trailing whitespace, the command substitution will eat trailing newlines and then how do you rm it from the command line? You notice I create crazy filenames in subdirectories so if all else fails the cleansing flames of "rm -rf sub" can make it stop.)

Hmmm... Ooh, bug! My ls doesn't have a space after the word salad file in default (-C? -x?) output format. The gnu/dammit one does. I dunno if that's because they're fontmetricising better or what...

Aha! It's because they're padding by _two_ and I'm padding by _one_ and the test1.txt word salad test case has trailing combining characters that apply to the space. So not exactly a bug, but another head-scratch "what do I do about this?" Nothing, I think... (There's a combining character for putting a tilde in the _middle_ of the next character? Apparently so! I copied this from the @0xabad1dea twitter account of Melissa Elliott of Veracode, I have no idea how she produces them. Smashing the keyboard with a special shift key, I'd guess.)


October 21, 2017

Driving Fuzzy to another stabbing competition.

Hmmm, the CFG_TOYBOX_I18N symbol is making strlwr() much simpler (I.E. allowing a 2 line version to substitute for a 22 line version). On the one hand, "two codepaths bad". On the other, "order of magnitude simpler option", plus it uses less memory. (The other version basically doubles the malloc() size because we can't guarantee it won't expand during reencoding. Which is a hack, but what's the alternative? A measuring pass followed by a transcribing pass is way slower, and on most systems the unused pages aren't faulted in since we didn't zero them after initial heap allocation. Classic memory vs speed tradeoff, if it's a short lived allocation it should be ok? Sigh.)
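
For the curious, that "double the malloc" hack looks something like this (a sketch with a made-up name, assuming a utf8 locale and a towlower() whose case tables know the exotic mappings; error handling elided):

#include <stdlib.h>
#include <string.h>
#include <wchar.h>
#include <wctype.h>

// Decode, towlower(), re-encode. The lowercase form can encode to MORE
// utf8 bytes than the original (U+023A is 2 bytes, its lowercase U+2C65
// is 3), so allocate double the input size instead of measuring first.
char *utf8_strlwr(char *s)
{
  char *buf = malloc(strlen(s)*2+1), *out = buf;
  size_t len;
  wchar_t wc;

  while ((len = mbrtowc(&wc, s, MB_CUR_MAX, 0)) && len<(size_t)-2) {
    out += wcrtomb(out, towlower(wc), 0);
    s += len;
  }
  *out = 0;

  return buf;
}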

Meanwhile, in crunch_str(), "display from right edge" and "escape unprintable characters" combine badly. I can't have tested that combination because it won't work at a conceptual level... ah, yes it can, I just have to re-measure characters as I discard them at the left edge, including escape widths. Lovely. Just to be sure... yup, the filename in hexedit is trimming to the left, not the right. I wonder when that happened?

And I bricked my phone. The "how do I disable Google Feed" instructions on Google's website said to disable "Google App" and reboot, and now my phone's been stuck at the boot screen for 12 minutes. The boot screen is animated, it does the little shudder of testing vibration every half-minute or so, but it's not bringing up a desktop.

If I hold down the up and down volume keys while rebooting... and it's a black screen. And won't power back up. Yup, it's a brick.

Time to visit the T-mobile store I guess.


October 20, 2017

Sitting in a McDonald's booth with netbook a bit after midnight (to get away from the cats), trying to redo the crunch_str() function to handle combining characters properly. (I thought it did, but apparently not. Yes, still fallout from implementing cut -C.)

And now ~18 hours later, I'm in a hotel room in Dallas, having promised Fuzzy I'd drive her to her fencing tournament this weekend and thinking "registration closes at 1pm" would mean we'd leave early saturday morning. No, she got a hotel room for friday night and expected us to drive up today. So that's 4 hours out of the day, plus running around getting Adverb's heartworm medication from the vet.

I think I've figured out why top is so slow: it's calling fwrite() on each character individually. I thought that would collate into one actual write() system call per whatever the FILE * internal buffer size is, but I guess not?

I was pretty sure crunch_str() was debugged, but looking at it I don't see how it _can_ work for all its options. Which is odd because I _tried_ this with hexedit(), specifically making a big long filename using the Japanese poem line I cut and pasted from somewhere ages ago to have a Big Block of UTF-8 Chars. (Ideally there'd be some doublewide chars in there, but I dunno what languages use those. It doesn't do the right to left thing from arabic yet.)

The crunch_str() API has got a "show me the _last_ X chars of this" option (so /path/from/root can show the actual unique filename and cut off the _start_ of the path instead), and I remember testing it, but the code implementing it seems to have dropped out at some point? And I don't think I ever tested combining chars, they were escaped out so it got visibly displayed. (Which is the right thing to do in some situations, and the wrong one in others, and I'm not sure which is which. Conceptually an umlaut is a combining character, but it's a common enough case they added chars with umlauts for a lot of languages.

I have no way of telling when this markup is cosmetic (ala that stupid New Wingdings nonsense that just resulted in Patrick Stewart voicing excrement for Sony, so the smiley faces dressed up as a police officer can have specified skin tones), and when it's actually a functional part of the language providing diacritical marks and such without which the meaning of text is interpreted differently. I.E. when is expanding it uniquely and unambiguously the right move, and when do native speakers need their words rendered in their native characters to be readable? I dunno, there's domain expertise here I do not have.)

I've mostly erred on the side of "show this filename/username unambiguously because '..' and '.[invisible].' mean different things with security implications", but cut -C measuring columns is intended to measure columns, and now that I'm trying to do it my infrastructure isn't there for it.

What seems to have broken crunch_str() is adding generic escaping functionality via callbacks. The above top slowdown is hard to fix because I can't go "find the start of output, find the end of output, do one big fwrite()", because if there's escaped characters in the middle of that they need to be output differently.
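
So the fix is probably batching: scan ahead for the next byte that needs special handling, fwrite() everything before it in one go, deal with the special case, repeat. A sketch (using "low ascii except newline" as a stand-in for the real callback logic):

#include <stdio.h>

void write_escaped(FILE *out, char *str)
{
  while (*str) {
    size_t run = 0;

    // measure the run of bytes that can pass through untouched
    while (str[run]=='\n' || (unsigned char)str[run]>=32) run++;
    if (run) fwrite(str, run, 1, out);
    str += run;

    // escape one problem byte, then drop back into bulk mode
    if (*str) fprintf(out, "^%c", *str++ + 64);
  }
}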


October 19, 2017

Yay, Rich got a musl release out! I need to clear some time to poke at mkroot to see if he broke anything. (Regression testing after the fact...)

So cut and wc have _three_ ways to measure stuff: you can count bytes, you can count utf8 characters, and you can count column width taking into account combining characters and "wide" characters that take up columns.

(Aside: the Java 1.1 AWT's fontmetrics() was actually kind of cool, part of a simple and straightforward graphics toolkit I miss, but then they added this "spring" crap in 1.2 and it all went pear shaped and I stopped paying attention again.)

Programming happens in terminals with monospace fonts, so instead of pixel widths we have an integer number of columns. The columns are measured using wcwidth(), which returns 0, 1, or 2 depending on whether it's a combining character (modifying the next non-zero-length character output), a normal single column character, or one of the multicolumn characters. (I haven't seen 3 yet, dunno if the standard allows it, but it's not conceptually much different than 2, which raises the problem of "do I truncate or go past the end".)
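
For concreteness (a demo; without a utf8 LC_CTYPE, wcwidth() returns -1 for everything non-ascii):

#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
  setlocale(LC_CTYPE, "C.UTF-8");  // older glibc may need "en_US.UTF-8"

  // combining acute accent = 0, ascii letter = 1, CJK character = 2
  printf("%d %d %d\n", wcwidth(0x301), wcwidth(L'a'), wcwidth(0x65e5));

  return 0;
}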

Unfortunately, wcwidth() has the same "hidden state" nonsense that mbrtowc() does, caring about calls to setlocale() which sets a bunch of non-orthogonal crap: are we using utf8, _and_ what's the date output format and month names and thousands separator character?

It's 2017: utf8+unicode is a thing. I don't care what your locale says, if you select cut -C or wc -m you get utf8 and unicode. Same way I don't care what your $TERM is set to, you get ANSI escape sequences. (And I don't care that posix dd has ASCII and EBCDIC modes, either. You get ASCII. We implement one sane codepath. If you want the GNU/Dammit versions with #ifdefs for HP mainframes last manufactured in 1984, you know where to find them.)

This implies I should write my own wcwidth() the way I wrote my own utftowc(). Unfortunately, I could do the latter because utf8 is well-defined. (Less well-defined than when Ken Thompson created it, but still sort of reasonable-ish with only minor standards body scar tissue.) But unicode is insane and gets revised every few years. I really want that to be libc's problem.

I poked Rich on the #musl IRC channel and he said I can setlocale(LC_CTYPE) to select JUST character encoding without messing with the rest of it. He also says that the sscanf(blah, "%*s", len, str) problem where len was _not_ number of bytes was a glibc bug he pushed a patch for. Between the two of those, maybe I can take TOYFLAG_LOCALE out of the flags and just set LC_CTYPE to "c.utf-8" in main()? That would be nice...

Ok, let's try this... Yank the I18N Config.in entry, the only users of the symbol are expand.c (which is dubious code, add a TODO to clean that up) and main.c which I can hardwire _if_ all the libcs fixed that sscanf() issue...
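
The hardwired version would boil down to something like this (a sketch with a hypothetical helper name, not actual toybox code; "C.UTF-8" is the glibc spelling, hence the fallback):

#include <locale.h>

// hypothetical replacement for the TOYFLAG_LOCALE plumbing
void init_ctype(void)
{
  if (!setlocale(LC_CTYPE, "C.UTF-8")) setlocale(LC_CTYPE, "en_US.UTF-8");
}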

Who is setting TOYFLAG_LOCALE: sed, ls, paste, expand, wc, ps, hexedit. Ok, hexedit just wants utf8, ps needs this for top, wc for -m, expand is the aforementioned dubious, ls has wrangled the crazy sorting before and I'm pretty sure doesn't care? (The sort is hardwired to ascii order; it's trying to fontmetrics so -l and -C and such can columnize properly for utf8. There was a thing on the list about ls sort order a while back...) Probably sed wants the regex library to handle utf8 properly but why would it care again? Ah, s///i for case insensitive matching. I should make sure y// can match unicode chars. Um, would that include combining characters?

Sigh, this is really not a well-defined problem space.


October 17, 2017

I was raised by cats, but I'm waiting for my current cats to reach the end of their natural lifespans, and not getting more. I've been getting increasingly allergic to them for years now, and it's getting pronounced. Plus Peejee follows me around and climbs up into my lap/arms/shoulder when I'm sitting at the computer (or tries to block the heating vent with her tail over the keyboard), and when she doesn't George bites me. Zabina's been tromping through the poison ivy along the back fence and then coming in to rub against me. At night Peejee curls up against my face to try to make sure I Breathe Through The Cat. (George just bites my face.)

I wanted children. I got cats. (Well _I_ couldn't have children. That whole "boy" thing. Fade got a dog and another cat on top of previous cats.) At 45 I'm coming to accept that pets are not acting as training wheels for kids, and don't really want more of them.

I type this having left the house with netbook to get away from cats. This means I'm going out to places that require a food/beverage purchase to secure a seat at a table (increasing my calorie intake, especially since "cheap" and "healthy" are so often traded off), and that I'm competing with homeless people for public spaces that let you cheaply sit around for hours in someone else's air conditioning.

Cats.


October 16, 2017

Working on presentation materials with Jeff (via email) for potential j-core investors. Simple stuff, "What's a computer", "What we made (asset inventory, tangible and intangible)", "Market context"... that sort of thing.

Jeff and I have vaguely planned to do podcasts for years, but are both way too busy. On one of my early trips to Japan he bought a tripod, about the way I bought a lapel mic. Neither has made podcasts manifest themselves yet, but we live in hope...


October 15, 2017

In Dallas I got a "driving energy" chocolate square (10 grams of chocolate is 50 calories, still well under 200 for the day) that claims to have 150 milligrams of caffeine.

It's... quite effective. And tastes fine, which is an accomplishment given how bitter that much caffeine is. Decent calorie to caffeine ratio. And it was 99 cents. Possibly I should try mail ordering these. (Hmmm, it comes in civilian and professional versions. I had the latter, which has twice as much caffeine...)

There's a warning on the label. Never have more than two of these per day unless you are a thirty ton mega-elephant with bronchial pneumonia. Understood.

Fired up their "store locator" on the website to see if I could buy more individually, and the closest location to Austin is...the store I got it at back in dallas. Ok then.

Got in really late, but the chocolate square kept me up another hour at least. Gave up on fasting and put a beef heart from the chest freezer into the slow cooker for morning, going "maybe I can atkins instead"... and then filled up the extra space with potatoes because potatoes. Yeah, that's usually about how it goes.


October 14, 2017

Driving back to Austin. Picked up my car from my sister last night.

Doing the fasting thing again. My version of which involves eating things that have basically no calories in them (today I had two McDonalds side salads with vinaigrette dressing, and a lot of diet Dr. Pepper. Total calories a hundred and something for the day). After the first day I mostly stop being hungry. The problem is being around people who are eating, I'm ok doing this alone, but the longest fast I've done was pretty much the duration of Fade's vacation. She's up in Minneapolis but Fuzzy's home, which means a fridge and pantry full of food. Oh well, something to do on the trip, anyway.


October 13, 2017

I now have three different workarounds for the missing down arrow: two finger slide on chrome, page down and cursor back up, and "j" in vi. All this to avoid replacing a keyboard when I _have_the_part_. (It's closing all these windows. Still grinding away at that...)

I'm avoiding twitter today, because twitter in general has been epically assholish recently (such as suspending Rose McGowan for tweeting about Harvey Weinstein molesting everybody, but in general having an is_nazi flag to comply with european law while refusing to let everyone else USE it, and then going "we'll give everybody 280 chars instead!" which isn't a PROBLEM, that's your friggin IDENTITY you're discarding)... So anyway there's a "women avoid twitter" thing going on, and I'm doing the "solidarity reg" thing.

But the thing about avoiding twitter is who do you comment about avoiding twitter ON?

Twitter seems to be going "but we're where everybody _is_, sites like AOL, Livejournal, and Myspace don't lose existing userbases, once you're on top network effects keep you there forever..." They are, presumably, wrong, but I couldn't tell you when.

Huh, somebody's pretending to be me on faceboot, with a profile picture of their prison tattoos. Last time I checked there wasn't anyone camping my name there, but it's been a longish time since I looked. (No, I wasn't thinking of doing a faceboot account, I haven't got one and am not getting one. I learned my lesson about "join to make it shut up" with linkedin. It's a variant of paying Danegeld.)


October 12, 2017

And the down arrow key on my netbook broke the rest of the way. It started self-triggering due to the accumulation of cat hair in it, so I took the top off and cleaned it out, and then couldn't get the keycap back on. But I could press the little plastic thing so it still worked... until the little plastic thing came off today. I can sort of hit the switch under it with a fingernail, but when I do it sometimes stays pressed for several seconds.

I asked Fade to order a replacement keyboard when the key first broke (the F1 and F2 keys were removed by an enthusiastic cat some time ago, but they have the plastic thing I can hit and don't come up much), and have been carrying it around for a while. But to install it I have to power the netbook off, not just suspend it, and that means closing six desktops full of windows full of tabs...

I cut a toybox release! Ok, there's a who's on first routine here...

One of the dozen or so half-finished projects in my tree is a rewrite of cut.c to (among other things) have "cut -DF 2,5,4" as an alternative to "awk '{print $2" "$5" "$4}'". While I'm there I did a _complete_ rewrite, and I finally checked it in and am properly testing it.

Then I decided that holding the release for new destabilizing things nobody but me will have tested (since they went in at the last minute) was a bad idea, and I should cut a release at the last commit before I replaced the cut command.

Hey, dreamhost went back up! (It was down. They're cheap and don't meter my bandwidth.)

Email from google.com is now winding up in gmail's spam filter. That's mail from google employees. I feel I should send the gmail admins a fruit basket or something...


October 11, 2017

I saw some medieval historians speculating on twitter why certain things got preserved and others didn't, and it occurred to me that nothing on Kindle will exist in 100 years. You can find an old paperback in your grandparents' library but even if the servers are still up (not likely, cloud rot) you won't log into your grandparents' accounts posthumously.

Cory Doctorow pointed out (in a talk back at Penguicon 2005) that the Age of Copyright will look to future historians like a dark age. Up through the 1920's there'll be a pile of literature, and then nothing for a hundred years as copyright lasted far longer than the material was worth preserving. Most things are out of print after 10 or 20 years, but copyright is life of the author plus a century.

This means that the open source and creative commons stuff could be all anyone will see of this period of time.

IP law isn't the only thing that's outlived its usefulness. The entire mindset of the Baby Boomers is obsolete today. Once enough of them die, we need universal basic income, medicare for all, solar+batteries, app-summonable cars, and the top income tax bracket back to Eisenhower's 92%. All of this is achievable, we just need to move the Overton Window to where advocating them isn't crazy talk. (And we may need to declare billionaires a game species like the French did in the 1700's. France is a much more progressive place these days than it was back then.)

America's plutocracy happened since LBJ. The national debt maps 1:1 with the money Reagan let billionaires keep. The graphs are mirror images. All this stuff is _recent_, and can still die with the people who caused it.

I wonder if Patreon is a baby step to UBI... (Probably not, but... sort of? A little? At least there's a scent of something there...)


October 10, 2017

Got confirmation from Khem Raj that this commit fixed his build break issue for toybox on openembedded (and thus yocto and tizen).


October 8, 2017

Got to Minneapolis in the early evening, drove to my sister's place and had her drive me to Fade's dorm, and then leave with the car. (Her oil light's been stuck on forever, and she used to check it regularly, and as soon as she got out of the habit the car _did_ run out of oil and the engine seized. On the highway, at speed. I so look forward to self-driving electric vehicle subscription services, which would have multiple ways of this not being the driver's problem.)


October 6, 2017

Phone call from my brother, saying my sister's car died. Oddly enough I'm driving up to visit Fade this weekend, so as long as I get there by monday morning I can give her a ride to work.


October 4, 2017

A recent tweet reminded me that the US needs to find a new way to fund reporting now that public airwaves no longer come with a public service obligation.

The Communications Act of 1934 required broadcasters to act in the "public interest" in exchange for access to a shared public resource, I.E. the finite broadcast spectrum. This translated to concrete requirements that a certain number of broadcast hours be devoted to nonprofit uses, from news to educational programming.

That's why broadcasters from Edward R. Murrow to Walter Cronkite were special. There was an unassailable wall between "editorial" and "advertising" in each station's nightly news broadcasts: if it wasn't sufficiently objective and "in the public interest" (according to the regulators monitoring the broadcasts), it wouldn't burn off the federal requirement for public service, and they'd have to sacrifice more hours showing things with mediocre average ratings which advertisers didn't particularly want to fund. News NOT being sufficiently factual, objective, and rigidly sourced could cost the station big money.

Then cable TV happened, which wasn't a finite shared resource with a public service requirement as a cost of entry. And one of Richard Nixon's speechwriters started a propaganda network that _pretended_ to put out the same kind of objective news programs the broadcasters did but wasn't even close, and people bought it because 50 years of broadcast regulation had taught them that news in the USA was scrupulously objective and fact-based and would issue a correction if it got anything wrong, and Roger Ailes' "Fox News" leveraged that perception to blatantly lie to people and get away with it. (And as high-fidelity FM radio eliminated demand for crackly AM radio, leaving consistently empty spectrum, AM radio also got exempted from public interest requirements and became infested with conservative talk radio, which also pretended to still be held to a standard no longer enforced against it.)

Then the internet started seriously eroding the advertising revenue of both broadcast news and newspapers (which had their own problems with "yellow journalism" a century earlier but in the face of competition from television news had consistently belonged to a fact-based ecosystem for all of living memory, outside of the supermarket tabloids), and as a result could no longer afford worldwide research bureaus and reporters spending 6 months on assignment tracking down a story. As budgets collapsed the cheap thing to do was present both sides' arguments and imply the truth was the midpoint between them, which was trivial to game with a combination of triangulation and moving the overton window, and the heirs of Roger Ailes went to town. This made uneven progress until the GOP's ongoing atrophy (Rockefeller had warned them that the southern strategy would have that outcome) made the party too weak to keep control of its toys, and con artists like Trump and state actors like Vladimir Putin stepped in and led away the carefully groomed masses of gullible rubes fed 30 years of propaganda.

At this point, I'm just waiting for the mass-senility of the baby boomers to work its way through and hoping there's something left to salvage from the wreckage afterwards.


October 3, 2017

Sigh. Nobody understands User Mode Linux anymore. (Look, a website still on sourceforge touting a book that came out in 2006! Still the first google hit for "User Mode Linux".)

UML was never ported to most of the architectures Linux runs on, even though in _theory_ it should just care about libc and some memory mapping tricks. Sure it's been overshadowed by qemu, but if you want to stick a printf in the kernel and understand how system call dispatching works, UML is a pretty darn fun toy. (And it's an intermediate between containers and virtualization, given how much effort both of those have absorbed it seems like it would have a niche if it wasn't perceived as abandoned.)

Every time people talk about the depth of the talent pool in open source, I keep thinking about ten year old ideas I've had that nobody else has beaten me to. They're generally things that I'm not particularly good at but which I'd try to stick a crowbar in if I had the time. It's probably not _hard_, I just have too many open cans of worms already...

The greying of linux probably works in here somewhere.


October 2, 2017

Ooh, here's a nice video: a Morgan Stanley Analyst giving a presentation about app-summonable self driving cars. This is what passes for baseline industry knowledge among the financial and industry people, it's why the car companies are so desperately switching over before all their competitors do. (Transportation As A Service needs maybe 10% as many cars as we have now, that's 10% the manufacturing volume, that's 10% as many car companies. Those that want to survive want to get in on the new market and be providing not just the hardware but the _service_. It's a big game of musical chairs with everybody scrambling for a seat as they're taken away.)

That's why the auto industry is crawling over itself to switch to electric as soon as possible. (And look, another riding drone.)

Tony Seba's stanford report says fossil fuel cars stop selling circa 2024 due to collapse of that ecosystem: finding fuel, spare parts, mechanics. The result is a stranded asset. Seba's analysis predicts a cost curve that makes electric vehicles cheaper than fossil fuel vehicles in all categories, so that by 2025 all new cars, vans, trucks, busses, and tractors will be electric. (Except a residual hobbyist market about like horse riding is today. So you'll still be able to watch the Indy 500 just like the Kentucky Derby.) The breakeven point where gasoline and electric are roughly equivalent is 200 mile range and $30k sticker price, and electric vehicle improvement won't stop there while fossil fuel cars are a mature century-old technology without obvious room for rapid improvement.

(Here's another stanford professor who thinks adding lead to solar manufacturing would improve matters. Anybody else spot a downside?)

One reason electric car range can increase is terrible gasoline efficiency: less than 20% of the energy released is harnessed as thrust, the vast majority is waste heat. So even with gasoline's 2 to 1 weight efficiency (due to reacting with atmospheric oxygen, so you only carry half your reaction mass with you), electric vehicles can reach range equivalence at 1/5 the power storage, double it at 2/5. Plus electric cars can devote more weight to batteries since they don't have transmissions, radiators, engine blocks... (Electric cars typically have front _and_ back trunks, it's empty space where a gasoline engine would be. If you really care about range, you can fill it with batteries instead. Some experimental models have already demonstrated a 100,000 kilometer range, but that isn't currently cost-effective and takes too long to recharge.)

Can we really do no better than 20% efficiency? Sure we can! Mixing water with gasoline plus detergent is an old trick, widely known for decades: the heat turns the water to steam converting more of the energy into thrust. Of course gasoline and water don't WANT to mix, so you need a detergent or an ultrasonic vaporizer (called a "bubbler" after a 1960's implementation of it one of my professors told me about in college). In the 90's Caterpillar got a patent on basically using powdered sponges to get 80% water and 20% fuel mixed together to run their construction equipment, which worked just like pure fuel but gave you mostly thrust and little waste heat. But the oil industry fought hard against any way to sell _less_ fuel, and preventing your engine from rusting internally or the immiscible liquids separating back out and stalling out required more R&D than individuals could usually pull off reliably themselves, so the technique was easy to discredit.

Another excellent video is The state of the US Energy Transition (Chris Nelder). The US energy status is: No new coal ever, wind overtook natural gas for new installs in 2015, and solar overtook wind in 2016. Near the end of his talk he explained why hydrogen didn't take over, how tar sands are more expensive than deepwater drilling, and pointed out that high oil prices during the Dubyah administration involved china creating and filling a strategic petroleum reserve, which is now full.

Meanwhile, the Dorito is selling off the US strategic petroleum reserve and presumably pocketing the money via some shell corporation. Drawing down the strategic reserve also opens the possibility of oil shortages and price spikes for his backers to benefit from.

He's probably familiar with this because the way Putin funded his puppet government in the Ukraine (before they voted them out and he invaded to get back control of the port) was the same way he funneled money to some of Trump's campaign advisors: by selling them a bunch of oil from the state owned oil company at way below market prices, on generous credit terms. Then they sold it at market prices, paid for the oil with the proceeds, and pocketed the enormous difference. (When dealing with enormous transactions you don't have to be that blatant about it, inserting a middleman to shave off even half a percent can still be hundreds of millions of dollars.)

Here's a video on Farming the wind (I.E. wind turbines as new income for farmers.) Meanwhile, Solar has been something you farm for 5 years.

On average, data centers spend 40% of their energy on power conversion and cooling. They need to choose between putting them down south where solar makes power cheap, and putting them up north where cooling is free (most of the year).

People stop deploying solar when the "duck curve" threatens to go negative and you have electricity you can't use. California's crazy regulations once made them pay other states to take it rather than simply unplugging the panels (which is called curtailment, and no it's not bad for the panels). We're so used to electricity having a dollar value attached that when we generate power we can't use we freak out, even when it was free and otherwise would have just heated up some rooftops. Batteries let us store the power.

California now gets about 10% of its power from solar, and 90% of europe's new energy installations were renewables, primarily solar and wind. The worldwide installed base of solar panels has grown by a factor of 6 since 2010.

One more video I have lying around, Ramping up Solar to power the world more or less has a Wadsworth constant of 26 minutes and 30 seconds. (The stuff before it isn't exactly throat clearing, I just didn't find it very interesting.)


October 1, 2017

Happy first of Halloween!

I should update my Patreon. And I should do a proper pitch for why people should care about mkroot. It inherits most of Aboriginal Linux's goals (especially making Android self-hosting). But Aboriginal had its own web page. The new one has a mailing list and repo, but the only web page is the README in the repo shown by github on the main page. Hmmm...


September 30, 2017

More on battery technology.

I mentioned sodium's cheap and plentiful, and the next thing down in the periodic table from lithium, so lots of people are researching batteries made out of it. In this ted talk A woman presents her research on the "blue battery", but of course the resulting company is all white dudes because evil. Another group working on this calls its version the Seawater Battery. (There was a marvelous video on how lithium-ion batteries work, but the professor who gave it left to go work at Aquion and the university let the video go down, because Carnegie Mellon isn't in the business of preserving knowledge. I poked their website about it months ago and never got a response.) Other people are trying to make batteries from metallic sodium.

Another approach is to stop worrying about the energy density at all and go big, turning shipping containers into batteries, from ted talk to deployment. (That ted talk is old, the current status is standard startup hell because their molten magnesium/antimony chemistry corroded the shipping container it was in, they needed to develop new seals, and that ate through funding. Technology advances when patents _expire_.)


September 29, 2017

So, electric and self-driving cars. The existence of app-summonable self-driving car subscription services means that learning to drive a car today is like learning to ride a horse 100 years ago.

There's a timeline of horses going away: New York City newspaper columnists wrote about how the city would be buried in horse manure and have to be abandoned around 1890, which made cars a huge environmental upgrade over the next couple decades. Then cars scaled up transportation in general by 2 orders of magnitude, made suburbs possible, and once people started commuting to work ten miles each way by car, new problems emerged. The solution became the problem because "it's 1% as bad, there's now 100x as much of it, so we're back where we started".

Electric vehicles are going to replace fossil fuel vehicles completely because the gas station network is operating on razor thin margins. 25% of the USA's gas stations have already closed since 1994, and if 1/4 of the installed base of gas/diesel cars goes away the entire existing gas station supply chain becomes unprofitable.

A self-driving car fleet can charge while nobody's in it. Drive up a ramp, robot version of indy 500 pit crew swaps out batteries, drive off other side. Tesla demoed this back in 2013. Maybe a similar setup could clean (or at least vacuum) the inside of the vehicle. Battery prices have fallen 50% since 2014 and continue to decline (exponential growth), so spare batteries are just a question of time. (Demand is increasing faster than supply and there are raw material production limits, but there are a bunch of battery chemistries possible. Sodium and graphene are made from sea salt and carbon, it doesn't get cheaper or more plentiful than that. All a question of what you're optimizing for, and giving the patents time to expire.)

Volvo announced plans to phase out diesel vehicles, manufacturing only electric motors. (The emissions cheating scandal may have helped.) More than half of new car registrations in europe are for diesel vehicles, because they're more fuel efficient than petrol (gasoline), but in 2021 the EU emissions limit falls from 130 grams of CO2 per kilometer to 95 grams, and that means electric or hybrid cars.

Diesel trains are electric: the diesel drives a generator which powers the electric motor. This has been true since diesel replaced steam locomotives: diesel doesn't have any more motive power than steam, the greater hauling power is due to the greater torque of electric motors.

Caterpillar already sells self-driving trucks and mining equipment, meaning most mining jobs have been automated away. Currently automated sites tend to have skeleton crews to monitor status and respond to theft or vandalism. As time goes on that gets replaced with video feeds and drone patrols, and once you have enough data about what "normal" looks like you can have AI detect deviations from normal and either respond automatically to a known deviation or call a human to deal with it.


September 28, 2017

Neither millennials nor gen-x really buys into this uber-capitalism thing; that's all by and for boomers.

Capitalism's domination in our culture is fairly recent. The "postwar economic boom" was an attempt to keep World War II's demand levels going to continue powering the economy. The wartime economy rescued the nation (FDR tried but couldn't get congress to approve _enough_ spending to reach full employment), and then they invented consumerism to keep demand high. It's a recent phenomenon, the Greatest Generation fed it to the baby boomers, and it seems to be dying with the boomers.

A lot's going to die with the boomers. The boomers have the biggest numbers and the biggest megaphone, and they tell people that how the world was for them is how it's always been, but it's just not the case.

The 60's were the 60's because of teenage boomers. Not every boomer went to woodstock or was a hippie, but the boomer center of mass defined the decade. The 2010's are the 2010's because of retiree boomers turning into "racist grandpa", blaming "kids these days" for everything and screaming at long-haired foreigners to get off their lawn.

Bill Clinton was the first boomer president, and he was born the same year as Donald Trump. They're now both 71 years old. The baby boom is on the downswing, and the actuarial tables come into play. The real question is when do enough boomers die for "racist grandpa" to lose political control of the country?

The boomers' parents were the "greatest generation" that fought in World War II, and their actuarial table results are in. Of the 16 million WWII veterans, only 558k are still alive in 2017, which means just under 97% of them have died already. The average age of a soldier serving in World War II was 26, which as a first approximation implies that in 26 years we can expect only 3% of the boomers to be left.

Most of the 77 million boomers are still alive (they were still 24% of the population in the 2010 census), and they've concentrated wealth and influence into their own hands to punch well above their weight class even in retirement. But if they follow the demographic trends of the "Greatest Generation", 90% of them are going to die in the next 20 years. The last US veteran of the first World War died in 2011, but when do the scales tip and power transfer? The average lifespan in the united states is currently 79 years, loosely implying 50% mortality over the next decade. What will that do to the GOP? What does this look like in real time?

Only one way to find out. Buckle up, it's gonna get bumpy. Wounded animals are at their most dangerous. Dying business models explode into a cloud of IP litigation. Japan's Kamikaze attacks came at the end as they knew they couldn't win and just tried to take their enemies with them. The GOP's southern strategy has tied it to the mast of racist sexist white demographic hegemony propped up by senior citizens, fossil fuels, climate change denial, and capitalist billionaires cornering the market on money and power. The clock's running down on all that simultaneously, and there's no mechanism for a smooth transition of power out of the hands of rich old white men, it's always been from one to another for centuries...

You can't have rich people without poor people. Bill Gates does not hire Warren Buffet to wash his car. It's rich people propping up the system, leveraging their wealth to hire think tanks and marketing companies to pull as many strings as possible to keep the gravy train going.

But capitalism is hollow, a fact widely noticed in the younger generation. Star Trek was post-capitalism. It's not a new concept. If you can't imagine another way of living, you're not describing a technology, you're describing a religion. We're waiting for Margaret Luther to nail a list of theses to Wall Street's door, and stop them selling indulgences to the 1%.


September 27, 2017

Rsynced down a fresh copy of my website to the big machine. I have like three directories with more or less the website contents, but just enough version skew I'm reluctant to just rsync one over what's there. Now I can triage. (It needs a largeish shoveling out.)

Ok, THAT's an interesting bug. I had a vfat system in need of fsck ("Free cluster summary wrong, 472622 vs. really 470607" according to fsck, although for some reason "fsck /dev/sdb1" would refuse to actually fix anything and say "Leaving filesystem unchanged" after prompting me what to fix and then ignoring what I said. Why? Because Linux is just crammed full of steaming piles of usability. But "fsck.vfat -avw /dev/sdb1" worked without even prompting me about individual tweaks, so that's nice.)

That's not the interesting bug. The interesting bug is that when I tried to copy a file to this filesystem, it would HOTPLUG REMOVE THE BLOCK DEVICE. "cp: cannot create regular file ‘/mnt/vmlinux’: Input/output error" and then suddenly /dev/sdb is now /dev/sdc and I have to re-mount it. (Don't need to unmount it, the mount's gone...)

Ah-ha! In dmesg:

Call Trace:
 dump_stack+0x63/0x87
 warn_slowpath_common+0x86/0xc0
 warn_slowpath_fmt+0x4c/0x50
 ? locked_inode_to_wb_and_lock_list+0x53/0xf0
 __mark_inode_dirty+0x27c/0x370
 mark_fsinfo_dirty+0x2e/0x30
 fat_alloc_clusters+0x37f/0x4e0

Who do I email about that... nothing on the kernel mailing list index mentions "fat". The kernel's MAINTAINERS file has a "VFAT/FAT/MSDOS FILESYSTEM" entry but no mailing list there either, just a personal email for the maintainer, Ogawa Hirofumi. Oh well, send them a message and see if they reply.

(I googled how to submit a bug to ubuntu, but it says I have to create an account to do it, so that's not happening.)


September 26, 2017

Twitter's decided that the correct way to respond to an increasingly toxic userbase (gamergate, nazis, a septic political party trying to start a nuclear war through their service...) is to: remove the 140 character limit that defined the service!

That's literally been a running joke for years, every time twitter doesn't know what to do they threaten to remove the reason they exist. Only this time they've done it, and some users in my feed are already posting 280 char messages (almost entirely mocking the change, but still).

I'm out.

Way back when I created a tumblr, and I should probably revive that. I follow four or five accounts by hand already, might as well aggregate them and actually post back into that ecosystem. And while I'm at it, I need to catch up on this blog (which is currently something like 3 months behind on the editing and uploading part, and the entries I've written are spread across 2 machines).

Beyond that, editing things as text is convenient for me but I should really have something (dreamwidth maybe?) pull the content from the RSS feed somewhere people can reply if they feel like it. (Yeah I know, "don't read the comments", but commenting worked fine on livejournal before Russia bought and destroyed it, and I'm not _against_ feedback. People email me and tweet at me about stuff I post here whenever I upload a new batch.)

I've also pondered trying to more regularly post content on patreon, but as a content delivery mechanism I'm not really fond of its authoring mechanism, and sorting through old entries manages to make tumblr look good (which is an accomplishment). I'm very grateful to the 9 people currently supporting me there (and need to update my bank info so I can take advantage of that, it poked me about redoing that in February and it's on the todo list). There's probably enough money accumulated there to buy a gpd pocket which sounds like a lovely replacement for my current netbook. (I missed the ~kickstarter so I didn't get the cheapest price, and I haven't found a place to buy the preinstalled ubuntu version, and the amazon complaints are that they're great if they work but if you do get a defective one they're utterly incompetent at addressing it.)

But my sad little netbook is becoming a bottleneck in my development process. (Thunderbird having some sort of n^2 algorithm in its email processing with regards to mbox size may work in there somewhere, but it hits other stuff too.)

Oh, and I need to redo the top page of landley.net too, that's 10 years out of date. (Time flies when you don't have a good conceptual model of a decade and it winds up bleeding into the next. We never got a nickname for the 2000s the way we did for the 60's/70's/80's/90's, and we don't even really call this decade the "teens". I'm hoping 2020 resets that.)


September 25, 2017

Solar power link dump du jour.

The GOP's war on solar continues. The fact 80% of Russia's net export income comes from oil/gas and the CEO of Exxon is now secretary of state aren't coincidental: solar power threatens the dorito's power base _directly_. The GOP recruited the racists and other single-issue voters in service of their original constituency, the plutocrats. Four of the five largest companies (according to the annual S&P 500 index) are oil companies (the other one's wal-mart). Energy is 1/6 of the global economy, and until recently that meant exclusively fossil fuels.

That's why literally the first agenda item for the dorito was the Dakota Access Pipeline. The business case for building it erodes by the day, 4 years from now it'll be losing money. They hope that once the infrastructure exists, the fact it's there will provide incentives to keep using it at least long enough to pay off the construction costs. That lets the oil companies pump more oil out of the ground, and turn more of their balance sheet from theoretical money into real money.

Oil companies did lots of searching for oil and discovered oil underground (often in hard to reach places, or stuff like oil shale that's really expensive to extract), and then assigned that underground oil a dollar value on their books (as "inventory"). If it never gets pumped out of the ground, that dollar value is zero, which means they have to mark it down, which means recognizing losses, which means they've failed as capitalists and will get sued. Nobody wants to be left holding the bag, but nobody wants the cash trough the pigs are gorging at to end a second before it has to either... That's why everybody who used to work for the Tobacco Institute switched over to professional climate change denialism thinktanks. Squeezing $$$ from the last gasps of oil is where the blood money is today.

Meanwhile, Australia's power grid switched from coal to solar (and wind) so fast the old supply chain collapsed before the new one finished scaling up. Of course the fossil fuel companies are desperately trying to blame the new stuff (See! Unreliable!) but the reality is nobody wants to pay for the old crap anymore, including the people _maintaining_ the last gasps of the old stuff while it's still in use.

Watching another video about solar and storage in australia, it occurred to me that solar may make manufacturing and compute farms seasonal. "Energy's cheap right now, go nuts." Starting up the manufacturing plant when the energy's available, doing something else with the space when it's not? The question's the opportunity cost of automated manufacturing lines...

The 9th doctor narrated a BBC documentary called "The Last Miners" about the closing of the last coal mine in britain, but Youtube took it down due to enforcement of the holy secrets or some such.

Moore's Law has stopped applying to microchips and started applying to photovoltaic cells. Please adjust accordingly. This one is really boring and technical but I hadn't realized renewable energy storage was an emerging arbitrage market. Starting in 2020 EU requires all new buildings to generate all power onsite via solar/wind. California tried to copy that, but fossil fuel interests paid the GOP to sabotage it. British power generation achieved its first ever coal-free day. Here's an all-electric flying car, basically a rideable drone. India to eliminate gas stations by 2030, go 100% electric cars. For context, solar prices were $100/watt in 1975 and there were 2 megawatts of it installed worldwide. In 2015 that was 61 cents/watt and 65,000 megawatts worldwide.

Still trying to figure out how the GOP plans to sabotage/delay this...


September 24, 2017

I can't work at home because cats. Currently poison ivy covered cats, but even without that I'm gradually developing some sort of allergy or something where I get tired the moment I get home (and wake up more tired, usually breathing through whichever cat's curled up against my face this time), so even when I'm awake I can't focus. Perk right up when I leave the house though.

So I do a lot of work out at various places with tables, which usually means there's a food and beverage purchase involved. And I've noticed when I do this how _intensely_ most humans are herd creatures.

Take the HEB deli's dining area, which is less than half full when it's not rush hour so tying up a table for a couple hours isn't a big deal. It's a 4x2 grid of tables, and I try to pick a corner with nobody at the tables next to me. Then half an hour later, all the tables immediately adjacent to me are full, and the far corner is free and has nobody at the adjacent tables. So I move. And half an hour later they're clustered around me again. I mention it because I've moved SIX TIMES this session. (I don't move if there ISN'T a free corner table with nobody sitting at any of the tables next to it.)

I need to get a new pair of active noise cancelling headphones. Earbuds help, but not enough...

I'm starting to see why many self-employed people rent an office.


September 23, 2017

Spent Friday morning at the hospital emergency room, which couldn't figure out where the blood was coming from either, but it stopped and the cat scan didn't show anything unusual so they let me go home again. (Being very insistent I visit a specialist for follow-up, which I haven't.)

Personally, I suspect the poison ivy found a mucous membrane. (I'm very tired of going to doctors, having them be unable to find anything, and having to diagnose myself. I note I go to doctors _first_, and have them perform the expensive tests to rule out obvious immediate life-threatening whatsis. And then bill me a thousand dollars for not having found anything.)

I _was_ on a night schedule but didn't get out of the hospital until almost noon, and then slept the rest of the day and most of the following night because it was a stressful experience. I have no idea what my schedule is now. Took the big clippers and pruned all the poison ivy I could find, although it doesn't seem to be wilting despite having cut through the base of all those vines. (Wikipedia[citation needed] says urushiol is actually a moisture retaining chemical, and humans' reaction to it is more or less coincidence. It would appear to work.)

I should go collect all the vine bits I cut up, but I don't want to touch them even with gloves or an inside-out trash bag. I'm pondering a visit to home despot to get lethal chemicals in a squirt bottle, which might be a nasty thing to do to the beehive in the neighbor's big tree (the half that didn't fall down remains beeful) but this poison ivy business is nasty all around.


September 22, 2017

Alright, I oversimplified when talking about oil chemistry. The chemical names I gave (butane and octane and such) are for the "simple" versions, where all the carbons are in a single straight line, each bonded once, with nothing else but hydrogen. Reality's weirder, 4 or more carbons can branch, providing a molecule with almost the same atomic weight (hydrogen's light, more or fewer of them is a rounding error) but with a different chemical structure, and by the time you get to gasoline there's a lot of options.

There's an entire category of chemistry ("organic chemistry") about the things carbon does, because it's extra-complicated. Here's a really quick intro.

Most atoms want to fill up their outer orbitals (or empty them out to the previous full layer), but they don't naturally have the right number of electrons, so they share electrons with other atoms creating atomic bonds. (The ones that DO naturally have the right number of electrons are "noble gasses", they don't usually want to bond with anything. They're gasses because each atom is its own molecule, so the molecular weight is lower than if you had multiple atoms glued together.) An atom with unused connections will bond to anything it bumps into (these are called "ions" or "free radicals", electric discharges can make them), so they don't stay unconnected for long.

Hydrogen only connects once and is tiny and plentiful, so the easy way to get a stable atom is to stick a hydrogen on each otherwise unused connection point, and it's such a common default that chemists don't bother drawing the hydrogens when they describe molecules: if nothing's mentioned just assume it's a hydrogen.

Carbon is right in the middle of the periodic table so it needs 4 electrons to fill up its shell (or drop down to the previous one, it's equidistant either way), so it wants to bond 4 times. This is pretty much the most of any atom (only things in that column of the periodic table do that: carbon, silicon, germanium, etc. The smaller the atom is the more chemically active it is because the electrons are closer to the nucleus, so carbon is a special kind of weird. That's why we're carbon-based lifeforms, it's the atom with the most tricks up its sleeve.)

The other thing you need to know is tiny/light molecules tend to be gasses (unless they're polar and stick together sort of magnetically, but only stuff at the left or right edge of the table does that, carbon's right in the middle and thus nonpolar), then bigger/heavier molecules stick together to form liquids (because of london forces, where brownian motion jerks the nucleus to one side of the atom so it momentarily develops a positive end and a negative end, and you get transient weak polar bonds), then even heavier stuff forms solids. (This is why the atoms that naturally _do_ have full electron shells are called "noble gasses". They don't want to bond with anything else, so their molecular weight is their atomic weight, which means they're light molecules and thus gasses. Water _should_ be a gas by atomic weight, but it's so strongly polar the molecules stick together like magnets which makes it a very chemically active liquid. Water's about as chemically weird as carbon, which is why we're _water_ based carbon based lifeforms.)

So: back to the list of molecules from last time. One carbon with a hydrogen bonded to each of its four connection points forms methane, written as "CH4", one carbon four hydrogens. It's a gas at room temperature and so light it floats up to the ceiling in our atmosphere. (Really the heavier air's sinking below it and pushing it up out of the way.)

Ethane is two carbons connected once (with each carbon's other 3 connections going to a hydrogen, so C2H6), and is also a gas at room temperature. But those 2 carbons can also bond to each other twice (think two dungeons and dragons "D4" dice sitting on the same table with an edge touching, so two of the points touch), leaving 2 hydrogens on each carbon. That's ethylene, which makes fruit ripen and it's also what polyethylene is manufactured from. And yes those two carbons can triple bond (those same two D4 dice can have a flat side against each other with the triangles lined up so 3 corners touch, making C2H2), that should be called "ethyne" but is usually called acetylene for historical reasons. (The -ane ending means it's all single bonds, -ene means there's a double bond, -yne means there's a triple bond.)
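In the usual shorthand (each dash is a shared electron pair, hydrogens written out): ethane is H3C-CH3, ethylene is H2C=CH2, and acetylene is HC≡CH. Same two carbons each time, just more of their connection points pointed at each other and fewer left over for hydrogen.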

Similarly charged atoms sort of repel until they stick, like magnets covered in crazy glue, so it takes more energy to push them closer together to make double and triple bonds, and then they shoot apart again hard if the bonds break. That's why acetylene burns hotter than natural gas, you're breaking that triple bond and getting the energy back out. (Turning the bond energy into motion, which becomes brownian motion, which is heat. They're all microscopically vibrating and bouncing all over the place, that's why they're more reactive.)

Of course you don't HAVE to fill all the other binding slots with hydrogen. Reality is full of chlorine and sulfur and oxygen, and all of it binds to carbon. Oxygen wants to bind twice, and usually one side of it binds to hydrogen anyway when the other side binds to something else, and that's called a "hydroxide group". Hydrogen hydroxide is, of course, water. Carbon with a hydroxide bonded to it is called "alcohol" (from arabic "al" meaning "the" and C O H being the letters for Carbon, Oxygen and Hydrogen), and its ending is "-ol". Methane hydroxide is wood alcohol (methane-ol, I.E. methanol), ethane hydroxide is the grain alcohol in beer and wine and vodka (ethanol), and then propane can bind the hydroxide to the end or to the middle, producing two different molecules. The one where it's bonded in the middle is "isopropyl alcohol" (isopropanol). The iso- prefix means "bonded in the middle rather than at the end". There's a whole vocabulary chemists use, mostly inherited from the 1800's when they were still figuring out how this worked.
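Written out: 1-propanol is H3C-CH2-CH2-OH (hydroxide on the end) and isopropanol is H3C-CH(OH)-CH3 (hydroxide in the middle). Same atoms, different shape, measurably different chemistry: isopropanol is the one in rubbing alcohol, and it's more toxic to drink.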

And once you hit 6 carbons your chain can wrap around and bind its own ends together, producing a "benzene ring". Benzene itself is carcinogenic because it goes right through cell walls and dissolves DNA like soap (Judge Doom also uses it to kill 'toons), but if you bind a hydroxide to it you get "phenol" which can't get through cell walls (the OH makes it polar and cell walls are a nonpolar mix of fat and protein; smaller alcohols can sneak through anyway but phenol's too big), and that's used in mouthwash. (Most alcohols will kill bacteria, but phenol doesn't get into human cells so it isn't intoxicating, so people are less likely to try to drink it.)

One of the things oil refining does is wash the oil with chemicals to react with all this stuff and strip it off: rip off the sulfur and chlorine and hydroxides, break the double and triple bonds and replace them with hydrogen, cut the benzene rings, and so on. Oil is full of complexity and refining either filters it out or squashes it to the simple stuff, because the point is to get carbon molecules that burn cleanly without side effects, reacting with oxygen to produce carbon dioxide and water. Back before they removed the sulfur in refining, you'd get sulfuric acid as one of the outputs, and then acid rain kills fish and plants and damages buildings.
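To make the "burn cleanly" part concrete, methane combustion balances as CH4 + 2 O2 -> CO2 + 2 H2O: the carbon pairs up with one oxygen molecule to make carbon dioxide, and the four hydrogens grab the other to make two waters. Sulfur left in the fuel burns into sulfur oxides instead, which dissolve into rainwater as the acid rain mentioned above.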

This gets us back to yesterday, where one carbon with hydrogen plugging all the connections is methane, two carbons connected once to each other and everything else being hydrogen is ethane (well, if the carbons are connected once; if they're double-bonded together so there's only 4 hydrogens needed to plug the rest, it's ethylene). Then three carbons is propane, four in a row is butane etc. The short molecules are gasses at room temperature, the longer ones are liquid, and even bigger ones are solid (waxes) that melt like butter (or may stay liquid when mixed with the smaller ones acting as a solvent).

But you've got to simplify their structures before you can handwave away their differences like that.


September 21, 2017

Going through old tweets. Apparently I started on this whole solar kick back on April 13.

Back then I watched a lovely ted talk about how Guatemala has twice the solar power per square foot as london, and poor countries leapfrog to new technology more easily because they have no old stuff to replace. (Countries that never had wired phones now have cell phones everywhere.) And there were stories about how a small solar panel that plugs into a cheap battery pack, collectively designed to be ~$15 camping equipment in a tent, can power three LED lights and USB charge a phone battery in a rural villager's home. And these are just TAKING OVER some country (not guatemala?) because A) instead of going into town to charge the phone, you can charge it at home, B) your $10/month kerosene bill turns into a one time payment of $15 and then lighting's free, brighter, non-smelly, non-carcinogenic, doesn't cover your house in soot, and doesn't burn your house down.

Except... that was 3 ted talks, and I only saved a link to one. And I can't seem to find the others again. They were quite good. They weren't the only source of this information, but they were succinct and enlightening and how _I_ first learned this stuff, and I wanted to pass it on...

Amory Lovins covered some of the same material in one of his talks, which has really good information buried in the animal noises and sound effects that make him a really irritating speaker to watch. Maybe it's just me, but I find him amazingly smug and regularly picking turns of phrase designed to ensure he's only ever preaching to the choir. (Even if you already agree with what he's going to say, I find it hard to stomach him talking about "the rotted remains of dinosaur goo" or whatever he's sneering about this minute. "I invented the words negabarrel and feebate but I'm pretending I didn't because I'm trying to make them a thing, don't you feel stupid for never having heard them before? I made up PIGS and SEALS acronyms, let's club some seals. And that's not even the reason I make a pig noise in this talk!") I dunno, maybe I'm just tired of hearing from old white guys. It _is_ really good information, but I'm reluctant to forward it to anyone else if I _expect_ the presentation to turn them off before they're halfway through.

Anyway, the collective story I found so fascinating is that the kerosene industry remains a billion dollar/year niche today, but it's imploding as fast as the coal industry because solar panels and battery packs are eating the low end of the market. Kerosene lighting is one of those "payday loan" slumlord niches milking the poor so it flies under the radar, but it's a huge amount of money that was feeding into oil company profit margins. And now that profitable niche is drying up and blowing away.

We've actually seen this before: oil lamps used to burn whale oil, and the reason we didn't drive whales extinct in the 1800's is kerosene refining was discovered as a cheaper alternative. Whales produced a bunch of products: not just oil but meat and whalebone (a strong lightweight construction material, corsets were made out of it for example). But a cheaper replacement for whale oil tipped the scales and made whaling unprofitable, and then people learned to make plastics from oil to replace whalebone, and the US west opening up increased the beef supply (the cowboy era)... But whaling was attacked one product at a time, and that reduced the profitability to the point the whole thing stopped happening _before_ the world quite ran out of whales.

When a whaling ship killed a whale, it got blubber (rendered into whale oil), and also meat and whalebone. It couldn't _not_ produce all of them, in a fixed ratio. All they could do is waste the parts they didn't want. Crude oil works the same way: refining it produces butane and kerosene and diesel and gasoline and vaseline and so on, all in parallel. The components of crude oil are mixed together and refining just unmixes them, it doesn't convert them into each other. So if you can't sell some of the outputs, you have to pay to get rid of them. (That's why they keep figuring out how to make dyes and fertilizers and antibiotics out of it, the alternative is paying to neutralize and dispose of massive quantities of highly toxic industrial chemical waste.)

The actual distilling part of oil refining is mostly just separating things by boiling point. The component molecules are categorized by the number of carbons connected together: one carbon is methane, two is ethane, three is propane, four is butane, then pentane, hexane, heptane, octane, nonane, decane, and so on. The more carbons the molecule has stuck together the heavier it is, which gives it a higher boiling point. This means the 1-3 carbon molecules are a gas at room temperature, 4 is just barely a liquid (butane is the clear liquid in cigarette lighters that doesn't need to be pressurized to stay liquid, but evaporates at the drop of a hat). Then it's liquids with increasing boiling points up through about 20 carbons, where you start to get solids at room temperature. (Keep in mind if you mix them they act as solvents for each other, so you can have a "solid" soaking up components that would be liquid on their own, or solids dissolved in a liquid.)

The problem is crude oil has _all_ this stuff mixed together, and you can't say "I want to turn this batch of crude oil into pure octane", it doesn't work that way. The refining process primarily just separates the various chemicals by molecular weight. You can tweak the output percentages a bit using different chemical reactions, but that adds to the cost, and modern refineries don't necessarily know how their machinery even works anymore so tweaking the process is slow and expensive and occasionally explodey. Plus each oil well produces a slightly different kind of oil (categorized into things like "light sweet crude" and "heavy crude", but also contaminated with things like iron or sulphur); refineries have their hands full filtering out the weirdness in different batches. And then there's the fun of US refineries designed to deal with middle eastern oil for historical reasons, and sending the canadian shale oil to refineries overseas that specialize in _that_ kind of oil. And retooling any of these is a multi-billion dollar proposition that would have them offline for years.

The output of oil refining is various products that are generally mixtures, not pure chemicals. Vaseline tries to average about 25 carbons but what they mainly care about is how solid or squishy it is at room temperature. Paraffin wax is anywhere from 20 to 40 carbons, all mixed together. Kerosene is 10 to 16 carbons. Diesel is defined as boiling between 200 and 350 degrees celsius, which means 8 to 21 carbons. Gasoline's "octane rating" originally meant "this liquid acts like this percentage of the liquid is 8-carbon octane, and the rest is some neutral filler that doesn't burn", but as with diesel gasoline is primarily defined by boiling point (30-210 degrees celsius) which is anywhere from 4-12 carbons, and then the cheaper vs more expensive types of gasoline are different refining processes producing different mixtures.

(Yes, that last link says diesel is 170-360C but the earlier wikipedia[citation needed] link said 200-350C. What standards there are for this vary from country to country. And yes, having a little butane in gasoline and a lower end boiling point just above room temperature is why you can smell it so strongly at the gas pump, and why it evaporates so fast when spilled on warm ground.)

This is the context for the loss of the third world kerosene market: the diesel and petroleum markets can't _not_ produce kerosene and vaseline and natural gas as a side effect. They can only skew the ratios of their output just so far, and that raises the price and may affect quality of the product. Refineries used to burn off natural gas because they had it, couldn't profitably turn it into anything else, and couldn't find anybody to buy it. (Not necessarily because people didn't _want_ it, but because capitalism said selling some and destroying the rest was their most profitable move. Selling it all would drive the price down, so they'd make less money overall. Capitalism is a mechanism for regulating scarcity, and in the absence of sufficient scarcity capitalism will create it. The whole point of OPEC existing is to reduce the supply to keep prices up.)

Oil companies today are selling kerosene to the third world because they've got it and can't find anything more profitable to do with it. (They can turn most of the same components into jet fuel, but that's a market with more or less fixed demand, so selling more lowers the price.) So losing a billion dollars a year of kerosene lighting customers comes straight out of oil company profit margins. Those customers are now giving money to solar panel and battery producers. They're spending less money on solar than they were on kerosene, but the result is still a combination of "less money going to fossil fuels" and "more money going to solar/battery production and development". And that's a lovely thing.


September 20, 2017

Back on April 12 I looked at a bunch of Linux video editors, and tweeted about the results but didn't write up a blog entry. So for posterity:

The way to make screen capture work on Linux, to get a video of your desktop with synced audio, is to fire up vlc and under the "media" menu select "open capture device", then select capture mode "desktop", then set the frames per second (default is 1.0, you probably want more like 4), then click the "show more options" checkbox, click the "play another media synchronously" checkbox, and enter "alsa://" in the Extra media field. (This is the magic dance to make it listen to the microphone so it can hear you speak while you type stuff. You may need to plug a microphone into your input jack.)

Then click the pulldown arrow next to "save" and select "convert" and at the bottom enter the Destination file name (with an .mp4 extension). And THEN hit start.

If this seems like an elaborate dance to do every time you want to record something, that's because it's Linux on the desktop. I tried a bunch of other options and they don't WORK.
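(In theory the same capture can be scripted so you don't have to click through that dance every time. Untested sketch, same capture modules the GUI uses, adjust to taste:

cvlc screen:// --screen-fps=4 --input-slave=alsa:// --sout '#transcode{vcodec=h264,acodec=mp4a}:standard{access=file,mux=mp4,dst=capture.mp4}'

The screen:// input is the desktop capture, --input-slave mixes in the microphone, and the --sout chain transcodes into the destination file. No idea if it behaves any better than the GUI does.)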

I still need something to edit the video, so I googled a bunch of options. There's lots! None of which actually work yet. KDEnlive wants to install 124 packages (the whole of kde, including the web browser). Avidemux seems promising (except for lack of "undo") until I saved the video, which started out as a grey screen with changed rectangles filling in. (No initial keyframe! Kind of a bug. I watched a video tutorial on how to use avidemux from 2010, and 7 years later it can't replace keyframes it cuts out when editing an mp4.) Openshot wants to install qtchooser, libqt4-dbus, python-support, and docbook-xml (despite the documentation being a separate package). As far as I can tell if you want to edit video on Linux without bundling more bytes than Microsoft Word + Internet Explorer combined: you buy a mac.

I _can_ edit footage with vlc, or at least save sections as separate files and glue them back together again, using a similarly awkward mess of non-obvious mechanisms manually repeated. It's entirely possible this is what I'll wind up having to do if I want to edit anything.

Then again, when I give talks at conferences the camera starts, the camera stops at the end, and in between sounding coherent is a matter of preparation and practice. Doing my own videos on the netbook without being able to edit them isn't _worse_ than that, and I don't have the "will this video ever go online" problems like with Flourish or the years ELC took months to put stuff up.


September 19, 2017

I got pinged about lastgplv2.org, which I'd largely forgotten about, and replied explaining why.

As with many things, I should do a proper writeup. But I had an IRC chat today where a question about LP64 turned into the link to "and here's why microsoft didn't do that, and here's the entry from that same blog about why microsoft won the 32 bit transition", which led to "here's why Intel won the IBM PC contract in the first place", and "here's how Cortex-M defeated Itanium inside Intel", and I'm SURE I wrote all those up at one point but my old Motley Fool articles have fallen off the web and most of the old stuff gets weighted down to nothing by google for being too old.

I have "podcasts" as a patreon goal more or less as a placeholder for this. My netbook has a "podiatry" folder with lots of todo lists and partial outlines for such things, but... I have too much else to do.


September 18, 2017

Up way too late on IRC with Jeff again (turns out texting costs him money; I get it for free).

Poking at mkroot and there's one of those "the fighting's so vicious because the stakes are so small" design issues: should the script create the initramfs cpio.gz image from the root filesystem directory before or after running the kernel build?

The code's currently doing so after, which is slow if all you want to do is rebuild the root filesystem. If it's before, as soon as the kernel starts building you can ctrl-c out and grab the cpio.gz file which has already been updated.

But the REASON it's after is if your kernel build creates modules, it can install those modules into the initramfs and you won't get the updates unless you package up the cpio.gz after the kernel does this. None of MY kernel configs do this (not really a fan of modules), and putting them in initramfs is a bit silly (they're pinned in memory all the time, might as well make 'em static in the kernel), but there are reasons to do it (mostly having to do with deferred initialization and specifying module arguments you don't want to stick in the kernel command line).
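(The packaging step itself is trivial either way, something like this sketch, assuming the staged root filesystem directory is in $ROOT:

(cd "$ROOT" && find . | cpio -o -H newc | gzip) > initramfs.cpio.gz

The design question is just whether that line runs before or after the kernel make.)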

I've had a local diff forever that moves the cpio.gz creation up before the kernel build, and I'm tempted to check it in because persistent local diffs are a "code smell", but it would remove flexibility. (Flexibility I'm not personally taking advantage of, but still.)

In theory I could even make "cpio.gz" be a module, and make "squashfs" and "ext4" targets too. Except I don't want to have to specify them on the command line each time to get the "default" behavior. Similarly, I haven't moved the base root filesystem build out to a module, even though having mkroot -d to _not_ do it is kinda silly. I _could_ have a default set of modules that gets run through if you don't specify modules, but... that way lies aboriginal linux's complexity, and right now the build is still simple. How do I hold on to that simplicity but keep it flexible and capable of doing what I want it to do?

And once again it's one of those painful design decisions to make because either way's fine-ish, so neither is clearly _superior_, so there's no obvious right thing to do that stops the internal debate.

And when I'm developmentally swap-thrashing between far too many projects, just trying to keep the plates spinning so they don't come crashing down, something that requires a half-hour walk while I think about it is enough to dump it back on the todo list and cycle to something else, which means it becomes a blocking issue to further development. Not because it's hard, but because it's harder than I'm up for in the 15 minutes I have to poke at this when I'm too tired for "productive" work...


September 16, 2017

It's not fleas. It's poison ivy. The electric utility guys trimmed the trees and bushes in the back yard and the back fence that used to have lots of bamboo now has lots of (bruised) poison ivy, and the cats have been romping through it and then cuddling up against me, my pillow, the couch, my desk... What fun. Thanks cats. How about NOT being in my lap right now?

I got pinged about qcc a few days ago, and replied that I just haven't had time to work on it.

I've talked about how the simplest self-hosting development environment is four parts: a kernel, command line, compiler, and C library. All of it written in C (one simple low-level implementation language), and all of it under a public domain equivalent license (or at least something close, ala BSD/Apache).

The reason that's important is so you can understand and audit all of the base system, among other things avoiding Unix creator Ken Thompson's trusting trust problem. Tl;dr: Ken once pranked the BSD guys by adding code to his C compiler that recognized when it was compiling the login program and added a magic root password to the resulting binary, and he also made it recognize when it was compiling the C compiler so it would add _itself_ to the resulting compiler. Then he recompiled the original compiler with his hacked version so the change was there in the binary but not in the source code, so the BSD guys wouldn't see it but it would still be there in any new compilers they built with that compiler. Thus any login programs they built from then on would have his magic root password, even if they audited every line of source code and rebuilt the entire system from source. (He removed it before giving the lecture where he described the hack.)

People have been trying to counter this for a while, but it boils down to "if you're running on a system with a rootkit, nothing you do inside that system is guaranteed to find or remove the rootkit if it's designed to hide from that way of looking for it". You need to look at it from the outside, with a "clean" system. But how do you get a "clean" system?

The way to deal with this is by having multiple implementations of every tool (so they're at least not all hacked the same _way_), being able to inspect all the source _and_ analyze all the binaries, and being able to understand what everything _does_ at least at the base layers so you have at least a chance to spot strangeness when it's there. (You can look at a hex dump of 20k of data and read the disassembly, that sort of reverse engineering is what some security researchers do for a living. But nobody can look at a hex dump of megabytes, the bloat hides the exploit, security through obscurity pointed the other way.)

There are other advantages to having a simple minimal baseline environment: it's the smallest amount of code you need to cross-compile to new hardware before you can start native compiling. And when you're learning how the system works, it's possible to achieve a complete understanding of a real working system you can actually use for something.

My toybox project is working towards such a command line. (I already mostly achieved it under busybox, but that's doomed to a license that's unlikely to outlive the current generation of programmers using it.) Musl-libc is a saneish C library. (I said ish.) These days I might point at xv6 instead of Linux, both because it's simpler and because it's got a whole textbook explaining how it all works, but there's a lot of shoveling to do to make it hold weight. (The Google Fuchsia guys were working on that, but my interest in it never got traction, and Google has a history of abandoning projects so a wait-and-see attitude on anything new seems justified.)

Ideally each of these tools can be completely explained in about one semester of an undergraduate programming course, so four one-semester courses would get you a basic understanding of a complete usable operating system, under which you could natively compile Linux From Scratch (and Beyond Linux From Scratch), thus bootstrapping up to arbitrary complexity from that initial "clean" starting point.

The point of qcc was to provide a simple self-contained version of the C compiler, but leverage the work QEMU is doing for multiple architecture support. Fabrice Bellard's Tiny C Compiler provided a working proof of concept, 100k lines of source code that built a bootable Linux system. Yeah, he cheated a little bit but it _worked_.

I maintained a tinycc fork for 3 years, but stopped because reasons. I got permission from Fabrice to BSD license his code (0BSD didn't exist at the time, but might count, I'd have to ask), but there were other people's contributions to the repo (initially quite sporadic), and it fell down my todo list while still in the "triage" stage.

So still a work in progress. But this is another output of the "make android a self-hosting development environment" project, trying to repot it onto an understandable/reproducible/auditable base. And that's why "let's suck perl and zlib and curses and openssl into the circular dependencies at the root of the tree" is counterproductive. You wanna find a trusting trust exploit in perl? You think you're better at finding it than NSA/KGB/RNC/RIAA/Verizon/China is at hiding it? You think it not being there _today_ means it won't be there tomorrow?

In brief: Avoid.


September 15, 2017

Some insect bit my hand a couple times on the couch last night and it ITCHES, and then the same hand got bit more when I went to bed, and I'm having one of those allergic reactions that may require cortisone. I note the same cat was curled up in the same position relative to me both times. (No, it's not a cat bite, it's a bug bite.) Yes, the cats that follow me around the house incessantly and won't leave me alone. Last night all three of them were on the couch with me, not sure how many in bed.

Apparently they were supposed to be flea-dropped again at the start of the month, but Fade was doing that and moved her trip up to avoid Hurricane Harvey and didn't leave us a todo list, and it fell through the cracks. And now we spray and wash everything, and I maybe get allergy meds.

I'm tired of cats. I sort of viewed pets as training wheels for having children, but at this point it's pretty obvious that's not happening. I had 2 cats, Fade had 2 cats when we moved in together a dozen years ago, and 4 cats was Too Many Cats and I started to develop cat allergies, but over the years Dragon wandered off and Aubrey died, leaving two cats (George and Peejee) but then Fade wanted a dog, and that dog (Pixel) tried desperately to make friends with the cats who were virulently speciesist, so Fade got a kitten (Zabina) on the theory it wouldn't know better and would hang out with the dog, and then Pixel died suddenly (heart tumor) and then Fade got another dog (Adverb) before Zabina finished growing up because she ships cats and dogs together. And along the way my brother moved and sent me the cat I left with my mother back in 2001 (Foster) who was 17 years old, and we were once again up to 4 cats and my allergies started coming back...

Foster died of old age after about a year (he had a pretty good year though), and Fade took the dog with her up to Minnesota, but there are still 3 cats here. Peejee and George are 14 years old and Peejee is getting SUPER CLINGY in her old age. I'd be willing to wait Peejee and George out (the age record for cats in my family is 22 but among my siblings and me the record is 19), but Zabina's two and likely to last another fifteen years easy.

They're lovely cats. I am tired of having cats. Too much cat. I can't work at home because cats climb up on my keyboard and on my shoulder and block the laptop's heating vent and WON'T LEAVE ME ALONE.

And now I can't sleep because itchy, and there's so much laundry to do when I get back. Maybe I should get a hotel room.


September 14, 2017

Witness me, I did NOT reply to Christophe Leroy with:

That was my argument last time, and the answer was "Breaking userspace is bad, mmmkay." Even when userspace is doing something REALLY OBVIOUSLY STUPID and it is _clearly_ their fault, as long as they got there first they've established the status quo and moral arguments about right and wrong are a bit like the native americans asking for their land back.

And I did NOT reply to Greg KH with:

You're right, I forgot step five of twenty-six in the secondary procedure for submitting my form through the approval bureaucracy, which is also step five in the sixteen steps for the primary procedure.

(Do they have the equivalent of tax professionals to prepare and validate your forms for you yet, or is that still coming?)

The REAL danger of that last bit is SOMEBODY WILL DO IT. And make a lot of money doing it.

(The bureaucracy and bikeshedding in modern Linux Kernel development is the diametrical opposite of fun. I gave a talk about this at Flourish in Chicago but they never put the video up.)

I'm trying yet again to get my CONFIG_DEVTMPFS_MOUNT patch upstream. My third attempt hit a change in how printk() works (I left off the \n and the flushing got deferred and weird), but that was my bug. Just a strange new manifestation of it due to kernel version skew.

My new attempt broke booting for debian derivatives, which took a while to track down but it turns out debian's initramfs boot script is doing "if ! mount -t devtmpfs /dev /dev; then mount -t tmpfs /dev /dev; fi" which is STUPID. If we inherited a working /dev but the devtmpfs mount fails it INTENTIONALLY BREAKS the existing /dev directory. (The mount fails in this case because it's already there: debian kernels are built with CONFIG_DEVTMPFS_MOUNT but then booted into an initramfs, and previously that was a NOP until my patch fixed it. But if you booted a kernel without CONFIG_DEVTMPFS enabled and provided a static /dev in the initramfs, this would similarly break your boot for no reason. There's no circumstance under which it can work, the tmpfs mount on /dev gives you an empty directory the rest of the boot can't use.)

The underlying problem is Debian's bug, an untested error recovery path triggers and blows up the system. If they didn't do that, it would work fine. The proper fix is for them to STOP doing that.
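(One possible shape of a fix on their end, an untested sketch: only fall back to the empty tmpfs when there's no working /dev to lose, e.g. by checking for /dev/console:

mount -t devtmpfs dev /dev 2>/dev/null || [ -e /dev/console ] || mount -t tmpfs dev /dev

That keeps an inherited /dev, whether it's an already-mounted devtmpfs or a static /dev in the initramfs.)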

But until Debian not only fixes itself but the fix propagates through all the derivatives using the broken debian script (Ubuntu LTS has a 5 year support horizon, not likely to be fast), my patch needs to include an ugly workaround (don't return error when you try to mount devtmpfs on top of itself) if it has any chance of getting upstream, and then I need to go "yeah but [large lump of backstory, go talk to Debian]" to everybody who objects, and it's a political issue not a technical one, and see "linux kernel is no fun anymore" rant from earlier in the week.


September 13, 2017

Hurricane Harvey stripped the paint off of a corner of the house, so we're having it repainted. Technically I suppose that's another $2600 of flood damage, just to the top this time. Then again we haven't had it repainted since we bought it in 2012, so I suppose it's about time anyway.

Trying to get a Toybox release out. Jeff asked for "dd", which makes him like the 9th person to do so. (Awk gets more requests, but that's a LOT more work to implement.) In theory the 3 months since last release mark is the 15th, I.E. Friday. In practice, I'll probably slip it to the following monday.

I recently asked the kernel list for execve(NULL). Never have I seen a more pronounced manifestation of Dunning-Kruger syndrome from otherwise highly technical people. Nommu doesn't have fork()? That must be TRIVIAL to fix, the fact nobody's done it in the past 40 years means this idea off the top of my head must never have been tried before!

So, various people started bikeshedding and coming up with wild compiler redesigns that wouldn't work... Then Alan Cox chimed in with a hilariously misinformed take, advocating the minix approach of using what amounts to the old DOS "overlay" feature, copying all the memory segments out to backup storage every single task switch. On nommu this means that when you fork you need at least twice as much contiguous memory, possibly three times (backup storage for _each_ version plus active storage for this version, you can swap 'em in place if you're tricky about it but as the mappings diverge that gets complicated and potentially buggy). And it means that if you do fork() and exec, you copy all the data for no reason, just to discard it again. (Just try and do a shell script under those circumstances.)

The example case I first heard for this a decade ago is: suppose firefox wants to launch a flash plugin as a child process, which means fork() followed immediately by exec(). So it did a fork() off a 500 megabyte process then immediately discarded all 500 megabytes to launch a 10 megabyte process. Normally you leverage copy-on-write so the actual physical pages aren't copied, it's just reference counts twiddled. If you have to actually copy that you're at _least_ churning the heck out of the memory bus and CPU cache. If you only had 100 megs of memory left then actually copying all 500 megs in the fork() either fails or swap thrashes for quite a while, just to discard it immediately again.

But reality is worse, because on a nommu system all your pointers point to physical memory, meaning you can't MOVE the data to create a copy. If you copy memory to a different address range, all the pointers point back into the old address range. So you have to evict the old copy of the data (to swap or other memory) so your new copy can run, and then if the program runs long enough for the scheduler to kick in you have to evict the new copy and copy the old one back to the same address ranges so it can run. Meaning you take this fork overhead every time you schedule _either_ process.

The way vfork() gets around this is by giving the child shared mappings instead of copy on write mappings. Nommu can't implement copy on write (no mmu to intercept illegal access and generate a soft fault to a kernel route that fixes up the illegal access by adjusting the mappings), but it _can_ implement shared memory. Unfortunately if the child and parent both modify the memory they interfere with each other, so vfork() also freezes the parent process until the child calls exec() or exit(), either of which disposes of its copy of those mappings (unsharing them so the parent now has exclusive access to the memory again).
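The only safe pattern is a minimal sketch like this (standard vfork() usage, nothing toybox-specific):

#include <unistd.h>

pid_t pid = vfork();
if (!pid) {
  // Child: shares the parent's memory, parent is frozen until we exec or exit.
  // Touch as little as possible, then replace ourselves:
  execl("/bin/true", "true", (char *)0);
  _exit(127); // exec failed: _exit(), never exit() or return, in a vfork child
}
// Parent: only resumes here after the child has exec'd or exited.

Everything interesting has to happen after the exec in the new program, or before the vfork in the parent.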

If you exec() almost immediately after the fork there's almost no difference in behavior between fork() and vfork(). But if you don't, it gets dicey fast. If the child writes to the shared memory bad things can happen, which includes writing to the already-used parts of the shared stack: returning from the function that called vfork() can stomp the return address the parent uses when _it_ returns from that function.

Sometimes people have the bright idea of copying the stack to give the child its own copy, but A) pointers to stack variables can happen and they'd point to the old copy, B) the stack is potentially a large amount of memory being copied. (It usually isn't on nommu, but it _can_ be, especially since vfork can also get used on with-mmu systems to have a common code path.)

The other problem is if the child blocks, the parent stays blocked. This thread started because ptrace wants the child to SIGSTOP itself so the parent can ptrace_attach() to it before resuming it and performing the exec.

All the tricks to provide more conventional fork() semantics on nommu wind up either reproducing the problems or coming up with worse ones.

So Alan Cox said all we need to do is implement nommu fork() the way minix did. Yes, this is the same minix that had a filesystem maxing out at 64 megs and which couldn't overlap CPU and I/O. Its deficiencies drove Linus to create Linux in the first place, because it already didn't scale to a 40 mhz 386 with a 2400 bps modem circa 1991. Clearly, we should do what they did.

And I just don't have the energy to give him a large enough backstory dump and try to argue him around. Specifically, I cannot do it POLITELY. (Plus what the THREAD is about is execve(NULL), and vfork is a tangent there: it's useful for something like busybox or toybox to be able to re-exec itself in _general_, without relying on external dependencies like /proc being mounted. vfork() is a _distraction_ from that, it's one example use case he's trying to shoot down because it's there and he thinks the problem he hasn't personally dealt with must have a simple fix a generation of programmers hasn't seen, because he personally can't find anything wrong with the first thing he thought of.)
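(For context, the existing way a multiplexer binary re-execs itself is essentially execv("/proc/self/exe", argv); which is exactly the /proc dependency execve(NULL) would remove: NULL meaning "the binary I'm already running", no filesystem lookup required.)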

I tried to get Jeff Dionne (my boss at $DAYJOB and creator of uclinux) to reply to him, but Jeff doesn't want to get linux-kernel on him. So once again, it's me arguing with everybody else on linux-kernel, the lurkers privately supporting me in email but nobody actually chiming up in PUBLIC to say anything... and I'm tired.

I HATE winding up with "the lurkers support me in email". It means there's a breakdown in the social fabric of the community, and people see me as willing to take on a political role to advocate for their interests while they stay anonymous and avoid participating in said social aspects. This puts ME in an uncomfortable position, doing unpaid basically emotional labor.

The alternative is code staying out of tree forever, and having to maintain patches that upstream intentionally tries to break each upgrade to make the out-of-tree approach expensive, hoping to _force_ people to either confront them or bow to their way of doing it.

Oh well, what I usually do is bring it up again next merge window and hope that between now and then some of the people bikeshedding against it have died of old age. Meanwhile the out-of-tree version works for me.


September 12, 2017

Walked to ACC instead of UT. (I still have a valid ACC student ID from the japanese class. Which is only fair, since they kept the money.)

It's not quite the same 24 hour availability of the outdoor tables protected from rain with a working electrical outlet at the little niche off of speedway (which has a "quiet undisturbed working environment" by virtue of being there at 3am, especially over the summer). But the highland campus is open until 10pm, and it's got all the space in the world (converted shopping mall) and it's all tables and outlets and air conditioned study space, and if I head out around sunset I get a couple hours there before closing time, and then I can head to the McDonalds on I-35 that's open until 1:30 am and get another couple hours there before heading home.

Given that I still can't work at home because cats (so many cats, so clingy), that's not a bad variation. (The Wendy's in hancock center also closes at 10, but I'm usually there earlier in the day for a couple hours of work, and don't particularly want to eat two meals there every day. HEB's deli area used to be open 24/7 so I could go there for a couple hours at 3am, but homeless people started camping out there and now they put away the chairs every day at 7 pm.)

What I really want is a 24 hour Starbucks. I've tried Epoch a few times but it tends to be full, and that's a long walk with no guarantee of a place to sit at the end. Plus the drinks there are about as expensive and as many calories as 4-for-$4 at Wendy's.

I miss Metro. 24 hour coffee shop with a beverage I really liked (Big Train spiced chai, made with steamed milk), next door to a 24 hour video arcade. Pity the Cult of Scientology took over the building and raised the rent to drive them both out.


September 10, 2017

And GPS is back to being endless feature creep with moving goalposts. Customer doesn't give us money until it demonstrates X. Then Y. Then Z. Still no money. Friday will be the 4th missed paycheck. (The one year anniversary of going down to half pay came and went, that was back in like June.)

We are SO CLOSE to making this work. And I am out of gas.

I tried to switch back to a day schedule after the "Yay GPS working!" part last week, but Jeff has still needed me to help debug GPS stuff several nights since and I've been in that fuzzy "haven't GOT a sleep schedule, just random naps and I'm always tired" state for a week now. It's still really easy for me to get stuck on a night schedule because sunrise makes me really tired, but sleeping until 3pm (with the daily call at 5) means that by the time I'm free to run errands everywhere's closed. (I've been meaning to run an errand at a non-local bank for weeks.)

And I have 4 different recruiters periodically calling me and wondering if I'm interested in taking jobs, and I keep telling them to call back in a couple weeks when I HOPE this job will have resolved and maybe even put me back on the full-time paycheck I was making for the first couple years instead of this endless half-pay...

But closure ain't happening.


September 6, 2017

Yay! Jeff finally got GPS to do a thing! With live antenna data! There's still several things wrong with it, but there are circumstances under which it does at least some of what it says on the tin.

I'm kind of glad that car dealerships are one of the things self-driving app summonable cars will put out of business in a single digit number of years.

Alas, ever since SCO I've said "dying business models explode into a cloud of IP litigation", which is shorthand for the way drowning swimmers climb on top of would-be rescuers and drag them under. Other people around them aren't struggling, so when they give up trying to keep their head above water under their own power, they grab on to nearby survivors and attempt to suck the success out of them. It's a failure mode you see often, wounded bear syndrome. Nothing to lose.

The current political situation seems to be a combination of the Southern Strategy making the GOP go septic (as was foretold at the 1963 rockefeller vs goldwater convention), the entire fossil fuel industry reacting to solar and wind's exponential growth the way the RIAA and MPAA dealt with streaming media replacing physical recordings, and the baby boomers hitting retirement like a brick wall and going "but we've ALWAYS run the world, the universe can't possibly continue to exist without us".

The feedback between them is uncomfortable. With all three burning down to the ground they'll have nothing left soon, but they're determined to go out in a blaze of embezzlement. The question is how much damage they'll do on the way out, and how much rebuilding work they leave for everyone else when they're gone.


September 5, 2017

I've been looking at strace which means looking at ptrace, and I'm trying to figure out how it's supposed to work on nommu? The big version at least builds and runs on nommu, but I don't understand how it launches a stopped child with vfork.

Ordinarily I don't look at other packages' source code (dowanna get gpl on me when writing a public domain implementation), but I couldn't figure this bit out for long enough that I eventually took a peek (github is a publicly viewable webpage, the answer's theoretically _right_there_), and the reason I couldn't figure out the synchronization mechanism is it doesn't synchronize on nommu: instead it races and probably misses the first few system calls.

The problem is you need to fork(), have the new child kill(SIGSTOP) itself, and then exec(). That way the parent can trace the exec itself, including the syscalls the dynamic loader does and so on. But with vfork() you can't do that because until the exec() happens the parent can't resume, so if the child stops itself first the parent stays stopped.
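
For reference, here's a minimal sketch of the mmu version (the classic textbook sequence, not strace's actual code):

#include <signal.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
  int status;
  pid_t pid = fork();

  if (!pid) {
    ptrace(PTRACE_TRACEME, 0, 0, 0); // signals now stop us for the parent
    raise(SIGSTOP);                  // park until the tracer's ready
    execvp(argv[1], argv+1);         // parent sees this exec and onward
    _exit(127);
  }
  waitpid(pid, &status, 0);          // collect the SIGSTOP stop
  ptrace(PTRACE_SETOPTIONS, pid, 0, PTRACE_O_TRACESYSGOOD);
  ptrace(PTRACE_SYSCALL, pid, 0, 0); // resume; next stop is syscall entry
  // ...normal trace event loop from here...
  return 0;
}

Replace that fork() with vfork() and the raise(SIGSTOP) deadlocks: the parent can't run to waitpid() until the child execs, and the child is stopped waiting for the parent.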

The solution is to do the "re-exec yourself" trick where you exec /proc/self/exe to restart yourself (thus freeing the parent), and signal to the new exec that it's a re-run either with environment variable data or by changing the argv[] data. But re-execing yourself is brittle because /proc/self/exe isn't guaranteed to be there (in a chroot for example), and argv[0] could be a path relative to a cwd we already did a chdir() away from. (Assuming argv[0] was set accurately in the first place, execve(filename, argv[], envp[]) supplies filename and argv[0] separately, they don't _have_ to have anything to do with each other.)
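
Something like this, assuming the /proc route (a sketch; TOY_REEXEC is a made-up marker name):

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
  if (getenv("TOY_REEXEC")) {
    // Second incarnation: the vfork() parent already resumed at our
    // exec, so stopping ourselves here no longer deadlocks anybody.
    raise(SIGSTOP);
    execvp(argv[1], argv+1);
    _exit(127);
  }

  pid_t pid = vfork();
  if (!pid) {
    static char *env[] = {"TOY_REEXEC=1", 0};

    execve("/proc/self/exe", argv, env); // unblocks the parent...
    _exit(127);                          // ...and isn't there in a chroot
  }
  // parent: waitpid() for the SIGSTOP and attach, as above
  return 0;
}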

Sigh. I wanna execve(0, argv, envp). I should poke people in email and make puppy eyes.

The lack of execve(NULL, argv, argp) to restart the current program is one of my longstanding "why hasn't somebody done this already" things. A process should be able to re-exec itself even if it's in a chroot where the original binary isn't available; on a system with an mmu you can fork() and get a second copy, but on nommu you can only vfork() which blocks the parent process until the child calls exec() or exit().


September 4, 2017

GPS continues. So much GPS. It's another one of those "three pay periods missed and we need to hit customer milestones to get paid" things.

Very tired.

Also, if you cursor down in less and there's no new input yet, it hangs waiting for more input. The toybox version shouldn't do that, it should scroll if there is input but it should also let you cursor back _up_ before then. How would I write a test suite entry for that instead of manual testing? There's like a whole pty framework needed for that...


September 3, 2017

The 4.13 kernel adds ktls, I.E. basic https plumbing in kernel space. (The w3c standards loonies renamed https to "Transport Layer Security" because too many people knew what the old name meant, since every URL still exposes it to this day. They also tried to rename URL to URI but the only people who noticed were the same ones who say "kibibyte", and they're both crazy.)

Unfortunately, ktls is only about half the plumbing: it handles an ongoing connection but doesn't do the initial handshake, nor does it handle renegotiation if you switch keys after a while (standard practice for crypto stuff, and I think either end of the connection can initiate it). Instead there's an example userspace package that hands that part off to openssl. (So what's the point of having any of it in the kernel? To make the kernel bigger?)
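
What the kernel half looks like, roughly (a sketch from reading the 4.13 interface, not code I've run; enable_ktls_tx() is a made-up helper name and the key material has to come from the userspace handshake):

#include <linux/tls.h>
#include <netinet/tcp.h>
#include <string.h>
#include <sys/socket.h>

#ifndef SOL_TLS
#define SOL_TLS 282
#endif
#ifndef TCP_ULP
#define TCP_ULP 31
#endif

int enable_ktls_tx(int sock)
{
  struct tls12_crypto_info_aes_gcm_128 ci;

  memset(&ci, 0, sizeof(ci));
  ci.info.version = TLS_1_2_VERSION;
  ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
  // fill ci.key, ci.iv, ci.salt, ci.rec_seq from openssl's handshake

  if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls"))) return -1;
  return setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci));
}
// After this, plain write() on the socket emits encrypted TLS records.
// But the handshake (and any rekeying) stayed in userspace.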

I didn't tackle my own https support in toybox, in part because the Android guys don't wanna audit another https package (don't blame them). Instead I was looking at an stunnel-style approach of piping data through an external program to encrypt it, but that's still on my todo list because stunnel, openssl, and bearssl's command line utilities to do this all have wildly different command lines. There isn't even a de-facto standard here.

If this really did reduce the burden of implementing my own https plumbing, that would solve a real problem for me. But it looks too half-assed, and I am sad.


September 2, 2017

Last time I attended Texas LinuxFest I visited the ChickTech booth, and got on their announce list. They just organized their own conference, ACT-W or Advancing The Careers of Technical Women.

There are videos online in one of those random fly-by-night services that will go away again in a year, so I should probably watch them and download the ones I like before they're lost to history (like the Flourish videos).


September 1, 2017

I wrote a utf8 test program to compare my utf8towc() function to libc's mbrtowc(), and found that musl, glibc, and bionic were _all_ doing the conversion A) wrong, B) differently. (I keep thinking "I can't possibly be the first person to test this", and yet...)

Rich took a bug report about musl (reproduced and agreed it was wrong) and I poked Elliott about Bionic in email (dunno how much they care; the utf8 parsing is right, it's the stupid unicode range restrictions that are funky...) And glibc can go hang, I don't care.

Although utf8 is quite elegant, unicode is INSANE. And the main source of insanity is the legacy utf-16 encoding windows does, which started with the Rich White Guy Assumption that 65535 characters oughta be enough for anybody. (There can't possibly be more letters than that in the world, can there? Answer: currently about 120,000 and still going.)

So utf-16 did this funky straddling thing where values in the 0xd800-0xdfff range mean they come in pairs, which, combined with the base set of values it can encode, gives a grand total of 1,114,111 values, which is where Wikipedia[citation needed] gets 0x10ffff from.
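
The arithmetic, as I understand it:

// Decoding a utf-16 surrogate pair: hi in 0xd800-0xdbff, lo in
// 0xdc00-0xdfff, each contributing 10 bits above a 0x10000 offset.
unsigned pair_to_codepoint(unsigned hi, unsigned lo)
{
  return 0x10000 + ((hi - 0xd800) << 10) + (lo - 0xdc00);
}
// Maximum: 0x10000 + (0x3ff<<10) + 0x3ff = 0x10ffff. Hence the cap.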

Oh, and you also exclude the 0xd800 through 0xdfff range as -1 (invalid code point). But NOT the 0xfffe and 0xffff that bionic's also calling invalid code points for some reason. Yes 0x10ffff is a strange number, musl was capping at 0x11ffff instead, and bionic at 0x1fffff. (Meanwhile glibc is not just capping at 0x1fffff but also returning -2 "need more data" for some 4 byte sequences because it's trying to parse to the end of the original utf8 range (up to 7 bytes), and THEN say it's an invalid code point. Jazzhands!)

Sigh. Testing and documentation. Never enough of either. Except both seem to require a certain amount of skill to do right. When I asked for toybox tests as a way to let others contribute usefully, I got lots of tests that weren't really testing anything in toybox. (Looping through mknod 000 through 777, all 512 values, is maybe testing the kernel? But it's not checking that the kernel _does_ anything with those values, just that you can write it into the dentry then read it back...) Similarly, this utf8 stuff isn't testing toybox, it's testing libc to see what toybox should DO when reimplementing part of libc because libc's API maintains unwanted state.

You'd think the simple "do a for loop in an unsigned int trying all 4 billion values, and see if you get the right results" would have been done before, but I guess you do have to know what to compare the results against. I wrote another utf8 parser and was trying to see if it produced the right output. Mine could still be wrong if all four implementations are making the same mistake, but they're making DIFFERENT mistakes...


August 30, 2017

For some reason GPS coding and toybox fight in my head, or at least require sufficiently different mindsets that I have to _not_ do GPS for a full day before I become unblocked on toybox. (I've carved out an hour and sat down and stared at the toybox code and felt utterly uninspired to do anything with it on multiple occasions. Frustrating. Something about GPS puts me in uber-nitpicky mode where I can't write a simple function because it's not PERFECT and I'm sure I could shave a cycle or two off it with a tighter implementation if I just thought through EXACTLY how to code this, and then the result is so brittle you can't change anything... I'm aware this is a failure mode.)

I keep being awake late enough that Jeff's up and active, and we text, and I speculate about some GPS design issue, and get drawn back into it and then suddenly the sun's coming up and my schedule is trashed again and I won't have any luck doing toybox the following day because wrong headspace...

On the bright side, we're making decent progress on the GPS stuff. Which is the main blocker for actually being able to sell products and get proper funding for the company. (Our synchrophasors measure electrical signals in wires at given points. To correlate the results they need to know not just exactly where they are but exactly _when_ each sensor pack saw the signal. So they need a nanosecond accurate onboard clock, which means doing fancy GPS stuff. Right now we're limping along with third party GPS dongles but that doesn't give us the accuracy we want (for a half-dozen reasons, reporting interface latency is the most obvious but by no means the only one) nor is it cost effective for deployment at high volume.)

GPS processing is a realtime problem: the correlators output one reading per millisecond, and if you're tracking 4 satellites you get 4 of those skewed around each millisecond, and you can't drop any (must process this one before the next millisecond or it'll get overwritten). And we have to not just record them but process them in realtime so the delay locked loop and phase locked loop can track the satellite's doppler and code phase, I.E. update the correlator registers before the next reading. (The DLL and PLL are mathematician magic I've spent months trying to understand and just _don't_. I can see _what_ they do, I do not understand _why_, and the explanation always boils down to "and these constants were found experimentally", so I cannot GET an intuitive plumbing-level grasp of what's going on. Jeff understands it, but he's a math guy. Rich is at least comfortable with it, his degree's in math. Jen and Ken understand it too. Oddly enough I was reasonably comfortable with the hardware version where a capacitor is damping out a waveform, but the software version where you add some constant times the first derivative and another constant times the second derivative... Um... does not compute.)
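
For what it's worth, the part that defeats me is only a couple lines (a generic second-order loop filter sketch, not our tracking code; K1 and K2 stand in for those "found experimentally" constants):

#define K1 0.01f  // made-up values; the real ones are the magic part
#define K2 0.001f

static float freq, phase;

void loop_update(float error) // error from the phase discriminator
{
  freq += K2 * error;         // integrator: tracks the doppler drift
  phase += K1 * error + freq; // proportional step plus accumulated rate
  // ...then write phase/freq back to the correlator registers before
  // the next millisecond's reading lands...
}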

I punted on why code phase is cool last time: the signal is an order of magnitude WEAKER than the ambient thermal noise. (Think analog television static or tape hiss, noise that's just "around" all the time because we're 300 kelvin above absolute zero and everything glows in various radio frequencies even if it's not hot enough to do it in visible light.) How do you hear a signal that's massively out-shouted by ambient static? By inverting half the bits (xor it with a pattern) so the noise statistically cancels itself out. Since the signal you're looking for is _also_ inverted the same way, when you line the codephase up exactly the signal adds but the noise cancels.

Problems this solves:

1) Finding really weak signal in the noise, so a solar-powered satellite can broadcast a signal you can hear from the earth's surface.

2) Precise timing. The correlator resets (the 1023 repeating bits start at a known location) on each bit edge, so matching up the code phase tells you exactly when that happens, which lets your timing data be more precise than the signal you're actually seeing. (The bit edge itself being blurry due to noise doesn't affect your timing, you know when it had to be even if you can't see it. Each time you just need a majority vote of "is this high or is it low" to tell what the bit was, and the codephase tells you where the edge was.)

3) Distinguishing multiple satellites broadcasting on the same frequency (if each uses a different xor pattern, they're just noise to each other, and it's noise a tiny fraction as strong as existing background noise anyway so trivially ignorable). So a constellation of ~2 dozen satellites can all use the same frequency, instead of each needing a unique one. And ground signals using the same frequency are just more noise easily discarded. (In theory, that last bit has some practical hiccups I should explain later...)

(Again, it's 1023 not 1024 because the code is a maximal-length LFSR sequence, 2^10-1 chips long, which makes a nicely repeating pattern. That's also why the frequencies this stuff broadcasts on are multiples of 1.023 mhz.)
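
The correlation itself is almost embarrassingly simple (a sketch; prn[] and samples[] are made-up names, the satellite's chips stored as +1/-1 and one millisecond of input respectively):

int correlate(signed char *samples, signed char *prn, int phase)
{
  int i, sum = 0;

  // Aligned signal adds coherently; noise (and every other satellite's
  // code) averages toward zero. Try all 1023 phases, the spike wins.
  for (i = 0; i < 1023; i++) sum += samples[i]*prn[(i+phase)%1023];

  return sum;
}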

In theory 1023 codephases/millisecond means your timing accuracy is about 1 microsecond (I.E. 1023*1000 samples/sec, which is still three zeroes short of the 1 billion/second of nanoseconds. The speed of light is _really_fast_.) BUT the timing is really accurate (the bit edges are emitted with nanosecond accuracy), so we oversample the signal, listening at 16 times the speed we expect the codephase to change at, and then we try applying the codephase XOR at all those offsets and see which gives us the strongest signal. (If it's switching early or late relative to the edges we're listening to, some of our signal becomes noise and cancels out, but losing 3/16ths of the signal just makes it weaker, you can still hear it. A properly correlated signal for a satellite directly overhead (least amount of atmosphere it's going through) can be up to 5x the noise floor.)

Another corollary of #3 is this is a really effective way to HIDE transmissions, because unless you match the correlator frequency exactly you can't see the signal at all, not even to tell it's _there_.

So anyway, we're measuring our GPS processing time in microseconds: our SOC design has a nanoseconds register you can just read from a memory location, it's lovely. The code is running on a processor that runs a little under 100 mhz, with 8k of L1 cache, which doesn't do dual issue so at most 1 instruction per clock cycle. Luckily it's SMP, and in THEORY we're only taking up about 60% of one processor for the realtime bits. (We should be able to optimize that down, but that's fast enough it should work and we can debug it.) In practice, the per-cpu clock tick takes 500 microseconds (half a millisecond) so when that happens we get a latency spike that sometimes drops a sample, and if you disable the timer interrupt on any processor it's RCU stall city and the system goes bye-bye. (Rich thinks he might be able to get RCU grace period processing running on a processor OTHER than the one it's processing for. Sadly a tickless Linux system doesn't mean there aren't ticks, they just aren't regularly scheduled...)

Meanwhile, both Rich and the prototypes customers are testing have some sort of interference that's causing the analog antenna filter stuff to squelch the volume of the signal we're interested in. (It scales the input amplitude so the strongest signal we hear is 100%, and if that strongest signal is very loud interference...) Neither Jeff nor I are seeing this interference? That's another thing we need to track down.

Oh look, sunrise again.


August 29, 2017

So on the utf8 thing, I need to test my new contextless parser function to make sure it's doing all the corner cases right. Since some random standards body broke UTF8 to max out at 4 bytes of input (even though the original proposal could do 7 and thus encode a much wider range), this means all the valid inputs fit in an unsigned int. So I should be able to loop through them and compare them to the output of mbrtowc(). (Remembering to memset the old one's state structure back to zeroes after any <1 return result, of course.)

First thing I forgot is that mbrtowc() constantly fails unless you call setlocale() first. Because of course it does.

Next thing is this is a little endian machine so the order it's going through the values is nonobvious. But if it was the other way around it'd have leading null bytes for the first several million values, so the mapping's weird either way. Important thing is it tests all of 'em.

I think this even tests all the _invalid_ values. (A first byte with more than 4 leading zeroes counts as a -1 error not -2 insufficient data.)

Oh right, redundant encodings! 0x7f can also be encoded as 0xc1bf, but the second is an illegal (overlong) sequence. (To simplify processing, utf8 didn't add the "last code point" of the previous sequence to the start of the current encoding, so the ranges overlap.) Which means I need to return -1 for values too _low_ for the number of bytes consumed, but they didn't make the last one a power of 2 and instead it's 0x10ffff, which is just sad and needs to be special cased.
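
Putting the whole entry together, the test harness is basically (a sketch, assuming a utf8towc() with mbrtowc()-style return values):

#include <locale.h>
#include <stdio.h>
#include <string.h>
#include <wchar.h>

int utf8towc(wchar_t *wc, char *str, unsigned len); // function under test

int main(void)
{
  unsigned u = 0;
  mbstate_t mb;
  wchar_t w1, w2;

  setlocale(LC_ALL, ""); // mbrtowc() fails without this
  memset(&mb, 0, sizeof(mb));
  do {
    int l1, l2;

    w1 = w2 = 0;
    l1 = utf8towc(&w1, (char *)&u, 4);
    l2 = (int)mbrtowc(&w2, (char *)&u, 4, &mb);
    if (l1 != l2 || (l1 > 0 && w1 != w2))
      printf("%08x: %d/%d %x/%x\n", u, l1, l2, (unsigned)w1, (unsigned)w2);
    if (l2 < 1) memset(&mb, 0, sizeof(mb)); // reset libc's state
  } while (++u); // little endian byte order, but it hits all of them

  return 0;
}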


August 28, 2017

The hurricane seems to have wandered off. It's lovely outside. I'm not sure whether to trust it yet. My headache doesn't.

I'm exhausted. Niishi-san is on another vacation. I can't remember the last time I had one. (I've switched to different _types_ of work, but I always had something I was doing or avoiding doing, for years now.)

Sat down to try to do toybox work and just... blah. Burned out. (Then wound up sucked into GPS stuff _again_, and awake all night texting Jeff in Japan. Again. This time I got him to log onto IRC, which at least gives me a full keyboard instead of tiny phone keyboard. So that's something. Lots of VOIP calls too. It's a process...)

I just saw a commercial for some random movie (in which Tom Cruise jumps on a couch ranting about scientology with a female mummy), saying "own it on 4k high def with Google Play". Um, that's a streaming service. That's not "own", that's "lease from a service which is going to cloud rot in our lifetimes". There's more than one graveyard of google services, and other cloud services going away and leaving stranded assets is such a regular occurrence it doesn't even make the news anymore. "You can't play Diablo 3 without connecting to the server? That game will go bye-bye someday" should be part of your purchasing decisions.

Paying a streaming service for access isn't "owning" anything. The main advantage open source ever had over proprietary software has always been that the commercial interests behind the proprietary software will stop investing in it someday, and it will become abandonware. The open source thing isn't going to stop being available. The context in which it runs might be; it might evolve, get forked, even wind up restarted from scratch with a different development team responding to it... But it won't leave users still wanting to use it and having no way to do so.

(There have been some headwinds pushing against this, crap like devfsd and x11 needing hald and Red Hat's current "systemd all the things!" push, but those of us who dowanna can generally wait them out. There's other problems of unwanted sequels (GPLv3 driving everybody away from GPLv2, Python 3 driving everybody away from Python 2...) Reality is complicated. Reiserfs didn't survive Hans Reiser's murder conviction, and node.js is apparently having a bad week. But Erik Andersen took over BusyBox after Debian abandoned it, then I took it over from him when he didn't have time anymore, and I handed it off to Denys Vlasenko when I got too disgusted at the troll who named it to keep working on it. There was no "business case" for any of that (it was independent of capitalism), and I've still never met Erik, and only met Denys at one convention. Similarly I maintained my own tinycc fork for 3 years (before running out of time, but if I really _needed_ it I could dig it up and resume work on it, or somebody else could). It's a "can" vs "will" thing...)


August 27, 2017

The musl maintainer broke toybox chrt.c, and did so intentionally. He doesn't like what one of the Linux syscalls does so he made the syscall wrapper (and like five others) return -ENOSYS. Here's the commit that did it.

Trying to figure out if I should switch everything to syscall(BLAH) to work around the musl bug, or just say "chrt doesn't work on musl". Leaning towards the second. This is 100% a musl bug.
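
If I do work around it, the workaround is a one-liner per call site (sketch; set_policy() is a made-up wrapper):

#define _GNU_SOURCE
#include <sched.h>
#include <sys/syscall.h>
#include <unistd.h>

int set_policy(pid_t pid, int policy, int prio)
{
  struct sched_param p = { .sched_priority = prio };

  // Bypass the libc wrapper and ask the kernel directly.
  return syscall(SYS_sched_setscheduler, pid, policy, &p);
}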


August 26, 2017

Hurricane Harvey is driving my sinuses nuts. We're right at the edge of the thing, so the spiral arms of thunderstorms keep going over us, and then the clear gaps between them, so the barometric pressure is BOUNCING.

House hasn't flooded yet. The concrete wall, french drain, and landscaping changes (two trees removed) are collectively holding. I dug a little trench in the french drain's gravel so the water from the downspout at the most problematic corner has a nice clear path to go AWAY from the house. (It's washed all the dirt under the downspout away to expose brick. I didn't know there were bricks under that. I guess the french drain installers put it in.)

(One of the trees we removed was a crepe myrtle, which is trying very hard to grow back. The stump was ground to sawdust a foot below ground level, but what's left is sending runners out all over the yard, which I've been regularly plucking for something like a year now. Every few trips to HEB I get another couple salt canisters and pour salt on where the stump was. That reduces its enthusiasm slightly, but hasn't come close to killing it, and any significant rain washes the salt down the storm drain again. So those legends about "salting the earth" back when Carthage delenda est seem somewhat exaggerated, or perhaps they merely never had to deal with an invasive ornamental plant people in previous decades thought it would be a great idea to plant all over the neighborhood. See also Austin's "cedar" infestation.)

Oh well, salt's cheap...


August 25, 2017

Southwest was kind enough to let Fade move her flight back to Minneapolis up a day, to avoid the hurricane. I walked to the grocery store as she was packing up to grab more caffeine (my schedule is flashing 12:00 with all the nights walking to UT and talking to work's Japan office on VOIP, but trying to be awake days for other stuff. A sleep schedule consisting of multiple 3-hour naps isn't necessarily sustainable long term...)

Anyway, there was a positive DELUGE trapping me at HEB when I tried to head back, so Fade got her suitcase and the dog into the car and came and picked me up, and we went to the airport from there. And after I dropped her off... the sky was clear again. And has remained so ever since.

This is a very odd hurricane so far. I guess that's what you get when you name them after six foot tall invisible white rabbits.


August 18, 2017

It's difficult for me to explain just how craptacular lsof is, at least judging by its 2600 line man page.

The basic idea of lsof is clever: it doesn't just check all the /proc/*/fd entries to see what files processes have open, it also checks their "exe", "cwd", and "root" symlinks, and their memory maps to see if anybody's mmaped that file either. It also looks under /proc/net at the files tcp, tcp6, udp, udp6, raw, raw6, unix, and netlink, which tells where open sockets point to (so you can describe the /proc/*/fd entry of a process when it points to a socket).
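
The core trick is tiny; a sketch of the fd part (error handling elided, and the "exe", "cwd", and "root" symlinks work the same readlink() way):

#include <dirent.h>
#include <stdio.h>
#include <unistd.h>

void show_fds(char *pid)
{
  char path[64], link[4096];
  struct dirent *de;
  DIR *dp;
  int len;

  snprintf(path, sizeof(path), "/proc/%s/fd", pid);
  if (!(dp = opendir(path))) return;
  while ((de = readdir(dp))) {
    if (*de->d_name == '.') continue;
    snprintf(path, sizeof(path), "/proc/%s/fd/%s", pid, de->d_name);
    if ((len = readlink(path, link, sizeof(link)-1)) > 0)
      printf("pid %s fd %s -> %.*s\n", pid, de->d_name, len, link);
  }
  closedir(dp);
}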

Unfortunately, the user interface is _insane_. It shows signs of a long history of unconstrained development on a bunch of different fragmented unix variants, maintained by people who glued bits on at random and never said "no" or tried to be consistent about anything.

There's a -b option to avoid using "kernel functions that might block", namely lstat(2), readlink(2), and stat(2). There's ALSO an -e option that skips those for a specific filesystem (although it notes it's path-based, not matching major/minor, so caveat emptor). Except that only skips stat(2) and lstat(2), to skip all three you need to use +e instead. (Lots of arguments have + and - versions, sometimes doing completely unrelated things. For example, -c specifies which command names to show, and +c specifies how many letters of command name to display in the output. Except -c matches the start of commands (so if I say -c b it'll show me all instances of bash and bc and so on). Oh, and you can go -c /regex/ to do regular expression matches.)

A lot of options have _optional_ arguments. How this is parsed I couldn't tell you. If you say -s by itself (or something like "-s -a") it "directs lsof to display file size at all times". But if you say -sTCP:LISTEN it filters for TCP sockets in LISTEN mode. Of course the part after the colon can be a comma separated list of states including things like "FIN_WAIT_2" and "SYN_RCVD". (This is just "some common TCP state names", not an exhaustive list.) For some reason, the TCP states are all upper case and the UDP states have mixed case names like "Idle" and "Unbound". For more information, I'm expected to read the lsof FAQ.

Did I mention the man page is not complete? There's an LSOF FAQ. It says that some other section of the giant man page says where to find it, and I don't care and am not reading it.

There's a -C option to disable use of the "kernel name cache", something I don't think Linux has. There's also -D to disable use of the "device cache file", which again would do what on Linux exactly? This one requires a function letter: b says to build the device cache file, i says to ignore it, r says to read it, u says to read and update it, and ? says to report the device cache file paths (of _course_ there could be more than one). This is an unconscionable level of micromanaging crap to do something dubious that's probably specific to an operating system that stopped being produced in the 1980's.

A positive _masterpiece_ of crap is the -f option, which "by itself clarifies how path name arguments are to be interpreted". It has 7 paragraphs and 2 tables of explanation, can take 5 different letter arguments "in any combination" (including none of them, in which case they recommend -- after it to prevent the next argument from being taken as a parameter), and comes in - and + variants to enable/disable what it's searching for.

The point of -f _seems_ to be that if you specify a mount point as one of the arguments, it automatically notices it's not a regular directory and shows you everything in that mount point. So lsof /home shows nothing (because no program currently has that specific directory open, or as its chroot root, or as its cwd). But lsof /dev shows 461 entries (basically open device nodes, mostly /dev/null, /dev/tty, /dev/ptmx, and /dev/urandom). So if /home _was_ a mount point, it would behave differently (showing everything open under it, like +D does), but it's not (at the moment) so it doesn't.

That isn't what -f does, the mount point vs not behavior varying is just what lsof does automatically. The point of -f is to _modulate_ that behavior.

Needless to say, the lsof the Android guys sent me isn't treating mount points differently from regular directories. I have yet to find a syntax to make a specific argument recurse when others don't, but I'm not to the end yet and wouldn't be surprised.

I'm trying to figure out what subset of this crap is worth implementing, and what parts I dowanna do. Some is sort of maybe edge-case useful once in a blue moon? The -T option (which takes arguments) displays extra TCP information: queue length, socket options, flags... I'm pretty sure I don't care? Not enough to implement the arguments -T takes specifying WHICH of these things to display.

If it worked like "ps" where you have -p blah,blah,blah where each blah is a magic name tied to a data field, I could add a giant table with all the things and then let you specify what fields you wanted in which order and I could GLUE THIS TO PS AND REUSE THE LOGIC. (It's more code that grovels around in each /proc/$PID directory, seems like a natural fit.) But the command line argument syntax is totally different and the organizing principle is a list of files (where PID is basically an attribute of the file).


August 16, 2017

Sidelined by a pulled muscle, spent half the day in bed trying not to move. (Thought I'd bruised my tailbone but no, putting more weight on my right leg than my left, even for a moment, is a REALLY bad idea right now.)

In other news, Ibuprofen is a wonder drug.


August 15, 2017

Still banging on lsof.c. (Under what circumstances does it _not_ have to read everything up front before producing any output?) Well, alternating that and GPS. (Jeff made GPS sort of work! Well, bits of it, using software correlators significantly slower than real time.)

An unexpected benefit of marriage is that if you have a sudden attack of intense intestinal distress having walked an hour away from home late at night, there's someone you call to bring a car full of towels, who is then willing to throw out a pair of shorts while you huddle in the shower being really miserable. (Cell phones are also a wonderful invention. The smart phone part even let me look up and call the UT after-hours maintenance people and apologize profusely for the state of one of their bathrooms.)

All considered, not my most productive evening of programming.


August 14, 2017

Ok, I went back on caffeine. Too much to do, and that's about as much zombie-ing as I can afford right now.

The big push at $DAYJOB is eating my brain (still no end in sight) and I need the occasional palate cleanser, so I've been poking at lsof cleanup (which has nobody waiting for it, and is thus relaxing). But as with the rest of pending, it hasn't been promoted yet for a reason.

I'm confused about what people _expect_ out of lsof. There's no spec for it, and I'm not sure what it should do. I use lsof -i4 and -i6 which the one Android submitted to toybox doesn't have, but the upstream one was written many years ago (at Purdue university) for non-linux systems, and has a 2600 line man page full of crap like:

To find processes with open files on the NFS file system named /nfs/mount/point whose server is inaccessible, and presuming your mount table supplies the device number for /nfs/mount/point, use "lsof -b /nfs/mount/point".

No. I am not implementing special support for nfs. I have todo items to implement v9fs and smb servers in toybox, and I'll probably do an "nfsmount" wrapper, but even that hasn't been a priority.

Both Android's and ubuntu's lsof are reading all the ps data for all processes up front (which takes 40 seconds on my netbook) before producing any output, which seems crazy.


August 8, 2017

Three different spam campaigns in the past week offering "content" for landley.net. I wonder what the scam is? (Yeah, SEO, tracking cookies, viruses... but which in this instance?)

One of my perpetual todo items is "collate todo lists". Another todo item I haven't actually written down yet but should is to go through my old blog entries and find either "I should do X" or unfinished things I got distracted halfway through.


August 7, 2017

Mozilla/KDE has resurrected this ancient thing, which I last complained about here almost four years ago. Not exactly encouraging me to give either project a second look.

(I've gone off caffeine in hopes that'll help with the headaches again. Much napping. Still hugely irritable but at least I've got a reason for it now. Mostly drinking HEB's diet tonic water, quinine flavored, so if I get a strain of malaria from the 1930's before everything evolved resistance, I'm all set.)


August 6, 2017

I've been really tired for several days. Dunno if there's a cold involved, or allergies, or downshifting from the Monster energy drinks to kickstarters (which don't give me the same migraine symptoms), or trying to switch back to a day schedule, or the endless GPS grind. (Which is making progress, but still a topic on which exhaustion set in long ago.)

I've been really irritable as a result. It's statistically very unlikely that a half-dozen different sources have all given me unusually pointless BS requests at the same time, and if that's how I'm perceiving them I'm probably evaluating badly.


July 31, 2017

Bunch of find bug reports through github. I still have pending stuff on ping, dd, and cp --parents.

Sigh. If I'd been able to work on this the past 18 months at the rate I did in 2015, I'd have the 1.0 release out by now.


July 30, 2017

I was totally burned out and going "I'll just take today off, it's Saturday, won't even carry my netbook with me..."

And Jeff fixed GPS. Position fix within 2 meters, derived from MIT-Licensed code. So I need to look through Jeff's new thing, which is the _fifth_ GPS codebase I've poked at, not counting the code I wrote from scratch or Geoff's hardware correlator acquisition stuff.

This new hamsternz project has two implementations, the one big file version at the top level, and the 300x faster version in a subdirectory that's chopped up into tiny little .c and .h files so you have to choose-your-own-adventure your way through reading it. I question the design choices here: one-big-file-but-slow and many-little-files-but-fast are orthogonal issues, surely? (When I first started poking at tinycc it was one big .c file ten thousand lines long. Ok, that's an extreme case ala Berkshire Hathaway never doing a stock split and hitting a million dollars per share, but still. The two versions of hamsternz have _two_ big differences, and I'd really like one from each pile. Oh well.)

I'm glad we have something that works (albeit needing serious surgery to fit on the hardware, run fast enough to matter, work consistently, and stay up longer than 10 seconds), but I'm _tired_.


July 29, 2017

Poking at mkroot integration of native toolchains, and the next problem up is packaging, I.E. what goes in which filesystem.

Musl-cross-make's native toolchain extracts to a couple hundred megabytes, so if you add that to the initramfs a qemu image with 256 megs of ram can't extract it. So we need to package it into a block device filesystem we can feed in to /dev/vda or similar. In Aboriginal Linux, I used squashfs.

I haven't currently got mksquashfs in the airlock directory (no host-tools), so I need to build it. And there's an organizational issue: the mksquashfs build needs zlib, which I'm currently building as part of dropbear. The download and build of the package are both in modules/dropbear. I'm trying to build a host version, so having a different build is ok (if not ideal), but having a redundant download makes me a little sad.

In Aboriginal Linux I factored package builds out into sources/sections, which makes it easier to share them but harder to follow what's going on because the order of operations jumps around. One of the big advantages of mkroot is it's a simple script that runs stuff in order. It has includes, but each one is "now run this entire script start to finish", and they're all at the end of the first script. (I'm thinking of factoring out most of the mkroot.sh script and turning it into modules/root so the top level ONLY calls modules, but having a top level script you can copy into an empty directory and run and it does what it says on the tin... don't want to give that up either.)

But this "run in order, with all dependencies local" design means if you have to do something twice, you wind up duplicating it. Such as both dropbear and mksquashfs needing zlib, with neither depending on an external zlib build.

*shrug* It's a conflict, you either duplicate stuff inline or you factor it out. Both have downsides. The definition of "simple" has multiple local peaks, and sometimes they exchange gunfire.


July 24, 2017

Got a feature request for cp --parents, which on further investigation is full of corner cases.


July 21, 2017

Note: if I haven't responded to a toybox issue after a week, ping me again. If I then don't get to it for another couple days, my policy is to apply your patch as is and then it's on me to clean it up later, without blocking your use case.

Sorry I've been so out of it. Working for a startup on life support it's crisis du jour, you look up and a year's gone by...


July 20, 2017

There's a s390x user trying to make qemu work under Aboriginal Linux. Cool! I pointed them at mkroot.

Alas, mkroot is the kind of project where you don't necessarily hear back from people because once they've got a kernel config and toolchain that boot a system to a shell prompt on their hardware (or emulator), they can take it from there. It would be nice if you could natively bootstrap things like debian, gentoo, or even buildroot under mkroot just going "I have a shell prompt and a build environment, now build packages with this". But that's the distro hairball problem which I still need to solve. Most notably for AOSP.

That's why I'm trying to get _both_ musl-cross-make and Android's NDK to provide sufficiently capable cross and native toolchains I can both rebuild mkroot and build Linux From Scratch under the result. Orthogonal layers: mkroot should work with multiple toolchain sources.

I keep coming back to Linux From Scratch as the lowest hanging fruit of the distro hairball problem. Building it probably gets us about halfway to solving it for debian (both because it forces toybox to fill out all the corners and because if something _is_ missing LFS has probably built one so you know how to bootstrap up to a more complex build environment for debian). It's maybe 1/5 of the way for AOSP, which can't just swap in a gnu tool (no gpl in userspace, if I want it self-hosting I have to write my own mini-git for repo to clone/pull stuff with).

As for other distros, Alpine Linux might be the next lowest-hanging fruit, not sure. Buildroot is a tardis console that might already be able to do this but I've never figured out how, Gentoo's problems are self-inflicted and not hugely interesting for me to solve, Arch is for people who think Gentoo is too soft on newbies and thus full of "those people" who didn't truly EARN their Linux system, and Red Hat is too corporate to do anything with in a hobbyist context. (And yes, I say that while putting AOSP on the _other_ side of that line.) I've poked at SuSE before but that whole "open build" thing (which I used at Cray in 2013) was way too complicated...

Anyway, that whole struggle is a can of worms I haven't dug my way to yet. LFS first, which has its own giant dependency list to get there...


July 19, 2017

From today's email:

> I have a RFID Reader with BusyBox on board. I need to execute a stored
> procedure on SQL Server with Java, but from BusyBox I can't execute my
> code. Have you ever tried to do this? On this BusyBox (v 1.14.3) has
> installed JamVM.
> 
> Could you help me?

I handed off busybox development 11 years ago and I still get these. Busybox is seldom actually involved in the problems they describe, and I'm generally not sure how much backstory I have to explain to get them to the point they're even asking the right questions.

Alas, I'm a sucker for help requests, and spend time trying anyway. Here's my stab at it:

Busybox is just a set of command line utilties, mostly implementing the Posix and LSB specs. It's not an operating system. Your operating system is Linux, running on some kind of hardware. (If you run "uname -m" it should tell you what kind of hardware. If that's not there, "cat /proc/cpuinfo". Embedded systems are usually arm or mips, but sometimes they're x86 or powerpc or something else.)

To run userspace programs, you use a C library. The "readelf" or "ldd" commands can usually tell you which one. It might not be installed on the system if your binaries are statically linked, in which case you'd have to copy a binary to another system and run the command on it there. For example, on a normal linux system, run:

readelf -l /bin/ls | grep interpreter

That'll probably say something like:

[Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]

Which means your C library is glibc for x86-64.

To run _any_ binary on an embedded Linux system (such as busybox is usually installed on), you need to build it for the right hardware type, and either statically link it or use the same C library as the target. (If you build the new binary with the same compiler toolchain the system was built with, it should all match up. If not, you need to match it yourself.)

Next question: you said you need to "execute a stored procedure on sql server with Java". Where is this SQL server running? (On the RFID reader?) Which SQL server is it? (Postgresql? Mysql? Oracle? The one actually named "SQL server" is a microsoft product, generally not something running on a Linux system, much less an embedded one.)

Are you trying to get the busybox rfid reader to do a network transaction to a server somewhere, running an sql server and a java runtime? (Which java runtime? Oracle's?)

I strongly suspect you need to talk to a domain expert here. I don't really do databases much.

Sigh. I should add a toybox FAQ entry for that, I suppose. Most of our users are through android but there's some in the embedded space, and it should pick up once I've got a toysh worth using. (Right now it doesn't quite complete the circuit as a useful standalone package. That's what my mkroot project is for, create a simple kernel+libc+cmdline system, I.E. linux+musl+toybox, booting to a shell prompt in initramfs under qemu.)


July 18, 2017

Asked the kernel guys about the Gratuitous Ping Exclusion (with patch to disable it). So far, crickets chirping by way of reply...


July 17, 2017

Did a "git clone linux clean" and the new clean doesn't have a branch master. How...? (The linux repo had an older version checked out, but how does that matter?) Had to do a "git checkout origin/master -b master" to get one, and no idea if it'll track.

Seriously, this software was developed by people who saw the Tardis console and went "what a good user interface, let's do that". (Minus the whole telepathic labeling part.)

Also, there's some kind of N^2 algorithmic inefficiency somewhere in "git bisect". On this netbook, "git clone" of my linux-fullhist repo took 2 minutes (22 seconds user, 11 seconds system time, the rest I/O bound; I can tell from the CPU meter it's not pinning a processor). But then if I "git bisect good c319b4d76b9e5" and then "time git bisect bad master" it pins a processor the entire time and the result is:

real	56m47.585s
user	47m1.816s
sys	0m24.704s

An hour. Wheee. (Most of the difference between user and real isn't I/O bound, it's me doing other cpu-intensive things during that hour. Again, stock ubuntu 14.04 with updates applied.)

So the second bisect is a little over 23 minutes, the one after that 6 minutes, and the one after that 2 minutes. Its job is to figure out which commit is halfway between "good" and "bad".

Of course the kernel developers don't use a fullhist repo due to nitpicking over commit hashes that aren't even in the current tree, so presumably they haven't hit this yet? Or maybe they just all use 8-way SMP 3ghz processors with 2 megs each of L2 cache and 16 gigs of DDR4 memory. People with less need not apply, I suppose.


July 16, 2017

The busybox commands mkroot still uses are ping, route, tar, wget, vi, and sh. (Not counting the compression code only enabled because tar needs it; busybox has gz/bz2/xz decompressors and a half-finished gzip compressor.) Those first four don't seem like _that_ big a deal, so yesterday I thought I'd take a quick look at ping, because it's pretty simple, isn't it? Why haven't I already done this one?

Generally, the commands I haven't done yet are the ones that have some sort of can-of-worms design issue once you scratch the surface. In the case of ping, it's seamless ipv6 support. By seamless I mean you should never have to specify -4 or -6 (unless you have multipath and want to force one), it should autodetect. The fiddly bit is you can specify both host and target addresses (which interface to bind to locally), and it's possible that the first one you look at has both ipv4 and ipv6, but the second only has one of those, so you have to defer the decision on which address to use until you've looked everything up.

Banged on it for a few hours until I got "ping -I ::1 127.0.0.1" and "ping -I 127.0.0.1 ::1" giving me reasonable error messages, autodetecting types properly when they match up (or when there's just a target), and letting -4 or -6 veto certain paths. Then I tried to open the port.

Back in 2011 Linux merged ICMP Sockets, which I've meant to use in this all along. To implement ping you used to have to open a raw socket, which requires root access (because you're supplying the full IP header yourself and can fake source addresses or make christmas tree packets and generally bypass all the filtering). The whole point of ICMP packets is to let you implement ping without requiring root access.
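
The new way is one line different from the old way (sketch; open_ping() is a made-up name):

#include <netinet/in.h>
#include <sys/socket.h>

int open_ping(void)
{
  // Unprivileged flavor: the kernel supplies the IP header and manages
  // the echo id, no root needed... IF ping_group_range includes you.
  return socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);
  // The traditional flavor, which is why /bin/ping is suid root:
  // return socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
}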

But when I _tried_ on my Ubuntu 14.04 host kernel, I got permission denied. And reading more, I found out about the Gratuitous Ping Exclusion, I.E. the fact /proc/sys/net/ipv4/ping_group_range defaults to "1 0" (lowest group that can use this 1, highest group 0), so even ROOT can't use it by default. This is the reason nobody uses this infrastructure and still just deploys suid ping binaries instead.

There's no obvious reason for the Gratuitous Ping Exclusion. If the new ping code in the kernel is exploitable, letting people test it and find OUT in the half dozen years since it went in might be nice. If you do want this kind of restriction, that's why selinux rules are turing-complete (and skynet will be written in them).

I googled and pretty much everybody who enables it turns it on for all users anyway. (Although they disagree about whether that's 0-65535 or 0-0x7fffffff.)


July 15, 2017

Blah. Tired and headachey for days. Haven't been sleeping well. Not entirely sure why. (I've been sleeping a lot, it just hasn't been particularly restful. Very Clingy Dog may be a contributing factor, Fade gets back from her sister's wedding Tuesday.)


July 11, 2017

Reminded of the longstanding mount -o remount,ro bug, and this time I set aside a block of time to fix it, which turned out to be two issues (basically it never should have worked in the first place, and the fact it did was its own bug).

But along the way, of course I dug up more design issues, at least things I want to extend mount to do that aren't immediately obvious how to implement. (And I still need to implement nfsmount/smbmount/v9fsmount which all boil down to "prompt for password and feed it in without making it show up as a -o password on the command line visible to other processes in ps".)

Meanwhile, the serial bug in qemu-system-sh4 (where the kernel started depending on a uart buffer that qemu never implemented) still isn't fixed. I have a kernel patch but mkroot builds vanilla packages unpatched and current vanilla kernels don't work. I keep raising the issue and it keeps getting ignored.

The reason people think sh4 is superh is that sh2 was nommu, and then Hitachi had a plan to release a new architecture each year so sh3 was only on the market for a year before sh4 came out. Then the handoff from Hitachi to Renesas (because the 1997 asian economic crisis cut their chip design budget so they spun it off to cut costs) froze sh4 in stone, because Renesas got the design but not the engineers who made it. And the new Renesas engineers wasted years creating an sh5 design nobody wanted (a radical departure from traditional sh that lost most of its advantages), then gave up on superh and started doing arm instead. So that's why qemu-system-superh is called qemu-system-sh4. (Meanwhile, I suspect qemu-system-x86 being called qemu-system-i386 is because Intel's marketing department paid somebody off. AMD pushed x86 as a generic name to Intel's trademarked i386 with the i standing for Intel. The new 64 bit architecture is called x86-64 because it was designed and thus named by AMD, not Intel.)


July 10, 2017

Cleaning up dd I got to the "seek" and "skip" options, finally working out the mnemonic for remembering them. (You _must_ seek on stdout to preserve existing data, stdin you can just read the data and discard (skip) the blocks. So seek= only works if stdout is seekable, but skip= works whether or not you can lseek(). I've had to check the man page every single time I've used those suckers in the entire time I've known about dd, which is embarrassingly long.)

This means I need a consume() function to lseek() past input and fall back to reading through it if we can't seek (because zcat | dd). Which brings up "should this be in lib/lib.c", which made me think of tail.c already doing something like this. I've always been embarrassed about tail having two complete codepaths for the seekable and nonseekable cases, but without a seek fast path doing tail on a gigabyte log file can take minutes from a slow disk. The performance difference is big enough that _not_ having the second code path is really noticeable, and makes the command almost unusable.
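
What I'm picturing for consume() (a sketch of the idea, not committed code):

#include <unistd.h>

// Skip len bytes of input: cheap lseek() when the fd supports it,
// read-and-discard when it's a pipe (the zcat | dd case).
void consume(int fd, long long len)
{
  char buf[4096];

  if (lseek(fd, len, SEEK_CUR) >= 0) return;
  while (len > 0) {
    int n = read(fd, buf, len > (int)sizeof(buf) ? sizeof(buf) : len);

    if (n < 1) break; // EOF or error: nothing left to skip
    len -= n;
  }
}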

However, tail() couldn't use consume(). The reason is we don't know where to seek to, we have to seek to the end of the file and then read _backwards_ to find the appropriate number of lines to display. On the forward pass we have to look at all the data and remember it. Sigh. So less useful as generic infrastructure.


July 9, 2017

Burned out badly on the GPS stuff, Jeff talked me down. I really hate this kind of signal processing, you keep going down cul-de-sacs where it looks like it was working but then you find out it was fundamentally never right and you have to rip out and redo the bits you thought were working YET AGAIN.

(Also, the 1995 GPS spec is badly written. The In Nomine core rulebook is clearer and better organized than this.)


July 7, 2017

And so my fourth attempt to clean up dd.c has started. This time instead of reading the posix spec and trying to implement something sane (and throwing it out once I've been called away from my half-finished work for more than a month and no longer remember where I left off), I'm instead doing small incremental cleanups to the code that's there.

The existing dd is wrong, missing large chunks, and pretty much untestable (how do you check what ibs and obs are doing when pipes collate writes so the only way to see the transaction size is under strace)... I'd throw it out completely but it's one of the commands Android grabbed out of "pending", and I don't want to break them.

I tweaked atolx() a while back to get it closer to what dd.c needs, but there's still 'w' meaning word missing. Add that to lib.c and then I can yank strsuftoll(). (Which is basically atolx_range() except it's non-obvious because "def" is functionally "min". Sigh...)
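
The suffix handling I'm picturing (a sketch of the idea, not lib.c's actual code; atolx_w() is a made-up name):

#include <stdlib.h>
#include <string.h>

long long atolx_w(char *s)
{
  char *sfx = "kmgtpe", *end, *p;
  long long val = strtoll(s, &end, 0);

  if ((*end|32) == 'w') val *= 2; // dd's "word" = 2 bytes
  else if (*end && (p = strchr(sfx, *end|32))) val <<= 10*(p-sfx+1);

  return val;
}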

The other difference is that strsuftoll() returns unsigned long long instead of signed long long. Given the ranges being used in the callers, that only matters for count, seek, and skip, but all of those are filesystem lengths: the maximum file length on ext4 is 16 terabytes and signed long long goes up to 8 exabytes, so standard LLONG_MAX seems appropriate.

There are 6 users, and that means the only remaining difference is whether the smallest allowed value is 0 or 1. Tempted to make a wrapper for that, but small chunks: let's check in what I've got first.


July 5, 2017

Following up on yesterday's electric self-driving car writeup, let's focus on the holdouts. The people opposed to this brave new world, who WANT to keep their landlines and film cameras and vinyl records and their own personal car that they drive with pedals and a steering wheel. And the message for them is: you can still ride a horse today. You'll just be restricted to horse trails, instead of riding around downtown in a modern city.

Learning to drive a car in 2017 is like learning to ride a horse was in 1901. This skill will be obsolete within a generation, and nobody will need to do it anymore outside of closed course hobby contexts. And eventually, nobody will be able to except on a closed course.

Self driving subscription service means no car insurance, no speeding tickets, no calling a tow truck when you're locked out or need your battery jumped. If the car blows a tire you summon another one and continue on your way, the _service_ tows it. You don't even need a driver's license: no extensive training, periodic retesting, or loss of mobility when you get old (or are too young). And if 99% of the population doesn't need or use those things, the infrastructure to provide them to human drivers will atrophy. (Hint: the failure mode of "speeding tickets are no longer a profit center so the police force stops devoting officers to traffic control" isn't "and now you human driver can go 90 in a school zone!")

People are used to ownership, it's _my_ car, it's important I own it. (Part of this is Stockholm syndrome, for most people it's the second largest expense after your house, as with New York City taxi medallion owners they assume a giant financial liability will remain an asset. But ignoring that for now...)

Do you know how to fix "your car" yourself? (Even the modern computerized ones?) When's the last time you changed your own oil? (Where do you legally dispose of the old oil?) Doc Brown's "I blew the fuel injection manifold, it would take me a month to rebuild it" is predicated on him being a mechanical genius with a fully equipped blacksmith shop, operating on a car from 1985. Anybody less impressive working on a newer car has to order a part and hope there are still some in stock for that model. (That's why I had to get rid of my pontiac, it wore out something called a "throttle snorkel" which was no longer available; they made a fixed number of spare parts for that model year and when they ran out the car became unserviceable.)

Any argument about "independence" ignores the constant need for gas stations to refill the car and governments to maintain roads. That's what defeated Doc Brown: driving off-road tore a fuel line, he couldn't get more gas, and trying to whip up his own fuel damaged the car. Cars only really work in a context of a massive fuel supply network and roads maintained by governments, with painted stripes and signs aimed at human drivers. If human drivers become scarce, why pay for that? Pedestrians and bicyclists, sure. But the insurance liability of manually piloting a thousand pounds of metal at high speed, which currently kills a hundred people per day in the US alone? (37,461 in 2016 = 102/day.) Right now that's seen as normal, the way cars without seat belts used to be seen as normal. If it STOPS being seen as normal, good luck getting insurance.

Continuing to drive manually probably means finding manual electric cars, because fuel refining and delivery is a highly optimized narrow-margin industry that's already contracting and starts losing money with only about a 20% drop in volume. It's so far up its current tree that I'm told retooling to be profitable at lower volumes is such a major undertaking it's akin to starting over. (Given the amount of institutional memory already lost behind the scenes, this isn't surprising. Now add in budget cuts, and massive writeoffs from places like Exxon admitting that the vast oil reserves they have on paper aren't profitable to extract, so the "assets" aren't really assets... oops, a trillion dollars of paper value just vanished, surely that won't have financial repercussions...) Gasoline should continue to be available (assuming sudden loss of lobbying power doesn't lead to blowback outlawing it after so many years of multi-billion dollar subsidies), but you'll have to order a delivery like liquid nitrogen today.

So human drivers will stop being able to get insurance, fossil cars will have to bring their own spare fuel, and servicing the vehicle becomes a specialist operation not necessarily available in your city (send the car away on a flatbed if you can't get it to start). Driving your own car becomes like riding your own horse, a hobby for the idle rich to show how much money they have, done far away from anyone else who might get hurt. (How did Superman actor Christopher Reeve break his neck again?)

To get an idea how fast the switchover from gasoline manual driving to electric self-driving should be, let's look at the switch from horses to horseless carriages. Carl Benz patented the "Benz Patent Motorwagen" in 1886 and built about 25 of them over the next decade. (Notice how a "motor car" is now a car, a "cell phone" is a phone, and in about 20 years a "self-driving car" will be a "car".) Once this patent expired the automobile took off. Henry Ford introduced the Model A in 1903, the Model T in 1908, and the moving assembly line in 1913. The last horse drawn carriage in New York City was retired in 1917 and the last horse drawn engine in the fire department replaced in 1922.

I.E. nobody cared about proprietary crap, but once the patents expired the S-curve started upwards and it took about a decade for the new thing to become ubiquitously available to anyone who wanted it, and another decade for the old thing to go away even for the holdouts. Horses could eat grass, didn't need replacement filters, and could make more horses, so they weren't _forced_ out. But horses had their own infrastructure (farriers to shoe them, hitching posts and water troughs, sanitation services willing to shovel horse manure off city streets...) that went away as cars took over.

The world moved more slowly back then, and it's hard to "anchor" the electric car timeline. (Tesla's shiny red sportscar "roadster" became profitable in 2009, is that the Model A here? Their "Model 3" is clearly a phonetic play on "Model T".) But even a relaxed timeline implies the last few petrol/diesel cars wind up in museums by 2040 even without significant supply chain issues forcing the matter, and demand for _new_ gasoline cars dries up years earlier, meaning manufacturing would stop sometime before that, and research and development before that. That's why car manufacturers are racing to switch over to electric vehicles _now_, their current manufacturing lines are already a stranded asset. (Liquid natural gas cars are betamax tape in this analogy, their market window closed before they could go through it.)

There are two related transitions: the move from electric cars to self-driving is a bit like replacing hand-cranked cars with electric starters, which was first tried in 1896, a viable version patented in 1911 and sold by Cadillac in 1912, and Ford's model T switched over in 1919. So about a decade after mass production of cars, they stopped being hand-cranked (which tended to break your arm if anything went wrong). Using cell phone/smart phone as another analogy for electric/self driving, the motorola star-tac was introduced in 1996 and the iPhone and Android phones around 2007, pretty much a ten year delay. Pre-smart phones had some webbish features (web browsing, java games) before they became real computers running Unix under the covers the same way new cars have some self driving features now (autopilot, auto-parking, assisted reverse, etc), but "smartphone" is what got the holdouts to move off of land lines.

The main delay in getting smartphones out there was getting cell phone carriers to allow unmetered internet bandwidth (Steve Jobs negotiated an AT&T-exclusive deal for the iPhone, and when that took off all the other carriers jumped on Android to compete). Similarly, the main blocker to self-driving cars right now is regulatory, not technological. (Tesla set the industry back several years by enabling full self-driving mode before the technology was ready, resulting in exactly the fatal crash that Google has been scrupulously avoiding for many years, spending years restricting them to 25 miles per hour to make sure nobody was in real danger before they were DARN SURE about everything.) But the regulatory issues are mostly a problem in the US and Europe. The rest of the world's deploying this stuff already, we'll be importing most of it the same way the Japanese automotive industry blew Detroit out of the water in the 1980's when we were selling gas guzzling rolling living rooms with fins long after the OPEC oil embargo, and they were doing small fuel-efficient cars people actually wanted to buy.

This is just rough estimates from an amateur, the experts are placing electric self-driving's "ubiquitously available" breakout year around 2025, and expecting the auto fuel supply chain to implode around then too. Stanford professor Tony Seba says it'll all be over by 2030.

Here's a Morgan Stanley analyst predicting the imminent arrival of "mobility services" and telling the investment community that fossil cars are already toast. (He uses a 4 square X/Y plot with "more shared" and "more automated" axes, so "drive your own car" is in the lower left, self-driving subscription services are in the upper right, and car2go and uber are in the other two squares as transitional services that converge on what financial analysts apparently call "shared autonomy".)

Here's another guy giving a talk about his research on the topic. (He owns a real estate firm and his brokers drive company electric cars to show properties.)

(And yes it's possible to program a self-driving car to kidnap people. It's possible for chauffeurs and taxi drivers to do that too, and yet people get into a stranger's Uber every day without getting kidnapped. New things are scary vs the devil you know...)


July 4, 2017

As far as I can tell the biggest thing anybody can do to hurt the modern GOP is install solar and switch to electric cars. Both these things are the "CD -> mp3" style transition that makes the old guard freak out with existential dread, and this time it's the fossil fuel industry doing it (4 of the 5 largest companies in the Fortune 500, plus the vast majority of Russia's cash exports, what the Cock brothers built their fortune on, etc).

Solar is already cheaper than other generation options, but even if utilities balk people can put solar on their own roof and install their own battery wall. (You lose the financial benefit of getting paid to put energy back on the grid, but you avoid a lot of regulation and if you store enough in your battery you don't lose much.) This is already a financial net positive if you lease-to-own the asset the way car loans and home mortgages work (GOP-dominated state legislatures in Florida and such are passing laws outlawing financing, which is stage 3 of Gandhi's ignore/laugh/fight/you-win metric), but you can still get a home improvement loan (second mortgage) and use the money once you've got it. Given a few more cycles of exponential price decline it'll be "repainting" or "replacing the carpet" level of expense. (Right now the main reason to delay is that prices are dropping so fast it'll be cheaper later. The "when to buy a PC" problem of the 1990's...)

As an aside, Tesla's trying to convince everybody they invented this stuff the way Microsoft tried to convince everybody they'd invented operating systems, but Samsung and Sony and so on are selling into this space just fine. Germany's spent a decade trying to wean itself off Russian natural gas by advancing solar and wind. China's government made fixing their air quality a plank of their previous 5 year plan and doubled down on solar in the current 5 year plan running through 2020. (They're also building a lot of nuclear power stations, but hopefully they'll back off on that part before too much damage is done.)

The USA is much less of an interesting market than it was a year ago, but that's not specific to solar. We're not steering here, we're along for the ride. Obama _tried_ to keep us near the front of the pack in solar technology, and the GOP tried to make "Soylindra!" a synonym for "Benghazi!" when what he DID was approve loan subsidies to 1980's whitebox PC manufacturers that DIDN'T wind up as one of the Big Four in the next decade, and then talk up the wrong ones in his speeches. (You touted Tandy and Zenith? They were put out of business by Compaq, Gateway, and Dell. Clearly this whole PC thing is a fad! Shame on you for pumping money into a thriving competitive market with multiple players of vital strategic interest to the country's future. You're hurting our vinyl record investments!)

The switch to electric cars is likely to happen really fast because it's a MORE compelling use case: robot chauffeur driving a car that lasts a million miles and is way cheaper than driving yourself. But let's walk through the steps to get there.

I looked at the "car2go" car sharing service a few years ago, which was much less compelling but still had a significant userbase. Car2go deployed a fleet of tiny two-seater cars around the city, which reported their location via GPS and cell towers. Subscribers were charged a $10 monthly fee for their app, which showed the nearest parked cars on a google maps-ish display. You walked to a car, wirelessly unlocked it with the app, drove it to your destination (charged a few cents per mile for distance), parked the car there, and then it was just another car on the map ready for members to use. Usually you could drive the same car back to your starting point. If the car was low on gas, there was a special debit card in the dashboard you could use to refill it. They presumably had a couple people go around the city picking up cars and driving them to more convenient locations (there were dedicated "car2go" parking spaces where you could usually expect to find one), the same way u-haul has people drive trucks between cities when cheap rentals the other way aren't enough to rebalance them on their own.

That's the old, obsolete "manual driving with gasoline" business model, which did not change the world. Its enabling technology was the smartphone, everything else about it was conventional technology. But it had its advantages (and thus its fans), starting with being a lot cheaper than owning a car: $10 was _much_ less than most monthly car payments, the mileage charge was less than you'd pay for gas in an SUV, and you never had to deal with maintenance (changing its oil, washing it, fixing a broken air conditioner, turn signal's out, window refuses to roll up, that "crunch" the left side makes every time you hit a speed bump...). Plus other little bonuses like your apartment didn't need its own parking space.

The main downsides of car2go (and zipcar, and a dozen other competitors in this niche) were uncertainty and inconvenience: you didn't know how far away the closest car was and had to walk to pick it up from there. Those downsides restricted the userbase of manually driven car sharing, but add self-driving and both problems go away entirely.

The self-driving version is you click the "car" app on your phone, and the nearest car drives to pick _you_ up. It can show you the approaching car on the map, or just give you a countdown to its expected arrival (a bit like airport shuttle services do now). If you care that much you can even schedule a car's arrival well in advance to guarantee it'll be there exactly on time (reserving one for an appointment; this can do anything a taxi can but cheaper because you're not tying up a human servant's time). But if your service offers a "car arrives in 90 seconds or your ride's free" guarantee, why bother? How often do you call for a pizza delivery "next tuesday, 8:30 pm"? (Back in the days of blockbuster you'd pick up a video to watch later, now you fire up netflix and click the thing and it's there. We don't draw a bucket of water from the well for morning, we have faucets.)

As compelling as self-driving is for most users (it's why the rich get driven around, you can read or sleep or work on the trip), electric cars are as compelling for the fleet operators. Car2go itself is already switching to electric, because electric cars need so much less maintenance (no fluids to change, each wheel has its own motor you can swap out as easily as changing a tire, and the thing that wears out most often _is_ the tires). And they're much cheaper per mile to drive because 80% of the energy in gasoline is lost as heat while electric cars are over 90% efficient. Gasoline has traditionally had longer range because half the energy comes from the atmospheric oxygen you don't have to carry with you, but half your fuel being free and weightless doesn't make up for wasting 80% of it: electric cars may not hold as much total energy, but they consume it twice as efficiently, making them far cheaper per mile driven. And solar charging can make the electricity _literally_ free, something not on the table for gasoline.

The range you can go between charges is currently lower, but that's largely a question of how much cost+weight you want to devote to batteries. There are electric vans with 100,000 kilometer ranges today, they're just really heavy and expensive. A self-driving 18 wheeler can get across country on a single charge now, it just doesn't make sense for a sedan yet. But with battery technology getting faster and improving its power/weight ratio every year (on an exponential curve), even the cheap lightweight ones are on course to surpass gasoline's range in the next few years. And for fleet vehicles staying within a city, total range isn't the passenger's problem, it's just trips between refills.

Most cars are driven less than 4% of the time (they sit there parked the other 96%), but shared cars get driven more often. (The self-driving shared car people are expecting ~40% usage, and keep in mind half the day people are asleep.) This means maintenance becomes a much larger issue. An individual owner may take 5 years to put 100,000 miles on a car, a shared car can get there in months. Fleet vehicles wear out WAY faster, in that context a car expected to last 500k miles beats a car expected to last 150k miles hands down even if it _wasn't_ cheaper to buy and operate.

Refueling of electric cars is also easier to automate: there's a "90 second battery swap" youtube video from 2013 demonstrating a car driving over a bay where a robot arm unscrews and replaces the batteries at Indy 500 pit crew speeds, with no worries about fumes or liquids spilling. That was a solved problem years ago because it's _easy_. Now add a stock of extra batteries charged by solar power during the day and stockpiled as needed. (Possibly a self-driving 18 wheeler running them in from a solar farm out of town every night. Wind and solar power have been something you farm for years now.)

According to the 2010 census 80% of the US population lives in urban areas (I.E. in or around cities). For these people, over the next 5 years, "summon car" becomes a button on their phone, their car payment replaced by a much cheaper monthly subscription fee. They never have to clean out their car (although they can't use it as extra storage either, but that's just a question of getting used to a different assumption). They don't wash it. They never fill it with gas. They don't change its oil. If it gets a flat tire, the app just summons another car (while the car summons a tow truck for itself, which is the service's problem).

This means services from Jiffy Lube to Triple-A lose their customer base. Public transportation gets radically transformed (will cities _need_ self-driving busses, or just subsidize self-driving car subscriptions?) It also means most parking lots and downtown garages become available for redevelopment, probably a _billion_ dollars of land in your average city. And the root cause of traffic jams is "people can't drive", which affects infrastructure upgrades...

It's a _big_deal_.


July 1, 2017

Forgot to blog about the end of Japanese class, but I tweeted a clue at the time: in the third week I got sick and missed a couple days, which was like 8 hours of class time due to the compressed summer session, and I was already so far behind due to not getting the textbook and workbook on time (and generally sucking at foreign languages) I withdrew from the class. I should try again some other semester, although when I'll have the time...

Still got the textbook. I should read through it and see what I can pick up. Maybe enough to benefit from youtube videos or subtitled anime...


June 29, 2017

The space guys want another chunk of my time to help debug something, and this time I can work remotely. They fedexed me a board and put a VM image up on an ftp server. Given that work's 2 paychecks behind and will probably be 3 next week, I could use the money. (We haven't dipped into the second half of the home equity loan yet, but it's getting uncomfortable.)

It's been 6 months since I last did anything for them, which means the contract with the consulting company I was going through has expired, so they can send me a direct contract. This also means I can ask for a higher hourly rate, although I only bumped it up $10 (when the consulting company was probably charging double what I was getting) because inertia and I don't plan on doing it long.

Sigh. I really want to finish this GPS stuff, it's a prerequisite for most of the rest of our technology. (For prototypes we're using an external GPS dongle, which both doesn't give us the precision we need and is too expensive to deploy in bulk. We need to process the GPS signal ourselves, from raw antenna data synchronized to our own thermally stabilized clock, and although the hardware's been ready for a bit, the VHDL and C code has noticeable gaps in the data pipeline.)

And I really need to spend more time on toybox. I need to get android self-hosting, and I've made no significant progress on that since new year's.

And I need to get mkroot past what Aboriginal Linux was already doing 3 years ago. (Heck, _to_ that point. To do it right I need to write my own make and distcc, and replace the remaining busybox commands with toybox implementations. And that's months of work right there. Even doing it _wrong_ is a solid month I haven't got...)

But yeah, new timesink stacked on top of all that. Keeps the lights on.


June 28, 2017

The only thing unique about the battery "gigafactory" is the name. Imagine if motorola had created a cell phone "gigafactory" 20 years ago to manufacture its then-popular star-tac flip phone, and advertised about how we'd all be replacing our landlines Real Soon Now and it was a BIG DEAL. Instead it just happened, with another revolution (smartphones) building on top of it immediately after, and sure everything changed but motorola was a bit player in those changes.

Now we've got electric cars, with app-summonable self-driving car subscription services building on top of that, but there's no one company behind it. We've got entrenched buggy-whip interests rigorously defending the Old Way the same way the RIAA and MPAA desperately fought against MP3/MP4 and streaming media, and this time the industry being displaced is 1/6 the entire economy so they didn't just lobby the government, they hacked all those electronic voting machines everybody was warning about since the Cheney administration, and now they've made the CEO of Exxon Secretary of State. This is not _subtle_.

In the long term, the Fossil Fuel Freakout is unlikely to work better than the CDROM tax did, but what stopped MPAA president Jack Valenti from pearl-clutching about every new media technology since he compared the VCR's ability to record television programs to the Boston Strangler killing women in his 1982 testimony before congress (no really) is that he died in 2007 at age 85. D-Day was 72 years ago, the Alleged President turned 71 a couple weeks back. Current life expectancy in the USA is 76.5, expected to go up to 79.5 by 2030 according to people who expected there would still be a health care system.

So this problem should solve itself over the next decade or so, but it's really gonna suck getting there. The Baby Boomers who worked _miracles_ as teenagers (woodstock, the moon landing, the internet, stopping the vietnam war via political protests) are now showing us what it's like when the same demographic bulge all hit retirement age and start ossifying into loons. They'll run out soon, but in the meantime drowning dinosaurs climbing on top of other swimmers is the new normal. Render them obsolete and wait the rest out.


June 27, 2017

Huh. Back when I wrote about Russia I totally got syria's location wrong. They're at the east edge of the mediterranean. I thought they were basically Morocco. (I.E. Russia's camping the Suez Canal instead of Gibraltar.)

It's easy to forget how big Africa is, and how standard maps overstate the size of russia, because showing a round planet on a flat map is a hard problem. (There was a west wing episode about this.)


June 26, 2017

If you want to know what ubuntu package a command came from, "dpkg-query -S $(which $COMMAND)" is the magic invocation to do that.

One of the blockers for the dd cleanup is that atolx() is treating the suffix "b" to mean 1 byte, but dd wants 512. So ripping out dd's bespoke suffix parser to use the generic one in lib needs to harmonize that. So I go on a dive through the code trying to figure out where "b" got added (some command needed it), which brings up another trick I call "peelback annotation". Do a "git annotate lib/lib.c", hit forward slash and search for your filename (or hit colon and type in the line number; it's "less" so both work), then highlight and right click->copy the commit hash on the left of the last time the line you're interested in was touched (you probably have to cursor right first to find it), then "q" out of less, and "git show $COMMIT" to see what the change was. If it's not an interesting change, "git annotate lib/lib.c $COMMIT^1" where the ^1 on the end of the commit hash says "show me the parent of this commit". (In a merge commit the "1" would say which parent to show, you could do 2 or 3 or so on for an octopus merge. No, I dunno why it isn't zero based.)
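
Condensed into a recipe, with $COMMIT and $OLDERCOMMIT standing in for whichever hashes you copy out of each annotate:

  git annotate lib/lib.c
  git show $COMMIT                    # boring change? keep digging
  git annotate lib/lib.c $COMMIT^1
  git show $OLDERCOMMIT
  ...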

Repeating that a bunch of times gives me commit b8ef889cbfae (the current vogue is to show 12 digits of commit hash), and that commit says I added it when implementing "od" back in 2012. Except... the od man page in ubuntu 14.04 documents the "b" suffix as meaning blocks (I.E. 512 bytes). Did it change since I implemented it? (Answer: quite possibly.)

Looking around, head, tail, and find... all use b to mean 512. Truncate doesn't specify a "b" but does have +-<>/% prefixes...


June 25, 2017

I need to use mkroot as a test environment for toybox, and the current low hanging fruit there is netcat, for which I have a longish todo list.

Last time around Aboriginal Linux did native builds of things, but I haven't set up native builds yet. I have to cross compile make and distcc, but when I poked at building distcc I found that project bit-rotted autoconf into a hard build requirement when it moved to github, so I need to abandon that implementation and add it to toybox instead.

I should try adding make, though. It's slow, but it's something. The last gplv2 release is getting a touch creaky, that's _also_ on the "add to toybox" list. (Well, it was on the "add to qcc" list but that project's gone all blue-sky wouldn't-it-be-nice instead of anything I can schedule right now. My time is not my own these days. Sigh.)


June 24, 2017

I keep trying to update things like my patreon, this blog, and cutting releases of toybox and mkroot, and each time I try the cleanup work to get it ready spins off a bunch of tangents and todo list items.

This has been my working style for well over a decade, but the problem _now_ is every minute taken away from that is another minute I'm not bashing my head against GPS, and my company is starving to death because we need GPS working as a hard blocking requirement to ship product to customers.

Also, since I dunno when I'll get sucked away from stuff and for how long, I'm reluctant to check in partial results. Including emails going "oh yeah, I see the problem, looks like a simple fix" without actually doing the fix first. (Half the time it turns out _not_ to be a simple fix, and then I haven't replied to the issue _or_ fixed it when I get yanked away for who knows how long.)

The downside of long interrupts being when I come back I have to reverse engineer my own work to see where I left off. Even if I remember what I meant to do and why, how much of it is actually on the page and how much is just in my head that I _think_ I did but didn't yet? (I'm spinning too many plates to remember the current state of any of them with enough accuracy for more than an hour or two. There's 176 toybox commands in defconfig, and I've got a dozen locally modified but not yet checked in...)

Only way to clear the backlog is to keep chewing. Working on it...


June 22, 2017

In order to make a functional GPS system, you need to be able to perform something like six stages:

  1. read data from live board antenna (w/timestamps of when we saw each reading)
  2. find satellites in the noise (acquisition)
  3. track satellites as their doppler/phase changes (ongoing: they move fast)
  4. extract bit data for each satellite into 50 bits per second streams
  5. packetize and checksum bitstreams into subframe 1-5 cycles
  6. collate 4 satellites to solve for x/y/z/t

In theory all these steps are documented in the official GPS specification. (With various important details buried in other documents.) In practice, it's still kind of hard.

We have an implementation of all this in scilab, and it works fine, except that when you run it on an 8-way server with 16 gigs of ram it grinds away for half an hour before giving you the output from 30 seconds worth of input data. It's doing fourier transforms using x86-64 floating point hardware, and we need to get all this working on a 62.5 mhz System-On-Chip (with various coprocessor hardware) in realtime, and it can only eat a certain chunk of that thing's resources because the board's got a lot of other stuff to do.

Steps 1, 2, and 3 all make use of GPS tuning hardware called a "correlator". We sample the input signal really fast (16x the rate of the signal we're looking for, there's an off-the-shelf chip for that costing less than a dollar each), and then run it through a piece of hardware that looks for sine waves at a given frequency. Except GPS uses this funky thing called a "code phase", where it has a PRNG (pseudo random number generator) that repeats after 1023 bits of output, and the sine wave we're looking for is XOR-ed with this PRNG's output bits. (This is actually really clever, for reasons that would take a while to explain. Doing it solves multiple problems, but for right now just accept it as a thing we need to do to interpret the signal.)
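
Here's roughly what that PRNG looks like in C, a from-memory sketch of the spec's two 10-stage shift registers (G1 and G2) using the G2 tap pair I believe belongs to PRN 1, the first satellite. Illustrative only, not our actual code:

  #include <stdio.h>

  int main(void)
  {
    int g1[11], g2[11], i, j;

    // Both registers start out all ones (1-based indexing to match
    // the spec's tap numbering).
    for (i = 1; i <= 10; i++) g1[i] = g2[i] = 1;

    for (i = 0; i < 1023; i++) {
      int chip = g1[10] ^ g2[2] ^ g2[6];  // one output bit ("chip")
      int f1 = g1[3] ^ g1[10];            // G1 feedback: taps 3 and 10
      int f2 = g2[2]^g2[3]^g2[6]^g2[8]^g2[9]^g2[10];  // G2 feedback

      printf("%d", chip);
      for (j = 10; j > 1; j--) {
        g1[j] = g1[j-1];
        g2[j] = g2[j-1];
      }
      g1[1] = f1;
      g2[1] = f2;
    }
    putchar('\n');

    return 0;
  }

Those 1023 chips repeat over and over (a millisecond per repetition), and each satellite gets a different tap pair and thus a different pattern, which is how you tell the satellites apart even though they all transmit on the same frequency.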

And the sine wave may be 180 degrees off from where you expect (mirror image) because it's symmetrical and where you start determines what's up and what's down. (Oversampling by 4 means each sample is the same signal rotated 90 degrees.)

The 6 steps above are a bit fuzzy (overlap and bleed into each other) because this is complicated stuff: we need nanosecond-accurate timestamps to attach to raw input signals read from the antenna, and then we have to marshal them along with the data as it's transformed through "correlators" that tune into the frequency of interest (and apply a code phase to it).

We have dedicated hardware for doing #1, and in fact one of our coolest patents is doing it in about 1/8 as much hardware as usual because Jeff and Jen are very clever. I wrote code to do #2 and #3, but it's not sensitive enough (possibly due to a bug in my software correlator implementation) and Jeff insists we must throw it out and take a wholly different approach. (*shrug* Ok.)

The first thing I wrote on this project was code to do #4, then #5, using scilab output.

#6 is math. We were trying to use rtklib for this way back when, but it's a bunch of disparate functions that don't connect together, with no example code showing how to use them. We've since looked at like 6 other implementations, all of which are crazy/broken in one way or other...

Apparently most of the existing GPS implementations out there are by like the same 2 people, hired as consultants over and over. There's another guy (Andrew Holme) who hacked together his own implementation that actually works, but half of it's in an FPGA and half in a raspberry PI (as in the software passes data back and forth so it's REALLY hard to follow what's doing what) and the code he released is GPLv3 so nobody wants to use it (and we _can't_ ship that into electrical grids, no utility would allow it and we couldn't support it. That license tries to say "authors of this GPLv3 code must be able to upgrade the electrical grid infrastructure, the utilities who own the hardware can't stop them if they pay an electric bill" because Stallman was trying to Free The Cloud or something, and wound up instead excluding his new generation of software from a wide range of uses. Wheee!)

Jeff and Jen understand all the math, there's multiple textbooks out there on how this 1970's technology does its thing. But you need a working reference implementation to test ours against, and if there are a thousand steps, having the output of the last one doesn't tell you what the answer to step 22 should be. The failure mode of signal processing is "your output is nonsense static, and you dunno why".


June 21, 2017

I just got notification of another "payroll delay" at $DAYJOB yesterday. We've been on half pay for a year now, and the paycheck on May 26th didn't happen. We got paid 2 weeks later, but didn't get the missing one filled in. And now they're missing another. They say they'll get new money from a customer on the 1st, but another paycheck's due that friday at which point they'll owe me _three_ paychecks.

Sigh. I really want to see the technology I've been working on since 2014 launched out into the world, we get to open source interesting chunks of it (including time to work on toybox when we're not doing the this-is-fine thing surrounded by fire) and I'm even more excited about it since I've been researching the current state of solar technology. But this is "wheels coming off" territory on the money side. I got a home equity loan to tide the household over through this, but if we spend through that money without the company recovering I can't _afford_ to stay here.

But I _also_ want to turn Android into a self-hosting development environment and the past 18 months have been a huge distraction from that. I sometimes idly wonder if I should switch to focusing on patreon instead of $DAYJOB, but even half pay at dayjob is paying dozens of times what patreon is. It's flattering and encouraging and I'm grateful people care, but I can't remotely live on it. (And I'm supporting two people, three cats, and a house with a yard, not currently in a "ramen and roommates" stage of my life. Although Fade's pretty self-supporting with her scholarship and dorm room now, so if I _did_ sell the house... what to do about the cats though. And Fuzzy doesn't want to move.)


June 20, 2017

The official Global Positioning System Specification is a 46 page PDF full of incomprehensible math in small print, but what it describes is actually quite cool.

Back in the 1970's the US launched a couple dozen satellites with atomic clocks so they know EXACTLY what time it is, and the government launches replacement satellites when any of those wears out. (We're on like our 4th generation of the suckers now, but they're backwards compatible.)

Each GPS satellite constantly broadcasts a signal telling you what time the atomic clock onboard the satellite thinks it is, with a 50 bits per second data protocol cycling over and over through 5 different data packet formats ("subframes"). Subframes are 300 bits long, so each one takes exactly 6 seconds to transmit. One of the data fields in each subframe is a timestamp, which says EXACTLY what time it will be (to the nanosecond) at the rising edge of the first bit of the next subframe. (At the tone the time will be... BEEP.)
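
In C terms the framing looks something like this (a from-memory sketch, glossing over parity encoding and exact field positions, so don't trust it over the actual spec):

  // One subframe: 10 words of 30 bits each (24 data + 6 parity),
  // 300 bits = 6 seconds at 50 bits per second.
  struct subframe {
    unsigned word[10];  // low 30 bits of each entry used
  };

  // The timestamp is a 17 bit truncated "time of week" counter near
  // the start of each subframe, in units of 6 second subframes, and
  // it names the start of the _next_ subframe. Hypothetical helper:
  unsigned tow_to_seconds(unsigned tow17)
  {
    return tow17*6;  // seconds since the GPS week rollover
  }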

But that's just the time the satellite thought it was when the signal left it. The signal moves at the speed of light, and Admiral Hopper taught us all that a nanosecond is 11 centimeters at the speed of light, so it takes a whole lot of nanoseconds for the signal to get from the satellite to our receiver on the ground. (Except 11cm per nanosecond is the speed of electricity in wire, the speed of light in vacuum is 30cm per nanosecond, meaning if your time's off by a millisecond your position is off by 300 kilometers. If you want to know your position within 10 meters, you need to know your current time within plus or minus about 15 nanoseconds.)

Other data fields in the 5 frames describe where the satellite thinks it is when the frame was sent, or at least give you data you can plug in to standard equations to work out its orbit. In fact subframes 4 and 5 describe the orbit of the entire constellation (slowly, takes about half an hour for them to cycle through all the satellites to give you the current "almanac", although listening to multiple satellites in parallel gives it to you faster because they're each reciting different parts of it at any given time).

So if you know where the satellite was and what time it thought it was, and you know where _you_ are and what time it is where you are, you can tell exactly how far away the satellite is thanks to the speed of light. (Modulo atmospheric distortion, which there are equations to compensate for.)

Except... we don't know that. Where we are and what time it is here are what we're trying to find OUT. We know where the satellite was and what time it thought it was, and if we have a local clock we can say how long ago we received the packet (exactly when the rising edge of the next packet started, measured backwards from now).

So we reverse the math: we need to know our X, Y, Z, and T (time) coordinates. That's four unknowns, so in order to do the "solve for four equations with four unknowns" thing from high school algebra we need four different inputs (which is why you need to listen to four different satellites to get a lock). If we can get packets from four satellites, each with position and timestamp information (and record exactly when we saw each packet relative to all the other packets), we can compare them to each other and work out which unique position and time could have seen those.
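
In toy form that solve looks something like this C sketch: Newton's method on f_i = distance(us, sat_i) + b - range_i, where the fourth unknown b is your clock error converted to meters. The satellite positions and pseudoranges are made-up numbers in plausible orbits, there are no atmospheric corrections, and our real pipeline looks nothing like this, but it's the shape of the math:

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
    // Hypothetical satellite positions (meters, earth centered) and
    // measured pseudoranges (meters).
    double sat[4][3] = {{15600e3,7540e3,20140e3},{18760e3,2750e3,18610e3},
                        {17610e3,14630e3,13480e3},{19170e3,610e3,18390e3}};
    double rho[4] = {21110e3,22010e3,21310e3,22680e3};
    double guess[4] = {0, 0, 6371e3, 0};  // x, y, z, and clock error b
    int i, j, k, iter;

    for (iter = 0; iter < 10; iter++) {
      double J[4][5];  // jacobian rows, residual in column 4

      for (i = 0; i < 4; i++) {
        double d = 0;

        for (j = 0; j < 3; j++) {
          double dj = guess[j]-sat[i][j];

          J[i][j] = dj;
          d += dj*dj;
        }
        d = sqrt(d);
        for (j = 0; j < 3; j++) J[i][j] /= d;  // d(dist)/d(x,y,z)
        J[i][3] = 1;                           // d(model)/db
        J[i][4] = rho[i]-(d+guess[3]);         // measured minus modeled
      }

      // Solve the 4x4 linear system J*step = residual (naive gaussian
      // elimination, no pivoting), then take the newton step.
      for (i = 0; i < 4; i++) for (j = i+1; j < 4; j++) {
        double m = J[j][i]/J[i][i];

        for (k = i; k < 5; k++) J[j][k] -= m*J[i][k];
      }
      for (i = 3; i >= 0; i--) {
        for (j = i+1; j < 4; j++) J[i][4] -= J[i][j]*J[j][4];
        J[i][4] /= J[i][i];
        guess[i] += J[i][4];
      }
    }
    printf("x=%.0fm y=%.0fm z=%.0fm clock=%.0fm\n",
           guess[0], guess[1], guess[2], guess[3]);

    return 0;
  }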

And if we know exactly what time our local clock said it was back then, we can work out how much our local clock is off by, and then keep it accurate as it drifts. All non-atomic clocks drift. We use quartz crystals for timekeeping because of "piezoelectricity", which means if you squeeze them they produce electricity, and if you push electricity into them they flex a little, which means if you run a current across them they _vibrate_, and if you cut them very accurately in special shapes you can count the vibrations and use that to keep time. But "very accurately" here means a digital clock losing one second per year is ok, and that means it's losing about 32 nanoseconds per second, so if you want to know what nanosecond something happened at that's not good enough. Worse, piezoelectric fluctuations in quartz vary slightly with temperature, meaning your clock rate changes as the clock heats up or cools down so the gain/loss isn't _constant_ unless you put a heater under it to keep it at a constant temperature, and yes those exist. To get _nanosecond_ accuracy you need two nested ovens with special insulation. It's an off the shelf part, I think ours are from Germany. The ones in your phone get around this problem through a special technique called "not caring that much about accuracy". (They work out a correction factor for how many nanoseconds per second your clock is off by on average, and if that varies by 3 or 4 nanoseconds in any given second that's only a few feet, so close enough.)

Then there's the "a cartesian bear is a polar bear after coordinate translation" issue: the unknowns we're solving for are actually the distance between us and each satellite, and then there's a unique position that far away from each satellite. (Well, sometimes there's two, but one of them's way out in space. Two spheres intersecting have a circle of common points, three spheres give you two intersection circles which have two points in common, and then _four_ should narrow it down... but sometimes after you've set your clock and found your position you drop down to only 3 satellites you can see, and you can still make it work by knowing about where you were and what time it was before, so you're solving for fewer unknowns. Your height above sea level is usually the easy one.)

The position each satellite reports is relative to the center of the earth, so we know where that is relative to these points, and we know the planet's radius. (It's in our constant table, from the 1984 World Geodetic System.) So you work out the satellite positions and your distance from each one, then work out your position in X/Y/Z/T and _then_ you convert that into latitude and longitude using yet _more_ equations. (There's various supplementary material describing how to do a lot of this.)

Except there are TWO LATITUDES, and the one X/Y/Z converts easily to (assuming the earth is spherical, which it isn't) isn't the one mapmakers used hundreds of years ago when wandering out onto the land and sighting stars and such through an astrolabe to figure out how far north of the equator you are given a level horizon. The spinning keeps the earth reasonably spherical as far as longitude (east/west) is concerned, but the equatorial bulge from centrifugal force means that once you get north or south away from the equator much towards either pole, the horizon isn't quite level anymore. I.E. a line straight down from the "level" ground would miss the center of the earth by many miles, and a line up from the center of the earth at the same angle would miss _you_ by the same amount, and since that angle is your latitude you've got two coordinate systems to choose from: geocentric or geodetic latitude. And you convert from one to the other with a for loop (iteratively work out about how far off you were, work out how far _that_ was off, and keep going until the correction factor is small enough you stop caring). Using a constant value for how bulgy the earth is, which again isn't quite what reality's doing but at this point everybody just sighs and updates the map markings because they weren't THAT accurate before.
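
That for loop is short enough to show (the standard textbook iteration with the WGS84 constants; input point made up, and not our actual code):

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
    // WGS84 equatorial radius and eccentricity squared.
    double a = 6378137.0, e2 = 6.69437999014e-3;
    // Made-up earth centered coordinates, somewhere near the surface.
    double x = -576793.0, y = -5376363.0, z = 3372298.0;
    double p = sqrt(x*x + y*y), lat = atan2(z, p*(1-e2)), n, h, old;

    do {
      old = lat;
      n = a/sqrt(1 - e2*sin(lat)*sin(lat)); // radius of curvature here
      h = p/cos(lat) - n;                   // height above the ellipsoid
      lat = atan2(z, p*(1 - e2*n/(n+h)));   // improved guess
    } while (fabs(lat-old) > 1e-12);        // stop caring at micrometers

    printf("lat=%f lon=%f height=%fm\n",
           lat*(180/M_PI), atan2(y, x)*(180/M_PI), h);

    return 0;
  }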

All of the above isn't _how_ you make this work. It's _what_ we're trying to make work. The _how_ is DARN FIDDLY on top of all that.


June 19, 2017

Cut a toybox release, and then a mkroot release. Back on my head, by which I mean endless GPS.

For some reason working on GPS interferes with working on toybox. The mindsets conflict, to the point where if I bang on GPS for half an hour, I'm useless to work on toybox (full blown writer's block) for a couple days afterwards. I can force myself to fix specific bugs, but not make anything approaching a design decision. (I try to load the toybox context back into my head and it won't go, it's all "why did I... what was... I know I had a reason for that, where did I write it down..." Trying to work out the correct approach to take for any new code, NOTHING feels right.) And having multiple days where I DON'T have to do something GPS is pretty rare these days. (Even if I take a weekend off, Saturday and most of sunday I get nothing done, sunday evening I might do a little, and then it's back to GPS again.)

There are big things I very much want to do on toybox, but any time spent on toybox is NOT spent working on GPS, meaning I'm putting my startup out of business. I feel guilty for not making progress on toybox, and even more guilty spending any time on it at all.


June 17, 2017

Oh, if you're trying to run the i486 target in mkroot you'll need a current qemu. QEMU broke the ability of "qemu-system-i386 -cpu 486" to boot Linux a year ago, I reported it to qemu-devel last month, and the resulting fix went into the tree on the 31st.

Yes, mkroot has already inherited aboriginal's proud tradition of finding and fixing other people's bugs. That's what happens when you run consistency regression tests across multiple architectures, even of simple basic stuff. NOBODY TESTS THIS!

Part of the reason is the embedded Linux world has given up on upstream and uses its own very stale forks of everything, ala the entire Cortex-M Linux world being based off a fork of linux 2.6.33 and not even the _last_ release of uClibc. Yes, I poked Rich about it, but he's busy and nobody's wanted to sponsor the work. Because they have their stale uClibc, and are deploying it.

Nobody fixing/testing GUI stuff is its own can of worms, which reminds me I should really get a new version of my prototype and the fan club talk recorded and up somewhere. The University of Illinois at Chicago's "Flourish" is not a conference that reliably makes and posts recordings. (I presented there twice, and both times that was going to happen, then didn't.)

But seriously, i486 not working in qemu for a _year_? Sigh...


June 16, 2017

The help text parser is being stroppy. I lost hours tracking down a segfault that turned out to be in my debug printfs. (Which were on the wrong side of an if statement, so dereferencing a null pointer.)

Blah, I still have a todo item about qemu doing a weird thing. If I do "qemu -nographic blah | grep hello" it works fine from the command line, but doing the same in $(argument) hangs with SIGTTOU. I should track down why, but poking at qemu is unpleasant.

To start with, where does qemu keep its main()? In vl.c it's #defined to qemu_main() but grep -r says there isn't one. (This is why in toybox there's a main.c at the top level that ends with the main() function, and then I wrote up a walkthrough on the website. I can't think how to make it more obvious.)

People keep talking about how well tested open source is, but I keep hitting things where I'm clearly the first person to ever do what I'm trying. Peer review is being defeated by volume.

It reminds me of the way automobiles (the horseless carriage) were initially the _solution_ to a massive environmental problem: the sheer volume of horse manure. It was always a problem (the ubiquitous "road dust" you wore "traveling clothes" to protect against back then was dried horse manure, powdered by the passage of many feet/hooves/wheels until it went airborne). Eventually the sheer volume started rendering places like New York City unlivable in the 1890's. There were newspaper articles about how the city would have to be abandoned due to the stench and other health hazards, teams of full-time municipal street cleaners could barely keep up downtown and had nowhere to haul it off to, and the roads into and out of the city were nearly impassable. Then circa 1901 cars started showing up and the horses were all gone a dozen years later. Suddenly the city was clean and breathable! And then over the next few decades, the volume of automobile traffic grew to where the solution was now the new problem, previously negligible tailpipe emissions (nothing compared to horses) became a big problem once you had a hundred times as many cars as there had ever been horses...

Everything's only a solution up to a certain scale. Solving a bottleneck just reveals the next bottleneck, every time. "Our limiting factor is now THIS..."

(Meanwhile, I'm hoping that the exponential photovoltaic and battery growth curves do to the oil industry what the automobile did to horses, and more or less coincidentally cuts off the ecological disaster just in time again. The problem is, if we stopped emitting new CO2 _today_ we're already over 400 parts per million, meaning Georgia gets the climate Florida used to have, Minnesota gets Georgia's, and Texas develops the tradition of Siesta because it's too hot to go outside for half each day. And that still puts Florida underwater no matter what we do at this point, and makes Tokyo copy Holland. Wrestling _that_ back into the bottle's not happening in our lifetimes.)


June 15, 2017

In Dr. Who, one of the seldom-used aspects of the Tardis is its ability to pause the outside world. The 4th doctor used it to change clothes in his regeneration episode, although you could always just park it in deep space a million years in the past anyway. (Hey, time machine. Pop off, spend a year between galaxies, come back 5 minutes after you left.)

I suspect something like that would be the only way to get anything like a handle on my todo list. Of course for a human you'd need some kind of life extension technology too, because there's like 10 years of grinding without significant external input. (A read-only snapshot of the internet would be nice though.)

Meanwhile, I implemented a download thing in mkroot, by which I mean I came up with a hack. Now "mkroot.sh -d" downloads all the packages without building them, by calling sed on the modules to find their download invocations. (I know!) Still doesn't delete the old versions, though.


June 14, 2017

Building multiple targets with mkroot.sh is slow, and the main slowdown is re-extracting the kernel source tarball. Aboriginal Linux had the package cache to deal with this, which was always one of the big sources of complexity in that design. Heck, it was hard just to explain it.

That said, it did valuable things (speeding up multiple builds a lot, and making multiple parallel builds take much less disk space, disk cache, and I/O bandwidth), so doing without it is a bit of a pain. Redoing a simpler version of the package cache is tricky.

Right now there are no lifetime rules for source packages. The old aboriginal download() function would delete old tarballs after downloading new ones, this one doesn't. That's a problem going forward, when I add updated versions cruft accumulates and multiple versions might get confused for each other. Hmmm. I _had_ a solution to this, but it was kinda heavyweight and a bit awkward (it wrote a timestamp file at the start, then touched each file it confirmed was current, then did a find -newer and deleted everything that came up. Meaning the download couldn't be conditional, anything you skipped validating got deleted.)

I don't have a separate download.sh this time, and it would be hard to implement because the modules have their own packages in them. So I can't say "download everything but don't build it yet", let alone the EXTRACT_ALL=1 ./download.sh that populated the package cache under aboriginal linux.

So far, I'm not applying any patches to any of the packages I'm building. That's a design decision and one of the big simplifications in the new build. It's also kind of limiting, but for now I'm sticking with it. That makes half the reason for the package cache go away, and if the user wants to extract or git clone the directories themselves, they can. (I could add a helper script to do it for them, but an accumulation of helper scripts is clutter. It's already less obvious than I like that you call ./mkroot.sh to do the build. I suppose I could add a lot of hints and examples to the README, but that turning into a novel doesn't help either...)

Finding "simple" is a lot of work.


June 13, 2017

Ok, the theory of operation of config2help.c is:

1) read Config.in and populate struct symbol *sym linked list.

2) read .config and set "enabled" on corresponding *sym entries.

3) Loop through *sym entries finding ones with the same "usage:" line and collate those entries, adjusting whitespace, sorting -X options and combining usage lines.

The problem with two commands sharing the same block of text (ala top/iotop) is part 3, with the usage: lines. In ps.c I used "usage: * [blah]" and I never actually implemented that syntax.

It looks like I got the rest of it right, and the lifetime rules are that it doesn't remove stuff from old entries but copies it into new entries. (It mostly never frees stuff because it's a short-lived program, it's all freed when it exits.) So my first attempt at fixing this was to switch to "usage: iotop|top [blah]" and teach the plumbing to loop checking each name in a pipe-separated list and accepting any match. The problem is, the plumbing iterates through in _reverse_ order, so it's hitting the multiple entry first and taking that as the name to look for.
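
The name check itself is the easy part, something like this (hypothetical sketch, not the actual config2help.c code):

  #include <stdio.h>
  #include <string.h>

  // Does "name" match any entry in a pipe separated "list"?
  int name_in_list(char *name, char *list)
  {
    int len = strlen(name);

    while (*list) {
      char *end = strchr(list, '|');
      int ll = end ? end-list : strlen(list);

      if (ll == len && !strncmp(list, name, len)) return 1;
      list += ll+!!end;
    }

    return 0;
  }

  int main(void)
  {
    printf("%d %d\n", name_in_list("top", "iotop|top"),
           name_in_list("to", "iotop|top"));  // prints "1 0"

    return 0;
  }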


June 12, 2017

Ubuntu's grep has --include and --exclude so you don't have to do "find . -name blah | xargs grep" if you want a recursive wildcard search. This is simple to implement, but there are two problems:

1) There's no short option for --include. (All long options should have a corresponding short option unless you are _out_of_letters_, upper and lower case both.)

Given that ubuntu already defined an -I, I'm happy using -M for match and -S for skip. Ubuntu's grep (in 14.04) does not define those, and you can use the --include and --exclude long names for compatibility if you want to write scripts that run in both contexts. (Although I haven't currently mentioned the long versions in the help text for -M and -S because I haven't got enough space in two columns. :P )

2) In Ubuntu you still have to say -r, which means "grep blah -M '*.c'" will hang despite seeming, otherwise, really obvious.

Fixing the second one's trickier, because you can go:

find . | xargs grep --exclude '*.c'

and that grep won't -r. So there _might_ be a reason for grep not to imply -r, although I'm not sure it's a real use case? (And you can find -type f if it is.)

My first guess was to have a variant of the "grep -r pattern" no-file period adding logic add the -r when you don't specify what to search with -M, so "grep -M '*.c' pattern" with no files would automatically -r. Except that "grep -M '*.c' pattern dirname" would then return immediately with no results (because no -r). Which is nonintuitive.

Possibly -M should imply -r but -S shouldn't? Hmmm...

Part of the issue is toybox grep doesn't complain when told to look at a directory without -r. (In part because find . without -type f produced a lot of noise...) Maybe it should? I suppose I could just require people to say "grep -rM '*.c' pattern"...


June 11, 2017

A reminder that when the SFLC said it was fine to relicense OpenSSL, the openbsd developers floated a proposal to use the same process to relicense gcc 4.2.1 from GPLv2 to ISC.

Still waiting to see how that turns out...

(When I relicensed toybox to 0BSD I only had a half-dozen external developers, and I yanked the code of the one I couldn't contact even though he probably would have been fine with it. "Breaks closed" means the default answer is no.)


June 10, 2017

My ongoing solar research continues. Hard to stop it from getting a bit repetitive, but it's a bit like following microprocessors back in the 80's and internet back in the 90's. It's this THING and it's gonna be HUGE and it's gonna change EVERYTHING and blah blah blah. (Part of the lack of more widespread excitement is human nature: humans continue to be TERRIBLE at comprehending exponential growth. We keep looking at a history of doublings and then projecting forward linearly from wherever we are now, and that's NOT HOW IT WORKS. I grew up with Moore's Law: trust me, this is a thing.)

Some of the updates are things like Tony Seba gave his talk to the Colorado Renewable Energy Society guys. There's some new material, but the majority is the same talk as the last 6 times he's given it. (It's a book tour, S.O.P. Yes, same guy behind the three video series I point everybody at as a good starting point to get up to speed. Plus Amory Lovins' excellent info buried under random animal noises and sound effects.)

Several months back Paul Krugman tweeted a link to an excellent chart showing 80% of Russia's export revenue came from oil and natural gas, but I can't find where I bookmarked it and twitter only scrolls back so far. The closest I'm finding now is figures from 2013 showing that back then 68% of Russia's export revenue came from oil and natural gas. (Ok, 70% to 80% in 4 years isn't a huge stretch, it implies the rest of their export economy's collapsing, but it would be nice to have current figures.) Still, if you're wondering why they hijacked our election to put the climate change deniers in charge, they're as economically petrochemical dependent as Saudi Arabia. (Hence the pipelines to nowhere article from last time.)

The exponential growth of solar technology seems accepted as a given at this point (and has its own version of Moore's Law), what's been missing is batteries. Installing more than a certain amount of solar results in "curtailment", I.E. unplugging the panels and wasting the electricity they produce when existing grid demand can't immediately use it. People have been talking about using the surplus electricity to generate storable fuel, but realistic ways to make liquid fuels aren't done yet (note: if you take carbon out of the air to make fuel, burning that fuel just puts that carbon back and the whole cycle is "carbon neutral", it's when you dig it up out of the GROUND that burning it adds carbon to the atmosphere that wasn't there before). And although hydrogen fuel cells are efficient and pollution free (the output is electricity and water, the inputs are hydrogen and oxygen gasses, and you don't need to carry the oxygen with you so the power/weight ratio is similar to other combustible fuels that leverage oxygen's tendency to be available wherever you are on the surface of Earth), _making_ hydrogen is inefficient in a bunch of ways (half the energy from electrolysis is lost as heat and it leaks out of just about anything you try to store and transport it in), so the real excitement is around batteries.

Battery prices have come down 80% in the past 6 years, and we can expect about the same over the next 6, meaning at the start of 2023 we'd see 1/5 the price we saw at the start of this year. Pairing batteries with solar panels solves the "duck curve" issue, where solar electrical production drives other power generation needs down near zero during the day (or even below zero, hence curtailment), but then they have to suddenly ramp them up again in the evening as the sun sets and people get home from work, and then keep a lower level running until morning. The ability to store even a few hours of the solar farm's output deals with the ramp-up problem and lets you avoid expensive peaker plants, and 24 hours of storage would let solar power the grid all by itself. (Probably plus wind.)

It's not just price, other big advances have been in lifespan of the technology. Back in the 1970's solar cells used to degrade to uselessness within 5 years. These days the lifespan metric is "how long until they only produce 80% as much power", and the common answer seems to be "around 40 years" and it's getting longer all the time. (One problem is the rooftops they're mounted on need replacing after 40 years.) Battery lifespans are measured by charge/discharge cycle count, and according to Apple a battery that was good for only 300 cycles in 2008 is good for 1000 cycles today (and these days they seem to define needing replacement with the same 80% of original capacity metric). Batteries that don't have to be portable can last much longer.

Laptops funded most battery R&D in the 1990's, cell phones took over in the 2000s, electric cars have been driving development in the 2010's (pun noticed and shrugged over), and next decade it's utility storage and home powerwalls. We're already seeing the first few large utility-scale battery deployments in hawaii, australia, india, china, california... Tesla's advertising this a lot, but samsung's actually _doing_ it, as are germany and china and so on.

We've been getting better battery chemistry (such as Nissan's heat-resistant "lizard battery"), better charging/discharging circuitry, liquid cooling, graphene and glass membranes, and new tricks every year. Lithium still dominates batteries the way silicon dominates solar cells, but people are trying other stuff. Perovskites are a material you _paint_ on surfaces to make a solar cell (at room temperature and everything), with the possibility of tuning it to absorb infrared or ultraviolet while letting visible frequencies through, so you could have transparent solar cells turning building windows and tablet screens into power collection surfaces.

Meanwhile, a competitor to lithium I found particularly interesting was the revival of century-old Nickel-Iron battery chemistry. I've known about this for a while but bumped into it again recently when a small colorado company called Iron Edison gave a talk at CRES about their battery technology.

As the name implies they specialize in the Nickel-Iron technology known as "the edison battery" after Thomas Edison bought the patent from the original inventor Waldemar Jungner in 1899, and then tried to pretend he'd invented it. (He really was the Bill Gates or Elon Musk of his day, gaining a reputation as an inventor by pouring money and marketing behind other people's ideas, then taking credit for what he'd commercialized.)

This little colorado company is selling Ni-Fe cheaper than lithium: 100 amp hours at 12 volts for $970, or 200 amp hours at 48 volts for $7760, vs $9920 for their lithium solution. (Although they've got a hybrid lithium/iron chemistry too, which sounds new.) I'm a bit confused by their website: 48 volts needs 40 cells, so how does amp-hour capacity _not_ change going from 12 volts to 48 volts if 100 amp-hours is the smallest they sell? Wouldn't they have a version with fewer cells and thus lower voltage?

I was really excited about this technology when I first heard about it, because these batteries NEVER WEAR OUT. The "thirty to forty year time horizon" people quote is how long it takes things like tables and bookshelves to wear out. You put a physical object in a building, you tend to replace it after 40 years or so no matter what it is unless it's a historical site with a preservation budget to restore things. There are Nickel-Iron batteries from Edison's day still in use.

But this chemistry has some downsides: the batteries are really heavy (ala lead/acid) so they didn't get used for portable applications where power to weight ratio was important. And like lead/acid they use a water-based electrolyte you have to top up periodically, but the reason to top it up isn't just evaporation but electrolysis. The current flowing through the battery breaks down the water into hydrogen and oxygen, which is vented into the atmosphere. (The overcharging failure mode is venting a _lot_ of hydrogen, when extra electricity goes into the water with no other reaction to consume it.) This hydrogen is why you can't seal them like most modern car batteries.

The hydrogen advocates will immediately go "yay hydrogen", but it's still inefficient (as with all electrolysis the rest of the energy becomes heat, which does evaporate the water), and hydrogen is still a tiny molecule that osmotically leaks through just about everything. Current Ni-Fe batteries don't even try to capture it; even if you could, doubling the system's complexity for an extra 5% or 10% efficiency is near-pointless.

My real problem with leaking hydrogen isn't lost efficiency: hydrogen is as bad for the ozone layer as freon ever was, and if any does make it up past there it's so light it floats away into space and is permanently lost by the planet. Any energy technology that permanently reduces the amount of water the planet has, as a continuous side effect of its operation, makes me uncomfortable when people talk about scaling it up to 1/6 of the total economy long-term. (That's not my preferred solution to rising oceans. Water isn't lost to space because it has a molecular weight of 18; hydrogen gas has a molecular weight of 2. Fun fact: you could give the moon a 1 ATM atmosphere of Sulfur Hexafluoride if you wanted to. It's 1/6 the gravity, but the molecule's about 5 times heavier than air.)

The big initial use of Ni-Fe battery chemistry was in fully electric cars; the gasoline car people seem to have switched to lead/acid batteries to avoid Edison's patents, the same way all the movie people moved to the other end of the country from Edison's New Jersey offices to start Hollywood in California where he couldn't easily sue them. The reason patents were _invented_ was to encourage disclosure so secrets weren't lost when people died; everything else about them is a massive drag on progress. Places like China that ignore patents out-develop other countries _fast_. So yeah, the first time around Ni-Fe technology was killed off by patents, and by the time they expired it was old news people forgot about.

These days power to weight ratio still rules electric cars, so they're lithium all the way (element #3, atomic weight 7, lighter than aluminum or carbon fiber). They want to suck up as much juice as possible for maximum range, and people have been worried about the charging demand electric cars would add to the grid, let alone whether an average suburban home's rooftop has enough square feet to charge two cars (spoiler: yeah, just about, but there's work to do to get there). How do you charge your car overnight after the sun goes down? You'd have to fill up a battery and then drain that battery into your car's battery...

Except that's not how they'll do it. Tesla (Edison Du Jour) demonstrated its 90 second swap-out battery back in 2013. You charge the batteries when they're not in the car, then "refueling" is swapping out the physical battery for one that's already charged. Sufficiently rural areas may convert gas stations to do this, but the business model urban and suburban areas are switching to is app-summonable self-driving subscription fleets. (Basically really cheap taxis that come at the press of a button on your phone, and cost less per mile than gas does now, maybe even a flat monthly fee in-area. Not only do you not have to buy or lease a car, but there's no insurance, annual registration, license, oil changes, repairs... In a decade or two owning your own car will be like digging your own well and septic tank instead of city water, only a thing way out in the country.)

Self-driving fleet cars will have battery replacement done by the robot arm version of an Indy 500 pit crew servicing a racecar: between customers the car can drive itself up a ramp and the battery's swapped out when it drives off the other end. This means they can have a stockpile of spare batteries charging all day, to be used later. Set up your own solar farm at the edge of town where land's cheap and pile the batteries on an 18 wheeler overnight to go to a distribution depot in town. (Easier than laying cables. The big technology revolution before microprocessors was containerization, which really did transform the world in a fundamental way, and finding new ways to leverage that remains a common transformative business model today.)

A few years ago the european union passed regulations requiring all new buildings to be zero energy by the end of 2020 (meaning they generate their own power via rooftop solar; most of the gains would come from efficiency increases of the type Amory Lovins specializes in). But the reason you don't hear about it is that recently random lobbying seems to have gotten the weasel word "nearly" inserted into the directive, rendering it basically useless. Still, like everybody they're far ahead of the USA, which has been rapidly turning into a technological backwater since the Cheney administration. Sigh.


June 5, 2017

Start of the second week of Japanese class and I am so far behind already. It's a compressed summer schedule (6 weeks instead of 14) and I still don't have the books.

Fade ordered the textbook, since she's got amazon prime, and had it sent to her primary amazon mailing address, I.E. her Minneapolis dorm room. Which we noticed when it arrived, and had to return and re-order.

The used workbook I bought essentially starts at page 45. (It has almost all its pages but somebody tore out the hiragana exercises at the start, which are the part we're doing now.)

I'm glad I don't care about my grade in this course (I want to learn japanese, not get course credit), but you need to keep up in order to benefit from the new lessons. Especially when the teacher starts writing stuff in Hiragana using the half of that alphabet we're already supposed to know...


June 4, 2017

I have Fade's cold. Did not get out to see Wonder Woman on opening weekend.

Qemu commit 143021b26ffe is making a couple of instructions sh4a-specific. In theory the technique for adding sh2 and sh3 targets (and j2) wouldn't be that different...

*shrug* Threw it on the todo list...


June 3, 2017

If you take half of each day out for Japanese class, and then have standard $DAYJOB and open source project bug reports and email fielding, there's not a lot of energy left for programming. :)

Still, trying to get a mkroot release out. I think I've enabled all the targets I'm going to this time, the table looks like:

                boot    clock   disk    net
aarch64         x       x       x       x
armv5l          x       x       x       x
armv7l          x       x       x       x
armv7m
armv7r
i486            x       x       x       x
i686            x       x       x       x
microblaze
mips            x       x       x       x
mips64
mipsel          x       x       x       x
powerpc         x       x       x       x
powerpc64
s390x           x       x       x       x
sh2eb
sh4             *               x       x
x86_64          x       x       x       x

I.E. you can boot to a shell prompt that works, have the clock set to current time (without which "make" gets deeply unhappy), have a working block device and a working network card under qemu on the following platforms: arm64 armv5l armv7l i486 i686 mips mipsel powerpc s390x x86-64.

Microblaze's defconfig doesn't seem to have a working elf loader, I dunno if that's a musl-cross-make toolchain issue, kernel config, or qemu yet. There's also the "nommu/with mmu versions have the same short name" issue I mentioned earlier, it's a todo item.

I punted armv7m and armv7r because they're not fully supported in musl-cross-make, the vanilla kernel, and qemu yet. Musl hasn't added cortex-m fdpic support because the gcc fdpic support patch is still out of tree (so the toolchain is static pie instead), and the kernel defconfigs and qemu board emulations don't overlap for cortex m (qemu can emulate arduino-style boards with 256k of ram that can't run linux; maybe I can plug it into the "-M virt" board next time?). Armv7m is a nommu board that only has the arm thumb2 instruction set, and lots of people seem to use it. Armv7r is a nommu board with the conventional armv7l instruction set and I've so far only encountered two people interested in it. Both targets remain on the todo list but I'm not holding up the release for them.

Mips64 turned out to be broken in musl: there was a type size wrong in the stat structure, so things like "ls" don't work. This is a big enough issue (the kernel API to list directories returns nonsense) that I punted the arch until the next musl release. (The upstream patch is here if you want to test it yourself.)

I just haven't gotten around to 64 bit powerpc yet. I'm unaware of anything wrong with it, just haven't put in the time yet. Only IBM seems to be using it, and IBM is dying fast. (I want to get Alpha and with-mmu m68k working someday so that's not a blocker, just... not a priority.)

The sh2eb target hasn't got qemu emulation, that's for j-core boards (turtle and numato). I haven't merged the kernel build yet because it's the only one _not_ targeting qemu and I'm not sure how I want to handle that yet. (Todo for next release.)

The * on sh4 means that if you make the kernel change mentioned here and use a current qemu with this patch, _then_ it should work. The first problem is the kernel guys recently enabled a serial buffer thing that qemu doesn't emulate properly, which means the "serial data has arrived" interrupt only triggers every 16 characters or so (when the buffer fills up; there's no timeout). The second problem is that qemu didn't shut down right. The THIRD problem is that's the only target that hasn't got a working clock; there are two clock drivers commented out in module/kernel for sh4 and neither of them actually gets data from qemu (despite one being in the r2d defconfig in the kernel).

The correct fix for all of that would be changes to qemu-system-sh4, but I don't really have much contact with those guys. Changing that project to depend on libgnugnugnugnugnuglib was so dumb I largely stopped paying attention to the development side after that. It's another "this got useful enough to large corporations for them to assign bureaucrats to staff it", and there's no trace of hobbyist left. Building the right thing gives way to procedures and certification.


May 30, 2017

First day of Japanese class at ACC. I'm not officially registered yet, need to email administrative higher-ups and get permission to join a full class. Then I need to get the book and the workbook, which could be a bit of an issue since there's no bookstore on the highland campus. (2/3 of the mall hasn't been digested yet. They say they're making dorms out of some of it.)

Poking at mkroot in the afternoon, the s390x target doesn't have a network card? And the defconfig won't run. How did I... ah right, switch out the machine type symbol for a more primitive processor without this forest of unsupported capability bits that QEMU doesn't implement. (You have runtime probes for what your system supports, but no ability to NOT USE THEM at runtime. So what does the probe accomplish, exactly? Sigh...)


May 29, 2017

I attempted to add distcc to mkroot, but the current releases of distcc moved to github, which means their source tarball releases are just a snapshot of the github repo, which means they didn't run autoconf to create a configure script, which means autoconf is now a build-time dependency for them, and screw that. It's literally easier for me to implement a simple distcc in toybox than make the current nonsense build. (Especially since I spent a longish time maintaining my own toybox fork and doing ccwrap.c for aboriginal linux, meaning I know more about parsing gcc command lines than is probably healthy.)

Heck, if I start from ccwrap and don't care about compatibility with the old distcc protocol but just do something trivial glued to netcat's server mode, a distcc client/server pair is probably a weekend's work.
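Something in this vein, say (a sketch only: ncat is nmap's netcat, openbsd netcat's -N does the half-close, the paths are made up, there's no locking on the /tmp files, and a real version would be C plumbing inside toybox rather than shell):

# Server: each connection reads preprocessed source from the socket,
# compiles it, and writes the object file back out the same socket.
ncat -lk 9000 -c 'cat > /tmp/in.i && cc -x cpp-output -c /tmp/in.i -o /tmp/out.o && cat /tmp/out.o'

# Client: preprocess locally, ship it off, collect the result.
cc -E hello.c | nc -N server 9000 > hello.o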

But not _this_ weekend...


May 28, 2017

Thunderbird is open source, but I can't easily build it from scratch. This means it's not USEFULLY open source.

I HATE the way it collapses the "Reply All" and "Reply List" buttons together into something that randomly switches back and forth between them, so whenever I'm distracted I wind up doing the wrong one. (I usually want Reply All. I would delete Reply List if I had the ability, but it doesn't give me an orthogonal button, it only gives me the Reply All functionality as part of Gratuitous Bundling of the thing I want glued to crap I don't want.)

But as annoying as it is, it's not annoying enough for me to chop it out of ubuntu's package manager, build my own version from source (with giant dependency tree and who knows what invocations), chop my way through the code to figure out how to change this UI thing, argue with the upstream mailing list that my change is worth integrating, instruct the package manager not to replace the version I built for the year or two it takes the upstream change to percolate into the version ubuntu's using...

It's just not worth it. If it gets so bad it's unusable, I either write my own from scratch (ha) or look around for an alternative package (ala my year or so using balsa a while back).


May 27, 2017

I added cross.sh to mkroot, and if you run it with no arguments, it gives the mcm-buildall architecture list:

aarch64 armv5l armv7l armv7m armv7r i486 i686 microblaze mips mips64 mipsel powerpc powerpc64 s390x sh2eb sh4 x86_64

The arm64, armv7l, s390x, i686, and x86-64 targets are all testable now and should work with the full set of hardware under qemu (boot to shell prompt, disk, network, clock, shutdown). I'm working on filling out and testing the rest.

The armv7m and armv7r targets are nommu: m is thumb2 instruction set only, and r is conventional 32 bit arm instructions without an mmu. Last I checked, armv7m wasn't quite upstream in qemu and the vanilla kernel (cortex-m linux is available from emcraft in moscow, but they've never tried to push anything upstream; linux had a config for a board qemu doesn't emulate, and qemu had a board emulation with 256 kilobytes of ram). Support for armv7r was worse (I first heard about someone speculating about _adding_ Linux support for it in a hallway conversation at ELC in either 2013 or 2015), but maybe it's changed since?

I found a mips64 bug (stat layout is wrong so ls prints nuts info) which Rich has a musl patch for but that needs to go into musl-cross-make. I asked Rich and he said to bump that arch to next release.

Microblaze is funky (doesn't wanna run ELF?), still need to debug why. I also have the problem that there's a with-mmu and a no-mmu version of microblaze and they don't have distinguishing prefixes, so I'm not sure how to tell them apart. (Right now the build is sorting by $PREFIX-* in the toolchain name, which was unique per architecture up until now. There's an fdpic suffix, but how that maps to a short name... Need to add plumbing I guess.)


May 20, 2017

For years Android's had an obvious bug where plugging headphones or earbuds into the headphone jack produces spurious control signals (there are no buttons on these earbuds!), and there's no way to tell it to ignore them. I've dealt with this since about 2012 (headphones du jour randomly starting/stopping song playing while I'm listening to podcasts). The current iteration is two different sets of earbuds popping up a full-screen "listening..." every 30 seconds while I'm trying to watch youtube videos. (Yes, I've disabled "ok google" everywhere I can in the settings.)

There are complaints online about this going back years. There are of course apps to disable it, the highest ranked of which on the google play store says it was broken by the upgrade to Android Jellybean (the one _before_ kitkat, which came before lollipop, marshmallow, and nougat. Yeah, it's alphabetical.) It links to what it claims is a bug report, but which is instead a Google login screen. (If you have to login to _view_ the bug, it's not real.)

Note to self: if I start doing podcasts, bundling is a topic. It's what Microsoft built its business on (then got into antitrust trouble for), and it's what Android is doing with this OK google crap. It's a "feature" I don't _want_ and can't _disable_. I have no control over this without rooting my phone and going deep down the modding rathole.


May 16, 2017

For posterity, here's that chunk of photovoltaic research I posted to the svlug list (half of which is the same links as last time; I have a bunch more, but this is still the best introduction):

At my dayjob we're making better synchrophasors and hooking them up to the internet.

Electrical grids were designed around the idea of centralized generation from which power flows in one direction to consumers, so you only had to measure it at the generators and maybe the substations. But now we've got solar and wind feeding power back in at the edges, and once you go above around 3% on that the voltages go out of spec and people's electronics get unhappy. So we're retrofitting the grid with sensors so the whole thing can switch over to 100% solar and wind over the next decade. Combine this with batteries and your more optimistic forecasters expect peakers to go away around 2020 and 100% renewable base power by 2030.

The best summary of the state of things is probably a series of three related videos by and about a stanford business professor named Tony Seba. The first video is a class he taught on the energy industry in 2013, then a book talk he gave last year, then that book talk was analyzed by a mutual fund manager in india earlier this year.

Another informative speaker is Amory Lovins, who started on the environmental side of things and became an expert on the technology. I find him really annoying to watch (the first minute and change of this ted talk gives you the idea), but when he's not patronizing his audience, oinking like a pig, or introducing loud clangs into quiet speech, he's got really good info. If you're up for it, this talk is very informative and worth gritting your teeth through.

I could give dozens more links but that trilogy of Tony Seba videos is probably the best starting place. If you want to dig for yourself, about a third of the ones on the Colorado Renewable Energy Society's youtube channel are good (the rest are environmentalism, not technology).

Anyway, the optimistic people are probably right about "renewable energy" adoption, because humans never forecast exponential growth accurately. Moore's Law's been replaced by Swanson's Law: solar panels were $76.67/watt in 1977 and $0.36/watt in 2014, a curve that's hard for humans to keep up with, and it seems to be _accelerating_. There's a similar price decline curve for battery technology too, although it started later. (Driven by laptops, then cell phones, and now electric cars and home battery walls are ramping up.) The analogies people keep making are to cars displacing horses (about a decade from 1% to 99%), digital cameras displacing film (ditto), analog to digital phones (cell and voip)... There are one time conversion costs and then the new stuff is _waaaaay_ cheaper.

Speaking of Godwin's Law, did you notice how the vast majority of Russia's export income is from oil and natural gas (without which they basically can't even feed themselves), and the CEO of Exxon was happy to divest himself of his stock assets to become Secretary of State? (Ordinarily a CEO selling all his stock in the company is considered a bad sign, but he found a way.) There's an excellent article explaining that, and why the new administration's #1 priority before ANYTHING else was pushing through the Dakota Access Pipeline.

So yes, the fossil fuel industry's 1/6 of the economy, it's set to dry up and blow away over the next decade, and it's taking the calm dignified approach the RIAA and MPAA did when confronted with Napster. (I.E. they flipped out and did everything short of send assassins for several years.) Add in the GOP's Southern Strategy driving that party outright psychotic and it's "interesting times" indeed. The plutocrats the GOP nominally serves are drowning, and trying to squeeze all the cash they can out of their stranded assets before the loss of value is recognized and their stocks implode. (The coal industry's already lost 99% of its value. It seems to go solid/liquid/gas: oil is next, natural gas after that. Probably something to do with the energy density or ease of mining/storage/transportation?)

The point is, putting solar panels on your roof isn't just good for the environment, and isn't just a good financial move, it's also the most direct financial strike you can make against the current Republican party and the oligarchs supporting it.


May 15, 2017

Driving back to Austin today, with Fade (and Adverb). Summer vacation! She's got one more meeting and then we can head out in the afternoon.

People on twitter periodically complain about video-only tutorials; I'm noticing this now as I prepare to do videos myself. None of the people complaining ever read my blog, so the text I've been writing here for over a decade is lost on them; they just notice the popular, highly shared videos that many, many, many more people watch, and presumably lament their existence.

*shrug* My motivation is "lemme just show you my screen as I do a thing, and walk you through it explaining as I go". I'm aware it's less keyword searchable and not likely to age as well, but in terms of getting the info out there to a large audience today, television became a major industry for a reason.

I suppose I should break some of my longer writeups out of random mailing list posts and blog entries and into separate papers, but I've done that. Some even got picked up by magazines... which then went defunct. (I got interviewed by Linux Luddites twice (episodes 11 and 88), and that podcast ended at the end of 2016.) I used to write for three different Motley Fool columns that no longer exist (I was one of several rotating authors for each, but the sections went away; you can still dig a few up if you try, but my authorship info seems to have gotten lost in a database migration).

All that's no stranger than anything else on the internet going down; the most ironic example is the article on institutional memory loss going down at its original location and only being available on archive.org now. This is why I mirrored the computer history articles I was using for research back when I thought I'd have time to write a book on that topic. :)

But it does mean if I want to produce new material, properly composed and edited and organized to be vaguely coherent to other people... well Youtube has the advantage that the first video ever uploaded to the thing is still there. (Stuff goes down for stupid copyright reasons all the time, but it doesn't _expire_.)

Part of it's my recent usage patterns. I've been watching a lot of video on my phone because I have it with me. Half the time I just listen and glance at the screen every few seconds. Trying to read on the phone screen is less convenient (especially while walking), it's ok for twitter but less so for long texts...

And then there's the "how do you find new content" question. Once upon a time there were RSS feeds, but Google killed its reader and the open source community's been worse about those than it's been about email clients. Social indexing goes back to slashdot and livejournal, these days it's twitter but that's a lot more time consuming than livejournal or slashdot used to be, and these days I skip reading my feed for days at a time when I just don't feel up to hearing about the new horrors du jour. (And it goes out of its way to make sure catching up on what you missed is a huge pain.)

Youtube's video suggestions are annoying and sturgeon's law writ large (vast swaths of crap), but I do find interesting new things there when I dunno what I'm specifically looking for.


May 14, 2017

Collated all the solar power research (or at least URLs and text, not files) I did from the big machine, so it's all on the netbook now. I still need to grab a gazillion "watch later" bookmarks off my phone, although there's a lot of overlap. I threw a little of it into a post to the ongoing j-core thread on the svlug list, but there's a good hour-long talk here. Alas it's an area I'm not a recognized domain expert in, so I don't get to _give_ that talk anywhere. (And most of it's "go read/watch this, then this, then this..." anyway.)

A couple weeks back Elliott gave an Android NDK status update (about building toybox with it), which linked to info about the nightly builds, which links to a login screen, which is where I stop. Any process that requires a login to download the code is not open source, and I'll wait until they do a release I can download without a login.

It's a pity, I'd love an llvm toolchain that links code against bionic to test toybox against, but bionic doesn't have makefiles so you can only build it as part of the AOSP hairball, at least within the limits of my interest in going down that particular rathole. That leaves me downloading binaries other people built, and those are only openly available for the release versions.

Way back at Flourish in 2010 I gave a talk about how open source actually works (project gravity and barriers to interest), but they screwed up the recording. I should redo that talk. (I keep trying to give better versions of talks I already gave at Texas LinuxFest, hoping to get a recording I can point people at. But so far, my txlf talks are... sad and underprepared. Something about a local conference I don't travel for tends to result in everything getting pushed to the last minute. Ok, the time I missed a bus stop and gave myself heatstroke walking to the venue didn't help.)

Speaking of redoing talks, Fade ordered a $12 lapel mike off amazon. After watching several youtube videos I came to the conclusion that if I ever have to care about the difference between condenser and non-condenser microphones I'll just use the laptop's built-in microphone, and teaching VLC to listen to a USB input is not a can of worms I want to open. But every time I've been "miked" at a conference they just clipped a thing to my shirt collar, and I can order one of _those_, stick it in the microphone jack, and call it good. So I'm trying that. (She sent it to Austin, and I'm still in Minneapolis, so it's a ways off.)


May 13, 2017

Visited my sister and the niecephews. They are all getting larger (except my sister, and Ian who is 17 and may be fully tall at this point). Sam's going to be dangerous in a few years. Sean was mostly out with a friend. I watched Carrie fight the ender dragon in creative mode, sit through 15 minutes of fairly pointless end credits to leave, go back in to see if there was another ender dragon (there wasn't), and then work out you can hit escape to skip the end credits when it tried to play them again when she left again.


May 12, 2017

What did I do today...

Still not sure how to deal with Andrew Morton's email (about my patch for using devtmpfs automount with initramfs). What does he want?

In mkroot I wrote an email about HOST_EXTRA=, updated README to mention the mailing list, unquoted $HANDOFF to support multiple arguments, and tried to write a checkin comment showing the syntax for using multiple qemu -append instances, only to find out doing that doesn't work, so I emailed the qemu list. (They said -append doesn't append, it supplies a single set of arguments. The kernel might or might not append them to built-in arguments, but that's the kernel for you. So why is it called append then? No idea...)
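I.E. the checkin comment should just show one quoted blob (going by what they told me; apparently a second -append replaces the first rather than adding to it):

# Doesn't work: only the last -append survives.
qemu-system-arm -M virt -nographic -kernel zImage -append "console=ttyAMA0" -append "panic=1"

# Works: everything in one quoted string.
qemu-system-arm -M virt -nographic -kernel zImage -append "console=ttyAMA0 panic=1"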

I should reply to scsijon's mkroot email about future directions for the project, and post to j-core about turtle orders/pricing...

The guy who keeps trying to turn toybox into a shared library is back, only this time he didn't say that's what he was trying to do until message #5 in the email thread. (If you statically link something that does malloc() and then link it against something else that does malloc() you now have two instances of the heap, and as soon as you allocate from one and free that memory in the other you've got corruption. This is why statically linking shared libraries is dangerous, and thus why libraries depend on other libraries at runtime.)

Ongoing linux-kernel thread about my linux-fullhist tarball, which is actually somebody else's project I mirrored/refreshed when their page went down. (If somebody's looking for a standard for git commit numbers older than Linus's tree I'd much rather they use the existing thing that's been stable for the past decade than invent some new half-assed thing based on bitkeeper.)

Closed the window with the Linux Plumber's Conference call for papers, since the extended deadline of the 11th passed yesterday, and I really don't care about volunteering my time to make money for the Linux Foundation. I should do an updated writeup about the linux foundation, since people showed interest in the last one, and there have been several developments since then.

(A change of scenery is really helpful to my productivity. And Minneapolis doesn't have multiple cats trying to climb onto my keyboard and/or shoulder while I'm typing.)


May 11, 2017

The phone repair part arrived! (I dropped it twice over the past month, smashing each end of it. Still works fine in the middle, but there are sharp edges, and the proximity sensor is jammed so every time I make a call the screen blanks as soon as it connects and you can't use any of the buttons.) I switched on "power button ends call" in settings, but I can't dial the passcode for the daily conference call on this phone.

I called around to repair places and they wanted over $200, but the part itself is only $40 and there are youtube videos on how to do it, so...

I followed a youtube video's instructions on how to install the new part. (Which boils down to "take apart old phone like this, move this list of parts to new front panel, reconnect wires, put screws back in, snap on back panel".)

The big missing instruction is that lots of stuff is glued together. The guy in the video had a large hair dryer (heat gun?) that I didn't have. The battery especially broke two of the plastic tools over the course of an hour before finally coming out. (I should have ordered a new battery while I was at it; the old one's got greatly reduced capacity at this point. Removing the battery is like the first 1/3 of the disassembly process.)

The front panel turns out to be about half the phone's thickness, containing the display, touchpad, front glass, and most of the side case. About half the remaining space is battery. The motherboard (the top 1/3 above the battery) is tiny, connected to the even smaller bottom circuit board (below the battery) by two long (antenna) wires and a pair of ribbon cables (one above, one below the battery). Another ribbon cable plugs the top board into the battery, and two more ribbon cables go from the top board to devices plugged into the front panel (the front camera and I think the proximity sensor?). I had to move the camera, power and volume buttons, two little square rubber earhorns that funnel sound into the top and bottom microphones, and possibly a couple other things I don't remember. All of them were fiddly.

That funky slot I thought was an sdcard is where the sim card goes! In a tray that would be really clever if it wasn't upside down, so the sim card falls out when you open it. I somehow got the top motherboard out without noticing, but couldn't put it back in without removing and replacing the tray.

I did not drop any of the ten tiny screws! I don't think I stripped any either, but it was close. The back panel does not want to snap back on. It's got like 8 clips, of which maybe 5 are working. I wonder if I can get a new back panel cheaply?

I'm used to this phone, reasonably comfortable with it, and don't want to lose my apps and bookmarks. (Twitter app user tokens, game progress, etc.) Given that I've named every mandatory google account they make you create when setting up a phone "goawaynow" with numbers after it, I have yet to migrate one to a new phone. It may have a password, who knows? I took it to the T-mobile guys to ask if they could do it, and they said Google intended for me to do this and who was I to not go along the path Google laid out for me, for shame. (Yeah yeah, maybe I can fire up adb, copy the filesystem contents, and copy it to the new phone? That sounds easier. But I'd probably have to root it first, and that usually involves a complete reflash. Hmmm...)
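(For reference, the adb route would be something like the following; a sketch, since the pull needs root and backup lets apps opt out:)

adb backup -apk -shared -all -f phone-backup.ab
adb root                      # only works on rooted/userdebug builds
adb pull /data/data appdata/  # app private data, hence the root requirement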

I also really like this metal poky thing out of the toolkit. I have no idea what it's called or I'd order another dozen of them; it's basically a flat metal toothpick attached to a flat metal ring, and it's SO USEFUL for doing this kind of fiddly work.


May 10, 2017

Ah, Elliott clarified that he _isn't_ going to plumber's this year. Their track wiki imported last year's page as a starter, and then never got updated to remove people who weren't returning.

In that email exchange he mentioned the pending list, and the first item on it was chrt.c, so I started looking at cleaning that up. It's not a command I use much myself, but it seems potentially tractable with a bit of research...

Elliott also sent a date %N patch, which I'm pondering. The problem is libc has this thing called "struct tm", which is time broken down into individual fields (year, month, date, hour, minute, second, weekday, timezone, etc). The strftime() function converts struct tm into a string representation, using a printf-like pattern with a bunch of its own escapes to say "put two digit century here" or "abbreviated month name according to local locale". Quite a pain to replicate this yourself, and not really feasible once you take localization into account.

But there's no nanoseconds field in struct tm; it predates computers being fast enough to care about that. You get whole seconds and that's it. These days lots of things care about fractions of a second, so ubuntu's date added a %N escape, which it parses and replaces before passing the rest on to libc's strftime(). And oh, there be dragons there: %%N is a literal "%N" in the output, the other strftime escapes do the printf escape trick of "%34N" padding out the result, and you probably don't want "% %Ns" turning into "% 123456789s". Luckily I factored next_printf() out of seq's insanitize() back when I wrote it, so I've got a function that parses user-supplied printf strings already...

I'm only adding %N to the output side, though. You can set nanoseconds with @UNIXTIME.NANOSECONDS already, and making the parsing work on the input side is noticeably harder (because you're not modifying the format string, you're cherry picking data from the middle of the _input_ string, and figuring out where to take it from is equivalent to correctly parsing all the other escapes).
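For reference, the output side behavior I'm trying to match looks like this (ubuntu's date, from memory, so treat the details as approximate):

$ date +%N    # nanoseconds within the current second
$ date +%%N   # prints a literal "%N", not nanoseconds
$ date -d @1234567890.123456789 +%s.%N
1234567890.123456789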

Meanwhile, I'm participating in a long thread on the Silicon Valley Linux User Group mailing list (which I subscribed to while setting up a j-core meetup at CELF and never unsubscribed from). They mentioned j-core, I replied, and it turned into an ongoing discussion of j-core vs risc-v.


May 9, 2017

I wonder if I should collate various big writeups I've done and post them as essays under the writing directory? I usually do them as blog entries, the kind I leave half-finished and then constipate the uploading part of the blog until I get around to finishing them. (I write _newer_ entries, but can't upload them out of order. Yeah, my process often sucks. Or doesn't scale well to the amount of free time/energy I have divided by the number of parallel things I'm trying to do. But improving my process is itself a significant demand on my time and energy, so... And depending on external things like livejournal or sourceforge always turns into technical debt and eventually a _new_ timesink to fix it. No, I don't trust wordpress or github any more than their predecessors, they're fine _now_...)

This is related to my "I shouldn't go to conferences where the talks I'm proudest of tend not to get recorded (sad eyes at Flourish in Chicago which promised to put my 3 waves talk up...), or I run out of time and give an insufficiently prepared talk (symptom: I have 3 hours of material for a 45 minute timeslot and run out of time), and the last few times there was bad jetlag screwing up my ability to present..." I.E. I should just podcast. Except learning to podcast (mostly the editing part) and making time for it... Sigh, it's on the todo heap.

(I'm happy to use youtube for distribution the same way I use github for distribution. I don't _depend_ on them, it's a read-only archive of stuff I have backed up locally in at least triplicate.)

Speaking of which, the 2010 Flourish talk I did (which is missing the first 30 seconds and then has buzzing drowning out what I'm saying for the next minute, but the bulk of the talk was at least recorded, and maybe I should record a new intro for it? Anyway...) was on something called blip.tv which went away after a year, but I downloaded a snapshot of the video file. Except it's a "flash video" format file, and what supports that anymore? I should convert it to something else, but last time I tried the audio and video got out of sync. (Keyframes exist for a _reason_, people. Grrr...)
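(Note to self: the incantation people usually suggest is something like this, though I haven't confirmed it avoids the sync drift, and the filename's made up:)

# Re-encode the old flash video as h264/aac mp4, generating fresh keyframes.
ffmpeg -i flourish-2010.flv -c:v libx264 -c:a aac talk.mp4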

I tried re-giving the talk at Texas Linuxfest, but something about that convention always leads to me giving substandard presentations I'm unhappy with. Maybe because it's local and my subconscious doesn't take them seriously? Dunno. I'm aware there are conflicts here. I need a respectable externally imposed deadline without it resulting in such a time crunch the task I'm traveling to do gets starved for prep time or I wind up too exhausted to perform. Maybe the fundamental problem is that writing and giving talks isn't really part of my job...


May 8, 2017

I am finally using my blog for its original purpose again, which is reminding myself of where I left off. Specifically, I hit the part that removes the trailing ".gz" from filenames and went "basename does that too, and bunzip should do it, and I've already got strstart()", so yeah, I need to make another library function.

Context: I finally dug up Mike Morton's patch to fix zcat (which got buried under the "finish implementing deflate compression side" todo item). It seemed like there should be a way to fix it so !pos was the right test and I didn't need to repeat the 32767, but ten minutes of fiddling didn't find it, and that's more than saving a half-dozen bytes is worth. (Yes, it took something like 9 months to spend that ten minutes. Sorry.)

This is at the top of the todo list because Elliott needed gzip/gunzip and he did a zlib version in toolbox, which he ported to toybox when I asked. Now it's my job to try to connect up the deflate side stuff I've already done with that plumbing, then try to finish the compression side. ($DAYJOB is eating my life, but I NEED to keep up on toybox if it's to earn its place in Android, let alone achieve the self-hosting agenda.)

Ideally, I should also do zip (which is mostly directory manipulation once you've got the deflate code), and then review and promote "tar". That's probably enough to cut a release, which I need to do anyway for mkroot. (Of course work wants me to do something else, building an SDK around codelite. But it's a weekend day.)

Anyway, added strend() to lib.c.


May 7, 2017

Driving to Minneapolis to pick up Fade and take her back to Austin for the summer. Her last scheduled thing is the 15th, but I'd rather be there early than late so driving up this weekend. Once again stopping at rest stops to get some coding done.

That armv7l target wants insane command line entries. The qemu wiki linked to a page that suggested the command line:

qemu-system-arm -M virt -m 1024 -kernel installer-vmlinuz -initrd installer-initrd.gz -drive if=none,file=hda.qcow2,format=qcow,id=hd -device virtio-blk-device,drive=hd -netdev user,id=mynet -device virtio-net-device,netdev=mynet -nographic -no-reboot

Which is just nuts. The image filename's in the middle of a large gratuitous data blob. My new mkroot plumbing is simpler in a lot of ways, one of which is that each qemu-$ARCH.sh script it generates ends with "$@", so any extra command line arguments you pass to the script get appended to the qemu command line. I.E.

./qemu-armv7l.sh -hda walrus.img -hdb potato.img

Which should make dev-environment.sh and friends easier to do. But that -drive and -device stuff is architecture specific, and even if I did make some sort of translation regex (ew), it needs _two_ arguments for each component it's adding (in this case network card and hard drive), with one of the fields there to collate them: id=mynet pairs with netdev=mynet, id=hd pairs with drive=hd. If I add multiple drives, does each one need to be unique, or is one of these a controller card, or...?
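(My best guess from the documentation is that yes, each pairing needs its own unique id; a sketch of what two disks would look like, untested:)

qemu-system-arm -M virt -m 1024 -nographic \
  -drive if=none,file=walrus.img,format=raw,id=hd0 \
  -device virtio-blk-device,drive=hd0 \
  -drive if=none,file=potato.img,format=raw,id=hd1 \
  -device virtio-blk-device,drive=hd1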

The old way ("-hda filename") was _simple_. I like simple. I needed to provide exactly one piece of information, and it figured the rest out itself. This new micromanagement is complete bs.

Reading the qemu source code to see if I can make it use the old way, it looks possible but tricky. The source (vl.c) says that if I don't specify any network stuff at all it sets default_net, which does its own "-net nic -net user", which I guess pulls default values? (The call in vl.c goes to qemu_opts_set() in util/qemu-option.c, which calls qemu_opt_set(), which is a gratuitous wrapper for opt_set(), which does find_desc_by_name() and qemu_opt_parse() plus 20 lines of gratuitous plumbing, which is about where I lost interest. One of the C++ diseases is thinking lots of extra wrappers and data marshalling makes your code better.) Given the behavior of other boards, I'm going to assume this -net nic selects default hardware the board actually implements.

Meanwhile, -hda seems to be populating hda_opts, which is adding IDE disks so old they're setting cylinder, head, and sector values? (Not relevant for over 20 years, since Logical Block Addressing replaced it.) But there's still drivers for the old stuff, and they're drivers you can attach to a PCI bus, so in theory ARM should be able to use them. So let's go back to the multi_v7_defconfig and switch _on_ the PATA stuff... And yes, it's finding -hda and it's showing up as /dev/vda?

So -hda is adding a virtio device? Um... yay? (No idea how but I'm not spending 6 hours reading qemu source to find out.) But my stripped down config that was the minimum to work with the verbose command line is _not_ seeing the "default" network or hard drive types. Even though they CLAIM to both be virtio devices, they're somehow _different_ virtio devices. But of course.

Alright, I have a config that's finding this stuff, let's go through the strip-down process again... [2 hours later] For some reason, it needs five board types (ARCH_MULTI_V7, ARCH_VIRT, SOC_DRA7XX, ARCH_OMAP2PLUS_TYPICAL, and ARCH_ALPINE), and 7 ATA/PATA symbols, in order to see a virtio network card and /dev/vda device. Oh, plus VIRTIO_PCI and VIRTIO_MMIO.

The ways of the kernel are mysterious and creepy. I should dig into things and see what those symbols actually _do_ (set more symbols? What's #ifconfigged in the kernel?) but for right now it works. The architecture-specific armv7l part sets 27 symbols which is ridiculous, but I'm checking it in and moving on for the moment.


May 6, 2017

Three different CFP are pending, and I don't really want to do any of them. I should do podcasts, but need deadlines. (Many moons ago I wrote regular columns for The Motley Fool, and I'm still proud of some of 'em, but what I really miss about that period is the productivity. I needed to emit wordcount, on a schedule. Quality was secondary. Strangely, looking back on it, a lot of the stuff I'm most proud of was banged out so I had something to hand in to an editor waiting to put _something_ up. But "deadline" is not the same as "travel expenses, hotel, jetlag, huge time commitment to go to a physical location where I can record an hour's worth of video I could have recorded from home...")

Of course after deciding _not_ to submit to those 3 CFPs and letting the clock run out, this evening I saw an email about the android track at plumbers, which Elliott's attending. It would be nice to hang out with him, but I still don't necessarily want to _present_, and merely attending is way too expensive, since it's one of the main Linux Foundation profit centers.

If I wasn't so exhausted maybe I'd come up with a presentation topic and do a CFP, but the Plumber's deadline's like now and I have to drive to minneapolis tomorrow. (Would have headed out tonight but Adverb's doggy anxiety pill refill requires 24 hours notice and I forgot to call it in earlier.)


May 5, 2017

Dropped my phone again, smashed the screen more. Still usable, it's now cracked along the top and the bottom. Haven't cut my finger on the glass yet, but it's probably only a matter of time.

(I've been taking long nightly walks to UT to use my netbook at a convenient picnic table near an outlet under an overhang so I don't get rained on if the weather turns nasty. Along the way I watch youtube/netflix/hulu videos and play phone games, which means I'm carrying my phone out in an easily droppable position, above a concrete sidewalk. I _try_ to work at home, but cats prevent it. And it's hard to walk any length of time during the day in Texas, the sunlight's brutal. Hence long night walks when telecommuting gives me the schedule freedom to do that.)


May 4, 2017

The problem with most of the qemu advice that's easily googlable is it doesn't provide all the info you need. For example, this page provides a nice qemu command line to get armv7l working with the "versatile express" board, and there's a vexpress_defconfig to get a kernel config started. But the only block device it can do is -sd which is a flash disk, and a quick test using "-sd MAINTAINERS" in the kernel source says it's truncating the supplied block device down to multiples of 262144 bytes. Meaning if I whip up a squashfs file and -sd blah.sqf it'll truncate it and thus corrupt it.

Meanwhile this page gives an elaborate qemu command line for booting armv7l under the "virtual" board (where all the devices are virtio stuff provided to the kernel as a device tree by qemu), but doesn't say what kernel config you'd want to start with. (It says to grab debian's, but what config did debian build with?) It also expresses opinions about wanting to use one emulated device over another, but doesn't say why. What's better or worse about those emulations? (Speed? CPU usage? Maximum available size? Deprecated code? No idea...)

Luckily I looked at arch/arm/configs and guessed multi_v7_defconfig could _probably_ do it (it helps to have followed the development discussion for that stuff years ago; I've read piles of useless crap over the years, and even if I didn't understand much at the time, when I bump into something related years later I remember vague hints I can track down with fresh context).

And building that kernel did indeed boot! Woo! Now I'm stripping down the config to just the symbols I need, using the same "miniconfig, add baseconfig, comment out lines" technique I mentioned recently.


May 2, 2017

The dropbear build silently ignores $CROSS_COMPILE and just uses "gcc" if you feed it a path with a wildcard in it (ala blah/armv5l-*-cross/blah). And then at link time, it uses the cross-prefixed linker and the build dies on unknown .o file types.
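(The workaround is presumably expanding the wildcard yourself before the build sees it; a sketch with made-up paths:)

# Let the shell resolve the glob to a real compiler, then pass that in.
CC=$(echo ~/mcm/output/armv5l-*-cross/bin/armv5l-*-gcc)
CC="$CC" ./configure --host=armv5l-linux-musleabi
make CC="$CC"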

All together now: cross compiling sucks.


May 1, 2017

I hath defeated s390x and gotten it to boot under qemu with musl-cross-make!

The steps to getting a new architecture to work in mkroot are:

First get a toolchain that runs hello world (then toybox) under qemu application emulation. (That's all the mcm-buildall stuff I did with musl-cross-make.) Use the "mkroot.sh" script to build a cpio.gz root filesystem for it. (They're pretty much all the same, just built with different compilers.)

Next find a kernel defconfig that matches one of the boards qemu system emulation does for that target. So "make ARCH=that blah_defconfig" matching "qemu-system-$ARCH -M thingy -cpu thingy". You can usually do "-M ?" or "-cpu ?" to list your options, and "make ARCH=blah help | grep defconfig" lists the defconfigs (or just "ls linux/arch/$KARCH/configs").

(There may not be a defconfig, maybe there's an out of tree config somewhere. Or you can guess and try to enable symbols until you get something that works, but that's long and frustrating.)

Then figure out which output file qemu-system-xxx wants to boot: it could be the top level vmlinux or one of the arch/xxx/boot files (zImage, bzImage, etc). The qemu command line is generally something like:

qemu-system-arm -M blah -nographic -no-reboot -append "console=ttyACM0 panic=1" -kernel vmlinux -initrd blah.cpio.gz

Except QEMU's ELF loader isn't universally hooked up on all emulated boards, and when it isn't you have to find the output format the hardware it's emulating expects (usually under arch/$ARCH/boot somewhere, often called *Image).
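To make that concrete, here's the whole dance for the armv7l target from the May 7 entry above (a sketch; the musl-cross-make tuple is from memory):

make ARCH=arm multi_v7_defconfig
make ARCH=arm CROSS_COMPILE=armv7l-linux-musleabihf- -j $(nproc)
qemu-system-arm -M virt -nographic -no-reboot \
  -kernel arch/arm/boot/zImage -initrd root.cpio.gz \
  -append "console=ttyAMA0 panic=1"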

If you don't get any console output, focus on making that work first. If you get _strange_ output, grep the kernel and qemu source code to see where the message is coming from.

For s390 I didn't have a lot of defconfig options (default, gconf, performance, zcfdump), and feeding qemu the "default" one just got a brief complaint about unsupported CPU features. But grepping showed the message it produced came from the kernel's arch/s390 directory, which meant the kernel was booting and producing serial output! (Yay!) To get something that would boot I had to switch the "processor type and features->processor type" in menuconfig from CONFIG_MARCH_Z196 to CONFIG_MARCH_Z900. The menu was laid out from oldest to newest, so I just picked the top entry, and that worked (I.E. was something qemu could emulate). There was a lot of googling along the way finding useless stuff: the message claimed I could look up the bits in an IBM manual (I found a PDF of said manual via google, it had no useful information), and I found mailing list messages about adding service bits to qemu's s390 support that didn't help (possibly never made it in? There's no -cpu options and no useful -M in qemu-system-s390x, so I can't really select other machine/cpu variants to emulate; I need to change the kernel I'm building to work with the qemu default board instead). I thought maybe I'd need to rebuild the compiler to use fewer CPU features, but A) there wasn't an obvious knob in the gcc config for that, B) the userspace code ran and seemed to work fine under application emulation ("qemu-s390x ./ls -l" on the toybox binary). Eventually I tried fiddling with the kernel .config processor type selection and that was it.

The "can qemu application emulation run a userspace binary" smoketest is important if you have a problem launching a shell prompt. Right now microblaze can run "toybox" with no arguments, but "toybox ls -l" dies with SEGV which generally means illegal instruction, which maybe means a floating point mismatch? (I hit this on ARM too.) This might mean kernel config (enable floating point support and/or emulation), or might mean toolchain (use soft float). Once again QEMU isn't giving a lot of options for this target (no -cpu ? output, and -M ? just has little endian or big endian for the same "petalogix" reference board, although board emulation only applies ot system emulation and this is application emulation also failing. If it's not floating point, most likely it's something wrong with musl, but presumably other people would have seen that?)

Once you've got a kernel that boots to a shell prompt, test that you've got A) the date set right (if there's no "hardware clock" to query it'll say 1970; make freaks out if your clock is older than your source), B) a working block device (add "-hda somefilename" to your qemu command line and look for a /dev/?da (hda, sda, or vda) that shows the contents of that host file; if that's too noisy look at /sys/class/block to see what the kernel found), C) that when you "exit" the kernel shuts down (the panic=1 in the qemu -append argument turns a kernel panic into a reboot after 1 second, and -no-reboot tells qemu to exit instead of trying to restart), D) that ifconfig shows a network card (see /sys/class/net) and maybe that you can ping 10.0.2.2 or something (that's qemu's masquerade alias for the host's 127.0.0.1). I run a web server on the host's loopback so I can do wget http://10.0.2.2/filename as an easy way to copy files into the emulated system. (The uuencode cut and paste via serial console trick works too. I should set up a network mount to replace the web server, but both v9fs and smb are on the toybox todo list down with "rsync" and "screen"...)
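Condensed into a cut-and-paste smoke test at the new target's shell prompt (device names vary by board):

date                          # says 1970? no working hardware clock
ls /sys/class/block           # look for the hda/sda/vda from -hda
ls /sys/class/net             # look for eth0
wget http://10.0.2.2/README   # assumes a web server on the host loopback
exit                          # qemu itself should exit, not hang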

If you have a block device or network card in defconfig but dunno which driver it's using, try "ls -l /sys/class/net/eth0/device/driver" (or block/vda instead of net/eth0 for block devices). The basename of the directory the symlink points to should be the driver name it's using.

Then you can "make ARCH=$ARCH menuconfig" and hit "/" to search for a config symbol name, and find the symbol controlling that driver. In extreme cases I've had to do something like:

$ grep -r --include "*.c" '"virtio"' .
./drivers/virtio/virtio.c:	.name  = "virtio",
$ grep virtio\\.o drivers/virtio/Makefile 
obj-$(CONFIG_VIRTIO) += virtio.o virtio_ring.o

You don't need the CONFIG_ prefix when looking up symbols in menuconfig with forward slash, although it'll find them if you use it. And I haven't found a way to tell it "exact match" so if there are 8 gazillion CONFIG_VIRTIO_IGUANA symbols it will show ALL of them in random (hash table) order, so you may have to scroll down a bit to find the one you want. (Kernel developers do such wonderful user interfaces in every userspace tool they touch, don't they?)

The point of all that is it tells you A) where to find the symbol in the horrible nested menus, B) what it depends on and how much of that's already enabled.

Of course what I usually wind up doing is making a miniconfig out of the .config (using my old script for that), and then commenting out lines (prefix with #) and rebuilding and testing to see if that yanked functionality I care about, keeping the lines I wind up needing. You can save time by cutting and pasting the common prefix blob (the part getminiconfig() prepends in module/kernel) to the start of the file, so you don't have to go back and filter out those symbols later.
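For reference, a miniconfig is just the list of symbols you'd switch on starting from allnoconfig, and the kernel's kconfig can expand one directly. A sketch, with the file name assumed:

$ make ARCH=s390 allnoconfig KCONFIG_ALLCONFIG=mini.config

So the loop is: comment out a line in mini.config, rerun that, rebuild, boot it under qemu, and see what broke.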

Sigh. I should make an instructional video about this process and fling it on youtube.


April 30, 2017

Announced mkroot on my patreon.

There's a chicken and egg problem with the patreon. I'd love it if I made enough money from that to do open source full time and not have to work a day job, but I'm swamped with dayjob stuff so I don't update it nearly enough. (And I feel really bad about that.)

There's also the fact my day job is REALLY interesting work that I want to see succeed, but so is turning Android into a self-hosting development environment, so is the "digging down to the simplest possible Linux system and documenting how it all works" stuff that Aboriginal Linux and mkroot are both about, it would be great to have time to spend on qcc or similar, I'd love to do educational podcasts, I miss the days I wrote a weekly column read by 15 million people (that old Motley Fool stuff back before the dot-com crash cost them half their staff and changed the nature of the company)...

I suppose I could just make puppy eyes at Google to see if they wanted to hire me to work on toybox full time, but... seems impolite somehow. (I gave them a free thing as a trail of breadcrumbs to lure them in the direction I wanted them to go. I need to put down more bread crumbs to get them to go further, but I'm busy and tired all the time.)

And I'm seriously pondering a Japanese class ACC's doing over the summer, since I've been to Tokyo a half-dozen times and still only have about a dozen words of vocabulary in the language. (Because my time isn't sliced into small enough pieces already.) But I haven't signed up for it _yet_ because I dunno if Jeff's coming to Austin or if I'm going to Tokyo again (or flying to California to meet up with him) or what. Last time I left Tokyo Jeff planned to be in Austin a week later. It's been 2 months and still no idea of what happens when...


April 29, 2017

Trying to get new architectures I haven't done before to work. s390 and microblaze are being stroppy.

Fuzzy bought an Oyster Mushroom grow kit a month or two back, which is basically a cardboard box full of a big cube of pressed sawdust, which you open up and soak with water, and it sprouts edible mushrooms over the next week. It worked quite well for the first harvest, but she cut them too close to the base or something and they stopped growing new mushrooms.

Undeterred, she did some research and has been putting her used coffee grounds in glass jars with oyster mushroom bits, which are slowly filling up with white fibers as the mushroom fungus grows and eats the coffee grounds. At some point she exposes them to light and they start fruiting (growing edible mushrooms).

Meanwhile, Fuzzy's new boyfriend is vegan, so she keeps trying new vegan recipes she can feed him. Mushrooms are apparently an important part of the vegan experience. I can't get behind replacing butter with shortening though, that's a clear step down.


April 27, 2017

Went ahead and moved the kernel.sh build script to module/kernel, and overlay.sh to module/dropbear. I need a distcc one too if I'm going to recreate aboriginal linux's build control image infrastructure.

Banged on armv7l, which won't fit in the versatilepb board but will fit in "versatile express", which also needs a device tree binary supplied to the kernel but doesn't want to do the append-it-to-the-kernel trick.

However, over the past couple years qemu's grown a -dtb command line option, so I taught mkroot's kernel build to pass one in. Also fixed the RTC in armv5l (different board), bisected the mips breakage...
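The resulting invocation looks something like this sketch (file names assumed; the dtb comes out of the kernel build's arch/arm/boot/dts directory):

$ qemu-system-arm -M vexpress-a9 -nographic -no-reboot -kernel zImage \
  -dtb vexpress-v2p-ca9.dtb -append "console=ttyAMA0 panic=1"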

Keeping a project like this working (as components upgrade and don't _quite_ fit where the previous version did) is a constant grind. Catching up after a year away is almost like starting over. (I'm cribbing heavily from my old aboriginal build to get stuff going _this_ fast, but I never had armv7l or s390x working before...)


April 26, 2017

Still banging on mkroot. I got armv5l, arm64, x86-64, and 32 bit powerpc in and working enough to boot to a shell prompt via initramfs. I need a second pass to add block device and network support (and confirm everything's got a persistent clock, and exits the emulator when the virtual system halts), but let's just boot to a shell prompt for now.

I want to move my kernel.sh to a subdir/kernel.build but can't quite figure out what to call the subdir. I've already got "build" and "packages" taken. I've been referring to them as "overlays" but that's not exactly it...

Hmmm, really "packages" is downloaded source, the new directory I want is build scripts/stages/modules, and the current "build" is temporary directories. So I could rename build->temp and packages->download, then call this one... "modules"? I'm aware the kernel has its own modules with a different meaning.

There's still the slippery slope argument (I.E. the distro trap, pages xx through xx of my old presentation). But the great thing about git (and this design) is people can just fork it. I personally can say no, if other people want to go all buildroot they can.


April 25, 2017

People keep saying the open source community has all these people doing all these things, but I keep hitting simple stuff nobody's bothered to do.

On a larger scale I've complained about the years it took to get the perl removal patches in, and how nobody bothered to implement initmpfs (and nobody's built on what I've done). And I've pointed at the years the squashfs maintainer spent trying to get his thing merged even _after_ it was already in basically every distro (but not kernel.org).

Back when I was doing Aboriginal Linux, I regression tested each new kernel release on every platform I had running under qemu. Aboriginal Linux rolled to a stop after the 4.3 kernel because the next two releases broke on 4 different architectures, and some of the breakage was the kernel changing to require features my old toolchain (built with the last gplv2 release of gcc and binutils) didn't implement. With $DAYJOB sucking up all my time (and more), I didn't have the bandwidth to tackle that.

In the switch from aboriginal to mkroot I went from a 4.3 kernel to 4.11, and in between the kernel guys broke sh4's serial port under qemu, changed how arm works (so supplying a device tree binary is no longer optional and if you don't it's a brick), and mips isn't shutting down properly for some reason. And that's just the targets I've tested so far.

Not that this is new. New releases break stuff nobody's tested all the time. Like the time Sparc32 stat broke and nobody noticed for a year because nobody ran current kernels (they were all using an old Debian sta[b]le release). Not the only time sparc broke either.

And I wrestled with arm scsi emulation for years, and then Arm interrupts changed and qemu and the kernel guys were pointing fingers at each other for months...

Before all that, there was the horror that was the powerpc boot (oh, there were some long threads before they eventually implemented enough bootloader plumbing to pass in a device tree, and yes their first pass at that was called "open hackware").

Sigh. One of the reasons I did aboriginal linux (and one of the reasons for mkroot) was to automate this kind of basic smoke testing. When it breaks, I notice while the change is still fresh in the mind of the guys who broke it.

But I'm _waaaay_ behind now, and mkroot is third behind toybox, which is behind my $DAYJOB crisis du jour...

I'm starting to understand how Erik Andersen felt torn between uClibc, busybox, and buildroot. I had aboriginal, toybox, and qcc. Now I've got j-core and this closed-source gps stuff. I've ditched qcc for the foreseeable future and tried to focus on toybox, but toybox needs something like aboriginal as a test harness. So I'm doing mkroot as a simpler version now that somebody _else_ has agreed to ship current toolchains as not just cross compilers but native compilers for all these targets. (I refuse to host gplv3 binaries on my own dime. That's a proprietary license owned by a cult. I waited out AOL and Windows, I'm currently waiting out facebook, I can wait out GPLv3 too. This too shall pass.)

I'd love to hand projects off to people who could do them better, but if people were doing them better I'd already be spending my time on other things...


April 24, 2017

Elliott Hughes sent zcat tests to the list, which are related to him writing a gzip that calls out to zlib. Toybox tries not to have external dependencies (so we can do a self-contained system bootstrap providing our own dependencies), but although my inflate code works it's got a bug (somebody sent a patch to the list that looks like "the wrong fix" but shows me what/where the bug is, it's on the todo heap), and I never finished deflate.

So I'm sitting on two simple patches which are actual cans of worms.

I feel bad about not keeping up. The demands SEI places on me are greatly slowing toybox development. This wasn't as big a problem at pace and polycom and cray because A) telecommuting, B) SEI is _interesting_ development (helping solar along!), C) it's a teetering start-up that's understaffed because we can't afford to hire more people right now, meaning all of us are spinning a half-dozen plates.

A and B mean I don't just leave it at the office, and C means the work is never done. This doesn't leave a lot of time/energy for toybox, but it's important to me so I'm trying to do as much as I can there too...

Sigh. I need a vacation.


April 23, 2017

If you want to understand what's going on with solar, start with these three videos: stanford professor Tony Seba teaching a class in 2013, then giving a book talk last year, then having his book talk analyzed by a mutual fund in india earlier this year.

I recently blogged about how this affects Russia, but here's a longer (excellent) article about why they hijacked our politics and why the CEO of Exxon was so happy to cash out of the oil industry (to become Secretary of State).

And here's a small 5 minute edit from a guy named Amory Lovins who gets annoying in his longer talks, but has really good info nonetheless. (The first minute and change of his ted talk is pretty much peak annoying for him, but if you're prepared for that he's got a very informative talk, worth gritting your teeth through.)

Next up, there's a think tank in Colorado that also has a lot of nice talks.


April 22, 2017

Trying to build a s390 kernel and it's not cooperating.

$ qemu-system-s390x -M s390-ccw-virtio -m 512 -nographic -no-reboot -kernel arch/s390/boot/bzImage -initrd ~/mkroot/mkroot/output/s390x-linux-musl-root.cpio.gz
The Linux kernel requires more recent processor hardware
Detected machine-type number: 0000
Missing facilities: 7,17,18,21,25,27,32,33,34,35,45
See Principles of Operations for facility bits

I googled and arch/s390/kernel/als.c is where the printk about requiring more recent processor hardware comes from, which is a good sign. This means the kernel is running long enough to printk a string, which means the compiler is producing s390 code that qemu is launching and running! Woo! (If it was QEMU printing the string, that would mean it hit an illegal instruction or its builtin bootloader failed a sanity check on the kernel image or some such. Linux printing it means Linux is RUNNING.)

That said, I have no idea what a facility bit is, and google isn't finding anything useful. Apparently S390 documentation was in big paper manuals that never got scanned in, and the group of people who understood this got trained behind closed doors and then died of old age or some such. The downside of classified/proprietary info. It tends to get lost to history when the people who know it die and the next generation isn't interested enough to preserve it.


April 21, 2017

Still working on mkroot's kernel builds, I bisected the sh4 breakage to commit 18e8cf159177 and posted to the linux-sh list about that. The reply is that qemu isn't working like the real hardware, which would be at least the third thing wrong with qemu-system-sh4 so far. (I reported the ctrl-c kills the emulator bug in 2014 and it's still broken.)

And since the kernel builds need bc I had to add it to toybox's "make airlock", which means mkroot using the old toybox release can't build a kernel, which means I need to cut a new toybox release for mkroot to use. Which means I need to bang on toybox and try to get some stuff done, which is why I'm reading the posix "vi" spec, which keeps referring to the "ex" command for about half its definitions, and it's a TERRIBLE writeup that assumes you already know how to use the command. (Trying to learn how to use "ex" from the posix ex spec is deeply unpleasant.)


April 20, 2017

I've been walking to UT late at night (when it cools down) to hang out at some of the picnic tables near an outlet. Three nights in a row now, which means my sleep schedule is completely horked but I'm finally getting a decent amount of exercise and catching up on programming. (I should get another bike.)

I should probably finish trying to sign up for those Japanese courses at ACC, because the highland campus looks like another very nice place to work. (I can't work at home due to some very demanding cats.) Plus, you know, I'd really like to learn Japanese if I'm going to keep visiting Tokyo.

Long phone call with Jeff about $DAYJOB status and plans going forward, and it looks like we need some sort of strategy summit where everybody collates their todo lists. He wants to build a customer SDK around the codelite IDE, I should probably install that and poke at it.

Email from Linux Plumber's asking if I want to submit a talk. It's _probably_ automated because I made an account in their system years ago (submitting a talk that was rejected). Hmmm. I'm tempted to submit a talk on 0BSD and public domain equivalent licensing, but only if I can put in the necessary prep time to give a _good_ talk. And focused work time has been in short supply this year...


April 19, 2017

Integrated the sh4 kernel build into the mkroot/kernel.* build and it booted, but the serial was all screwed up: output seems to happen normally but input pauses for upwards of 30 seconds and is dealt with in large bursts, as if it's not receiving serial interrupts. I left a compile going when I suspended my laptop last night so I was trying to figure out what config symbol I changed that screwed stuff up, and eventually worked out that the change I was testing _was_ that I integrated it into the build script instead of doing a standalone build in the linux kernel source directory by hand.

The change that broke it is that my standalone build directory had Linux 4.3 checked out (the last version I had working under Aboriginal Linux; I'm getting everything implemented in a known-working context before testing 4.11). But the mkroot build just has a "git clone" of my linux with full history repo, which I've locally checked out to 4.11-rc7. So the problem is that sometime between 4.3 and 4.11-rc7, upstream broke the qemu-system-sh4 serial console. (This is why you need to regression test current kernel versions; they subtly break stuff all the time. This wasn't particularly subtle, but still.)

I want to rename overlay.sh to dropbear.build, but that implies it should go in an overlays subdirectory, which would be plural and brings up the distro trap. Don't really want to open that can of worms, but kinda need to provide examples other people can extend if they want to.


April 18, 2017

I'm researching solar power, which took off while I wasn't looking. As in "exponential growth, it will probably take over in 3 to 4 more years". Solar is to fossil fuels what digital cameras were to film, with Exxon playing the role of Kodak. There's a reason the CEO of Exxon left his job to become Secretary of State, he'd otherwise be presiding over a financial bloodbath which he can instead foist on his successor. It's also why they're so desperate to push pipelines through now, there's no business case for those pipelines to _exist_ 4 years from now.

Solar power generation with local "battery wall" storage doesn't show up in official electricity generation figures the same way Linux doesn't register in PC sales figures: it's not what they're measuring, so it shows up as a reduction in demand for what they _are_ measuring. Conventional Windows PC sales decline every year as cloud, android, and chromebooks displace them. The coal industry's already collapsing, and the fall in oil and gas prices is just starting.

A year or so back, my local HEB had Solar City install 36 crates of solar panels on their roof (yes, I counted). You can't see them from the ground and would never have noticed them going in if you didn't go into the mostly unused back parking lot (which I walk through on my way there, it's quiet and empty enough Google's self-driving cars used it as a staging area a few months later.)


April 17, 2017

Alright, adding bc to the toybox "make airlock" target and bumping it up the toybox todo list. I haven't added a way to patch packages in mkroot.sh yet (so my patch to replace Peter Anvin's bc sabotage with a C implementation doesn't apply here), and I _can_ make a bc implementation. (Heck, I could do a quick and dirty double precision floating point one and that's good enough for 99% of the users, and for this use case, I expect.)

The musl-cross-make sh4 target didn't build a kernel because the kernel build wants to build with "-m4-nofpu" and gcc says it's not supported. (It knows about it, it just refuses.) I asked Rich and the fix is to add --enable-incomplete-targets to the gcc configure line. (Because it doesn't have a libgcc built without floating point, although why libgcc would use floating point I couldn't tell you. I can then build a "hello world" with -m4-nofpu and it links just fine?)
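For reference, in musl-cross-make that's a line in config.mak (variable name from its config.mak.dist, so check your copy), after which the nofpu link should work:

GCC_CONFIG += --enable-incomplete-targets

$ sh4-linux-musl-gcc -m4-nofpu hello.c -o hello   # links against the new soft-float libgcc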

Don't ask me why I never hit this with my old gcc-4.2.1 toolchain. I guess upstream hadn't taught gcc how to fail that way yet.

While I was there I learned about "gcc -Q --help=target", which is interesting. You can ask the thing what options apply to your current compilation. There's a lot of complexity built into gcc.
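If you haven't seen it, the output is a list of target flags and their current values, and you can feed it flags to see what they'd change. A sketch assuming the sh4 cross compiler name from above:

$ sh4-linux-musl-gcc -Q --help=target             # this compiler's target defaults
$ sh4-linux-musl-gcc -Q --help=target -m4-nofpu   # the same list with that flag applied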


April 16, 2017

I am really, really annoyed that gmail keeps screwing up and breaking mailman. Specifically, gmail false positively identifies a lot of messages as spam, and when it does it refuses delivery of them.

When it's a post to a mailing list, gmail rejects delivery to each user who gets the message, and mailman retries delivery for each one and after enough rejections it disables the recipient's subscription because that address is not accepting mail.

Meaning every time gmail false-positives an email as spam, it unsubscribes half the toybox list. It's done this a half-dozen times now, and since dreamhost won't give me command line access to the server running mailman I have to go into the web UI and re-enable all the subscriptions. Which it won't give me in one big list: it breaks them up alphabetically, one page per letter (or starting number). So I have to load 20+ pages, scroll down through the list of names to uncheck the checkboxes with reason "[B]" next to them, save changes, and load the next page. Every time.

Did I mention there's no https on lists.landley.net (only on landley.net), so the list administrative password transmits in plaintext? So I only do this from home, not from the random coffee shop wifi du jour.

Speaking of spam detection, android has a nifty phone spam feature where the call shows up as red and says probable spam, and when you get a robocall you can report it as spam yourself (go into recent calls, hold down on the number to bring up a menu, block and report as spam). The Dorito administration's never going to do anything about that (they're instead dismantling the Food and Drug Administration and Environmental Protection Agency so "Caveat Emptor" applies to things like eating/breathing/sleeping).


April 15, 2017

Happy rice pudding day!

Taking advantage of the 3 day weekend to try to shovel out mkroot.sh a bit. Specifically get kernel building working. Turning all the Aboriginal Linux sources/targets info into an if/else staircase in a kernel overlay file, which means fixing up mcm-buildall.sh and re-running it on the fast machine to get toolchains to play with.

Right now it's a standalone script, but I should hook it into the main mkroot.sh stuff so it can access download() and setupfor() and so on, which says I should genericize the overlay.sh stuff, probably in its own subdirectory. (kernel.ovl maybe?) I've resisted doing this because can of worms (the "danger will robinson" part of the original 260 slide aboriginal linux presentation). But I need more plumbing to wire up the bits this thing needs to do to be useful. Hmmm. Minimalism vs an acceptable level of basic functionality, trying to find a sweet spot balancing them off...

Other questions need Rich Felker's input, because of the dependency on musl-cross-make. Where to host toolchain binaries (I'm not doing it, that stuff's GPLv3 and I won't deal with the FSF's proprietary license unless paid to do so by a corporation that takes on the legal liability for actions I perform on its behalf). Then there's the actual mcm-buildall.sh script, which I've pastebinned on the #toybox and #musl IRC channels more than once and posted a version of to the toybox list, but those versions are all stale now and where should that be hosted? (Check it in to mkroot? Rich hasn't taken it, but he's busy and I'm the one testing them all to see if they work.)


April 14, 2017

Saw mention of Penguicon online, because the artist/writer pair behind a webcomic I follow live nearby and are attending. (Not a coincidence, pretty sure I started following that comic because I met the artists at the last Penguicon I attended in 2008. I don't really hear about that con anywhere else anymore; it still happens and appears locally well-attended but has receded in importance outside the state of Michigan. They haven't done anything _new_ since I left.)

This got me wondering what happened to my old Penguicon co-founder Tracy Worcester, who had thyroid cancer and changed careers to nursing afterwards. Alas, she's not particularly googleable: there are a lot of Tracy Worcesters out there, the most google-prominent of which is some english noblewoman who does charity work with pigs. Penguicon Tracy's old livejournal hasn't been taken down by the KGB yet but hasn't updated since 2014 either. (Her family home was near a superfund site? Ouch. But given the GOP saturation in their politics, it's not surprising Flint isn't the only part of Michigan where you can't drink the water.)

I lost touch with Tracy after I stopped going to Penguicon because a guy named Matt Arnold took it over and made it all about him, despite never contributing a single creative idea to it I'm aware of. I predicted at the time his ego would chase Tracy out too, and glancing at the penguicon event schedule it looks like he did. (Because he can't pretend it's all about him if the actual founders have any involvement, and he had a pathological NEED for it to be all about him. Dunno why. When I was last there in 2008 I described this theory to Tracy who ran an experiment that confirmed the problem, but my response was to move on. To me Penguicon's nice but not unique, I launched two conventions (that and Linucon here in Austin) and could do it again if I had the time, but my todo list runneth over and smof-ing is an _enormous_ time sink, and _sustaining_ a convention even more so. Not to mention I put $7k into Linucon's first year and these days I've other higher-priority demands on my finances.)

But presumably other Penguicon attendees might know how to get in touch with Tracy if she still lives in the area, and I thought "it's been 10 years, Penguicon's in 2 weeks, maybe I should wander up and say hi?"

Then I read more of the schedule, saw a board of directors meeting, went "ooh, people who'd know about Tracy", and the first name on it was Mr. Penguicon hisself. I immediately lost all desire to set foot in the state of Michigan again. Oh well. Maybe next decade.


April 10, 2017

Back in Austin, programming at my local Wendy's, and I'm noticing another technology matured while I wasn't paying attention.

A mature technology commoditizes, until it's available from nameless vendors and any instance you plug in just works, indistinguishable from all the others. Smartphones aren't there yet because we still potentially care whether it's a Samsung or LG, they advertise model numbers. But HDTV is already there and I didn't notice: how many square feet of it do you want?

I say this because all the menus at Wendy's are just big HDTVs now, as were the menus at every fast food place on my recent cross-country drive. I'm looking at bog standard flat panel TVs maybe 3 feet by 2 feet by two or three inches thick, each playing a video feed probably coming out of a PC or raspberry PI somewhere.

The whole setup looks seamlessly professional, but if you stop and think about what it's made of, there's 6 TVs here at Wendy's making up the menu: 3 of them behind the counter with the menu proper, one off to the left (at the end of the line, trying to sell strawberry lemonade), and 2 more turned 90 degrees (long way up) that people pass waiting in line, one with an abbreviated mostly-picture menu and the other cycling ads for their menu items. ("Unlock chicken's true potential" is presumably about advances in slurry technology?)

I remember the laminated paper stuck up behind the counter, where new menus came from Kinkos. That era is over, now it's televisions. I remember being annoyed by McDonalds' menus cycling (they only show the dollar menu for 5 seconds at a time, then you have to sit through an ad for their coffee before it comes back) but hadn't consciously noticed the technology upgrade before, specifically how cheap and off the shelf it is. They're just big TVs. Not even that big, it's a standard size available at Fry's, going for what, $200 each before retail markup? Plus $50 for a raspberry pi and mounting bracket/arm, times 6, that's $1500 for the whole setup. The track lighting install to shine on the old paper menus probably cost more than that.

That changed while I wasn't paying attention. I've been saying for years that HDTV would eat the computer monitor market; it presumably already did. This change was such a non-event it went more or less unnoticed.


April 9, 2017

Yesterday evening driving from Minneapolis to Austin, my check engine light came on, a hundred miles from, well, anywhere. And then the fun began.

The light said "Maint Req'd", which could mean _anything_. Low on oil? Clogged oil or air filter? Oxygen sensor? Alternator dying? I remember the time all the water evaporated out of my battery acid in Austin (summers that reach 110 in the shade will do that) and the tiny amount of electrolyte that was left heated up as current flowed through it until it exploded out the safety valve going straight up and spraying acid under the hood of my car. The _first_ time was just a big bang and I thought something hit the roof of the car. (I pulled over and couldn't find anything wrong because a fine mist of acid doesn't look like much and I wasn't closely examining the underside of the hood.) The _second_ time, a day or so later, the car suddenly lost all electrical power (in traffic) because the battery no longer conducted electricity without the electrolyte completing the circuit.

A single "something is wrong" light is NOT HELPFUL, is what I'm saying. It wasn't overheating, it didn't sound weird, and the lights weren't getting dimmer (although by the time you can tell during the _day_ the spark plugs aren't working anymore; that goes fairly suddenly as the voltage drops below what will jump the gap).

That said, I'm not gonna drive a thousand more miles without some sort of diagnosis. That's the kind of thing where you wind up needing a new engine.

Google Maps' idea of "auto maintenance" was a Love's travel stop which only serves 18-wheelers. For cars, they had two shelves of various fluids you could pour into the thing. Everything else in "town" was closed because it was a Saturday. I popped the hood and took inventory of the various kinds of fluid: power steering, brake, windshield washer, "do not open the coolant you will get scalded we're not kidding", and of course oil, which had brown slime between the first and second dots on the dipstick even after it's wiped off and re-inserted, which is where it's _supposed_ to be, isn't it?

Triple-A said their nearest approved auto center was in the next state, while I was over a hundred miles from the _border_. As far as I could tell, the lady on the phone knew nothing about cars, nor did the manager she tried to talk to. (AAA can jump a dead battery, let you in if you locked yourself out, and send a tow truck. Possibly they can deliver gas if you've run out on the highway. Anything beyond that is out of their comfort zone.)

In Austin the oil change places do a random-prime-number point service where they vacuum your spark plugs and check the blinker fluid, and I was hoping if I made it to some place large enough to support a library I could find something similar that would at least try more than one guess. (Having something that could plug into the car's computer and TELL ME WHAT IT'S COMPLAINING ABOUT would be ideal, but possibly a stretch around here.) And sure enough, google maps found a Midas fifty miles down the road, which closed in an hour. I made it there with fifteen minutes to spare.

The little "Maint Req'd" light on a 2002 Honda turns out to be "Maintenance Requested", not "Maintenance Required". Specifically it means it's been too many miles since your last oil change. According to the guy at Midas (who said it was too late in the day to do an oil change and the state of Iowa is closed on Sundays) it pretty much exclusively means oil change; there are _other_ lights for other problems (which you can't see when they're not lit up). And it's not that it's low on oil (I'd checked that at the previous stop; it's wasn't), and adding more oil won't fix it. It's the onboard computer saying it tired of the old oil and wants to see other oils.

Who says driving cross country isn't exciting? (Other than me.)


April 8, 2017

Driving back from Minneapolis to Austin, stopping at rest stops along the way to get little bits of programming done as a break from driving.

I found a new way Ubuntu (glibc) is broken. For a while I've noticed that ubuntu's "thingy | tee" is just useless, but I was blaming tee. Today I'm trying to watch gps phase adjustment code output while logging it to a file and I went "thingy | tee" and it was doing the broken "show no output until you have 4k of data, then blast it out all at once stopping halfway through a line until you have the next 4k of data". I.E. trivial failure mode of "baby's first tee implementation". So I swapped in the toybox tee, and then toybox's "cat -u" (byte-at-a-time mode) and it was still doing it.

It's either glibc's printf noticing that stdout is not a terminal and thus batching the output (I.E. instead of flushing when you print \n, wait until the output buffer is full before writing), or it's something broken in bash's pipe. Let's try building my test program against musl...

It's glibc. Musl isn't doing the "\n in the data stream means flush" thing at all, but it only has a small (256 byte?) output buffer so it's printing much more reasonable chunks.

So ubuntu's glibc (which everything links against, including bash, the default user shell in /etc/passwd for all users, the one you get when you open a terminal) did an "optimization" that renders tee basically useless. Wheee!
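For what it's worth, coreutils ships a workaround that uses LD_PRELOAD to force the buffering back to line-at-a-time. A sketch with a hypothetical program name:

$ ./gps-logger | tee log.txt              # glibc: output arrives in 4k lumps
$ stdbuf -oL ./gps-logger | tee log.txt   # forced line buffering, flushes at each newline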

(So if you do printf("prompt:"); and then wait for input, and output was redirected to expect, what does musl do? Not print that unless you fflush(0)? Do some sort of nagle thing? Either way, that's bad behavior.)

Meanwhile, ubuntu just popped up its stupid "I'm going to steal focus from whatever you're typing because there are packages you could upgrade!" window, DESPITE NOT HAVING NET RIGHT NOW SO I COULDN'T INSTALL THEM IF I WANTED TO. You can tell my netbook rebooted since the last time I killed that thing. I should rename its executable. Anything that randomly steals focus while I'm typing NEEDS TO DIE. (Even the battery low warning, which tells you the system's about to power down, doesn't steal keyboard focus. Ubuntu's judgemental "I think this is more important than what you're doing" nonsense just gets itself ripped off of the filesystem.)



April 7, 2017

Ubuntu gave up on Unity and phone convergence, and has bogged off to the cloud. Quite right.

I've written before about Ubuntu's many poor technical decisions, calling Unity the "classic microvax mistake", which ties into my 2013 toybox talk about mainframe -> minicomputer -> microcomputer -> smartphone. Successful tablets are big phones, not small PCs.

The microvax was a minicomputer in a PC form factor. When a disruptive technology displaces a sustaining technology, scaling DOWN the old technology hardly ever works, scaling UP the new technology does. When the minicomputer got kicked up into the server space, IBM defeated DEC but the winning _technology_ looked like minicomputer timesharing, not mainframe batch jobs. The PC is what kicked the minicomputer up into the server space, and the microvax was not competitive with the PC. Now phones are kicking PCs up into the server space, and Amazon EC2 is eating IBM's lunch in said server space, but the machine people actually interact with is now a phone or tablet, not a PC. Trying to glue a phone UI onto a PC is neither going to displace phones nor prevent phones from displacing PCs.

Ubuntu was trying to make water flow uphill because they wanted it to. So was Microsoft with "metro". It wasn't going to work, and it's a good sign both have now stopped. (They can go compete in the big iron space and leave phones and tablets to people who do that.)

It's a pity they won't acknowledge the bash -> dash move was equally stupid. Oh well.


April 6, 2017

Wrote a lwn.net comment that possibly should have been a blog entry here, but it _was_ in reply to the lwn article above it...

(Do I need to do a subscriber link to a comment if the article is still subscriber-only? Moot point in a week, but still.)

The McDonalds near Fade's dorm is a surprisingly effective work environment. It's easy to forget how much college campuses go out of their way to be study-friendly. (These days I live a half hour walk from the UT campus, and don't bother to go there much. I should get back in the habit. Alas, both bikes were stolen off the porch during my various sojourns abroad, downside of them not moving for weeks at a time. I should have locked them in the shed but forgot. That's the kind of thing I used to blog about, but mostly just tweet about these days.)


April 5, 2017

A recruiter called to wave a new position at me. (This happens. My resume has a high google rank because it's been at the same location for a decade, and my domain is what I do my Linux stuff through.)

It's... another oil services company in Houston. Far enough away to be Per Diem (so I could maybe get a place right next to work and avoid a long commute), they're using uClinux so right up my alley... but "C/C++" means "C++" (just like Perl/Python means Perl) and "Oil Services" means "technology used in oil exploration" (there's a lot of computation in parsing the signals that come back from the dynamite) which is as dead-end as it gets.

The money would be nice but I don't currently _need_ it, it's not technology I want to see ship (the startup I'm working at makes electrical grid sensor/control technology to switch more of it over to wind/solar plus batteries _and_ we're open sourcing a CPU design). There's no way it would give me _more_ time to catch up on toybox work.

And really, given that working for any portion of the oil industry would indirectly help the Dorito, I think taking this job would be un-patriotic. I'll work cheap for SEI to help push the world in the direction I want it to go. I won't take twice the money to push the world in a direction I _don't_ want it to go.


April 3, 2017

Stopped at a rest stop to charge electronics and do some programming (since it's Monday and all; work day). Got stuff a little mashed together and wound up talking about the GPS stuff on the toybox channel, as you do.

Catching up on email, I find out Elliott's adding more code to toolbox. He posted a plan to replace the rest of toolbox with toybox, but I've been so swamped with $DAYJOB's flailing to stay afloat that I haven't done my part, and now he needs stuff that's on my todo list (and some of the code's in pending, specifically the zip and gzip stuff) but he can't wait, so he did a version linking against zlib. I feel really, really bad about this. And yet I have to keep grinding away at GPS forever if the company's to ship the project I've been working on for... good grief, 2 and a half years now. It's exciting technology, but the opportunities I'm missing out on by being too exhausted to advance them are exciting too. But they don't pay the bills and I've got three mouths to feed, not counting cats.


April 2, 2017

Got to about Kansas. Yeah, it's looking like a 2 day drive. Of the "started saturday afternoon, get there monday afternoon" variety.


April 1, 2017

Conference call with Jeff (my boss) recently about preparing material for a meeting on the 6th with a potential new customer, in Japan. This means Jeff has to be in Japan on the 6th. This means he's NOT coming to Austin this week.

Mulled this over a bit, and decided that since the San Diego thing is off I might as well drive up to Fade's this week. So I'm doing that. (Google thinks it's a 17 hour drive. That's 17 _continuous_ hours, which seems unlikely. Experience says it's a 2 day drive, but I'll see if I can do it in a day and a half.)


March 31, 2017

Call from the San Diego guys: their customer is being finicky and the two weeks is off. (Good thing I didn't buy plane tickets; I was planning to drive.) Given that the project is to launch a weather satellite, and the government has been hijacked by oil interests, I'm kind of amazed the project has lasted this long. (Inertia and being overlooked, I guess.)

It's interesting to realize that the US government got hijacked due to a clash between China and Russia. Climate change denialism is why the CEO of Exxon (now secretary of state) teamed up with Putin (80% of Russia's cash exports are oil and natural gas, without which they're so broke they can't even feed themselves) to take over the US government (using racist patsies the same way they've always been used, as easily led cannon fodder advancing someone else's agenda).

Climate change isn't just big money, it's the _biggest_ money. 5 of the 6 largest companies in the world right now are energy companies (the 6th is wal-mart). And if they're forced to write down the dollar value of the "oil reserves" they've discovered underground because we _won't_ pump them upstairs and burn them (possibly just because they'd sell for 1/5 the dollar value they're now going for and it's not worth pumping/transporting/refining at that price), it'll make the mortgage crisis look like an ATM fee.

But 3 of those top 6 companies (China's State Grid, China National Petroleum, and the Sinopec Group) are chinese, and china has gone all in on solar. This is half the reason gasoline prices went down: china's importing far less of it. If china, india, the US, and europe unite behind solar/wind plus batteries and control logic, we can switch over the grid in a decade. Even Saudi Arabia's thrown in the towel.

Yes there's tens of trillions of dollars of infrastructure that needs to be replaced to switch off fossil fuels, all those oil based cars and power plants and gas stations and supertankers, but that kind of market opportunity is an outright gold rush. We'd pay for it by _not_ spending that money on oil, and spending it on the new stuff instead. If your electric bill more profitably buys solar panels, there's money to be made fronting the cash to install panels and batteries on your roof and then receiving a little less than your previous monthly electric bill until they're paid off. As the price of solar panels and batteries both drop, the business case for doing that gets stronger every year.

But Russia _can't_ switch over, exporting oil and gas is all their economy's got. And ignoring china there are still 3 oil companies in the top 10: Shell, Exxon/Mobil, and British Petroleum. The dinosaurs left behind are TERRIFIED, they know they're fighting for their lives. They also know they will inevitably lose, but every year they delay is hundreds of billions of dollars extra profit squeezed out of those existing wells and tankers and refineries.

As for climate change, Moscow would be happy if sea levels rose ten feet, they're thousands of miles inland halfway up a mountain range and if it thaws their northern border they get new sea ports. Lack of sea ports is basically why they invaded Crimea: counting ice they're more or less landlocked half the year. They can launch submarines up north but surface shipping can't go under the ice. Their eastern border is too far away from 99% of the population to matter, the west edge is an unpleasant mountain range separating them from europe, and to the south are a bunch of other countries (china and india and the middle east) that all hate them.

So Russia invaded Crimea to avoid losing their best sea port (Sevastopol) and even _that_ just gets them to the black sea, they still have to go through the Bosphorus (a narrow strait through Turkey) to get to the mediterranean. And _then_ they have to get past Gibraltar, where basically anybody in Europe or Africa can make life hard for their ships (which is why they've bombed Syria flat to avoid losing the Russian military bases there, their only permanent foothold on the mediterranean).

If Russia's northern border stopped freezing solid on a regular basis they'd be THRILLED, and Putin's sociopathic enough to flood three billion people out of their homes to do it. (You think the population of Syria walking to Europe is disruptive, just wait until the _real_ migrations start...)

The Dorito administration is a patsy. China's already figured out you just personally flatter and bribe the dude and he rolls over so you can pet his belly. His price is single digit millions, it's pocket change. (Meanwhile Bannon and Sessions _want_ to drown brown people. You don't even have to pay them off, this time the nazis and klansmen don't even have to build death camps, just walls to keep people from leaving the areas becoming uninhabitable.)

These are the fruits of the southern strategy Rockefeller warned Goldwater about back in 1963: build a large enough captive audience of loyal minions and find out "captive" and "loyal" are relative when somebody steals it from you. The billionaires who thought they owned the GOP turn out to be insignificant on a global scale. Gee, what a surprise.

So the most patriotic thing anybody who opposes The Dorito and Russia can do is push for renewable energy. It's coming anyway, but the faster it comes the more it hurts these guys. They're throwing everything they've got at squeezing a few extra years out of their dying industry.


March 30, 2017

Still wrestling with GPS. The incoming signal data is four values, representing signal strength of -3, -1, 1, and 3. When feeding this into the hardware correlators, we use 01 for 3, 00 for 1, 10 for -1, and 11 for -3. This is _almost_ the signed char representation right shifted by one with the bottom two bits masked off... except -1 and -3 are reversed.

I asked Niishi-san and he said that it's not using one's complement, the two bits are sign and magnitude. So yes, 10 is -1 and 11 is -3.
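A quick bash sanity check of that mapping (this loop is just my illustration, not from the correlator code): print the sign bit, then the magnitude shifted right by one.

$ for v in 3 1 -1 -3; do a=$((v<0 ? -v : v)); echo "$v -> $((v<0))$((a>>1))"; done
3 -> 01
1 -> 00
-1 -> 10
-3 -> 11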

Getting it to share code with the software correlator I wrote is... fiddly.


March 29, 2017

Writing a "running Linux on nommu hardware" document for the kernel's Documentation directory. I've been meaning to do this forever, and just haven't gotten around to it.

Alas, after a few hours of poking I don't think I did a better job than the actual text on nommu.org, except that doesn't describe fdpic. Hmmm...


March 28, 2017

Suspend failed, redoing all my open tabs.

Apotheon on the toybox IRC channel on freenode mentioned a site called copyfree.org, I should point them at 0BSD.


March 27, 2017

More github licensing thread:

On 03/26/2017 10:45 PM, Christian Bundy wrote:
> Thanks Rob, I appreciate your post. There's a lot to unpack there, and
> I'd like to first point out that I absolutely agree with you on the need
> for an ultimately permissive license. I'd like to think that we're both
> on the same side of this issue.

I think we are.

> I think there are only three points that need to be covered on this
> specific issue (although you're always welcome to email me for anything
> tangential): plagiarism and the multiple discovery.
> I understand that from your perspective and social circles, the 0BSD is
> widely known. Unfortunately, I didn't know about the 0BSD, and ended up
> taking two steps:
>
> * Start with the ISC license, which was/is very popular
> * Remove a half-sentence
> * Submitted it for review

It's an obvious thing to do, as in "it's clearly where the industry needs to go next". If the John The Ripper license had existed at the time (or the various other doing-the-same-thing ones I haven't saved links to), I would have used that (and advocated renaming _that_ 0BSD for the reasons stated last email). But it didn't, and being able to say "Android's shipped this license in a billion devices, it's been part of the base OS image for years now" is itself a powerful argument.

I looked for existing licenses when I switched toybox from gpl to bsd back in 2011, and there were some great articles about it back when I was doing that research.

In 2013 the "universal receiver -> universal donor" trend became quite pronounced, and lots of people wrote extensive analysis of it. That second one links to one of Nina Paley's comic strips on the subject, she has a lot more and they're very good. (And yes she's the lady who did "sita sings the blues", the IP issues around which were the subject of her ted talk.)

A lot of this analysis applied specifically to github since they were a good source of data, and github's _reply_, pushing the MIT license, was close but not quite right because they went with public domain _adjacent_ instead of public domain equivalent.

Public domain adjacent is "picking sides" and leads to increasing legal clutter, which encourages people to opt out of licensing their code because there isn't a simple fix that lets you stop thinking about licensing. Public domain equivalent collapses together the way the GPL used to: merge CC0 and unlicense.org code and the result can still be distributed under _one_ license (I.E. any of them).

I devoted 3 minutes to my 2013 "dear google, please merge toybox" talk (which succeeded!) to licensing issues (starting at the 15 minute 9 second mark), and followed it up with a talk at Ohio Linuxfest titled "the rise and fall of copyleft", where I tried to lay out my path to the public domain (but ran out of time before I ran out of material).

In 2014 the unlicense.org guys contacted me, and I had a long thread with them, starting with "here's an interview I just gave on the topic" and moving on from there. (The discussion went to email, which wasn't public.) I was hoping they'd act as a clearinghouse for people interested in public domain equivalent licensing, but their marketing strategy had a glaring flaw. (They were going for something like "the uncola", but wound up with "I can't use this code, it's unlicensed, I need something with a license" confusion.)

So yes, I've thought about this at great length. In public. And tried to get the word out. I'm just really busy with other stuff and wander away to other topics for 6 months at a time...

> I wasn't subscribed to SPDX, and didn't see your emails until they were
> forwarded to me from the OSI, who explained the situation.

That wasn't your screw-up, that was OSI's screw-up.

> I'm sorry
> that I wasn't aware of your license sooner, my intention wasn't to
> plagiarize or try to take credit for your work.

I don't care about that. As my Ohio LinuxFest talk said in the section on attribution vs ownership, the internet is very good at sorting that sort of thing out on its own.

I cared because:

1) I had a reason for using that name, and your new name directly opposed that reason.

2) I expected this would screw up further adoption because "there's this almost identical license, what's the difference and why are there two of them, let's use this as an excuse to table the motion indefinitely". Isn't #2 basically what github is doing right now (hence this thread)? I missed the earlier parts, but it's not unique. Accidental or not, you have jammed my license very effectively.

Note nobody ever suggests "use the OSI version" and they never will. They say "move off this thing entirely, there's disagreement, that makes it controversial and thus bad".

I tried to argue hard to shut OSI down at the start because I saw the mess it would make, but OSI has no procedure for admitting (let alone fixing) a mistake. And then the mess happened, and I cycled around to other things that weren't a giant cleanup job.

> I have to admit that I
> was surprised by how personally hostile your emails were, and thought
> that it would be best to stay out of the discussion between you, the
> OSI, and SPDX.

Sorry, I wasn't mad at you, I was mad at OSI. They have a policy of keeping themselves in sync with SPDX, didn't do so, and then asked SPDX to retroactively change a decision that predated your submission with the rationale "we screwed up so you need to change to match our screw-up".

I was inconvenienced by your actions, but you meant well.

> The issue of naming authority, admittedly, sucks. I was originally under
> the impression that the OSI was the de facto naming authority for "open
> source" licenses,

In 1998, sure. In 2016? Not so much. About halfway through here I listed why.

As I said, no corporation mentioned OSI to me. They mentioned SPDX. OSI seems to have revived its license efforts (recently) because other organizations were moving on without them, and as Clay Shirky explained in his excellent "institutions vs collaboration" TED talk, the #1 goal of any organization is to perpetuate itself.

> but I wasn't aware that you (and probably others) were
> unconcerned about OSI approval.

About 1/3 of the email I linked to above (the middle part) links to a few of the reasons OSI lost momentum. It is not a complete list. OSI strikes me as being a bit like the FSF: they don't have the ability to do anything useful, but they have enough vestigial authority left over from 20 years ago to interfere with other people's current work.

> I'd heard of SPDX, but I wasn't aware
> that they tracked licenses that weren't approved by the OSI or FSF. It
> was very clearly a misunderstanding on my part, and I take full
> responsibility here.

You're not alone, OSI is intentionally pretending it's still as relevant as it was 15 years ago, in an attempt to rebuild itself.

Maybe the various standards bodies will eventually harmonize the way ANSI and ISO approved the same C standard, and the way Posix-2008 is also SUSv4 (IEEE and The Open Group, although as far as I can tell the Austin Group isn't really related to either anymore).

But alas, it hasn't happened yet. And until it does (which would render OSI irrelevant again because it would just be rubber stamping SPDX's decisions), OSI is jamming the gears by disagreeing while having a policy of not disagreeing.

> The problems, from my understanding, seemed to stem from the fact that
> the OSI wouldn't have approved an ISC-derived license referring to
> itself as "BSD" (even the OpenBSD project now uses the ISC license),

This objection was raised during SPDX approval, and I answered it.

> and the fact that you felt that the word "free" was similarly deceiving. For
> the record, the name was meant to highlight the difference between the
> FPL and the GPL -- the GPL optimizes for free /software/ whereas the FPL
> optimizes for a free /public/.

Arguing about the meaning of "free" is something the FSF does. The name "open source" was invented so people could stop calling things "free". Ever since, the FSF has insisted on calling it "Free Software" and objected to the name "Open Source".

That's why this word is polarizing, and your name was on the wrong side of it.

> This was meant as a critique of the GPL,
> as it restricts the freedom of the public in exchange for "free
> software", not a me-tooism.

Archive.org's oldest snapshot of the "call it free software not open source" page is from February 1999. That means for almost 20 years the FSF has been training people not to listen to your argument, but to hear "free" and think you mean FSF/copyleft.

You're trying to use The Ring against Sauron. It's bad marketing and won't work.

> It seems that both of us thought to use strategies to promote the
> license to different demographics: you used "BSD" for an easy
> explanation, I used "free" to show that it had a leg-up on the FPL. If
> we strip the branding though, I think that we can agree that it's
> /really/ a zero-clause ISC license.

Nobody who isn't already a license geek knows what ISC is. It has zero marketing heft. OpenBSD didn't even bother to mention ISC in the first half of their license policy page.

The name ISC is so irrelevant that the SPDX objection I answered above misidentified it as an MIT license. You're not arguing that it be called ISC (after all, _you_ didn't), you're _objecting_ to saying that the license OpenBSD uses is a BSD license. (Is there a similar objection to GPLv3 not containing most of the text of GPLv2? Or are you saying OpenBSD isn't a BSD?)

> I think there are four options:
>
> * 0BSD: Follows primacy and SPDX short identifier, easier to explain
> to others as "more BSD than BSD", but isn't really derived from the
> BSD license family.

You said last message that you were ok with calling it 0BSD. That would resolve this issue.

It's the OpenBSD suggested template license. OpenBSD itself describes it (in the above linked page) by saying "The ISC copyright is functionally equivalent to a two-term BSD copyright with language removed that is made unnecessary by the Berne convention."

I then removed a little _more_ text, but I removed far less text than the John The Ripper guys did. Would you say that calling the John The Ripper license a stripped down FreeBSD license is inaccurate?

I removed less because my _goal_ was to have a minimum delta from an existing widely-used license (to make lawyers happy) resulting in a simple license easy for non-lawyers to read. (I looked at dozens of starting points for my new license. I really wanted to find somebody else who had already done this, but couldn't, so I went with the simplest thing calling itself a BSD license, and that was the OpenBSD suggested template license.)

> * FPL: OSI approved "open source" and (in my experience), easy to
> pivot the discussion with GPL advocates from optimizing for free
> humans rather than free software, but the word "free" may confuse some.

No, the term "open source" predated OSI. Eric Raymond created OSI because he believed that charismatic movements (led by a single leader with a strong personality) wouldn't outlive said leader, and he was trying to make an organization that would outlast his participation.

(He was also, circa 2001 when I first started hanging out with him, very worried that he'd ossify into a loon the way his friend Richard Stallman had. Eric and Richard hung out at science fiction conventions in the 1980's, but as Richard got older he got more and more extreme and fixed in his ways, and Eric was terrified this would happen to him. Alas, he went crazy along a different axis than RMS had so didn't manage to defend against it.)

For context, I crashed on the couch in Eric's basement for 4 months in 2003 while "editing" the Art of Unix Programming from 9 chapters to 20. That's why paragraph 2 of the author's acknowledgements says he almost made me a co-author, and we went on to co-author lots of stuff.

We stopped being able to work well together around 2008, and stopped talking to each other at all after I tweeted this at him in 2011, and since then he's just plain lost it.

But that previous relationship from before he went crazy means I know a lot more behind-the-scenes stuff about OSI than is necessarily public. And some _is_ public but people just forget it. For example, when Eric founded OSI he partnered with Bruce Perens, who was already a toxic loon. When RMS decided that losing the spotlight was unacceptable Bruce flounced from OSI back to the FSF, directly undermining OSI's core message.

Then when GPLv3 was happening Bruce begged to be let back in just long enough to neuter OSI's objections to GPLv3, and soon after that passed Bruce got thrown out again and of course made a big stink and pointed the finger at everybody else as is his way.

As far as I can tell, the current board of OSI is a complete reboot, creating a new organization on the bones of the old, presumably started sometime after this. But in OSI's absence, organizations like SPDX arose to fill the gap. SPDX wasn't remotely the only one: Buildroot has its own tracking, Yocto and Tizen have theirs, and of course there's this and this and this. Red Hat and Google have their own set of approved licenses, and of course github itself. Licenses listed/recognized by github are far more prominent than ones that aren't. (Hence this email thread.)

SPDX doesn't make value judgements about licenses: their job is to come up with a list of the licenses in use. Consistently NAMING said licenses is core to SPDX's mission. (I wanted BSD0 as the short version, but SPDX previously had 4BSD, 3BSD, and 2BSD, so they wanted to use 0BSD as the short identifier. So 0BSD it is.)

As I said: Samsung asked me to submit 0BSD to SPDX. The google guys agreed that was a good idea. None of them have ever mentioned OSI to me, and really don't seem to care what OSI thinks.

> * Something else: I really don't know whether orchestrating a
> compromise between the OSI and SPDX is even worth it (or whether
> this is insulting to even suggest), but at this point the politics
> surrounding these names seems to be suffocating this license.

SPDX approved my license before you showed up. I'm regularly writing code under my license. I have been ignoring you, and OSI. I haven't particularly promoted 0BSD outside that because I've been busy writing code.

I'm responding to this thread because github adding 0BSD as a selection option would be wonderful. It sounds like the reason they haven't is OSI's mistaken duplicate approval, months after SPDX's decision was finalized and published.

> I'd be
> comfortable settling on something more neutral and unopinionated
> like "0ISC",

Remember how I explained why I chose the name I did and what purpose the name serves? No?

> but I think this is really in your hands.

I talked to several people at linuxconf.au in January (Richard Fontana witnessed my CC0 vs 0BSD argument with that Google guy, I buried the hatchet over lunch with Bradley Kuhn, etc) and they suggested I re-raise the issue with OSI to give them the opportunity to _create_ a procedure for backing out a previous mistake.

It's on my todo list. (I also saw the Open Invention Network lady again and she reminded me to submit Toybox to OIN. My old Aboriginal Linux project is already a member, but I should get toybox explicitly listed. Haven't yet. Been busy with several other projects and travel...)

> * Nothing. This seems to be current course of action, as this drama is
> a total pain for anyone even tangentially involved.

The "drama" is that OSI won't back out its mistake, nothing more. You said you'd be ok calling it 0BSD. Can you tell OSI that?

> If dealing with
> this license continues to be this painful, I don't think it well
> ever get any sort of mainstream support.

It was getting some momentum before OSI wet the bed. Since then even I haven't bothered to push it much because I don't find dealing with OSI fun, and definitively shouting them down is a time sink.

That said, it's still on the todo list. But turning Android into a self-hosting development environment (so having an android phone is sufficient for being a full-fledged Android system developer, and the PC can go the way of minicomputers and mainframes up into the lucrative but boring big iron server space) is higher on the todo list, which isn't just toybox but mkroot and so on (dismantling AOSP and rebuilding it along modular lines, etc).

Heck, just turning Android's NDK into something usable is higher on my todo list.

> As we're on the same team,
> I'd really rather not have that happen.

I am primarily promoting 0BSD by shipping software licensed under 0BSD. Toybox is shipping on every Android device since Marshmallow. I have a todo item to promote 0BSD more but haven't cycled around to that yet, because I'm trying to finish making Android self-hosting and my $DAYJOB involves working on a new open source processor design, which involves a lot of travel (6 trips to Tokyo so far).

And yes, I intend to talk to Jeff about switching j-core's VHDL to 0BSD. (It's on the todo list.) If so it will be called Zero Clause BSD there too, and I won't even have to change the web page that links to it because right now it says "BSD licensed" and 0BSD is a BSD because OpenBSD is a BSD.

But the best thing I can do to cement 0BSD's position is get Toybox to its 1.0 release before Android "P". (I missed "O" because the last 6 months have been nuts.) I've got the Android Bionic and Toolbox maintainer posting his own roadmap for replacing what's left of toolbox with toybox, and I have my own roadmap of what needs to happen to build linux from scratch under android (AOSP has some more todo items such as a git downloader).

I'd love to get back to that, but today I need to convert the j-core GPS signal tracking routines from cartesian to polar coordinates (I'm trying to convince the hardware guys to make this change in the correlators and they want to see how expensive doing it in software is first). Then I should probably pack for my upcoming trip to San Diego...

Rob


March 26, 2017

I got cc'd on a "Would github like to add 0BSD to its license list" thread while traveling, which I belatedly replied to. It's an important topic I should really spend more time on, so here's a summary of what I said (most of it in reply to Christian Bundy, the guy who apparently accidentally sabotaged 0BSD last year, and who I never had contact info for before now):

The name "zero clause BSD" was part of a strategy to promote public domain equivalent licensing by coming up with a both corporate friendly and hobbyist friendly version. This is necessary because post-GPLv3 too many programmers are lumping software copyrights in with software patents as "too dumb to live" and opting out of licensing their software at all. I'm trying to offer a palatable alternative, which requires being aware of and addressing a lot of issues.

The first problem is that lawyers dislike "public domain", as I explained here.

That's a reply to a thread where Google's lawyers asked musl-libc to remove "public domain" code so musl could be used in chromium OS. I encountered this personally two months ago at linuxconf.au, where I had a ten minute argument with a Google developer whose position was that CC0 was a terrible license because it forces you to "give up your rights", but that my zero clause BSD was a much better license that he could use. (I tried to explain that they're equivalent but he literally wouldn't believe me.)

Lawyers like BSD because AT&T and BSDi sued each other and AT&T lost for violating the terms of a BSD license, thus it's proven to provide paychecks to lawyers. So what I did was take the simplest thing I could call a BSD license (specifically the OpenBSD suggested template license) and make a single small change (removing half a sentence). I did this so I could call the result a BSD license and get that mental "rubber stamp". There were already 4 clause, 3 clause, and 2 clause BSD licenses. Zero Clause BSD was both "just another BSD license" and analogous to the existing CC0.

The reason we need to revive public domain software is the collapse of copyleft. The GPL was a category killer in copyleft, preventing rivals like CDDL from gaining any traction and providing a single giant pool of reusable code under a single license. But there's no such thing as "the GPL" anymore, because GPLv3 split copyleft into incompatible warring camps. Now the Linux kernel and samba implement 2 ends of the same protocol but can't share code, even though both are GPL. A project that's "GPLv2 or later" couldn't accept code from _either_ source, which leaves projects like QEMU that want to turn kernel drivers into device emulations and gdb/binutils processor definitions into processor emulations stuck because they can't take code from both sources anymore. This situation sucks, it's only going to get worse with time (agpl, gpl-next, ubuntu shipping cddl code, maybe GPLv4 someday).

Before this, copyleft was simple and let programmers ignore most of the legal issues around software licensing. We had a universal receiver license acting as a terminal node in a directed graph of license convertibility, and had a simple binary decision: "is this license GPL compatible or not?" If it is, treat it like the one license we're familiar with, if not ignore it. And we're done, we don't have to be lawyers. But with GPLv3, you now have to police all your contributions because "it's GPL" doesn't mean "my project can use it".

Since GPLv3 split "the GPL", a lot of programmers (and companies) categorically refuse to get GPL code on them anymore. Android's no GPL in userspace policy (rewrite of the bluetooth daemon, etc) was a response to GPLv3 destroying "the GPL". Apple similarly froze xcode on the last GPLv2 release of gdb and binutils for 5 years while they sponsored the development of a replacement (clang/llvm), rewrote the smb server, and did a general "GPL purge".

In the absence of a universal receiver license, the next generation of programmers is taking one of two approaches:

1) Refusing to license their code. Not through ignorance, but as Napster-style civil disobedience lumping software copyright in with software patent as too dumb to live and refusing to participate. The next generation is waiting for all those old "series of tubes" fogies issuing DMCA takedowns on youtube AMV's and reaction videos to just _die_ already, and software licensing is an obvious extension of that.

2) Jumping to the other end of the spectrum looking for a universal donor license.

I want to ENCOURAGE the second approach, because today I can't deploy code with no license. But the universal donor of copyright licensing is the public domain, which was the victim of a protracted FUD campaign after copyright was extended to cover binaries in 1983 by the Apple vs Franklin ruling and the resulting shrinkwrap software gold rush competed directly with decades of accumulated public domain software. Commercial interests tried very hard to convince everyone that public domain software was poison, so you'd buy their proprietary software, and this got internalized by people like OSI's lawyer Larry Rosen, who wrote an article in 2002 comparing releasing code into the public domain to abandoning trash by the side of the highway. (No really, see paragraph 5.)

To work around the 30-year FUD campaign against public domain software, people came up with dozens of public domain adjacent licenses (bsd, mit, isc, apache...), which were _almost_ like public domain equivalent licenses except that they required you to copy a specific blob of text into all derived works, and those blobs of text differed from license to license.

This led to a "stuttering problem" where derived works incorporating code from multiple sources would concatenate multiple licenses, which quickly gets ridiculous. The kindle paperwhite's about->license has over 300 pages of license text. Android's toolbox project (the thing toybox is replacing) had dozens of concatenated copies of the same BSD license.

When I asked why, they said it's because the copyright dates had changed, and a strict reading of the license meant...

Only public domain equivalent licensing provides equivalent simplicity to what "the GPL" offered. Fire and forget, you don't have to be a lawyer, because public domain equivalent licensing collapses together. You can combine code under 0BSD, the unlicense, cc0, wtfpl, or a simple "public domain" dedication such as libtomcrypt's (at the heart of dropbear ssh) and then use any one of those as the resulting license, without stuttering.

With public domain, you don't have to choose a license: you can always change it later. The "should I choose apache or isc or mit" decision paralysis drives people to side with napster-style opting out because it's _not_ universal donor licensing. Add the stuttering problem and it quickly becomes "this is too complex and fiddly to understand, I'm not getting it on me".

I looked at existing public domain equivalent licenses before creating my own, but "the unlicense" has a confusing name ("This code is unlicensed, I can't use it..."), Creative Commons Zero is extremely complicated for what it does and has received a lot of FUD (some of which is spillover from various "don't use creative commons licenses for source code, it's not appropriate" campaigns from Eben Moglen and similar), and WTFPL has swearing in the name (which turns out to be an issue for some people)...

Zero clause BSD is "more BSD than BSD". It's a very simple story I can tell people to convince them to license their darn code.

This is why I objected so strongly to OSI retroactively renaming this license. There were _reasons_ for 0BSD to be named what it was. Calling the license "free" anything implies an affiliation with the Free Software Foundation putting it on the wrong side of the historical GPL vs BSD divide. I'm trying to convince people disappointed by the loss of a universal receiver license to move to universal donor licensing, so that they don't refuse to license their code at _all_ (which ~80% of github is doing). OSI muddying that message was incredibly frustrating.

> Nobody on my team (or the OSI's board) had ever heard of the 0BSD when
> the FPL was being reviewed

Which surprised me because SPDX had approved it months earlier and OSI had a policy of keeping itself in sync with SPDX. We discussed it on the spdx list, and SPDX published their license approvals.

> so we were all surprised to hear that the
> 0BSD had skipped OSI approval and jumped straight to SPDX for an
> identifier.

When Android merged toybox, Samsung asked me to submit it to SPDX for approval (to simplify Samsung's internal processes), so I did. Nobody ever asked me to submit it to OSI.

At the time I knew that OSI's lawyer wrote the article comparing public domain to abandoning trash by the side of the highway (linked above) and that their FAQ disapproved of CC0, the most prominent public domain equivalent license. And that they had started pushing back against license proliferation years ago, which at the time meant they'd stopped approving new licenses.

> I don't want to rehash all of the issues
> <https://lists.spdx.org/pipermail/spdx-legal/2015-December/001580.html>
> with the 0BSD, but we're comfortable using the 0BSD identifier on our
> license, regardless of whether the 0BSD is actually approved by the OSI/FSF.

I think the best summary of the issues was actually the timeline I posted.

I'm not the only person to strip down a BSD license into a public domain equivalent license; the John the Ripper guys also did so. But they used a different starting point (freebsd's license) and came up with a differently worded result. If that one had existed at the time, I'd have used it, but they did that in 2015. (After I relicensed toybox, before I submitted it to SPDX.)

Yet a license with _exactly_ the same wording as 0BSD was submitted to OSI under a different name both after SPDX approved it and after Android shipped it in the M preview.

I'll accept that's all a big coincidence, but OSI failed to do any sort of due diligence. OSI had a policy of keeping itself in sync with SPDX, yet months after SPDX had approved the new license, OSI didn't notice SPDX had already approved this license under its original name (SPDX having raised the "but it's ISC" issue during the initial approval process, and accepted the reference to OpenBSD as justification).

Months later, OSI noticed the conflict, but because OSI has no mechanism for admitting it made a mistake, they asked SPDX to change the name of 0BSD. I objected, explaining the reasons for the name (and why OSI's name was actively counterproductive) and pointing out the timeline (the link above), and OSI's response was basically that I'd convinced them to stop trying to convince SPDX to change their existing decision, but that OSI had no mechanism for ever admitting they'd made a mistake.

> @landley 's position is also clear:
>
> > I'd really rather ignore OSI entirely than explain that after zero
> > clause bsd had been in use for years, after it had been merged into
> > android and tizen, and after SPDX had published a decision to approve
> > it, OSI randomly accepted the same license under a different and
> > misleading name because this guy https://github.com/christianbundy said
> > so and OSI didn't do its homework. (Ok, that photo with the caption
> > "this guy" would make an entertaining slide, but entertaining damage
> > control is still damage control.)
>
> I'm obviously heavily biased, and would prefer not to trample the
> original 0BSD
> <https://web.archive.org/web/20050307174729/http://urchin.earth.li/%7Etwic/The_Amazing_Disappearing_BSD_License.html>
> with a modified ISC license,

That's basically a blog post. No software ever shipped with that calling itself zero clause BSD (I know, I searched at the time).

Toybox shipped with this license in 2013, and I explained the strategy behind the name in 2014.

> but when the time comes that we hit 1,000+
> repos we'll be happy to stand behind any decision that's made (the same
> way that we support SPDX in giving us the "0BSD" identifier).

I think this is a good license. I'd like to see more people use it. I think getting the name right is important, and I took the approach I did for specific reasons.

That said, if github wants to go with the John the Ripper license instead, go for it. I don't claim to have invented the idea of public domain equivalent licensing. It's apparently an obvious enough idea that somebody else reinvented about half of it years later.

Rob


March 24, 2017

The aboriginal linux mailing list still shows the occasional sign of life despite the project being mothballed for a year now. When it does, I try to point people at mkroot, but it's not finished, and when they do try it, it doesn't always work for them.

I need to set up a web page and mailing list for that project. It's got a repository but by itself that's not really a project. (And documentation.) And there's the toolchain binary hosting issue I keep poking Rich about, but I need to finish the toolchain build script first, and test them all with kernel builds under qemu...


March 23, 2017

New month, new instance of gmail disabling half the toybox list's subscriptions.

Of course dreamhost deserves half the blame for making it so I have no control over mailman's settings, which makes fixing it awkward and horrible: it has to be done manually via 20+ pages of web interface. But gmail definitely deserves at least half the blame here, and no _other_ mail service regularly does this. Just gmail.


March 22, 2017

Posted a status update to the j-core list about stuff we went over on my most recent tokyo trip. The tl;dr is "we're in feature freeze for silicon tapeout" (which literally involves writing a test suite bigger than the rest of the code combined), and we looked into implementing the sh3 mmu and it's terrible (far more than doubles the size of the chip) because it used the wrong strategy, and now we're stepping back and going "what mmu _should_ we implement that goes with the rest of the j-core design". Stuff continues to happen behind the scenes (rather a lot of it) but it's not visible to the public at the moment. Working on it...

Oh, and we did some work on turtle manufacturing, which is also blocked by testing (in this case the testing the boards need to undergo after manufacturing; we need hardware and software you plug each one into to verify it's good, including bitstreams to drive said hardware with at least test patterns).

Meanwhile, over in toybox-land, I finally got a fix checked in for Elliott's ps crash (which I broke adding thread support, and I still think my dirtree_flagread semantics are non-obvious but having reviewed them again I can't currently think of a better way to do it).

And we're working out when/where to apply postel's law. (Design issues. Always the hard part. Especially the _small_ ones that are too tiny to get a good grip on.)


March 21, 2017

The San Diego guys just asked (via the recruiter they go through) if I could spend another couple weeks with them. The money's decent (less so when you factor in the travel costs for these short gigs). I really want to ship the stuff I'm working on at SEI, but taking off a couple weeks here and there to refill the bank account a bit seems entirely reasonable.

Hmmm...


March 20, 2017

Finally got the paste rewrite checked in. A week or two back somebody posted some paste tweak to the busybox mailing list, including test suite entries, and of course I ran those test suite entries against the toybox one (yay tests!), and noticed it didn't work for even basic stuff. (I suspect this command is yet another holdover from back before the "pending" directory went in. I need to do a full audit of everything at some point.)

Anyway, I spent a largeish chunk of the past weekend rewriting it.


March 17, 2017

I'd like to clarify that I've only started a bug report "Dear Princess Celestia" the one time, and it was several years ago.


March 16, 2017

I just noticed that glibc turned MB_CUR_MAX into a function call. No WONDER the multibyte stuff's insanely slow with that library.

Oh well, I only care about performance under bionic and musl: that glibc nonsense can go hang. Sigh: except musl's doing it now too. Honestly, utf8 parsing is _simple_, that's one of the big advantages of utf8, why are the C libraries making this so complicated and expensive? Do I need to write inline code for this?


March 10, 2017

Back home, recovering from jetlag.

Blah, I get used to semi-reasonable phone battery life in Tokyo and then I get back to the USA and the stupid NSA listening crap kicks in and kills my battery life again. (If your phone has a "speakerphone" mode why do you think it can't hear that well the rest of the time?)

And people wonder why I have a band-aid or electrical tape over all my laptop cameras when not in use. (Well the linuxconf.au guys don't, they gave me this little stick-on camera shutter as one of the speaker gifts. I'm sticking with the band-aid, an idea I got from Val Henson's livejournal, which lets you know how long ago _that_ was. The pad protects the lens for when you do want to use it.)


March 9, 2017

Last day in Tokyo this trip. Jeff and I talked about the GPS stuff and may actually have had a breakthrough: this nonsense makes perfect sense in POLAR coordinates, why are we doing everything in cartesian coordinates?

I didn't make it out to disney or the pokemon center, but I asked if I can get more flavors of kit-kat and Jeff and Pat took me to a "kit kat chocolatier", which is an upscale boutique store that sells special flavor kit-kats (and nothing _but_ specialty kit-kats) in packs of 4 small sticks for $4 each. (Or big _really_ expensive assortment boxes.) I bought "butter" and "pistachio grapefruit". I didn't get the maple strawberry.

Flying home from Tokyo. It's another one of those "your plane takes off at 9:45 pm, sleep is impossible on United Economy Class, have fun being awake for two days!" dealies. I remember the one time I flew home on Delta leaving at a reasonable hour and implemented most of toybox "ps" on the flight. That was nice. All the flights since then have been sleep deprivation discomfort nightmares to the point where not only can I get nothing done on the flight, but there's a day or two recovery afterwards. But hey, no immediate danger of throwing up this time, so that's an improvement!

So many south-by-south-south people on the actual flight to Austin. Talking about how you can never let engineers and designers talk directly to each other or it'll screw up your project management. It's the dot com boom all over again. I now hate Groupon and whatever this guy's web dating site is, on general principles.

Arrived home. United has promised to try to find my luggage over the next few days (their current theory is Denver). The Soup or Shuttle people (I'm picking Shuttle, there's soup at home) have been evicted from their normal counter attached to a wall and have set up an adorable little tent across the hall, due to airport construction.


March 7, 2017

Noticed I fly back thursday evening, not saturday morning. Frantic schedule reshuffle to try to compensate.


March 4, 2017

Another day spent mostly in the hotel.

So last month I figured out how to implement getconf in about 150 lines, but 20 of those are makefile plumbing doing evil things with sed. Finally got that close to being ready to check in.

Long talk with Jeff working out todo list stuff for the week. I now have a general idea why we're not doing the j3 mmu just yet: feature freeze for our ASIC tapeout, and the hitachi mmu design isn't the direction we want to go in (no chance of fitting that mess into an lx25, let alone an lx9; it generally doesn't FPGA well).

The blocker with turtle board manufacturing is testing: we need bitstreams and kernel support to drive all the hardware. We've got to test the serial console, audio out, HDMI out, ethernet, usb, sdcard, and GPIO "hat". (We also need to do a turtle board website.) We need to get that together and make a burn-in plan we can send to the manufacturing guys.


March 3, 2017

A day off! Spent huddling in the hotel. Sometime after lunch (I made it as far as 7-11 for a steamed bun; I keep trying to order "with curry in it" and getting "with sausage puck in it" but eh, close enough) I finally felt recovered enough to fire up one of the computers I brought with me. The mac's at the office (good riddance). The big machine's been suspended since last thursday and the battery didn't last that long, and I need an adapter to plug it in to Japanese power. Netbook it is!

I've decided to revert the toybox cut.c changes, both mine and the other guy's. When reviewing his code I didn't like the approach, and I just haven't got the stomach to look at mine right now either. I want to clear todo items.

Poking at the www directory, starting to tackle the faq.html backlog. It's a todo item that needs doing, and currently that's about the level of focus I have to deal with things right now.

But most of the day went to downloading/reading/replying to a week of email (through pop3, because the combination of thunderbird and gmail's imap assumptions remains problematic). At about 15 minutes per ~500 message chunk, a week's accumulation of several mailing lists (including linux kernel) took several hours to download.

Hey, I've been nominated for the Google Open Source Peer Bonus Program! I never filled out the paperwork for the bug bounty stuff, but they found another way. I can no longer say Google's never paid me a dime, because they're offering to send me a $250 gift card! (Woo!) Ok, the link I got to the google doc I'm supposed to fill out doesn't work (a sort of 404, only the google docs version of it; I might be cutting and pasting it wrong). But as they say, it's an honor just to be nominated.

All the naps today. Finally left the hotel for dinner with Jeff and Rich and Pat at a mexican restaurant, where I got off the Ginza line in a part of Tokyo that Jeff insists the Ginza line does not go. I think this means I got a train lost. (I went up an elevator and then couldn't find it again to go back sixty seconds later. Possibly one of them brigadoon things. Oh well, this is why my phone has GPS and google maps.)


March 3, 2017

Everybody comes on the last day of these things. We ran out of handouts (and business cards) printed in Japanese and had to use ones printed in English.

Nine hours of standing in painful shoes. Three hours of frantic booth packing and boxing. Then we went home. There was more to it, but I couldn't tell you what at this point.

My big learning experience for the day (which I already knew) is that my working style and Rich (the sales guy's) working style do not combine well at all under pressure. (He never stops smiling and confidently telling us what happens now. He's often wrong, but never uncertain.) During booth packing, this became... pronounced.


March 2, 2017

Back to "Tokyo Big Site" (really, that's what it's called) for the second day of Smart Energy Week. We stand in the booth. We tell people about our stuff when they ask (generally about 3 times an hour, although sometimes we have two or three groups at once). Then we Stand There Looking Professional.

My suit jacket does not fit. (They didn't have that in Gaijin Size either.)


March 1, 2017

First day of smart energy week and we're _already_ too stressed to function well. Woo!

We got there when they opened at 8am and we had Badge Panic: the paperwork to get our badges was in the booth, we couldn't get to the booth without badges. Jeff threw up his hands that this was unfixable because Japan and went off to sulk (it was a Looooong 3 days) so I started my method of solving this (go up to first line bureaucrat as a supplicant, apologize profusely, be humble, acknowledge that their job isn't to fix this but to tell me who _can_, so attempt to get referred to a manager who can start the exception process and hopefully be gradually escalated to people who can fix things). Alas, this does not mesh with Jeff's approach to bureaucrats (which involves being angry and important and disapproving, and assuming the worst of everybody; he was 100% certain that the manager the door guard summoned would throw me out on the street because That's How They Do That Here). So after that got screwed up I started over at the registration desk, but every time I started to get traction with somebody I was talking to, Jeff would come up and be visibly angry at them and they'd stop trying to help me. (Either of us could have gotten this fixed, but our methods were 100% incompatible. It was not a "good cop, bad cop" situation.) The third time Jeff derailed what I was doing, I sat down and waited for somebody else to fix it, at which point Pat wrote out a new form longhand to get herself registered (including writing a "business card" by hand), and then used that badge to go to the booth to get the stuff we left there yesterday.

Then we left Jeff and Rich and two Japanese men from a partner company running the booth, and Pat took me to the end of the train line to buy shoes at a ridiculously large mall called Lalaport in Toyosu. We went to a place called ABC-Mart which, despite the name, is a shoe store. It sold me a terrible, terrible pair of dress loafers which would be fine shoes if they had them in my size, but the entire store only stocks up to half a size smaller than I wear. But at least I could physically fit these on my feet.

Before this, on the way through the mall, we stopped at a sock store. One which sells expensive custom socks, and that's it. I got a single pair of white socks for $9. (We needn't have bothered because the shoe store had normal socks, but we passed something on the list and went "Socks! Right! Need those.")

Did I mention the mall was enormous? (Over 400 stores.)

I then stood, in painful shoes, for 9 hours. I don't recommend it. (A woman in high heels stood at the corner of an adjacent booth handing out literature for the same amount of time, not showing any pain. I wanted to ask if she had any shoe related advice, but don't speak enough Japanese.)

At dinner Rich the sales guy kept talking about the Dorito. I left the table and sat in the waiting area near the entrance until it was time to go. I do not want to hear what an affluent older white republican male has to say about current politics; they had their say and that's what got us into this mess, they can stop talking now. (But he won't. And he _never_stops_smiling_.)

Tired, stressed, still a bit sick.


February 28, 2017

Setup scramble day 3. We plugged a lot of aluminum and plywood into a lot of other aluminum and plywood using plastic connectors, and then there was a lot of electrical wiring and some plexiglass. Also, velcro comes in tape form, in very large spools: we made extensive use of this.

We're using a sort of modular booth system that they once hired a consultant (or possibly somebody very good with lego) to assemble in a nonstandard way, which they really liked the layout of but DID NOT DOCUMENT. (This might have been at the first Distributech, in florida a year ago?) They're trying to reproduce this layout; the elaborate diagrams in the book describe the standard layout we're not using. Instead they have cell phone photos of the look they want, and we're trying to figure out what pieces go where from those pictures.

One of the bolts needed to assemble one of the tables fell out in shipping, and we couldn't find it, and of course it's a US size you can't get in Japan. (Mail order sure, but not overnight.) Luckily they can dismantle one of the tables back in the office and get an equivalent bolt for the duration of the show.

The graphics order finished early, so we picked it up and stuck graphics on things. (The print shop is next to the office, which is like an hour from the venue by train. After the first day we only took the train _back_, and took a taxi to the venue in the mornings. With 4 people it amortizes out to a reasonable-ish cost.) Some of the graphics are backlit and were printed (at great expense) on transparent plastic instead of the extra-bright synthetic paper used for everything else. I think the synthetic paper is brighter; the ink printed on the plastic is NOT transparent, and it's a solid color background. There's some pretty bright lights behind it but it's not neon or anything. Oh well, learning experience.

At the end of the day we went back to the office to try to get the Mac laptop I brought to update to a version of keynote that can run the presentation they want to have cycling on the tables. (We need four computers to power the four TVs in the booth, and Jeff likes macs so he uses macs for everything.)

Jeff did not believe that I have trouble getting Macs to do everything, and assured me that this was trivial and they're so user friendly and almost three hours later he at least managed to get the new version of keynote on his machine to export the file to the old format the old version we couldn't upgrade my mac from could read.

By "couldn't upgrade" I mean the Apple store wouldn't show keynote in the list of things it could upgrade (even though it was installed), and when we searched for it by name it had an "upgrade" button that just spun a progress indicator endlessly without saying what was wrong, and after we reset my Apple ID password and logged in a fresh time and that didn't fix it he dug through the magic log files only Apple experts know about until he saw that my Apple ID is "not provisioned" whatever that means.

So yeah, after 3 years of me being unable to upgrade it, the mac expert tried to get it to work and the same silent failures hit him too, but Macs are more user friendly because they're what he's used to. Me, I admit xubuntu is terrible and I'm just really familiar with it.


February 27, 2017

Digestion-wise, I am now to the point where I can eat small amounts of food, and then between one bite and the next it turns from food into This Tastes Wrong.

Booth setup, on site. The site being a sort of giant upside down pyramid you reach via a long train ride through industrial wasteland. I didn't know Tokyo _had_ industrial wasteland, but apparently it does! (Fallout from the 1997 asian economic crisis: optimistic construction boom expanding the city out in this direction wound up with a lot of buildings nobody wanted to pay for. Which means the enormous convention center way out here is cheap to run events in, at least by Tokyo standards.)

The _start_ of the commute happened bright and early, meaning we set out at the height of Tokyo rush hour. That was an experience. The trains weren't standing room only, they were way more packed than that. I was literally pressed on all four sides by people in black suits, and then when we got off it was almost the same density moving, you could _not_ cut across the flow and just had to sort of follow along until you got to a decision point. The train out to the convention center was less crowded, but we also stopped off at a starbucks for a bit to let the crowd thin out. (They don't have mango black tea lemonade here. They have Matcha Latte but I wasn't up for drinking much of it.)

In the convention center, which is cold and cavernous and could double as an aircraft hangar (and the delivery doors were open so they couldn't heat the place), we got a big square with tape at the corners marking our space on the bare concrete and four large crates of parts. From this, we must assemble a booth. We have 3 days. At the end of the show, we have 4 hours to take it down and pack it away again.

Most of the other booths around us had professional construction crews working since the 3am delivery time. They're cutting plywood with power tools. The one right behind us is welding rebar.

The electricity is under conductive metal conduits (mess up and you zap people 20 feet away), and they gave us a "screw terminal" which Jeff had to cut an extension cord to get bare wires to connect to it. The electricity is of course live while he does this. Somehow nobody's been electrocuted yet. (OSHA has no place in this country.)

Once we measured out where stuff should go and laid extension cords and ethernet cable down in a big backwards Z (dyslexic zorro is our electrician) we waited for the carpet guys to come put carpet over it. And waited. And waited. Marketing Rich told them to come at 5pm because "we need as much time as possible to set up the booth", but carpet is step 2 in our checklist, the booth goes on TOP of the carpet. We can't do anything else until the carpet is in.

Everybody else's carpet is in already. The booth right behind us is now welding fancy painted rebar 15 feet off the ground. It's quite impressive.


February 26, 2017

In Tokyo! I am not well. My first attempt to leave the hotel was derailed by an urgent need to turn back around and spend yet more quality time in the bathroom. I haven't eaten anything since Thursday and the idea of food does not appeal at _all_ right now. But at least they have my tea here. No matter how nauseated, my system knows what to do with sweet cold tea with milk in it. I grew up on this stuff, it's digestively reassuring, and several bottles of it did help my system achieve throughput, which is progress without being an improvement. (It's now a different kind of bad.)

I do not have the dress shoes and sport jacket I meant to bring for Smart Energy Week. (I forgot. They're back in Austin. Technically Rich-the-manager asked me to bring them to the california convention I wound up not going to, but I should have brought them to this one.)

Finally made it to the office. There's a certain amount of panic about setup for the show. I went with the old reliable "let's make checklists", so we did that. Write down everything that needs to happen, sort it in order, figure out what we can do now and what has to happen after other things that have their own schedules attached.

The tight deadline is because the booth was just used at Distributech in California, and had to be shipped to Tokyo at great expense to get it here soon enough for us to use it for the other show. (I've been registered for Distributech twice but never actually ended up going; all the spam my work email gets is due to that.) The four giant storage cubes of Booth Parts should arrive onsite at 3am.

We needed to go over the graphics files for booth signs and such, so we did that. We needed to shop for booth supplies that need replacing after Distributech, so we did that. And asked Niishi-san to proofread all the japanese text on the graphics and correct it to what native speakers would say.

The engineer who made the graphics isn't japanese, he's chinese, but he's basically an Otaku and has a Japanese wife, and wants to do things the Japanese way. And just as american engineers get promoted into management when they're ready to retire and become useless, Japanese engineers get promoted into marketing (you built it, you know how to tell people what it does) so he very very much wants to be Vice President of Marketing. So that's what it says on our business cards, even though he mostly works out mathematical signal processing algorithms for us (and is quite good at it, but don't tell anyone).

The problem is, when actual marketing needs to occur he refuses to tell people what our stuff can do because he doesn't think they'll believe us, and wants to be "credible". So we went around changing all the "fault location to 10 meters" claims to the more accurate "fault location to 3 meters", and so on. ("But other people's products can't do that!" "Yeah, that's why they should buy ours, and we can do a live demo if they challenge us." Sort of the POINT of marketing? Sigh. As marketers go, he's a really good engineer. Very mathy.)

The mac I brought back needs to be upgraded because it's vintage 2015 software all the way down and since Steve Jobs died you have to give Apple more money annually in order to be able to read newly produced data, so we don't expect it can read the file formats current Apple software versions save. As with many things, this is assumed to be easy so they're leaving it for later. Me, I distrust all things mac at this point. (There are four graphics tables with a big TV under plexiglass, each run by its own laptop. And since it's a mac show they want to use macs, so we're using mine, Jeff's, Pat's, and Rich's mac laptops.)

On the bright side, this mac problem isn't one of the ones where Apple's "remember how the iMac didn't have a floppy drive and that was Steve Jobs being brilliant? Let's remove a feature the hardware's had for years and call that this year's upgrade" nonsense has obviously crippled something. This is just "who needs compatibility" software laziness backstopped by greed.

Jeff is sure the software upgrades in the mac will be trivial and go smoothly, since that's how his universe works, so he keeps putting it off. I fully expect the box to wind up bricked and need hours of forensic spelunking with special tools to have a desktop again because that's how my universe works. We'll see.


February 24, 2017

Travel sickness. 3:30 am alarm for another 5:40 am flight (to SFO this time) and then an 11 hour flight to Japan. Sick the whole way, with that lovely "nausea plus constipation" combo that just refuses to resolve itself either way, and United flights are uncomfortable enough when you're well.

Got in saturday afternoon (9 hour time difference plus almost 24 hours of travel).

Lemme back up: I saw more talks yesterday, went out to dinner with two j-core developers (a tiny little meetup, much fun was had by all and I emailed some ideas they had to Jeff) and along the way I had a Voodoo Donut because it's Portland and that's what you do.

The donut may have been a mistake.

My redeye on the way in turned into a redeye on the way out because although my flight out of California wasn't a redeye (I checked!), I had to fly from Oregon to California to _get_there_. Luckily I caught it right after the j-core dinner, but not in time to get to bed early. No trains that early, but my airbnb host said he'd drive me to the airport for $20 even at 3am, and I took him up on it. But this meant I only got a couple hours of sleep, and was nauseous when I woke up.

The nausea did not go away. I dunno if it was the donut or the fact I tend to get sick after a week in a new place (the different local bacteria catch up with me), but I had borderline food poisoning for the entire duration of an international flight. On United, which is a stacking debuff although the flight attendants were very nice. I couldn't eat the in-flight meals, and when I tried to at least eat one of the not-pretzel snack bags I gained an understanding of the saying "tastes like ashes".

I don't recommend the experience.

On a related note, thanks to the airline I am now the proud owner of something called a "stroopwafel". (Mint in bag. Well, caramel. Some foxing around the edges.) Maybe, someday, I might be able to eat it. We all have dreams.

You know how I just complained about being too jetlagged to prepare and give a proper talk at ELC? (Well, to my standards anyway; being busy and distracted contributed too. Yes, I'm aware people said they liked it and I shouldn't be too hard on myself, but I think I'm going to stop giving talks until my schedule lets up and I can devote enough time to prep work.) Anyway, now I've gotten 2 hours of sleep in a 2 and 1/2 day period on top of the jetlag, having just spent an active convention trying to recover from the jetlag the first redeye at the _start_ of the week gave me. I do not expect to be of much use to anybody tomorrow. I tend to perk up in Tokyo, a change being as good as a rest and all, but there are limits.

Bonus fun: the international dateline ate a day so today was Friday and tomorrow is Sunday. Meaning I get one less recovery day before showtime. But for now, I can haz hotel room with Zzzzz in it.


February 22, 2017

Feeling much better today. Saw a bunch more talks.

There are a bunch of BSD-ish licensed embedded RTOS projects going on; the Linux Foundation has decided to push Zephyr (the same way it pushed maemo, meego, yocto, and tizen before it; I assume somebody gave them money) but Google's still doing Fuchsia, Sony added an ELF loader to contiki, and so on.

Walked from the ELC venue (hotel) out to a whatever Kinko's is calling itself this week (8 blocks away?) to pick up the print job the SEI guys prepared for my Turtle board demo, set up the sign, and showed people the board! We haven't got a website set up yet (there's a domain purchased but it's a parking page), but we have a board and it runs Linux and we're preparing to make more.

The attendees seemed to like it. We're committing to do a production run in May, and are accepting preorders, for a definition of "accepting" that's deeply problematic. Lots of people walked off with the preorder forms, which have no contact info for us. (Not even an email address.) And no way to pay us: it says give us your contact info and we'll contact _you_ back, somehow, at some point. (Jen assumed people would read the form, fill it out, and hand it back at the booth. One person gave me cash to order the board; that was the only filled out form I got back.)

The problem is we haven't figured out how to take people's MONEY yet. For the big products it's corporate purchase orders, but individual retail turns out to be tricky. The company is headquartered in Canada and the engineers are mostly in Japan, and both jurisdictions attach buckets of regulatory baggage to getting a "merchant account", hoops we haven't jumped through yet, and without which we can't take credit cards. They have a US office, but haven't got a US corporate subsidiary. (This is why I still get paid as a contractor, a "temporary" condition now in its third year. They can't do the insurance and tax withholding stuff until they have a US subsidiary, and two countries is already more than they can handle at current staffing levels.)


February 21, 2017

I no longer try to fly on the same day as I do whatever I'm flying in for (travel eats a day), but I may have to amend that. Redeye flights eat _more_ than a day, I am _out_of_it_ today.

Saw the Device Tree in Zephyr project talk. (Lots of Zephyr talks, I guess that's what the Linux Foundation's overstocked on and trying to push this year. Try the veal!) Nice to see device tree moving beyond Linux, no real solution to the "all those device tree files in the kernel are GPL'd so BSD won't touch it" problem which is why we now have to deal with ACPI on arm. (Bravo guys. Bravo.)

Went to the Embedded Linux Size Reduction Techniques talk which mentioned toybox! (And I felt bad telling them that toysh is crap, but it is. I need to find time/energy to work on that. But not right now, still preparing my own talk material, while attending all these others.)

Then I skipped the next couple talks to finish preparing my OWN talk, and gave it starting at 4:20...

Sigh. Same failure mode as linuxconf.au: didn't fit the material in the time allotted. And the darn jetlag hangover from yesterday's redeye REALLY SCREWED ME UP: around 4pm I was in desperate need of a nap, but needed to ramp up to be onstage. I caffeinated as heavily as I could, but my eyesight goes all sparkly with visual migraine symptoms if I caffeinate too much these days. Hard to give a talk if you can't see.

Several people told me they enjoyed it, but I has a disappoint. I could have done so much better. I hope the video's at least watchable. I _really_ need to do podcasts of this stuff.


February 20, 2017

My flight to ELC left at 5:55AM, meaning I needed to leave for the airport at 4, meaning I needed to be up by 3, and I wound up staying awake all night.

I don't recommend it.


February 19, 2017

And I will be demoing my turtle board at the ELC showcase thing. Lovely. (I asked them to send me a second, but they can't do it in time. They might get a poster together though. Printed there.)

We need an external push to do preorders to actually manufacture these suckers. The design's been ok for months, we just need a deadline attached to defend it from all the other things with deadlines attached.


February 18, 2017

A couple weeks back The Adelie Linux guys poked me on the #toybox channel on freenode and suggested I look at their getconf implementation, which they're willing to license to me under 0BSD. And I've gradually been poking at it. It's a reasonably clean implementation, if you're willing to have an #ifdef staircase for every symbol. Which I am not.

So I went through all the symbols (in 3 categories) and confirmed that the names given on the getconf command line are mechanically transformable (via regex) into the symbol names pulled out of limits.h and unistd.h. And I can get that list of symbol names with:

gcc -E -dM - <<< '#include <limits.h>'

So I need to do things with sed/awk/grep to generate a new header that defines all the symbols (to the "undefined" value if necessary) and add that to the make.sh plumbing.
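Something like this, maybe (a sketch of the shape, not the actual make.sh plumbing; "defined.txt" is just an illustrative scratch file name):

# dump every macro the header defines, keep just the uppercase names
gcc -E -dM - <<< '#include <limits.h>' |
  sed -n 's/^#define \([A-Z_][A-Z0-9_]*\).*/\1/p' | sort > defined.txt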

First is coming up with a sed expression that'll parse the C source file to create a header containing an array of either the symbol that's #defined in the header or an UNKNOWN flag (probably -1) so the C doesn't need 5 lines of:

#ifdef SYMBOL
  {"SYMBOL", _SC_SYMBOL},
#else
  {"SYMBOL", UNKNOWN},
#endif

For half the symbols. (It's icky and it's why I haven't done getconf before now.)

There's a few stages of this. One is rewriting the C files so the symbols are just:

char *limit_vars[] = {
  "_POSIX_AIO_LISTIO_MAX", "_POSIX_AIO_MAX", "_POSIX_ARG_MAX",
  "_POSIX_CHILD_MAX", "_POSIX_DELAYTIMER_MAX", "_POSIX_HOST_NAME_MAX",
  "_POSIX_LINK_MAX", "_POSIX_LOGIN_NAME_MAX", "_POSIX_MAX_CANON",
...
};

And then come up with a sed line to extract those as normal unquoted strings one per line. (And do the prefix mangling that transforms them into the expected symbol names.)
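For instance (a hedged sketch using grep -o where the text describes a sed line, same idea; "wanted.txt" is an illustrative name, and the prefix mangling is elided):

# pull each quoted name out of the array, one per line, in array order
grep -o '"_[A-Z0-9_]*"' getconf.c | tr -d '"' > wanted.txt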

Then I need to use that list to turn the first list (from gcc -E -dM) into a list of #defined symbols in the same order as the string array, so I can use the string array position to index the symbol array.

Of course doing this is trickier than it seems. I need to substitute values preserving order, the tool for which is sed. I don't want a shell for loop iterating through symbol names and calling sed each time; that would make the build really slow. So I have a sed invocation creating another sed invocation, which possibly violates the geneva convention. (I'd have to check.)

The tricky bit is coming up with a sed command line checking for each symbol and outputting it, and outputting an alternative at the end if it _hasn't_ been matched, while preserving order. ("at the end" meaning the end of the sed script, not the end of the symbol list input.) Grep can remove things, but not replace them maintaining order.
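Here's roughly the shape of it, assuming the defined.txt and wanted.txt scratch files sketched above (the generated header name is made up):

# sed writing sed: one rule per defined symbol tags matching lines with a
# trailing comma, then a fallback rewrites anything still untagged to -1
sed 's/.*/s@^&$@&,@/' defined.txt > gen.sed
echo 's/^[^,]*$/-1,/' >> gen.sed
sed -f gen.sed wanted.txt > generated/getconf_vals.h

That emits one "SYMBOL," or "-1," per line in the same order as the string array, so the C file could #include it inside a long array initializer and index the string array and value array by the same position.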


February 17, 2017

Canceled my return flight from Portland, instead work's flying me to Tokyo.

Since I can't leave until after ELC, I get like a week's notice this time! Woo! (Luxury.) Of course I was already panicking to get everything done before ELC...

In THEORY mkroot can do everything Aboriginal Linux can in like 1/5 the code. (Factor out the toolchain and the rest simplifies greatly.) In practice, making it look easy takes lots of up-front work. I've been trying to get the actual code to the point I can do a release before doing the talk material explaining how to use it. If this _was_ my $DAYJOB I'd probably be in reasonable shape. As is... I can cut a release from the conference, right?

Working on it...


February 13, 2017

It's so easy to fall behind on blogging, but I promised in my patreon I'd try to keep up with it.

I've been poking at my ELC talk presentation materials for a while now. I need to get a toybox release out, talk Rich into a musl-cross-make release, do a mkroot release depending on both of those, and then I can write the actual presentation using all of that.

The presentation is growing a bunch of branches and it's hard to sequence. I want to cover "make defconfig + hello world initramfs", and why it's only simplest from a certain point of view. I want to cover "hello world on bare metal". I want to cover the Linux boot sequence. I want to walk through the miniconfig symbol list and show what the vital ones DO, and then what the important but optional ones do...

As with most of my presentations, figuring out the SCOPE is hard enough, then the sequencing, then the timing. I don't consider a topic something I can really talk about when I have less than a half-dozen hours of material on it. This time I have a 2 hour tutorial, but it should be interactive not just lecture.

Sigh. What I should really do is screencasts with associated audio, and youtube tutorials. I probably need video editing software for that though. (And deadlines to force me to _do_ it. :)


February 7, 2017

Wanna see something creepy and orwellian?

A recent change to the "National Industrial Security Program Operating Manual" (NISPOM) requires Department Of Defense contractors to establish and maintain an "insider threat program", under which "Reportable Items" include allegiance to the United States, foreign preference, sexual behavior...

Of course this includes anything space related, and stacks on top of the ITAR export regulations that killed the US space program (because the crypto panic of the 1990's rubbed off on the space program when Intelsat 708 blew up in 1996, and then the space side didn't get relaxed when the crypto side did).

So now if you buy a screwdriver at Home Depot and use it to turn a screw on a spacecraft, that screwdriver becomes a munition that cannot be discussed with non-US persons (I.E. do not mention it EXISTS on the internet), AND you're subject to full McCarthy witch-hunt loyalty snooping inside the bubble.

This puts the guy who owns SpaceX joining the Alleged President's circle of advisers in a new light, doesn't it?


February 6, 2017

Have I ranted about the new dmesg api? Because it's terrible. And now we're stuck with it. (Even though the documentation still says "testing", and presumably continues to for the foreseeable future.)

So this guy (now one of the senior contributors to systemd) came up with a new /dev/kmsg API, and if you "cat /dev/kmsg" it hangs at the end. (So you have to open it O_NONBLOCK to get the _old_ behavior. That's just beautiful.) And that's when it doesn't spontaneously fail with "invalid argument" because your read() buffer is too small:

$ dd if=/dev/kmsg bs=110
6,30825,1057265865113,-;cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm), (N/A)
6,30826,1057265865118,-;cfg80211: (57240000 KHz - 63720000 KHz @2160000 KHz), (N/A, 4000 mBm), (N/A)
6,30827,1057338815561,-;cfg80211: World regulatory domain updated:
6,30828,1057338815576,-;cfg80211: DFS Master region: unset
dd: error reading '/dev/kmsg': Invalid argument
0+4 records in
0+4 records out
335 bytes (335 B) copied, 0.000563614 s, 594 kB/s

What is the magic "always big enough" value? It's 8k. Yes, they have an arbitrary limit that's _not_ page size, so "cat /dev/kmsg" may _seem_ to work but it's not reliable. If you implement something like cat using page size it'll work in your testing and then magically fail later in the field.
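Assuming that 8k figure, each read() hands back one record and the buffer is never too small, so this should reliably grab four records where the bs=110 version above fell over:

dd if=/dev/kmsg bs=8192 count=4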

Except they made sure read was unreliable for other reasons too: "the buffer moved under you" can return EPIPE. Isn't dmesg a ring buffer? Yes it is, but they can't make that work reliably without EPIPE. Don't we already have EAGAIN, an existing errno that tells libc to retry the read? Yes, but they didn't use it because systemd guys care nothing for what came before, or consistency with the rest of the system. They value what they pulled out of their ass, and somehow Linus merged this.

This is just such an amazingly well designed API its author should receive some kind of award, at a very high speed, possibly aimed at his head. And of course it's a new API that lives alongside the old one but doesn't share state; SYSLOG_ACTION_CLEAR is ignored, now you lseek SEEK_DATA...

So yeah, Elliott sent me a complete rewrite of dmesg because he wants the new --follow option, and it has to be based on this new nonsense the kernel guys did, because adding a couple new klogctl() entries would be too hard. It's not like you could do a SYSLOG_ACTION_RINGREAD that reads from a ring buffer position to the end of the ring buffer (with -1 meaning "wherever it currently starts"), never returns more than SYSLOG_ACTION_SIZE_BUFFER bytes so you can reliably size bufp, and returns the current ring buffer end position (which wraps at the same SIZE_BUFFER value) when bufp is NULL. Then the other one would be SYSLOG_ACTION_RINGWAIT, which blocks until the current ring buffer end position != the one you pass in.

But no, that would be too disruptive, instead the kernel guys got a whole new /dev node that's SO well thought out. Sigh.


February 4, 2017

Sigh, ps and top remain fiddly.

(Update: really fiddly.)


February 3, 2017

A problem with replacing Aboriginal Linux with mkroot is that the musl-cross-make native toolchain doesn't include make, bash, or distcc. Hmmm. I haven't written my own make yet, and that's a big one that's not in toybox's roadmap to the 1.0 release because at the time I was thinking it belonged in qcc, but I'm not doing qcc any time soon. I can do distcc as an overlay (it's optional anyway, not needed for chroot/container mode) and I _do_ have a proper bash replacement shell in the toybox todo list (as basically the last item, although it could get bumped higher).

Meanwhile in toybox, oneit.c is broken (when I taught xopen() never to stomp stdin/stdout/stderr I forgot to switch it over to xopen_stdio(), oops), and I should probably use returns_twice instead of noinline for the XVFORK() stuff.

Backstory: musl-cross-make's vfork is broken. It seems to be an interaction between musl and current gcc where the compiler doesn't know vfork can return twice (just like setjmp), and thus it "optimizes" stack usage: function calls reuse bits of the stack that store local variables the compiler's liveness analysis thinks we're done with, but which we're NOT done with if you can longjmp() back to the earlier part of the function (which vfork does when the child exits).

The compiler is supposed to _know_ this, it's CLEARLY A BUG. But whether it's gcc's bug or musl's bug is not entirely clear.

I'm now on my third attempt to fix it. This is one of those "Rich has very strong opinions about how people should use his code and intentionally breaks ways of using it he doesn't agree with" things, ala "things like dropbear built against musl don't work on nommu because he provides a broken but existing fork() so the ./configure stage doesn't know to use vfork() instead, and there's NO WAY to tell at compile time except maybe checking for #ifdef __FDPIC__, which doesn't help binflt or static pie".

Meanwhile I'm over here hitting it with a rock until it works.


February 2, 2017

Working on musl-cross-make and the mkroot kernel stuff. I've got an mcm-buildall.sh script that tries to use musl-cross-make to build cross compilers for all the targets musl supports, but there's fiddliness (cortex-m is nommu but arm doesn't have fdpic support yet, so it only builds static PIE which is a lot less efficient).

Not entirely sure where to post this. Try to convince Rich to merge it? (I'm trying to convince him to host the binary tarballs it produces as output.) Put it in the mkroot repo? Hmmm...


February 1, 2017

Starbucks emailed me a free drink coupon for my birthday, and I like hanging out there to program. (With the big headphones to drown out Zombie Sinatra.) If you get a mango black tea lemonade in the "egregious" size (um, alegretto?) it's 50 cent refills as long as you hang out, and the 9 cell battery is still working in my netbook.

Alas, I was too busy running errands to make it there, so the coupon expired. Wound up trying to get work done at the big machine instead.

The big machine (halfbrick) is the 8-way SMP i7 laptop I got from System76 back in 2013. It still works fine, but is not as portable as the netbook and I generally try to have one "master" machine that I can rsync to the others without worrying about integrating diverse changes. But halfbrick is waaaay faster and I'm trying to get musl-cross-make toolchains for all the targets. This means I'm collating my various toolchain build snippets into a single "all.sh" that iterates through all the known targets and builds both static cross compiler and native compiler versions of each. (First building a dynamically linked i686-linux-musl cross compiler that it can build those other statically linked cross compilers with. This means the i686 target gets built 3 times.)

My goal with this is to get toolchains built how I like them, test them all, and then make puppy eyes at Rich to cut a musl-cross-make and host binary tarballs of toolchains built with this script. (Hence script must be portable and reproducible. Or at least demonstrate the build invocations I want to Rich so he can do it his way.)

Alas, Rich is really busy and hasn't got much time to coordinate on this. I've asked him to debug some weirdness I've seen, but in terms of actual project design work, musl-cross-make is too far down his todo list at the moment.


January 31, 2017

Trying to get a toybox release out. So many todo items, but they can't all fit in this go.

I'll probably just do a "ship what I've got" thing at some point. Until mkroot is ready I don't have a proper test environment to build Linux From Scratch under; I suspect there will be lots of stuff to fix when I finally get that connected back up.


January 30, 2017

Apparently I am _not_ flying to Distributech this week.

On Saturday Rich (the company president, not the developer) said they'd need me as a warm body to help run SEI's booth at Distributech in California, and they were going to fly me there either today or tomorrow. But today, word is we're not doing that. This is the second year in a row they registered me for Distributech, and there is SO MUCH SPAM related to this. (Going to my work email instead of my personal one, but still.)


January 29, 2017

I called Jeff and had a long talk with him, and I just can't leave SEI right now. The money's terrible but I've spent two years working on this technology and I want to see it SHIP darn it.

Apologized profusely to the Colorado recruiter. I've been dragging my feet about filling out the paperwork for days, and I'm experienced enough at being me to spot when I'm trying to tell myself something.

Sigh. When Jeff visited last week he said he needed a couple more weeks to resolve the funding stuff. Under normal circumstances this wouldn't be any of my business, but "when can I go back to a full-time salary" is kind of my business.


January 27, 2017

Hanging out at the Starbucks in the domain, poking at mkroot, trying to come up with a related kernel build design.

Right now it makes a root filesystem, which is target-agnostic but just uses the supplied toolchain to determine what it's building for. But kernel builds have a .config, which I assembled in Aboriginal Linux using target-specific information appended to a generic portion. And then there's creating the run-emulator.sh script which has qemu command line arguments, kernel command line arguments, and the serial console. Plus the location the resulting kernel binary lives varies all over the place. (It would be lovely if all qemu targets had the ELF loader hooked up so I could always "-kernel vmlinux", but no. Several want arch/$ARCH/boot/*[Ii]mage but it's not that simple because arm builds BOTH "Image" and "zImage"... Wheee.)
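So the kernel side probably grows a per-target case statement just to _find_ the output, something like this sketch (target names and paths from memory, not tested against every target):

case "$TARGET" in
  i686|x86_64) KERNEL=arch/x86/boot/bzImage ;;
  armv5l) KERNEL=arch/arm/boot/zImage ;;
  sh2eb|sh4) KERNEL=arch/sh/boot/zImage ;;
  mips|powerpc) KERNEL=vmlinux ;;
esac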

The recruiter paperwork was a giant PDF that renders as a one page ad for Adobe. I'm not sure "For best experience" is an accurate description of "nothing but our proprietary package can render this at all". It's 2017, are there still websites that only render in Internet Explorer?

They sent me broken up files, but I can't view all of _those_ either.

So I swung by Kinko's (it's been FedEx Office for years, but if I say that nobody knows what I mean) assuming their Windows machines could print this out, but when I opened the envelope the original "one big PDF" had only printed 5 pages. The broken up version had more attachments than that. So this stuff doesn't obviously work for a professional print shop, either.

Since verbally agreeing to do the new Colorado job, I've gotten 5 emails from other recruiters; it's apparently recruiter season again. But what I really want to do is go back to a full-time salary with Jeff's company, which alas is not under my control. (I gave them 7 months already.)


January 26, 2017

Somebody asked on the buildroot list if I was going to resubmit the patch to add toybox support to buildroot.

I haven't yet for a couple reasons: first, busybox is deeply integrated into buildroot and replicating that for toybox is a pain; second, I don't really care that much about buildroot, so it hasn't bubbled to the top of my todo list.

Then a buildroot developer asked about the difference between busybox and toybox and I answered his question, and his reply was that nothing I said was relevant to buildroot.

I'm trying to make Android a better base for embedded systems. I don't see how buildroot is relevant to this. So I guess we're in agreement.


January 25, 2017

Supporting date %s turns out to be unreasonably hard because strptime is stupid. Instead of returning a unix time (I.E. number of seconds since midnight at the start of Jan 1, 1970), it returns a broken down "struct tm" which cares about crazy things like timezone and day of the week, and then you feed that into mktime() to get it back into unixtime. Meaning if you feed %s to strptime, even if libc understands what to do with it, it adjusts the fields for local TZ nonsense, and then when you convert it back to unixtime with mktime() you can get a DIFFERENT RESULT. (And don't get me started on mktime(), which returns -1 for errors, which is a VALID RESULT if you care about representing historical dates. Which is why Linus insists making time_t unsigned is not the way to fix y2038.)

This struct tm normalization nonsense is why Elliott added "chkmktime()" to date, which is what's breaking here: if you convert the time to unix time and back and it's different, barf. Possibly we should just bounds check the fields that posix says have ranges. (Except what I want to do is treat them as an array, and I _can_ on every libc I've checked, but there's nothing actually _requiring_ it. So I have to open code a loop. Sigh.)

As for supporting %s in date, in theory I could just go "if strcmp(s, "%s") atol(blah);" but your pattern could be "date -D 'Timestamp: (%s)'" where it's surrounded by arbitrary context. In theory %s is the only escape when it occurs, because it would stomp all the others. (I suppose you could specify timezone, but unix time has been consistently UTC since the 90's; the only reason it ever wasn't was dual booting with Windows, which wanted the hardware clock set to local time, and yes, adjusted at each daylight savings transition.)
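So the kind of invocation that has to work is something like this (assuming -D supplies the strptime format that the -d string gets parsed with, which is how I understand the flags):

date -D 'Timestamp: (%s)' -d 'Timestamp: (1485129600)'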


January 24, 2017

My friend Nick got herself in legal trouble, so I sent $300 to a bail bondsperson in Arkansas. I'd like to help more but I'm resource constrained.

Speaking of which, a recruiter offered me a new job in Colorado. Hmmm. It's the same pay rate Cray was offering (which is $15/hour more than $DAYJOB paid back when it was fulltime), and it's running a debug lab which would be a nice change of pace from what I've been doing (and gives a nice work/life split where I could work on toybox stuff without guilt in my time off).

And I've learned that "length of commute" is an enormous ingredient in my job satisfaction: the Cray gig was lovely in part because my apartment was something like 600 feet from work. San Diego had a 15 minute drive when there was no traffic (I.E. never).

Hmmm... Hard decision. I really really really really want my current $DAYJOB to work out, I love the technology, I love the people. But the money's been terrible for a while now. It was supposed to resolve in October, and didn't...


January 23, 2017

Somebody asked about windows binaries for toybox, and my reply was same as always, "I don't do Windows".

This was one of the FAQ entries in my old Aboriginal Linux project, and I still haven't ported over all the old busybox FAQ material I wrote.

Somebody else (Christopher Barry) asked me about appending to a CPIO file, specifically for mkinitramfs doing multi-stage output. It's an interesting question: there's a "TRAILER!!!" entry at the end (for historical reasons, and yes that's in-band signaling) but it's fixed size and can be trimmed off. You'd have to decompress and recompress the gzip wrapper, but that's not a huge deal.
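You can eyeball the trailer with any cpio that speaks newc format (assuming GNU cpio on the host here):

touch file
echo file | cpio -o -H newc | strings | grep TRAILER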

Possibly I should add this to my toybox cpio.c todo list, but I've already got a bucket of stuff there for cpio: mostly adding a new data type with 64 bit timestamps, file sizes, xattr storage, maybe allow sparse files...

Alas to make _that_ useful, I'd need a corresponding kernel patch, and the kernel developers have disappeared so far up their own collective ass it's a long walk to get their attention, and I really haven't bothered. (I have patches to make CONFIG_DEVTMPFS_MOUNT work for initramfs that I haven't pushed for most of a year now. Well, I sent a quick stab at it to lkml last year which got immediately shot down, and I created a cleaned up version and then never bothered to send it, because dealing with those guys is just no fun.)

Sigh. I should hold my nose and try again. But not today.


January 22, 2017

Jeff (my boss) was in town today, and we got to hang out and program (while he waited for somebody to arrive on a delayed flight so he could go back to the management side of the force; there's a theme here).

We got something called OpenDNP3 working on a Turtle board, which is a protocol electrical utilities care about due to Standards. As usual Jeff did 95% of it, but I got him unblocked 3 times when a thing didn't work and I hit it with the appropriate rock.

Jeff is a lot like Batman. He has a day job as Bruce Wayne and it's really time consuming, but he's way more effective than any of us when he wanders into the programming side of the world. He just has to do so surreptitiously so the investors don't find out about it.


January 21, 2017

Fun morning. The "TAXI" button at the university accommodation made the intercom dial a taxi service, and I made an appointment to have them pick me up at 6am. When no taxi arrived I pressed the button again, but it kept hanging up halfway through the conversation (or possibly the person on the other end was hanging up). The text above the button said it was maxi taxi, with a number, so I called that with my cell phone, and they said they haven't had anything to do with the university in a year, and gave me the number of another taxi service that also said they weren't involved but were happy to send me a new taxi.

So I got a ride to the airport, and at the end of it neither of my cards would work in their machine (I've only been able to use them at ATMs here) and the ride was more Australian cash than I had on me (the lady couldn't accept American dollars), so I ran around the airport looking for an ATM (nope), and went back and eventually we worked out that chip and sign would work (chip and pin wouldn't). Have I mentioned I hate the chip they put in new cards?

Anyway, by the time I got to the Virgin airlines counter, they said the plane would be taking off in 8 minutes and the doors were already closed, and I could give them an extra $120 Australian dollars to be on the next flight to Melbourne, which doesn't land until after my connecting flight takes off.

So I bought another ticket on Tiger Air, which could get me to Melbourne with an hour to make my connecting flight, which I was assured was impossible but being stuck in Melbourne seemed better than being stuck in Hobart. They were very nice and put me in the front row so I could get off the plane quickly, and I ran through the airport to baggage claim (no they can't transfer bags between airlines, why would you think that?) and eventually made it to my plane (through checkin and security theatre and customs and all that) right as they were closing the doors.

I am seriously, _seriously_ out of shape. Ow.

I continue to be unable to get work done on United flights. Something about their economy class is designed to prevent concentration. Delta yes, United no. Not entirely sure why.


January 20, 2017

LCA is over. I have to catch a plane at 7am, so should probably go to bed early.

Some great panels all week, I need to go over the container internals tutorial for toybox.

I'm also going through the giant heap of pending toybox issues. I've added to it this week, of course (I need to go over the container internals tutorial for example). I still have a (December 30) patch from Elliott that I haven't applied yet, because it got fiddly and wound up on the todo list.

The real problem is I don't have a test case, and my attempts to make one ran into problems with the "date" command. I just ran into _another_ problem with the date command (wheee!) which I'm writing a message to the list about.

Sitting in the cafe uphill from the dorms the conference put us up in, it's doing a bunch of 80's music, including "Angel is a Centerfold", which is a song I find CREEPY. The singer never established any sort of relationship with the woman, she moved on with her life, went into presumably quite lucrative modeling work, and he's freaking out with some sort of ownership claim. There's a verse fantasizing about tracking down this woman he hasn't seen in years to take her to a motel room and rape her because she posed naked. "My blood runs cold, my memory has just been sold..." No, not yours. She is not your property. "A part of me has just been ripped..." Dude, you weren't even _involved_. Would he similarly be flipping out if he found out she'd gotten married (or died), or would that be ok with him?

Now somebody's turning Japanese (at least they think so). Much better song. (And now he'd walk 500 miles, only in a different accent.)


January 19, 2017

I gave my talk! The room was _enormous_ and intimidating. The schedule says it seats 650, and it was maybe half full for my talk, which I'm told is an excellent turnout for non-keynotes. (They just about fill it up in the mornings when nothing's scheduled against it.) I don't usually get intimidated speaking anymore, but this time I had outright butterflies and I think it showed in my presentation. :(

The talk appears to have been generally well received (update: the video is up and the outline is still there too). But I think I could have done better. I'm happy with my 2013 toybox talk, but I spent like a _month_ working on it before I gave it. (Seriously, I was working on that outline a week before the talk, posted them a day before the talk, and had time for a full run-through in the hotel the night before.)

This time I was editing right up until it was time to start, and hit Failure Mode #1: more material than I had time for. I knew it, and was rushing, but I made it like halfway through the outline before looking up and seeing the "2 minutes left" sign. (Very bright lights in my eyes, I missed the earlier signs.) So a less than graceful dismount, and I didn't get to half the material at all, nor time for questions. Sigh. (People came up after, but it wasn't recorded.)

Of course I didn't get my outline to Jen in time to turn it into slides (alas). I could scp the outline up to my site every time I tweaked it, but she needed a lot more turnaround time. Not a huge surprise, I usually present from an outline and a browser with lots of tabs I can point to primary sources with. This time the URL of every tab I wanted to show was in the outline, in order. (That was a failure mode in my 2013 OLS talk, I thought video was being recorded and it was only audio. Still beats the 2015 Linuxcon Tokyo talk where nothing was recorded.)

Over the years I've learned I need a special kind of outline to speak from: not enough detail and I'll go off on tangents that screw up the pacing and sequencing. Too much detail and I'm just reading text at people, which is boring. (I can craft _articles_ that way, but that's not how you do a talk.) So I had to get the sequencing right (which took days but I eventually was ok with the general scope and flow of the thing), get the level of detail in the material right (I'm happy I did that), and include the right _amount_ of stuff for the time available... which I screwed up again. Getting that right involves multiple practice runs and tends to be my failure mode. (If I can't blather about a topic extemporaneously for several hours straight and remain excited about what I'm saying, I don't know enough about it to present on it, or have a high enough interest level to be an engaging presenter. But I need to pick the most interesting _subset_ of that to fit in the time, and when lots of it's interesting that's a judgement call.)

Spent the rest of the day sort of fried. Convention's still going on, lots of panels, but I went back to the room and took a nap with 2 panels left to go in the day, then went out to dinner with a bunch of Red Hat guys.


January 18, 2017

Saw several good talks. I should do writeups of them, but there'll be videos online. (The guy doing the videos is the same guy who did the HDMI talk I watched last week.)

There was a speaker's dinner last night. Awful lot of speakers at this conference. Built-in conversation opening too, "so what's your talk on?" Beautiful venue. The "dinner" part was kind of silly though, the kind of fancy food that has such small portions you barely get anything to eat.

My talk still isn't ready. I mostly know what I want to cover, but I need so much more editing to have a _chance_ to fit it in my timeslot. (I've been getting up before sunrise due to jetlag, and using that time to spend a couple hours each day at talk prep. I've also had my netbook with me at the conference and been editing in some of the talks I've attended. Getting closer, but the talk's tomorrow. One more round of this and then I gotta Do The Thing...)


January 17, 2017

So talks. Much conference. Wow.

I may have buried the hatchet with Bradley Kuhn. We were both at a session on GPL compliance where he found out about my "promote public domain, make android self-hosting" agenda (I.E. the 2013 talk) and seemed surprised by it, and he explained that his goal for GPL enforcement wasn't to get code into projects but to get build and install instructions for hardware. Which is one of those "the reality of the embedded space is far more complicated than you seem to think" things. We talked over lunch.

I also saw the Open Invention Network lady again (I need to sign up toybox to the patent pool; Aboriginal Linux is in there already), and I talked to a couple people from OSI about their conflict with SPDX over the name of Zero Clause BSD; I think they understand why I'm upset now, but have no procedure to amend a previous decision. I should try to follow up with them but my todo list runneth over...

Elliott submitted a Microcom implementation (serial terminal) to toybox, which we've needed for a while. I was going to do one with netcat and stty, but factoring the shared infrastructure out into lib/ makes more sense.

Alas my attempt to clean it up fizzled out because the shared infrastructure I have so far doesn't quite match up with what microcom needs.

The lib/net.c pollinate() stuff does a poll loop between stdin/stdout and a device, but it does a unidirectional shutdown: stdin closing calls shutdown(2) on the network socket (the (2) convention means manual section 2, ala "man 2 shutdown"). In the other direction the network socket closing exits the poll loop, which goes on to end the program. But serial connections don't have half-connections to shutdown, because they don't propagate close state across the serial device. (They could with the data terminal ready line, but they don't, because that used to signal modems, back when there were modems between the serial connections. You can also send a break, but nobody ever does and it's unclear what it would mean. I think the kernel may implement magic sysrq using that on the console?)

My first thought was just reverse input/output so a close on stdin (such as the terminal closing) exits the program, but hotplug can close the serial connection (when a USB adapter gets yanked) which should exit the program. So either side exiting should close the program. There's also break and exit key logic in the loop, which pollinate doesn't do. Possibly it needs some kind of callback to preprocess stdin input and respond appropriately? Hmmm...

The tty_sigreset() function isn't quite what this wants either, because that only resets stdin, not the remote side. It yanks us out of "raw" mode (raw mode being attached to the tty not the program is a historical thing that's really kind of sad now; it should be a process attribute but is an I/O device attribute, these days mostly attaching to _virtual_ I/O devices (ptys) provided by your terminal program and passed from shell to child process and back with unwanted state sticking to it like dirt). Anyway, it doesn't reset the terminal speed back to where it was before you opened it. Do we want to? (Do we care? No idea.)

Also, "terminal speed" _only_ applies to serial lines, so having the network version care about that seems useless. What else is going to use serial lines? I should implement stty and _maybe_ pppd someday if somebody asks for it? Most things probably won't.

So once again a burst of work I wind up backing out. There _is_ cleanup to do here, such as making it recognize all the supported Linux serial speeds. Under the covers it's almost always setting a divisor register for the serial clock speed, so you can set the speed to almost arbitrary values, but the kernel adamantly refuses to expose that to userspace and instead has a dozen canned values it allows, adding more periodically until it filled up its assigned bits when it hit 4 million bps. That's been stable for well over a decade now, so I can just loop over an array of the values and fill out the darn bitmask myself.
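In the meantime, a cheap way to see which of the canned speeds a given device will actually accept from userspace (assuming a GNU stty with -F and a /dev/ttyUSB0 to poke at):

for speed in 9600 19200 38400 57600 115200 230400 4000000
do stty -F /dev/ttyUSB0 $speed 2>/dev/null && echo $speed ok
done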

Oh, and half the USB serial converters out there are hardwired on the serial side and ignore the speed you set on the USB side. So testing this is going to be fun. I have USB serial on Turtle and Numato boards with me; turtle's hardwired 115200 on the console side (controlled by the kernel) and ignores the speed you set on the USB side (because it's a packet protocol, the modulating/demodulating's already been done on the other side of the converter chip and it's just bytes in USB packets on this side). Numato's the same except hardwired to 9600 (and you need a windows executable to reflash the magic chip that controls that; Numato doesn't provide a Linux binary to do that) so even though the two FPGAs are clocked at the same speed the Numato _seems_ way slower interactively.

Zero chance I can fit that level of detail into my talk, which I need to give the day after tomorrow and have maybe half an outline for. So much collating...


January 16, 2017

Greetings from Tasmania! I've been awake for 2 days!

Between the international dateline and the 14 hour flight, I missed sunday entirely. Got to the airport (most kwaj-like airport I've seen since kwaj), collected luggage, caught a bus to the event venue (Wrest Point, which is not a military academy but a casino attached to a boat dock), and attended several panels! I have not seen a single fire-breathing spider yet but they may be limited to the mainland. I had dinner with some people, whose ear I talked off because sleep deprivation and caffeine combine to put me in lecture mode. (I apologized repeatedly, but they kept listening. *shrug*)

I told them about my talk material, because they asked and that's what my brain's full of right now. Mostly I told them about stuff that probably WON'T make it into the talk. I need to edit my talk into some sort of coherent order, I have several dozen "and you should know about this!" bits that don't logically connect in any way at the moment. But nothing does right now because sleep. I'm in a dorm room. It's very sleep. (The conference-provided housing is at the University of Tasmania dorms. It's summer here. A very cold summer, but it's pretty close to Antarctica here. Tasmania is Australia's Canada.)

I bought a power adapter at the airport which does not convert voltage, and I'm pretty sure plugging in my laptop won't fry it but since I didn't run the battery down on the flight I can wait until morning to find out.


January 14, 2017

Yay sleep. Feeling almost coherent again.

Got to the airport with 2 hours until my flight takes off and they wouldn't check me in for the flight without an Australian visa. (Tasmania is part of Australia. Who knew?) My response was: I need a visa? Japan didn't need a visa. Canada didn't. Russia did but that was 7 years ago and they made a _very_ big deal about it and I had to drive to another city weeks ahead of time to visit an embassy that had pamphlets walking you through the procedure for bribing police. (They randomly demand bribes and you need to recognize when they're doing it and know the correct amount, according to the official embassy pamphlets.)

Google's first page of Australian visa places were all third party firms wanting a large amount of money to handle the process and offering a multi-day turnaround time, but after about half an hour the airline guys dug up the correct website to get it directly from the Australian government (https://www.eta.immi.gov.au/) which cost $10 and took 5 minutes. So I made it on the plane.

Still, it would have been nice if the conference organizers had provided this information earlier. Finding out _at_ the airport is not a good way to handle that sort of thing.


January 13, 2017

Spending a day hanging out in San Francisco with Jeff on my way to Tasmania for LCA. I did less than 1/3 of the stuff my talk's covering, Jeff Dionne and Rich Felker each did as much. (I already got an email from Niishi-san with some bullet points.) So at the last minute, I'm picking Jeff's brain to write 300 lines of notes so far, out of which I need to assemble a talk. (We also did a conference call with Rich so I could get his answers to some stuff, that's part of the notes.)

Covering this material in the allotted time's going to be fun, but I've got most of a week for editing.

My sleep schedule's already crazy: my flight out of Austin took off at 5am this morning (I didn't know that was even an option) so my ride to the airport left at 3, meaning I was awake all night. I've got tonight in a Ramada Inn and then more time with Jeff tomorrow (because the person he was going to meet this evening missed their plane and won't be in until tomorrow afternoon).

And then tomorrow I spend 14 hours on an airplane to Hobart, Tasmania for LCA. Although first there's this layover at LAX, home of the famous "lax security" you've heard so much about. I have zero chance of getting anything done on the plane (United Economy Class leaving at the end of the day), and I can't even sleep on united unless I get a 3-seat row to myself which has only happened once so far. So that's gonna be fun. Maybe I can get some programming in at the airport before the flight.


January 12, 2017

Trying to get a toybox release out before my week of international travel starts, but unfortunately the past 6 months of craziness have stuck me in one of my failure modes: 8 gazillion half-finished things, most of which are hard problems. Getting interrupted in the middle of something is bad because reverse engineering my own half-finished code is more work than writing it was in the first place. Sometimes it's so bad it's easier to throw away what I've done and start over, but after I've done that a few times on the same thing (such as the "dd" command) I forget what state the current code's in, and my ideas get tinged with "no, I already tried that", and I have to sort through what turned out to be a bad idea when I tried it and what just never got finished multiple times because I was interrupted.

The other problem is I have a half-dozen things to work on but they're all at a tricky stage where I have to make some sort of decision or work through some problem, none of which is fresh in my head. So it's _frustrating_: I do the reverse engineering work staring at my diff, figure out where I left off and why, and then it's a hard problem. And there's lots of them stacked up all throughout the tree, and especially if I don't have time to build up a good head of steam and do the darn cleanup, I'm just going to make it WORSE by meddling a bit and then leaving myself with yet more unfinished work to reverse engineer later.

When work has _nothing_ to do with toybox, it's not so bad because I can do a clean separation and say "here's a 2 hour block in the morning/evening I can poke at this". And when work lets me spend large chunks of time on toybox, I can do that too. But when work _used_ to let me spend time on toybox but now says "no no no, GPS is all that matters", and doesn't have a strong fixed schedule (telecommuting for people in enough different timezones there's never a time when _somebody_ at the company's not awake), then I feel GUILTY about spending any time on toybox when they've told me not to. Because I _should_ be doing GPS, which I am so burned out on there's no words for it.

There's a related kind of overwhelming where I should be working on 37 different things, such as the j-core website and kernel patches and userspace build system and talk prep and so on, so that no matter what I do I'm ignoring something else that I _should_ be doing... That's also a bit paralyzing. Not as bad as the "GPS is your whole world, rub your nose in the burnout!" stuff, but they stack.

I'm tired of perpetual crisis. We are now in month 7. What I should really be doing right now is packing a suitcase for my flight tonight.


January 9, 2017

Coming back to shell parsing after years away from it, and wow it has a lot of conflicting needs. The parser needs to be recursive so $() works, but it needs to be able to add multiple entries to the current command line so "$@" works. A line can have multiple commands ala "a && b; c" but a command can span multiple lines, ala:

echo ab"cd$(echo "hello
world")ef"gh

And yes the output of the $() is spliced into the same argument as the abcd/efgh. But:

$ X="A B C"; for i in $X; do echo -$i-; done
-A-
-B-
-C-

Separate arguments. The quoting is necessary to keep the argument together:

$ printf '(%s)[%s]\n' $(echo -e "one\ntwo")
(one)[two]
$ printf '(%s)[%s]\n' ab$(echo -e "one\ntwo")cd
(abone)[twocd]
$ printf '(%s)[%s]\n' "ab$(echo -e "one\ntwo")cd"
(abone
twocd)[]

The quotes inside the $() are not the same as the quotes outside the $(), meaning the quoting syntax nests arbitrarily deep. And $() subexpressions are _not_ executed immediately, if you type "echo $(echo hello >&2) \" the stderr output doesn't emit until you hit enter a second time (because of the \ continuation). So you're queueing up a sort of pipeline but instead of the pipe output going in sequence some of it turns into argument data, and yes I checked: "set -o pipefail; echo $(false); echo $?" returned zero.

There are sort of several logical scopes in quoting: you need a chunk based version that can get multiple lines (-c "$(echo -e "one\ntwo")" or a whole mmap()ed file), and a logical block version that runs to the next & | && || ; command separator (newline is _sort_ of like ; but a semicolon is just a literal in quotes). As for what they MEAN:

$ echo one && echo two || echo three && echo four 
one
two
four

Which implies that || anything returns an exit code of _zero_ (success!) when it doesn't trigger: the previous thing succeeded, the || clause is skipped, and the success status carries through to the && after it. And of course:

$ false && echo two || echo three && echo four
three
four

So && anything returns nonzero when disabled. So in the "does not trigger" case, || anything acts like && true, and && anything becomes || false.
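Quick sanity check of the two skipped cases:

$ false && true; echo $?
1
$ true || false; echo $?
0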

Oh, and command input has to be aware of what the data means to do the $PS2 prompts and for cursor up to give you the right grouping. Hmmm... Functions and for loops store up snippets they then repeatedly execute with different variables substituted in, and an if statement is a variant of that. But beyond that, just knowing when to prompt for the next line and when to run what you've already got:

$ echo one; echo \
> two
one
two

There has to be a syntax parsing pass separate from the execution pass in order to know when to prompt for more data (or error out for EOF). Which raises the question "what happens if a file ends with \" and the answer is it's ignored (or an empty line appended), and yes this includes sourcing a file. (Under bash, anyway; I'm only really testing bash. Don't care what the Defective Annoying SHell does.) How about blocks crossing scopes?

$ cat thing
#!/bin/bash

source walrus
echo there
fi
$ cat walrus
if true
then
  echo hello
$ ./thing 
walrus: line 4: syntax error: unexpected end of file
there
./thing: line 5: syntax error near unexpected token `fi'
./thing: line 5: `fi'

That seems a bit unambitious, doesn't it? Same for the way "echo -(" seems like it should work without quotes, and yet it doesn't. The ( doesn't mean anything there, but the shell freaks out anyway. I'm aware that ) is meaningful there and doesn't need to be offset by whitespace (ala "(ls -l)" works, although "(ls -l) echo blah" doesn't), but ( is only meaningful midsentence out of solidarity? Or am I missing something?


January 8, 2017

Went down the street to get a can of tea last night and I got recognized by somebody in a car stopped at the light. As in they called out my first and last name, then explained they saw a Linux talk I gave in San Francisco. (Later emailed to invite me to dinner with their co-workers on tuesday.)

Weird but cool. (I've been variants of "internet famous" ever since I wrote stock market investment columns read by ~15 million people back during the dot-com boom, and I'm used to being recognized at conventions. But this is the first time it's ever happened in my civilian identity. :)

(And of course as soon as I have some sort of plan that requires me to be in Austin, two hours later I get a text from work wondering if I'm free to go to Japan on the way to the Tasmania trip. Or possibly San Jose, they're not sure yet, but either way it would probably involve me leaving... tuesday morning. If it happens. Tickets still aren't booked. Oh well, pack a suitcase and see what happens. My fault for making plans, of course.)

Alas I can't put nearly as much work into prepping the linux.conf.au talk as into the ELC talk, because the linuxconf one mostly isn't _my_ material. I did maybe 1/3 of it, the rest is from Jeff and Rich and Niishi-san and so on. I can talk about toolchains and root filesystems and the uclinux triage and websites (nommu.org and j-core.org) and mailing lists and such, and forward porting the initial kernel port to then-current vanilla, and so on. But making the hardware faster, making the software better, adding SMP support and improving the I/O devices... I was THERE for it, but wasn't the one doing it. (Ok, I added cache support to the kernel. Lots of research for a tiny patch, but there's 5 of our 45 minutes.)

Then again my talk prep problem is usually "how will I fit what I want to say into the time allowed, I could blather for hours and hours about this without repeating myself, gotta _focus_ on just the best bits, what _are_ the best bits anyway..." This is at least a new problem, although it's not "how do I fill the time" so much as "yeah I can talk about X but Y would be so much more interesting to cover, except I'm not the domain expert there". Ok, "stuff I already know" is much less interesting to me than "stuff I dunno yet", so there's some bias in there. But still...

Spending a day with Jeff to prepare the talk would be nice. Of course, skype also exists and is significantly cheaper. I wonder if Rich is recovered enough from his house fire to do skype yet? He's back online but kinda busy, and even before all this he didn't want to travel to Tasmania to talk himself. I haven't wanted to bother him until he resurfaces, but in theory I get on a plane at some point soon. It would be nice to know when.


January 7, 2017

Poking at toysh but there's a scope issue. I know that monday I stop being able to work on anything useful and have to do all GPS all the time forever, so as I'm triaging the shell todo list my brain is just bouncing anything I'd still have to be working on monday, because it'll be a week before I can look at it again and it'll go all fuzzy and I'll have to run through everything again to get the level of clarity I need to implement it.

I'm probably also kind of tired, but weekends are the only time I can do real work instead of spinning my wheels on GPS. (I am dutifully staring at that window from time to time. I don't write about it here because it's not _accomplishing_ anything. But... dutifully staring. Doing GPS won't fix the company's funding issues, and the opportunity cost of NOT doing toysh right now is enormous. But... dutifully staring.)


January 6, 2017

Trying to write a README for the j2 kernel build, which is also related to the February ELC talk, and I've got a conundrum. "make j2_defconfig" sources a 42 line arch/sh/configs/j2_defconfig file, but the resulting miniconfig file (which is everything you'd have to switch on starting from allnoconfig to recreate that config; this does _not_ include symbols set by dependencies) is 152 lines.

So what are the differences? Let's rip the defconfig symbols out of the miniconfig:

egrep -v "^($(sed 's/=.*//' $DEF | tr '\n' '|' | sed 's/|$//'))=" $MINI

And the result is 114 lines of stuff switched on beyond what the defconfig asked for: container namespaces, three different ipsec transport modes (with no help text describing them in menuconfig). SLUB_CPU_PARTIAL has help text though, and it says you want to disable it on realtime systems. How do we switch that _off_ in the defconfig?
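(The answer, I think, is that defconfig files can carry explicit disables as comment lines, the same "is not set" format savedefconfig emits when the default is y:

# CONFIG_SLUB_CPU_PARTIAL is not set

But you have to know that's a thing.)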

It's got 8 gazillion NET and WLAN VENDOR symbols which are just menu guards. (Because visibility and enablement are gratuitously entangled, these symbols are there to visually group options, but then they become dependencies for those options. And yet they're still enabled if nothing depending on them is switched on, even though by themselves they have no impact on the build.) And yes, looking at this CONFIG_NET_CADENCE should clearly be CONFIG_NET_VENDOR_CADENCE but nobody's edited that config line since it went in 6 years ago (commit f75ba50bdc2b was November 2011).

CONFIG_SCHED_MC: you can enable SMP but not use a multi-core scheduler. How does that work? No idea. It also has AT keyboard and PS/2 mouse support enabled.

Hmmm... the real question is how much detail to go into for the talk. What I should do is show people how to use / search in menuconfig to find a symbol and look up its help text (why you can't look at the help text FROM the / menu, or jump straight to the symbol from there, I have no idea; yes writing my own kconfig is on my todo list but it's a big complicated thing that requires keeping the problem space in your head all at once, I.E. not easily done in 15 minute chunks so not something I can work on now).

Anyway, the point is to establish the difference between defconfig and miniconfig, and show that "simplest" can have different definitions depending on what you're optimizing for. (Miniconfig is most explicit and enables the least amount of stuff. Defconfig is more automated, but that means it does stuff behind your back and enables things you didn't ask for.)

And a brief digression into "you think there's no work to be done here?": I submitted miniconfig mode to the kernel a dozen years ago and they pulled the "do buckets of unrelated work to placate me or this won't go in" crap, which I didn't, so it didn't go in. (I usually just resubmit a year later and go "Is whoever was gatekeeping gone yet?") This guard symbol nonsense happened since, the magic special casing of CONFIG_EMBEDDED happened since... Oh, and I should definitely mention the half-decade I spent removing perl as a build dependency, and the similar amount of time the squashfs guy spent trying to get his thing in, and maybe segue into how "Signed-off-by" is part of the layers of bureaucracy that's grown up around an aging project...

I don't worry about people stealing my ideas, it's far more work for me when they _don't_.


January 5, 2017

My talk "Building the simplest possible Linux system" got accepted to ELC (in Portland in late February). This is not work-related, and I'm paying my own way to this one.

This talk is basically on mkroot, which means I need to wean it off of busybox between now and then. Because "simplest possible" isn't going to have two userspace packages, and if I _can't_ I should show them busybox instead of toybox, which would be sad.

The simplest self-hosting system is (conceptually) 4 packages: toolchain, kernel, libc, cmdline. For a leaf system drop the toolchain, static link and handwave the libc as part of the toolchain (why Ulrich DrPepper was wrong about static linking), and replace cmdline with "app" which can be hello world. To run the result, you need a runtime (board or emulator), bootloader (qemu -kernel, system in ROM), and root filesystem (rant about 4 types of filesystems: block backed, pipe backed, ram backed, synthetic).

Hmmm... I need to explain defconfig files and miniconfig ("here is a 20 line kernel config file where every line means something"). Architecture selection and cross compilers. Booting vmlinux under qemu, bootloaders, and different binary types. Root filesystems (initramfs/initmpfs vs initrd vs root=). Demonstrate a kernel launching "hello world" and talk about why PID 1 is unique, walkthrough oneit.c and switch_root vs pivot_root. Directory layout (why LFS died, why posix is useless, and why /usr happened...). Walkthrough of a simple init script (and I should resubmit my CONFIG_DEVTMPFS_MOUNT patch that makes it work for initramfs). Probably a little on nommu systems (fdpic vs binflt; logically goes in the part where I describe _what_ the ~20 config entries in the kernel miniconfig do, and what differing target configs look like, ala the aboriginal linux LINUX_CONFIG target snippets...). And of course device tree vs non-device tree, dtb files and the horrible bespoke syntax du jour format that grew rather than was designed for device trees (and how its documentation is chopped into hundreds of little incoherent files in the kernel source, using a license that ensures BSD and such will never use it, which is why windows extended ACPI to Arm...)

Oh, kernel command line options (supply them, find them in the docs, find them in the source), how unrecognized name=value arguments become environment variables (which are not stack/heap/data/bss/mmap, see /proc/self/environ)... Really "how linux launches a process", which means a quick ELF walkthrough: text, data, bss, heap, stack, that stupid TLS crap (basically a constant stack frame with its own register). Also #! scripts. Dynamic vs static linking: fun with ldd and readelf, the old chroot-setup script...

Really if we're doing "simplest possible" I should demo hello world on bare metal and gluing your app to u-boot. Because as soon as you add "Linux" your overhead goes up by a megabyte. (Anecdote about Norad's Cheyenne Mountain display running busybox because they had to audit every line of code they ran, and it was much easier than auditing gnu/crap.)

But first, I need to get the toybox shell basically usable. I have a month and a half. Hmmm... (And the last file in "make install_airlock" that's not on my host system is ftpd, because there's no standard host reference version that can agree on a command line syntax.)


January 4, 2017

At Texas Linux Fest last year I signed up for the Austin Checktech mailing list, and they've emailed out volunteer opportunities every month or so since then. I have not been in town for a single one of them yet, because of the travel required by my "we can only afford to pay you half time for 6 months now" $DAYJOB. The upcoming one is a design workshop on January 16, during the Tasmania trip. (And I still don't know if I'll be back to drive Fuzzy to her fencing tournament on the 27th.) I love the people, I love the technology, but this is getting old.

I got the first pass of ftpget checked in last night. (Yes there was one in pending, but I was 2/3 of the way through a new one before I noticed.)

Running my own code under strace, glibc's getaddrinfo() call is doing insane amounts of work. It's opening a NETLINK socket, doing sendto() and a bunch of recvmsg(), and then opening a second socket to connect to "/var/run/nscd/socket". I'm testing against "127.0.0.1", there is no excuse for this. I tried adding a loop to my xconnect() function to do an AI_NUMERICHOST lookup first, and only try a non-numeric lookup if that failed (and if the user requested it), but it's still doing all that crap for a numeric lookup. (You can tell from the string if it's a numeric address.)
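(For anyone playing along at home, the symptom shows up under something like:

strace -f -e trace=network ./toybox netcat 127.0.0.1 80

modulo whatever command you're testing; the NETLINK traffic and the nscd connect() happen before the socket you actually asked for.)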

You wonder why there's 8 gazillion weird security holes every year? Oh well, hopefully linking against musl and bionic isn't this broken.

Sigh. I got ftpget finished and checked in, but haven't done the test stuff for it yet. Instead I did a "git diff" on my tree to see what kind of other fires I left burning, and here's a typical example:

$ git diff toys/*/dmesg.c | diffstat
 dmesg.c |   73 +++++++++++++++++++++++--------------------
 1 file changed, 38 insertions(+), 35 deletions(-)

I.E. "I was working on a thing and got interrupted. Again." This was something I did in San Diego last month, in reponse to Elliott sending me basically a complete rewrite of dmesg for a new API that Kay Sievers crapped all over the kernel. It is a very, very bad API.

But I can't work on it now. I need to go do GPS stuff for $DAYJOB instead. Endless, forever GPS stuff. (Because rtklib is only half an implementation, Andrew Hutton's code is GPLv3 so we can't use any of it, and basically everything else parses the output of an integrated on-chip solution that doesn't give us the data we need.)


January 3, 2017

The on-again off-again Tasmania trip looks on again? They're trying to involve another trip to Japan, which I'm ambivalent about. As much as I adore hanging out there, I can't make plans (Fuzzy wants me to drive her to a Fencing tournament 15 minutes from home on the 27th: will I be able to? I have NO IDEA...) and I'm starting to develop a precursor to varicose veins from all the long plane flights, and my thighs object to adding an extra 12 hour leg between Japan and Tasmania. (United and Greyhound have equally uncomfortable seats.)

Still, yay talk. I've never spoken at this conference before, it's an honor to be accepted, and I'd really like to be able to do this.

Yesterday turned out to be a day off, which I found out when nobody but Rich was on the 5pm call. I texted Jen and found out it was a holiday since New Year's was on a sunday... at the end of the work day. So I'm taking today as that day off, since I did GPS stuff monday. (Not in a hugely effective manner, but it still counts.)

Trying to finish up ftpget, which is a thing that busybox apparently dreamed up, or at least there's no standard for it I've been able to find. I've been using it forever because it's a really easy way to script sending a file from point A to point B, and it's a thing I need for toybox make airlock/mkroot to do its thing with the full QEMU boot and control images and all that: the QEMU image dials out to an FTP server on the host's loopback and does an "ftpput" of its output files. This is the simplest way I've found of sending a file out through the virtual network, or from a chroot to a host system. Yes there are like 12 other ways, and I'd happily have a virtual filesystem be the way if qemu had a built-in smbfs server. Alas it has virtfs, which combines virtio with 9p (both turn out to be full of rough edges), using it requires QEMU be built against strange libraries on the host (it's just extended attributes, why would you need libcap-devel and libattr-devel?), and then the setup is crotchety and I should probably revisit it someday, but my own todo item there is doing a simple smb server in toybox. So for the moment: implement ftpget.

The ftp protocol is only moderately insane, as in you can carve a reasonable subset out of it and ignore the rest... until you try to make cryptography work. Alas, even in "passive" mode you still have to open a second connection (http's ability to send the data inline in the same connection was apparently a revolutionary advance), which is why masquerading routers have to parse ftp connections, and don't ask me how sftp is supposed to work. But for sending files one hop on a local network, it still works.

(An aside: I should clean up and check in my netcat mode that prints the outgoing data in one color and the incoming data in another color, either to stdout or to a file. It's very useful logging. Doesn't quite do it for FTP because there's that whole second channel data goes along, but it lets you see what the control logic actually looks like going across the wire. But THAT's tied up with the whole "stop gcc's liveness analysis from breaking netcat on nommu in a semi-portable way" changes, which boils down to either "move everything to a function" or "move everything to globals", or possibly I can just hide it in XVFORK() which would be nice...)

The main problem with ftpget/ftpput is there are several other things you need to be able to do, which ftpget has no provision for, and I'm not adding: ftpls, ftpls-l, ftprm, ftpmv, ftpmkdir, and ftprmdir. Instead, I want to add flags to ftpget, which implies that ftpput is also an ftpget flag (with the command name being a historical alias with a different default action).

This gives me rather a lot to test, which raises the "how do I script these tests" issue. This test is tricky because I need netcat and ftpd to test ftpget. I need netcat because I've been testing against busybox ftpd (it's there, and no two ftpd command lines seem quite the same) and that only works on stdin/stdout.

This means "make test_ftpget" requires ftpd and netcat to be in the host $PATH, and the right _versions_ of each. (The toybox versions, not whatever weird host versions might have their own command line syntax. Busybox has two different netcat implementations. Back when I was trying to get back into busybox development, before relaunching toybox, this was one of the things that drove me away again.)
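
The wiring for a test run would be roughly this (busybox-style flags, from memory, so treat as a sketch):

  busybox nc -ll -p 2121 -e busybox ftpd -w . &    # writable ftpd on loopback port 2121
  toybox ftpget -P 2121 127.0.0.1 localcopy remotefile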

Sigh. I should start a todo list just for items that would be a full-time job for somebody. A real Linux standard that documents what things do (a role Posix abdicated in favor of continuing to host Jorg Schilling, and the Linux Foundation abdicated in favor of accepting huge cash donations from Red Hat and Microsoft). A j-core+xv6+musl+toybox course teaching people C programming, VHDL, and both processor and operating system design. (Where "course" is probably a 2-4 year program.) A "hello world" kernel like I complained about years ago...

Anyway, back to ftpget...


January 2, 2017

In lieu of getting to spend the week between Christmas and New Year's catching up on toybox and mkroot, I got a 3 day weekend.

(Ok, I've explained how my normal working style involves round-robining between different tasks so that when I'm blocked on one I can make progress somewhere else, and that I can focus past this to hit deadlines because the deadline is its own safety valve letting me know when I can _stop_ caring about a topic; after a deadline I usually need extra recovery time, and in the absence of deadlines a round-the-clock push that never ends is called a "death march". So after three weeks focusing round the clock on GPS in Tokyo including evenings and weekends, two more weeks in San Francisco, and having GPS looming over my head as "the priority" the rest of the time while I was putting out other fires, being told the Tuesday between Christmas and New Year's "You don't get vacation and you can't cycle to lower priority things for recovery time, you will work on GPS and nothing else until further notice"... Yeah, that's building up to the kind of "I can't stand to look at this code, it viscerally disgusts me" that made me leave BusyBox. Yes it's a failing, but after a couple decades of doing this I know my limits. I haven't actually had a _vacation_ since I joined this company over 2 years ago, and since November I've been in Tokyo, Austin, Minneapolis, San Francisco, and San Diego, with a trip to Tasmania pending (but not yet funded).)

Still: I managed to beg Friday off and took the weekend, and got a _little_ caught up on toybox and mkroot.

My mkroot project is happening because Aboriginal Linux died. I introduced the idea of ending the project here, and announced the mkroot stuff here, but finding time to work on it has been hard.

The problem is the toolchain packages have been frozen on the last GPLv2 releases for years, because I'll never voluntarily ship GPLv3 binaries in a hobbyist context, and nobody regression tests against those versions anymore. In the course of 2 kernel releases last year they broke the build on 4 different architectures (for 4 different reasons). It wasn't anything I couldn't fix, but it was more than I could keep up with in the time I had available, and I fell far enough behind that catching up was more work than it was worth. I'd been meaning to switch to a new (llvm/clang) toolchain for ages, other people did their own toolchains, and when Rich did his own musl toolchain builder I went "sure, let's use that one".

But taking the toolchain build out of the project meant there wasn't enough left to justify the rest of the infrastructure (such as the download and package cache stuff extracting and patching tarballs), so I did a really simple root filesystem build that fits in one file (building a usable toybox-based root filesystem, using musl-cross-make, in a single 300 line shell script), and went "huh, how much of the rest is actually needed?"

I ported the host-tools.sh stuff to the toybox build as a new "make install_airlock" target; the environment variable sanitizing is more or less a single "env -i" call with a half-dozen-variable whitelist...
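
(Conceptually a wrapper that relaunches the build with a scrubbed environment, something like the following; the variable list is illustrative:)

  env -i PATH="$PWD/airlock" HOME="$HOME" TERM="$TERM" SHELL=/bin/sh \
    CROSS_COMPILE="$CROSS_COMPILE" "$@"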

I'm still not quite sure where to host it: at first I had it attached to the j-core account (back when $DAYJOB let me spend time on it and it was of use to them), then I put it on my github, but really the idea is to build a simple initramfs and I should probably just add a "make install_root" target to toybox. But to do that, I need to clean the busybox dependency out of it, which comes in two parts:

The mkroot.sh script is downloading and compiling busybox, with a config file that results in:

bunzip2 bzcat bzip2 gunzip gzip hush ping route sh tar unxz vi wget xzcat zcat

Toybox's new "install_airlock" has a $PENDING command list it symlinks out of the host $PATH (grep PENDING= scripts/install.sh) currently containing:

bunzip2 bzcat dd diff expr ftpd ftpget ftpput gunzip less ping route tar test tr vi wget zcat awk bzip2 fdisk gzip sh sha512sum unxz xzcat

The mkroot list is a subset of the airlock list, and eliminating the mkroot list (by promoting real toybox implementations of those commands) would let me merge mkroot.sh into toybox. That said, the airlock list is what's necessary for a "hermetic build", which is of interest to the Android guys. (Ok, it's just the tip of the iceberg for what they need, but still.)

Of the 15 commands in the mkroot list, I have good implementations of bunzip2, bzcat, gunzip, and zcat already. (They're needed as busybox builtins due to the way busybox tar works.) I can do bzip2 and gzip reasonably easily (I did most of bzip2 a decade ago; the problem is the string sorting plumbing is just sort of a heuristic I never understood well enough to write my own, so I've gotta dig into the math of the various sorting approaches and understand why the fallbacks trigger that way), but I'm not sure the bzip2 compression side is actually necessary. (It's obsoletish. There's no xz compression side logic either, but I'd probably just want to do lzma without the instruction-specific compressors there, if that's an option.) The ping, route, and tar commands are cleanups/rewrites of stuff in pending, as is xzcat/unxz (but that's a way bigger cleanup, which _does_ need the architecture-specific instruction compressors to deal with existing archives). The reason I haven't finished gzip yet is it's not clear where the dictionary resets should happen (nobody quite agrees; "every 250k of input" is probably reasonable, and using SMP for compression/decompression is related to this). I've wrestled with wget a bit already and will probably just end up rewriting it.
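
The dictionary reset question matters because gzip output can be a sequence of independent members: concatenated gzip streams are themselves a valid gzip stream, which bounds the damage from corruption and makes SMP compression embarrassingly parallel. A quick shell demonstration of the principle (block size and file names illustrative):

  split -b 262144 bigfile chunk.                 # carve input into 256k blocks
  for i in chunk.*; do gzip "$i" & done; wait    # compress the blocks in parallel
  cat chunk.*.gz > bigfile.gz                    # concatenation is still valid gzip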

The really hard commands are hush/sh and vi. I don't strictly need vi to build, and that one's not hard anyway (just elaborate). But you can't build without a shell. And _what_ you build with the shell is... squishy. Unclearly defined. I need lots of scripts to run through the shell to see where the behavior diverges so I can fix it, but I haven't built Linux From Scratch under hush either, so... (Sigh. Probably set up a chroot with bash, automate the current LFS build under that, and use that to make toysh tests.)
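
The core of that is differential testing, something like this (file names made up):

  for shell in bash hush toysh; do "$shell" testcase.sh > "out.$shell" 2>&1; done
  diff -u out.bash out.toysh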

So I need to tackle toysh in order to merge mkroot into a toybox "make install_root", which means mkroot has to live on its own for a while. :P

But in the meantime, I think I've gotten most of the aboriginal linux plumbing to the point I can ignore it. There are some nice bits I'm not reimplementing (mainly the package cache stuff), but nothing I actually _need_, and it was always annoying to try to explain anyway.

And after all that: what I actually spent the weekend banging on in toybox was mostly ftpget. (I noticed there's one in pending after I was halfway done writing a replacement. Sigh. It's one of the three things "make install_airlock" complains the host hasn't got, because my host is ubuntu not busybox.)


January 1, 2017

I aten't dead.

Last January 1, I mentioned that "update the darn blog" was one of my patreon goals, and in December it got hit! (Woo! Ok, I bumped the goal amount down a while back, but it got hit!)

I posted a patreon update explaining how my blog got constipated this time. I suppose I should explain it here as well (although that one has links).

I'm still riding a startup down (something I swore I'd never do again, but I love the technology and the people). Since they haven't been able to pay me a full-time salary since June, I eventually took a side gig doing a space thing to refill the bank account, and then found out why not only China and India but probably Vatican City will get to Mars before the US does: ITAR export regulations! Yes, the same insanity that (back in the 90's) meant openpgp/openssh/openssl were developed in Canada and Germany and could only be downloaded from non-US websites, but you weren't allowed to upload them _back_ out of the country because the US government said that was exporting munitions. This caused US cryptographers to move overseas and give up their US citizenship, because otherwise they couldn't work in their chosen field.

It turns out this insanity was extended to the US space program in 1996, when we sold some crypto hardware to China on the condition that they couldn't examine it, just shoot it into space. Armed guards followed it to China, where they launched it on a rocket that exploded, and the hardware was never recovered. The resulting scandal extended ITAR to the whole space program. As my boss explained to me (the Friday of my first week there), "If I buy a screwdriver at Home Depot, it's just a screwdriver, but once I use it to turn a screw on a spacecraft it's now a munition and cannot be discussed with non-US persons".

As far as I can tell, this is why the US no longer has a space program to speak of (Commander Hadfield is Canadian), and why people like me don't want to get any of it on them. (It was _really_ fun for me to still be doing evening and weekend work on projects for a Canadian company with most of its engineers in Japan, while maintaining Android's command line utilities as my main hobby project. Yeah, not a comfortable position. I know where the "proprietary vs non-proprietary" lines are, but this ITAR crap? That's "covered with spiders" level of get-it-off-me.)

This is why I stopped blogging, unsure what exactly I could say until I'd disentangled myself from that job (which took a while), and then I was out of the habit and way behind... (This method of blogging still has the problem that I can't post things out of order. I can _write_ them out of order, but the RSS feed generation plumbing is really simple and I have a personal rule of not editing old entries, even though it's just a text file I compose in vi.)

So, new year, new file. I'm still riding the startup down. Originally this was supposed to be until our next round of funding in October, but that came and went and I'm still paid half-time (but expected to work _more_ than full-time) without even a new deadline where the "funding knothole" might resolve. Lots of travel (which they pay for, but don't reimburse my receipts anymore). Wheee.

One of the big reasons I enjoyed this job so much is they used my open source projects, but recently they've switched to "you need to do closed source GPS correlator software to drive our patented hardware, to the exclusion of all else", and for several months I haven't had the time/energy to advance toybox much. They even yanked Christmas break out from under me. So, not sure how much longer that's going to last...

(The _most_ awkward part is I proposed a talk at LCA on their technology, but no tickets have been booked, we haven't had time to prepare talk material, I can't do the talk by myself and I'm not paying my own way to Tasmania to do it. It's an honor to be accepted and if I was going to cancel I should have given them a full month's notice, but I still don't know if this talk is actually going to happen. Or if it's going there and back or bouncing off there to go to Tokyo for a third multi-week intensive focus on the proprietary GPS stuff that I'm pretty burnt out on these days.)


Back to 2016