April 27, 2015

Promoted and checked in hexedit. The advantage of this for the rest of toybox is it's the start of the long awaited not-curses infrastructure. (blessings.c? foiledagain.c? interestingtimes.c? No idea what to call it yet.) It's basically plumbing I need for command line editing in toysh and to implement vi and less and so on.

I've ranted before about how terminal control is obsolete, but let me try to summarize why I'm taking this approach.

Back in the 1970's every different brand of teletype machine (the standard I/O device of the time: a combination keyboard, serial port, and daisy wheel printer writing in ink on paper) spoke a slightly different protocol across the serial port, so the unix "tty" system grew a bunch of status bits (look in the tcgetattr man page for all those ISTRIP and INLCR constants) to humor the different variants.

Then in the late 70's we got "glass tty" dumb terminals (a keyboard and serial port hooked up to a television instead of a printer, saving on paper) that let you move _around_ the screen and change color and such, but all the ASCII values were taken so everybody used multibyte escape sequences to represent new things like "cursor up", and again each vendor used different incompatible escape sequences. So another driver layer showed up to interpret this mess, using the "$TERM" environment variable to specify _which_ set of escape sequences your glass tty understood.

And all this became COMPLETELY useless when minicomputers gave way to microcomputers: by around 1982 the keyboard and display were built IN to the computer (or at least connected directly), which had complete control over them (the video buffer was memory mapped instead of only accessible through a serial port, you could draw pictures if you wanted to), so now you were talking to a terminal program running on the same machine which was _emulating_ a terminal device to work with the existing software.

This is how we reached the point where two pieces of software are talking to each other using a dozen different protocol variants (different escape sequences specified by $TERM) even though it DOESN'T MATTER which one they use as long as they agree. Dumb terminals went away before Linux got started in 1991, so all we _ever_ had to do is pick a common subset of these sequences, hardwire in support for that, and bury this termcap/termios/curses nonsense in a sea trench alongside EBCDIC.

It turns out there's even a standard: the American National Standards Institute documented a common subset of escape sequences over 30 years ago, and DOS implemented these "ANSI escape sequences" back in the 80's. They're loosely based on the DEC VT100 escapes, which works out especially well for Linux because Digital Equipment Corporation was not just the biggest minicomputer vendor but also made the hardware that Unix was developed and deployed on (prototyped on the DEC PDP-7, developed on the PDP-11, and then BSD unix was mass-deployed in 1980 as the IMP replacement across the arpanet on DEC VAX hardware, which is how Unix became the standard operating system of the internet).

So the standard DOS adopted back in the 80's works fine for Linux, and all the common $TERM types ("linux", "xterm", "vt100", "ansi") should support this fine precisely because it _is_ the common subset. Even the kernel's ctrl-alt-F1 VGA terminal driver supports it. Linux even has a man page on commonly accepted escape codes: "man 4 console_codes" describes what the kernel's VGA terminal driver (and thus presumably TERM=linux) implements.
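
The whole "hardwire the common subset" thing boils down to printf()ing a handful of bytes. A minimal standalone sketch (not toybox code, just the console_codes sequences in action):

#include <stdio.h>

int main(void)
{
  printf("\033[2J");                  // ESC [ 2 J: erase the whole screen
  printf("\033[5;10H");               // ESC [ row ; col H: cursor to row 5, column 10
  printf("\033[7mreverse\033[0m\n");  // ESC [ 7 m reverse video, ESC [ 0 m reset
  return 0;
}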

This is why curses needs to _die_, it's a giant pile of complexity serving no modern purpose, dragged along because we've always done it that way and the people who understand why it was that way wandered off and the new guys blindly repeat the patterns they inherited. And thus "let's just do the simple thing" is met with scorn because it MUST somehow be dangerous or we'd already all be doing it that way. That's why it takes _effort_ to make this crap go away. Sometimes via research and sometimes by taking a risk and rediscovering why not doing it was a bad idea (and then either fixing it or documenting it, but generally NOT reproducing exactly the pile of crappy workarounds accumulated in the dark).

The alternative to shoveling out this mess is drowning in superstition. (The /bin vs /usr/bin split was another one of those. There's a reason computer history is a hobby of mine, I want to know _why_ we do things.) And this is why systemd scares me. A sealed black box of ever-increasing complexity with no clear explanation even of what problems it's trying to solve, just "trust me, we'll do it for you forevermore"? That is THE WRONG APPROACH, even without bringing actively dishonest agents (NSA voyeurism, russian kleptocracy, china's great firewall, Red Hat cornering the enterprise market and forcing its technological decisions upon standards bodies ala RPM as the only packaging standard in LSB, Wintel deciding that ARM must have ACPI instead of device tree because reasons) into it.

So the hex editor gives me an excuse to write the escape sequence parsing code that reads cursor up/down/left/right, page up, page down, home, and end. (And presumably more keys if they become interesting.) This involves putting the terminal into raw mode, and writing the signal handler plumbing to restore it atexit. (Although if you ctrl-c or ctrl-z in raw mode it doesn't produce a signal, so I have to do that myself anyway. Speaking of which, the "redefine the break key to something other than ctrl-c" functionality of stty? Screw it, that's part of the historical baggage from the teletype days, it DOES NOT MATTER anymore. I can implement it in stty, _and_ I can have hexedit respond specifically to ctrl-c, hardwired.)
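
The raw mode plumbing looks something like this (a sketch with made-up function names; whatever winds up in toybox will differ in detail):

#include <stdlib.h>
#include <termios.h>

static struct termios saved;

static void tty_restore(void) { tcsetattr(0, TCSANOW, &saved); }

void tty_raw(void)
{
  struct termios raw;

  tcgetattr(0, &saved);
  atexit(tty_restore);
  raw = saved;
  // ICANON off: bytes arrive unbuffered. ECHO off: we draw the screen
  // ourselves. ISIG off: ctrl-c arrives as byte 3 for us to handle.
  raw.c_lflag &= ~(ICANON|ECHO|ISIG);
  tcsetattr(0, TCSANOW, &raw);
}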

It also lets me write a "put the cursor at this X/Y location" function, dig up the old "scroll the entire screen up one line, scroll the entire screen down one line" sequences, and figure out how I want to write a character in the bottom right corner of the screen (the scroll up/down stuff above could easily do it, scroll up, write the new bottom line, scroll down, rewrite the top line... but that could cause screen jitter. I really want to write the whole line and then scroll just that line one to the right, basically "insert" without redrawing the line. There's probably a sequence for that...)
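
(There is: console_codes lists ESC [ @ as ICH, "insert blank characters". So the corner could go something like this hypothetical helper, assuming an 80x25 screen: write the bottom line minus its first character, then insert a blank at the left edge to shift everything right and drop the first character into the gap. The cursor never advances past the last column, so nothing scrolls.)

#include <stdio.h>

void draw_bottom_line(char *line)  // line holds exactly 80 characters
{
  printf("\033[25;1H%.79s", line+1);    // columns 1-79 get characters 2-80
  printf("\033[25;1H\033[@%c", *line);  // shift the line right, write char 1
}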

And then once I start on a second user (probably cleaning up more.c) I can factor this stuff out into lib/interestingtimes.c.


April 24, 2015

I was way too fried on the plane flight back from Japan to work on anything complicated, so I started adding a hex editor to toybox. The first big program I wrote on the commodore 64 circa 1983 was a hex editor. (Which I then used on the main directory of the disk that contained itself as its first test, and it wrote the sector back rotated around by one byte. Important early learning experience. Gimme a break, I was eleven.)

Alas, unlike the commodore 64 we haven't got unambiguous representations of all 256 bytes ala "what they look like if you poke them into screen memory". The 16 bit PC back under DOS did, I know this because I was writing stuff to screen memory back in my chamelyn bbs, and yes it was spelled like that; I wrote a series of like 5 bbs programs in the late 80's and early 90's and that was the one where I reinvented the bytecode interpreter without knowing there was a name for it. But the internationalization people objected to the 128 bytes ascii _didn't_ standardize being used for graphics characters, and of course none of them could agree on what _should_ go there, so we got codepages. Eventually Ken Thompson sorted it all out with unicode, but that doesn't help print a character representing each byte's full range.

What the C64 did was make values 0 through 31 reverse video versions of characters 32 through 63, and I'm totally stealing that and using it here. But characters 128-255 had graphics on the C64, and here they don't. What I did was change the color (actually switch to the "dark" version of the default color, intensity off) so there's a grey version of 0-127 mapped to 128-255. Not perfect, but eh...
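
The mapping works out to something like this (a sketch; the details may not match the actual hexedit.c, and 127 is still awkward):

#include <stdio.h>

void draw_byte(unsigned char c)
{
  if (c > 127) {
    printf("\033[2m");  // SGR 2: dim/low intensity for the high half
    c -= 128;
  }
  if (c < 32) printf("\033[7m%c", c+32);  // reverse video of 32-63
  else putchar(c);
  printf("\033[0m");  // reset attributes for the next cell
}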

There are still some more things to do. Right now it works on an mmap(), which means you need to feed it a -r (read only) flag to edit some stuff, and it simply can't take input from a pipe. That's probably ok given what it _is_ (where would you save the result, and if you can't save, why edit?) but another thing is it can't insert. You can't change the size of the file you're editing; I might want to implement that. (We have an insert key...)

Another thing I should implement is an undo buffer. Just use toybuf as a ring buffer of edits, and roll them back one at a time when you hit the "u" key until you run out. Doesn't have to go back to the beginning, just let you undo typos. (The undo buffer is especially important because it _is_ working on an mmap, meaning all changes happen immediately. There's no "save" operation, just "exit". Yes, this means on 32 bit systems you can't edit a file larger than a gigabyte and change because you'll run out of virtual address space. But since even phones are going 64 bit, I might be ok with that. Then again, $DAYJOB's sh2/sh4 chip isn't likely to go 64 bit any time soon. I suppose I could fix it so we redo our mmap() window as you traverse the file... Six of one...)
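
The ring buffer part is trivial, something like this (hypothetical layout; the real one would carve its storage out of toybuf instead of declaring its own):

struct undo { long long pos; unsigned char old; };

#define UNDO_MAX 1024
static struct undo ring[UNDO_MAX];
static int head, count;

void record_edit(unsigned char *map, long long pos, unsigned char byte)
{
  ring[head] = (struct undo){pos, map[pos]};  // remember the old byte
  head = (head+1) % UNDO_MAX;
  if (count < UNDO_MAX) count++;
  map[pos] = byte;  // writing to the mmap() means it's already "saved"
}

int undo_edit(unsigned char *map)  // returns 0 once we run out of undos
{
  if (!count) return 0;
  head = (head+UNDO_MAX-1) % UNDO_MAX;
  map[ring[head].pos] = ring[head].old;
  count--;
  return 1;
}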


April 23, 2015

Back at $CUSTOMER site since I'm the local. (Ok, maybe I flew back from the other side of the planet for this meeting, but I do live here. Well, about 20 miles from here, but the point stands.) Ken is still here (he's a manager, off with $CUSTOMER managers all yesterday and doing that again today, but I got a ride with him). Geoff and Martin are at the airport flying back to canada, so I actually got to do a software thing today (cross compile an updated library version). As Flynn said in (the first) Tron, "hooray for our side". The $CUSTOMER boards continue to be problematic, but they now mostly understand why, and could fix it if the engineers in question were home (in Minnesota, they also flew here; lotsa pieces being integrated).


April 22, 2015

At $CUSTOMER site for $DAYJOB. Flew back for our Big Integration Meeting Week and then was too fried yesterday and just went to bed.

Now I'm mostly helping $CUSTOMER debug their own hardware. They may get to the point where they can test our stuff this week. That would be nice. (Ah, prototype integration bringup. If we knew what we were doing, we'd be done.)

Two other $DAYJOB coworkers I've never actually met in person are here, Geoff (not to be confused with Jeff) and Martin. Both nice. Both normally in Canada, I believe. They're actually doing the bulk of our side of the work, I'm mostly helping the $CUSTOMER engineers work out what's wrong with Linux bringup on their boards. (Three different prototype boards. Three different hardware behaviors. It's that point in the project. Luckily we can test different bits on each one and show that our respective bits work when the wires go through. It'd still be nice to see all of it work together, but you can't have everything. Where would you put it?)


April 21, 2015

Japan! Yay japan.

Sleep. Yay sleep.

I did so much stuff, and posted pictures on twitter. Many, many, many, many, many pictures. Often of food. And the occasional river, or supertoilet.


April 14, 2015

Spent the day personing a booth at Cool Chips, and managed to fish a college professor and several graduate students out of the crowd to give them a presentation. (Using a subset of the slides from Friday's thing.)

All this is sort of practice for a theoretical talk at Linuxcon Japan in 6 weeks. (Note that Cool Chips is _not_ run by the Linux Foundation, and thus has a year in the URL rather than a history that will vanish without trace once it stops being an effective fundraising tool to milk cash out of clueless Fortune 500 companies that want a company to represent Linux the way AOL represented The Internet to grandma. Any time a large enough accumulation of money says "Get me the blogosphere on the phone, NOW!", somebody will pick up the phone with one hand and the money with the other.)

Anyway, said Linuxcon talk remains purely theoretical because their contact address is a role not a person (well of course, it's a bureaucracy: individuals are single points of failure, you can't have any of those working for you), and thus no actual specific person has answered it since Saturday. We didn't know giving a talk about this stuff at a conference was even an option until we did it on Friday and went "we should do this again, bigger and better", at which point the CFP deadline had already passed (results hadn't been posted yet but the website wouldn't let us add more), so we're attempting to "fly standby", as it were.

For a value of "attempting" that involves being completely ignored, but we didn't sponsor the conference, so... *shrug* Oh well, wouldn't be the first "hall party" I've thrown at a con.

Meanwhile, the SMP circuitry should be ready enough to at least talk about it tomorrow. Or possibly the DMA stuff. The hardware engineers have been a variant of "ready" that collapses when you examine its quantum state, but I suppose that's what I'm here for...


April 13, 2015

The future starts in Japan! By which I mean given where the international dateline is and the 11 hour time difference (or is it 10? They don't do daylight savings time here, it varies), anyway my laptop clock says it's 11pm on the 12th, but here we just came back from lunch on monday.

Either way, the 4.0 kernel dropped last night which means I need to finally fix the #*%(&# problem with 3.19 that's blocked aboriginal on the 3.18 kernel. And so we dig.

The problem is that Aboriginal's sources/patches/linux-arm.patch, which forces the kernel's kconfig to allow the "arm versatile" board to select what kind of CPU it has (QEMU can emulate v4, v5, v6, v7 with various -cpu options, but the kernel has assumptions about what's in there), conflicts in 3.19. Bisecting, it was broken by commit dc680b989d51 "ARM: fix multiplatform allmodcompile", which was itself patching the earlier commit 68f3b875f784 "ARM: integrator: make the Integrator multiplatform".

This sounds great: do they now allow you to run the same board on multiple processor variants? Hmmm, looks like they do actually. So I might be able to just yank that patch. And the outoutdamnperl patch is obsoleted by commit d69911a68c86 so that can go too...

And then the FUN part is figuring out what the heck happened to initramfs. The config didn't change but the kernel is panicking unable to mount root= which it shouldn't be trying to do because there's an initramfs. What?

Bisect, bisect, bisect... Oh thank you so much Andi Kleen for commit ec72c666fb34 so we now have two symbols (CONFIG_INITRAMFS_COMPRESSION_GZIP and now CONFIG_RD_GZIP) meaning the EXACT SAME THING and you have to specify BOTH of them to be able to compress your initramfs with gzip. (Note! We already compress the KERNEL with gzip using KERNEL_GZIP which sets HAVE_KERNEL_GZIP (no idea why) which means we have FOUR symbols that mean the exact same thing.)

The Aristocrats! Linux!
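
For the record, the pair of .config lines that (as far as I can tell) both have to be there now to get a gzip compressed initramfs:

CONFIG_RD_GZIP=y
CONFIG_INITRAMFS_COMPRESSION_GZIP=y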

Anyway, 3.19 seems to work now. Doing test builds on the various targets, and then I can cut a release and be once again only one version behind.

Might cut a toybox release just because I can. But I'd like to finish ps first, and expr is fairly low hanging fruit that's the main remaining item in the aboriginal linux command usage table:

   7 busybox gzip
   8 busybox dd
  17 busybox tar
 190 busybox diff
 275 busybox sh
 417 busybox tr
 457 busybox awk
3414 busybox expr

That's it for the i686 build. The LFS build plus command prompt niceties add ash bunzip2 fdisk ftpd ftpget ftpput gunzip less man pgrep ping pkill ps route sha512sum test unxz vi wget xzcat zcat and while I'm at it android is grabbing expr hwclock more netstat pgrep pkill route tar tr out of "pending" (get the value of $ALL_TOOLS out of their Android.mk and then for i in $THAT; do grep -q "TOY($i," toys/pending/*.c && echo $i; done), and a contributor to my patreon wants mdev prioritized, and $DAYJOB could really really use toysh.

So.

I should do that.


April 12, 2015

Video of the Jamboree presentation isn't up yet, but given that ELC video from last month isn't up yet either, I can't be too hard on them. (Also, we and Tim Bird skyping in from the states were the only english presenters, so Kawasaki-san did his third of the talk in Japanese. Everybody else's slides were in english, but not their talks, so pointing a US crowd at the video is... thingy.)

Bought perfume for Fuzzy, and coffee in a can for Fade. Other than that spent about half the weekend holed up in my hotel room programming (caught up with the Android git repository's accumulated commits that weren't to their Android.mk file or their checked-in copy of .config or the generated/ directory, plus their mount ioctl() thing so they can eventually switch that over), and the rest wandering around tokyo with Jeff. Fun town. I _really_ need to learn Japanese.

I also submitted a "can I fly standby" talk proposal request to the Linuxcon Japan guys. We had no idea I was coming here before the call for papers thing closed (the original reason was we're finishing up SMP support for the new chip design and they wanted me here with the hardware guys to do Linux bringup for that, it's been UP up until now). The talk we gave at the Jamboree was actually kind of nice but a bit unpracticed and described websites that aren't live yet, so we'd like to do it again in a more polished and complete form, in front of a larger audience, and all of it in the same language.

Alas the CFP is over so the web form won't let me submit a proposal. (They apparently announce their selections tuesday. Yeah, I know.) Still, I asked and we'll see what they say. (Haven't said anything yet, but it's Sunday, so... if they turn us down maybe we can hijack a BOF or get a table or something...)

I actually learned a lot preparing the slides with the other guys. We've done an SH2-compatible chip (the "J2") because the last SH2 patent expired in October 2014, and the SH4 patents don't expire until 2016. So we can release BSD licensed VHDL (and do our public live development in a github repository) for SH2, and then add SH4 support when those patents expire.

Another reason you want a nommu design is latency: if you're doing signals measurement with nanosecond accuracy you don't want TLB reloads adding random jitter.

Also, we're not just releasing processor VHDL, we're releasing a bunch of components (serial, ethernet, DSP, etc) with a build system that lets you configure and make an entire SOC (selecting the stuff you want in it during config).

Our "sh2 managing a bunch of DSPs" design is approximately what the Cell processor in the PS3 was trying to accomplish ("powerpc managing a bunch of DSPs") and what NeXT boxes before that were doing ("m68k managing a bunch of DSPs"). The problem with PS3 and NeXT is it turns out DSP programming is something not many people know how to do, and each of them were reinventing the wheel each time. What we're trying to do is A) build an pen source community that knows how to do this and can teach even more people to do it, with a reusable library of code under open licenses, B) leverage stuff like opencl that's doing general-purpose GPU programming, since that actually maps right over to the DSP stuff.

This is stuff people working at the company know how to do... but I'm not one of them. We need to put the linux-side programming info for all this on nommu.org and the hardware-side programming for FPGA and OpenCL stuff on Zero P F dot org (because the Orangutan Protection Foundation got .org, Original Print Factory got .jp, and somebody in germany got .net; I think we're claiming the zero stands for "no intellectual property licensing restrictions" or some such, you'd have to ask the marketing guys).

Anyway, really fun stuff. I hope I get to talk about it at Linuxcon.jp.

(I keep typing linucon which died when I moved to Pittsburgh for a year and nobody inherited it. Also the year I chaired Linucon coincided with Penguicon 3 which wobbled badly and I spent the next 2 years focusing on getting Penguicon back up to speed (once again recruiting the guests of honor for 4 and 5, introducing Liquid Nitrogen ice cream in year 4, and so on). I stopped being involved after that because reasons. Been too busy to do another one since. Such is life...)


April 10, 2015

Enjoying tokyo immensely. They have tea the way I used to make it, at least before I switched to splenda instead of sugar. (My tendency to like cold tea with milk in it horrified both sides of the atlantic, but apparently Japan is fine with this.)

Presented at Japan Technical Jamboree #52, in the 4pm slot. They wrote in "Rob Landley and his partners" but the other two were Jeff Dionne the founder of uclinux.org (and founder of the company I work for), and Sumpei Kawasaki the original SuperH processor architect (and guy driving the new J2 processor design). They outrank me, I just got credited because I'm the one who emailed to propose the talk and we only prepared the slides the night before so they didn't have a copy of them yet. (Still don't, I should fix that...)


April 7, 2015

11 hour flight to Tokyo Haneda. Got a bit more of PS written, but netbook battery does not last 11 hours (new one was way closer but doesn't have all the right stuff installed on it yet), and there were no outlets on the plane.

Picked up from airport by Kawasaki-san (the original architect of the SuperH processor, who is working with us on the fresh implementation), and taken to Kanda Grand Central Hotel. That has outlets. Japanese ones.

Ironically, my shiny new netbook and the replacement power supply for the old one require a ground plug, which japan doesn't use. But the OLD power supply (the one with the flaky cord that only passes current in certain positions) works just fine.

(Outlets here are apparently 50hz 110 volts with non-polarized plugs, so some things just work and other things don't fit at all. The really silly part is netbooks MUST work off battery, by definition, so requiring a grounding plug or caring all that much about the plug polarity is kinda strange. And yet both new power supplies do.)

I fall over now.


April 6, 2015

Guess who's getting on a plane tomorrow for a sudden last-minute trip to Japan?

Go on, guess.

But hey, this means I get to present at Japan Technical Jamboree which I've always wanted to, ever since I met Ueda-san at CELF in 2006 (the man who organizes it). It's basically a monthly Tokyo LUG meeting, but this being tokyo they fill a room with people and do a half dozen presentations.

I should learn Japanese.


April 5, 2015

Broke down and switched the toybox repository over to git.

Since android and tizen and openembedded and gentoo and so on have all been using Georgi Chorbadzhiyski's git mirror rather than the mercurial repository, I bit the bullet and switched the project's repo to git. Georgi's mirror is now pulling from that.

Now trying to figure out how to make git do lots of things I've been doing in mercurial for years. I know there's a WAY, I just have to look up each command and keep hitting crap like:

$ git log lib --stat
fatal: bad flag '--stat' used after filename
$ git log --stat lib
[ works fine ]

And there's just no excuse for that.


April 4, 2015

Spot the cheat:

F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
0 S  1000   465   464  0  80   0 -  7313 poll_s pts/9    00:00:00 vi

I'm trying to work out appropriate padding for the ps fields, so I thought I'd take a look at what "ps -l" looks like on ubuntu, and what do I find? The ADDR and SZ fields overlap. They didn't implement ADDR (it's - for everybody, even though you could use EIP field of the proc/$$/stat stuff), and they let the SZ field leak over into it, so you have 5, possibly 6 digits worth of 4k pages until you run out of space to display the resident set size.

(Figuring out how much memory a process is "using" is fuzzy anyway when the executable pages and library pages are shared between processes, and even if it isn't doing file backed mmap() a certain amount of dirty page cache may be due to other files it needs... but let's ignore that for now.)

The way you can tell the ADDR SZ combination in ps -l is a @*%^@! _SPECIAL_CASE_ is that in "ps -o addr,f", addr is right aligned, but in ps -l it's "left" aligned. That's just _sad_.

What I'd like to _avoid_ doing is readahead caching all the columns to be output, calculating the amount of space they'll eventually use, and then outputting them appropriately padded in a second pass. I'm trying to make this work on low memory (even no memory) systems, which means streaming operation.

Then again, what I did for vmstat was adjustment padding: when a field goes over pad later fields by only one space until we've caught back up. The question then comes up whether you eat leading or just trailing spaces, and that seems to be a question of alignment: right aligned things eat leading spaces (so their right edge still matches up), left aligned things only eat trailing spaces (so their left edge still matches up).
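
In code, the adjustment padding comes out to something like this (hypothetical helper, not the actual vmstat code): a "debt" counter tracks how far previous fields overflowed, and later fields give up padding until the columns line back up.

#include <stdio.h>
#include <string.h>

static int debt;  // columns the previous fields overflowed by

void put_field(char *s, int width, int right)
{
  int pad = width - (int)strlen(s);

  // Absorb earlier overflow by shrinking this field's padding. (The
  // explicit trailing space below stays, so fields never touch.)
  if (pad > 0 && debt) {
    int eat = pad < debt ? pad : debt;

    pad -= eat;
    debt -= eat;
  }
  if (pad < 0) debt -= pad, pad = 0;  // we overflowed: later fields pay
  if (right) printf("%*s%s ", pad, "", s);  // right aligned: eat leading spaces
  else printf("%s%*s ", s, pad, "");        // left aligned: eat trailing spaces
}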

Which fields are right aligned and which are left aligned? Strings are left aligned, numbers are right aligned, and timestamps apparently count as numbers. (You can test this yourself with "ps -o s:3,f" vs "ps -o f:3,f", although how you put that in the test suite I have no idea, because you can't control which PID a launched process gets. Possibly some sort of backgrounded sleep command, jobs -p, and liberal use of environment variable expansion in the test cases. (Also, I need a file named "abc) def" to test the stat parsing with.))

Creating a symlink to sleep called "1", moving it to /usr/local/bin, and running "1 100 &" then doing "ps -o cmd,f" did _not_ right justify that cmd field, so it's _not_ checking isdigit() on the first character of the field. (-o tty already showed it wasn't checking the _last_ character that way).

Hmmm, "ps -p 2,3,1" does not print output in that order, so it's just a matching filter.

Another problem: truncating fields. The "pad and catch up" thing conflicts with the old ps behavior of truncating fields, which comes up for "cmd" and such in a big way on a regular basis. Hmmm...

Ok, only let a field leak out to the left or right if there's _space_ for it to do so. If a new field needs to start on the left edge or an old field needs to end at the right edge, truncate the adjacent field far enough away to leave one space between them. This means the last field can slightly more naturally expand out to the right edge of the screen (or beyond with l).


April 1, 2015

Poking at ps. Arguing with the "C" field. What does "processor utilization for scheduling" _mean_? It's not one of the /proc/$PID/stat fields. I ran ubuntu's ps under strace and it didn't read anything obvious (or call weird ioctls), it looks like the data comes from stat or status? But where?

The STIME field is easy to fetch (stat field 22 is start_time for the process, in jiffies after system boot), but the spec doesn't say how to represent it. The other ps is doing hour:minute of starting time for the same day (ok, first entry of sysinfo() is uptime in seconds since boot, close enough), but if it's not the same day it prints a three letter month abbreviation followed by two digit day (with no space so it fits into 5 characters). And again, that month is english and I'm trying to avoid gratuitous english. (Yes, the --help text is all english. There are some built-in conflicts in what I'm trying to do here. I'm open to suggestions.)
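
The math itself is easy enough. A sketch, assuming field 22 has already been parsed out of /proc/$PID/stat (it's in clock ticks, so divide by sysconf(_SC_CLK_TCK)):

#include <time.h>
#include <unistd.h>
#include <sys/sysinfo.h>

time_t start_walltime(long long start_ticks)  // stat field 22
{
  struct sysinfo si;

  sysinfo(&si);  // si.uptime = seconds since boot
  // boot time = now - uptime, then add the process's start offset
  return time(0) - si.uptime + start_ticks/sysconf(_SC_CLK_TCK);
}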

I dunno what STIME would do for a process more than a year old, haven't got a system rebooted that long ago lying around. I could fake something up under qemu but that's not the point, the point is the SPEC doesn't SAY what it should be. Grrr.

I guess 04-01 for an April 1 start time? And beyond that uptime in days?


March 31, 2015

Fiddling with the old uClinux toolchain build gives me a much better appreciation for my own build system.

Mine doesn't expect to run as root, doesn't try to build packages with -Werror by default, doesn't require makeinfo as a prerequisite, actually has the names of the files it tries to download and the ones on the website match up (gz vs bz2 confusion in elf2flt), takes advantage of more than one processor while building, has had a release in the past 5 years, doesn't have its install path in the build script AND hardwired into the uClibc config file...

Oh hey, and it doesn't treat gmp, mpfr, mpc (the three additional packages gcc has metastasized into since the last GPLv2 release's gcc/binutils) as PREREQUISITES that the build expects you to have ALREADY INSTALLED and thus doesn't try to compile itself.

That's... yeah. Sigh.

Maybe I can extract config info out of this and apply it to the Cross Linux From Scratch toolchain build?


March 30, 2015

Gave up and started rewriting ps from scratch. It needs to be based on dirtree, not calling readdir() directly. It needs to use bitmasks to set its default modes (including -f and -l). It needs to actually implement at _least_ all the posix flags. (Except -n because seriously: what the?) And it needs to get some really weird behaviors right like the way "ps -A -o pid,tty,cmd" expands cmd to the right edge of the screen but "ps -A -o pid,cmd,tty" doesn't. (Honestly, why -l has an arbitrary limit on the length is beyond me.)

And that's before the whole "posix dash options vs bsd dashless options behave differently" can of worms I have to figure out how to implement.

(P.S. Why did posix have a table of "variable names and default headers in ps" where of the 15 entries, 9 have headers that are upper case versions of the names, in 3 more one is a truncated version of the other, and then 3 just random oddballs (args/COMMAND, etime/ELAPSED, and pcpu/%CPU). Why would you do that? It's SO CLOSE TO MAKING SENSE, and then DOESN'T.)

(P.P.S. Did the posix guys even _notice_ that the XSI default columns at the start of the stdout section and the aforementioned -o field list table at the end of the section DO NOT MATCH? The first has 4 fields (PID, PPID, NI, TIME) that match, 9 fields (F, S, UID, C, PRI, ADDR, SZ, WCHAN, STIME) that don't, and two more (TTY, CMD) that just INSULT the other table because -o tty is called TT but the _default_ name is TTY with the Y!

This is a standards committee? They agreed on this? I'm aware standards bodies should document and not legislate, but _dude_. This is not a coherent result. You can at LEAST just go ahead and add the missing 9 fields to the -o table. And then accept the lowercase versions of the uppercase labels as -o input the thing will recognize to trigger that field. If you want alternate historical spellings, fine, but SERIOUSLY...)

Sigh. Gotta implement the standard we've got rather than the standard we want. But I am filing off some of the stupid and documenting the deviation.


March 26, 2015

That was a fun convention. In theory video should be up eventually.

Two different panels on microcontrollers, I.E. nommu systems running from SRAM so they can boot straight into Linux without needing a bootloader to run DRAM initialization. One had 8 megs of ram, one had 256 _kilobytes_, but both got away with it because they did XIP (running the kernel code straight out of flash without copying it into memory first). The 256k one even did userspace xip from a cramfs or some such.

And then Wednesday I gave my talks. Both of them. I spent all my time preparing the toybox one, working on it right up until it was time to give it (not that unusual, but it worked out because I figured out the day before what to leave _OUT_; start with "here are links to three talks I already gave, which I will not be repeating" and don't try to even do "what's new" in those areas because if I start talking about licensing or history or the self-hosting crusade I'll be there all day).

So I'm reasonably happy with the new toybox talk, but then my shrinking C code got short shrift and once again the problem was the need to edit it down. The point of the talk was that I did elaborate writeups of the 27 commits that took ifconfig from 1500 lines to 520 lines, and I'd like to explain the techniques I used. And I was willing to take it on as a second panel when space opened up because hey, I already did the prep work!

Unfortunately, the writeups were _too_ elaborate, in the 2 hour timeslot I made it through maybe the first third of them, and then had to skip to the end. What I should have done was go through and work out the techniques and skip around showing examples. Maybe I should do it again, but I remember trying again to fit The Rise and Fall of Copyleft into an hour for Texas Linuxfest. (Ok, heatstroke, rehydrating with an energy drink, coming close to needing hospitalization, and giving the talk the next day. Took me 6 months to feel reasonably normal again after that. But still! Talk was not improved by second attempt at it, is my point.)

I should do podcasts.

Meanwhile, there is a _reason_ I don't schedule travel on the same day as the thing I'm traveling for. I am _totally_fried_, even though I went to bed at like 10pm each night and slept for upwards of 10 hours a night. More than one person noted they were flagging on day 3. The greying of Linux affects us all. (There was one teenager in attendance! Because one of the attendees brought his son.)

Today's an extra day in San Jose with a plane leaving at 6pm, and my voice is toast. Ensconed... ensconced... ensconcinated in some sort of "business center" down the hall from my gate, with an electrical outlet, reasonably quiet working environment, and the prospect of a $14 sandwich in the near future from one of the overpriced airport restaurants.

I happily walked to the airport. Exercise! Getting so much exercise here. And I can _smell_ things. The relentless sinus troubles always clear up when I'm here. I keep forgetting that. (I grew up breathing pacific ocean air, not middle eastern juniper imported to texas as an ornamental plant hilariously misidentified as "cedar" a century ago that's gone totally invasive species upwind of a major city. It always starts spewing pollen from late december through at least march (in the middle of what SHOULD be winter), and my sense of smell goes away entirely for months at a time.)


March 23, 2015

In California at CELF ELC (which stands for the Linux Foundation Embedded Linux Foundation Conference by the Linux Foundation), and I'm... kind of surprised at the restraint. Last time I was here (2013) it was All Yocto All The Time (sponsored by Intel) and the t-shirt looked like something out of nascar. This time the t-shirt is a simple black thing with a small name and conference logo and no sponsors listed anywhere.

I wonder if Tim Bird staged a coup?

INSANELY busy day. Great stuff.

Like three different panels were actually work related. My boss's boss Jeff Dionne (co-founder of uclinux and founder of se-instruments.com) was coincidentally in town, and I dragged him to the evening meet-n-greet dinner thing where I actually got him together with David Anders (prpplague) so they can make the eventual 0pf.org launch actually work right for hobbyists. (Jeff lives in Japan these days, and goes to LinuxTag in germany every few years but apparently hasn't been to a US event in forever. I need to wave the Jamboree things at him.)

Alas, Rich Felker the musl-libc maintainer wasn't there (his panel isn't until tomorrow). The openembedded maintainer said he was going to show up but had a childcare thing happen instead. Oh, and the buildroot maintainer was there; his talk this year was on device tree stuff and I talked to him about _both_ buildroot (he wants me to resubmit toybox _and_ he wants to merge nommu stuff but had to give back the cortex-m test system he used to have) and device tree stuff (apparently a base boot-to-shell prompt device tree needs to describe memory, processor, interrupt controller, and a timer to drive the scheduler).

This conference is making my todo list SO MUCH LONGER...


March 22, 2015

Red-eye flight to San Jose, arriving at 9:30 in the morning because I flew over two timezones, and got to have a long lunch with Elliott Hughes, the Android Core maintainer (covering bionic and toolbox, I.E. the guy who's been sending me all the android patches). Fun guy, very smart, and apparently way more swamped with todo items even than I am.

He's sympathetic with a lot of my goals for toybox, but his time horizon is shorter than mine: right now the Android M feature freeze is looming for him, and his plans for the Android release after that are in terms of what needs to get deferred out of this release to go into that one.

My "what's all this going to look like in ten years" frame of reference seems like a luxury most android guys can't afford, drinking from a firehose of a half-dozen phone vendors sending them constant streams of patches.

(Also, he used to maintain the java side of things and still thinks java and C++ were a good idea, so we're not _entirely_ in agreement on where new system programmers come from. But I expect history will sort that one out eventually.)

Yes, for those of you keeping track at home Google bought me lunch. (At Panera.) Collusion!

Staying at an airbnb. It's quite nice. It's almost two miles from the venue, but the walk is pleasant and I could use the exercise.


March 21, 2015

One of my patreon goals is "update the darn blog" and I'm doing a horrible job at it.

Right now I'm applying David Halls' big toybox patch, which he actually posted to the Aboriginal list because that's where he's using it. He sent me a big patch touching several files, and I'm going through each hunk and trying to figure out what it does, so I can commit the individual fixes preferably with a test suite entry to regression test the bug.

It all looks good except for the last hunk, which is actually a workaround for a uClibc bug. On glibc or musl (and presumably bionic) if you open a directory and getdelim() from it, you get an error and a NULL return value. But on uClibc, your process segfaults.
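
A minimal reproducer looks like this (any directory works in place of /tmp); on glibc or musl the read fails cleanly, on uClibc this crashed:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  FILE *fp = fopen("/tmp", "r");  // a directory, not a regular file
  char *line = 0;
  size_t len = 0;

  if (!fp) return 1;
  // Expect an error return here (and line left alone), not a segfault.
  printf("getdelim = %ld\n", (long)getdelim(&line, &len, '\n', fp));
  free(line);
  return 0;
}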

I came up with a cleanish fix (more or less doing what David's patch was doing but in a different way)... but I don't want to apply it to toybox. It's a bug workaround for a problem in another package. That other package should be fixed... except uClibc is dead.

The eglibc project happened because uClibc couldn't get releases out reliably, and eglibc already got folded back into glibc. The entire lifecycle of the eglibc project happened _since_ uClibc's troubles started. Same with klibc (which was a failure, but it _ignored_ uClibc). These days uClibc is coming up on _three_years_ since their last release; that's the amount of time musl took to go from "git init" to a 1.0 release! Even if uClibc did have a new release at this point it wouldn't matter. With musl and bionic competing for embedded users both at uClibc's expense, I'm calling it. The project is dead.

At CELF I should poke the musl-libc maintainer about seeing what feature list uClibc has that musl doesn't yet (basically architecture support: uClibc supports things like the DEC Alpha and m68k that musl doesn't yet), and getting musl to the point where people don't blunder into the uClibc quagmire thinking they need it, and then exit embedded linux development in disgust a year later.


March 20, 2015

Listening to The Rachel Maddow Show (apparently on self-driving cars) and I'm amazed. Five minutes in she hasn't mentioned abortion or how nuclear power will kill us all even once.

Oh never mind, around the eight minute mark it turned into "why you should be afraid of self-driving cars". And now it's all segued back into an analogy about politics.


March 18, 2015

Fade and I watched another episode of the hotel version of kitchen nightmares where bald not-gordon-ramsey mentioned a couple websites people look up hotel quality on, so I checked the cheap place I'd made reservations at for ELC in San Jose.

Bedbugs. Review after review, with photos. Right.

So I cancelled that and did "airbnb" instead. (It's silicon valley, booking through a dotcom is what they do there.) Which meant I had to sign up for an airbnb account. Which was like a twelve step process wanting to confirm by email _and_ text and wanting a scan of my passport and so on. When they wanted an online social profile, I picked linkedin from the list because I honestly don't care about that one. And since I had to log in to linkedin anyway (for the first time since 2010 apparently), I added my current position to that so it didn't still think I was at Qualcomm.

I am now getting ALL THE RECRUITER SPAM.


March 9, 2015

The CELF ELC guys approved a second talk for me, on shrinking C code. Yay.

I wonder if I should mention my patreon in either talk? Seems a bit gauche, but I should probably get over that. It _is_ a Linux Foundation event these days...

(Then again, maybe I should update the top page on landley.net, since it hasn't changed in something like a decade now...)


March 7, 2015

I've been using signal() forever because it's a much simpler API than the insanely overengineered sigaction(), but for some reason adding signal handling to PID 1 isn't working, and debugging it has of course involved reading kernel code where I find horrible things, as usual.

Backstory: I'm upgrading oneit as discussed a few times on the list, and one of the things I'm adding is signal handling. The old traditional "somebody authorised to send signals to init can tell it to halt, poweroff, or reboot the system", and I'm using the signal behavior in the system v init the developers at Large Company That Wishes To Remain Anonymous sent me. So SIGUSR1 should halt, SIGUSR2 should power off, and SIGINT or SIGTERM should reboot.

In theory, PID 1 has even the unblockable signals blocked by default (because if PID 1 dies, the kernel panics). But if you set a signal handler for a signal, your handler should get called (overriding the default SIG_IGN behavior of all the signals that would normally kill the process). Unfortunately, this is only working for SIGINT and SIGTERM, I can't get it to call the handler for SIGUSR1 and SIGUSR2.

So I dig into the kernel code to see what it's actually _doing_, and right at the system call entry point I find:

SYSCALL_DEFINE2(signal, int, sig, __sighandler_t, handler)
{
    struct k_sigaction new_sa, old_sa;
    int ret;

    new_sa.sa.sa_handler = handler;
    new_sa.sa.sa_flags = SA_ONESHOT | SA_NOMASK;
    sigemptyset(&new_sa.sa.sa_mask);

    ret = do_sigaction(sig, &new_sa, &old_sa);

    return ret ? ret : (unsigned long)old_sa.sa.sa_handler;
}

I.E. signal() is implemented as a wrapper around sigaction() with the two "be really stupid" flags set. It intentionally breaks signal handling. (Note: the hobbyist developers at berkeley fixed this in the 1970's. The corporate developers at AT&T maintained the broken behavior through System V and beyond.)

The solution: write my own xsignal() wrapper around sigaction() that sets the darn flags to 0, getting sane default behavior without having to fill out sigaction()'s extra fields every time.
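
Something like this (minus the error checking a real toybox x-prefixed function would do); leaving sa_flags zeroed means no SA_RESETHAND (the userspace name for SA_ONESHOT) and no SA_NODEFER (ditto SA_NOMASK):

#include <string.h>
#include <signal.h>

void xsignal(int sig, void (*handler)(int))
{
  struct sigaction sa;

  memset(&sa, 0, sizeof(sa));  // sa_flags = 0: persistent, masked handler
  sa.sa_handler = handler;
  sigaction(sig, &sa, 0);
}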


March 6, 2015

Why am I paying $500 in airfare (plus more in lodging) to go give a talk at a Linux Foundation corporate event again? I'm sure I had a reason...

Oh well, tickets booked for the thing.


March 2, 2015

Current status: force resetting the man pages database to see if whatis or apropos can find a semiportable way (works under glibc, uClibc, musl, and bionic is close enough to "portable" for me) to nondestructively reinitialize the heap (leave the old one alone, just leak it and start a _new_ one with a new heap base pointer) so i can write my own vfork() variant (calling clone() directly) for nommu systems which does _not_ require an exec() or exit() to unblock the parent, but which lets me re-enter the existing process's _start(). (I can already get clone to create a fresh stack, but the heap is managed by userspace.)

You know, like you do...


March 1, 2015

Elliott Hughes sent a bunch of patches to fix printf argument skew, and another patch to annotate stuff so gcc can spit out its own warnings about printf argument skew.

Back in C89, arguments were promoted to int which meant that varargs didn't have to care too deeply about argument types on 32 bit systems, because everything got padded to 32 bits. But C99 did _not_ do the same thing for 64 bit values, which means that some arguments are 32 bits and some are 64 bits, and if you're parsing varargs and suck in the wrong type it all goes pear shaped. (If the one you get wrong is the _last_ argument, and you treat a long as an int, and you're on a little-endian system, it works anyway. This is actually fairly common, and disguises 80% of the problem in a way that breaks stuff if you add another argument after it or build on a big endian system like powerpc, mips, or sh2.)
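
Illustrated (a made-up example, not one of the actual fixes):

#include <stdio.h>

int main(void)
{
  long big = 42;

  // Wrong: %d consumes only an int's worth of "big", so %s can grab a
  // skewed garbage pointer. (Commented out so this runs clean.)
  //printf("%d %s\n", big, "boom");
  printf("%ld %s\n", big, "fine");  // %ld matches long: args stay aligned
  return 0;
}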

(Oddly enough, every big-endian system I can think of off the top of my head _can_ work little-endian too, they all have some variant of processor flag you set to tell it whether it's working in big-endian or little-endian mode. It's just that some people think big endian is cool and thus break compatibility with 99% of the rest of the world because requiring humans to reverse digits when reading hex dumps is far worse than making all your type conversions brittle and require extra code to perform. But this is one of those "vi" vs "emacs" things that's long since passed out of the realm of rational argument and become religious dogma.)

So 64 bit registers went mainstream (with x86-64 showing up in laptops) starting in 2005, and these days it's essentially ubiquitous, and that means the difference between 64 bit "long" and 32 bit "int" is something you have to get right in printf arguments because they're not promoted to a common type the way 8 bit char and 16 bit short still are. (The argument that it wastes stack memory implies that doubling the size of "char" wasn't ok on a PDP-11 with 128k of memory, and doubling the size of "short" was wrong on a PC with one megabyte of ram, yet that's what they did. But it's different now, for some reason.)

Anyway, gcc's inability to produce warnings correctly extends to printf argument mismatch warnings too: if you try to print a typedef'd pid_t or something as a long when it's an int, it complains something like "%ld requires a long but argument is pid_t". Note that typedefs are basically a #define substituting one type for another, it ALWAYS boils down to a "primitive" type, I.E. one of the ones built into the compiler (or a structure or union collecting together a group of such), but that's not the error gcc gives. Instead it hides the useful information and makes you trace through the headers manually to find out the actual type.

There are only a half-dozen primitive types in c: 8, 16, 32, and 64 bit integers, short and long floating point, and pointers. (The ints come in "signed" and "unsigned" but that's not relevant here. There's also bitfields almost nobody ever uses because they're inefficient and badly underspecified (honestly better to mask and shift by hand), and a "boolean" type that pretends not to be an integer but actually is. But again both of those are promoted to int when passed to a function, and thus can be ignored as far as printf is concerned.)

The other problem is gcc complains about identical types: on 64 bit systems "long long" and "long" are the same type, but it complains. This is especially hilarious when the headers #define (or typedef) a type as "long long" when building for 32 bits and "long" when building for 64 bits, so "%lld" would always try to print 64 bits and would always be _fed_ 64 bits so it works fine, but gcc warns anyway because reasons. (They're rewriting it in C++, therefore C must be just a dialect of C++ and everything everywhere is strongly typed regardless of what type it really is, right?)

Yes, I could crap typecasts all over the code to shut the broken compiler up. And that's what most people do in situations like that. But forcing type conversions when you don't need to not only hides real bugs as often as not, it sometimes causes new ones. I rip unnecessary typecasts _out_ because simple code is better.

And then I have to deal with gcc. I know everything the FSF maintains is going away, but it's not dying _fast_ enough. (And LLVM, written in C++, isn't hugely exciting. It has momentum because people are moving away from the gcc. LLVM isn't really _attracting_ anybody, the FSF is repelling them and "any port in a storm". Still, at least the people running it aren't the FSF, so that's something.)


February 28, 2015

Sigh. One of the most useful things to be able to build standalone in toybox would be the shell, toysh. (Which is currently near-useless but one of the things I need to put serious effort into this year.)

However, the shell needs the multiplexer. It has a bunch of built-in commands like "cd" and "exit" that need to run in the current process context to do their thing, it _must_ be able to parse and dispatch commands to be a shell. So the main thing scripts/single.sh does, switch off CONFIG_TOYBOX (and thus the "toybox" command), isn't quite appropriate.

Except... the shell doesn't need the "toybox" command. When you run it, the default entry point should be sh_main(). In fact it _needs_ to run sh_main() even if the name is rutabega.sh because the #!/bin/sh method of running a shell feeds the script name into argv[0] which would confuse toybox_main().

However, the scripts/single.sh method of disabling the multiplexer treats the array as length one, and just dereferences the pointer to get all the data it needs. Currently, this means if I do hack up toysh to build standalone, it thinks it's the "cd" command. (Which runs in a new process and then exits immediately, so is essentially a NOP other than its --help entry.)

I note that somebody is arguing with me in email about calling things scripts/make.sh when they say #!/bin/bash at the top and depend on bash extensions, because obviously if they don't run with the original 1971 sh written in PDP-7 assembly then they're not _shell_ scripts. I may be paraphrasing their argument a bit.


February 27, 2015

Cut a toybox release. Need to do an aboriginal linux release now. (It built LFS-6.8 through to the end, if some random thing still needs tweaking in toybox, I can add a patch to sources/patches in aboriginal.)


February 26, 2015

Blah, build dependencies! In lib/xwrap.c function xexec() cares about CFG_TOYBOX and !CFG_TOYBOX_NORECURSE, and if those toggle in your config you need to "make clean" to get it to notice.

Alas, if you rebuild the contents of lib/ because .config is newer then "make change" rebuilds it every time. But there isn't a way to tell make to depend on a specific config symbol unless you do that insane "create a file for every symbol name" thing which is just way too many moving parts.


February 25, 2015

The stat breakage was printing long long as long, which is 32/64 bit type confusion on 32 bit hosts. Of course the real type was hidden by layers of typedefs, which are worse in statfs than in posix's statvfs because the linux structure is trying to hide a historical move from statfs() to statfs64(). But statvfs has the fsid as a 64 bit field, and statfs has fsid as a 128 bit field (struct of two 64 bit fields, and it uses all the bits), so switching from the LSB api to the posix API would truncate a field. Grrr.

Anyway, stat's fixed now and I ran a build of all the targets and half of them broke, with WEIRD breakage. On i486, i586, and i686 the perl build said the random numbers weren't random enough (but /dev/urandom is fine). Sparc and ppc440 segfaulted with illegal instructions.

Four things changed: the kernel version, the qemu version, toybox, and the squashfs vs initramfs packaging. The illegal instructions sound like a qemu problem, the perl build breakage might be kernel version? Sigh. Too much changed at once.

Oddly enough, arm built through to the end. Well of course.


February 23, 2015

Still trying to get an Aboriginal release out. The lfs-bootstrap control image build broke because the root filesystem is writeable now, so the test whether or not we need to create a chroot under /home and run the build in that isn't triggering.

So I need to add another condition to the test... but what? The obvious thing to do is df / and see if there's enough space, but A) how much is "enough", B) df doesn't have an obvious and standardized way to get the size as a numeric value. You have to chop a field out with awk, which is (to coin a phrase) awkward.

Yes, classic unix tool, standardized by posix, not particularly scriptable.

The tool that _is_ scriptable (and in toybox) is "stat", and in theory "stat -fc %a /" should give the available space... but it doesn't. It gives it in blocks, and how big is a block? Well that's %S, so you have to do something like $(($(stat -fc '%a*%S'))) and have the shell multiply them together (and hope you have 64 bit math in your shell, but for the moment we do).

Next problem: stat is broken on armv5. It works fine on x86, but it's breaking in aboriginal. (Is this an arm thing, a uClibc thing, a 3.18 kernel thing... sigh.)

So now to debug that...


February 21, 2015

Still banging on Aboriginal Linux. You rip out one little major design element and replace it with something wildly different and there's consequences all over the place...

The ccwrap path logic is still drilling past where I want it to stop (and thus not finding the gmp.h file added to /usr/include because it's looking in /usr/overlay/usr/include in the read-only squashfs mount). I pondered using overlayfs to do a union mount for all this, but that's a can of worms I'm uncomfortable with opening just yet. (Stat on a file and stat on a directory containing the file disagree about which filesystem they're in. I suppose the symlink thing is similar, but one problem at a time...)

Since I was rebuilding ccwrap so much, I decided to make a new "more/tweak.sh" wrapper script to generally make it easier to modify a file out of a stage and rerun the packaging. The stage dependencies are encoded in build.sh using the existence of tarballs (if the tarball is there, the stage built successfully), so it can just delete the tarball before the check and the stage gets blanked and rebuilt.

However, I want to manually do surgery on a stage, and then rebuild all the stages _after_ that one without rebuilding that one. (Avoiding a fifteen minute cycle time for rebuilding native-compiler on my netbook is sort of the point of the exercise.) And build.sh didn't know how to do that, so I added an AFTER= variable telling it to blank the dependencies for a stage as if the stage was rebuilt, but not to rebuild the stage. (Sounds simple. Took all day to debug.)

The other fun thing is that system-image.sh is rebuilding the kernel, which is the logical place for it to go (it's not part of the root filesystem, and all the kernels this is building are configured for QEMU so you'd want to replace that when using real hardware anyway), but it's also an expensive operation that produces an identical file each time (when you're not statically linking the initramfs cpio archive into the vmlinux image, anyway).

So I added code to system-image.sh that when you set NO_CLEANUP it checks if the vmlinux is already there and skips the build if so. (The filesystem packages blow away existing output files, the same way tar -f does.) And have tweak.sh set NO_CLEANUP=temp (adding a new mode to delete the build/temp-$ARCH directories but not the output files) to trigger that.

So when I finally finished implementing this extensive new debugging mode, it took me a while to remember what problem I wanted to use it on. It's been that kind of week...

And then, when I got "more/tweak.sh i686 native-compiler build_section ccwrap" to work, it put the new thing in bin/cc instead of usr/bin/cc because native-compiler is weird and appends "/usr" to $STAGE_DIR. So special case that in tweak.sh...

And after all that, it produced an x86-64 (host!) binary for usr/bin/cc, because sources/sections/ccwrap.sh uses $HOST_ARCH, which isn't set. (Sigh: there's TOOLCHAIN_PREFIX, HOST_ARCH, ARCH, CROSS_COMPILER_HOST... I'd try to figure out how to get the number down but they all do slightly different things, and the hairsplitting's fine enough that _I_ have to look it up in places.)

The toybox build is using $ARCH, the toolchain build is using $HOST_ARCH. This seems inadvisable. ($ARCH is the target the toolchain produces output for, and $HOST_ARCH is the one the toolchain runs on. They're almost always the same except when doing the canadian cross stuff in the second stage cross compiler. In fact native-compiler.sh will set HOST_ARCH=$ARCH if HOST_ARCH isn't already set, which is the missing bit here.)

Sigh. Reproducing bits of the build infrastructure in a standalone script is darn fiddly. Reminding me how darn fiddly getting it all to work in the _first_ place was...


February 20, 2015

I switched the Aboriginal Linux stage tarballs from bzip2 to gzip, because bzip2 is semi-obsolete at this point in a way gzip isn't. There's still a place for gzip as a streaming protocol (which you can implement in something like 128k of ram including the buffer data), while kernel.org has stopped providing tar.bz2 files and replaced them with tar.xz.

This gets me off the hook for implementing bzip2 compression-side in toybox. (Half of which is a horrible set of cascading string sort algorithms where if each one takes too much time it falls back to the next, with no rhyme or reason I can see; it's just magic stuff Julian Seward picked when he came up with it, just like the "let's do 50 iterations instead of 64" magic constants all over the place that scream "mathematician, not programmer" (at the time, anyway). And I can't use the existing code because it's not public domain, but if I can't understand it I can't write a new one.)

Yes, I'm enough of a stickler about licenses that I won't use 2-clause BSD code in an MIT-licensed project, or Apache license, or ISC... They all try to do the same thing, but each has slightly different license phrasing along with the requirement to copy their chosen phrasing exactly, which is _stupid_, but if you think no troll with a budget will ever sue you over that sort of thing, you weren't paying attention to SCO or the way Oracle sued Google over GPL code. In theory you can concatenate all the license text of the various licenses you used, which is how the "Kindle Paperwhite" wound up with over 300 pages of license text under its "about" tab. If you ever _do_ wind up caring about what the license terms are, that's probably not a situation you want to be in.

The advantage of public-domain equivalent licenses is they collapse together. You're not tied to a specific phrasing, so nobody bikesheds the wording (which is what's given us so many slightly incompatible bsd-alikes in the first place).

But it's also that toybox isn't about combining existing code you can get elsewhere. If I can't write a clean, polished, well-integrated version of the thing, you might as well just install the other package. If I can't do it _better_, why do it _again_? (That's why I didn't merge the bsd compression code I had into busybox in the first place. I had the basics working over a decade ago.)

So back to Aboriginal: switching tarball creation from bzip2 to gzip actually made things _slower_. Yes, the busybox gzip compression is slower than the bzip2 compression. That's impressively bad. (Numbers: running busybox gzip on the uncompressed native-compiler tarball takes 2 minutes 3 seconds. Piping the same data through the host gzip takes 21 seconds. Busybox is _six_times_ slower. The point of gzip is to do the 80/20 thing on compression, optimized for speed, simplicity, and low memory consumption. Slower than bzip2 is... no.)

For a while now I've been considering how to parallelize compressors and decompressors. I don't want to introduce thread infrastructure into toybox, but I could fork with a shared memory region and pipes and probably make it work. (Blocking read/write on a pipe for synchronization and task dispatching, then a shared memory scoreboard for the bulk of the work.)

In the case of gzip, I could chop the input data into maybe 256k chunks (with a dictionary reset between each one), and then have each child process save its compressed output to a local memory buffer until it's ready to write the data to the output filehandle (they can all have the same output filehandle as long as they coordinate and take turns properly).
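(You can fake the general idea with existing shell plumbing, relying on the fact that concatenated gzip streams decompress to the concatenated data. A sketch, not the planned implementation:

$ split -b 262144 bigfile chunk.
$ for i in chunk.*; do gzip "$i" & done; wait
$ cat chunk.*.gz > bigfile.gz

The real version would feed chunks through a fixed pool of children instead of forking one per chunk, but it's the same divide-compress-reassemble shape.)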

However, first I'd like to see if I can just get the darn thing _faster_ in the single processor version, because the busybox implementation is _sad_. (Aboriginal is only still using it because I haven't written the toybox one yet. I should do that. After the cleanup/promotion of ps and mdev, which is after I cut a release with what's already in the tree, which is after I get all the targets built with it.)


February 15, 2015

Did a fairly extensive pass to try to fix up distcc's reliability, tracing through distcc to see why a simple gcc call on the command line was run locally instead of distributed. And I found the problem, in distcc's arg.c line 255: if (!seen_opt_c && !seen_opt_s) return EXIT_DISTCC_FAILED;

Meaning I have to teach ccwrap to split single compile-and-link gcc command lines into two separate calls, because distcc itself doesn't know how to do it. (At which point I might as well just distribute the work myself...)
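I.e. when ccwrap sees something like:

$ cc -o hello hello.c

it would instead run:

$ cc -c -o hello.o hello.c
$ cc -o hello hello.o

so the compile (the expensive part) has the -c that distcc's check wants to see and can get farmed out, and the link happens locally.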


February 13, 2015

Grrr.

The new build layout breaks halfway through the linux from scratch build, and the reason is that the wrapper is not compatible with relocating the toolchain via symlinks.

The wrapper does a realpath() on argv[0] to find out where the binary actually lives, and then the lib and include directories are relative to that (basically ../lib and ../include, it assumes it's in a "bin" directory).

I need to do that not just because it's the abstract "right thing", but because I actually use it: aboriginal's host-tools.sh step symlinks the host toolchain binaries it needs into build/host (so I can run with a restricted $PATH that won't find things like python on the host and thus confuse distcc's ./configure stage and so on). The toolchain still needs to figure out where it _actually_ lives so it can find its headers and libraries.

But in the new "spliced initramfs" layout, the toolchain is mounted at /usr/hda and then symlinked into the host system. So /usr/hda/usr/bin/cc is the real compiler, which gets symlinked to /usr/bin. The wrapper is treating /usr/hda/usr/include as the "real" include directory, but the package installs are adding headers to /usr/include... which isn't where the compiler is looking for them. I created an environment variable I can use to relocate the toolchain, but I'd prefer if it could detect it from the filesystem layout. So how to signal that...

I was thinking it could stop at a directory that also contained "rawcc", but there's two problems with that. 1) libc's realpath() doesn't give a partial resolution for intermediate paths, 2) I moved rawcc into the compiler bin directory where the real ld and such live. (You thought the binaries in the $PATH were the actual binaries the compiler runs rather than gratuitous wrappers? This is gcc we're talking about, maintained by the FSF: it's unnecessary complexity all the way down.) So instead of /usr/hda/usr/bin having rawcc symlinked into /usr/bin, my toolchain has it in usr/$ARCH/bin/rawcc (which corresponds to /usr/lib/gcc/x86_64-linux-gnu/4.6/cc1 on xubuntu and yes this INSANE tendency to stick executables in "lib" directories needs to die in a fire. "/opt/local" indeed...)

I guess what I need to do is traverse the symlinks myself, find a directory where ../lib/libc.so exists relative to that basename(), and then do a realpath() on the _directory_.
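Something like this, in shell for illustration (the real version goes in the wrapper's C code, and leans on the ../lib/libc.so landmark the wrapper already expects next to the real bin directory):

X=$(which cc)
while true
do
  DIR=$(dirname "$X")
  [ -e "$DIR/../lib/libc.so" ] && break  # found the real toolchain directory
  [ -L "$X" ] || break                   # ran out of symlinks without finding it
  LINK=$(readlink "$X")
  case "$LINK" in
    /*) X="$LINK" ;;                     # absolute symlink target
    *)  X="$DIR/$LINK" ;;                # relative to the link's directory
  esac
done
realpath "$DIR"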


February 12, 2015

Excellent article describing how recessions work, but it raises a question: Why would anyone save at 0% interest? Where's this "excess of desired savings" coming from, why _now_?

The answer is that paying off debt is a form of saving. (It's often one of the best investments you can make, it gives you guaranteed tax free returns at higher interest rates than you get anywhere else.) Having a zero net worth is a step up for a lot of people, burdened by student loans and credit card debt and an underwater mortgage...

But borrowing creates money, because money is a promise and borrowing makes new promises. When you swipe your credit card, the bank accounts the money was borrowed from still show the same number of dollars available in them, and the people you bought things from keep the money you borrowed and spent. That money now exists twice, because your promise to repay the credit card debt is treated as an asset on your bank's books. Money is _literally_ a promise, new money comes from people making new promises, and credit cards allow banks to turn individual civilian promises into temporary money.

For the same reason, paying off debt destroys money, by canceling the magnifying effect of debt. When the debt is repaid, that money no longer exists in multiple places, so there is now less money in circulation. The extra temporary money created by securitizing your promise expires when the promise is fulfilled and the loan is repaid. Result: there are fewer effective dollars in circulation, which is a tiny contraction of the money supply.

But debt is not just magnifying existing money: this is where _all_ money comes from. It's promises all the way down, and _nothing_else_. This is the part that freaks out libertarians, who use every scrap of political power they can buy to forbid the government from ever printing new money, so they can pretend the existing money is special and perpetual and was immaculately created by god on the seventh day or something.

Here's what really happens.

A lot of government "borrowing" is actually printing money while pretending not to. A "bond" is a promise to pay a specific amount of money at a specific future date (say $100 in 30 years), which is then sold today for a lower value than the future payback (so the "interest rate" is the difference between the future payoff and the current price, expressed as an annual percentage change). The federal Department of the Treasury regularly issues "treasury bonds", which it auctions off to the highest bidder. (Again, the auction sale price determines the interest rate: divide the amount the bond pays at maturity by the amount it auctioned for and annualize it, and that's the interest rate the bond yields to the buyer.)
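(Worked example: a bond that pays $100 one year from now and auctions for $98 yields a bit over 2%, since 100/98 is about 1.02. If bidders push the price up to $99, the yield drops to about 1%. Price and interest rate always move in opposite directions.)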

The trick is that the Federal Reserve (the united states' national bank) can buy treasury bonds with brand new money that didn't exist before the purchase. When buying completely safe assets (such as debt issued by people who can, if all else fails, literally print money to pay it back), the federal reserve is allowed to borrow money from itself to do so. The Federal Reserve's computer more or less has a special bank account that can go infinitely negative, and they transfer money out of it to buy treasury bonds, using the bonds as collateral on the "loan".

The Federal Reserve doesn't need congressional authorization to create this new money because "cash" and "treasury bonds" are considered equivalently safe assets (issued by the same people, even), so it's just swapping one cash-equivalent asset for another, a mere bookkeeping thing of no concern to anyone but accountants. At the other end the Treasury Department is auctioning bonds all the time (including auctioning new bonds to pay for the old ones maturing), but these bonds are all made available for public auction where investors other than the Federal Reserve can bid for them, so in _theory_ the federal debt could be entirely funded by private investors, and thus the libertarians can ignore the fact it doesn't actually work that way.

This is why the federal reserve can control interest rates. When it's buying the vast majority of treasury bonds at each auction, the price it bids for them determines the interest rate earned on them by everybody else. (If you bid less than the fed you don't get any bonds. If you bid much more than the fed you're a chump losing money, and anyway your entire multi-billion dollar fortune is a drop in the bucket in this _one_ auction.)

So a _giant_ portion of the federal debt is money the government owes to itself. (Not just the federal reserve, but the social security trust fund, which is its own fun little fiction: when social security was created people retiring right then got benefits without ever paying into the system. Current taxpayers paid for retirees, and that's still how it works today.)

This debt created money, and the expanding national debt expanded the money supply, not just so the US economy could expand but so foreign countries can use the US dollar as their reserve currency (piling up dollars instead of gold and using them as ballast to stabilize their currency exchange rates).

The hilarious part is that the federal reserve makes a "profit" due to the interest paid on the treasury bonds. When the bonds mature and get paid back, the Fed gets more money from Treasury than it paid them to buy the things. What does the Fed do with this profit? Gives it back to the Treasury.

No really, that's literally what happens: the federal reserve's profits are given to the government and entered into the government's balance sheet as a revenue source. People are _proud_ of this, even though it's just money going in a circle. The treasury pays interest to the federal reserve which gives it right back, and it's just as good as taxes!

(Half the point of taxes is to keep inflation down by draining extra money out of the system after the government spends money the federal reserve and treasury have spun up out of promises to pay each other back. It's a bit like the two keys to launch a missile thing: they have to cooperate, because printing presses were too obvious. There are still printing presses, but you have to buy cash from them with money in bank accounts. _New_ money is created in the bank accounts by borrowing previously nonexistent dollars from the federal reserve in exchange for previously nonexistent treasury bonds. Welcome to the nineteenth century.)

Clueless nutballs like Ayn Rand Paul who have no idea how the system actually _works_ constantly attack it because they are incensed at the idea that the money they worship is a social construct rather than a real tangible thing, so they attack the machinery that makes it work to prove that everything will still work without the machinery. (Just like if you stop paying the water bill you no longer get thirsty. Well how do you know until you've tried? As was established in Red Dwarf, "Oxygen is for losers".)

But as I said years ago, money is just debt with a good makeup artist.

So when people respond to us being in a recession (because everybody's paying down debt so nobody has any money to buy stuff with) by trying to cut federal spending and balance the federal budget and pay down the _national_ debt at the same time...

They are IDIOTS, who should not be allowed to operate heavy machinery they clearly do not understand _anything_ about. The government _can't_ run out of money. It can cause inflation if it overdoes things, but right now we've been stuck with the _opposite_ problem for almost eight years. We could do with some inflation. (If you take on a 30 year mortgage expecting 3% annual inflation and get 1%, you wind up paying off twice as much money as you thought you would over the life of the loan. Inflation benefits debtors. That's why creditors hate it so much. They always go on about retirees, but it's billionaires doing the lobbying to screw over people with mortgages.)


February 11, 2015

Oh wow, somebody donated to my patreon.

Sometime last year I claimed my name on Patreon. (My last name is nearly unique: during World War I a german immigrant named "Landecker" decided that immigrating to the US with a german sounding name wasn't a good idea, so he made something up, and my family has been the proud bearer of this tradition ever since. All what, three of us? My father's sister Barbara changed her name when she married, as did my sister Kris, so there's my father, my brother, and myself. Oh, and my father remarried. Four people with the name, of which I'm the only programmer.)

Of course nothing's entirely unique on google, it shows up as a typo in a number of places, one of my brother's friends wrote fanfic using the name for a character, and some random bodybuilder decided to use my last name as the first name of his stage name (so if you do a google image search for it you mostly get him), but eh. Unique enough for most purposes, not a lot of competition for it as login handles, but still worth grabbing if you have any plans for someday caring about the service.

Anyway, I filled out a creator profile on Patreon and did some goals called "Do the thing!" where in exchange for the money I promised to appreciate receiving it, and then largely ignored the thing. I didn't even bother to mention it here or on my project websites or mailing lists. (Once upon a time I proposed crowdfunding on the toybox list. Literally nobody replied.) It's not even the patreon account my household sponsors other patreons through (that would be Fade's patreon, through which I send like a dollar a month each to a half-dozen different webcomic artists).

Over the past year or so several companies have contacted me to ask if I had time to do toybox or aboriginal linux work for them, and I mentioned the patreon each time and they went "we're not really set up to do that". I guess it's not surprising somebody eventually took me up on it, but still... Cool.

(They're strongly encouraging me to work on mdev next. Ok then...)


February 2, 2015

I've been wrestling with an Aboriginal Linux design change for a couple months now, and it's fiddly. The problem is the linux-kernel build.

In the old design, the top level wrapper build.sh calls:
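  download.sh
  host-tools.sh
  simple-cross-compiler.sh
  simple-root-filesystem.sh
  native-compiler.sh
  root-filesystem.sh (splicing the previous two together)
  linux-kernel.sh
  root-image.sh
  system-image.sh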

(This is slightly simplified, ignoring the optional second stage cross compiler, the ability to disable the native compiler, and so on.)

The idea behind the new design is to move simple-root-filesystem into initramfs. Then the native-compiler is packaged up into a squashfs on /dev/hda and gets symlinked into the initramfs at runtime via "cp -rs /dev/hda/. /".

This means the simple-root-filesystem output gets packaged into a cpio image, the native-compiler.sh output gets packaged into a squashfs, and the old root-filesystem.sh script that used to splice the two together at compile time goes away (they're now combined at runtime).

So the new design is:
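  download.sh
  host-tools.sh
  simple-cross-compiler.sh
  root-filesystem.sh
  native-compiler.sh
  system-image.sh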

Yes, I could have made the splice part cp -s the files into /usr instead of /, and thus not had to modify native-compiler at all, but that's less generic. In theory you can splice arbitrary extra complexity into the initramfs from the hda mount, no reason _not_ to support that. (There's a "directory vs symlink" conflict if the new filesystem has an /etc directory: cp tries to mkdir /etc and complains that something that isn't a directory already exists there. Of course it's a symlink _to_ a directory, so if it just continued everything would work. I should check what the host cp does, reread what the standard says, and figure out what I want toybox cp to do here. But for the moment: the /etc symlink points to /usr/etc, so just put the files there for now and it works...)
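The conflict is easy to reproduce by hand:

$ mkdir real && ln -s real alias
$ mkdir alias
mkdir: cannot create directory 'alias': File exists

The mkdir fails with EEXIST even though the thing in the way is a symlink pointing at a perfectly good directory.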

So root-filesystem.sh, root-image.sh, and linux-kernel.sh got deleted, simple-root-filesystem became root-filesystem, native-compiler.sh got its output moved into a "usr" subdirectory, and system-image once again builds the kernel (which is really slow, but conceptually simple).

The packaging is completely different. The old root-filesystem.sh script goes away, because the simple-root-filesystem and native-compiler output get combined at runtime instead of at compile time. (This means the more/chroot-setup.sh script also has to know how to combine them, but since it's just a cp -a variant that's not a big deal anymore.)

The old root-image.sh and linux-kernel.sh stages used to be broken out to avoid extra work on rebuilds, but I folded them back in because the extra stages made the design annoying to describe. It makes iterative debug builds take longer, but I can rebuild individual packages outside the build system if I need to fiddle with something many times. (I'm almost always too lazy to bother...)

A lot of optional configuration tweaks the old build supported go away too: ROOT_NODIRS was a layout based on linux from scratch chapter 5, but lfs-bootstrap didn't use it. NO_NATIVE_COMPILER let the build do just the simple-root-filesystem, now you can select that at runtime.

On the whole, a big conceptual cleanup. But a real MESS to explain (mostly because of what's going away), and a lot of surgery to implement.


February 1, 2015

Happy birthday to me...

Didn't really do anything for it this year. (Last year was 42, that's an important one. 43 means you survived the answer. Didn't ask for what I really wanted either year, because I want other people to be happy more.)


January 30, 2015

Work ate this week dealing with kernel stuff (adding icache flushing support to a nommu system), and now I'm back poking at toybox trying to remember where I left off. According to (hg diff | grep +++ | wc) I've got differences in 20 files, and that's _after_ moving some of the longstanding stuff like grep -ABC or the half-finished dd rewrite (the bits that broke a "defconfig" build for me) to patches...

But I'd like to ship both sed and aboriginal releases this weekend, and now that sed is in, the next aboriginal linux todo item is expr. And expr is weird. Testing the host version:

$ expr +1
+1
$ expr -1
-1
$ expr 1-
1-
$ expr 1+
1+
$ expr 1 +
expr: syntax error
$ expr + 1
1
$ expr - 1
expr: syntax error
$ expr 1 -
expr: syntax error

So now I'm staring at the posix spec to try to figure out what portion of this nonsense is required by the spec, and what portion is implementation weirdness. (I think +1 is being treated as a string and -1 as an integer, but I have no idea why "+ 1" with nothing before it is allowed and "- 1" isn't. Maybe the first becomes "" + "1", but there's no equivalent minus behavior for strings? Maybe?)


January 27, 2015

One of my four talk proposals got accepted at CELF. Unsurprisingly, they went with "What's new in toybox". (Not the rise and fall of copyleft.)

I'd link specifically to this year's page, but this is the Linux Foundation. They never archive old stuff, it's all about squeezing sponsorship money out of thing du jour and moving on to the next revenue garnering opportunity while history vanishes. Sigh. Oh well. At least the free electrons people record and archive stuff.


January 26, 2015

Okaaaaay....

The ancient and decrepit version of Bash I've been using in Aboriginal linux, 2.05b, doesn't understand "set -o pipefail". It doesn't have the "jobs" command. And it doesn't build toybox.

I was lulled into a false sense of complacency by the fact that aboriginal uses an "airlock step" where it populates build/host with busybox and so on, and then restricts $PATH to point to just that directory for the rest of the build. So the system should rebuild under itself, since it initially built with the same set of tools.

The exception to this is stuff called at an absolute path, namely any #!/script/interpreter because the kernel doesn't search $PATH when it runs those so they need an absolute path. (The dynamic library loader for shared libraries works the same way.)

I think this old version of bash _should_ have pipefail and jobs, but apparently the way I'm building it they're not switching on in the config. I don't know why.

Of course I tried a quick fix of switching to #!/bin/ash to see if the various rumors I've been hearing about busybox's shell being upgraded actually meant something, and got 'scripts/make.sh: line 79: syntax error: unexpected "("' which is ash not understanding <(command) redirects. Of course.
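(For those playing along at home, <(command) runs the command with its stdout attached to a /dev/fd entry and substitutes that path on the command line, so you can feed command output to things expecting a filename, ala:

$ diff <(sort file1) <(sort file2)

with file1 and file2 standing in for whatever you're actually comparing.)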

I may have to do toysh sooner than I expected. This is not cool.

(Yes, I could upgrade to the last GPLv2 release of bash, but that's not the point. I plan to replace it eventually, upgrading it wouldn't be a step forward.)


January 22, 2015

Working on switching Aboriginal so simple-root-filesystem lives in initramfs unconditionally. Have native-compiler.sqf live on /dev/hda and splice them together at runtime instead of in root-filesystem.sh. Use initmpfs for initramfs, and have a config knob for whether the initramfs lives in vmlinux or in a separate cpio (SYSIMAGE_TYPE=rootfs or cpio). This probably means that simple-root-filesystem needs to be unconditionally statically linked, otherwise the "delete old /lib contents and replace with new lib" gets tricky. (No, you don't want to bind mount it because the squashfs is read-only so you can't add more, you want symlinks from writable lib into the squashfs.)

All this means run-emulator.sh just gives you a shell prompt, without native toolchain, so move the qemu -hda argument to dev-environment.sh.

While I'm making stuff unconditional: drop NO_ROOTDIRS, it's fiddly and unnecessary. (The idea was to create an LFS /tools style layout, but lfs-bootstrap.hdc doesn't use it.)

Leftover issue: more/chroot-splice.sh needs a combined filesystem and root-filesystem.sh isn't doing it anymore...


January 16, 2015

By the way, these are the busybox calls left in the basic Aboriginal Linux build:

    2 busybox gzip
    4 busybox dd
   11 busybox bzip2
   28 busybox tar
  121 busybox diff
  215 busybox awk
  275 busybox sh
 1623 busybox tr
 2375 busybox expr
21692 busybox sed

And I'm almost done with toybox sed.

The development category has more commands than that, the Linux From Scratch build and general command line stuff switches on another 20 commands (bunzip2 fdisk ftpd ftpget ftpput gunzip less man pgrep ping pkill ps route sha512sum test unxz vi wget xzcat zcat) but none of that is actually used by aboriginal linux itself. So, approaching a milestone...


January 15, 2015

Linux Weekly News covered Toybox's addition to Android. (Ok, I emailed them a poke and a couple links, but they decided it was newsworthy and tracked down several more links than I sent them.)

Meanwhile it keeps showing up in contexts that surprise me, such as openembedded.

Heh. Grinding away at the todo list...


January 11, 2015

Finished cleaning up printf.c, back to the sed bugs. Felix Janda pointed out that posix allows you to "split a line" by escaping an end of line in the s/// replacement text. So this _is_ probably local to the 's' command the way the other one was local to 'a', and I don't need to redo the whole input path.

Which is good, because redoing the input path to generically handle this ran into the problem that it _is_ context-specific. If "a\" and just "a" could no longer be distinguished because input processing had already removed the escape, that conflicts with how the gnu/dammit extensions for single line a are supposed to behave. Or that "echo hello | sed -e 'w abc\' -e 'p'" is apparently supposed to write a file ending in a backslash, and then print an extra copy of the line.

(This is all fiddly. You wind up going down ratholes wondering how "sed -e "$(printf "a\\\n")"" should behave, and decide the backslash is _not_ the last thing on the line because the input blocking put the newline into the line, and in that context it's as if it was read in by N and becomes significant... I think?)


January 8, 2015

So my big laptop (now dubbed "halfbrick") is reinstalled with 14.04, which has all sorts of breakage (the screensaver disable checkboxes still don't work coming up on a year after release, you have to track down and delete the binary), but I used that at Pace and at least found out how to kill the stupid Windows-style menus with fire and so on.

Last night, I started it downloading the Android Open Source Project.

And downloading.

And downloading.

This morning I found out "repo sync" had given up after downloading 13 gigabytes, and ran it again. It's resumed downloading.

And downloading...


January 7, 2015

Now that toybox is merged into android, I really need to set up an android build environment to test it in that context (built against bionic, etc).

The software on my system76 laptop has been stuck on Ubuntu 13.04 since the upgrade servers went away. (I thought I could upgrade versions after that point, but apparently not.) This has prevented me from poking at the Android Open Source Project, because building that needs packages I haven't got installed (can't install more without the upgrade servers), and my poor little netbook hasn't got the disk space, let alone the CPU or memory, to build it in less than a week.

I've held off upgrading because it's also my email machine, but finally got to the point where I need to deal with this. (Meant to over the holidays, didn't make it that far down my todo list.)

The fiddly bit was clearing enough space off my USB backup disk to store an extra half-terabyte of data. (Yes, I filled up a 1.5 terabyte disk.) Then I left it saving overnight, and now it's installing.

The ubuntu "usb-cd-creator" is incredibly brittle, by the way. I have a 10 gigabyte "hda.img" in the ~/Downloads directory, and it finds that when it launches and lists that as the default image it's going to put on the USB key (but does _not_ find the xubuntu-14.04.iso file), insists that it won't fit, and doesn't clear this "will not fit" status even if I point it at the ISO and select that instead. So, delete the hda.img so it won't find it, and then I hit the fact that the "format this disk" button prompts you for a password and includes the time you spend typing the password in the timeout for the "format failed" pop-up window. I.E. the pop-up will pop up WHILE YOU ARE TYPING THE PASSWORD.

This is not a full list of the bugs I hit in that thing, just the two most memorably stupid.


January 6, 2015

The printf work I've done turns out to have broken all sorts of stuff, because the regression tests I was running were invalid: printf is a bash builtin! Which means the tests/printf.test submitted to toybox last year is not actually testing toybox printf: it doesn't matter what printf is in the $PATH, the builtin gets called first.

(Is there a way to disable unnecessary bash builtins? Other than calling the binary by path each time...)
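(Turns out yes: bash's "enable" builtin can switch its builtins off for the current shell:

$ type printf
printf is a shell builtin
$ enable -n printf
$ type printf
printf is /usr/bin/printf

Which is presumably what the test suite needs to do before the results mean anything.)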

I keep hitting funky bugs in gnu commands while testing their behavior to see what toybox should do. The most recent example:

$ printf "abc%bdef\n" "ghi%sjkl\n" "hello\n"
abcghi%sjkl
def
abchello
def

What I was trying to test was "does the %b extension, which isn't in posix, interpret %escapes as well as \escapes?" And the answer seems to be: sort of. Only, badly.

I'm not entirely sure what this bug _is_. But it's not in the printf the man page is about, it's in the printf you get from "man bash" and then forward slash searching for printf. :)

(Well, ok, /usr/bin/printf produces the same output. But it probably shouldn't, and I don't think I'm implementing that strangeness in toybox.)


January 4, 2015

Finally fixed that sed bug I was head scratching over for so long: I need to parse escapes inside square brackets, ala [^ \t], because the regex engine isn't doing it for me. (It's treating it as literal \, literal t.)

Now on to the NEXT sed bug, which is that you can have line continuations in the regex part of a s/// command. (Not so much "bug" as "why would anyone do that? That's supported?")

In theory, this means that instead of doing line continuations for specific commands, I should back up and have a generic "if this line ends with \, read the next line in". (Except it's actually if this line ends with an odd number of \ because \\ is a literal \.)

The problem is, the "a" command is crazy. Specifically, here are some behavioral differences to make you go "huh":

$ echo hello | sed -e a -e boom
sed: -e expression #1, char 1: expected \ after `a', `c' or `i'
$ echo hello | sed -e "$(printf 'a\nboom')"
sed: can't find label for jump to `oom'

In the first instance, merely providing a second line doesn't allow the 'a' command to grab it, the lines need to be connected and fed in as a single argument with an embedded newline. But in the second instance, when we _do_ that, the gnu/dammit implementation decides that the a command is appending a blank line (gnu extension: you can provide data on the same line). (Busybox treats both cases like the second one.)

I suppose the trick is distinguishing 'echo hello | sed -e "a hello" -e "p"' from 'echo hello | sed -e "a hello\" -e "p"'. In the first case, the p is a print command. In the second, it's a multiline continuation of a.

And the thing is I can't do it entirely by block aggregation, because 'echo hello | sed -e "$(printf "a hello\np")"' is a potential input. (The inside of $() has its own quoting context, so the quotes around the printf don't end the quotes around the $(). Yeah, non-obvious. One more thing to get right for toysh. The _fun_ part is since 'blah "$(printf 'boom')"' isn't actually _evaluating_ the $() during the parse ($STUFF is evaluated in "context" but not in 'context'), the single quotes around boom _would_ end and restart the exterior single quote context, meaning both the single quotes would drop out of the printf argument and the stuff between them wouldn't be quoted at all if you were assigning that string to an environment variable or passing it as an argument or some such. Quoting: tricksy!)

Anyway, what it looks like I have to do is retain the trailing \ at the end of the line. I have to parse it to see if I need to read/append more data, but then I leave it there so later callers can parse it _again_ and distinguish multiline continuations.

Sigh. It's nice when code can get _simpler_ after a round or two of careful analysis. So far, this isn't one of those times, but maybe something will crop up during implementation...


January 1, 2015

So the kernel developers added perl to the x86 build again, and of course I patched it back out again. Amazingly, and despite this being an x86-only problem, it _wasn't_ Peter Anvin this time. It was somebody I'd never heard of on the other side of the planet, and it went in through Thomas Gleixner who should know better.

The fact my patch replaces 39 lines of perl with 4 lines of shell script (and since the whole shell script fits in the makefile in place of the call to the perl script, it only adds _2_ lines to the makefile) is par for the course. I could clearly push this upstream as another build simplification.

But I haven't done so yet, because I really don't want to get linux-kernel on me anymore. It's no fun. I'm fighting the same battles over and over.

There are a bunch of patches I should write extending initmpfs. It should parse the existing "rootflags=" argument (in Documentation/kernel-parameters.txt) because size=80% is interesting to set (the default is 50%, the "of available memory" part is implicit). I should push a patch to the docs so the various people going "I specified a root= and didn't get rootfs as tmpfs" (because that explicitly tells it not to do that) have somewhere I can point them other than the patch that added that, to go "yeah, you don't understand what root= means". I should push a patch so CONFIG_DEVTMPFS_MOUNT works for initmpfs. (The patch is trivial: grep -w dev init/noinitramfs.c shows a sys_mkdir("/dev") and sys_mknod("/dev/console"), and do_mounts_initrd.c has a sys_mount() example too. This is like 20 minutes to do and test.)

But... It's zero fun to deal with those guys anymore. It's just not. The Linux Foundation has succeeded in driving away hobbyists and making Linux development entirely corporate. The "publish or perish" part of open source is still there, just like in academia. But I'm not hugely interested in navigating academic political bureaucracy for a tenure-track position, either.

Sigh. I wrote a series of articles about this a decade ago. The hobbyists move on, handing off to the 9 to 5 employees, who hand off to the bureaucrats. I'm not _surprised_. I'm just... I need to find a new frontier that doesn't involve filling out certification forms and collecting signatures to navigate a process.


Back to 2014