Rob's Blog


July 17, 2018

Alright, last day of this. But I'm almost caught up! (Who knew it would be hard to compress over 10 years of my life into a concise narrative?)

When I returned to IBM in the second half of 1998, I sprayed the whole place down with Linux.

What happened was, I wound up back on the 3rd floor of building 903, 4 doors down from my old office... and bored out of my skull. I was there to provide support for big OS/2 customers like Ford's assembly lines, in _case_ they had a problem. There was no new development planned on OS/2, and the existing OS/2 department was largely trying to find other jobs in the company like I'd done with JavaOS a year earlier. (You may notice a theme here of me being 6 months to a year ahead of my coworkers. They were just then seriously looking to leave.)

I'd left IBM in late 1997 because I was all excited about Java and couldn't get IBM to let me do Java, and during my absence IBM had taken on Java as a religion and wanted to do Java everywhere... but I was rapidly cooling on Java because Java 1.2 came out in December 1998 and it redid the GUI based on a horrible model/view/controller monstrosity called "swing" that was a _bad_idea_. Java 1.1 had the lightweight AWT that was elegant; there were some holes (like the lack of truncate() I'd pointed out) but 1.2 turned the GUI into a bloated nightmare and introduced "deprecation", meaning they didn't plan to be serious about backwards compatibility like C. For me Java peaked somewhere between 1.1 and 1.2, I did _not_ like the direction the language was going in, and Sun's refusal to provide a JDK for Linux had revealed that they didn't want to destroy microsoft's monopoly, they wanted to capture it intact. Nobody wanted to trade microsoft's leash for Sun's noose.

Sun was very clearly threatened by Linux, but the Linux community had written Sun off as too dumb to live after this post from a Sun engineer, and the too dumb to live part was in reply to a long technical explanation of how Linux was better than Solaris, which also explained why Sun had invented threading. Threading happened because Solaris' process switching was utterly terrible. Linux's process switching was faster than Sun's _thread_ switching, and Java's refusal to provide things like poll() or select(), instead making you spawn threads just so they could block waiting for I/O, was _silly_. (Linux would shortly have its own performance issue trying to replicate Sun's model, but they also rapidly _fixed_ it. "It never occurred to us to optimize for such a stupid use case. We've now done so. Next.") By the time Sun screwed over Blackdown, the Linux community as a whole had written Java off as a bad idea that might need legacy support the same way Cobol did, but nobody sane should write anything _new_ in it.

Linux famously grew 212% in 1998, that was all the Java developers switching over. Netscape had collected the "anything but microsoft" crowd under a single flag, and then poured them into Linux by elevating Linux to a Tier 1 platform and pointing to Linux as the model that convinced them to release the netscape source. You still needed an OS to run Java under, and the Java userbase switched en masse to Linux and patiently waited for proper Java support from Sun, learning native Linux development in the meantime... and then Sun's bad behavior convinced them to stop waiting and stick with native Linux development.

I was about 6 months ahead of this curve because I'd been keeping tabs on Linux since the SLS disks, and had already been looking to move _off_ of OS/2 with Linux as the obvious next step (because "it's not going away" was a unique value proposition among non-microsoft operating systems, at least before Steve Jobs returned to Apple). My main problem was that Linux had never been able to make XFree86 work with any graphics hardware I could actually obtain, and even installing the OS/2 port of XFree86 to tinker with it hadn't solved the problem for me. Linux was fine, XFree86 was terrible. And after OS/2, I wasn't going back to a text mode only OS.

So I bought some new hardware and made another attempt at putting together a Linux system. I didn't have a CD burner at Quest or at home, so I downloaded the 25 debian floppies and tried to install them, but my serial port was at a nonstandard IRQ and I didn't know I needed to use the "setserial" command to tell Linux (hard to pull up a man page when you don't know the name of the command)...

And here's where I asked that question on the list, which marked me finally getting a usable Linux system installed at home, albeit in a slightly less obvious way than the question implies. After posting, I read all the list traffic, including a message from somebody who had an easier time getting Red Hat installed, a reply to my post from someone who had the same problem but no solution, and then I hit this message replying to someone comparing Debian and Red Hat, where the Debian user said he was "really grateful for not having people like that trying Debian" and that "Debian is not the right distribution for them" and I went "Ah-ha, there's my problem! I shouldn't be using Debian! It's full of assholes like that guy. He says he's glad for everybody like me to go use Red Hat and leave his thing to die, I should take this advice and let network effects do their work!"

So I downloaded the Red Hat install disk and figured out how to get setserial to teach the kernel to use my modem (either that or it probed the sucker correctly; there was a kernel config entry for that way back when, which Debian didn't enable because it was "dangerous": it poked the card to generate an interrupt and assumed the next interrupt that arrived was the card's. In theory a spurious interrupt could come in first, but in reality it worked fine). And I was a happy Red Hat user for years after that. (Happy until the inexcusable introduction of "kgcc", anyway, after which I continued to use it via inertia through Red Hat 9. Then Fedora Core 1 dropped support for the processor my machine was still using, because they built the kernel "optimized" for a newer chip and it wouldn't boot on my machine. At which point I switched to Knoppix, and from there to Ubuntu. I refused to touch Debian without thick gloves between me and it until quite recently, all because of one asshole at a formative moment...)
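For anyone who never had to do this: the incantation looked roughly like the following. The device, port, and IRQ values here are illustrative, not the actual numbers from my machine back then.

```shell
# Hypothetical example: a modem on the third serial port at a
# nonstandard address/IRQ the kernel didn't autodetect.
# (Run as root; the specific numbers are made up.)
setserial /dev/ttyS2 port 0x3e8 irq 5 uart 16550A

# Print the settings back to confirm the kernel took them.
setserial -g /dev/ttyS2
```

Which is easy once you know the command exists, and (as noted above) nearly impossible to discover when you don't know its name.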

This poking at Debian and Red Hat was the context in which I returned to IBM in the second half of 1998, and sat in my office bored... so I tried to install a Linux partition on my workstation so I could poke at Linux at work as well as at home. Except that Red Hat's network install wouldn't work through the socks/proxy firewall, but the department at IBM had a CD burner and a stack of blank CDs in the supply closet and I remembered Debian's download directory had contained ISOs, so I downloaded the Debian ISO image and burned it to a CD and... found out IBM had given all the leftover unsellable Micro-channel bus PS/2 systems to employees for desktop machines, and Debian didn't support micro-channel yet. (Red Hat said it did, but Red Hat had no ISO downloads (to encourage CD sales) and couldn't install through IBM's firewall...)

So I had a useless Debian CD on my desk and mentioned it to a coworker in the usual office chit-chat, and the coworker said he had a PCI bus machine (because the micro-channel ones were old and creaky by that point, and being replaced), so I gave him the Debian CD (legacy AIX knowledge was common in the department so a PC unix was of interest), and then other co-workers wanted debian CDs so I burned more and passed them around the department...

So I got my bored co-workers excited about Linux, but I couldn't get an official IBM position on Linux. I cornered an executive in an elevator and asked him what IBM's plans for Linux were, and he said the lawyers were uncomfortable with the GPL and until they signed off on it, IBM wouldn't touch it. So I let my contract expire at the end of the 6 months, and went to work somewhere else.

The next year all those old OS/2 guys founded the IBM Linux Technology Center (and eventually got a bigger space under the cafeteria in the building next door), and when Lou Gerstner left he gave his successor Sam Palmisano a todo list starting with "spend a billion dollars per year on Linux for the next 5 years"...

The first time I left IBM, it was because I couldn't do Java. Then they got Java as a religion. The second time I left IBM, it was because I couldn't do Linux. Then they got Linux as a religion. I spent the first half of the 2000s thinking I should have stayed at IBM and waited for them to catch up with me...

A year and change later I attended the disaster that was Kansas City Linuxfest (which was _so_ bad several of us huddled around a table in the debris and talked about how to run a convention right, and I took notes, and a couple years later that became Penguicon). In the aftermath of KC LinuxFest there were boxes and boxes of the June 2000 issue of Linux Journal which they'd meant to give away as schwag at their booth, but there were maybe a tenth the expected attendees, so the magazines were piled on a pallet to ship back to Seattle, and I grabbed a couple boxes and drove them home and left them in the entryway of the IBM cafeteria with "Free: Take One" written on the flap. That was the "who's who in Linux" issue with interviews with the top 50(?) contributors, including lots of important people like Pauline Middelink (inventor of network address translation) who you don't hear from today, because driving brilliant pioneering women away from the boys club via endless harassment is how a lot of white dudes keep themselves employed. (They can't compete, so they harass...)

I've poked at Linux as a hobby ever since. I taught more night courses at ACC around this time ("intro to unix", passing on the Linux stuff I was learning, and "intro to operating systems", which was a survey covering mainframes and such). I really enjoyed teaching, but after a break when I looked into it again the paperwork requirements had grown beyond my interest, and teaching as adjunct faculty paid less than writing for the Motley Fool did. And I wrote about Linux several times for the Motley Fool (another "for fun" thing, although for a year or so it turned into a half-time telecommuting position that let me take time off from programming while my mother was sick). Writing for The Fool tapered off at the end of 2000 when the dot-com bust happened; that's its own story, ask me about that sometime if you're curious...

I'm not sure if "coming back to babysit Feature Install" was my first Bullshit Job, since I did in fact have the skills to do it and was ready to if the need had arisen. But if not, the next one on my resume, Trilogy, definitely was. Trilogy was more money than 27 year old me had _ever_ earned ($50/hour! About $75 adjusted for inflation.) doing a crazy Java thing which was _also_ IBM politics. Two divisions of IBM had fought over a project until management took it away and outsourced it to stop the fighting. Of course Trilogy had sold them a useless home-grown Java framework as part of the deal that I was nominally there to work on, but I don't think I ever added a line of code to it that mattered? The real job was coordinating work done by competing bits of IBM in different cities, which Trilogy was not in a position to do. (But we were at least considered impartial, and thus disdained by both sides.) So I spent all my time on phone calls getting bits of IBM to talk to other bits of IBM, and eventually figured out that what had screwed up the project was a specific manager in Dallas who had figured out how to advance his career by screwing up projects. He'd take a project still in the planning stage and go "why isn't this being implemented?" (Because the design's not done?) But he'd make noise and implementation would start with an unfinished design. Then he'd go "Why haven't we started testing?" (Because it's still being implemented?) And enough noise later, testing of the unfinished thing would start. The result was a nightmare, but "Mike gets things done!" This wasn't happening before he acted, and now it's happening! The fact it _shouldn't_ be happening and was heading for a cliff didn't become apparent until he was promoted away.

The project outsourced to Trilogy was some sort of billing system for IBM mainframe customers, which some IBMers confessed to me _had_ no fixed prices for anything because the sales process was entirely about figuring out what the customer would pay and charging them every penny of it. My main recreational activity on the job was fishing through the bug database for a request from IBM to do a thing and a request from IBM to undo that thing (or do the opposite), pairing them up, and bringing them up in the daily conference calls. The project got so bad that the IBM WorldWide Integration and Test facility in Australia signed off that it had "passed" (some random version of the constantly changing spec and some random version of the constantly changing code matched some random version of the constantly changing test profile, or at least they claimed it did) and closed out their budget and said "sorry, we're done, we can't work on this anymore". Of course there were pending design changes (of a fundamental "it should work _this_ way" type) still being entered into the defect tracker, but it was Brownian motion rather than progress. The test department washed their hands of it so no further progress could be made, and that was just one political crisis du jour. I was paid to be on conference calls with France and Böblingen, Germany; WWIT in Australia; and Poughkeepsie, New York. Note the three wildly different time zones? 6am call, 8pm call, or 3am call (Austin time) depending on who had to be on the phone. I would sometimes have four of them in a row (all three for the day and another in the morning), and was encouraged by my boss to sleep under my desk. It was one of those dot-com companies with a free food room, and not cheap stuff either: roast beef and powerbars and so on.

At the end of the 6 months I turned down a 50% raise ($75/hour! In 1999 money! For a 27 year old!) to continue, because I was developing stress-related health symptoms. (And my mother was dying of cancer, but that's another story...)

Trilogy taught me to pursue fun work rather than the most lucrative thing I could currently be doing. At my next job (Boxxech) I tried out a management role (and hated it), then took a huge pay cut to join a startup I found at the local Linux User's Group meeting, because I wanted to do Linux full-time. That was WebOffice, which I've already written about, and which led to Aboriginal Linux and mkroot and busybox and toybox. There, all caught up.

July 16, 2018

Day 3, still working towards my switch to Linux, we're up to about the start of 1997.

OS/2 4's two selling points when it shipped were Java and voice recognition (which didn't have an easter egg for "Tea, Earl Grey, Hot", and I'm still sad about that). OS/2 was crammed to the gills with Java, which was uniting the "anything but microsoft" crowd. (Back when I still got along with Eric Raymond I put this part of the history in The Art of Unix Programming, to which I added so much material he almost made me a co-author. It's a pity he ossified into a loon. What is it with old unix guys? When I first met Eric, he said back in the 1980s Richard Stallman was sane and he and Eric used to be friends, but Richard got progressively crazier as the years went on. Eric was worried he'd go crazy the same way as he got older, but instead he found a whole new axis of crazy to go down. I am sad. Anyway...)

I was intrigued by Java because my old "platform independent binary executable code" (PIBEC is not a useful acronym) project was my proposed graduate project if I'd gone to grad school instead of trying to pay down my student loans immediately. (It's also what got me into compiler development: I first read large chunks of the GCC source code trying to figure out how to add a new "backend" generating my bytecode. I'd also need to have two kinds of function pointers, one for native code and one for bytecode, but I never got that far. I also tracked down a copy of Small-C and read that when gcc circa 1993 proved intractable; my later participation in tinycc was probably inevitable.)

But I hadn't been able to go to Hursley (where IBM's JDK team worked) due to the Austin site consolidation, and Sun didn't get back to me in time before I'd committed to move to Austin, so at the end of 1996 when Warp 4 finally shipped (a year too late to matter) I finally did transfer to IBM's JavaOS team... as a tester. (There were no developer seats open, but I wanted in and I can test. And it meant I was doing Java. Now instead of working on the 3rd floor of building 903, I was on the 6th floor. Oooh.)

I thought JavaOS meant "port the JRE to DOS using expanded memory and green threads with an SVGA AWT". That was the obvious approach to me: boot from a floppy, run well in 1 meg of ram, it wouldn't do SMP but you can do upgraded versions later, NCSA telnet was providing an IPv4 stack for DOS back in 1986. This would handle all those kiosk use cases they were talking about with ease and could be scaled up in later releases.

But IBM was porting Sun's JavaOS to PowerPC, and what Sun did was take the Solaris kernel (there is no recovering from this point in the sentence, we're already doomed) and run the JDK as the init task on PID 1. Solaris had no device drivers for non-Sun hardware, so they allowed userspace device drivers to be written in Java, meaning the OS was calling out to Java to interact with the hardware. Of course they didn't have device drivers ready in Java either, for esoteric things like IDE hard drives, so they had to netboot via dhcp/bootp (which Intel later renamed PXE boot, but this was 1997 so Intel hadn't taken credit for it yet) to bring the system up, and ran from a 32 megabyte ramdisk, with another 32 megs for Solaris, for a total of 64 megabytes of memory _IN_1997_. This was an INSANE amount of memory at the time. OS/2 could run in 4, run comfortably in 8, and 16 megabytes was posh, and OS/2's biggest criticism was that it was a memory hog. It quickly became clear to me that this was another powerpc tangent due to IBM's hardware side dragging the software side away from anything any customer would ever want.

This is also the system where the animated screensaver paused noticeably every few seconds to run the garbage collector, which got me thinking about improvements. But at the time what this meant was on top of everything else, the desktop is not reliably interactive.

This time the crippled powerpc boondoggle IBM was doing got cross compiled from PowerPC AIX machines, so I was learning some new stuff, but I wasn't really getting a lot of Java coding in. (It was during this period I wrote a deflate implementation in Java, in the evenings on my home machine. If work wasn't going to teach me Java, I'd do it myself...) The JavaOS work wasn't as stressful as the OS/2 development had been, but it seemed like such a waste of time. And as a tester, I wasn't allowed to fix anything that was wrong with it. And they didn't _need_ testers to know that 64 megs of ram with no hardware drivers and a glitchy desktop was not a recipe for success. I worked on JavaOS for 6 months, and then I too looked outside IBM.

I answered a classified ad for a Java developer, nailed the phone interview, and got a job at Quest Multimedia writing 45 "interactive diagrams" for the CD in the back of a McGraw-Hill math textbook. I'd been at IBM long enough I didn't have to pay back my relocation money, so I took the plunge.

Working at my first start-up was fun, but still fairly exhausting. Jere Confrey (who is apparently at NC State University now) and her husband Alan Maloney were professors at UT who were good at navigating grant money bureaucracy, but were sick of 2/3 of it immediately vanishing when they ran it through their university (charging them exorbitant rent for office space and $50 per folding chair and so on; the university viewed such grant money as supporting the university more than any specific professor or project). So they set up their own little company to run the grant money through, and got their office space in a strip mall off of 183 and Anderson Mill (northwest of town).

They had a textbook contract that involved writing precalculus tutorial exercises as Java applets. Jere would come up with an idea, Alan would do a macintosh "hypercard stack" with a sort of storyboard, print it out on paper (usually 5-10 pages, mostly fake screenshots), and I'd implement it. They had a graphic artist to feed me backgrounds and gifs to animate (and to make the web page each applet would embed into), and the four of us were the entire company.

I worked there for 6 months, and am still proud of a lot of the work I did. I made an auto-resizing graph (figured out where the graph lines should be in the X and Y range you told it to display, trying 1, 2, 3, and 5 until it got between 7 and 10 lines) which you could feed a string equation to and it would plot it on the graph, and if the equation was an inequality it would fill the appropriate side of the graph (with a dotted or solid line depending on whether it was > or >=), and for one of them I even colored how many layers of overlapping inequality you had. And of course moving a point along the curve... Under the covers I wrote a function that would parse a string and perform the equation it said (with parentheses and correct priorities and everything, pushing and popping operator and operand stacks), and the way it plotted curves is I'd stick an X in there as a variable and then string.replaceAll() to substitute in the number and repeat that for each point I wanted to plot, and it was fast enough on a 486dx75 to do at least 4 or 5 frames per second even on the complicated ones. (The trick is never allocate objects after startup, which is expensive because it memsets their contents to zero, and also means the garbage collector never runs.)
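That gridline heuristic is simple enough to sketch. Here's a reconstruction in Python rather than the original Java; the step values (1, 2, 3, 5) and the 7-to-10 line target come from the description above, everything else (function names, the decade search) is filled in by me.

```python
import math

def pick_grid_step(lo, hi, min_lines=7, max_lines=10):
    # Try step sizes of 1, 2, 3, and 5 (scaled by powers of ten),
    # coarsest first, until between min_lines and max_lines gridlines
    # land inside [lo, hi].
    exp = math.ceil(math.log10(hi - lo))
    fallback = None
    for e in range(exp, exp - 6, -1):         # a few decades is plenty here
        for mult in (5, 3, 2, 1):
            step = mult * 10.0 ** (e - 1)
            count = math.floor(hi / step) - math.ceil(lo / step) + 1
            if min_lines <= count <= max_lines:
                return step
            if fallback is None and count > max_lines:
                fallback = step               # in case counts jump past the window
    return fallback

def gridlines(lo, hi):
    # Positions of the gridlines for an axis spanning [lo, hi].
    step = pick_grid_step(lo, hi)
    x = math.ceil(lo / step) * step           # first gridline at/after lo
    lines = []
    while x <= hi + 1e-9:
        lines.append(round(x, 10))
        x += step
    return lines
```

So `gridlines(0, 7)` picks a step of 1 and gives 8 lines, while a wider range like (0, 70) lands on a step of 10. The same "never allocate after startup" trick mentioned above wouldn't survive this Python translation, of course; in the Java 1.1 version you'd preallocate the arrays once.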

Another fun thing about working for professors is that when I decided I wanted to teach night courses at the local community college, and write for an online publication (the Motley Fool), they were supportive. (IBM made it clear that full-time employees were serfs and they owned ALL YOUR TIME, so anything you did outside of work belonged to them or was somehow stolen from them. Quest didn't care as long as I got the work done, and with two professors running the place "busy writing and teaching when not doing work for you" made them feel right at home, like I was a grad student.) I started by teaching an "Intro to Java" course at the ACC campus closest to work, because nobody knew much about it then and my year and change of poking at it, plus the day job doing it, made me an expert.

The desktop system Quest bought me took a while to set up. I wanted to run Linux on it, but "there's no JDK for Linux" was the #1 bug on Sun's "Java Developer's Connection". Its main page, where all the Java documentation lived, showed you the top 5 bugs, and if you logged in you could vote for which bug they should fix next. This bug had more votes than the next 4 bugs combined, and it stayed that way for 11 months... until they changed the page to not show any bugs. So my desktop had a Linux partition on it, and I played with Linux on my home system too, but XFree86 on my Western Digital graphics card was still all sparkly with vertical tearing, because it updated the screen while it was drawing rather than waiting for the vertical retrace interrupt. On the new SiS motherboard that Quest got me, the framebuffer driver had an endianness issue so every 4 bytes were reversed, and the software-drawn mouse cursor read the framebuffer data in one endianness and wrote it in the other, so moving the mouse cursor across the screen left a smear. And of course finding the XFree86 devs to talk to them was basically impossible, and nobody could get access to their source control without being an "approved committer", so I could never get any of it fixed unless I tracked it down myself and submitted a patch to the distribution maintainer. (Maybe they could get it upstream into the package? Not my problem at that point.)

So I put an OS/2 partition on the Quest machine, and did all my Java development on that. And then they'd test it on their MacOS 7 browser (both netscape and internet explorer for MacOS 7), and test it on a Windows 95 IE machine, which had _different_ bugs than IE for MacOS. I remember IE got nested lightweight component positions wrong: instead of traversing down each canvas and adding up the offset of each parent component to figure out where to place the upper left corner of this one, it would multiply your component's offset in this canvas by the number of parents it had. Took a while to figure that out because it was usually off the bottom right of the screen. Worked fine on everything else, but not on IE. This meant I couldn't use nested canvases but had to position every component manually in a single canvas, because microsoft. There were so many other bugs like that, Java really earned the "write once, debug everywhere" moniker.
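To make that bug concrete, here's a toy sketch (Python, nothing to do with actual AWT or IE internals; the dict-based "component" is my own stand-in) of the two calculations: the correct walk-up-the-parents sum versus the multiply-by-depth behavior described above.

```python
def absolute_position(component):
    # Correct approach: walk up the parent chain, summing each
    # component's offset within its own parent.
    x = y = 0
    node = component
    while node is not None:
        x += node["x"]
        y += node["y"]
        node = node["parent"]
    return x, y

def buggy_ie_position(component):
    # The bug as described: take the component's own offset in its
    # immediate canvas and multiply it by the number of parents.
    depth = 0
    node = component["parent"]
    while node is not None:
        depth += 1
        node = node["parent"]
    return component["x"] * depth, component["y"] * depth
```

With a button at (20, 10) inside a panel at (100, 50), the correct answer is (120, 60) but the buggy math gives (40, 20); make the component's own offset large relative to its parents' and the result sails off the screen instead.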

But I wanted very much to add Linux to the pile, because JavaOS clearly wasn't it. There was an open source thing called Jos people were playing with that I hoped might turn into a thing, but I'd been following it most of a year and it had already stalled by that point. (This is where I learned that talk begets talk and code begets code: they'd started with design discussions instead of a prototype. Their discussion quickly expanded to the point where no initial implementation could do the vast design justice, and there wasn't a canonical implementation for everybody to attach their work to or coordinate development via... A year in they had a page of desktop graphics and that was it. I used Jos as an example in my "Prototype and the Fan Club" talk.)

But it wasn't clear what Linux I should install. The SLS disks I'd tried years earlier were long gone. I remembered mention of "debian" and found that, but their page pointed me to the FSF, which had a "don't call it Linux, it's GNU GNU GNU we take credit for EVERYTHING ALL HAIL STALLMAN!" screed, which was historically inaccurate, but I didn't know better at the time and this led to a certain amount of embarrassment. And in any case: no JDK. I couldn't build and test the Java applets I was writing on Linux.

A couple months later (Feb 1998) Netscape announced it was releasing its source code and elevating Linux to a Tier 1 platform, and I went "oh good, everything should get fixed now" and expected Linux to get proper third party support from places like Sun. Instead Sun recoiled and hissed at the threat. Remind me to write up the "Sun Civil War" someday.

And I still couldn't manage to make a Java development workstation out of Linux at any point during my time at Quest, so I did all my Java work in OS/2, which I viewed as increasingly dead. (For example, hard drives were coming out that were too big for it to format, a la the ATA-1 128/137 gigabyte limit, with no fix in sight at the time...)

This is also where I got my first laptop: I mail-ordered a used IBM "butterfly keyboard" laptop and installed OS/2 on it to do Java development. I loved that machine. Sadly, when I moved out of Austin in 1999 I leaned the laptop bag against the side of my car while packing, forgot it was there, and ran over it backing out. But I replaced it with another laptop, and have used laptops instead of desktops as my primary development system ever since.

But mostly, while I was there, I wrote Java GUI programs. Lots and lots. I wrote a 4x4x4 3D tic-tac-toe game that played against you with varying difficulty levels (the trick to beating it on the highest difficulty level was that it would try to extend its longest line or block your longest line with the same weighting, but you could fork it by making structures you could complete along more than one axis...) I had to relearn _so_ much trigonometry to do the "point on a circle you drag around with the mouse" stuff that showed you the X and Y coordinates and the angle, and you could set any of them by highlighting and typing into the appropriate display field and it would move the others and the point for you... Plus you could set its rotation in radians per second and it would move it for you...
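That highest-difficulty heuristic (extend my longest live line, or block yours, with equal weight) sketches out to something like the following. This is my reconstruction in Python, not the original Java, and it's shown on an ordinary 3x3 board rather than the 4x4x4 version; the function and variable names are all mine.

```python
def best_move(lines, board, me, them):
    # lines: list of cell-index tuples, each a potential winning line
    # board: dict mapping cell index -> 'X', 'O', or None
    def line_count(line, player):
        # Length of player's run on this line, or -1 if the line is
        # dead (the other player already has a piece on it).
        other = them if player == me else me
        if any(board[c] == other for c in line):
            return -1
        return sum(1 for c in line if board[c] == player)

    best, best_score = None, -1
    for cell in board:
        if board[cell] is not None:
            continue
        score = 0
        for line in lines:
            if cell not in line:
                continue
            # Equal weight for extending our line and blocking theirs.
            score = max(score, line_count(line, me), line_count(line, them))
        if score > best_score:
            best, best_score = cell, score
    return best
```

Because it takes a max rather than summing threats, a fork (two lines threatened through different cells at once) looks no more urgent than a single threat, which is exactly the exploit described above.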

Anyway, I _loved_ that job. The money wasn't great (they'd matched my IBM salary, which was better than flipping burgers but not twice what flipping burgers paid; add in student loan debt and a car payment and there wasn't much left each month). But after about 6 months I got a sinus infection and decided I really _needed_ health insurance, and oddly enough IBM was advertising my old position doing Feature Install for OS/2, and I went "I am literally more qualified for that than anyone else on earth". And as a contractor they'd have to pay me by the hour (at a better rate than I'd been earning as an employee), and either they wouldn't work me 90 hours/week or they would PAY ME FOR ALL THOSE HOURS. Either way, I'd come out ahead.

And that's when I got serious about Linux.

July 15, 2018

Continuing from yesterday, and trying to work towards when I finally switched to Linux as my desktop system.

In 1995 I graduated and took a job at IBM, moving to Boca Raton to work on IBM's OS/2 port to the PowerPC. It was the place the PC had been invented, Lou Gerstner had brought the company back from the dead and was running it _very_ well, and I was using OS/2 as my desktop system which seemed to have an actual chance in the marketplace that I wanted to help along in any way I could.

This was a learning experience. I worked on the last 4 months of "OS/2 for the Power PC", and basically watched it die, then got moved to Austin and worked 90 hours a week for a year on something that was already too late.

OS/2 for the PowerPC made sense when they started the project, but not when I worked on it. The PowerPC was one of the late 80's explosion of RISC systems, because CISC was clearly on its last legs. I blathered about RISC vs CISC long ago, but the point is everybody knew a RISC system would unseat CISC and thus x86, they just didn't know which one. And then x86 redesigned itself to be RISC under the covers and took the wind out of the sails of mips, sparc, powerpc, alpha, and so on, because it could run the same software at the higher speeds. (Backwards compatibility means you carry along your existing customer base to the new iteration, and snowball to dominance via network effects. People thought a massive performance advantage would overcome that, but didn't realize Intel could capture that performance advantage while retaining binary compatibility by sticking an instruction translation pipeline on the front of its chip that re-wrote CISC instructions into RISC instructions on the fly, I.E. the Pentium.)

The Pentium was introduced in 1993, doing the under-the-covers RISC stuff. So by 1995 the PowerPC hardware IBM was porting OS/2 onto, which was supposed to outperform everything, ran slower than then-current x86 chips. The effort was clearly doomed, but IBM had spent years on it and were going to finish it rather than cancel it and have to explain the write-off to their investors. Unfortunately sucking away years of development from OS/2 at a fairly crucial time, to work on some cul-de-sac that had been politically important to IBM's hardware side but not important to any customers, essentially doomed OS/2 by giving Windows time to entrench itself. (Although what _really_ doomed OS/2 was that when they _did_ start to have decent retail uptake, they plugged it into the standard IBM tech support phone system that cost them $35 to field each tech support call on a box of software that retailed for $49.95. Two calls to the 1-800 number and they lost money. IBM upper management _did_ actively sabotage OS/2 starting around 1993, and that's why.)

Anyway, we got OS/2 for the powerpc running by the end of 1995. (Even though it couldn't natively compile itself, everything was cross compiled from x86 OS/2 via a special watcom cross compiler; I showed them EMX, the GCC port to OS/2 I'd been using as a hobbyist for years, but they weren't interested because it wasn't "professional".) OS/2 for the PowerPC "shipped to a shelf", meaning you could _technically_ order it but only by knowing its catalog number, which they never told anybody, because if anybody HAD bought it they'd have had to train and staff a tech support line which would be exorbitantly expensive and they were desperate not to.

As we were finishing up OS/2 for the PowerPC, IBM announced a site consolidation: the Boca Raton facility where the PC had been created was closing down and being sold (it eventually became a retirement community), and we could either leave IBM or move to Austin Texas. They tried REALLY hard to get everyone to move, taking us on what I called the "bribe trip" to see "Austin as you will never be able to afford seeing it again" (boats on the lake, ranches way out in the boonies, parties on the top floor of a skyscraper!), and paying large relocation bonuses (I used mine as the down payment on my first condo). And it _also_ meant we couldn't go anywhere _else_ within IBM, it was Austin or quit. (Which meant I couldn't go to the Hursley England site where all the Java development happened. I applied for a job at Sun Microsystems, but by the time they called back I'd already signed the "yes I will go to Austin" contract and was too young to stand up and break it. The contract said I had to give back the relocation money if I didn't stay a year; lots of my co-workers were locked in for 2 years, although they got more people than they expected to move and paid people to retire early on _top_ of the relocation bonus. My team lead Pete Rodriguez (who is ungooglable because there's like 30 of him) spent the last week at the company a year or so later staring at the ceiling and laughing every few minutes, he apparently got a _good_ deal...)

I arrived in Austin in February 1996 to work on OS/2 Warp 4.0, the x86 release we _should_ have all been working on at least 2 years earlier. There were all sorts of fundamentally political compromises in the code, such as the fact the main filesystem driver (HPFS, "High Performance Filesystem") was 16-bit 286 code and thus ran slowly. Why? Because the 32-bit version was full of Microsoft copyrights and cost them extra (royalties) to deploy. But because a 32 bit version already existed, they wouldn't spend money developing a _new_ 32 bit version that didn't have any microsoft copyrights in it and which could be part of the base OS rather than an extra expensive add-on nobody ever bought. (IBM had a mindset that code that cost money to build was worth that money, and thus removing code was WASTING MONEY. It was insane. We did our best to work around it.)

I'd worked on something called Feature Install on the PPC version (inheriting it from contractors whose contracts weren't renewed, and who literally used variable names like ldkopqvzc and ldkopgvzc and yes differing only by a q and a g in the middle of word salad is a real example). They'd basically programmed for job security and _dared_ IBM to fire them, and their bluff was called and I inherited the mess. And they wanted to port this to the x86 version. I was 23 and had been programming since age 10: I took a flamethrower to it, unintimidated. My team lead Pete did his best to shield me from management's notice.

On the one hand, Feature Install was a package management system, a brand new idea at the time (Linux distros have rpm and dpkg but other operating systems generally _didn't_, you extracted zip files and extracted new zip files over them when you got an update). Having a package management system is great!

On the other hand, Feature Install was part of their object oriented desktop code (the "workplace shell", based on IBM's System Object Model), and the original idea had been to subclass a file folder so that when you dragged and dropped it from removable media to the desktop, it would install the package! And then they hit the problem that most of what they wanted to install didn't fit on a single floppy, so had to be split up into multiple disks, and from then on the implementation was fighting against its original design idea. And now they wanted to use it to install the operating system, which meant we had to bring the desktop up before we could install the OS, and the desktop code was NOT designed to work that way...

It was a mess. I had my hands full making it work at _all_. I still remember staying late and coming in on weekends doing a massive cleanup/rewrite of some of the fundamental plumbing (because what was there DID NOT WORK), and having a big flag day rewrite replacing over 1/3 of the codebase with 1/10th as much replacement code (making it table driven, this affected the GUI that edited the fields and the code that used the fields and the save/load logic and made it all consistent and happen via a common codepath)... and my team lead telling me to check it in on a sunday so management couldn't object...

And then it turned out my manager was there on that sunday, and walked by asking me what I was doing, and then told me not to do it. And then TWO DAYS LATER the testing results of the old code came in and failed spectacularly (one metric took 20 minutes to do a thing my code could do in 3 seconds) and the manager went "oh, we have a performance patch" and told me to check in my code... and took credit for 3 months of my work as a "performance tweak" he'd done in response to the bad test results.

That manager's name was Kip Harris. I _almost_ quit right there. He caused 50% of my department to quit (they couldn't transfer to other parts of IBM due to the relocation, so they left IBM. This is back when that basically didn't happen). He was then demoted out of management, and the new manager (Jim Segapelli) had a reconciliation meeting with what was left of the department and gave me the biggest raise he could (something like 15%, although as a recent graduate it was 15% of $36k/year so not _that_ big) after the previous manager had given me the lowest performance evaluation ("more is expected", although that was partly because IBM had a quota system for rankings, copying Microsoft's stack ranking which started as a way of doing stealth layoffs and then got retained permanently for a decade, which IBM copied because microsoft was doing it).

1996 was not a fun year for me. They gave me a pager, and used it between midnight and 4am three times in the same week. And I did not get paid overtime.

Anyway, IBM had us heads down doing OS/2 4.0, which kept us too busy to notice it was too late for whatever we shipped to matter. In August 1995, Windows 95 shipped, which was still terrible but _less_ terrible than the 3.x line. Instead of crashing hourly it crashed daily, which meant it was approximately usable, which made it the first version of Windows that its entire customer base _wasn't_ looking to actively replace on a daily basis. I.E. the market window for OS/2 to become the dominant PC OS had closed, but we didn't let ourselves acknowledge that until we shipped our own thing most of a year later.

When OS/2 4 shipped, we were finally allowed to interview elsewhere in the company to look for another project to work on. I interviewed for an AIX position working on X11 (because it sounded like fun and would make me learn graphics and the guts of that X11 stuff I hadn't been able to get to work properly under Linux), and the manager told me to my face that I was too young and shouldn't work on this dead-end Unix stuff because it was dying. I brought up Linux... and he hadn't heard of it.

I knew he was clueless. I was still following Linux development, or at least I pulled up the web archives of the Linux development newsgroup from time to time and confirmed it was chugging along. But I couldn't get it to run on any hardware I had. I installed the OS/2 port of XFree86, which is where I first encountered the sparkling/tearing problem XFree86 had with western digital graphics cards. I was thinking maybe I could learn enough to fix it... but the manager dissuaded me from taking the position. Even though I thought IBM was wrong about X11 being dead, if IBM thought it was a dead technology then IBM would sabotage its own version of it and I'd had enough of fighting IBM management to learn and do things.

Tomorrow: My Java years.

July 14, 2018

We're coming up on the 20th anniversary of my return to Linux. I'm in a reminiscent mood.

Summer of 1982, between 4th and 5th grade for me, my family moved from Kwajalein in the Marshall Islands to Medford, New Jersey. (Talk about culture shock... Urgh.) Every summer on kwaj we'd spend a month visiting the mainland USA, traveling around, and before settling in New Jersey we did our usual round of visiting relatives, which means over that summer I saw grandmother's Atari 800 with a basic cartridge and floppy drive, and I was HOOKED. (My father put the floppy in 90 degrees off from where it should be and I figured that out by the scratches the read/write head left on the sleeve.) When we got home I wrote basic code in a notebook with pen and paper, writing a program that was a series of print statements explaining how basic worked. (I was 10. My fascination got a bit meta.)

I pestered incessantly for us to get a computer, and Christmas 1982 my father got the family a Commodore 64. (Which was not an Atari 800 but I had no say in the matter and Sears was having a sale.) By new year's the "family" computer lived in my room. Probably for my birthday in February I got Zork I, although mostly I played pirated games copied from friends. (I was a kid with no money, I wasn't exactly costing them a sale and I knew it. I bought what I could, which wasn't much.) A couple years later I started dialing in to a local bulletin board system "The Realm of the Dragon II", running Dragonfire BBS software which its sysop had written in C128 Basic. (The computer came with a 300 baud modem that plugged into the cartridge port, and a card for some free hours of compuserve, but I never used them because I couldn't pay for online time and didn't want to use up a resource I couldn't replace. A friend borrowed the modem for a week a few months later, and from them I learned about local bulletin board systems we could call for free. It took a while to convince my parents to get a second telephone line, but tying up the first one a lot eventually did it.)

I wrote my own terminal program (in basic, compiled with "Blitz!" so it could keep up with my 300 baud modem) I called "junkterm" because I knew I wasn't much good at programming yet. My online handle was "Greeny", spelled in orange using C64 graphics. (I needed a cable from the sound output to the modem to do touch tone dialing, so I taught it "pulse" dialing by rapidly hanging up and picking up the phone, timed with for loops.) And then following up on that I wrote 3 different bulletin board systems in C64 basic, but didn't really run them except briefly for testing, because I only had one computer and didn't want to tie it up (the system had 65536 bytes total address space and 39811 basic bytes free when the OS ROM was masked in and running. Multitasking was not a thing in that context).

I did everything in BASIC. I dabbled with assembly language a bit but had a really hard time debugging it. (Really machine code, my "assembler" was typed in out of a magazine and wrote to memory directly with hardcoded offsets for all the jumps... I think that's a "machine code monitor" but it's easy to be unclear on the distinctions when you've never encountered all the different options.)

My friend Chip (he picked it to replace "Lawrence") had other computers (he'd started on a TI 99/4a and moved on to a 16 bit PC running DOS), and in 1988 that friend ran a WWIV bulletin board system on a 16 bit XT clone that I helped him apply "mod files" to. WWIV was shareware that gave you full source code when you registered (which you would then compile with a pirated copy of Turbo C, and later Borland C), and a user community that had never heard of the patch command explained to each other in mailing list postings about the cool new change they did and how you could make THIS part of the code look like THIS instead... Yes I learned C by modifying WWIV source, and got a book called "From Basic to C" out of the library to try to understand what I was doing, then in 1990 I borrowed a friend's book called "How to Program in Turbo C" by Herbert Schildt and _memorized_ it (to Roxette's "Look Sharp!" album, on repeat) to get full coverage and connect everything up. (Yes, I'm aware Schildt is considered a terrible programmer, but you've got to start somewhere and "the textbook is often wrong" is an important lesson for everyone. I also found a compiler bug in Turbo C my first year where an increment would get optimized away -- it wasn't in the assembly output -- unless I stuck in a printf() to check its value, in which case it came back. I turned it into "variable = variable + 1;" which was not optimized away for some reason... And I was PISSED when I upgraded Borland C++ from 2.0 to 3.0 and suddenly my throw() function was now a reserved keyword. But I digress...)

In 1989 I skipped my senior year of high school and went straight on to Burlington County College (I still regret not skipping the first 3 years). I graduated from BCC at the end of 1991, and started at Rutgers spring semester of 1992. They had a computer lab full of Sun workstations in the process of switching over from SunOS to Solaris: very expensive and very unixy and yay C but the compiler on them was DEEPLY broken (it returned a "char *" from malloc(), what?) and I mentally categorized them as "big iron" that the PC would steamroller.

Meanwhile at home, I took years of christmas/birthday money plus everything I earned tutoring other students at BCC and bought my first 386 PC (pile of parts out of Computer Shopper), and put together my own DOS box and wrote my first bulletin board in C (including serial interrupt routines! Fancy. I got 8250 ones out of a book then looked up how to enable the 16550a receive buffer. Transmit still happened spinning a character at a time, because that didn't drop characters.) That one was called "chamelyn" (filename had to fit in 8 letters) which could theoretically emulate the UI of all the other bulletin board systems I'd used.

The way it worked was I created a very simple scripting language that was more or less an assembler for an 8-bit assembly language and the C had an array of 256 function pointers where it would for (;;) execute[code[position++]](); and then wrote the actual BBS part in the scripting language. And yes this means I'm one of like 30 people to independently reinvent "bytecode", and when I found out about Java later I (A) realized I'd been working on it about 6 months longer than Sun had, (B) pointed out they'd missed truncate() and couldn't shorten an existing file without deleting it. (Sun Engineer "Mark English" replied to my email that I was right and I'd just missed the Java 1.0 cutoff but he'd add it to 1.1.) I then spent a lot of evenings implementing a java version of the "deflate" algorithm from info-zip, and had just started work on the decompression side when 1.1 came out with zlib bindings in the standard library working about 10 times faster than my java native version. :P

Anyway, in 1992 I found out about C++ in my language survey course at Rutgers (which covered Lithp, prolog, and C++), so I started another bulletin board system from scratch in C++ called "xblat" and I got THAT one connected up to fidonet. (Wrote my own fidonet tosser/packer, but used binkleyterm+zmodem as the network front end. Somewhere I have a random 9999 messages from the fidonet SF echo from the early 90's archived, because the message database cycled itself and that's where I moved and unplugged it, and never bothered to set it up in the new place because I got dialup internet through

The other change between chamelyn and xblat was I got a copy of Desqview (finally, multitasking! Run the BBS while using the computer!) which didn't work with my serial interrupt routines (Desqview had simple round robin scheduling and if the serial drivers were within an image then the interrupt was blocked when that image wasn't running, so there were multi-millisecond gaps with no serial port service, and characters got dropped all over the place), so I had to get FOSSIL drivers working for the serial port and teach my BBS to use them. (Binkleyterm already knew, and could do the baud rate setup and such.) The trick was the FOSSIL drivers ran _before_ desqview, so the interrupt routine wasn't managed by the multitasker, and the "gimme the data you've collected" call still worked from within a managed DOS instance...

This is the context in which I encountered Linux: in 1993 the 4 sls floppies came across fidonet, and I went "this is very interesting, this is a whole operating system distributed the way binkleyterm and zmodem are, it's like WWIV except you don't have to buy access to the base you apply all your local changes to meaning it's hobbyists all the way down. But... why would anyone want to clone a Sun workstation? It can't run DOS programs, so I can't run my BBS or games or any other existing program I have under it."

I took it over to Chip's house and we tried to install it on one of his PCs and xfree86 WOULD NOT WORK with his Diamond Stealth Multimedia graphics card (well, mcga 320x200 mode was the highest resolution it could do). I dug into the Linux mailing list archives to see if I could work out how to get it to work, and they said manufacturers wouldn't release programming information and that this would probably never get fixed. (Years later "trying to get X11 working on any graphics hardware I had" would be the main blocker to switching to Linux as a desktop on both western digital and SiS chipsets. If the fork had happened 20 years earlier maybe Linux on the Desktop would have too... But I digress.)

Circa 1993 my professors at Rutgers were talking about how the big thing coming up was SMP (because linear CPU speed improvements had to end _sometime_), which would require threading to take advantage of, and I dug into the archives for that too... and found that none of them had SMP hardware (too expensive) and a long post by Alan Cox (#2 guy in Linux) explaining that threading was a bad idea and shouldn't _be_ supported...

But meanwhile, DOS was clearly on its last legs. The 640k barrier and lack of built-in multitasking were a PROBLEM. (I eventually worked out exactly what was going on and co-authored a paper about it, but at the time it was just the "smell of death" anyone who grew up on an 8 bit system learned to sense around a platform it was time to move off of.) Meaning I wanted to move from DOS to _something_ else... and Windows 3.x was BAD CODE, being too unstable for words was just the SURFACE problem. And Microsoft had gone full-blown evil trying to shove it down everyone's throats, to the point I installed IBM PC-DOS then DR-DOS to not be running Microsoft's version.

I was wistful about Desqview X but never found a copy, and when OS/2 3.0 came out (the 32-bit 386 rewrite) I _bought_ a copy (real, actual, legitimate with money!) and installed that. It supported SMP and threading, IBM was seriously pushing it for home users (with the "nuns" television ads and so on), it had good backwards support for DOS with a cleanish 32 bit migration path...

This is long, I should cut it here and pick up later.

July 13, 2018

My script currently builds an i686 cross compiler (dynamically linked, with lots of optional features like thread support disabled), and then builds _another_ i686 cross compiler statically linked with the options switched on.

I thought I could simplify this script so you can go "~/ m68k::" and it would build the -host cross compiler and then build the static m68k cross compiler with that. I tried it. The m68k cross compiler build broke in the gmp build because one of the components segfaulted when built directly with the simpler cross compiler.

Bravo, gcc developers. You know my rant about how a compiler isn't fundamentally different than a docbook to pdf converter? While that remains true, the FSF is fundamentally terrible at software and screws up everything they even peripherally touch. (Also, cross compiling sucks because the number of hosts is _multiplied_ by the number of targets when working out how many different codepaths need testing. And documenting _why_ you need to do these elaborate rube goldberg build setups is kinda horrible. My motivation here was to simplify the build so I didn't have to _explain_ it...)

On that note, glibc is appalling.

July 12, 2018

Listening to talks about David Graeber's "Bullshit Jobs" book, and I wonder how the resource curse works into this. You can't go on strike if your work isn't needed, so countries where the economy's based on things like oil revenue, and the work of 99% of the population does not significantly contribute to the tax base, have horrible human rights records because the government doesn't need the consent of the governed. They just need them to stay out of the way while 1% of the population makes 90% of the money from 1% of the labor.

Today we're automating away entire industries, and our last few recessions we've had the opposite of a labor shortage. You can't go on strike if the boss doesn't need your work.

Can automation cause the "resource curse" in the united states? How do we get to a Star Trek style Universal Basic Income future when capitalists corner the market? All the historical precedents involve torches, pitchforks, and guillotines. This time around everybody's waiting for the boomers to die before reassessing the situation. And thus we get a holding pattern...

(And the _fun_ part is the way capitalism's set up, if nobody can afford to buy anything you get a demand limited liquidity crisis that makes all the companies lose money, and then the financial sector privatizes gains and socializes losses triggering a federal bailout to print buckets more money and give it directly to rich people. That's where we are _now_. Awkward way to run a railroad, innit? At what point do we admit capitalism stopped being a functional thing back about when Ronald Reagan cut taxes on the 1% from 70% to 28% and exploded the national debt, and that the whole edifice has been a prolonged exercise in deficit financing ever since? 90% of the modern economy is completely imaginary, the assets only exist on paper, the jobs are useless busywork, and it's mostly just a way for billionaires to feel in control and on top... well, that's what the book is about. "Why are we not guillotining those clowns again?" is a legitimate question. "Because they're septuagenarians who will die soon on their own" is literally the current answer. Which means 20 years from now is gonna be... interesting times.)

July 11, 2018

I've previously mentioned my corollary to moore's law, that 50% of what you know about programming is obsolete every 18 months. And that the reason for the longevity of unix is that it's mostly been the same 50% cycling out over and over ever since midnight, January 1, 1970.

The flattening of moore's law's S-curve has slowed but not stopped this cycle, and I'm waiting for systemd to go the way of devfsd and hald. Some bits do flake off over time (sccs->cvs->svn->git), but the portions of unix that are "old but still relevant" are as close to universal constants as we get in programming, and yes I throw the C language (but NOT C++) in that pile.

*shrug* Time will tell, but fragmentation seems less likely to form a new plateau. If lua had been the browser language instead of javascript the world would have moved to a new baseline already. (C is from the people who made Unix. Go is from the people who made Plan 9. Sure, they're the same people, but context _matters_...)

(C is a portable assembly language, however much the C++ folks shriek Luke Skywalker's "No, that's not true, that's impossible!" line every time somebody points out the obvious. And strive to sabotage compilers with endless Undefined Behavior to try to screw it up. Just admit that signed math is two's complement and make LP64 part of C20 already...)

July 10, 2018

My domain has been up for over a decade, and has a reasonable google rank, which means I get weird SEO emails all the time, which aren't just pure bulk spam but at least lightly targeted.

I also have some content up there like the history mirror, my old motley fool articles (which I've been meaning to properly index forever), and the kdocs staging area from back when I maintained kernel documentation, none of which were originally written for this website but are basically just mirrored here. And this tweaks people who want to ADD to them, somehow.

Today's is:

On 07/11/2018 09:59 AM, [NAME] wrote:
> Hi there,
> I wanted to reach back out regarding my previous email (attached below)
> about [SITE]'s newest article, "Why Women Should Invest and How to Get
> Started".
> Many women assume that investing either requires expertise, a lot of time,
> or large amounts of money, but that's not the case! Our article highlights
> reasons why investing is important and more profitable than traditional
> savings alone while helping women craft a strategy and find an investment
> platform that works well for them.
> I believe that our article would be a great addition to your page here:
> I have included our article for your reference:
> [URL]
> Please don't hesitate to reach out if you have any questions, I hope to hear
> from you soon!
> Best,
> David

And I replied:

20 years ago I wrote stock market investment columns for The Motley Fool. I have mirrors of some of my old columns on my personal website. You're asking to add a thing you wrote, which is already on your website, to my mirror.

This seems... odd?


I'm sure there's some weird exploit going on here, but I dunno if it's SEO rank harvesting, or a cross-site-scripting exploit, or a page that's going to be innocuous for 3 months then start serving viruses, or...? (I get requests like this a couple times a month. And those are just the ones that make it through gmail's spam filtering...)

July 8, 2018

I have a dozen things queued up to do and what do I spend time on? Fixing ping. And not even the improved error reporting suggested on the list, but reviewing the code to see where that should fit in and then testing corner cases shows me the error reporting stutters (division by zero error in the summary display logic when no packets have been returned, which triggers a signal that... calls the summary display logic again at exit.) And -c isn't working when you ping a site that doesn't reply (it's limiting returned packets, not sent packets), and it looks like -w isn't working (haven't tested yet, just reading the code and seeing a dodgy "else" handoff)...

July 6, 2018

Upstream Linux broke LED platform data on sh4 when they converted it to device tree. They still allow platform data to pass in a pointer, but they changed the structure type that pointer dereferences, and the definition of the structure is local to the driver consuming it so the platform data CAN'T provide it. (There's still a generic structure providing all the info in a driver-agnostic way, but the device tree conversion changed the code to no longer _use_ it.)

And of COURSE the device tree guys' response is "convert everything to device tree, you have no other choice, we're part of systemd now" or some such...

*shrug* I have a local patch that makes LED platform data work again without breaking device tree (I think, haven't got a device tree thing to test but they obviously never tested platform data so...), and I posted that patch, and I'm using that patch. If they don't want the patch, vanilla can stay broken. As usual, add a todo list item to poke 'em again in a year to see if whoever was objecting has died yet.

July 5, 2018

Looking at watch again: "watch ls --color" prints "^[01;34mandroid^[0m" and similar. And "watch 'date ; sleep 5'" produces no output for 5 seconds, and then updates the display every 5 seconds.

That's... fairly simple behavior to implement. It's also wrong, I want toybox to do _better_ than upstream here.

(Sigh. I always get screwed up by singular/plural in directory names. Is the url "download" or "downloads"? Is it "tests/files" or "test/file"? I _fairly_ consistently use plural, but not quite enough to get it right, and lots of directories in URLS and packages and such aren't mine anyway.)

And how does "quoting arguments" work? According to strace, "watch echo 'one  two'" becomes exec("sh", "-c", "echo one  two"), which results in an output with just one space between the one and the two. (The downside of washing the command line through sh -c, passing through argument quoting becomes darn near AI-complete. Looks like they basically don't try to get it right.)
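You can see the quoting loss without watch at all, just by replaying what that exec does by hand (sketch assuming a POSIX sh):

```shell
# watch joins its arguments back into one string and hands it to sh -c,
# so the shell re-splits on whitespace and the original quoting is gone:
sh -c 'echo one  two'    # the double space collapses to one

# the quoting survives only if you quote for *both* shells:
sh -c 'echo "one  two"'  # the double space survives
```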

July 4, 2018

Went down to the beach yesterday evening to watch fireworks. It was really crowded. My apartment's like 3 blocks west of lake michigan, and the "beach" here is basically a corrugated metal barrier between the water and land, marking an abrupt edge of the land a couple feet above the water, stretching for farther than I explored. Swimming would not appear to be a design criteria.

There were a bunch of food trucks and people selling glowsticks. And many, many people. (Everybody gets the actual 4th off, so that was the night when nobody had to be up early.) Nothing quite as tiring as being alone in a crowd. Went home after about 5 minutes of fireworks.

Day off today, poked at toybox stuff. Didn't get as much done as I wanted, but then I seldom do. Looming end of available time limits what I'm willing to start on, most ratholes are too deep to go down, I have to leave off halfway and then next time I've made new work for myself rebuilding the context where I left off. It's not bad if I know I can get back to it the next day, but I usually can't. So I pick at things with little or no momentum.

Oh great. Fuzzy drove to a drafthouse rolling roadshow (Independence Day, with fireworks and flamethrowers, at a "stunt ranch" half an hour outside of town), and had to swerve to avoid a drunk so the car crashed into a ditch right outside the venue. Some people from the stunt ranch helped her get it out of the ditch and into the parking lot, she stayed for the event, and then tried to drive a damaged car home.

Now she's in a parking lot 15 minutes outside of austin, the symptoms sound like the engine had an infarction, and it's after midnight on a holiday. I've found a 24 hour tow service and a mechanic she can drop the car at (google maps' THING near ADDRESS search remains lovely, living in the future), but there's a _reason_ I check for anything dripping under the car after even a fender bender, and then I'm really careful driving and ready to pull off at the first strange noise or smell or gauge different from usual until I can have a professional frown expensively at it. (They train in front of special mirrors.)

Sigh. The car already had issues. Check engine light's been on for a year in a manner that stumped even the dealership, which wanted to do $3000 of _unrelated_ work on it. But other than needing to add a new container of steering fluid every 6 months it's been fine, and we don't drive it much.

All the research I've done says not only will app-summonable self-driving electric car services become available in all major cities over the next 5 years (at a monthly cost cheaper than owning an already paid-off car), but somewhere around 2025 the decline in gasoline demand will cause the gasoline supply chain to dry up and blow away. All the refineries and tanker trucks making daily deliveries to gas stations are operating on razor-thin margins optimized over the past century, and they don't scale down easily. It doesn't take much drop in volume for profits to fall below fixed costs there, and a supply chain collapsing due to unprofitability before all its consumers had weaned themselves off of it happened in Australia in 2016, causing rolling blackouts in electricity generation. Of course the coal billionaires, like buggy whip manufacturers before them, blamed the technology that had rendered them obsolete and sponsored hit pieces, but as with the buggy whip manufacturers they still lost.

Without that supply chain, gasoline becomes something you basically mail order, like getting liquid nitrogen to make ice cream. Call somebody up and a truck delivers a large cylinder to your driveway the next day, and picks the empty cylinder up again a few days later. (I suppose if you own a tank it would be more like natural gas for barbecue grills, or furnace oil delivery.)

The point is, in about 7 years gas stations go the way of CRT televisions, CD players, and landline phones. I'm reluctant to buy another gasoline car under those circumstances, but the electric cars are still too new to be available used. A new car, gas _or_ electric, is at least $30k, more than I want to spend in this situation. (About like installing a satellite dish when you know cable modems will reach your neighborhood in a couple years. I limped along on dialup until the new thing was ready...)

I'm currently up in Milwaukee and Fade's up in Minneapolis, neither of us were using the car. I let my driver's license expire in 2013 (didn't want to pay an extortionate ticket to Stafford, Texas) and didn't renew it for ~4 years until a friend needed someone to help her move in another state. (The entire time I was working at Pace I took the bus there.)

Really the one this impacts is Fuzzy. I'm pretty happy to wait for Waymo to put Uber and Tesla out of business. The first real-world self driving trials are underway and showing up in my twitter feed. Soon owning your own car should be like owning your own milk cow, and knowing how to drive a car like knowing how to ride a horse. You can, once upon a time most people did. And then it became a very expensive hobby supported by adjacent hobbies. (Auto mechanics in about 20 years will probably be like farriers are today, and there aren't a lot of feed stores, watering troughs, and hitching posts downtown anymore.)

July 3, 2018

I emailed to ask if Google might be interested in buying a "toybox support contract", and heard back that although it's not the decision of any of the people I regularly communicate with (they're all engineers)... the answer is basically no. Google doesn't like to use vendors, and the corporate side doesn't see the point in investing in a command line that already does what they think they need.

*shrug* Can't say I'm surprised. I got turned down when I first asked if they just wanted to _use_ it. If I was easily discouraged I wouldn't have gotten this far. But having to keep digging away in my own time to solve these problems in spite of the large institutions is painfully slow, it would go so much faster if I didn't have to spend most of my time and energy working on something else entirely to support a family.

Large institutions have a different mindset than individuals. I need to do a proper version of my 3 waves talk. (I'm still sad that when I _did_ a proper version, which I was pretty happy with right after I'd given it, Chicago's Flourish conference never posted the recording. Sigh.) I could propose it as a talk at another conference, but I haven't been doing that this year because I'm sick of hearing from white guys and don't want to take up a slot that should go to someone else. Really I should just record the talk myself and put it on youtube where it's not _my_ fault it's completely ignored. (Yes, I say that with the original article series on The Motley Fool having gotten something like 17 million views the first year, being reprinted in their "popular articles" section years later, getting third party commentary, etc. My brain does not accept this.)

The problem is without an externally imposed deadline and an audience/editor who would be disappointed if I didn't finish by deadline, it stays on the todo list forever. (I put "podcast" as a patreon stretch goal in hopes of creating a deadline/audience without having to scrape up several hundred dollars to travel at my own expense to some hotel in some city where half the time they WON'T RECORD IT ANYWAY. Grrr...)

July 2, 2018

Google Maps' "keyword near address" search mode is very useful, as are (as kbspangler calls them) "hired dudes".

I found a handyman near my apartment to install the air conditioner that's been sitting on the floor next to the inflatable mattress for a month. Somebody with actual tools who could get the screen out and maybe would have insurance if they dropped the air conditioner into the alleyway and it hit something expensive.

They didn't, and it works fine now. Not exactly an _elegant_ install, it doesn't quite fit the window without styrofoam and duct tape. But... close enough!

July 1, 2018

I wax rhapsodic about universal basic income on twitter partly because "retiring to do open source full time" seems perverse. (I could accomplish so much more if I didn't have to do all this other work!)

But also because there's no NEED for most modern work. We've long since automated away the "subsistence farming" jobs: 200 years ago 80% of the population farmed, now less than 1% do, then manufacturing similarly declined, now we're all doing made up jobs because "employment" is a good even when it's "stand out in the street and hold a sign reminding people our restaurant exists". David Graeber's got a new book on this I need to finish reading, but his original 2013 article on it remains a pretty good summary. (The book just has a lot more supporting data, analysis, elaboration, historical context...)

Starting in the 1970's computers started seriously automating away clerical jobs: the "typing pool" at large companies became word processors, desktop publishing software took out the typesetting profession, drafting blueprints isn't really a thing anymore, tabulating spreadsheets used to be what accountants did all day and now it's the name of the program that does it, nobody "gets their start working in the mailroom" at a company with email, the only reason tax preparation is still people instead of a .gov web page is the tax prep companies spent huge amounts annually lobbying to keep their jobs as unnecessary middlemen...

Now the fossil fuel industry is switching over to distributed solar panels and batteries (that's 1/6 of the economy), self-driving electric cars are automating away taxis and truck driving, and if you add in a short haul delivery drone going from truck to house you've got mail/package/pizza delivery sorted soon.

End result is that almost all the jobs are optional, everything that really _needs_ doing doesn't even add up to 1/10th of the population. The "but what will people eat" objection ignores the fact that food is so cheap you can't make a living _providing_ it except at really big scales with razor-thin margins. The current estimate is housing all the homeless people in the country would cost about $10 billion, and the military _misplaced_ that much money in Iraq. (As in "lost shipping containers full of cash", there were Leverage episodes on this and yes it was a real thing.)

The big pending unmet demand is for the surge of baby boomers needing hospice care, but HMOs bought up the independent medical practitioners in the 80's and it all turned into for-profit corporate conglomerates that treat employees as disposable. The certification requirements there mean you drown in student debt to get permission to change bedpans. That demand remains unmet for structural reasons having to do with capitalists cornering the market, regulatory capture, insurance industry middlemen, the AMA acting as a guild limiting membership via medical school quotas starting back in the 1970's, and so on. Basically "corruption" but with huge marketing and lobbying budgets to avoid anyone calling it that.

Royalty went away. Serfdom went away. Guilds mostly went away. Capitalism can join the heap, we just have to wait for the boomers to die first because they're too old to fit a new idea into their collective heads. (Well, 2/3 of them are. I'm aware it's #notallboomers, but it's most of them.)

Capitalism used to be the solution. Then it became the problem. The wheel turns, old story...

June 30, 2018

Went to the big protest in front of the courthouse, somebody was handing out posterboard squares and letting you use a marker to make your own sign, so I did "ICE at the poles, not at the borders". (Something somebody said on twitter.) I brought a powerade bottle, but it wasn't enough.

It was nice, the only dip was when they had a grey haired old white guy speak (we've had enough of that, thanks, and I say that as one), but that was just one speaker out of a dozen or so.

Next time: sunscreen, two water bottles, make a sign ahead of time.

June 28, 2018

Politics is horrible and draining and relentless, but while doing a version march from 3.18 to 4.14 at work I hit "Temporary per-CPU NMI log buffer size" showing up as a new config option under the RCU Subsystem and still had a burst of rage wanting to smack the linux-kernel development community at large for unnecessary overcomplication.

So there's that.

(A version march is where you try the release versions in sequence, because the config and board support patches need so much manual adjusting every couple releases that trying to get anything coherent out of a bisect is crazy. I stopped at 4.14 because 4.15 has some intermittent flash/jffs2 corruption bug that eats the filesystem if you write enough to it, but is intermittent _enough_ it's annoying to track down. No idea why it's happening. Stopped at 4.14 for the moment, that's relatively recent.)

June 27, 2018

Got the new m68k toolchain and target built in mkroot. Needs vivier's qemu-m68k, but otherwise pretty much works.

June 26, 2018

Work's got a giant multithreaded application that's trying to do realtime tasks, and they have a shared library that calls system() and popen() to run subtasks. Naturally: that doesn't work right.

If you fork() from a thread, the normal copy-on-write semantics don't apply and the new process has to copy all the process's memory to the new PID, which can easily be dozens of megabytes. (The problem is threading is already sharing it, and having separate processes _and_ threads share the memory would require each page to have _two_ reference counts to keep track of the different categories of sharing, which would bloat the page tables.) Forking also takes locks that block system calls and page faults in the parent process, which is not a problem when a single-threaded parent hasn't returned from the fork() system call yet, but it means other threads of that process can't get anything done either. Copying all that memory takes a long time and happens under those locks. Between the two of them this manifests as a large latency spike in the existing process's other threads while the fork() is happening. We were measuring 70 milliseconds on an embedded board.

The fix is to use vfork() instead, which only blocks the parent _thread_, not the parent process. It doesn't copy memory (the parent and child use all the same mappings, even the stack, with the parent blocked until the child calls exec() or _exit()). Of course using vfork() properly is tricky (mostly because very few people these days seem to understand what it _does_), so I wrote a vsystem() and vpopen() based on vfork() that they can replace all their system() and popen() calls with. (Luckily, I'd already done most of that for toybox so I could crib from my own work.)
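The rough shape of that vsystem() is below. This is a minimal sketch of the idea, not the actual toybox code (which handles more corner cases): the child does nothing but exec, because until it execs it shares all the parent's memory, stack included.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Sketch of a vfork()-based system() replacement: only the calling
// thread blocks (until the child execs or exits), and no page tables
// get copied, so a big multithreaded parent doesn't see a latency
// spike in its other threads.
int vsystem(const char *cmd)
{
  pid_t pid = vfork();
  int status;

  if (pid < 0) return -1;
  if (!pid) {
    // Child: exec immediately. Modifying parent state here is unsafe
    // since parent and child share all memory until the exec.
    execl("/bin/sh", "sh", "-c", cmd, (char *)0);
    _exit(127); // exec failed
  }
  if (waitpid(pid, &status, 0) < 0) return -1;

  return status;
}
```

A vpopen() is the same trick plus a pipe() set up before the vfork() and dup2()ed into place in the child.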

Way back when I wrote up a couple pages on vfork() for the busybox FAQ, but after I left the busybox guys crapped a long busybox-specific digression into the middle of it (something about configuring and building the busybox binary?) that made it useless for explaining vfork to people. I should dig up my old text from before they broke it and put it in the toybox FAQ so when I want to explain "why vfork" I can link them to an existing thing. (It's on the todo list. I don't have as many half-finished FAQ entries as blog entries, but it's a similar category of problem.)

June 25, 2018

Swapped out my phone battery with a new mail-ordered one. The new one lasts MUCH longer.

Making another try at an m68k toolchain build. It's being stroppy.

Editing another batch of blog entries. I got up to an entry that trails off halfway through a technical explanation, which is where "editing" turns into large amounts of "new authorship". But it's a writeup of what I was thinking at the time, so it counts. (Autobiography is seldom written live as it happens.)

(Then again I left myself a "Did my old realtime Java GC idea ever get implemented?" note as the whole blog entry a week ago, and that's a longish writeup to properly explain, which I haven't done yet. My blog has significant technical debt.)

June 24, 2018

Cut a toybox release yesterday, so today I'm tying off some things I bumped until after the release. One of them is splitting up the ps help text so "ps -o help" (or any unknown -o field) shows the field list, and the normal ps --help is the other half.

This of course led me to thinking that if I'm breaking this into a function anyway, what I should REALLY do is move the help text snippets for each option into the option array, and just have a function traverse it to generate help output. It would have to make a couple passes because I have the six different variants of "show the command name/line" broken into their own sections, but that's easy to detect/categorize in the table so sure.
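The table traversal idea looks something like this sketch (field names and section layout invented for illustration, not the real toybox ps table): the help string lives next to its entry, and the help output is generated by walking the array.

```c
#include <stdio.h>

// Each -o field carries its own help text and a section number, so
// "ps -o help" output comes from traversing the table in a couple of
// passes instead of hand-maintaining a separate help blob.
struct ofield {
  char *name, *help;
  int section;  // which chunk of the help output this belongs in
};

static struct ofield fields[] = {
  {"PID",  "process id",        0},
  {"PPID", "parent process id", 0},
  {"CMD",  "command name",      1},
  {0, 0, 0}
};

// Print one section's fields, return how many were shown.
int show_field_help(int section)
{
  struct ofield *f;
  int count = 0;

  for (f = fields; f->name; f++) if (f->section == section) {
    printf("%-8s %s\n", f->name, f->help);
    count++;
  }

  return count;
}
```
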

Except there are two multiline entries in the table, and it might make sense to move those into a third section. And this led me into looking at the list of "S" field output, which has a bunch of magic letters that mean stuff, and what the heck is "wakekill" anyway, so I went down the rathole of trying to figure out where I got that from, and... it doesn't seem to be in current kernels?

Table 1-4 in Documentation/filesystems/proc.txt just says "state (R is running, S is sleeping, D is sleeping in an uninterruptible wait, Z is zombie, T is traced or stopped)" which isn't all of them, so I dug down into the source until I got to fs/proc/array.c task_state_array (line 140-ish) and that's got RSDTtXZPI but still no K... Ah, here's wakekill.

June 22, 2018

I have now been away from the cats long enough to miss them a little. Not enough to want to have any up here in Milwaukee, but enough that I am not Actively Overcatted.

(I wanted to upgrade from pets to kids a decade ago, but that's not how my life turned out. Oh well. Cats started registering as "this is what you got to have in your life instead of children" and it got old.)

June 21, 2018

A really good tweet led me to read about the history of dinosaur extinction research, and science mainly advancing when old fogies die remains very true. As does the observation that when someone's conclusions remain constant but their professed reasons for coming to that conclusion constantly change (same result, different justification), something is wrong.

I remember one of my school teachers had a newspaper article about this, back around 1990, which was quite detailed about the decades of research the local Yucatan oil drillers had put into examining this big crater they'd found, and their inability to get rich white english speakers' attention, and then I'd tell people for over a decade afterwards "oh yeah, they found the dinosaur impact site, it's just off the bumpy bit south of panama" and nobody believed me.

Same with the story of the guy going "no seriously, H. pylori bacteria cause ulcers, you can treat this chronic disease with antibiotics" and the rest of the medical profession ignoring him despite that one being really easy to test. (Everybody thought ulcers were caused by stress, it was a plot point in the 1979 Disney movie "The North Avenue Irregulars" I had on videotape growing up, but in 1984 he ran an experiment proving it couldn't _not_ be a bacterial infection no matter how much senior medical professionals insisted it was stress because it had always been stress, and he got the Nobel prize for proving them wrong 20 years later, after enough of the old fogies he'd offended by having the facts on his side finally died.)

The interesting part of the long wikipedia history of the extinction research above is how many different ways it got confirmed with people still flipping out. (Lots and lots of "3/4 of all species go extinct at the K/T boundary layer" and "iridium at the K/T boundary layer consistent with asteroid impact found at dozens of sites around the world" and "there's this K/T boundary layer AROUND THE ENTIRE WORLD, seems like a thing" being responded to with "crater site or it didn't happen". I grew up with that question for about 10 years and then "Oh look oil drillers found a HUMONGOUS CRATER the same age as the K/T boundary, they found it years ago but they were poor Spanish-speaking brown people in Mexico who sent letters to white dudes that the white dudes never opened, and presented at conferences that no white dudes attended. Funny that.")

The important part is that the people insisting the world was the way it wasn't, and who had seniority and power and could make everybody ACT like the sky wasn't blue, finally died. And then of course painfully obvious truths were acknowledged; the emperor can have no clothes once he's in the casket.

(Today's "huh" is actually the diabetes vaccine, although a decade ago they thought injecting capsaicin into the pancreas would shock it back to normal so who knows. "Everybody's wrong about this" doesn't mean they _are_. Just that you have to remain open to new evidence.)

June 18, 2018

We're seeing garbage collection latency spikes in the mono app at $DAYJOB, which is frustrating because I designed a realtime garbage collector back in the 1990's. I wonder if it ever got implemented?

Hmmm, I'm not finding a writeup of my old idea to link to. (The blog I had at the time was on, it's not even in the wayback machine and I lost those files when a Zip disk got the Click Of Death.)

Back in 1997 I was thinking about both java GC latency spikes (the screen saver in IBM's powerpc port of JavaOS would freeze visibly every couple seconds), and how to extend java references to 64 bits (because clearly that was coming after we dealt with all the Y2K bugs). The obvious way to do the second was to have the actual reference be an index into an array of pointers (and when I later described this idea to a professor at UT he said that's called an "object table"), and it occurred to me doing that could make garbage collection fully asynchronous.

Garbage collection needs to know "is this reference still used or not", which is one bit of information. So have a bitfield with one bit per array index, and at the start of garbage collection memset the bitfield to zero. Then as your garbage collector walks the global variables and down each thread's stack, set the bit of each reference you find (basically "thingy[index>>3] |= 1<<(index&7);").

The trick is during garbage collection, any java "assign a reference" bytecode should set the bit for the reference it just assigned to a new location, because otherwise it could move it "behind the back" of the garbage collector to somewhere it's already checked. That way you don't have to freeze all the threads to run your garbage collector: the still-in-use bit gets set either way.

There are of course implementation details. You might need as many as _three_ bitmaps to do this (one an allocation bitmap when creating new references, which becomes the "output" of the gc process, a second for the references your GC has confirmed are still used, and a third for the ones you've found but not recursively looked at the dependent objects for yet).

And of course there's a bit of fiddlyness to make sure you don't walk off the end of the stack of a thread as it returns from functions, but as long as you bounds check the values against the object table, the worst case scenario of reading garbage that you think is a reference is that you misidentify a dead object as still used until the next gc run. That could create a false positive but never a false negative. (And if you do implement a bit of locking at the end of a single thread's stack to avoid that, it's still _bounded_ latency. You only ever need to freeze one thread at a time for the duration of examining _one_ function's local variables, which might be easier to implement as "when the GC in progress flag is on, returning from a function checks if the GC is in this function, and if so waits until the GC has left this function before returning".)

Doing this would even let you pack your memory: copy an object's memory to a new memory location, then move its reference in the object table. All the scattered references to it in objects and local variables and such are just an index into the table, which doesn't change. The actual memory location's in one place that can be changed atomically. You can use mprotect() to yank the write bit from the old copy to pause threads that try to write to the old copy during the move and it's still bounded latency. Heck, you could fix up the write attempts in the fault handler _during_ the move if you wanted to, so the size of the object being copied doesn't affect the latency.
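The compaction step itself is tiny once everything goes through the table. A sketch (the table name and size are illustrative, and this ignores the mprotect() dance for concurrent writers):

```c
#include <stdlib.h>
#include <string.h>

#define MAX_OBJS 4096
void *obj_table[MAX_OBJS];  // every reference anywhere is an index here

// Relocate one object: copy the payload, then swing exactly one
// pointer. None of the scattered references change, because they're
// all indices, and the single pointer update can be atomic.
void gc_move(unsigned index, size_t size, void *new_home)
{
  void *old = obj_table[index];

  memcpy(new_home, old, size);  // copy the object's payload
  obj_table[index] = new_home;  // the one atomic pointer update
  free(old);                    // old location is now reusable
}
```
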

Of course this was back before JIT, when bytecodes were interpreted in a loop, so switching between "doesn't set the bit" to "sets the bit" modes when you start a GC pass would be easy. (Setting the bit _all_ the time would be impolite to L1 cache.) And it's a realtime thing rather than an actual optimization (avoids latency spikes, but slower over all due to the extra dereference in every object access, although modern processors with large caches and deep pipelines honestly might not care).

But I got busy and didn't pursue it. (After all, if I could think of that obviously other people doing all the fancy JIT stuff had to be way ahead of me...) I did describe the idea to that UT professor I mentioned (well, adjunct faculty? I think he was the assistant for the Data Mining class I was taking), but I got busy and didn't sign up for any classes next semester and didn't see him again for months. (Which means it must have been fall of 1996 when I spoke to him, that's when I applied to grad school and took one Data Mining class, then stopped for several years before trying again.)

Amusingly, I bumped into the professor again (I forget his name; white guy, dark beard, barely older than me and I was in my mid-20's) a year or so later at one of the Mensa Thursday night dinners at HEB Central Market's little restaurant thing, and he said IBM (which I'd stopped working for by that point) had offered a low-five-figure grant to sponsor work on my realtime garbage collection idea... but I'd never responded to the email he'd sent to my student email address (which I'd never asked the university to give me, and had never logged into; I had a home email address, he hadn't used it). I assumed at the time IBM had implemented it themselves if they thought it was worth doing, and got on with my life.

And now 20 years later I'm hitting a problem I designed a solution for right after I graduated from college. Frustrating. (Obviously I'm not personally rewriting Mono's garbage collector from scratch any time soon. But dude, how is this not obvious?)

June 17, 2018

Back from Austin. Jetlagged by the redeye flight.

Finishing up fmt.c.

June 14, 2018

Thursday already. Tempus fugerunt. Far enough into my week away from $DAYJOB that I start culling my todo list because it ain't gonna get done this time.

Last night I opened one too many chrome tabs and my netbook ran out of memory to the "can't move the mouse cursor for half an hour because it's swapping" level. Once upon a time Linux used to have this thing called the out-of-memory killer but people complained it might kill the wrong process, so now when it would have triggered it instead hangs, to the point it won't recover if you leave it running overnight, and you need to reboot and lose all open windows/tabs on all 8 desktops instead. About what I've come to expect from Linux on the Desktop: lateral progress. Stuff that used to work no longer does. (A companion to "Something must be done, this is something, therefore we must do it." The old Linux Luddites podcast motto: "Not all change is progress.") Anyway, rebooted and trying to figure out what I was doing now all my context's gone. And my blog serves its original purpose! (Notes-to-self: what was I doing again?)

Checked in the do_lines() semantics change with changes to sed and cut, found a bug in cut (regression from October where adding a pedantic posix compliance corner case broke real use elsewhere), complained on the list and fixed it. Still haven't gone back and redone fmt yet.

June 12, 2018

Spent most of the day watching Fade play Revenant Kingdom, my biggest achievement before dinnertime was getting out to the credit union to activate my new debit card. But I should get a couple hours programming in, so headed out to wendy's with fully charged netbook and phone. Let's see...

I want to switch fmt to use loopfile_lines(), and it cares about the end of files so I want to change the do_lines/loopfile_lines shared infrastructure in lib to pass a NULL line to flush, and it currently doesn't which means I need to adjust existing users.

Context: do_lines() takes a filehandle and calls readline() in a loop, passing each string to the callback function. The callback gets a char ** so it can NULL it out if it wants to keep it (caller frees the string otherwise), or it can assign (char*)1 to it to skip the rest of the file. Then loopfile_lines() takes a list of files (null terminated char *[]) and iterates over them, calling do_lines() on each.

The implementation requires a glue function for loopfiles_rw() to translate argument syntax (loopfiles_callback(int fd, char *filename) to do_lines_callback(char **str, int len)), and the glue function stores the real callback in a global variable. This is slightly awkward on two levels: 1) it means you can't nest two loopfile_lines() calls (which hasn't come up yet), and 2) the global isn't in struct toy_context. (Keeping it near the user vs keeping them collated so sizeof(toys) shows how much global data we're using, both have downsides, it's one of those things where the stakes are small enough the relative cost of the less-right solution isn't easy to see, so I'd wind up agonizing over minutiae if I tried to fix it. There's a few static vars in lib/*.c, but no toys/*/*.c gets to have any so at least there's a rule.)

Anyway: changing the semantics of do_lines/loopfile_lines to indicate end of file because something cares now, and there are two current users: The "cut" command is using loopfile_lines() and doesn't care about file breaks so I can just add an if (!line) return; at the start of its handler function. But the other user is sed, which is calling do_lines() directly and it _does_ care about file breaks, but only in -i mode. And it's doing explicit do_lines(0, 0) flushes. Hmmm...
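The contract described above, with the proposed end-of-file flush, looks roughly like this. This is a sketch of the shape, not the real lib/lib.c code, and in this sketch "NULL line" means the callback gets a NULL pline pointer:

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>

// Read lines from fp, handing each to the callback as a char ** so it
// can NULL it out to keep the allocation (caller frees otherwise), or
// set it to (char *)1 to skip the rest of the file. The semantics
// change: a final call with NULL signals end of file.
void do_lines(FILE *fp, void (*call)(char **pline, long len))
{
  char *line = 0;
  size_t allocated = 0;
  long len;

  while ((len = getline(&line, &allocated, fp)) > 0) {
    char *pass = line;

    call(&pass, len);
    if (pass == (char *)1) break;        // callback: skip rest of file
    if (!pass) line = 0, allocated = 0;  // callback kept the string
  }
  free(line);
  call(0, 0);  // the new part: flush call at end of file
}

// Example callback: count lines, detect the EOF flush.
static int seen_lines, seen_flushes;
static void count_cb(char **pline, long len)
{
  if (!pline) seen_flushes++;
  else seen_lines++;
}
```

With this shape, fmt's "care about the end of files" case is just the !pline branch of its callback, and cut's handler starts with if (!pline) return;.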

June 11, 2018

I'm in Austin for a week, taking time off from work. (I don't get paid, but I get to recover, see everybody at home, and work on toybox.)

I finished up and promoted ping, and now I'm poking at fmt which logically should use the loopfile_lines() infrastructure, except it doesn't have a flush call at the end of each file. The only current user is cut.c, which means sed is _not_ using it. I should figure out why.

These aren't necessarily the most important commands, these are the ones that are closest to being done. I need to cut a toybox release this week, and I should get kernel patches resubmitted (week 2 of the merge window), and I should integrate mkroot into toybox, and I should figure out what to do about microsoft buying github...

June 8, 2018

Set my alarm early to pack, wound up lying in bed doing the "I need to get up" but not actually moving thing. (My flight's somewhere around 6:30 pm so leaving work a bit early.)

Trying to finish up everything in the world at work in the meantime...

June 7, 2018

I get emails. Today I got:

Hi Rob,

I have seen your video regarding building a Linux system. You have been doing an amazing work in the Open Source world.

Quick Intro (myself): I am an Embedded Software Developer, an Operating System enthusiast. Love and Passion for Kernel internals, building easy to use systems.

My Problems: I have tried may open source distributions. None of them would work for me satisfactorily. I run into many issues. Like,

> Sometimes I try to install some drivers (for some hardware) and it won't work. There would be endless compilation errors.
> Sometimes window will crash, some application may crash.
> One day or the other, something would be broken.

There are many more issues, I can not recollect all of them at this time.

I know it is open source software, but still as an independent user (software developer) I struggle a lot.

How about creating an operating system that just works. Idea is to create a Linuxbook (like chromebook) and a whole ecosystem for that. I see a very good market for that.

I have list of features in mind which can bring good from all the worlds in to one system.

I would like to know your thoughts on this.

Which is why I'm trying to find a good introduction to the Dunning-Kruger effect that ISN'T a pile of smug superiority. So far I've found this.

We all start there. "Thousands of other people have tried, but I haven't yet, how hard could it be?" That's the Dunning-Kruger effect. The domain expertise necessary to figure out how difficult something is turns out to be exactly the same set of skills necessary to perform the task, meaning anything you have no idea how to do seems easy.

And it's not entirely a bad thing! Linus Torvalds said if he knew what was involved in writing Linux when he started, he'd never have started. Linus Dunning-Krugered his way into doing Linux and then played "oh, we need to change this" for over twenty years now. He's _developed_ buckets of domain expertise as he went along, but his initial "I can do this" bravado was based on a one semester course followed by copying a toy OS. (Luckily his "I don't see why we need to do this" rejection of microkernel architecture turned out to be right because the ivory tower academics were teaching BS [LINK tanenbaum-torvalds debate], but the newbie being right and the old hand not seeing the forest for the trees was largely coincidence.)

(Although the confrontation boiling down to the newbie saying "Ok, explain it to me" and the expert not being _able_ to is an important corrective factor in science. Has been for centuries.)

June 4, 2018

Last job I spun my wheels in pursuit of world-changing goals. I really wanted to Do The Thing and we never made decent progress largely due to factors beyond our control (politics among the board of directors screwed up our second funding round leading to perpetual understaffing so we were all swap-thrashing between too many tasks, and Jeff always shot down any short-term "funding from operations" approach which was all _I_ knew how to help with, because it wouldn't bring in enough money to matter with the burn rate the company had). That turned into Giant Bundle of Stress, the money dried up and I went into debt to keep doing it, and I'm still recovering from both.

This job I'm well-paid to do a small near-inconsequential thing each day. My Big Project over the past month was a kernel version upgrade. Nobody outside the company is ever likely to notice and even within it only a half-dozen actually understand what a given thing is for. (My last two issues were "show the right mac address in the boot console messages on startup" and "the board should remember the DHCP address it had last time and request it again next boot". The larger project is that we're migrating networked climate control systems for large buildings from Windows CE to Linux because CE on this hardware was end-of-lifed.) But the work gets done. Things are finished, checked in, tested, and we move on. This company has no shortage of money.

Neither was what David Graeber's new book calls a "bullshit job", but each is only half of an ideal job. I'm strongly reminded of an old dilbert cartoon.

I look forward to the baby boomers dying so we can stop considering capitalism as normal, and cash in the past century of economic and technological progress to finally get universal basic income. (There may be some "let them eat cake" between here and there. I really really hope the current administration means we're getting the Robespierre/Napoleon part out of the way _before_ the guillotines come out, but I have no clue what the future holds. The millennials seem to be waiting for the boomers to die before taking any other action. Gen X has been waiting for the boomers to die... since Reagan was elected, I suppose.)

(Yes, I am aware #notallboomers. As with #notallmen, it's not a useful objection. This ain't getting fixed while Racist Grandpa is voting with the dregs of the confederacy. Society advances the same way it always has.)

(No, "people will just breed to consume the resources" turns out not to be true: if you educate women and reduce child mortality rates without providing significant support for child rearing a la subsidies and free daycare and so on, population falls below the replacement level. This has happened in every advanced civilization around the globe, it's a significant problem and part of the reason we have more old people than young people, and only places like Finland have been effectively compensating. The average family size is _less_ than 2 children. The US population has only been growing recently due to immigration, and that's rapidly declining because nobody wants to come here anymore. Not even to visit.)

June 2, 2018

I've been doing cleanup for a toybox release, although I'm flying to Austin on the 8th for a week off from work, so might bump the release back another two weeks so I can work on it while I'm there. Or maybe that should just be the start of the next dev cycle...

Anyway, I'm going through the github pull requests, and I'm finally taking another look at izabera's commit adding unlimited precision support to sort -n. I rejected it the first time as unnecessary for sort (single precision float is fine), but... what with the offered bc implementation turning into a mushroom cloud of politics, plumbing towards that end seems good and doing math as ascii strings (via long division and such) is dog slow but should be reliable and easy to understand?

June 1, 2018

I should have put a release out last weekend (3 months), but I was exhausted and Fade was visiting and I hadn't noticed the date. So I'm chipping away at it this weekend, which is another variant of "closing tabs". Cleaning up the endless backlog of half-finished stuff, and doing proper writeups.

I'm tempted to move the ps -o field list out of the "ps --help" output and instead have "ps -o help" show the list, but... I'm not sure it's an improvement? Even then the remaining text wouldn't be small enough to fit on an 80x25 screen (although I've got a patch getting it a lot closer).

And once I get zcat promoted I plan to compress the help text (at least for non-single builds), and this would move it out of that. (Although when I've got the infrastructure maybe I could add some COMPRESSED_STRING() macro to append large blocks of text to the compressed help field entry and gimme a pointer to it using the same infrastructure... But there's already too much generated/ magic going on. Hmmm.)

May 31, 2018

Finally got a stable kernel forward-port at work so we're not working on a 4 year old kernel. Not to current (some intermittent bug in the flash code keeps eating the filesystem?) but two releases back (4.14) seems stable.

That was an _exhausting_ 3 weeks.

May 30, 2018

Youtube's strategy is to show way more commercials to annoy people with music playlists into paying for a subscription. (No really, they explicitly said this.) My reply is basically "better dead than red", and I just turn the volume down when it does a 15 second commercial between each song. (There's a dial on my headphones.)

But WOW there are a lot of car commercials (and car insurance commercials) they're spitting at me. Which are hilariously ineffective, because I know that both are going away in a few years when google maps adds a "ride" button next to the "directions" button when you select a destination, and a self-driving waymo thing shows up. That is a service I'll subscribe to. If Google wants my video streaming dollars they can buy netflix or hulu. (I doubt they're going to buy amazon, and the video is part of the giant Prime hairball anyway.)

In the meantime the car companies (and car-adjacent companies) are desperately trying to squeeze out a last few dollars before closing time. Self-driving means car sharing which means you need 1/10 as many cars. Yes even at rush hour: there's "be there at 6 am" through "be there at 10am", and at a fairly pathological average of half an hour to get there and half an hour to get back to pick up the next person (current reality's less), that's 4 cycles per car without "surge pricing" leading to the sort of carpooling people already accept for airport shuttles.

There's zillions of bigger issues (turning all those parking lots into extra buildings increases density which makes cities function better, no more gas stations, geico goes away, jiffy lube goes away, no more car loans...)

Sigh. The fix for youtube being stupid is to load music onto my phone via usb (the stuff I had on there went away in the factory reset last year), but they made it stop mounting as a USB stick a while back and I hate setting up the funky magic windows protocol thing they think we should use these days. I suppose I should learn to do it via adb...

May 29, 2018

I'm overdue for a toybox release! Oops. (Feb 24 + 3 months was last thursday.) As with all toybox releases, the big push of work (after finishing whatever I'm currently working on) is going through the git log to see what I did so I can write up the release notes, which always finds dangling threads I want to tie off before shipping, things I don't really have a proper test for, and stuff I did the first 1/3 of and could probably finish with just another hour or two of work (it's never another hour or two, it's days)...

At work I need an sh4 system booting to initramfs from real hardware, and getting a glibc buildroot to fit in the 4 megs of flash reserved for the kernel in the partition layout ain't happening (the uncompressed /lib directory is 7 megs with just about everything switched off).

So I threw a mkroot cpio at it, and although I got it to boot... the eth0 address is the qemu static IP, and I need dhcp. Hmmm. I need to wget stuff, which means it needs to talk to the lan, which means an address in the dhcp range.

I switched buildroot over to musl and am rebuilding, that'll probably be small enough to fit. But collecting data for what an actually _usable_ mkroot needs. Shell, route, and dhcp. Heh, and strace didn't build against musl with buildroot's toolchain. (I have the start of an strace in toybox I should find a few hours to work on...)

Right, switching back to mkroot and adding dhcp there... it builds toybox defconfig so grabbing the dhcp out of toybox pending is awkward, but I'm building busybox with a config file so switching on the _busybox_ one is trivial... ok, try that.

My limiting factor these days is more "energy" than "time". Work pays quite well, but leaves me exhausted. It's still a step up from working for a startup that couldn't reliably pay me, still left me exhausted, and didn't have a defined schedule so I could say when I'm _not_ working and thus focus guilt-free on other stuff. (What I did for them was never _enough_, but I was spinning my wheels inefficiently for a lot of it because I was spread too thin and couldn't rest and recover.)

That said, at SEI I was working on potentially world-changing stuff that advanced the _definition_ of the state of the art along multiple major axes, and here at JCI we're coming out with a software update for climate control equipment in large buildings, which already worked. It's worth doing, but it's the programming equivalent of washing dishes. The challenges are of the "oh that's baked on, how do we avoid scratching the no-stick surface" variety. It can require close attention and cleverness, but the problem being solved isn't novel or of much interest to anyone else.

Still, I'm being well paid for work people find useful which is neither unethical nor a legal quagmire. The AT&T set-top boxes we were working on back at Pace (the position before SEI) were supposed to spy on the people who used them (monitoring what shows you watched on Netflix, for example, so AT&T could try to sell its own video streaming services to you. We weren't implementing that bit, but we knew it was coming because they _told_ us). The worst you can say about the JCI boxes is "maybe we could sell them a little cheaper", and the customers aren't complaining. They seem quite happy, and have been for decades: the boxes do what they're supposed to, to the best of our ability to make them do that, and do not act against the customer's interest in any way that I am aware of.

For a Fortune 500 company, that's kind of impressive. (Ok, JCI moved its headquarters to Ireland a few years ago to dodge taxes. There _is_ evil going on. But we're not being directly asked to perpetrate it yet. As capitalism in 2018 goes, that's high praise.)

May 28, 2018

Memorial day. Fade went back on the bus at 2.

Poking at the hello world on bare metal stuff again, trying to strip down a userspace one first, and building gcc -nostartfiles --static for a version with write(1, "Hello\n", 6); is still linking in hundreds of bytes of useless __libc_disable_asynccancel crap I haven't figured out how to disable.

On the kernel side, trying to do an ELF version of Balau's example without the crazy linker script: you can "gcc -Ttext 0" to move the text segment to location 0 (where the arm reset vector apparently lives, so execution starts there). Yes, there are more interrupts that could happen later in the table, but for these purposes I'm going to assume they won't and just put the hello world there. (You'd think that the interrupt table would be a list of addresses the processor jumps to, but no. It jumps to a fixed location where you put a branch instruction, and the spacing is enough for "branch instruction plus address". Why? Because the Arm designers thought that was a good idea.)

May 21, 2018

The kernel has "make savedefconfig" which does something a little like the plumbing I have, but the format's different. Miniconfig is the deltas from allnoconfig, and defconfig is the deltas from the symbol default values (many of which are on).

I like miniconfig because _conceptually_ it shows you all the important selected symbols. The ones that if you started from allnoconfig, you'd have to switch on to get this configuration. This "defconfig" stuff starts from an arbitrary base and makes arbitrary changes to it, half the knowledge winds up in each place, and the result doesn't necessarily mean anything by itself.

But it's what the kernel guys are using, and it's already merged. Inferior but deployed.

May 20, 2018

Went off caffeine this weekend. Spent a lot of it sleeping, the rest failing to make any progress on any of the open source stuff I want to do. A bit underclocked just now.

I didn't figure out a way to remove mmap() from the elf parsing stuff in file.c, couldn't find my open windows for the rbtree stuff, forward porting buildroot's nommu arm qemu image from the 4.4 to 4.5 kernels hits the switch to device tree after which there's no console output whatsoever and I haven't figured out why yet...

Currently poking at putting the mkroot script in toybox, and as I'm converting it I realized... there's no need for the airlock script? All I'm compiling is 1) toybox itself (which I already had to be able to compile under the host to _create_ the airlock directory), 2) the Linux kernel. There are no other packages being built, and I'm removing the "modules" stuff that adds extras, hence no real need for the airlock?

I still want to build a kernel under the resulting system, but I need to add a "make" command first, and an awk, both of which are kind of a big ask. But getting a system to boot to a shell prompt should be as simple as possible, and adding native development tools to the result should be a tarball extract or similar. ("Or similar" because if the base system is in initramfs, the development tools may be bigger than the filesystem can hold, hence the symlink script aboriginal had to do that from the squashfs.)

I need to make an archiver that can create/extract squashfs like tar or zip files...

May 15, 2018

The System76 machine arrived. Nope.

Oh well, I tried.

May 14, 2018

Got another email about j-core from somebody wanting to participate in its development. I sent back the first paragraph of this:

Unfortunately I haven't had access to the or domains for months. Ever since the servers moved to cloud hosting my ssh key didn't transfer over, so I can't update the website or fix the mailing list. I also lost personal access to the "not cleared for public release" code at the start of the year, so haven't been able to do any development on that stuff either. (Or track the development being done in japan for proprietary uses.)

I did _not_ send back these paragraphs, which I typed and then cut out:

I arranged weekly calls with the developers for a couple months, and brought these issues up each time, but nothing ever changed. Last week I _didn't_ arrange one to see if the project maintainer would notice its absence, and he didn't.

The problem is I don't work for that company anymore, and although I was still trying to participate in the open source side of things there doesn't really seem to be one when it doesn't suit the company's proprietary interests, which it hasn't recently. Maybe that will change in future, but so far anyone not working at the same company as the other developers has been a second-class citizen, so at best we're looking at something like Android or Sun's OpenOffice or Mozilla under Netscape before Jamie Zawinski resigned.

The classic failure mode of that kind of read-only project is that there's no point in outsiders submitting even bug reports upstream because the version they're using is a year newer than anything you have access to, and the design and development conversations all go through privileged insider-only channels you can't even read, let alone contribute to.

*Shrug* Jeff doesn't see it that way, you can always try asking him. I think [email redacted] still works.

Probably what somebody needs to do is take the last open-source tarball, check it into github, and do an open fork as a real project, just ignoring what SEI proprietary does. But the chance the private one _might_ release another drop has so far overshadowed the public one enough that nobody's seriously tried to bang on the "stale" version that's out there.

May 13, 2018

On Friday, work asked if I wanted to extend my contract here in Milwaukee through next October. (This job pays twice what SEI was paying _before_ we all went half-time, then SEI fell behind on those payments to the point I don't even remember how much they owed me. Yes, I kept the macbook, although I gave it away last month.)

I texted Jeff about this, since he's been making noises about Jam Tomorrow turning into Jam Today and maybe we'd be able to come back and work there again. Last night he noticed and responded "sounds like your work there is quite successful... We are on track over here also." When I said that stable and lucrative but not world-changing isn't necessarily what I want to do with my life, his response was "Up to you, of course."

When Jen stopped running the 5pm daily calls, I organized weekly ones with Jeff as a replacement. Last week I didn't organize one, to see if anyone else would notice. They haven't so far...

It's frustrating: I'm still trying to participate in j-core as an open source project. It would be nice if they'd release some source for me to work with! I'm told Niishi-san is still working on the VHDL, but I haven't got a login to the VPN anymore...

May 10, 2018

Ooh, liwakura on irc converted the PDF version of my old /usr/bin rant back into text.

Many moons ago I did a post on the busybox list, then a magazine asked to reprint it and I said I should check the claims against primary sources, and corrected things. (For example, I was right about the 3mb total drive space, but remembered an even split when reality was 0.5 megs for the fast drive and 2.5 megs for the slow one. Which meant when /home showed up it was another 2.5 megs, not another 1.5 megs, so the first unix development system had 5.5 megs total disk space, not 4.5 megs. Well _I_ care...) Then they sent back a PDF, with bibliographic links to the old documents on Dennis Ritchie's home page where I'd learned this stuff in the first place.

My old busybox post got linked from places like hacker news, and seems to have kicked off the spate of /usr merges (Lennart Poettering linked to it from the piece he wrote justifying the Fedora 17 usr merge, for example), but I was always slightly embarrassed that the "off the top of my head rant, got some of the details wrong" version kept getting linked to, and not the corrected version.

Yeah, that ship has sailed, but I should convert this to html and post it anyway.

May 8, 2018

I'm poking at buildroot's qemu_*_defconfig targets and seeing what architectures I can learn to add to mkroot from that. Which is why I sent this patch:

--- a/package/elf2flt/
+++ b/package/elf2flt/
@@ -29,4 +29,10 @@ endif
+       ln -s $(GNU_TARGET_NAME)-ld.real $(HOST_DIR)/bin/ld.real
 $(eval $(host-autotools-package))

To the buildroot list, but it doesn't seem to have wound up in the web archive. Spam filter ate it, maybe? Meh, I tried.

Some patches are sent just so I can say I did, not because I realistically expect it to be useful to anybody else. It's not a licensing issue (I'm not shipping binaries to anybody), but it would be selfish to keep the fix to myself, and it's good to be able to google for it again if I need to dig it up a year from now. But the buildroot guys actually fixing their stuff? Not something I really expect to happen, at least not promptly and not because of me. They didn't even regression test this infrastructure from when it _was_ working, or I wouldn't have needed to fix it. And no, I'm not jumping through the hoops and retrying and negotiating and reminding them to get a fix in. (I may have been somewhat burned by linux-kernel.)

The bug I hit is building qemu_arm_versatile_nommu_defconfig in buildroot, which dies when elf2flt's usual glitch (the prefixed ld wrapper trying to call the non-prefixed renamed real linker) hits, and the Wrong Fix is to symlink "ld.real" to the prefixed ld.real that's actually there. Which the above patch does. A _proper_ fix would be to switch arm to use fdpic, but that's been stuck out of tree for years because gcc development is terrible and llvm hasn't noticed that nommu exists yet.

The other problem with adding support for more targets to mkroot from buildroot's qemu configs is that buildroot _explicitly_ doesn't support native toolchains -- as in they had support but then removed it, and when I asked in IRC they said it's not within what the project considers its current scope, which is "to make cross-compiling easy" and nothing else... Up to the point you can't build a buildroot system using the host toolchain, which is _sad_. You haven't got the option to _not_ cross compile. (I started putting together a patch and they said it would never get merged. Nuts to your white mice.)

Anyway, these QEMU targets haven't gotten regression tested a lot, so I had to fix the very first target I tried to build, and it's hit or miss since. (And my netbook is way too slow to build these in a useful amount of time.)

May 7, 2018

Ooh, Oligarchs!. Fascinating.

Backstory: In the 1980's Ronald Reagan scrapped the banking regulation that had prevented a repeat of the 1929 stock market crash and associated economic mushroom cloud for 50 years, on the theory that it had worked so well clearly we didn't really need it. This pretty much immediately led to the Savings and Loan crisis/bailout of 1991, followed by the asian economic collapse of 1997 (other countries copied our norms and put their money in our casino; japan invested in a lot of US real estate), then the dot-com bust in 2001 and a series of multi-billion dollar financial scandals under Dubya (Enron, Worldcom, Bernie Madoff, etc) and then the mortgage crisis of 2008 where the wheels very nearly came off the global economy and the millennials basically got screwed in perpetuity. (Don't get me started on making student loans immune to bankruptcy. The Boomers have a _lot_ to answer for.)

By the way, the 2008 crisis was fallout from the 1980's invention of the mortgage bond, which was detailed in Michael Lewis's book "Liar's Poker", and then a quarter century later Lewis wrote a follow-up (he was present for lighting the fuse and still around for the bang) that became The Big Short (which is on netflix). Also some nice NPR coverage. So basically Reagan pulled the pin, the H.W. and Dubya Bushes fanned the flames, and the whole mess exploded and got duct-taped back together. It is _so_ not "fixed".

Russia's economy wasn't exactly unscathed by all of this, but they worked on a different cycle: what really hurt them was the collapse of oil prices, because they're every bit as dependent on oil revenue as Saudi Arabia. (The top three oil producers are Saudi Arabia, Russia, and the United States, and then there's a BIG falloff before you hit the #4 producer, which is Canada.) And Russia has ALWAYS been dependent on oil prices, since the days of the Czars a hundred years ago. The reason the Soviet Union collapsed in the first place was a significant decline in oil prices. (That link is a longish thread about Russia, each tweet linking to a lot of further reading/watching.)

Russia can't feed anything close to its modern population from domestic production: only the westernmost part of the country is close enough to the North Atlantic Current to have a proper european climate; just like canada's population is clustered against its southern border with the USA, Russia's is mostly along its western border with Europe. The land to the east hasn't got the climate or irrigation to grow a lot of food, so they have to import it. But Russia doesn't manufacture much or perform a lot of services other countries really want to buy. That's why about 80% of Russia's international income is from fossil fuel exports, without which they can't feed themselves.

The implosion of the soviet union was like going through the Great Depression all over again, except the rest of the world wasn't going through it at the same time. They had more than a lost decade: their infrastructure eroded, institutions collapsed, lots of trained people emigrated, and those who remained didn't get the same education or experience. (You think the millennials had it bad after 2008, people in Russia were starving and freezing to death, plus an epidemic of alcoholism to make our current opioid crisis look mild.) They eventually dug out of that, but Russia can't play at the level it used to, so it's decided instead to drag the rest of the world down to its level. (The Gerasimov Doctrine, basically spending your entire defense budget on psy-ops delivered through the internet. Remember, Russia's current leader was a KGB officer who went on to run the FSB, its successor agency.)

And just as Lancelot was dragged down by "the old wound" in the camelot myth, the USA's original sin of slavery is ours. It was a major hurdle during the declaration of independence (most of the musical 1776 was about the issue of slavery), then it resulted in the civil war (there was a great Ken Burns documentary about that, it's on Netflix), and then the confederacy transitioned seamlessly into the KKK (its first Grand Wizard was the confederate cavalry general Nathan Bedford Forrest) and Jim Crow laws, which MLK fought against in living memory. When the South swapped sides after LBJ signed the Civil Rights Act in 1964, the confederate rot switched from eating away at the Democrats (who were at least used to it) to the GOP (which wasn't, and the tea party chased out everybody who wasn't a reality-ignoring loon), and that's the vulnerability Russia exploited.

And that's the context for the paul manafort article at the start. Russia's a kleptocracy, organized crime running the show, with its fingers in a bunch of nearby states the same way the USA had the Monroe Doctrine. The New York and New Jersey real estate markets are more organized crime, which is where The Donald comes from. The Russian mob runs the Russian government, just like the US mob briefly ran this country (de facto) during prohibition. We had alcohol funding our organized crime, they have oil funding theirs. The USA can feed itself without alcohol (we're a net exporter of food, at least until the fossil water under the breadbasket states runs out). But without oil Russia hasn't _got_ an economy, and they are SCARED that the rise of solar and wind and batteries means their gravy train's coming to an end. The march of technology (and recognition of global warming as a real problem) is an existential threat to Russia's continued existence as a first world country.

That's why they hijacked our election. When the RIAA and MPAA were faced with the internet eliminating their role in music distribution, they lobbied for insane extensions of intellectual property law (a la the Digital Millennium Copyright Act). The fossil fuel companies went for regulatory capture instead, except they're 1/6 of the world's economy (if they were a country, their economy would be the third largest after the USA and China), and to them "regulatory capture" means toppling governments and installing puppet regimes. Russia and Saudi Arabia already have the oil interests running the government; the USA was the only big oil producer that _didn't_. And now it does.

The rest of the world's economy only matters to Russia (and Saudi Arabia) when it comes to A) being able to afford to buy their oil/gas, B) being able to export food for them to buy. That's _it_. Anything else we do they'd like us to stop, because they can't compete with it. They got NOTHING except what they pump out of the ground, but that's currently worth enough to make them international players.

Solar/wind/batteries are killing fossil fuels in solid/liquid/gas order. Coal is toast and has resisted attempts to revive it. The switch to electric vehicles will decimate oil (and self-driving subscription fleets mean the new thing needs about 1/5 as many vehicles as the old so "time to replace the entire fleet" isn't the issue people once thought it was: most of the old vehicles will be scrapped, not replaced), and batteries mean solar and wind become baseload power taking out gas.

Russia and Saudi Arabia are terrified of this. They're trying desperately to slow if not stop it, but also trying not to draw _attention_ to it so their enemies don't invest more heavily in solar/wind/batteries as a way of opposing them. The best way for the USA and Europe to fight back against Russia is to cut off the fossil fuel money maintaining their economy. That's why the Dorito keeps adding tariffs to solar panels.

May 6, 2018

Carl Dong gave $40 to my patreon and proved I _can_ be bribed into getting over it and putting mkroot back up. I also posted a few longish explanatory whatsises to the list. Carl's donation means the amount I'm getting from Patreon this month is more than I earn in one hour at $DAYJOB! (Woo!) It's not something I expect to retire on any time soon, but the concrete expressions of appreciation are really good motivation. (It's like flowers and chocolates, only I can spend it!)

He also emailed wondering if he should ping his company's HR department because I've talked about wanting to work on open source full-time, and his company has various exciting open source projects I could work on... Except that's what happened at SEI. I went to go work at an open source company, and spent all my time on _their_ projects (like j-core) instead of the ones I already have (like toybox). I've already _got_ open source projects I want to work on, being hired to work on "open source" takes time away from that just as much as any other $DAYJOB. They're never hiring me to do the stuff I already want to do, they're hiring me to do something else they want done that isn't already on my todo list. (That's why patreon is nice, it's "go work on your todo list". Get that done. *thumbs up* Got it boss.)

The job I'm doing now isn't bad. Right now at work I'm digging into new corners of Linux (currently migrating jffs2 to ubifs, which means I'm learning how to create a ubifs instance on the ubi layer for this flash; see "mtdinfo -au". Fun corner case: your boot can be rate-limited by the amount of console output you're spewing to the serial port...) But it's not my existing open source todo list. It still takes time away from the things I'd be doing if it was my choice.

Back before I got married I did high-dollar consulting gigs for a few months, then did open source in the multi-month gaps between them, often not looking for work again for half a year after the last contract because I made so much more than I spent (especially in the condo, which was cheap to live in), so I more or less wound up working half time and doing open source half time. Then I got married and Fade was kinda stressed by the uncertainty, so I got Real Jobs at Timesys and so on. Now Fade's in a doctoral program with a scholarship that covers her dorm room, and she's on anti-anxiety meds, and I'm kinda edging back towards my old consulting habits. Only thinking "if I keep consistently employed for 5-10 years and sell the house that neither of us are currently living in (Fuzzy's taking care of it), maybe I could retire and do this open source thing full-time."

Except "spend your life waiting to live your life" is something I've never been good at. Happy to work towards a goal, but there are limits. When I _get_ money, I tend to mail it to people who need it more than I do, which is the main reason I'm not rich. I've earned a good living for decades, wrote about investing for 3 years back during the dot-com boom, and paid off my student loans and cancelled all my credit cards permanently almost 20 years ago... but have never quite mastered the "accumulating" part of wealth accumulation (beyond home equity). I don't spend money on fancy stuff, I give it to other people who need it.

(I strongly suspect it's not possible to be a billionaire _without_ being an absolute bastard. At best, you're wilfully blind to the suffering around you. But then I can only justify retirement saving as a cross between self-care and not being a burden on others later, so I'm not sure I'm a good baseline for comparison here. It's not so much altruism as hardwired lack of self-worth papered over with years of work. Yeah, being "recovered" from depression is a lot like being an alcoholic in recovery. The gaping psychic scars are still there, thanks Dad, I just know how not to trip over or dwell on them these days. *shrug*)

May 5, 2018

And a weekend again.

Ordered a new laptop from system76, which might make it to Fade's while I'm there next weekend. (We'll see.) It probably won't work with the lapel mic either, but eh.

May 2, 2018

So today I'm trying to get a hello world ELF program to boot on hardware. This is sort of a strange complement to mkroot. I should explain.

Years and years and years ago I talked about a hello world kernel, which is a tiny kernel that writes the bytes "Hello world\n" to the serial port, then halts or spins. Various people have done it over the years for various platforms (I linked to one such effort in my sadly jetlagged "simplest possible linux system" talk), but nobody seems to have tried to make a generic-ish version for each board.

But having a hello world kernel for a board has some interesting properties: you can glue it to the front of a real kernel and fall through to make sure the bootloader has loaded your stuff and handed off control properly (which means you got the compiler variant, packaging, entry point, and load mapping right-ish). You can cut and paste the "spit out a string" code later into your kernel as a simple debug printf (we got here) arbitrarily early in the boot. If you're trying to add qemu support, it's a nice first target. And what I'm trying to do _now_ is get a hello world vmlinux image for the Turtle board so people who want to port other operating systems to j-core have a starting point. It would also help with the "make the qemu-system-sh4 first serial port work" effort.
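The "spit out a string" core really is just a couple of lines. A minimal sketch in C; the transmit register address is board-specific, so it's a parameter here (a real bare metal version would hardwire its board's UART address, and probably poll a "transmitter ready" bit first):

```c
#include <stdint.h>

/* Minimal "hello world kernel" core: busy-wait writes to a
   memory-mapped serial transmit register. The register address is
   board-specific, so it's taken as a parameter here. Real hardware
   usually also needs a "tx ready" status poll before each write. */
static void serial_putc(volatile uint8_t *txreg, char c)
{
    *txreg = c;
}

/* the two-line write loop you can paste into a kernel as an early
   debug printf */
static void serial_puts(volatile uint8_t *txreg, const char *s)
{
    while (*s) serial_putc(txreg, *s++);
}
```

Point `txreg` at the right address for your board, call `serial_puts(txreg, "Hello world\n")` from the entry code, then halt or spin.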

And then eventually, I'd like to genericize the qemu vmlinux ELF loader to apply to _every_ target, so you can always feed a vmlinux to -kernel and not have to work out "what packaging should I be using for _this_ board". Unfortunately, the only vmlinux I seem to have working on qemu right now is powerpc, and although I can get a _kernel_ to work I haven't built my own vmlinux from a .c file that does it yet. It's being stroppy.

May 1, 2018

I updated my Patreon! Woo! I'm not even managing to post there _quarterly_. I really suck at this.

I have a "podiatry" directory in which I've been collecting scraps of podcast ideas for quite a while. What I _don't_ have is video editing software, or any sort of experience/skill with such. The "Linux Luddites" podcast used audacity, of all things, to remove the pauses and "ums" in people's speech. So maybe I could do it with a purely audio format, but I watch a lot of youtube ones with either animations or screen capture of programming stuff, and I dunno how to edit that part of it.

And it turns out that neither my netbook nor my desktop will work with the lapel mic I got, it needs to be powered and the microphone jack on both won't do that.

The lapel mic works with my phone, but it isn't noticeably better than the phone's built in microphone, and the problem with recording on the phone while screencapping on the laptop is synchronizing the two; they tend to record at infinitesimally different rates that drift away from each other over time. A clock being half a second per minute off isn't a big deal in playback or recording, but if the audio and video drift 3 seconds apart after 5 minutes it's way different.

So I'd really like the same program that's recording the video to also record the audio, so they stay in sync. I could capture video as a _camera_ with the phone, but I'm not trying to record my face, I'm trying to record my laptop screen with the terminal windows and/or web pages I'm talking about (I.E. the interesting part).

I suppose I could try to come up with video to go along with prerecorded audio? (That's how animation usually works. Hmmm... I've also pondered trying to give a storyboard to Fuzzy and seeing if I can bribe her into doing animation for me, but (A) she's a lot busier these days, and (B) that's harder to do when I'm in Milwaukee and she's in Austin.)

I miss when I wrote Motley Fool columns. They never told me what to write about, but I had regularly externally imposed deadlines forcing me to Ship Something, and that forced me to do imperfect work and get it OUT there, which is really important. (The 80% correct thing you have assembled in your head will be forgotten in a week; just getting what you have on hand written down and out there is often the sort of thing you look back at a year later and go "Wow, how did I ever manage that? I'm no longer that good, I suck for _different_ reasons!") (I've learned this is wrong, and to ignore it. It's in the impostor syndrome bucket.)

April 28, 2018

And almost two weeks go by unblogged. I went to Fade's last weekend, hurt my back doing laundry (didn't expect my empty laundry baskets to walk off if I left them alone for half an hour, you have to use a key to get into the _building_...) but it only bothered me for maybe 5 days? Better now. Not _that_ old (or overweight) yet.

While I was at Fade's the japanese remineralizing toothpaste arrived! ("Apagard", Rachel gave it a "works for me" in a Rachel and Jun video about things you can only get in Japan.) I look forward to seeing how that works. (I mean, it's toothpaste. It works as toothpaste. Already tried that much. Whether or not it's effective at building new layers of calcium phosphate on my teeth via nanotechnology is the question. A quick Google says Procter & Gamble bought the US patents to this technology and has consistently failed to bring it to market for 5 years now. As I keep saying, technology advances when patents expire, not when they're granted...)

My toybox irons in the fire are A) restarting route.c from scratch, B) restarting sh.c in a new file, C) writing lib/arbys.c (rbtree code).

I figured out I need to start route.c over from scratch to satisfy the multi-table objection that got that command removed from Android. I need to clean up and promote that anyway because there are two commands left that mkroot is using from busybox; once I've replaced both I can yank busybox and have a toybox-only build script, then I can glue it and modules/ into a single script and check it into toybox's scripts/ directory or something; still not sure where package downloads should go, maybe it needs its own "hermetic" subdirectory. (Or I could call _this_ dorodango. Yes I'm aware of the aluminum ones now, march of progress and all that.)

The other command mkroot is still using from busybox is the shell, and I'm doing a fresh sh.c from scratch because I'm sick of being blocked trying to clean that up by tracing the loose wires off into tangents and reverse engineering my own code every time I sit down. The data lifetime rules have changed: my original pass at this was trying to use all the string data in-situ from wherever it came from, whether it was getline() or -c or a mmaped file. While this is very nommu friendly, it means we're writing NUL terminators into mmap() or argv[] data because execv() needs an array of null terminated strings, and don't get me started on substituting in environment variables. Tracking what lives where when you can't just strdup() a string you exclusively own was WAY TOO COMPLICATED. So start over Not Doing That.

The third one is a red-black tree implementation I can use as a toybox dictionary, a rathole I went down when I started reading the mkjffs2 source to see why it was doing something funky and it's using an old fork of the kernel rbtree.c. Yes, I wrote the linux kernel red black tree documentation ages ago but as I said in the commit I was _asked_ to do that and it was collating various sources (a writeup, wikipedia, etc) that all glossed over important details about HOW and WHY you rebalance. I'm more comfortable with a balancing tree than a hash table for most of toybox's dictionary use cases, so this has been on my todo list forever anyway, and now I'm reading through that and drawing trees and trying to understand what the corner cases are, the undocumented input assumptions of each function and why it's doing what it's doing. The trick about using the low bits of a pointer as the color is simultaneously clever and obvious. So why is it masking &3 instead of &1 for a single bit? I think I found a bug in it already where it's leaving a node's parent pointer pointing at itself, but maybe that loop gets broken later? Dunno.
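The pointer-stuffing trick itself fits in a few lines. A sketch with made-up names (the kernel's field is __rb_parent_color; the ~3 mask when recovering the pointer clears both low bits even though only one is used as the color, which is the &3 vs &1 question above):

```c
#include <stdint.h>

/* Tree nodes are at least 4-byte aligned, so the bottom two bits of
   a node pointer are always zero and can store the color for free. */
struct rbnode {
    uintptr_t parent_color;       /* parent pointer | color bit */
    struct rbnode *left, *right;
};

#define RB_RED   0
#define RB_BLACK 1

static struct rbnode *rb_parent(struct rbnode *n)
{
    return (struct rbnode *)(n->parent_color & ~(uintptr_t)3);
}

static int rb_color(struct rbnode *n)
{
    return n->parent_color & 1;
}

static void rb_set_parent(struct rbnode *n, struct rbnode *p)
{
    n->parent_color = (uintptr_t)p | (n->parent_color & 3);
}

static void rb_set_color(struct rbnode *n, int color)
{
    n->parent_color = (n->parent_color & ~(uintptr_t)1) | color;
}
```

Setting parent and color are independent read-modify-write operations on the same word, which is clever and obvious at the same time, and also why it's easy to leave a stale parent pointer behind if a rotation forgets one of the updates.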

The real problem for toybox work is that my day job, which is tolerable and lucrative, totally eats my brain and I go home exhausted every day and get nothing done in the evenings. And they keep scheduling 8am meetings so I can't even get up early and reliably have those slots, a 6am alarm just about gets me to work on time. Work's a big fortune 500 cubicle farm where the average age of my coworkers is about 55, but it's literally paying me 4 times the reduced hourly rate I was getting at SEI (when the paychecks arrived), and I'm still paying down that home equity loan (and saved nothing for retirement the past couple years).

So I have weekends. When I'm not visiting Fade. But I'm getting a little done. Tired, but at least not paralyzed by stress.

April 17, 2018

What is this nonsense?

Author: Geert Uytterhoeven
Date:   Thu Nov 30 14:11:59 2017 +0100

    tty: serial: sh-sci: Hide serial console config question

No, EARLY_PRINTK works fine on qemu-system-sh4, I've been using it. Stop breaking stuff please.

April 16, 2018

The reason I added getconf to toybox is the kernel build was complaining it wasn't there, although the "command not found" messages never seemed to break anything. But now that it's in, the kernel build is complaining that LFS_CFLAGS, LFS_LDFLAGS, and LFS_LIBS are unknown getconf arguments.

So I grep and git annotate, and the calls were added by this commit and it's too dumb for words. A kernel build was creating a dependency file larger than 4 gigabytes on a 32 bit host, and without special arguments couldn't read it.

Let's back up and list the ways this is stupid. A) solving the wrong problem, why is your dependency file over 4 gigabytes? B) nobody ever needed linker flags or extra libraries to enable LFS, it was a #define to tell the libc headers to use the new syscalls and typedef off_t as 64 bits instead of 32 bits, C) glibc implemented the "Large File Support API" in 1997, over 20 years ago.
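Point B is demonstrable in a handful of lines. The entire "enabling" step is one preprocessor symbol before the system headers; this little probe just reports how wide off_t came out (no LDFLAGS, no LIBS anywhere):

```c
/* The entirety of glibc large file support: define this before any
   system header and off_t becomes 64 bits, even on a 32-bit host.
   No linker flags or extra libraries were ever involved. */
#define _FILE_OFFSET_BITS 64

#include <sys/types.h>

int off_t_bits(void)
{
    return 8*(int)sizeof(off_t);
}
```

On musl and bionic the old 32-bit API doesn't even exist, so the define is a no-op there: off_t is always 64 bits.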

In 1997 you could already buy a 16 gigabyte 3.5" hard drive (the "IBM Titan"). By 2002 PATA (IDE renamed by SCSI bigots) had to modify its protocol to go above 128 gigs, and Hitachi shipped a terabyte drive 11 years ago. The old api isn't even implemented in musl-libc or bionic, there's ONLY large file support. (Yes even in embedded systems, a _small_ sd card is 4 gigs and they go up to 128g retail.) So still needing a flag to enable this in any version of glibc that's shipped in the past decade is INSANE.

And yet...

April 15, 2018

I've been editing and uploading old blog entries, but got stuck at November 11 for reasons I wound up editorializing about. Then the November 12 entry I left myself reads, in its entirety:

Hah. A recent discussion brought up which was a story.

With the obvious [TODO] item of telling that story. And you wonder why getting my blog up to date is so time consuming?

April 14, 2018

Linux on qemu-system-sh4 still has a broken serial console, due to qemu and the kernel pointing fingers at each other. Rich just pushed a pile of patches that did NOT fix the serial console, and seems to have washed his hands of the situation, so I'm trying once again to get the _first_ serial port working (stop skipping the first port), and I've reminded myself why I haven't done this before.

There's an arm bare metal hello world. Getting that working on sh4 involves A) figuring out what -kernel wants (why isn't the elf loader universal?) B) figuring out what the two line write loop for the existing working port is.

Tried running it under gdb to see if I could get it to run known entry code, but mcm hasn't got a gdb in it. (Thought it did? Do I still need a prefixed version or did it start understanding all the targets in one yet?)

April 13, 2018

Finally finished and merged getconf, which I started working on over a year ago.

I have so much half-finished stuff in my tree complicating checkins of anything else that touches those files. It's a bit like my 8 browser windows with a hundred or so tabs in each. I _could_ use bookmarks, but out of sight out of mind, and there's no reasonable way to browse them. Most of those tabs are todo items of some sort, anywhere from "finish reading/watching this" to follow-up analysis or projects suggested by the content.

One of the reasons I grumble about basic income isn't just that anyone who isn't a 1% Boomer is horribly screwed by the current organization of the economy, it's that I would get SO MUCH MORE DONE if I didn't spend all my time working a day job, and I think it would have a much greater impact and help more people. Making it so anyone with a smartphone could do systems programming would have a greater positive economic impact on the country than porting high-end thermostats from Windows CE to Linux because CE was end of lifed. Or working on j-core. Or doing qcc. Or writing documentation. Or teaching. Or about 30 other things. But the current economic incentives say (quite strongly) to do the other thing...

April 11, 2018

Took an evening and sent yet another perl removal patch to the kernel. (Well it's a merge window.) The workaround to the orc unwinder bug is to rm include/config/auto.conf after configure. (It'll remake it from .config when you build the kernel, but the dependencies are wrong so it won't remake it just because .config is newer and has different data in it.) I could fix the kernel's build dependencies, but that would involve more interaction with the kernel community which I find unpleasant.

April 9, 2018

I caught up with some toybox work over the weekend, by which I mean finishing off some partially done things in my tree and getting them checked in.

Next up is getconf, a can of worms I opened over a year ago, and stopped working on because I had to fly to ELC to give my underprepared simplest linux system tutorial. (Which I should really redo in a coherent fashion someday.)

The problem is it's been long enough I don't remember what I was thinking, and have to reverse engineer it from the code I left and the blog entry at the time.

April 8, 2018

Why do we need Universal Basic Income? In 1840, 70% of the US population worked as farmers. By 2000 less than 2% did. We're automating away a lot of the remaining jobs. Not only can we afford it, we can't afford _not_ to.

The internet has rendered data reproduction and transmission basically free (the Pony Express and telephone Operator used to be important jobs, these days more people have cell phones than running water), the cost of solar panels and batteries is expected to continue exponentially dropping for _decades_ and is already cheaper than installing new fossil fuel alternatives (and installing new solar/wind/battery systems is expected to be cheaper than continuing to fuel and maintain existing fossil systems within 5 years), self-driving electric car fleets and drone delivery are redoing transportation (on top of the revolution shipping containers already caused starting in 1956), 3D printing's just starting to affect manufacturing, and that's not even talking about stuff like mail-order kit housing a century ago...

Economic production has fundamentally changed since the last time peasants did subsistence farming in western culture, the world _profoundly_ doesn't work like that anymore. Women used to spend the majority of their time making cloth (from nobles doing embroidery to peasants endlessly spinning, weaving, and sewing). Nobody can make a living at that anymore (except maybe "artisanal" pieces, I.E. selling it as a form of artwork like a painting) because better versions of the results are available en masse incredibly cheaply thanks to mechanized mass production, and this is true of a thousand different things. The world has moved on, many of the assumptions our society was designed around are no longer true.

"But where will we find the money?" Money is a social construct. The Gross National Product of the USA the year before the 2008 crash was about 14.5 trillion. The federal reserve printed more money than that to stabilize the economy after the crash. The real concern is inflation, although despite injecting a currently estimated $21 trillion into the economy (half again the size _of_ the_economy_) the federal reserve couldn't get inflation _up_ to their 2% annual target rate. (And yes, inflation being too low is a problem.)

Still, the conventional solution to inflation is to tax the extra money away. During World War II the top personal income tax rate in the USA was 91% (kicking in at just under $1 million/year in today's dollars), and the corporate rate was 50%. The top income tax rate was lowered to 70% in 1964, and then Ronald Reagan lowered it to 28% (causing the modern problems with both the national debt and 1% of the country having over half of all money). I.E. during the entire "postwar boom" the tax rate prevented the existence of billionaires, and going back to that would provide plenty of money for basic income. This is a recent problem, easily solved.

And a predictable amount of inflation isn't a bad thing for most people. Rich people hate it, but it's good for debtors. If you owe a bunch of money on a 30 year mortgage at 5% interest, but inflation's 2% a year, you're actually only paying 3%. Over the life of a 30 year loan, inflation could easily mean you're effectively only paying off half as much. If you owe $50k in student loans, 5% inflation pays back $2500/year for you.
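The "only paying off half as much" arithmetic checks out as a back-of-the-envelope model (steady inflation, fixed nominal payment):

```python
# Back-of-the-envelope model: the real (today's-dollars) value of a
# fixed loan payment made `years` from now, under steady inflation.
def real_value(payment, inflation, years):
    return payment / (1 + inflation) ** years

# A $1000 payment 30 years into a mortgage, at 2% annual inflation,
# is worth a bit over half that in today's dollars.
print(round(real_value(1000, 0.02, 30), 2))
```

At the 5% inflation rate mentioned above, the erosion is even faster: the final payments of a 30 year loan would be worth under a quarter of their nominal value.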

The modern "finance" economy is based on creating social construct money out of thin air (and then fighting over it). Michael Lewis covered the creation of the "mortgage bond", abuse of which is what led to the 2008 crash. But the sheer fiction of the modern economy goes far deeper than that.

A decade ago Taxi Medallions in New York City were worth $1 million each, an entirely artificial value placed upon a regulatory monopoly granting exclusive permission to provide a service. Except without the monopoly, a more competitive marketplace provides the service at a gross annual rate less than 1/10th what the medallion is worth. And app-summonable self-driving cars can eventually provide the service for a fraction of that. The actual service people _need_ keeps getting cheaper, and is approaching "too cheap to meter" the way netflix streaming has made video rental "too cheap to meter". How many videos are you allowed to watch per month? They don't put a limit on it, there's no point.

The "70%->2%" change listed above (decrease in the percent of the population engaged in food production) was a similar productivity revolution: centuries ago scientists discovered fertilizers by burning plants and analyzing the ash (on the theory that anything that didn't burn away was something the plant couldn't have gotten from the air in the first place), then refrigeration and tractors were invented, then Norman Borlaug's Green Revolution (dwarf wheat and rice) _really_ caused production to explode, wiping away the malthusian concerns of the 1960's... and it's now to the point where forty percent of the food produced in the US is wasted and nobody cares enough to do anything about it. (Not "you left food on your plate and it got thrown out", but never made it onto anybody's plate. Every night at 11pm the hot bar at the grocery store 2 blocks away in Milwaukee is emptied into trash cans, enough to feed like 50 people, and that's not even worth _tracking_ in the modern economy.)

These days the only reason anybody goes homeless, hungry, or without internet access is because we made a choice not to give it to them. Just like we're _choosing_ to deny medical care to people, choosing to deny education to people (videotaped telecourses were available 30 years ago, then khan academy and crash course on youtube...)

The problem is capitalism and billionaires. Capitalists get rich by "cornering the market", I.E. it's not enough for you to provide more, you have to make other people provide less so you have a monopoly. Warren Buffett referred to this as a moat around a business. Capitalism is a mechanism for regulating scarcity, and in the absence of scarcity it creates it. (Despite the inherent "too cheap to meter" nature of the internet, for-profit corporations keep trying to charge extra. Given tools like intellectual property law, they use it to corner existing markets.)

Capitalists aren't just making up fake assets (this Banksy graffiti is worth $15 million dollars because I _say_ it is) and printing money to buy them, they're making fake jobs and hiring people to do them as a way of controlling people. Arguing against basic income by saying "people won't do work"... one of the big and increasing problems of our age is a shortage of _employment_, and people spend their free time doing work they find meaningful. (Each year two million people volunteer just for habitat for humanity. Add in "you can increase your standard of living quite a ways beyond mere subsistence before serious taxes kick in" and a labor shortage is not a real concern.)

Unemployment aside, lots of the jobs people are doing now accomplish literally nothing and the people doing them know it. Nobody can miss the work they're doing because they're literally not doing anything. Estimates are that 40% of all jobs serve no purpose, lots of the rest are in service of nothing (the janitors in the office building where everybody's a brand image consultant), and then entire industries like tax preparation that _do_ currently perform a real service could be entirely eliminated (the government knows what you owe, it already deducted it from your paycheck and HAS the money, we intentionally slightly overpay because we're all bad at saving and don't want to wind up owing extra if we got it wrong, and if your tax filing doesn't agree with what the government thinks you owe you get an audit instead of a refund: I.E. the entire tax preparation industry, software and in-person both, has no reason to exist; it's a lucrative sinecure maintained by a guild/cartel).

The Baby Boomers grew up with all this as normal, but it was new with them. The rise of for-profit health insurance? It happened right before the boomers. The idea of moving out of the house when you turned 18? Unique to the Boomers. Ronald Reagan dropping the income tax rate from 70% to 28%? Boomers. (Note: high taxes make companies spend money on "wasteful" things like worker training and long-term research and development, because the alternative is "losing" it to the government so even small gains are much more worthwhile. Lowering taxes reduces investment, and instead leads to profit-taking because there's no penalty for cutting the company to the bone and pocketing the money.)

An awful lot of us are waiting for the boomers to die and designing the kind of society we want after their ironclad assumptions about what is normal die with them.

(Speaking of which, this and this were really good articles.)

April 7, 2018

Still on cp --parents. The problem I noticed today is that the old code was using basename() not getbasename(). The libc function can modify the string passed into it, which is terrible but works for argv[] because environment space is writeable, but I try not to do it because that changes what other processes see in "ps". (Several entries in /proc read the process's environment space live.) Busybox and chrome-browser have both had problems with that in the past, I'm trying to do better in toybox.

So I need to stop and think through the ramifications of the corner cases where basename() and getbasename() differ in behavior, and whether the old code was using it thoughtlessly (probably because it predated getbasename) or if there was a _reason_ for it (in which case I should have left a comment). Probably the first, but I gotta do the exercise anyway.

Ok, according to the man page the corner case is basename("/usr/") will return "usr", so it trims the trailing slash. And if I don't do that then it will open and write "/usr//bin", and the only actual behavior change is that -i and -v would show slightly different output (/usr//bin instead of /usr/bin) and I think I'm ok with that.

This is bolstered by the fact the last commit to touch these lines predates getbasename (even under its old name) by a year.
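The behavioral difference is small enough to demonstrate directly. The read-only version below is a sketch of the getbasename() idea, not toybox's exact code; since it never writes to the string it can't trim a trailing slash, which is the corner case in question:

```c
#include <libgen.h>
#include <string.h>

/* libc basename() trims trailing slashes, and may scribble on the
   buffer you hand it (which is exactly the ps-visibility problem
   with passing it argv[] data). */
char *libc_base(char *path)
{
    return basename(path);
}

/* a getbasename()-style read-only version (sketch, not toybox's
   exact code): never writes to the string, so it can't trim a
   trailing slash and "/usr/" yields the empty string */
const char *ro_base(const char *path)
{
    const char *s = strrchr(path, '/');

    return s ? s+1 : path;
}
```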

Yesterday I mentioned an approach that's been on my todo list forever is basically "readlink -f" on both source and dest and failing if dest isn't under source. All sorts of stuff from tar -x to httpd should use that to constrain input or output under a directory.

Except it's not readlink -f, it's readlink -m, as in "mkdir sub; readlink -m sub/not/there" should return a path rather than failing because more than _one_ component at the end doesn't exist (yet).
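The difference is easy to see with the coreutils versions (assuming GNU readlink; run from a scratch directory):

```shell
cd "$(mktemp -d)" && mkdir sub

# -f fails: more than one trailing component is missing
readlink -f sub/not/there || echo "readlink -f failed"

# -m doesn't care how many components don't exist yet
readlink -m sub/not/there
```

(-e is stricter still: every component must exist.)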

But what happens if you do:

$ mkdir sub
$ readlink -m sub/none/../../../../fruitbasket

Because the ubuntu version is cancelling "none" with one of those .. entries. Which makes sense if we've canonicalized the path up to this point, each .. corresponds to a single path level... ok, I can presumably do that too. The current xabspath() plumbing doesn't, but if I feed in -1 for the "exact" parameter... heck, I can add -m to readlink. :)

April 6, 2018

Weekend again. (Well, Friday evening.)

My tree's cp.c has the start of "cp --parents" in it, so I took another look and tried to finish it last night (there's a pending feature request but it turns out to be fiddly). Since my question at the end of that last link never got answered, I'm probably just doing the simple thing until I get fresh complaints.

The new problem I hit is that "cp --parents ../../../usr/bin dir" creates "dir/../../../usr/bin" which seems wrong. Did I mention the Free Software Foundation is terrible at designing software and tends not to think things through? They just slap new layers on top endlessly and grow software via accretion. It's kinda annoying.

The tricky part of all this isn't implementing it, it's figuring out what the correct behavior should _be_. Last time permissions were fiddly, this time constraining the output under the target directory is. (I don't hugely care if you follow a symlink in the target because that's pilot error, but the _source_ is more likely to be untrusted.) Then again some sort of --constrain option to make sure all the stuff you create is under the target would be nice. I even have the infrastructure for it: just use xabspath() on both (it's the plumbing behind readlink -f) and strncmp. It's expensive, but quite reasonable to add as an option.

Speaking of options, I hate --longopts without shortopts, at least in the kind of commands toybox should be implementing: rm -r is way faster to type (and more unixy) than rm --recursive, and given that we've already got short options for almost everything, needing to say "ls -l --fruitbasket" is inconsistent.

So I'm adding cp -D for --parents (create leading directories), and if I add a "constrain" option maybe it'll be cp -C.

April 4, 2018

It's adorable Twitter seems to think I'm going to stop blocking every advertiser that shows up in my feed. I don't have a Faceboot account, never programmed for Windows and wiped it off every machine I've ever owned, only ever used Horrible Retweets on other people's tweets complaining about Horrible Retweets, and still limit my tweets to 140 characters. I didn't drive for 6 years because I refused to pay an unjust traffic ticket (until a friend needed my help moving). I didn't speak to my father for 10 years after his divorce until my mother's _dying_wish_ was that I start talking to him again.

You can convince me to change. Circumstances can change. I try to _constantly_ reevaluate my positions and assumptions. New reasons to do things come up all the time. But if you try to wait for me to "get over it", when the cause of the problem is still there, I wait for you to die.

April 2, 2018

Built an x86-64 aboriginal image and... it's failing in the exact same way as m68k. Probably the ancient toybox fork it has checked out in downloads/ which means it's not m68k's fault specifically. So Laurent's qemu is off the hook there, which implies m68k is working. Cool.

The netstat thing is weird, the kernel file is giving the hex digits so they wind up in network endianness when read into an int, so I don't need to htons it (although: creepy). But reading it into a long and then typecasting the long * to an int * is still wrong on a 64 bit big endian system. Still, simpler fix.

April 1, 2018

Visiting Fade. It's Easter, everything's closed.

Lots of little todo items in toybox. I did a cleanup pass on netcat, which needed it after the nommu weirdness left it awkwardly hunched.

I also want to make it use generic lib/net.c infrastructure for stuff, starting with xbind(), ideally a version that looks at the sa_family field of its argument to figure out the structure size for itself so you don't have to pass it in as its own argument. This means searching the rest of the code for bind(), but while I'm there lib/net.c also has ntop() handling both ipv4 and ipv6 and returning a static instance of the looked up thing, so I should examine inet_ntop() uses too.

Which means finding that netstat.c function display_routes() is reading "unsigned long" values via scanf() from /proc/net/route and feeding a pointer to them into inet_ntop()'s second argument, which should either be a struct in_addr or struct in6_addr depending on the family constant fed into the first argument.

There's layers of things wrong with this: word size and byte order are both wrong and combine badly. The ipv4 address field is an int (4 bytes) but the ipv6 address is word salad (a structure with multiple fields because gratuitous complication).
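The correct ipv4 shape is small. A sketch (not netstat.c's actual code): scan into an unsigned int, not a long, and hand inet_ntop() a struct in_addr. Note the "creepy" part mentioned above: /proc/net/route prints the raw 4-byte address as host-order hex, so on a little-endian box the bytes land in network order by accident, and the expected string in the comment assumes a little-endian host:

```c
#include <stdio.h>
#include <arpa/inet.h>

/* Read a /proc/net/route style hex address field. The field is the
   raw 32-bit address printed as host-order hex: scan it into an
   unsigned int (4 bytes) and store it in struct in_addr. Scanning
   into a long and casting long* to int* breaks on 64-bit big endian. */
const char *route_addr(const char *hex, char *buf, socklen_t len)
{
    struct in_addr in;
    unsigned u;

    if (sscanf(hex, "%x", &u) != 1) return 0;
    in.s_addr = u;   /* "0100A8C0" is 192.168.0.1 on little endian */

    return inet_ntop(AF_INET, &in, buf, len);
}
```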

This is why I do cleanup passes, and why I'm uncomfortable promoting commands I never got quiet time to go over.

March 31, 2018

Visiting Fade.

Huh, the qemu website got worse (in a fancy way) since I last checked it. I wanted to look up a link in the mailing list archive, and it's gone all style-sheety and I'm guessing "contribute" would be where that's hiding now... and there's an email address for the mailing list but no archive link, but hovering over the email address (which contextually says subscribe to this) gives me a mailman link, not a mailto: like the link text BEING AN EMAIL ADDRESS implies...

If I didn't already have a history with this project I'd go "this is run by the marketing department of Red Hat or IBM, no actual developers are involved in this project" and move on. It's the website equivalent of a content-free glossy marketing brochure that's trying SO HARD to sell you on the idea that this thing is GREAT and REVOLUTIONARY that it fails to give you any actual technical information.

Anyway, I'm still _subscribed_ to said mailing list and while once again flushing my active thunderbird folders to backup folders to work around thunderbird's horrible design, I saw that Laurent Vivier submitted some interesting m68k patches a few months back. I've been trying to get full m68k linux for 68030 or similar to run on qemu for years, and qemu never supported it (only nommu coldfire), but Laurent implemented most of the missing chunks years ago out of tree.

So I asked whatever happened to his q800 stuff (because there are 51 branches in his github tree, and it's probably been years, plural, since I last looked at it). He just pointed me at the one to test, so I'm trying to give it a go.

But mkroot can't build for m68k because musl-libc never bothered to implement support for the target, so I had to dig up my old aboriginal linux directory and build the last m68k image for that. (I had aboriginal linux booting to a shell prompt under his qemu fork once upon a time, although it would crash if you did too much with it.)

I just dug up an aboriginal linux m68k image and it booted through the kernel startup messages, gave me the "type exit when done" message echoed out from the init script (which means userspace was running fine)... and then exited immediately. Maybe a tty problem? Failure of oneit to launch the user shell? Sigh, this is more likely to be an aboriginal linux problem than a qemu problem, and I haven't touched it in a while. No idea where I left off. (Building x86-64 to see if it does the same thing.)

(Is the init code leaving the tty in nonblocking mode? Aboriginal's building something like a 2 year old kernel at this point, haven't got the context to debug without more shoveling than I wanna do, putting time into the wrong things...)

Still, Laurent's qemu fork seems decent. Now to just get -M q800 upstream into vanilla qemu so I could submit bug reports against _that_...

March 26, 2018

I should deal with the chrt.c problem building toybox with musl, namely the big #warning "musl-libc intentionally broke sched_get_priority_min() and friends in commit 1e21e78bf7a5 because its maintainer didn't like those Linux system calls".

Unfortunately the warning is true, and the problem is exacerbated by Rich's refusal to provide a #define _MUSL_ symbol you can probe for at compile time to fix up his lunacy. He's broken a bunch of syscall wrappers to always return failure, which you can ONLY detect at runtime and not probe at compile time when cross compiling. This means if you're going to provide workarounds, the code must always be compiled in, in every instance. Unless you identify musl-libc by process of elimination (it's not glibc, it's not bionic, it must be musl).

And he does this a LOT. Musl provides a broken fork() on nommu systems that always returns failure; uClibc simply didn't provide fork() on nommu, so it was a build break you could probe for and work around. Here he only provides the thread APIs needed to implement chrt, a command line utility that operates on processes, implemented in a package that never uses threads and does not link against pthread.

Rich has very strong opinions on how other people should program, and is willing to punish other programmers for not doing it his way. Since I will _never_ do it his way, I've given up on trying to turn musl into something real and am instead trying to build toybox with the bionic ndk. But meanwhile, musl is in the cloud space, and it's what mkroot currently has working.

March 25, 2018

Tired. I need about two days of recovery time to switch back into proper open source development mode and clear all the little tangents that accumulated over the week, but that's all the downtime I _have_ before the week starts up again, so...

Didn't get a lot done yesterday, my tab closing hit "zcip.c", which should probably be called something else but that's what the busybox version I used at work is called, and it's so HORRIBLY DESIGNED that I really wanted to write one that didn't require a shell script to just Do The Thing. And there's an RFC on it, so it's not hard to figure out what it should do. So I opened a tab...

I wanted mine to autodetect the first wired interface if you didn't specify the one you wanted, so I copied the code from ifconfig.c (maybe it needs to go in lib/net.c later but get it working first and see if there's still commonalities afterwards), and... it's not finding eth0. My netbook has eth0, lo, and wlan0, and it's only finding two of those. Why? The ifconfig code is finding all three and it's using the same ioctls against a socket opened with the same flags? What the...?

It's one of these "I wanna stick printfs in the kernel" things (to see why it's making the decisions it's making) and I can't on my host kernel (not easily anyway), and I'm not wasting effort on mkroot right now and I dunno if it would reproduce this anyway (I've never set up a virtual wireless interface in qemu? Or maybe it's always skipping the _first_ interface...?)

Anyway, that ate my programming time yesterday. That and the fact my netbook is REALLY SLOW right now because it's swapping its guts out with all the open thunderbird reply windows and chromium browser tabs. I should really close tabs. And/or get a new laptop, but every time I do that it's spin the dice on what subset is supported by Xubuntu. (I have good luck with ancient obsolete crap because Linux has had 5 years to reverse engineer it and get support upstream. The current shiny stuff has never been properly supported. Possibly the PC world is less of a moving target these days since all the effort went to phones. Or I could get a chromebook, but you can't stick a terabyte of storage and 16 gigs of ram in a chromebook...)

March 24, 2018

Ooh, guilt. Guilt. Somebody emailed me asking where mkroot went _and_ signed up at the $5 level on my patreon right afterwards. Ummm...

Hmmm. How do I say "it's dead" as nicely as possible?

(Rifling through my open email reply windows I found the "last chance to submit a talk proposal for tokyo automotive summit and open source conferences", which coincided with the evening I took down mkroot so didn't happen, and the Google Open Source Peer Bonus Award Thingy sending me a "gentle reminder" to update my payoneer info so they can send me the $250 that goes with the award. Both windows are over a week old. I should spend a day closing tabs again...)

March 23, 2018

End of the "sprint" at work, meaning deadlines. I worked extra hours to catch up from monday (contractor, paid for hours not accomplishments, no vacation or sick days; can't complain because the hourly rate's pretty good). So I've gotten very little open source stuff done this week.

Friday: time to catch up on open source stuff! Starting with design work.

After the mess on the mkroot list, renaming the project "hermetic" would be gratuitously picking a fight with a large corporation. But _not_ doing so would be backing down from my legal rights in the face of shadows that _might_ turn into empty threats that _might_ turn into a battle I could almost certainly win.

So I took the project down, because I don't like either option. Now it doesn't matter what it's called, and armchair lawyers can't empty chamberpots over it again.

I gave it a week to stew, and it turns out somebody did notice it was down. I should send him a tarball. (As with busybox, the work I did is out there open source, I'm just not continuing it as a separate project that would need a name. Yes, there are still scars from SCO and bruce. Work is assigning me to work on systemd configuration, my "this is not fun" bandwidth is accounted for these days, thanks. My open source work is either because I enjoy playing with it or because I'm trying to accomplish something specific.)

There were two near-term use cases for mkroot: 1) better toybox test suite, 2) natively compiling stuff. Making the second work without plugging the gaps with busybox is significantly more work than the first, but the biggest single blocker to either is the lack of toysh. Then again I don't need every toysh corner case to get something that can run the init script and toybox's scripts/ and so on...

Ok, if I'm going to merge a subset of it into toybox as scripts/ or similar, I should merge the modules/kernel and modules/native scripts into the main file (those are the only parts that can't build natively under qemu with some control-image plumbing; even the dynamic libraries can be added by rebuilding libc natively). I no longer need it arbitrarily third-party extensible if it's not going to be its own project.

I also need to rip out the busybox build. (I've kept an air gap between toybox and busybox ever since I stopped maintaining busybox, originally because of Bruce contamination, then because license. I've contributed things like toybox patch _to_ busybox, but nothing comes back the other way except bug reports.)

Ripping busybox out of mkroot leaves a largeish hole, although all of those commands are also in "make install_airlock" so it's part of an existing todo list. That said, I can't bring networking up without a "route" command (toybox's is in pending), and it really needs a command shell to run (which can handle the init script). The rest is there for native builds (wget and tar most obviously).

March 19, 2018

Fade was sick yesterday, and I seem to have it. Remarkably short incubation time. Possibly something we ate? (Not really defined symptoms, just general aches and fatigue and blah.)

Taking the day off from work, hanging out at Starbucks and trying to apply pending toybox patches and such. (Haven't got the brain to do design work, but I can close some open tabs when somebody sent me a patch I just haven't gotten around to testing.)

I think I know what to do with mkroot: it's two scripts. I can put them in the toybox scripts directory. (Or maybe the kernel one goes in scripts/modules or something, dunno. Little more tweaking needed.)

The airlock step from aboriginal linux already went into toybox, I might as well put the build script in there. My short-term goal with this is to try to come up with a toybox test environment for commands that run as root and need a defined system environment to produce consistent testing results, so...

(The alternative is pretty much abandoning mkroot, because I'm not putting it back on github as its own project. Just no. I wouldn't even bother merging it into toybox, but I have work on system building as one of my patreon goals people are contributing to, so I should get unblocked on fixing the 4 kernel regressions since 4.14...)

March 18, 2018

Fade headed home on the bus, and I camped out at a coffee shop to try to get some programming done, but it closed like an hour later because sunday.

The guy who submitted bc clarified that not only should I not touch it, but I won't hear from him again until it's perfect. So I removed it from pending.

Ugh. Trademark crap on the mkroot list. I just don't want to step in that swamp. I don't want to back down from a fight and change my plans because of the _shadow_ of a threat either. Really, I don't want to work on mkroot anymore if it's going to have people nominally on its side dragging legal clouds over it. Why bother publishing the code, just do my own development and feed the results to the android guys. Except the Linux guys _already_ don't listen to me when I send them patches for issues other people in the embedded community poked _me_ about. (As their designated "willing to go into that sewer to meet with the morlocks" person.)

And then I was too depressed in the evening to submit any of the talk proposals. (Especially ones about mkroot, if I'm abandoning the project. But even the ones on other topics... I'm back to "don't want to travel, don't want to interact with the Linux community"...) I never heard back from Jeff anyway (despite multiple emails and a day and a half of waiting), and flying to tokyo's a long way for a conference if I can't hang out with the j-core guys. (About like my visit to LCA in Tasmania: it was nice, but I'm not going back because it's just too much travel.)

Going to bed early.

March 17, 2018

Tomorrow's the last day to submit talk proposals to the tokyo open source summit. A couple minutes of pondering came up with:

Beyond uClinux: nommu in 2020
  - Nommu processors are the single celled organisms of the computing world,
    which means we're surrounded by billions of them without noticing.
    - 256k ram in qemu? ROM kernel, nommu, xip romfs/cramfs.
    - j-core example
    - musl, toybox
Building the simplest possible Linux system.
Building Android under Android.
  - updated version of 2013 talk
  self-hosting hermetic system build
Hermetic Linux
Android on the Desktop

This year it's colocated with the automotive summit, because the linux foundation loves diluting the audience for external conferences it inherits and trying to erode their individuality. I think they got the idea from the way Ottawa Linux Symposium died when the kernel summit forked off and half the attendees stopped coming, this way they can kill conferences they inherit from outside and replace all the "Not Invented Here" conferences with ones they fully control. Hasn't quite worked yet, but they keep trying.

Anyway, I poked Jeff about doing a presentation together with him on designing a new GPS implementation from scratch. Might be of interest to the automotive half of the thing...

March 15, 2018

Fade's spring break was this week, so she's coming to visit this weekend.

Thinking about adding nm because this wandered by and reminded me it's not a big deal (we've already got file parsing ELF), and I'm wondering if there should be a "development" menu in the toybox menuconfig for this? I'm already adding ar, because ar is needed by dpkg. But the main use for ar is static libraries (ala libc.a). So... is it a development command or not?

There should be a name for "decisions that are hard because the stakes aren't large enough to clarify the issue". Sigh.

Let's see: nm -A = names, -a = all symbols (debugging syms), -D = dynamic, -f FORMAT? (-f posix), -g = exports ("external only"), -u imports ("undefined"), --defined-only (not -u), sorting: defaults alphabetic, -n numeric, -p unsorted, -r reverse. And the one I use all the time (as does make bloatcheck) is nm --size-sort.

Not a huge amount to implement, really...

March 14, 2018

Happy pi day! I have an alarm set for 1:59 so I can eat the tiny pie I got from the grocery store. (At 26 seconds after the minute. Update: did the thing.)

I've decided to rename my mkroot project to "hermetic", since the point is to do hermetic builds (specifically hermetic system builds). The phrase "hermetic build" appears to have originated within Google and mostly be used there, but that just means it's not currently got a lot of collisions when you search for it outside the Google bubble.

I need to cut a release with the 4.14 kernel before tackling the pile of breakage in 4.15 and newer, and QEMU is barfing on arm64 because of a QEMU bug (VIRTIO is announcing itself strangely and confusing the 4.14 kernel), which has a fix, but it's not merged in the version I have built. So trying a QEMU 2.11.0 release build to see what works there...

March 13, 2018

Someone commented that 0BSD doesn't require preserving copyright statements in the code, and I typed up some reply text I should probably record here. (I need to do a proper licensing writeup. I've done like a bunch of partial ones over the years.)

Modern copyright law doesn't require notification, hasn't for decades. The internet's pretty good at finding plagiarism, regardless of copyright. And these days authorship info goes in source control, not inline in the source.

The bigger issue is the warranty disclaimer: it's a historical relic. People give medical and legal advice on blogs and youtube channels, but we still expect disclaimers on software because "we've always done it that way". (It's like the White Knight in Through the Looking Glass giving his horse anklets to keep sharks away: if Alice pointed out there are no sharks on this hillside, he'd take it as proof they're working.)

It's there because licensing for PC software started when mainframe developers ported their stuff to smaller machines. The only commercial software back in the mainframe/minicomputer days was written on commission, bought and paid for _before_ it was written. The shrinkwrap software market didn't exist before 1977 because the unit volume wasn't high enough.

The PDP-8 was the best selling computer each year from its introduction in 1965 until it was replaced by the Apple II, and in its entire production run the PDP-8 sold a grand total of about 50k machines. If you wrote PDP-8 software at the _end_ of its production run, there _might_ be about 50 thousand total customers for it in the whole world (if all those machines were still in use and in the market for new software). And it took over 10 years to accumulate that many, and it was _the_ biggest software market in existence.

The Apple II sold almost that many in its first two years. The first machine to sell a million units was the Commodore Vic-20 (introduced in 1980 and selling 600k units in 1982 alone), the commodore 64 was introduced in 1982 and immediately outsold the vic-20 (it did about 2 million units/year through retail outlets like Sears). The IBM PC took about a year to sell its first million units (1981-1982), and so on.

Microcomputer unit volume growing orders of magnitude larger than mini or mainframes changed the _nature_ of the market. Suddenly you could make a piece of software and have a million customers waiting for it. You could write _then_ sell, which was new. (And so was the concept of piracy: if you wrote software for the PDP-6 there were a grand total of 23 built _ever_ and MIT owned most of them.)

That's why Apple got the law changed in 1983 extending copyright to cover binaries. (Bill Gates had been complaining about it since 1976 and even addressed congress in 1980 (yes there's audio), but he almost never managed to accomplish anything himself, he was all about capitalizing on other people's work.)

Back in the mainframe world software was custom tailored to each installation, and each machine cost millions of (inflation adjusted) dollars. If you caused an outage you were easily costing the customer five figures a day, and big companies that could afford a mainframe had lawyers on staff. So you BET you had huge legal boilerplate full of disclaimers and indemnification in that context. Plus, when it's custom software your customer is the first (and likely only) deployment: they _will_ find all the bugs.

So when people started selling shrinkwrap PC software into the microcomputer market, they just copied existing practices, including giant disclaimers that made no SENSE in the new context. But if you ask a lawyer "are we safe from being sued" the answer is NEVER yes. (No lawyer will ever tell you NOT to CYA. They'll tell you to stay in the basement covered in bubble wrap: it's their _job_.) And the "two guys in a garage" operations starting up in the micro world (Gates, Kildall, Bushnell, Jobs, Gariott...) happily copied what the big boys did so they'd look grown up. Keep in mind they were writing "software licenses" _BEFORE_ Apple vs Franklin, when they were clearly unenforceable. But convincing people to go along with what the fancy words say is 90% of the law anyway, and it _looked_ convincing.

Apple v Franklin wasn't the end of software makers paying to change the law, of course. The phrase "shrinkwrap licenses" comes from 1980's license terms saying that by breaking the plastic shrinkwrap around the box you'd agreed to the license terms inside, except the license was printed on a paper _inside_ the box so you couldn't read it until after you'd opened the box. Back before the internet, "informed consent" (the basis of contract law) was literally impossible in the context of a retail purchase. Then software makers paid a lot of money to lobby for passage of the DMCA to retroactively make that legal around 2000, after doing it for years. (And then there's first sale doctrine, lots of software makers insisted the software was a _lease_ not a sale, until in 2011 they changed the law again to end first sale doctrine for software. Proprietary software's kind of a nightmare these days.)

The real reason 0BSD has the warranty disclaimer is its goal was to start with a large block of existing, approved license text, and make a single small change. The warranty disclaimer is established context that provides a security blanket for large corporations. Nothing in 0BSD requires you to copy that disclaimer into your own derived work: it's a disclaimer made by the _existing_ author(s). So if you lift a couple functions and use them in another context, that new thing can be under literally any license. That's the point of public domain equivalent licensing, frictionless relicensing so the license becomes irrelevant. (I.E. Universal donor.)

I remembered the Apple II figure because I wrote about it long ago. It was a lot easier to find my old Motley Fool articles before the database migration that labelled all the really old articles as written by "Motley Fool Staff". (They tend to still be signed -Oak at the bottom, since that was my login handle on the message boards, which is where they hired me from...)

March 12, 2018

Back from visiting Fade.

I merged bc. (More drama! Don't care.) Sooooo much cleanup to do.

Fiddling with the new ndk: -llog is only there for dynamic links (the shared library exists, log.a doesn't), the library probe logic isn't using LDFLAGS right (that's my bug). A static hello world built with the NDK's clang segfaults. And I've confirmed it did the same with gcc: it's that a hello world built against _bionic_ segfaults on my netbook. Illegal instruction before making it to main(). Great.

March 10, 2018

Hanging out in T-Rex Cookie with Fade, trying to catch up on the giant backlog of open source todo items, and on IRC there's bc drama. Of course. So graff posts in the #toybox channel to complain about what gdh has done ("have been kicked out of the project and my work replaced by someone who appears fraudulent" was said in a public channel), Thalheim pms me to try to give me context, and I haven't even reviewed this code yet.

Either way, they're submitting their bc to both toybox and busybox, but I'm likely to clean it up extensively whenever I get around to looking at it again. Will they then marshal those changes over to busybox? Will busybox changes come back to toybox (along with license contamination because they didn't clear it properly)? Sigh...

A few days ago Rich tweeted a link to Fabrice Bellard's new arbitrary precision math library, which is available under an MIT license. That's not quite the toybox license, but close enough he might allow me to use it under 0BSD if I asked him? (Long ago he gave me permission to BSD license his tinycc code. I should ask if 0BSD counts, but I haven't had time to poke at qcc in ages...)

Meanwhile, the longer I put off dealing with the "gmail hates dreamhost" issues the worse it gets. The bounces turned into a mass unsubscribe when I let it time out, and I've been meaning to fix that but dreamhost has no https on mailing list administrative access (it's an unencrypted http page you log into with a password that lets anybody mess with your list), so I'm really reluctant to poke at it (kinda the same way I feel about entering my credit card info into any web site ever, I'll do it but with great reluctance and tongs at arms length, and then breathe a sigh of relief if it doesn't _immediately_ manifest as a disaster)...

But I've now waited long enough that 3-4 messages have posted to the list (and didn't go out to those subscribers), and I don't want to reply to them until I've dealt with it. Let's see... save the batch of unsubscribe notification mails into a directory, mass sed them to get the names out into a file, copy that to the clipboard, pull up the insecure admin web page's mass subscription thing, paste the list there, and subscribe.

Obviously a far easier interface than giving me command line access to the mailman server. Sigh.

One message was about the new android ndk. Back on Feb 14 I did an x86-64 api 26 build, but libc.a was old. I should see what fixes got updated. Since I can't build this from source myself yet out of their git repos, I basically test and send bug reports, then wait for the next -rc tarball to be posted. It's a slow process, especially since my context switch to respond to each new NDK, after at least weeks of not working on it, is "whenever it makes it that far up my todo list".

I want to work more closely with the Google developers on stuff like this, but not being a Google employee I honestly don't know how. They share a cafeteria. I don't.

March 9, 2018

On a bus to visit Fade in Minneapolis, trying to get some programming time in. Kinda bouncy, but otherwise...

Always weird little design issues. Doing ar, I sort of want to use copy_tempfile() out of lib.c except if the old file can't be opened I want permissions 664 on the new file (default permissions for a new archive in ubuntu's ar), and mkstemp() does 600. And it's not really accessible, but setting up and calling mkstemp() myself and then doing lstat() and copying the permissions over is duplicating an uncomfortable amount of infrastructure for a tiny behavior change.

Sigh. Only 4 users of copy_tempfile() so far, I should add an argument and have it be "0" when you want to error_exit() if the stat() fails. (Which is fine for things like patch.)

March 8, 2018

And the GOP has destroyed another american institution. "The downfall of toys R Us can be traced back to a $7.5 billion leveraged buyout in 2005, when Bain Capital..." which was Mitt Romney's company "loaded the company with debt.... The company's massive interest payments also sucked up resources that could have gone toward technology and improving operations."

Meanwhile, on the "Capitalism is destructive" front...

March 7, 2018

Sigh. I'm sure I had more blog entries (several) but my netbook rebooted and the forest of .notes-2017.sw? files vi left behind were uninformative.

I spent some of the time on mkroot. Finally got the mailing list migrated (laboriously cutting and pasting the aboriginal list subscriber base to the new list one subscriber at a time) a few days ago, and I'm trying to get that to work.

My toybox "working on this" stack includes tftp, deflate compression side code, implementing ar (because a dpkg .deb is an ar archive containing a pair of tarballs)...

March 6, 2018

I should probably respond to the Linux kernel's new license enforcement statement but I'm not sure I have the energy. Intellectual property law needs to go away, it's something society has outgrown, and they know it, but Max Planck said "science advances one funeral at a time", and the same has been said about math in webcomic form. Unfortunately it's true of the law and society in general.

We have to work out what we want society to look like, then start describing it and how and why it should now work that way, and moving the overton window in that direction.

Capitalism creates scarcity. That's what "cornering the market" is, and it isn't the only way capitalism does that: suing farmers over patented crops because pollen blew into a neighbor's fields is evil.

There are so many articles about extending IP law past expiration, from patenting minor variants on existing drugs to raining down lobbyist money to change the law. The open hardware clones of arm and x86 stopped at the last versions too old to be compatible with anything currently in use, and were then abandoned under legal threat if they took one step further, despite the technology they'd be copying having shipped more than 20 years ago. (The only reason j-core exists is Jeff was willing to call Renesas' bluff, do his homework, and win a lawsuit if necessary. Plus Renesas wasn't selling SuperH anymore due to politics, so hadn't allocated a large legal leg-breaking budget to defend it by bankrupting people regardless of the merits of the suit.) Here's a classic article on how big players shake down small players for patent royalties, and proving you didn't infringe is no defense because they can simply bankrupt you with endless litigation if you don't play ball.

Then add in regulatory capture, and submarine patents (where a patent application is eternally amended to defer issue, and then they decloak and start suing people when somebody else starts making money in this area, and the patent expiration clock only starts ticking when enforcement starts).

The USA's early success involved a significant lack of enforceable IP law until the 20th century, china's rapid growth was triggered in large part by completely ignoring foreign IP claims... NOT doing this turns out to be way better for the economy than doing it. Yes you have to work out how to pay creative types so they can afford to do it, but "basic income" is a way better solution than restricting distribution of the results in a whole lot of areas. Software wasn't copyrighted at all before 1976 (the Copyright Act of 1976), and the copyrights didn't cover binaries until 1983 (Apple vs Franklin), by which point Unix was 15 years old.

The Baby Boom is dying soon, and there's no reason for the rest of us to continue their cultural assumptions. The top tax rate was 91% from World War II until 1964 (and even that only lowered it to 70% where it stayed until Ronald Reagan lowered it to 28% and started society's modern domination by the 1%). The 91% tax kicked in at just under $1 million/year in today's dollars and prevented billionaires from existing and thus being a problem; America's global dominance happened under that tax regime and going back to it would be a good thing. (High corporate taxes drive investment: they'll spend money on things like R&D and worker training if it would otherwise be taxed away. If they get to keep it, the owners pocket the money through stock buybacks and cut the actual business to the bone.)

We need instant runoff voting. We need universal basic income (which almost passed under Nixon but the democrats killed it in the senate for not being _big_ enough, once again snatching defeat from the jaws of victory). With the green revolution, vertical farming, exponentially advancing solar+battery technology, self-driving electric vehicles becoming transportation as a service, container housing, and AI expected to automate away a lot of the remaining jobs, and many of the jobs we _do_ have being pointless social constructs...

The end of capitalism's a bit like the end of monarchy. It's something people living under it couldn't imagine doing without, but once it's gone it seems simultaneously silly and horrible...

March 2, 2018

Meanwhile, on the advancing solar power technology front, solar microdots.

March 1, 2018

So LWN had an article about a company's software license compliance training program, "including the GPL, Apache, BSD, and MIT licenses, in easy-to-honor checklist form", with "a decision tree for choosing a project license", and they ran 2 day workshops with "short lectures, lighting talks, and small-group breakout sessions".

Beyond complying with existing licenses, they got internal pushback to open sourcing this company's own software due to "Missed revenue generating opportunities", and I'm really, really, really looking forward to the end of capitalism.

Back when phone companies charged for domestic long distance calls, the metering was the most expensive part of providing the service. It cost more to measure and bill for it than it did to provide it. There's an old tension between making something "too cheap to meter" (as people thought nuclear power would make electricity back in the 1950's) vs "cornering the market" and charging money to a captive audience who hasn't got the choice _not_ to use your product or service.

We see this in the internet service providers perpetually wanting to de-commoditize the internet and charge per megabyte. AT&T most recently led the charge for this with their cell phone customers back when they had a monopoly on iPhone sales.

This article once again makes the mistake of referring to "the GPL", which hasn't existed since GPLv3 fractured copyleft into incompatible camps. "The GPL" was a response to capitalism, and the old problem "you become what you fight" is on full display. GPLv3 provides a restrictive license regime full of obligations and the promise of giant legal headaches if you screw up, because its proponents can't conceive of NOT doing that.

This is why I did Zero Clause BSD, a public domain equivalent license designed to be familiar and nonthreatening to large corporations and government entities, without burdening individual developers with strange obligations like 37 copies of the same concatenated license text they're not allowed to clean out. ("You are not meant to understand this, just do it.") I've been testing the Android NDK and the file android-ndk-r16b/sysroot/NOTICE is 63,075 lines of concatenated license text. The "stuttering problem" is on full display.

When the article talks about how BSD and Apache are GPL compatible I look at the stuttering problem and go "define compatible"...

So much wasted effort. Existing software should be too cheap to meter, it's development of _new_ stuff that costs. You amortize the development cost over a small number of years. Back in Charles Dickens' day copyright only lasted 20 years, he outlived many of his own copyrights but kept writing. (He didn't even have patreon.) It doesn't matter what the license is when the copyright has expired. I googled for "software asset depreciation schedule" and the first page of results had 4 marked ads at the top, 4 marked ads at the bottom, and the 10 hits in between were all ads. The point I was trying to make is companies that depreciate software as an asset probably aren't taking more than 20 years to do it, telling the IRS that it's worthless after that point.

If copyright did still last 20 years, and software development was fully amortized over the period and its asset value fully depreciated, then Windows XP would be out of copyright in 2021. It's still what most Windows users _want_ to use, if the ReactOS guys had the source they could fix the security issues. Netscape's long dead. Microsoft suing Linux devs over FAT patents recently provided no value to anybody. That sort of thing still _being_ under copyright is obscene. "How will I still own and control this after I'm dead" is a bad question.

Sigh. People are pushing in the wrong direction, as usual. Applying a licensing regime to something infinitely replicable is not what future generations should be doing at all. But society only frees itself of bad assumptions when people who've lost the ability to question their own assumptions die off.

February 28, 2018

The "80/20 rule" is important. Clay Shirky talked about it in one of his videos, but what I'm thinking about here is that you should be able to get 80% of an operating system kernel for 20% of the effort (code/complexity).

I'm looking at xv6 and thinking it's maybe 5-10%, not 20%. To get a mkroot kernel that you can build Linux under, you need 2-4 times as much code as xv6.

I want a simple kernel, libc, compiler, and command line. Capable of rebuilding under itself and building Linux From Scratch under the result. So far I've been doing this _with_ Linux, but Linux added perl, libelf, yacc, and bison as new hard dependencies in a 3 month period.

Assuming you can get a tinycc+cfront capable of building gcc or llvm, they won't run under this because there's no mmap(). (I'm not entirely sure musl will run under it either, because xv6 just provides sbrk(), and Rich thinks sbrk() is terrible and tends to remove support for interfaces he aesthetically disagrees with.)

A simple kernel would be single processor, use a simple "generate a software interrupt" system call method, and implement posix system calls. Alas there isn't a system call to get a process list or process data, so it probably needs /proc too. It should have mmu support.

Really, it probably looks a lot like linux 1.0 circa 1994...

I spoke to Jeff on the phone last night and he recommended I look at OpenBSD (which I'll never do; Theo and Stallman are in about the same bucket to me), and NetBSD (which is hard to care about since its own developers keep declaring it dead; then Microsoft puts money into it for a while to revive it, which is not really an argument in its favor).

February 27, 2018

The linux kernel's going crazy enough that I'm pondering trying to put together a simple build environment with a different kernel, and then build Linux under the result?

That's kind of what I was thinking about with qcc, that it could be the bootstrap compiler you get up and running on a new system, and then natively build llvm or gcc or whatever your final optimizing compiler was with that. (And then rebuild the optimizing compiler with itself, etc.)

Which brought me back to xv6, and I'm finally properly reading the xv6 textbook rather than just skimming, and... it hasn't got mmap. That's kinda important.

I'm not sure what the minimal set of things a kernel needs to support a build environment _are_, but I don't think gcc can run without mmap? (I remember a broken mmap on arm in 2006 prevented gcc from working, back when wanting to natively run a compiler on arm was a crazy thing for me to want to do.)

If you can't build a bigger environment under your tiny/simple one, it's not useful as a bootstrap, is it?

That said, I'm building the kernel with a stripped down miniconfig, so I'm in a better position to figure out "minimal" than almost anyone else. But there's an awful lot of stuff you can't configure out, allnoconfig has hundreds of syscalls...

February 25, 2018

Heading back from Fade's. This bus had outlets. Reading my deflate code, and rereading the rfc. I should separate out and promote gunzip, and finish the deflate compression side plumbing to do gzip. Then "zip" and hooking it to tar should be simple in comparison. Plus compressing the --help text.

Left my headphones behind. Can't get new ones because Fedex is evil and nothing else in walking distance of downtown milwaukee has yet revealed an offer of headphones.

February 24, 2018

Cut a toybox release at Fade's. Otherwise offline.

February 23, 2018

Heading to Fade's. The bus does not have an outlet, not online much.

February 22, 2018

Trying to test some stuff, I have "sleep" processes I can check with ps and pgrep and so on, but there could easily be legitimate "sleep" processes running on the system so what I wanna do is "ln -s sleep xiphoid" and then run ./xiphoid 30 so I'm pgrepping for a sufficiently unique name in my test script. The problem is if it's toybox, it doesn't know what "xiphoid" means and won't act like sleep.

So toybox_main() needs to follow symlinks to find a command it recognizes when basename(argv[0]) is unknown. I think one level should be enough?

February 20, 2018

Still trying to clean up toybox for a release (logger's being stroppy, turns out it won't build under musl for reasons Rich has acknowledged are a musl bug, but in the meantime I should inline the priority and facility name tables).

Which brings up an interesting question: how do you list the options? For "kill" there's the -l option, for ps they're listed in the help text (which makes that command's help text outright unwieldy).

The way qemu handles this is "qemu -M ?", which is cool and obvious (you see it once, it's easy to see what it means and easy to remember)... except ? is a shell wildcard. So 99% of the time it'll work, but every once in a while you'll have a single character file in the current directory and it'll get substituted in and break.

February 19, 2018

I read an article on founding a small start-up that _stays_ small that reminded me of an excellent earlier article comparing the growth strategies of Amazon vs ben and jerry's, which relates to something Eric Raymond said (back before he went crazy) about how open source software works like a dentist's office or a law firm: you have two or three professionals and some support staff, and that's your business. "The law offices of Dewey, Cheatam, and Howe, LLC." The kind of business where professionals sell their expertise does not naturally scale up and become a multi-billion dollar business. "The next Microsoft" doesn't work that way.

Capitalism likes cornering the market, and extracting revenue without doing work. Forced routing through a toll bridge is an obvious way, doing work once and charging for it a million times is a minor variant. When it took Intel billions of dollars to build a fab and thousands of people to design the next processor, sure: amortizing the huge start-up costs over an enormous production run made sense, and was a natural moat around the business. But open hardware? In its entire history the j-core processor has had commits from a little over a dozen people, and most of them didn't work on it at the same time. And the reason they _could_ do it is the patents had all expired, so they could implement in the shadow of prior art. Doing that _again_ would be a question of sufficient individual expertise, not how much money you threw at it or what IP restrictions you could fence off.

What we need is financing that can support a dozen people indefinitely, one of Brooks' "surgical teams" from The Mythical Man-Month, so we can do the work. This is not something the market is set up to provide funding for. (Corporations used to, but not so much these days.)

Late-stage capitalism giving way to basic income would potentially cause a great increase in certain kinds of engineering productivity. We know this is true because open source, wikipedia, the blogosphere, and youtube exist. Creative people WANT to make stuff, they earn money to afford to be able to do so. Take money out of it and production efficiency increases.

February 18, 2018

Walked to the Avalon Theater to see Black Panther. It was good. Along the way I found the closest McDonald's (about 20 minutes walk south of work, I.E. _away_ from my apartment).

And I found a gas station that stocks the Monster Muscle cans I've been trying to find since the convenience store near work ran out of them. (I bought 5. It also has the discontinued banana version, which implies it has a stock of them from a while back, who knows if it can still get more after that. The nearby convenience store says its distributor stopped carrying any of the monster muscle.)

I like them in part because while fasting, and caffeinating heavily as an appetite suppressant, it's a source of protein for only 200 calories. But it doesn't seem very popular in the wider world. (Given that the chocolate and strawberry flavors were terrible, I'm not that surprised. But the vanilla's good.)

Fuzzy posted a photo of Fade's banana bread recipe to slack, and I should write it down: 1 1/4 cups sugar, 1/2 cup butter, 2 eggs, 3 very ripe bananas, 1/2 cup buttermilk, 1 teaspoon vanilla, 2 1/2 cups flour, 1 tsp baking soda, 1 tsp salt. Heat oven to 350, grease 2 loaf pans, mix sugar/butter, add eggs, add bananas buttermilk and vanilla, beat until smooth, stir in flour baking soda and salt until just moistened, bake for 1 hour.

February 17, 2018

Two's complement is the obvious way to do signed integers. The C++ guys are trying to standardize what posix basically already did. (Hint, if your compiler is _required_ to support two's complement behavior, that's how it's going to treat all signed integers. Implementing _two_ sets of signed integer behavior is deeply crazy.)

And yes, this means the compiler optimizer guys who have made integer overflow Undefined Behavior and intentionally break it are crazy, and you have to work around their crazy by typecasting pointers to longs to compare them (typecast numbers to unsigned, do the math, and typecast them back). But you had to typecast to char * to get byte offsets anyway, so using long instead isn't as big a deal. (Yeah yeah unsigned long, but again if signed integer wrap is two's complement it works either way.)

And since this came up again, one reason you can't have one program linked against more than one libc is each libc instance maintains its own heap, so you malloc() from one and free() into the other and Bad Things Happen. Note that statically linking your program and then using dlopen() to pull in a library that links against a dynamic libc will _also_ usually do this.

This is why shared libraries load other shared libraries, and dynamic linking is recursive. If you try to statically link a dynamic library to eliminate external dependencies (my library doesn't need to pull in zlib, I created my .so with --static), subtle badness can happen.

(And this is entirely in C! Do not open the C++ can of worms. There lies madness. And very smug people with stockholm syndrome bragging that they have 20 years of experience soaking up punishment and believe they have learned where every sharp edge is and C++ isn't so bad as long as you don't make it angry or make direct eye contact or mention the existence of petunias, and it works very hard and is under a lot of pressure and always apologizes when you get out of the hospital and really given how it was raised it's doing the best it can and if they left it would only start drinking again and it's getting better, why the most recent stint in a standards group fixed all the problems for sure this time, it must have...)

February 16, 2018

Catching up on the Linux Kernel Mailing list, using my standard procedure: Go to a sane lkml archive (even when sitting down at a new machine, "" is easy to remember and the archive is one link away from there), then pick a recent week I haven't read, search for "torvalds" in the thread view page, and read each message from Linus. (Right click open in new tab, because clicking on the link directly loses your place in the text search.)

Sometimes I read the message Linus was replying to, or the entire thread he participated in, and I sometimes read messages from other names I recognize or click through an interesting title I notice. But "I read all the posts Linus made that week" is my definition of "done".

I tend to do this in batches, because it's easiest to read completed weeks (you don't have to check back to see if new posts have shown up), although the browser highlighting will show you which ones you've already clicked through. (The advantage of reading the current week is you have the option of participating in the discussion, but I generally haven't got the bandwidth.)

This time there are a bunch of links to bookmark, such as an anecdote about why C++ is such a pain to compile. Here's an updated statement about 32-bit support. And the kernel developers seem to finally be ready to take llvm seriously.

Some of these I should reply to even if it's a few weeks late, such as this one about perl, which I should reply to with a perl removal patch for the new nonsense that showed up in the arm build. (Maybe with reference to my original perl removal series.)

And I should reply to the compiler version bump to 4.5 to say I was doing a 4.2 compiler but stopped, and it was about licensing issues, but llvm is feasible-ish now, so...? (Really musl-cross-make started providing something usable so I stopped doing the Sisyphus/necromancy thing.)

Oh no. Flex and Bison. Have they really started requiring flex and bison to build the kernel? Yes they have. Sigh. Hopefully when that stabilizes they can do the _shipped trick they did to make menuconfig not need flex...

February 15, 2018

I wrote up Too Much Detail when responding to a mailing list message about whether or not getprop should be in toybox, and decided not to post it, so saving it here instead:

The original idea was hermetic system builds, and when I started that was defined as providing enough command line tools to build a development environment that could rebuild itself and then build linux from scratch under the result. (On the assumption that once you've built LFS, you have enough infrastructure to build any arbitrary additional package under the resulting system.)

So any command line utility needed to build the kernel, compiler, libc, or toybox itself (well, busybox back then), if it wasn't a tool provided _by_ the kernel, compiler, or libc, was therefore something toybox had to provide. Then LFS needed a few more things.

Except... glibc was providing getconf and iconv and uClibc/musl weren't, so toybox needed those to make a bootstrap circle with those libraries. And Linux From Scratch would often have to build its own version of a command which toybox already provided, but which turned out to have more features required by later package builds, so toybox grew those features so you could skip the otherwise redundant package builds. (You could still build them to regression test, but they shouldn't be _required_...)

And of course when you're at a shell prompt on a toybox system, if you haven't got "ps" you really miss it even if nothing in any of the builds ever used it, and you'll miss command line history and vi and less... most of the missing stuff was posix or LFS commands so a triage of those produced lists of stuff we should probably have. (Some of which was stuff like "cal" that was easy to do, even if it wasn't hugely useful.)

And then there's an actual automated build system itself: it's going to want to wget source packages, apply patches, extract tarballs in the three major formats... not being able to do that is noticeably limiting to _implementing_ an automated hermetic build.

And if you're building natively, the init script in the resulting system needs to be able to mount stuff...

All that fed into the current roadmap: list of posix commands, list of lfs commands, list of things needed to build LFS, and "requests" which are largely filtered by "easy to do given the infrastructure we've already got".

There have been two major changes this decade:

1) I used to assume I'd be bootstrapping Linux distributions (red hat, debian, gentoo) under LFS. Now I want to boot AOSP under a pure toybox system with no additional GPL packages required. (But if there's an existing non-gpl version that the toybox system can build, I don't necessarily need to provide one.)

2) the compilers rewrote themselves in C++ for some reason, and nobody's done a modern cfront in a while, so the toolchain needs C++ support not just C or you can't bootstrap llvm under the result. Figuring out whether this includes crap like the boost libraries (or they can be built natively on the target system) is a todo item.

February 14, 2018

Another attempt to build toybox with the Android NDK, I downloaded the current version which is -r16b, and ran the make standalone toolchain script for --arch x86_64 and --api 26 (which is the version in Android O).

Unfortunately, while toybox compiles with that it doesn't link. Doing CROSS_COMPILE=/opt/android/x86_64/bin/x86_64-linux-android- CFLAGS="--static" make 2>&1 | sed -n "s/.*undefined reference to '\(.*\)'/\1/p" | sort -u | xargs (as you do) yielded:

__android_log_write facilitynames getgrgid_r iconv iconv_open prioritynames sethostname stderr stdin stdout

Which is rather a lot of missing stuff. The annoying part is all that stuff was found in the headers, or else the compile never would have made it to the link stage. So the NDK's headers are providing stuff the bionic static library isn't.

February 12, 2018

Capitalism is a mechanism for regulating scarcity, and in the absence of sufficient scarcity capitalism will create it.

That's why I'm worried about what capitalism's going to do to solar power. They're trying to turn "buy solar panels once and have electricity for 40 years" back into a rental model with unnecessary middlemen charging you a monthly fee. Not just a mortgage to buy the panels, but "we install panels on your roof and then charge you for the electricity". It's stupid and stuck in the past, but there's a lot of money (and entrenched assumptions of powerful people) behind it.

The internet promised freedom and equality and a lack of scarcity. You could endlessly copy digital information so paying for copies was nonsensical. But then capitalism cornered the market so you have to put your video on youtube if you want t-mobile not to count it against your monthly streaming data cap, and youtube has automated DMCA takedown requests. And yes I wrote about it at the time, but didn't expect people falling for facebook's ring-fenced private property a _second_ time (after AOL did it the first time), or the political damage that would do once capitalists learned to leverage the flaws in that business model to defend the next iteration of "Leaded Gas" and "Tobacco Industry" from the end-stage lawsuits, as we figure out how many people fossil fuels are killing each year and reality threatens a profitable business model.

Capitalism works like linear programming, trying to maximize income and minimize cost, and plugging a zero into any of the numbers breaks the model. They'll use up all the free air and water and chop down the forests and slaughter the buffalo until it becomes scarce enough the price of obtaining one more goes up above zero. Wasting a million gallons of water to save a penny is the "right answer" according to capitalism. This is a problem, and an increasing problem as time goes on.

People who chant "There's no such thing as a free lunch" as an article of faith don't believe in them and don't trust them when presented with them. And truly devout capitalists poison free lunches to make you buy lunch from them.

Star Trek didn't just predict flip-phones and voice recognition, it predicted a post-scarcity society that had done away with capitalism. You can't HAVE post-scarcity under capitalism, CAPITALISM CREATES SCARCITY. It's called cornering the market and it's how you get rich.

February 11, 2018

Made it home just in time to park the car, hug fuzzy and pet the cats, then head out to the airport for my flight back to Milwaukee.

Well, that ate a weekend.

February 10, 2018

Driving back to Austin to drop off the car.

February 8, 2018

Google testers keep bringing up code purity issues I mistake for real problems.

This week it's "address sanitizer complains about a read from malloc(0) return value which is never used", and I thought that meant bionic was returning NULL instead of glibc-style zero sized heap allocations, and that the code was following a NULL pointer when it shouldn't.

So I added an extra NULL test to the variable initialization... But bionic _isn't_ returning 0, it's doing the same thing glibc and musl are doing, returning a valid pointer into the heap that's a zero-sized allocation, which is safe to read from (even at the end of heap it's followed by at least sizeof(pointer) internal heap data) but the results are meaningless. And we already weren't using the results when the count was zero: so the code worked in practice, but not in theory.

I.E. I fixed it wrong because I thought they were complaining about a real issue rather than a theoretical one, and then had to do a separate fix to mollify their bug dowsing rod. But what they were complaining about was reading uninitialized memory, which _isn't_a_thing_. The kernel gives us initialized mappings, always. Program may not properly track what's happened to it since but if we're not saving the result it doesn't MATTER.

The high water mark of this is still the ls valgrind thing where I had a for loop adding numbers from two arrays and saving the total in a third array, and then conditionally using the entries in the resulting array. I didn't care whether or not the fields in the first two arrays had been initialized because the code that used the _output_ did that, and valgrind freaked at reading uninitialized data (to do addition, then never using the result), so I had to add a useless memset to make them happy. (I could have tested whether each field was used in the calculation loop as well as the display code, but it would have quadrupled the size of the code for zero ultimate behavior change. It was faster to just do integer addition on the cache lines we'd already faulted for the adjacent fields we were using, then decide what was needed at display time.)

The ONLY way that change could ever affect the behavior of the code is if an out of control optimizer decided to damage code to punish access to "undefined" (but correctly mapped) memory. If it did what the code SAYS, it would always be right.

There are plenty of things that work fine in practice but not in theory. That generally means your theory is wrong.

I suspect thinking that C++ and C are the same thing leads to treating C as toxic waste only to be handled with full hazmat protocol, rather than "think it through down to what the hardware is actually doing". (Because in C++ you CAN'T think through to what the hardware is doing, it's got layers of gratuitous abstraction that change behavior annually without you ever changing your code.)

February 7, 2018

Decided to drive the car back to austin this weekend rather than next weekend (of course more snow is coming), and Southwest screwed up so badly I've cancelled my "Rapid Rewards" account.

I tried to use my $144 flight credit from cancelling my return trip from ELC last year (since Jeff flew me straight to Tokyo from the west coast), and the site barfed because it expires tomorrow. Called customer service and it turns out you have to _complete_ travel by the expiration date, not just book it. (That would have been good to know.)

They suggested I call another customer service to see if they could make an exception (since I'm trying to book travel for sunday to fly back from Austin to Milwaukee, it's an extension of 3 days). And after half an hour on hold the customer service drone tried to charge me $100 to _not_ help me. ("All I can do" was buy a six month extension, and then they started into a long explanation about how this wouldn't let me apply credit to the Sunday flight, but instead I would be mailed a new voucher. So why bring it up?)

Meanwhile Expedia found a flight that's cheaper than Southwest would have been _with_ the $144 applied, so it's not actually a loss. But I am "this company needs to die" levels of disappointed in them right now.

Maybe I'll forgive them after six years. (Historically when my vindictive streak is triggered it tends to last an even decade, but... I don't really care about southwest enough to hold a grudge? They're just useless and incompetent. They lost the "most airlines suck but this one is good at getting a plane full of people from point A to point B for an obvious price" special regard I held them in, and since they _don't_ sell through the same site others do, why bother to go look at them specially anymore?)

Hmmm, way back when I read articles on the history of the company and how they felt they were competing with ground travel rather than other airlines, so had to keep improving even when they already had a huge competitive advantage. (This is why I described them as proof you could get the contents of a greyhound bus airborne.) Ah-ha! Their founder retired in 2008. Add ten years for his residual influence to attenuate and all their policies to be replaced by corporate drone du jour industry average BS, and yes. Southwest is Just Another Airline now.

(Same thing happened to IBM after Lou Gerstner left. Sam Palmisano followed the roadmap Gerstner left for 5 years, then stepped down at the end of it, and handed off to a clueless corporate drone. Neutron Jack Welch leaving GE has been bad for that company too. Corporations try very very hard to treat humans as fungible (any unique individual is a liability, you must break up with them before they can dump you and find an appropriately bland beige robot), and it's a total lie. Steve Ballmer was a boring punch-clock villain, not an Evil Mad Scientist like Gates. Apple with and without Steve Jobs is a totally different company, I'm aware that Ive is doing the Palmisano thing of running out the clock of residual inspiration Jobs left but it doesn't change the "10 years later you're kinda screwed" timeline. A conglomerate without a good CEO steering is in for a hard time.)

It's a pity my 3 waves talk at Flourish never got the recording published (despite me poking them about it repeatedly for over a year). I should try again...

February 6, 2018

I didn't get to write the initmpfs patch over the weekend, or the new perl removal thing for arm, or something to fix the ORC dependency, or updated initmpfs stuff, or nearly as much toybox stuff as I wanted.

This weekend Fade was visiting. Last weekend I moved into a new apartment. Next weekend I drive to visit fade so she can use the car. The weekend after that I drive the car back to austin so it's not getting a $40 ticket every time it snows (on street parking isn't valid during DPS operations, I need to move the car... to _where_?)

And at the end of the day, after two 20-minute trudges through snow to do 8 hours of porting legacy code in a cubicle (Ubuntu in a vmware window on a windows machine with outlook) I'm too tired to do much. And I can't do the "get up early and program before work" thing because they schedule 8am or 9am meetings 4 times a week, setting the alarm for 6 is barely enough time to make the 8am meeting.

I'm hoping that _next_ month I actually get a weekend to myself.

Oh well, at least there are no cats. So I'm getting a _little_ done.

February 5, 2018

Politics makes me angry because there's a lot of "that's not the real fix" going around right now.

Any time you have a "two party system" your politics are broken. First past the post voting needs to be replaced with instant runoff and more of a parliamentary system.

Capitalism served its purpose: it regulates scarcity we no longer _have_. Most of the scarcity we wrestle with these days is _artificial_, created by cornering the market and protecting entrenched (outdated) interests. An economy where 60% of the population is full-time farmers is very different from 2% farmers, and we have not adjusted. We still threaten people with starving and freezing to death, while 1% of the population collects 90% of the output.

Between solar power with battery walls, self driving cars, the green revolution, vertical farming, container homes, internet on everybody's cell phone... most scarcity is pretty darn _optional_ for first world populations right now. Yeah the future is "unevenly distributed" but a "moon-shot" style deployment (a la FDR's Tennessee Valley Authority) could get new infrastructure everywhere in under 10 years. We've done it before, more than once! But instead we subsidize oil companies by billions of dollars each year so they can turn around and spend it on lobbying to keep the subsidies going. We spend more on defense than the next dozen countries combined, decades _after_ winning the cold war. The third world is likely to leapfrog the first in things like rooftop photovoltaic and TAAS ("transportation as a service") because they haven't got existing infrastructure to replace, so they're not throwing good money after bad maintaining expensive legacy infrastructure. (They already did this with cell phones.)

Bullshit jobs. Universal Basic Income. Billionaires cornering the market, the 1% at Davos... These are radical positions the same way that abolition and women's suffrage were radical a century ago. They are major societal changes whose time has come. (Even a lot of racism is economic scar tissue that continues once the original reasons are long forgotten. See also african slavery and the demand for malaria-resistant plantation labor 300 years ago...)

Unfortunately the old geezers on top of the current pyramid are terrified of change and will attack anything that challenges the status quo. Society advances when old people die. I am _so_ looking forward to the end of the Baby Boom. But then I'm generation X, waiting for Boomers to get out of the way is _our_ defining shared experience...

February 4, 2018

Dropping Fade off back at the Greyhound station, we walked to the Stone Creek coffee shop across the street from the bus place to hang out with laptops for a bit.

In theory the greyhound station is as far from work as my apartment, just in a different direction (west instead of north). That's part of the reason greyhound might be a better option than driving, I could go there friday after work and get on a bus to Minneapolis, then back sunday night.

In practice, work isn't open on sunday, and neither is the Pita Pit we were navigating to as halfway point. Milwaukee, outside in February, is really really really really cold, and a half hour walk in it is a lot less pleasant than a fifteen minute walk. Even with a break in the middle (which turned out to be at potbelly subs, which was open and warm). Fade may be used to this, but I'm not.

Finally got some time to poke at pending toybox issues. I noticed the crc32 command in ubuntu (in both the new ubuntu 16.04 I set up for work and in my netbook's 14.04): "dpkg-query -S $(which crc32)" says it's in the libarchive-zip-perl package, which is disgusting but apparently commonly installed. My crc32 logic can spit that out, it's "toybox cksum -HNLP", so a simple NEWTOY(crc32) with NULL arguments and crc32_main() that sets toys.optflags |= FLAG_H|FLAG_N|FLAG_L|FLAG_P; before calling cksum_main()... and it doesn't work... because I forgot FORCE_FLAGS. (Using cksum's flags when cksum is disabled, they get zeroed unless forced.) Ok, now it works.

So I can trivially provide this as a new toybox command with just a couple new lines, except A) if there's no file ubuntu's crc32 exits immediately instead of reading from stdin (I'm gonna call that "their bug"), and B) it only prints the filename when there's more than one argument. That's a design decision: grep works that way, sha1sum doesn't. What's _inconsistent_ is that cksum always prints the filename.

I already added -N to cksum (the ubuntu one has no arguments, mine lets you select endianness, pre/post inversion, and whether to include file length in the crc). Tweaking -N to not display the length either makes sense. Having -N also only print the filename if there's more than one argument is less obvious, but adding a separate option just to disable that is kinda silly... (It's another one of those "the difference is too small to have an obvious right answer" things.)

Oh, another difference is that crc32 always outputs 8 bytes of hex data where mine won't include leading zeroes. I think mine is wrong, and I should fix that.

Hmmm, the "print name or not" logic is actually a little more subtle: cksum doesn't print the filename when there is no filename (zero args, reading from stdin). The ubuntu one does, toybox doesn't yet but should. So if crc32 decrements argc by one, then the logic matches up... EXCEPT then we depend on optc being signed (because optc == 0 becomes -1) and while it _is_, I'm uncomfortable with leaving open the possibility of optc changing to unsigned at some point in the future and breaking this. (Well, a test suite entry should check it and regression testing should catch it, but in the meantime it can be if (toys.optc) toys.optc--, which would also work with unsigned.)

February 3, 2018

Hanging out in Tiny Apartment with Fade. Introduced her to the nearby grocery store, which is pretty much all I'd found in the area. Finally tried the Gyro place around the corner, which is pretty good.

I've been caffeinating pretty heavily during the week, and not having any during the weekend, so there were some unexpected Attack Naps on the new air mattress.

February 2, 2018

Fade's come to visit, trying out the Greyhound bus from Minneapolis to Milwaukee. Given the security theatre at the airports, the bus takes about as long as planes do, and it's got more legroom, wireless, and an outlet to charge a laptop. So we thought we'd give it a try.

My plan was to pick up my car after work, drive to the greyhound station, get a little programming done at the coffee shop across the street until her bus came in, then drive her to Target to get an inflatable mattress for The Tiniest Apartment. (I've been sleeping on the two stacked sleeping bags I brought with me in the car, and given how hard the floor is it's still noticeably unpleasant, so I needed to do that anyway.)

Problem 1: Milwaukee may be a very walkable city but _driving_ through it, in the snow and slush and a layer of salt congealed on the windshield and covering all the road signs and lane markings, is No Fun At All. I managed to turn the wrong way on a one way street _three_times_. (Also, half the streets are two way and half are multi-lane one way and the lane markings are identical even when you _can_ see them.)

Problem 2: her bus was delayed by 2 hours leaving minneapolis, for reasons I'm still unclear on (they had to find a new pilot).

Problem 3: The coffee shop across the street closes at 7 and I got there at 6, with Fade now expected to arrive at 10. Not worth setting up, really.

Wound up going to Target myself, then going on a Quest For A Food Place That's Still Open to bring her a dinner-like item. Downtown kinda switches off after work. (Luckily the McDonald's near the Target is 24 hours, and grilled chicken snack wraps are almost like food.)

Got zero programming done, though.

February 1, 2018

Stopped at a second starbucks to redeem my Free Birthday Thing, and they don't do it either. Something about corporate vs franchise stores. Gave up and uninstalled the starbucks app.

January 31, 2018

A message on lkml fiddling with initmpfs wondered why it checks that you don't set root= (I.E. "as there must be a valid reason for this check...").

Backstory time!

I didn't want to switch rootfs to tmpfs all the time because it uses very slightly more resources, and if you're overmounting it with a fallback root= filesystem anyway those are wasted. It's a tiny waste, but it would be there on every system, so the check.

The _proper_ check would be that you have an archive to extract into initramfs: if you're extracting an archive into initramfs then you're using initramfs as your root filesystem, and thus making it a tmpfs instead makes sense.

Unfortunately, for years the default output of the initramfs generation script was three lines or so that created a /dev directory and a /dev/console entry. It was meant as example code, but when you didn't specify initramfs contents it wound up getting called with no arguments and the build would create a tiny (150-ish byte?) cpio archive with /dev/console, and gzip it up. So initramfs would have a /dev/console in it, and then get overmounted and ignored.

And then the init/main.c logic grew a _dependency_ on this /dev/console. When opening stdin/stdout/stderr for pid 1, it basically called the open() syscall in the new process context with /dev/console, before pivoting out of initramfs. It worked because it was there, and then when it STOPPED being there (because I pointed out the default output and they fixed it) your initramfs wouldn't have stdin/stdout/stderr so they added a gratuitous mknod in initramfs context.

This feeds into the devtmpfs_mount patches, where right now there's a kernel config option to automagically mount devtmpfs when the system comes up, which ONLY applies to the fallback root= and not to initramfs. So I have a patch to add support, which is necessary if you create an initramfs by pointing the kernel source at a directory of the initramfs contents as a normal user: it's the simple straightforward thing to do, but doesn't automatically add /dev/console and you can't create the device node as a normal user.

While I was there I cleaned up the kernel config stuff so you can tell it all the current user's files should belong to root in the initramfs. Why nobody did that before I couldn't tell you; you had to specify which uid to map, meaning your config had to know gratuitous details about your build system.

Anyway, I remember how somebody had a problem because their cpio.gz filled up more than half their ram and it failed with initmpfs but worked with initramfs. (Due to 50% of total memory being the default tmpfs size limit, so it filled up during the extract and stopped extracting.) I don't remember if lkml was copied on the email exchange but it resulted in this blog entry from the affected party.

Meaning I need to be able to specify "no really, rootfs should be ramfs" unless I can pass through size= to tmpfs options, or otherwise there are real world failure cases that hit existing people.

Unfortunately, some people clearly still don't get it. (Those are instructions for copying your initramfs into a tmpfs mount and then doing switch_root. My patches to let rootfs _be_ a tmpfs were merged in 2013.)

January 29, 2018

I wrote a thing on hermetic builds. It's related to the shared library part of the toybox design page, which came up when the bc guys wanted an external lib.bc file to implement bc -l and I said might as well make it a big string constant in its own file (or with some #include magic).

January 28, 2018

I should probably have a page somewhere of "classic links", on topics that I should remember to introduce people to. (I have a links page but it's old and doesn't have summaries. I tried to put a few on the kernel docs page I used to maintain, but lost access to update that in 2011.)

One is the "Resource Curse", which is the problem that if most of a country's income comes from something like oil revenue, the country's government doesn't need 99% of its people. If you can't strike for better conditions because your labor is neither the source of income nor the thing that income is buying (everything, including cheap labor, can be imported), you have no natural leverage over those in power.

This is why you get "oil oligarchies". Countries like Russia and Saudi Arabia that earn the majority of their income from oil tend to have zero respect for human rights because if a plague wiped out 99% of their population the ruling elite wouldn't necessarily lose any income or amenities.

This is one of the reasons people are fighting for basic income as we automate away entire sectors of the economy: a century ago more than half the population worked as farmers, now it's less than 1%. The service and transportation industries that replaced them are also being automated away. This isn't a new problem: the Luddite movement protesting textile factories automating away weaving jobs happened over 200 years ago. But the erosion of the bargaining power of labor during the lifetime of the Baby Boomers has led to a real possibility of a technology-driven Resource Curse where the government doesn't need the people because we've got solar powered factories delivering 3D-printed goods via self-driving drone, and less than 1% of the population has any work you'd notice stopping if they went on strike. The Boomers won't live to see this, but the rest of us might.

I'd love to set up a conversation between David Graeber and Clay Shirky where they talked about this sort of thing for an hour. I really want to hear what they'd have to say, because I've got nothin'. (Shirky's Looking For the Mouse talk and Graeber's Bullshit Jobs essay play off each other quite interestingly.)

A persistent problem is that rich people are insulated from the consequences of their actions by a cushion of wealth, so they can be DAMN STUPID. (Hence the libertarian fish tank filter issue.)

And the anti-global-warming people are the tobacco institute are the leaded gas defenders, there are some good writeups about how those are literally the same people moving from one think tank to another as the funding sources change over the years.

And writeups on how capitalism is all about cornering the market and creating scarcity...

Sigh. I do my own writeups sometimes, with links to other people's stuff, but they get buried and lost in this blog. Dunno where else to put them. I haven't had a regular column with an externally imposed deadline since The Motley Fool days. (And those old archives are buried too, even things that made quite a splash at the time...)

But really, there's stuff out there that people should already know. Most of them _don't_, and I should have a place to point them for backstory.

January 27, 2018

Packed out of my hotel room by the noon checkout, although my car's still in their parking lot at the moment. I meet the apartment manager at the new place at 6pm to move in there. (No furniture, but I brought two sleeping bags and a tray table to put my netbook on. I should buy a folding chair; I wonder who would sell that if there's no Target around here?)

I looked for a clean quiet room, in walking distance of work (about a 15 minute walk), with a shower/stove/refrigerator (pity it's gas, but oh well), controllable temperature, outlets, and a lockable door. (Well, it was quiet when I was there, we'll see how it is long term, but I have earbuds and can get earplugs.) This fits those criteria, and is quite reasonably priced.

And it has NO CATS IN IT. I might actually be able to get through the rest of the toybox roadmap in a finite amount of time. We'll see.

I type this from a starbucks. Well, a sort of starbucks. It's a corner of the grocery store I found (Metro Market, 2 blocks from the new apartment and more or less on the way to work from there) that has INSTALLED a starbucks, which opens on the 31st. Until then it's a seating area. I'm all for it. (No outlets, phone battery's already dead, netbook's at 38%. The replacement battery Fade ordered is regular size, not the jumbo size ones which last a long time but stick out awkwardly in a way that means I've now broken two of them.)

I've mostly been reading and closing browser tabs. So much backlog...

January 26, 2018

End of my first week at Johnson Controls. It's nice, for a Fortune 500 corporation that's put me in a cubicle. I don't see a problem doing 7 months of this.

I found a broom closet for $575/month with most bills included (you can get really SMALL efficiencies if you try), signed all the apartment paperwork, and today got a cashier's check for the prorated first month. They say I can move in tomorrow at 6pm.

Heart still beating way too fast this evening. I gave up and bought some chicken and one of the steaks the grocery store had on sale. My hotel room has a kitchenette in it (it's a lovely place, which has apartments on the top two floors. I found this apartment by talking to their apartment people, and they got me something in another building they manage the next block over). A week of fasting seems long enough for now, maybe I can atkins for a bit.

I've read organized, detailed diet plans with Intermittent Fasting and Keto Protein Loads and really, I'm not good with this. I can manage "do this" vs "not do this at all" distinctions. I suck at exerting consistent willpower over regulation of amounts over long periods of time, I have other things to DO. So "not eating today", "not eating carbohydrates"... That's about the level of granularity I can manage.

Hmmm, maybe I should find a gym. These tend not to work for me, but I'm still establishing a routine here. Walking to work and back builds a little exercise into my day, so that's nice.

January 25, 2018

I've been more or less fasting for a week now (I'm 80 pounds over what I weighed in college, that's like 1/3 of my current body weight), but something's going weird this time. My resting heartbeat lying on the bed at night is over 100 bpm, that doesn't seem right.

I've been using caffeine as an appetite suppressant, which amounts to a diet monster energy drink and a 1.5 ounce piece of "driving chocolate" per day this week (which is like _two_ energy drinks worth of caffeine, and I eat it in small chunks through the day). But if I stop having caffeine around 4pm and it's 9pm, shouldn't it have worn off by now? Hmmm...

Last time I did this I leaned heavily on McDonald's Side Salads (15 calories by themselves, still less than 50 with half a pack of vinaigrette dressing), which was fine for the drive here, but the closest McDonald's is like an hour walk from my hotel room.

On tuesday I found a can of "monster muscle vanilla" and had my 200 calories all at once (with actual protein), but it was that convenience store's last can, they haven't restocked, the grocery store I found doesn't carry it, and google is unhelpful. (Hipstercart claims they can get it from kroger, but the nearest kroger is halfway to Chicago so I'm not sure what they mean by that.) There isn't a Target downtown either.

Of course another thing that does this to my heart rate is food poisoning, and without the salads my digestive system seems to have entirely shut down this time. I wonder if that's related...

Broke down and bought two scoops of the "chicken and gravy" stuff the grocery store had. Absolutely delicious, and let's see if that settles my system...

January 24, 2018

I've been fasting on this trip, by which I mean eating 15 calorie McDonald's "side salads" with 40 calorie vinaigrette dressing (using half a packet). But at one stop I failed my saving throw vs free pie, because if you ordered through the kiosk, you got a free apple pie. McDonald's is trying to turn itself into a giant vending machine with no humans working there, as predicted by the expanded version of my old three waves talk, where stage zero is an idea you haven't acted on and stage 4 is fully automated with nobody working there anymore. Neither is a "business" so I didn't write about them for The Fool way back when, but it's kind of the full life cycle. "Computer" used to be a job title people did. Telephone operators used to connect every call. Elevators had operators before they had buttons. Further back, every household used to spin and weave and sew its own clothing, grow and preserve its own food...

There was a display of farm statistics at the last rest stop heading out of Texas, neatly explaining why "basic income" is now possible: a century ago we had 60% of the population working on farms (and a century before that it was 80%), now it's 2%. It was an Oil! Oil! Oil! display touting Tractors! and Chemical fertilizers!, but along the way we had the "green revolution" with dwarf wheat quadrupling food production with better plants, so either way a smaller fraction of the populace is now producing way more food. (Most corn isn't for humans, see also the circle of rice.)

This means, strictly speaking, we don't _need_ the work over 90% of the population does, as in we're not going to starve without it. (But housing! The construction industry employs 10 million people, that's about 3% of the population of the country, and 2% to <10% is a lot of slush factor for "ok, maybe necessary". And yes, I'm glossing over the can of worms that is healthcare, given how utterly screwed up it is in the USA, but most of "healthcare" is a giant bloated insurance industry and about half the rest is an administrative bureaucracy engaging with said insurance industry. Googling for per capita statistics, between europe and the US I get 3 doctors, 10 nurses, 2 pharmacists, and 1 dentist per 1000 people. Altogether that's 1.6%, still plenty of slack in the <10% actually assumed necessary above.)

Add in the revolution in transportation brought about by containerization starting in the 1950's, internet and smartphones, and the ongoing advances in solar power and self-driving vehicles, and meeting the basic survival needs of people is likely to take a _very_ small part of a modern economy a decade or so from now. Our big growth industries are things like entertainment. (Most people would rather hang out with friends, but who has time or energy when life revolves around sitting in a cubicle pretending to work most of each day?)

The knee-jerk argument against basic income is we can't afford to feed and house people for free, but exploding prison population? No problem! QE/bank bailout? Of course! If 2008 made one thing clear, it's that modern money is completely made up, it's numbers in a computer that the rich and powerful can edit on demand by _trillions_ of dollars, their only constraint is making sure the rest of us keep believing in it, respecting it, and chasing it.

The theoretical problem with printing money is inflation, so you tax the excess money away. The actual problem with adding money to the system is it pools in the pockets of rich people, so you have to tax _them_, and they complain loudly, with entire think tanks tasked with lying to make them look indispensably important.

Rich people claim they're job creators but they're not: supply comes from workers and demand comes from everybody buying stuff they want or need. Billionaires are gatekeeping middlemen. But even assuming they were correct, the "incentives" argument gets cut out by real world research showing Say's Law doesn't kick in below a 70% tax rate. And what's another billion to a billionaire except a way of keeping score? Compound interest says they can spend millions of dollars every day for the rest of their lives and end up with more money than they started with. It _doesn't_ run out. Techie co-founders like Paul Allen and Steve Wozniak (or founders like Jim Manzi of Lotus 1-2-3 fame) quit at $100 million because at that point more doesn't MATTER, the interest buys you a new house each week. They never have to do anything _useful_ again in their lives.

The people who continue to actively accumulate wealth into the billions are either driven by something other than money, or think they're bidding on the Titanic's Lifeboats and can never be "rich enough" to sleep soundly. (This is a self-fulfilling prophecy when their own asshole behavior in pursuit of wealth is the disaster they expect to be sending torches and pitchforks after them someday.)

David Graeber wrote about BS Jobs, which are useless jobs that produce or accomplish nothing. Many other jobs are only mostly useless, a 40 hour work week with 4 hours of actual work is fairly normal. They're created to satisfy a capitalist society's need for people to be employed in order for the people to be valued members of society (I.E. "productive members of society") without producing anything anyone needs. Then there are entire industries like Tax Preparation that defend themselves via lobbying or similar, but are completely unnecessary. (Your information's already been reported to the IRS; in sane countries there's a website or similar you go to that has all the forms already filled out. You don't have to pay hundreds of dollars to pointless middlemen bureaucrats.) I'm also reading about how underemployment of lawyers is the new normal. There are no "safe" jobs, and many of the ones with good salaries tend to involve a modern guild like the American Medical Association that restricts membership.

But automating away all the jobs isn't a _bad_ thing if you kill enough billionaire middlemen intentional bottlenecks to clear the way to provide basic income, with which people can find new things to do. Creativity is _helped_ by having free time/energy/flexibility to play. Steve Jobs and Bill Gates didn't start new businesses because their survival depended on it, both were supported by their parents well into their 20's. They wanted to move up and have an impact on the world. As Graeber said, 99% of people not doing anything useful with their time is no different than 99% working retail jobs at Sears before Amazon mail orders came from a robotic warehouse to your door by delivery drones.

This is the kind of stuff I muse about on long cross-country drives. We're waiting for the Baby Boomers to die off so we can reach the kind of post-capitalism future Star Trek predicted half a century ago, but which they're too old and set in their ways to ever believe could be real even with solar power and self-driving cars and smartphones. Our problem isn't famine, it's obesity. There's a _distribution_ problem: since 2008 we've had a simultaneous problem of abandoned houses and homeless people, which says the way we choose to organize society _sucks_. "We've always done it that way" isn't helpful when the rules change.

I do worry about the resource curse: a government that doesn't need its people tends to suck for those people, who can't strike for better conditions. But staying with capitalism isn't going to fix that. Again, the real problems are political, not technological.

January 23, 2018

Had to set up a new xubuntu system, 16.04 this time (if work is _paying_ me to use something with systemd...) and the procedure is always changing. This time the way to get the scroll bars back is to edit /usr/share/themes/*/gtk-2.0/gtkrc (where in this case the * is Adwaita, that's the theme selected in settings->appearance->style) and switch "GtkScrollbar-has-*-stepper" from 0 to 1, and also to change the GtkRange-stepper-size to 13 (from 0). (In theory you can set it globally but in practice every xubuntu theme manually sets these, overriding the global setting).
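
For reference, the edited stanza winds up looking something like this (gtk2 theme rc syntax from memory; the style block name varies per theme, so treat this as a sketch and check the actual file):

```
style "scrollbar"
{
  # 0 hides the arrow buttons at the ends of the scrollbar, 1 shows them
  GtkScrollbar::has-backward-stepper = 1
  GtkScrollbar::has-forward-stepper = 1
  # the arrows render zero-sized unless this is nonzero
  GtkRange::stepper-size = 13
}
```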

Without scrollbar arrows, scrolling the display up or down a fixed number of lines requires fiddly little movements with the mouse and isn't always possible at low screen resolutions. With the arrows, click once to go up one line. So naturally, ubuntu disables them.

The SH4 VoD system has shown up in Austin! In a way that required a signature to accept delivery. I am in Milwaukee. I'd have them forward it to Fade in Minneapolis so I can pick it up when I visit (only 5 hours away, longish but reasonable weekend drive)... except for the requiring signature for delivery part. Hmmm...

January 22, 2018

Made it to milwaukee, first day of the new contract. Reading printouts, waiting for IT to drop off a computer, listening to long "this is the project" lectures from multiple coworkers. Pretty standard so far.

Quiet time in hotel room afterwards, cat-free. Luxury. (I napped, due to all the fog I was up driving last night until 2am.)

I just did "diff -u <(git diff toys/*/fmt.c) <(diff -u fmt.c fmt2.c) | less" with malice of forethought, because I saw this and went:

$ git am 0001-Un-default-fmt-1-while-it-s-in-pending.patch
Applying: Un-default fmt(1) while it's in pending.
error: toys/pending/fmt.c: does not match index
$ git diff toys/pending/fmt.c tests/fmt.test | diffstat
 tests/fmt.test     |    7 ++++
 toys/pending/fmt.c |   76 ++++++++++++++++++++-------------------------
 2 files changed, 42 insertions(+), 41 deletions(-)

I should really finish that. I wonder if I left myself a blog entry talking about what I was doing... No I didn't. Gotta read the diff.

A failure mode when I get _really_ overwhelmed is having a half-dozen tabs in a console window somewhere recording the state of an ongoing cleanup, where the backscroll shows tests I'm running that need fixing, experiments I did against multiple versions, and so on. If my netbook reboots before I get to a good stopping point and write it down or turn it into proper tests that TEST_HOST passes, and then I don't get back to that particular command for a month, I often wind up just "git resetting" the file and losing days of work that it would be easier to just redo.

This is why I call it "swap thrashing". I really hope to be able to flush some cache on this expedition, as well as becoming flush with cache. (Sorry, couldn't resist.)

Elsewhere, the debian sh4 maintainer is being very nice and sending Rich Felker and myself a pair of cheap taiwanese Video On Demand boxes that (can be made to) run sh4 debian, and when Rich was talking about tracking down the right adapter to hook up the serial console, I asked to be kept in the loop. This led to the following exchange which I record here so I don't have to type it again if it comes up in another context. :)

> As I said, it’s already pre-installed with Debian Wheezy. I tested both boxes.

I was talking about Rich's attempt to get a serial console.

Without console output the box provides an all-or-nothing canned distro that has to bring up a large chunk of userspace before you have any output. So if I upgrade the kernel from -rc1 to -rc2 and it has a problem with some driver halfway through, I never get to see how far the boot got. If I tweak musl and sshd didn't come up, all I know is sshd didn't come up. I can't rdinit=/bin/sh or rdinit=/bin/helloworld-static to see what _does_ work. If device tree version skew can't find the interrupt controller because they changed something and the real problem is I need to upgrade dtc now, I have no trail of breadcrumbs to track that down.

> Connect power, ethernet, wait a few minutes until the LED is solid blue.
> Then check your router/DHCP server which address the box received, then just:
> ssh root@$IP
> Password: root

Which means that if the kernel doesn't boot all the way through, successfully extract its root filesystem, get through its init scripts far enough to successfully configure the network, and launch a daemon against a working C library, all I know is "it didn't work".

I've fed cpio.gz to kernels that only had cpio.xz support configured in. I've seen upgrades introduce a kconfig guard symbol that switched off BINFMT_ELF. I've accidentally dynamically linked something I meant to statically link that the init script depended on. I've seen binutils version upgrades make it write an inappropriate instruction because now it needs --no-really-stop-it-with-the-vector-extensions in ./configure, if I can't see the illegal instruction printk during the kernel boot that would really not be fun to track down.

I've worked on enough "If I change anything, it either works or it doesn't with no diagnostic information in between" systems over the years to know I probably wouldn't poke at anything that brittle in a hobbyist context. It would go back on the todo heap and stay there because I'd be afraid to touch it.

Possibly I could try getting a netconsole working on a static address, although that's still pretty iffy about early boot messages. Many moons ago there was some work to create interruptless network driver stubs ala the early_printk serial drivers, but I think Alan Cox shot that idea down? Don't recall, and the pages google's finding say netconsole just doesn't do early boot messages before interrupts are enabled, which is basically when it's about to launch PID 1 (interrupts = we can drive the scheduler now)...
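
For the record, a static-address netconsole is a single boot (or modprobe) parameter of the form src-port@src-ip/dev,tgt-port@tgt-ip/tgt-mac; the addresses below are illustrative:

```
# on the kernel command line of the box under test:
netconsole=4444@10.0.0.1/eth0,9353@10.0.0.2/12:34:56:78:9a:bc

# on the machine catching the messages:
netcat -u -l -p 9353
```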

> Check /dev/sda1 if you want to see the uboot config.

With a serial console I could use u-boot interactively, and set up tftpboot and all that fun if I really wanted to (without even persistently changing the uboot config).

Some kernel developers won't touch a box without a jtag, but I'm the "stick printfs in everything" kind. With serial console you can get it down to two lines right at the start of "it's running code":

This example is missing the real-world "spin checking the ready for output bit in the status register" part, but you can usually track down the appropriate uboot serial output driver and figure out what your two lines are. Or break down and read the spec sheet. :)

Quiet hotel room without cats. I get so much more done here, even if at the moment it's still mostly just catching up on email...

January 21, 2018

So much fog approaching wisconsin. Stopped at a McDonalds for a couple hours to see if it would clear up, and it got worse instead. Oh well, got to catch up on some email, anyway.

I've recently noticed that "I've publicly said this 5 times" doesn't mean other people have heard it, so here goes again.

Speaking of which, it's possible the "minimal" system will grow a fifth required-ish package: cryptography. (Largely thanks to out-of-control state surveillance bureaucracies trying to endlessly expand their budgets.) If public key signing is required to verify package downloads (not just checking a hash), or https:// downloading becomes necessary for the base OS build (we're flirting with that already), then that doesn't really belong in any of the above because "not leaking data through crypto side-channels" is its own area of expertise needing its own set of experts doing their own package.

Except... ktls is half a solution exported by the kernel already. It's possible some crypto is in scope for toybox (such as https) using ktls. Right now it's just half the plumbing and you need a big wrapper around a member of the openssl family to use the ktls plumbing the kernel provides, but maybe that's doable and/or less of an issue in future?

Encryption is not within scope for toybox because of the same zlib/curses problem: external libraries _must_ be optional so we'd need to provide a simple built-in version of their functionality, and I ain't rolling my own cryptography. (Hence wanting an stunnel style solution for wget and httpd forever, without which neither command is hugely worth doing...)

January 20, 2018

I stopped at a McDonald's in Texarkana to recharge my phone on the drive up to wisconsin, and I saw email from the manager from back when Large Phone Manufacturer That Still Wishes To Remain Anonymous sponsored some toybox work a few years back, and I accepted the link because for once it's somebody I actually know. (Well, we never met in person, but I sent her a lot of email.)

This opened linkedin, and one of the links on there was an incredibly vague position at... Google Austin. Which I found greatly amusing, and I almost tweeted "I really, _really_ shouldn't apply to this..." with a link, but it would require too much context to explain.

But "too much context for twitter" is what blogs are for. So:

Yes I just signed up for a 6 month contract in wisconsin (which I am driving to, so I'd have the car up with me), but last time I applied to google it took 8 months to work through their hiring process, so I wouldn't expect them to conflict. (Besides, given my previous experience with Google I wouldn't expect to _get_ the job, I'm mostly just amused.)

The _first_ time a Google recruiter called up and tried to hire me was over 15 years ago. (I took the phone call in the apartment I had when I worked at WebOffice, so 2001 or 2002.) I think I was on Google's radar because way back in 1999 when I wrote stock market investment columns the portfolio I covered included Yahoo, and I wrote an article mentioning I preferred Google's technology. And google sent me a t-shirt and a bunch of stickers, because they said it was their first stock market coverage. (This was 5 years before their IPO, they were still a "linux search" site in beta, I think I heard about 'em through slashdot. That's how long ago it was.) Or maybe the recruiter called me because of my posts to lkml, who knows?

I've never particularly wanted to move to California, although my reasons why (expensive and earthquakes) don't seem to apply to Tokyo for some reason. Huh. (At this point I suspect it's inertia.) But Google recruiters kept calling like clockwork every 6 months for the next few years.

Then ten years ago ChromeOS came out, which sounded like fun (this was _after_ I co-authored a paper on why Desktop Linux hadn't happened, so yay new approach with a hardware vendor behind it who could get preinstalls). So I followed Google's "apply to work on ChromeOS" link but selected the Dublin Office from the site selection pulldown because I'd never lived in Ireland and that also sounded like fun. This confused Google's hiring process (seriously, I break everything), so they didn't get back to me for a few months, but I was in the downtime between contracts and didn't mind. (Consulting meant I earned enough I could take time off between contracts, which is when I got most of my open source programming done. This is back before marriage led to big house and other people to support, or at least reassure that I know where the money is coming from next month).

Google's version of a cat's "when in doubt wash" seems to be "Site Reliability Engineer", which they could do in Dublin, and it sounded worth a try, setting off an odyssey of endless phone interviews, culminating in an all expenses paid trip to the Googleplex (my first time in Silicon Valley proper), and then deculminating in some sort of telepresence interview _after_ that in Google's Austin office (next door to Qualcomm, northwest corner of I35 and Mopac, and deserted except for a receptionist when I arrived) where somebody on the other end of a camera wanted me to write code in chalk on a blackboard. As I said, I confused them, and they spent a long time making up their mind...

Except they didn't. A full 8 months after I'd applied, when my bank account was getting kind of thin waiting for a decision (I'd have gone to work somewhere else months earlier but I was waiting to see what Google thought), they said I'd passed all the interview hurdles, my resume was sitting on the desk of whichever cofounder it was who personally approved all hires, but the position I'd applied for had been filled and I needed to restart the process from scratch.

I thanked them for their time and got on with my life.

The next google recruiter to call me 6 months later was confused about my status in their HR system. I explained my Interview Odyssey and resulting reluctance to reopen that can of worms, she put a note in my file, and they stopped calling for a while.

Shortly after I did my 2013 toybox talk about hijacking android for my own purposes to steer the computer industry, I got a call from another google recruiter (no, he hadn't seen my talk) and went "ok, why not" and went through the thing again, except I'd just finished up my 6 month contract at Cray in Minnesota and was spending a week with my sister and her 4 children before returning to Austin, and hanging out with small children exposes you to every stomach bug they pick up at school, so I had to cut the interview short to urgently visit the bathroom, and the Google guys decided _not_ to continue, and I went "ok" and got on with my life. Haven't heard from a google recruiter since.

Google merged toybox in 2015 and has been using it since, but toybox development's stalled badly as SEI struggled to stay afloat. As the company lost staff instead of staffing _up_ we all wound up doing 4 jobs apiece (the corollary to Brooks' Law I learned at timesys remains true, removing people from a project is as big a delay as adding them, you spend all your time on "knowledge transfers" and then the remaining people have to come up to speed on tasks the departed used to do) and the stress started affecting my health.

At the start of the year I went "this is the _second_ set of taxes I'm going to have to check my bank statements to see which paychecks they managed to make, _after_ dropping us to half pay", and when a recruiter offered me twice what SEI had paid back when we were full-time (so 4 times now even if they _did_ make every scheduled paycheck) I took it. (The email I got from Elliott talking about "the thing that replaces toybox" helped with my decision to sign the contract. The advantage of a 9-5 job in an office is you know when you're _not_ working, and can do open source stuff without guilt...)

I was tempted to apply to the linkedin thing in part because the idea of using Google's "20% time" to work on toybox was just too ironic. Google's never paid me a dime for toybox. Elliott bought me lunch once. And they gave me an "open source award" (along with a dozen other people) that came with a $250 gift card, but I had to go to payoneer's website to activate the card and the login credentials they sent me didn't work. I even poked the Google open source award coordinator to confirm the credentials, but never could log in and after enough failed tries it disables itself. (I still have the card in my wallet, probably expired by now.)

And yes, I'm aware 20% time no longer really exists, that's a whole 'nother rant (that links to December 1 but the topic continues through december 2, 4, 5, and 6, I should collate old blog entries into proper writeups someday. My todo list runneth over. I prepared and presented a proper talk on that topic at Flourish last year, but they never posted the recording.)

Anyway, the bit about the google job is moot because when I clicked through it went to an application page on (with the same info), and when I clicked on "apply" there Chrome gave an error page because the site "redirected me too many times". (I repeated this 3 times to be sure it wasn't transient, then got on with my life.)

I break everything. And I continue to confuse Google's recruity-bits.

Anyway, back on the road to Milwaukee.

January 19, 2018

Finally finished flushing the lkml and qemu-devel folders into "2017" sub-folders so thunderbird doesn't choke on the giant mboxes bigger than it can handle (making email download sit there and twiddle its thumbs for a couple minutes when the filters try to move the first message into that folder, sometimes making the smtp server time out).

And this meant I started reading qemu-devel at the start of january, and noticed Laurent Vivier pushing his m68k support patches again. Looks like seriously this time. (Yay!)

So I wander to my qemu directory, make sure it isn't locally patched (git diff), do a git pull, start to ./configure, kill it and do a make clean just in case, and...

$ make clean
  GEN     aarch64-softmmu/config-devices.mak.tmp
  GEN     aarch64-softmmu/config-devices.mak
  GEN     arm-softmmu/config-devices.mak.tmp
  GEN     arm-softmmu/config-devices.mak
  GEN     i386-softmmu/config-devices.mak.tmp
  GEN     i386-softmmu/config-devices.mak
  GEN     ppcemb-softmmu/config-devices.mak.tmp
  GEN     ppcemb-softmmu/config-devices.mak
  GEN     x86_64-softmmu/config-devices.mak.tmp
  GEN     x86_64-softmmu/config-devices.mak
  GEN     config-all-devices.mak
config-host.mak is out-of-date, running configure

And so on and so forth. It generated dozens and dozens of config-target.h and blah-commands.h files just so it could delete them!

Meanwhile, "git clean -fdx" took like 2 seconds. Except for the part where .gitignore can tell it to ignore files which don't get deleted, and I'm reluctant to try to add options to override that because qemu uses subtrees for dtc and stuff and I don't want to delete them.

Projects seem to have a natural lifecycle where they get so complicated fewer new developers come on board, and eventually they starve for resources when the existing batch ages out. The average age of linux developers is Linus's age, and he's something like 47 now...

January 18, 2018

I have a failure mode during software development, which is naming stuff strangely. I had to do a cleanup pass removing the "Nine Princes in Amber" references (a book series by Roger Zelazny) because after enough repetitions of the variable "pattern" I threw a "logrus" in there in self-defense and it spiraled from there.

Now I'm cleaning up ps, which has -o fields living in "struct strawberry" with the variable length char array at the end of course called "forever". This doesn't help anyone understand the code.

I'm banging on gzip right now and resisting calling the --rsyncable option --titanic during development.

All this should have been cleaned up and properly explained long ago, I've just been so drained trying to keep SEI afloat. And now I'm packing to move to Wisconsin for half a year.

January 17, 2018

The reason "sudo echo 0 9999999 > /proc/sys/net/ipv4/ping_group_range" doesn't work is the shell opens the redirect file before calling sudo, which means it does so as your normal user. Alas, putting quotes around the sudo arguments doesn't work either, because sudo doesn't re-parse the command line so it tries to run a single command with spaces and a > character in its name.

This is why I wind up doing sudo /bin/bash a lot.

January 16, 2018

Youtube cut off monetization of channels with less than 1000 subscribers. It's the hobbyist->employee->bureaucrat progression again. I should really give a proper _recorded_ version of that talk somewhere.

I gave it at Flourish. I was prepared, reasonably well rested, and gave a version I was proud of. They recorded it. The recording never went up. There is very little a conference can do to annoy me more than _promise_ to post a recording of a talk and then _not_ do it. Sadly common problem: Flourish screwed up the recordings both times I went there, LinuxCon Tokyo 2015 had a video camera that apparently wasn't on, Ohio LinuxFest pointed a video camera at me and then only posted audio....

Penguicon failing to record Neil Gaiman's "crazy hair" reading after which I got him to say "By Grabthar's Hammer, You Shall Be Avenged" into the microphone with NO TAPE IN THE MACHINE was the Science Fiction Oral History Society's fault, but it was at a conference I co-founded so that makes it my fault. I added a "new thing" to each year of Penguicon (year 4 was LN2 ice cream, we dumped the extra LN2 into the swimming pool sunday afternoon), and year 3 (I think?) I bought 5 MP3 lecture recorders with a promised 12 hour battery life and taped them down to the tables in each panel room. No idea what's happened to that since Matt Arnold drove away all the people who used to run it, I haven't been back in 10 years...

January 15, 2018

Ok, fixing ps -T. If I go "ps -AT" I get 13 hits for chrome (pid 1401 and 12 threads). But if I go "ps -T 1401" I get just one hit (pid 1401, no threads).

And done. The /proc layout repeats the thread under the pid, so /proc/123 will have all the process information for the parent, and then there's a /proc/123/task/123 that _also_ has it. There's a check to notice that and skip it when we're parsing threads, which was supposed to copy the parent pid into the child's PID slot when it doesn't skip it.

I.E. if the parent->PID and my->TID were equal, return. Else my->PID = parent->PID; The else bit was missing.

January 12, 2018

Signed the contract for the new job. I need to be in Milwaukee by the 22nd.

My next choice is do I drive up or fly up? I have southwest credit from cancelling my return flight from ELC last year (work flew me from LA straight to Tokyo instead, yes those trips were always on that short notice), and if I don't use that it expires soon. But if I drive up I'd have the car with me and can drive to see Fade on weekends. (It's a little under 5 hours drive each way, reasonable to drive up friday after work and drive back sunday evening. Flying each way probably takes _longer_ if you add in getting to the airport, through security theatre, and then public transit through minneapolis.)

Decisions, decisions...

January 11, 2018

I tried -rc7 in mkroot. The arm build grew a perl dependency again. The x86-64 build died because it couldn't find an ORC unwinder. Wheee!

New battery arrived! It's 6 cell rather than 9 cell but hopefully that means it's less fragile. (I ordered 2 9 cell batteries and wound up breaking both, they stick out awkwardly as a sort of footrest leaving the keyboard at an angle and only letting the screen fully open if everything aligns exactly. The 6 cell ones I've never broken (just worn out) and I can almost lay the screen flat back.)

Downloading email is _so_ much faster now I've cleaned out LKML. I'm still shoveling out qemu-devel, and then buildroot's got over 100k messages in it that should probably get moved out of the folder my mail filters are dumping new messages into. (You can have an enormous mbox that doesn't get USED during download and it won't slow down email downloading.)

Yes, this is related to the "I have to download from gmail via pop because imap is far more broken".

January 10, 2018

Met with the recruiters for the new job, picked up the pile of paperwork to sign. Feeling kind of morose, like I'm letting SEI down.

I've spent 3 years working for Jeff, which I think is longer than I've been at any other job. (Even beat out my first job at IBM by a few months.) I believe in what SEI's trying to do, and stayed a year and a half longer than they could reliably pay me, and I'd happily go _back_ there after this contract... if there's anything left. Jeff insists that there's a new contract coming soon that gives us a change of direction and fresh funding, except the fix for everything has been Real Soon Now for 2 years. This is the THIRD set of investors that have deliberated at length about giving us money. The stress is killing me, I need a break.

Alas, you can't fund from operations targeting utilities without bootstrapping to a large size and going through standards compliance nonsense. To get around that Jeff partnered with a big company that screwed us over for internal big company political reasons, and then he tried to put together a funding round based on another big company that was once _again_ paralyzed by big company internal politics. Disruptive technology 101: a large existing corporation cannot commercialize anything new in-house, it can only buy it once it's already proven.

This is not on the tech side of the house, I dunno how to fix it. Make a product and sell a product to people who will use the product I understand, navigate corporate status/dominance games where everything is some shade of affinity fraud and nobody involved in the decision making will be personally affected by the outcome except politically... that's not a domain I've spent a lot of time building skills in, because I sympathize with the people polishing guillotines every time it comes up. During the entire "postwar boom" period the top tax rate in the USA was 91%. (In 1963 they lowered it to 70%. Reagan lowered it to 28%, at which point our deficit exploded and corporations stopped investing in anything. Taxing profits makes companies spend money on research and training and all sorts of things that won't impact next quarter's numbers but are better than seeing the money confiscated by the feds. Lowering taxes makes them stop making any long-term investments in their business, their fig leaf being they can pile up cash and buy some other company that did all the right things later, the reality being they legally embezzle it all. Why is this hard to understand? This gilded age royal court nonsense is a _sickness_. It is symptomatic of an unhealthy economy, these are parasites feasting.)

Sigh. Happier thoughts.

My sad little netbook is plugged into wall current. It'll run without a battery, but isn't happy about it.

Thunderbird's terrible at dealing with large mbox folders, where "large" is "a year of linux-kernel or qemu-devel". So I've created "lkml-2017" and "qemu-2017" subfolders and am once again copying all the year's messages into them and compacting stuff. It's REALLY slow.

You click on the first message, scroll down to the end of the range you'd like to copy (too much and it triggers the OOM killer, I can get away with maybe 20k each pass), shift-click on the ending message, then wait multiple minutes for the highlight to happen, then right click on any of the highlighted messages and wait the same amount again for the pop-up menu to appear, then navigate to the folder you want under "copy to->", click, and go to lunch.

If you've highlighted more than about 25,000 messages the copy will complete (and it deletes them as it goes), but afterwards thunderbird does some insane processing that exhausts all memory, drives the system into swap, and eventually triggers the OOM killer to kill thunderbird. (That's assuming you don't think the system is hung because your mouse cursor takes 3 minutes to respond to attempts to move it.)

If it's less than 25k messages it just takes forever to complete. As in I went to the grocery store and it wasn't done when I got back. Did I mention 25k messages is maybe 2 months of lkml traffic? It's something like 350 messages a day, plus bursts of ignorable bot-generated nonsense. (Your patch failed to build against the -tip tree! Why are you mailing the list? The giant backports against -stable patch series need their own list, but nobody would read it, so...)

Mostly I read the web archive, but I need the messages to reply to.

January 9, 2018

Dropped Fade and Adverb off at the airport. New semester starting, she's going back to her dorm in Minneapolis.

Sigh. LWN's "is it time for open processors" article (in response to meltdown and spectre) doesn't even mention j-core. It mentions openrisc, and clones of powerpc and sparc, and links to RISC-V's press release. I guess we look too dead to matter.

(I _cannot_ get excited about RISC-V, it strikes me as Open Itanium. They promised everything to everyone and are cashing very large checks, and I see no obvious reason for it to displace x86 or arm? And that's _with_ meltdown and spectre. Maybe china will standardize on it by fiat, but didn't they already try that with a mips fork?)

Of course j-core's still a nommu processor, so you don't _need_ a memory protection bypass because there's no protection to bypass, but... Rich hasn't posted to the linux-sh list in months, and it has an outstanding futex bug for how many releases? QEMU's sh4 serial console's been broken for ages and still not fixed? Our last VHDL code release tarball was 2016 (did that support SMP? I don't remember). We never got even _part_ of the VHDL code up on github...

Jen says that Jeff had a good meeting with the new investors yesterday, but they didn't sign a check at the meeting. Just like we didn't get actual money from the december meeting, or the november meeting, or the october meeting. Not even the money for the "statement of work" that was supposed to tide us over until the end of last year. (I.E. it's a quarter's worth of money we've already spent a quarter trying to get.)

I can't make this happen by myself.

Heard back from the recruiter about the Milwaukee gig. They want me, but the recruiter was trying to talk down my quoted hourly rate at the last minute? Confused.

I had my netbook closed on a bench, it fell off about a foot onto a tile floor, and the battery case cracked in 3 places. Wheee. The screen no longer opened because it was hitting a piece of cracked battery case, and pulling it off took off about half the plastic.

Running it without a battery right now. Fade's ordered me a new one. (Did I mention I know too much about how the sausage is made to be comfortable typing my credit card info in to a website _ever_? I'm aware having someone else on a joint bank account do it does not improve matters, and yet.)

January 8, 2018

And my netbook finally rebooted. I tried to reproduce a mkroot issue which meant a script ran oneit as root, which couldn't attach to the requested console, and on the way out it rebooted the system.

Todo item: fix that.

January 7, 2018

I've found the jpop group responsible for Miss Kobayashi's Maid Dragon's opening and closing music. It is All The Bouncy.

Appending it to my normal music playlist put it right after Demi Lovato's cover of "Take me to Church" and the switch between the two has gears grinding.

Listening to colorado video about demand charges being one of the big drivers for pairing battery walls with solar and going for "complete curtailment". I.E. collect extra solar in your battery wall, and when your batteries are full just switch off the solar panels. Never try to feed anything upstream into the electrical grid. Apparently getting to 80% of this is easy, getting to 100% is hard.

January 6, 2018

Fade took me to Dead Lobster, by which I mean I drove and she paid. Took the hybrid loaner car, which remains deeply shiny. I looked up its price (they're so clearly letting me use this thing as a form of advertising) and it's $29k. That's for last year's model, not the new one. It's not outside what I could afford, but it's outside my comfort zone.

When I was 7 years old I got all excited about the idea of compound interest, and was pretty sure I could retire at 30 (or at least get to the point where I earned more in interest than in paycheck), and I was on course to do that circa 1999 or so (earning $50/hour and offered $75/hour to stay, plus owned two condos that went up $20k each in price while I owned them, not bad for a 27 year old), but over the years instead of saving and investing I gave time and money to friends and family in need. I'm doing ok, but I'm not close to retired.

Take SEI: I've been on half pay there for a year and a half, and they haven't even made those reliably. They're making about 2 out of 3 paychecks these days, which means I'm down to 1/2*2/3 = 2/6 = 1/3 pay which is not sustainable with this house even without the flooding. And that's on TOP of the fact I could make twice that fulltime hourly rate if I went back to consulting, so I'm choosing to earn 1/6 my market rate. I don't care about money, but I do care about a _lack_ of money, and things like social security and medicare won't survive the GOP, so I need to provide for my own retirement. After ten years of marriage Fade's never had kids so I'm pretty sure that's not happening at this point, and she's up in Minnesota, so I might as well go back to the Lucrative Nomad lifestyle before age discrimination kicks in too hard. (It's easy to find work if you go where the work is and do what they pay you to do. I've worked from home on stuff I find interesting, but the stress is getting to me.)

At the start of the new year I decided to look around. I did a phone interview for a gig in Milwaukee on Thursday, and I'm told I'll hear back on that Monday. I very much want to see SEI succeed but I can't make customers pay their bills or investors follow through on their promises, and they're not really sponsoring toybox development anymore...

I got a reminder about the CELF deadline (which has been extended to tuesday). Do I want to commit to travel at this point? Hmmm...

Where did I leave off... ping.c! (Although if I'm to make proper use of that cortex-m board before innoflight asks for it back again, I should do tftp/tftpd since that tftpboots.)

I need to check timestamps in fractions of a second, and I vaguely recall I created a millitime() function which returns current time in milliseconds (for the pun if no other reason: it's millitime). But it's not in lib, it's in ps.c, which means I have a second file wanting to use it so I should move it to lib/lib,c, and looking at that I trivially cleaned up the last function there, environ_bytes(). Except that function should really take environ as its argument, and thus be able to iterate over argv[] too. But I shouldn't go down that tangent just _now_...

Hmmm, this implies that xparsetime() from yesterday should probably return milliseconds too. (When launching command line binaries, that's about the resolution you can expect. You need nanosecond accuracy for things like filesystem timestamps where you're reproducing a previous reading exactly, but not delta-from-current with pages faulted in from storage and a potential call to the dynamic linker in there before any of your code runs. Again, todo item for later.

Sigh. It would be nice if posix made proper use of C's object orientation. Specifically, in struct sockaddr and friends, wouldn't it be nice if:

struct sockaddr {
  short family;
  // whatever else
};

struct sockaddr_in {
  struct sockaddr sa;
  blah blah blah;
};

struct sockaddr_in6 {
  struct sockaddr sa;
  blah blah blah;
};

Right now you can typecast either to struct sockaddr and it works fine, but it's not obvious what portion of that you can use. With the above you could &(sockaddr_in->sa) and not even have to typecast. (You'd still have to typecast it back once you knew what the type was, the pointers will be the same because a pointer to a struct is a pointer to the first member of the struct, there can be no padding or alignment space at the beginning. But right now it's implicit, not explicit, and if I declare a function to accept "struct sockaddr *" you have to typecast to call it with sockaddr_in or sockaddr_in6. At which point it might as well just be a void *, because that's what I'm going to typecast it TO to make the compiler shut up.)

There are ways to declare your data so "I know what I'm doing, let me do it" does not require hitting the compiler with a rock, but the network stack doesn't do it that way.

(But no, people think you need C++ for that kind of thing. You very much don't. C++ only makes things worse. Because they don't teach how to do it right, and the berkeley guys especially spent their first decade doing CRAZY THINGS. Everything's a file... except network interfaces, those aren't. Ken and Dennis were very good at finding the "sweet spot" between not enough capability and too much complexity, and I greatly admire what they accomplished. Many of their successors in BSD and AT&T, not so much...)

January 5, 2018

Cycled back around to ping. Specifying time between ping instances means you do fractions of a second, but I'm trying to restrict the use of floating point in the code and keep it under #ifdefs (to work on really tiny systems). So my infrastructure for that is xparsetime() (originally for sleep) which returns seconds and fractional seconds in two longs, and only uses floating point when the ifdefs are defined.

I want to add -i, which needs fractional seconds, and at the moment that means I need to turn its optargs from a number to a string (# to :) and call xparsetime() on the string myself. That raises the question of whether I should do the same for -s and -W, so the time parsing is consistent. But neither of those particularly care about fractional seconds, and the OTHER thing the optargs number parsing does is range checking and default value assignment. Having to do that manually raises the expense a bit.

Speaking of range checking, if you _do_ feed a negative time to xparsetime() the non-float path errors, and the float path returns the negative value, except if it's -0.5 then you have to check the seconds and fractions separately to catch that it's a negative value, and really I should just check it in the strtod path. Alas, then it needs another error message which seems wasteful. Also strtod() can skip arbitrary spaces and allows a + at the front so checking for - at the start is more complicated than it seems... (So many corner cases.)

I could add an xparsetime() type to lib/args.c but there aren't really enough users to justify it? The other big one is sleep, but there it's an optarg, not a flag argument, so sleep_main() has to parse it anyway, and in GLOBALS it would still have a sizeof(long) slot needing to fit 2 fields, and a struct that fits in 32 bits on 32 bit systems would have to be 2 short ints so it couldn't do nanoseconds, which eliminates about half the other uses.

Ah, I see: if you go "sleep -1" it says 1 is an unknown option, that's probably why I didn't care at the time. Of course you can do "sleep ' -1'" and strtod() eats the leading space and then parses it and returns a negative number, although sleep then returns immediately so it doesn't hurt anything...

Sigh, ok. Keep -w and -W doing the optargs # integer parsing, and have -i do something different.

January 4, 2018

All the bugs in the world. I updated my offline backups.

Wouldn't it be nice if we had an organization like the NSA that was supposed to find and publish the sort of vulnerabilities that make Hardison from Leverage or Finch from Person of Interest's ability to hack into any computer anywhere NOT FICTION? Instead of hoarding them so it can keep their budgets unlimited in perpetuity by blackmailing future politicians with the porn they browsed as teenagers, and treating any other possible use of the data (such as law enforcement) as compromising their sources? Wouldn't that have been nice.

I know it sounds crazy blaming those sorts of guys for vulnerabilities that go back before September 11, 2001. But we know they're _trying_, the counter-argument is they're not as _effective_ as they'd like to be.

January 3, 2018

The Call For Papers deadlines for both ELC and TXLF are coming up. I'm still sort of "too tired, dowanna travel, I should just podcast", but at the same time I should show the flag and I do have various things I should probably talk about: 0BSD and licensing stuff, mkroot, making android self-hosting... Heck, I could do a panel of just war stories. Haven't bothered to write up any proposals yet though.

Dropped the car off at Howdy Honda. In addition to the crunchy noises from the suspension when it hits uneven road (cv joint?), it's now making growling noises when it's cold and you turn the wheel. (Power steering pump?) It's a 2002 car, about 16 years old now (we bought it used). I've been waiting for app-summonable self driving car services, but that's like 2 more years for early adopters and maybe 5 to be ubiquitous in urban centers. (And in about 7 gasoline volume declines enough that the profit margin for refining, distributing, and selling it with the current infrastructure and transportation network goes negative, at which point a car running on gasoline isn't quite so useful. And yes the auto industry knows this so resale value's likely to decline well before then, but "when does the herd break and run" is always a hard financial question. All the manufacturers are switching over to electric cars now, but the first generation models are still too expensive for my tastes and when the self-driving subscription fleets show up why own your own? Don't sink a well when city water's 5 years away from reaching your neighborhood...)

So yeah, waiting out the awkward adolescence of yet another industry. I remember the days of "when can I get an ISP instead of dialing in to my university or work", "when can I get broadband instead of dialup", "when is my cell phone good enough to stop paying for a landline", "should I just have a laptop and not bother having a desktop", "when can we switch from netflix mailing us DVDs to just the streaming", "hard drive or ssd"...

These days there's "when to get rooftop solar and a battery wall", "when can I get a development environment on my phone/tablet so I don't need a PC anymore"... There's usually some case where "I know where it's GOING but is it quite HERE yet", and a car is a large purchase that kind of imposes itself upon you at times...

January 2, 2018

Jeff just asked me to work on an 8-bit chip design with him, but I'm already stretched too thin on the stuff I'm already doing. The Big Push in November involved GPS, helping arrange investor meetings, trying to track todo items for the whole company, turtle manufacturing stuff, and of course the endless uncertainty. (During investor prep Jeff kept gaming out how the bloq guys might screw us over or flake, so we'd be prepared. About half the time I didn't know where I'd be sleeping the next day.)

Jen not showing up wasn't Jeff's fault but it meant plans changed and I had to try to figure out what Jen does and maybe try to come up to speed on the existing customer phone calls (maintaining their trickle of R&D funding) and see if Weekly Engineering Call With Jeff could replace Daily Engineering Call With Jen if she flaked completely. That's a management job I got sucked into to fill a vacuum.

Jeff tried to sit me down and teach me enough VHDL to help with the ASIC tapeout, despite niishi and arakawa, who have years of experience in it, _not_ being up to helping with the tapeout. He tried very hard to get me to track what RiscV was doing and I _cannot_ bring myself to care, it smells too much like an open source version of Itanium made from hype and overcommitted promises and absorbing all the funding in the world to be less interesting than x86, let alone arm. We met with a nice lady at a university who's doing a toy processor. We sat down to try to sort the instruction bit patterns of j-core so we could redo the front end more efficiently, but didn't have time to finish that. We started to triage the build system for a github release, but didn't have time to finish that. We talked about hooking up the GPS-stabilized nanosecond accurate clock to the userspace signal monitoring package, but didn't have time to finish that...

All this has put me way over mental budget on my normal ecosystem (which used to be aboriginal linux+busybox and is now toybox+mkroot/aosp). Trying to turn android into a self-hosting development environment is STALLED HARD. (Politics: the pixel 7 tablet is discontinued so all the google in-house testing systems are now chromebooks; chromeos runs android apps but what does this mean for testing android base layers? How is development shifting inside google? I haven't had a chance to ask. Whatever it is is happening without me and I'll find out 6 months later when it's too late to provide feedback that might influence any of the decisions. Oh well.)

I haven't done half of what I need to on SEI's Board Support Package because that hasn't really been my job in forever. The website is in pieces and the mailing list is silent because I'm not sure what I'm allowed to _say_. The website needs to turn into kernel Documentation/ files. The arch/sh and linux-sh stuff is badly stale upstream and in _theory_ that's Rich's task but in practice he hasn't got cycles for it (and he only cares about testing on real hardware, even though QEMU is what the upstream kernel guys can actually regression test against; "the serial console's been broken for most of a year and we never fixed it" translates to a perception that "this platform is dead"). I haven't kept up with new kernel developments in general for the quarterly releases, and I've had patches I've wanted to push upstream for a year, but haven't.

I have a significant issue that my own projects look dead to other people. I haven't posted to the mkroot list since October. I spent some time getting mkroot closer to parity with aboriginal in terms of supported targets (the reboot was required by swapping out the toolchain for musl-cross-make) but I still haven't got the native toolchains working, let alone the distcc trick or the build control image automation layer. The last mkroot release was in June, using a 4.11 kernel (which is over a year old now).

I spent part of this vacation getting my technical development blog caught up closer to current, which means I've gotten it up to mid-September. (I have daily-ish rough draft notes-to-self in a text file but it needs significant editing and expansion to make sense to anyone else. Plus html tags and links and proofreading.)

I've spent the rest of this vacation trying to do enough toybox work the project doesn't look dead to the android guys. I got the smallest two commands promoted out of pending and I'm trying to deal with the new submission of fmt.c (from the android guys).

I'm sitting on the west coast Embedded Linux Conference call for papers and haven't submitted anything yet because I'm _tired_. It would be really good to show the flag there but my talk there last year and the one before that were incoherent because I was too exhausted to prepare and give a good talk. (And given my baseline fatigue and redeye flights one day was NOT enough to recover from jetlag in either case.) It hasn't gotten better since.

It looks like I can either stop doing open source development, or I can get a day job doing something less taxing which I can stop thinking about when I leave the office.

Sigh. Jeff talked about how great sitting down and grinding is, and I WANT to do that but I CAN'T, because it's a constant stream of interruptions, swap-thrashing between too many projects that never produce output and idle so long between bursts that when you go back to them you spend all your time trying to figure out where you left off and why, because you've forgotten all the context and have to reverse engineer your own code. This has been the failure mode of toybox development for the past couple years, now it's becoming the failure mode of EVERYTHING, because I can't focus and when I do carve out time I'm too exhausted to make good use of it.

Random example: waiting at the airport for the flight back from Tokyo, I caught up another couple weeks on the j-core news page. Triaged, edited, and uploaded. Haven't touched it since, so of course it's now further behind than it was when I did that. And of course there's no https on that website even though doing so is like half a day's work. (Well, for Rich. Probably about 3 days for me: the update scripts are fiddly, and there's a dozen implementations with no obvious winner. The one Let's Encrypt provided/recommends is overcomplicated crap, so many people have made their own, but _because_ there's an "official" one none of the others has coalesced a big community around it yet and become the obvious one to use.)

I remember when Jeff and I talked about moving all the servers to Tokyo. A year or so back, we bought a USB drive to do backups to, and he had me install ubuntu on an old 32-bit machine he had lying around. (Might have been the end of the trip with Tokyo Big Sight?) Out of curiosity I just ssh'd into it and did a sudo aptitude update, and it has 58 packages it wants to upgrade. I'm afraid to do the corresponding upgrade because if it breaks, what do I do? The person Jen tried to transfer wale's sysadmin responsibilities to was... me. The servers are in the back of an office in Canada, I'm in Texas.

I'm not sure I'm still making a difference here.

January 1, 2018

Happy new year.

Next low hanging fruit pending command to clean up, sorting by source file size, is logger.c. The main reason it's in pending is it depends on CONFIG_SYSLOGD. That kind of cross-command dependency is unpleasant, I try to either merge them into the same .c file (a la ps/top/pgrep) or move whatever they share to lib.

Since the actual function logger wanted out of syslogd was only a few lines long, I just inlined it in the two calls in logger, did the other obvious cleanups, and tried a test build... at which point I noticed the next problem.

The function I inlined is iterating through two arrays, facilitynames and prioritynames, which are defined in sys/syslog.h. But you have to #define SYSLOG_NAMES before #including that in order to get them. Why? Because the #ifdef in the header instantiates the arrays, which means if you #include it from two places you get two copies of each array.

The really STUPID part is I can't #include it from one file and then extern reference it elsewhere because the TYPE is defined in the same #ifdef.

One of Rich Felker's coworkers complained about this before, and clearly this was a case of glibc being stupid, but it's one of those things that shipped and now fixing it would break existing programs.

Back to 2017