changeset 1580:455f8dec31a0

Large rewrite of about page, probably still needs cleanup.
author Rob Landley <rob@landley.net>
date Sun, 17 Feb 2013 22:17:26 -0600
parents bd385934451f
children f9b0ebfbdd1b
files www/about.html
diffstat 1 files changed, 529 insertions(+), 131 deletions(-) [+]
line wrap: on
line diff
--- a/www/about.html	Thu Jan 10 02:49:32 2013 -0600
+++ b/www/about.html	Sun Feb 17 22:17:26 2013 -0600
@@ -1,141 +1,72 @@
 <html>
-<title>About Aboriginal Linux</title>
+<title>Ab Origine - Latin, "From the beginning".</title>
 <body>
 <!--#include file="header.html" -->
 
+<b><h1>Ab Origine - Latin, "From the beginning".</h1></b>
+
+<table border=1 width=100%><tr><td bgcolor="#C0C0FF">
+<ul>
+<li>Build the simplest linux system capable of compiling itself.</li>
+<li>Cross compile it to every target supported by QEMU.</li>
+<li>Boot it under QEMU (or real hardware).</li>
+<li>Build/test everything else natively on target.</li>
+</ul>
+</td></tr></table>
+
 <b><h1><a href=documentation.html>What is Aboriginal Linux?</a></h1></b>
 
-<blockquote>
-<table border=1><tr><td bgcolor="#C0C0FF">
-<p>Aboriginal Linux is a set of tools to build custom virtual machines.
-It lets you boot virtual PowerPC, ARM, MIPS and
-<a href=screenshots>other exotic systems</a> on
-your x86 laptop (using an emulator such as QEMU).  These virtual system
-images provide a simple development environment within which you can compile
-software and run the result.</p>
-</td></tr></table>
+<h2>Creating system images.</h2>
 
-<p>Aboriginal Linux has an obvious niche within the embedded community, but
-has many other uses as well:</p>
-
-<ul>
-<li><p><b>Allow package developers and maintainers to reproduce and fix bugs
-on architectures they don't have access to or experience with.</b></p>
-
-<p>Bug reports can include a link to a system image and a
-reproduction sequence (wget source, build, run this test).  This provides
-the maintainer both a way to demonstrate the issue, and a native
-development environment in which to build and test their fix.</p>
+<p>Aboriginal Linux is a shell script that builds the smallest/simplest
+linux system capable of rebuilding itself
+from source code. This currently requires seven packages: linux,
+busybox, uClibc, binutils, gcc, make, and bash. The results are packaged into
+a system image with shell scripts to boot it under
+<a href=http://qemu.org>QEMU</a>. (It works fine on real hardware too.)</p>
 
-<p>No special hardware is required for this, just an open source emulator
-(generally QEMU) and a system image to run under it.  Configure and make
-your package as normal, using standard tool names (strip, ld, as, etc).
-You can even build and test on a laptop in an airplane, without internet
-access.</p>
-</li>
-
-<li><p><b>Build arbitrarily complex Linux distributions without messing with
-cross compiling.</b></p>
-
-<p>The point is to separate _what_ you build from _how_ you build.  Build
-systems have enough to do handling package dependencies and configuration
-without entangling cross compiling into it.  If one system builds the right
-set of packages and another system works on the right type of hardware, life
-is much easier if they can work together to produce a single result.</p>
+<p>The build supports most <a href=architectures.html>architectures</a>
+QEMU can <a href=screenshots>emulate</a> (x86, arm, powerpc,
+mips, sh4, sparc...). The build runs as a normal user (no root access required)
+and should run on any reasonably current distro, downloading and compiling its
+own prerequisites from source (including cross compilers).</p>
 
-<p>If you need to scale up development, Aboriginal Linux lets you throw
-hardware at the scalability problem instead of engineering time, using distcc
-acceleration and distributed package build clusters to compile entire
-distribution repositories on racks of cheap x86 cloud servers.</p>
-</li>
-
-<li><p><b>Automated cross-platform regression testing and portability auditing.</b></p>
+<p>The build is modular; each section can be bypassed or replaced if desired.
+The build offers a number of <a href=/hg/aboriginal/file/tip/config>configuration
+options</a>, but if you don't want to run the build yourself you can download
+<a href=downloads/binaries>binary system images</a> to play with, built for
+each target with the default options.</p>
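+
+<p>A minimal sketch of running the build yourself (the target name below is
+just an example; see the <a href=architectures.html>architectures page</a>
+for the full list):</p>
+
+<blockquote><pre>
+# unpack the Aboriginal Linux source tarball, cd into it, pick a target:
+./build.sh i686
+# this runs each build stage in order and, if all goes well, leaves a
+# packaged system image tarball ready to boot under QEMU
+</pre></blockquote>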
 
-<p>Aboriginal Linux lets you build the same package across multiple
-architectures, and run the result immediately inside the emulator.  You can
-even set up a cron job to build and test regular repository snapshots of a
-package's development version automatically, and report regressions when
-they're fresh, when the developers remember what they did, and when
-there are few recent changes that may have introduced the bug.</p></li>
-
-<li><p><b>Use current vanilla packages, even on obscure targets.</b></p>
+<h2>Using system images.</h2>
 
-<p>Embedded hardware often receives less testing than common desktop and server
-platforms, so regressions accumulate.  This can lead to a vicious cycle where
-everybody sticks with private forks of old versions because making the new
-ones work is too much trouble, and the new ones don't work because nobody's
-testing and fixing them.  The farther you fall behind, the harder it is to
-catch up again, but only the most recent version accepts new patches, so
-even the existing fixes don't go upstream.  Worst of all, working in private
-forks becomes the accepted norm, and developers stop even trying to get
-their patches upstream.</p>
-
-<p>Aboriginal Linux uses the same (current) package versions across all
-architectures, in as similar a configuration as possible, and with as few
-patches as we can get away with.  We (intentionally) can't upgrade a package
-for one target without upgrading it for all of them, so we can't put off
-dealing with less-interesting targets.</p>
+<p>Each system image tarball contains a wrapper script <b>./run-emulator.sh</b>
+which boots it to a shell prompt. (This requires the emulator QEMU to be
+installed on the host.) The emulated system's /dev/console is routed to stdin
+and stdout of the qemu process, so you can just type at it and log the output
+with "tee". Exiting the shell causes the emulator to shut down and exit.</p>
 
-<p>This means any supported target stays up to date with current packages in
-unmodified "vanilla" form, providing an easy upgrade path to the next
-version and the ability to push your own changes upstream relatively
-easily.</b></p>
-</li>
-
-<li><p><b>Provide a minimal self-hosting development environment.</b></p></li>
-
-<blockquote><p>Perfection is achieved, not when there is nothing more to add,
-but when there is nothing left to take away." - Antoine de Saint Exupery</p>
-</blockquote>
-
-<p>Most build environments provide dozens of packages, ignoring the questions
-"do you actually need that?" and "what's it for?" in favor of offering
-rich functionality.</p>
+<p>The wrapper script <b>./dev-environment.sh</b> calls
+run-emulator.sh with extra options to tell QEMU to allocate more memory,
+attach 2 gigabytes of persistent storage to /home in the emulated system,
+and to hook distcc up to the cross compiler to move the heavy lifting of
+compilation outside the emulator (if distccd and the appropriate cross
+compiler are available on the host system).</p>
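+
+<p>Usage is the same as run-emulator.sh (a sketch; on first run it should
+create a backing file for the persistent /home next to the script):</p>
+
+<blockquote><pre>
+cd system-image-armv5l
+./dev-environment.sh
+# same shell prompt as run-emulator.sh, but with more memory, persistent
+# storage on /home, and (if distccd and the matching cross compiler are
+# installed on the host) distcc handing compiles to the host cross compiler
+</pre></blockquote>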
 
-<p>Aboriginal Linux provides the simplest development environment capable
-of rebuilding itself under itself.  This currently consists of seven packages:
-busybox, uClibc, linux, binutils, gcc, make, and bash.  (We include one more,
-distcc, to optionally accelerate the build, but install it in its own
-subdirectory which is only optionally added to the $PATH.)</p>
-
-<p>This minimalist approach makes it possible to regression test for
-environmental dependencies.  Sometimes new releases of packages simply won't
-work without perl, or zlib, or some other dependency that previous versions
-didn't have, not because they meant to but because they were never tested in
-a build environment that didn't have them, so the dependency leaked in.</p>
-
-<p>By providing a build environment that contains only the bare essentials
-(relying on you to build and install whatever else you need), Aboriginal
-Linux lets you document exactly what dependencies packages actually require,
-figure out what functionality the additional packages provide, and measure
-the costs and benefits of the extra code.</p>
-</li>
+<p>The wrapper script
+<b>./native-build.sh</b> calls dev-environment.sh with a
+<a href=control-images>build control image</a> attached to /mnt in the emulated
+system, allowing the init script to run /mnt/init instead of
+launching a shell prompt, providing fully automated native builds. The "static
+tools" (dropbear, strace) and "linux from scratch" (a chroot tarball) builds
+are run each release as part of testing, with the results <a href=bin>uploaded
+to the website</a>.</p>
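+
+<p>Roughly (the control image filename is an example; the
+<a href=control-images>build control images</a> page describes the real ones
+and how to build them):</p>
+
+<blockquote><pre>
+cd system-image-armv5l
+./native-build.sh lfs-bootstrap.hdc
+# boots the emulator, runs the control image's /mnt/init instead of a
+# shell prompt, and exits when the automated build finishes
+</pre></blockquote>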
 
-<li><p><b>Document how to put together a development environment.</b></p>
-
-<p>The build system is designed to be readable.  That's why it's written in
-Bash (rather than something more powerful like Python): so it can act as
-documentation.  Each shell script collects the series of commands you need
-to run in order to configure, build, and install the appropriate packages,
-in the order you need to install them in to satisfy their dependencies.</p>
-
-<p>The build is organized as a series of orthogonal stages.  These are called
-in order from build.sh, but may be run (and understood) independently.
-Dependencies between them are kept to a minimum, and stages which depend on
-the output of previous stages document this at the start of the file.</p>
-
-<p>The scripts are also extensively commented to explain why they
-do what they do, and there's design documentation on the website.</p>
-</li>
-</ul>
-
-<p>For more information, see <a href=documentation.html>the documentation
-page</a>.</p>
-</blockquote>
+<p>For more information, see <a href=FAQ.html#where_start>Getting Started</a>
+or the presentation slides
+<a href=http://speakerdeck.com/u/mirell/p/developing-for-non-x86-targets-using-qemu>Developing for non-x86 Targets using QEMU</a>.</p>
 
 <b><h1><a href=downloads>Downloading Aboriginal Linux</a></h1></b>
 
-<blockquote>
 <table border=1><tr><td bgcolor="#c0c0ff">
 <p><a href=downloads/binaries>Prebuilt binary images</a> are available
 for each target, based on the current Aboriginal Linux release.  This
@@ -143,17 +74,14 @@
 chroot, and system images for use with QEMU.</p>
 </td><tr></table>
 
-<p>The <a href=downloads/README>binary README</a> describes each tarball.
-The <a href=news.html>release notes</a> explain recent changes.</p>
+<p>The <a href=downloads/binaries/README>binary README</a> describes each
+tarball. The <a href=news.html>release notes</a> explain recent changes.</p>
 
 <p>Even if you plan to build your own images from source code, you should
 probably start by familiarizing yourself with the (known working) binary
 releases.</p>
 
-</blockquote>
-
-<b><h1><a href=http://landley.net/hg/aboriginal>Development</a></h1></b>
-<blockquote>
+<b><h1><a href=/hg/aboriginal>Development</a></h1></b>
 
 <table border=1><tr><td bgcolor="#c0c0ff">
 <p>To build a system image for a target, download the
@@ -183,15 +111,485 @@
 dependencies.  Each layer can be either omitted or replaced with something
 else.  The list of layers is in the <a href=README>source README</a>.</p>
 
-<p>The project maintains a <a href=http://landley.net/hg/aboriginal>development repository</a>
+<p>The project maintains a <a href=/hg/aboriginal>development repository</a>
 using the Mercurial source control system.  This includes RSS feeds for
-<a href=http://landley.net/hg/aboriginal/rss-log>each checkin</a>
-and for <a href=http://landley.net/hg/aboriginal/rss-tags>new releases</a>.</p>
+<a href=/hg/aboriginal/rss-log>each checkin</a>
+and for <a href=/hg/aboriginal/rss-tags>new releases</a>.</p>
 
 <p>Questions about Aboriginal Linux should be addressed to the project's
 maintainer (rob at landley dot net), who has a
-<a href=http://landley.net/notes.html>blog</a> that often includes
+<a href=/notes.html>blog</a> that often includes
 notes about ongoing Aboriginal Linux development.</p>
+
+<b><h1>Design goals</h1></b>
+
+<p>In addition to implementing the above, Aboriginal Linux tries to
+support a number of use cases:</p>
+
+<table border=1><tr><td bgcolor="#c0c0ff">
+<ul>
+<li>Eliminate the need for cross compiling</li>
+<li>Allow package maintainers to reproduce/fix bugs on more architectures</li>
+<li>Automated cross-platform regression testing and portability auditing.</li>
+<li>Use current vanilla packages, even on obscure targets.</li>
+<li>Provide a minimal self-hosting development environment.</li>
+<li>Cleanly separate layers</li>
+<li>Document how to put together a development environment.</li>
+</ul>
+</td></tr></table>
+
+<ul>
+<li><p><b>Eliminate the need for cross compiling.</b></p>
+
+<p>We cross compile so you don't have to: Moore's Law has
+made native compiling under emulation a reasonable approach to cross-platform
+support.</p>
+
+<p>If you need to scale up development, Aboriginal Linux lets you throw
+hardware at the scalability problem instead of engineering time, using distcc
+acceleration and distributed package build clusters to compile entire
+distribution repositories on racks of cheap x86 cloud servers.</p>
+
+<p>But using distcc to call outside the emulator to a cross compiler still
+acts like a native build. It does not reintroduce the
+complexities of cross compiling, such as keeping multiple
+compiler/header/library combinations straight, or preventing configure from
+confusing the system you build on with the system you deploy on.</p>
+</li>
+
+<li><p><b>Allow package developers and maintainers to reproduce and fix bugs
+on architectures they don't have access to or experience with.</b></p>
+
+<p>Bug reports can include a link to a system image and a
+reproduction sequence (wget source, build, run this test).  This provides
+the maintainer both a way to demonstrate the issue, and a native
+development environment in which to build and test their fix.</p>
+
+<p>No special hardware is required for this, just an open source emulator
+(generally QEMU) and a system image to run under it.  Use wget to fetch your
+source, then configure and make your package as normal using standard tool
+names (strip, ld, as, etc). You can even build and test on a laptop in an
+airplane without internet access (10.0.2.2 is QEMU's alias for the host's
+127.0.0.1, so the source can be served from the host itself).</p>
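+
+<p>A hypothetical reproduction sequence (the package name and URL are
+placeholders) might look like:</p>
+
+<blockquote><pre>
+# inside ./dev-environment.sh on the reporter's target architecture:
+wget http://example.org/somepackage-1.0.tar.gz
+tar xvzf somepackage-1.0.tar.gz
+cd somepackage-1.0
+./configure
+make
+make check    # or whatever test demonstrates the issue, then test the fix
+</pre></blockquote>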
+</li>
+
+<li><p><b>Automated cross-platform regression testing and portability auditing.</b></p>
+
+<p>Aboriginal Linux lets you build the same package across multiple
+architectures, and run the result immediately inside the emulator.  You can
+even set up a cron job to build and test regular repository snapshots of a
+package's development version automatically, and report regressions when
+they're fresh, when the developers remember what they did, and when
+there are few recent changes that may have introduced the bug.</p></li>
+
+<li><p><b>Use current vanilla packages, even on obscure targets.</b></p>
+
+<p>Nonstandard hardware often receives less testing than common desktop and
+server platforms, so regressions accumulate. This can lead to a vicious cycle
+where everybody sticks with private forks of old versions because making the
+new ones work is too much trouble, and the new ones don't work because nobody's
+testing and fixing them. The farther you fall behind, the harder it is to
+catch up again, but only the most recent version accepts new patches, so
+even the existing fixes don't go upstream. Worst of all, working in private
+forks becomes the accepted norm, and developers stop even trying to get
+their patches upstream.</p>
+
+<p>Aboriginal Linux uses the same (current) package versions across all
+architectures, in as similar a configuration as possible, and with as few
+patches as we can get away with. We (intentionally) can't upgrade a package
+for one target without upgrading it for all of them, so we can't put off
+dealing with less-interesting targets.</p>
+
+<p>This means any supported target stays up to date with current packages in
+unmodified "vanilla" form, providing an easy upgrade path to the next
+version and the ability to push your own changes upstream relatively
+easily.</p>
+</li>
+
+<li><p><b>Provide a minimal self-hosting development environment.</b></p>
+
+<blockquote><p>"Perfection is achieved, not when there is nothing more to add,
+but when there is nothing left to take away." - Antoine de Saint-Exupery</p>
 </blockquote>
 
+<p>Most build environments provide dozens of packages, ignoring the questions
+"do you actually need that?" and "what's it for?" in favor of offering
+rich functionality.</p>
+
+<p>Aboriginal Linux provides the smallest, simplest starting point capable
+of rebuilding itself under itself, and of bootstrapping up to build arbitrarily
+complex environments (such as Linux From Scratch) by building and installing
+additional packages. (The one package we add which is not strictly required
+for this, distcc, is installed in its own subdirectory which is only
+optionally added to the $PATH.)</p>
+
+<p>This minimalist approach makes it possible to regression test for
+environmental dependencies. Sometimes new releases of packages simply won't
+work without perl, or zlib, or some other dependency that previous versions
+didn't have, not because they meant to but because they were never tested in
+a build environment that didn't have them, so the dependency leaked in.</p>
+
+<p>By providing a build environment that contains only the bare essentials
+(relying on you to build and install whatever else you need), Aboriginal
+Linux lets you document exactly what dependencies packages actually require,
+figure out what functionality the additional packages provide, and measure
+the costs and benefits of the extra code.</p>
+
+<p>(Note: the command logging wrapper
+<a href=/aboriginal/FAQ.html#debug_logging>record-commands.sh</a> can
+actually show which commands were used out of the $PATH when building any
+package.)</p>
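+
+<p>(A hypothetical invocation of that wrapper, assuming it lives in the
+more/ directory of the source tree and wraps the command it's given:)</p>
+
+<blockquote><pre>
+# log every command run out of the $PATH during a build
+more/record-commands.sh ./build.sh i686
+</pre></blockquote>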
+</li>
+
+<li><p><b>Cleanly separate layers.</b></p>
+
+<p>The entire build is designed to let you use only the parts of it you want,
+and skip or replace the rest. The top level "build.sh" script calls other
+scripts in sequence, each of which is designed to work independently.</p>
+
+<p>The only place package versions are mentioned is "download.sh", the rest
+of the build is version-agnostic. All it does is populate the "packages"
+directory, and if you want to provide your own you never need to run this
+script.</p>
+
+<p>The "host-tools.sh" script protects the build from variations in the host
+system, both by building known versions of command line tools (in build/host)
+and adjusting the $PATH to point only to that directory, and by unsetting
+all environment variables that aren't in a whitelist. If you want to
+use the host system's unfiltered environment instead, just skip running
+host-tools.sh.</p>
+
+<p>If you supply your own cross compilers in the $PATH (with the prefixes the
+given target expects), you can skip the simple-cross-compiler.sh command.
+Similarly you can provide your own simple root filesystem, your own native
+compiler, or your own kernel image. You can use your own script to package
+them if you like.</p>
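+
+<p>A sketch of driving the stages by hand (assuming, as with build.sh, that
+the per-target stages take the architecture name as their argument):</p>
+
+<blockquote><pre>
+./download.sh                       # populate the packages directory
+./host-tools.sh                     # optional: sanitized host environment
+./simple-cross-compiler.sh armv5l   # skip if you supply your own cross compiler
+./build.sh armv5l                   # or let build.sh run every stage in order
+</pre></blockquote>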
+</li>
+
+<li><p><b>Document how to put together a development environment.</b></p>
+
+<p>The build system is designed to be readable. That's why it's written in
+Bash (rather than something more powerful like Python): so it can act as
+documentation. Each shell script collects the series of commands you need
+to run in order to configure, build, and install the appropriate packages,
+in the order you need to install them in to satisfy their dependencies.</p>
+
+<p>The build is organized as a series of orthogonal stages. These are called
+in order from build.sh, but may be run (and understood) independently.
+Dependencies between them are kept to a minimum, and stages which depend on
+the output of previous stages document this at the start of the file.</p>
+
+<p>The scripts are also extensively commented to explain why they
+do what they do, and there's design documentation on the website.</p>
+</li>
+</ul>
+
+<b><h1>What's next?</h1></b>
+
+<p>Now that the 1.0 release is out, what are the project's new goals?</p>
+
+<table border=1><tr><td bgcolor="#c0c0ff">
+<ul>
+<li>Untangle hairball build systems into distinct layers.</li>
+<li>Make Android self-hosting</li>
+</ul>
+</td></tr></table>
+
+<a name=hairball>
+<p><b>Untangle hairball build systems into distinct layers.</b></p>
+
+<p>The goal here is to separate what packages you can build from where and how
+you can build them.</p>
+
+<p>For years, Red Hat only built under Red Hat, Debian only built under Debian,
+even Gentoo assumed it was building under Gentoo. Building their packages
+required using their root filesystem, and the only way to get their root
+filesystem was by installing their package binaries built under their root
+filesystem. The circular nature of this process meant that porting an existing
+distribution to a new architecture, or making it use a new C library,
+was extremely difficult at best.</p>
+
+<p>This led cross compiling build systems to add their own package builds
+("the buildroot trap"), and wind up maintaining their own repository of
+package build recipes, configurations, and dependencies. Their few hundred
+packages never approached the tens of thousands in full distribution
+repositories, but the effort of maintaining and upgrading packages would
+come to dominate the project's development effort until developers left to
+form new projects and start the cycle over again.</p>
+
+<p>This massive and perpetual reinventing of wheels is wasteful. Each of the
+proliferating build systems (buildroot, openembedded, yocto/meego/tizen,
+and many more) has its own set of supported boards and its own half-assed
+package repository, with no ability to mix and match.</p>
+
+<p>The proper way to deal with this is to separate the layers so you can mix
+and match. Choice of toolchain (and C library), "board support" (kernel
+configuration, device tree, module selection), and package repository (which
+existing distro you want to use), all must become independent. Until these are
+properly separated, your choice of cross compiler limits
+what boards you can boot the result on (even if the binaries you're building
+would run in a chroot on that hardware), and either of those choices limits
+what packages you can install into the resulting system.</p>
+
+<p>This means Aboriginal Linux needs to be able to build _just_ toolchains
+and provide them to other projects (done), and to accept external toolchains
+(implemented but not well tested; most other projects produce cross compilers
+but not native compilers).</p>
+
+<p>It also needs build control images to automatically bootstrap a Debian,
+Fedora, or Gentoo chroot starting from the minimal development environment
+Aboriginal Linux creates (possibly through an intermediate Linux From Scratch
+build, followed by fixups to make debian/fedora/gentoo happy with the chroot).
+It must be able to do this on an arbitrary host, using the existing toolchain
+and C library in an architecture-agnostic way. (If the existing system is
+a musl libc built for a microblaze processor, the new chroot should be too.)</p>
+
+<p>None of these distributions make it easy: it's not documented, and it
+breaks. Some distributions didn't think things through: Gentoo hardwires the
+list of supported architectures into every package in the repository, for
+no apparent reason. Adding a new architecture requires touching every package's
+metadata. Others are outright lazy; building an allnoconfig Red
+Hat Enterprise 6.2 kernel under SLES11p2 is kind of hilariously bad: "make
+clean" spits out an error because the code it added to detect compiler
+version (something upstream doesn't need) gets confused by "gcc 4.3", which
+has no .0 on the end so the patchlevel variable is blank. Even under Red Hat's
+own filesystem, "make allnoconfig" breaks on the first C file, and requires
+almost two dozen config symbols to be switched on to finish the compilation,
+because they never tested anything but the config they ship. Making
+something like that work on a Hexagon processor, or making their
+root filesystem work with a vanilla kernel, is a daunting task.</p>
+
+<a name=selfhost>
+<p><b>Make Android self-hosting (musl, toybox, qcc).</b></p>
+
+<p>Smartphones are replacing the PC, and if Android doesn't become self-hosting
+we may be stuck with locked down iPhone derivatives in the next generation.</p>
+
+<blockquote>
+<b>Mainframe -&gt; minicomputer -&gt; microcomputer (PC) -&gt; smartphone</b>
+</blockquote>
+
+<p>Mainframes were replaced by minicomputers, which were replaced by
+microcomputers, which are being replaced by smartphones. (Nobody needed to
+stand in line to pick up a printout when they could sign up for a timeslot at a
+terminal down the hall. Nobody needed the terminal down the hall when they
+had a computer on their desk. Now nobody needs the computer on their desk when
+they have one in their pocket.)</p>
+
+<p>Each time, the previous generation got kicked up into the "server space",
+accessed only through the newer machines. (This time around, kicking the PC
+up into the server space is called "the cloud".)</p>
+
+<p>Smartphones have USB ports, which charge the phone and transfer data.
+Using a smartphone as a development workstation involves plugging it into a
+USB hub, adding a USB keyboard, USB mouse, and USB to HDMI converter to plug
+it into a television. The rest is software.</p>
+
+<p>The smartphone needs to "grow up and become a real computer" the
+same way the PC did. The PC originally booted into "ROM Basic" just like
+today's Android boots into Dalvik Java: as the platform matures it must
+outgrow this to run native code written in all sorts of languages.
+PC software was once cross compiled from minicomputers, but as it matured
+it grew to host its own development tools, powerful enough to rebuild the
+entire operating system.</p>
+
+<p>To grow up, Android phones need to become usable as development
+workstations, meaning the OS needs a self-hosting native development
+environment. This has four parts:</p>
+
+<ul>
+<li>Kernel (we're good)</li>
+<li>C library (bionic->musl, not uclibc)</li>
+<li>Posix command line (toolbox->toybox, not busybox)</li>
+<li>Compiler (qcc, llvm, open64, pcc...)</li>
+</ul>
+
+<p>The Android kernel is a Linux derivative that adds features without removing
+any, so it's already good enough for now. Convergence to vanilla linux is
+important for long-term sustainability, but not time critical. (It's not part
+of "beating iPhone".)</p>
+
+<p>Android's "no GPL in userspace" policy precludes it from shipping
+many existing Linux packages as part of the base install: no BusyBox or
+GNU tools, no glibc oruClibc, and no gcc or binutils. All those are all
+excluded from the Android base install, meaning they will never
+come bundled with the base operating system or preinstalled on devices,
+so we must find alternatives.</p>
+
+<p>Android's libc is called "bionic", and is a minimal stub sufficient
+to run Dalvik, and not much more. Its command line is called "toolbox" and
+is also a minimal stub providing little functionality. Part of this is
+intentional: Google is shipping a billion broadband-connected unix machines,
+none of which are administered by a competent sysadmin. So for security
+reasons, Android is locked down with minimal functionality outside the Java
+VM sandbox, providing less of an attack surface for viruses and trojans.
+In theory the <a href=http://lxc.sf.net>Linux Containers</a> infrastructure
+may eventually provide a solution for sandboxing applications, but the
+base OS needs to be pretty bulletproof if a billion people are going to
+run code they don't deeply understand connected to broadband internet 24/7.</p>
+
+<p>Thus replacement packages for the C library and posix command line
+should be clean, simple code that's easy to audit for security concerns. But
+they must also provide functionality that bionic and toolbox do not
+attempt, and do not provide a good base for. The musl libc and toybox
+command line packages should be able to satisfy these requirements.</p>
+
+<p>The toolchain is a harder problem. The leading contender (LLVM) is sponsored
+by Apple for use in Mac OS X and the iPhone's iOS. The iPhone is ahead of
+Android here, and although Android could use LLVM too, LLVM has other problems
+(it's implemented in C++, so it's significantly more complicated from a
+system dependency standpoint, making it difficult to bootstrap and
+impossible to audit).</p>
+
+<p>The simplest option would be to combine the TinyCC project with QEMU's
+Tiny Code Generator (TCG). The licensing of the current TinyCC is incompatible
+with Android's userspace but permission has been obtained from Fabrice
+Bellard to BSD-license his original TinyCC code as used in Rob's TinyCC fork.
+This could be used to implement a "<a href=http://landley.net/qcc>qcc</a>"
+capable of producing code for
+every platform qemu supports. The result would be simple and auditable,
+and compatibly licensed with Android userspace. Unfortunately, such a project
+is understaffed, and wouldn't get properly started until after the 1.0
+release of Toybox.</p>
+
+<p>Other potential compiler projects include Open64 and PCC. Neither of these
+has built a bootable Linux kernel, without which a self-bootstrapping
+system is impossible. (This is a good smoketest for a mature compiler: if it
+can't build the kernel, it probably can't build userspace packages of the
+complexity people actually write.)</p>
+
+<b>Why does this matter?</b>
+
+<p>This is time critical due to network effects, which create positive
+feedback loops benefiting the most successful entrant and creating natural
+"standards" (which become self-defending monopolies if owned by a single
+player.) Whichever platform has the most users attracts the most
+development effort, because it has the most potential customers. The platform
+all the software ships on first (often only) is the one everybody wants to
+have. Other benefits to being biggest include the large start-up costs and
+much lower incremental costs of electronics manufacturing: higher unit
+volume makes devices cheaper to produce. Amortizing research and development
+budgets over a larger user base means the technology may actually advance
+faster (more effort, anyway)...</p>
+
+<p>Technological transitions produce "S curves", where a gradual increase
+gives way to exponential increase (the line can go nearly vertical on a graph)
+and then eventually flattens out again producing a sort of S shape.
+During the steep part of the S-curve acquiring new customers dominates.
+Back in the early microcomputer days a lot more people had no computer than had
+an Atari 800 or Commodore 64 or Apple II or IBM PC, so each vendor focused
+on selling to the computerless rather than converting customers from other
+vendors. Once the pool of "people who haven't got the kind of computer
+we're selling today but would like one if they did" was exhausted (even if
+only temporarily, waiting for computers to get more powerful and easier
+to use), the largest players starved the smaller ones of new sales, until
+only the PC and Macintosh were left. (And the Macintosh switched over to
+PC hardware components to survive, offering different software and more
+attractive packaging of the same basic components.)</p>
+
+<p>The same smartphone transition is inevitable as the pool of "people with
+no smartphone, but who would like one if they had it" runs out. At that point,
+the largest platform will suck users away from smaller platforms. If the
+winner is Android we can open up the hardware and software. If the winner
+is iPhone, we're stuck with decades of microsoft-like monopoly except
+this time the vendor isn't hamstrung by their own technical incompetence.</p>
+
+<p>The PC lasted over 30 years from its 1981 introduction until smartphones
+seriously started displacing it. Smartphones themselves will probably last
+about as long. Once the new standard "clicks", we're stuck with it for
+a long time. Now is when we can influence this decision. Linux's
+15 consecutive "year of the linux desktop" announcements (spanning the period
+of Microsoft Bob, Windows Millennium, and Windows Vista) show how hard
+displacing an entrenched standard held in place by network effects actually
+is.</p>
+
+<b>Why not extend vanilla Linux to smartphones instead?</b>
+
+<p>Several reasons.</p>
+
+<ul>
+<li><p>It's probably too late for another entrant. Microsoft muscling in with
+Lumia is like IBM muscling in with OS/2. And Ubuntu on the phone is like
+Coherent Unix on the PC, unlikely to even register. We have two clear leaders
+and the rest are noise ("Coke, Pepsi, and everybody else"). Possibly they
+could still gain ground by being categorically better, but "Categorically
+better than the newest iPhone/iPad" is a hard bar to clear.</p></li>
+
+<li><p>During the minicomputer-&gt;PC switch, various big iron vendors tried to
+shoehorn their products down into the minicomputer space. The results were
+laughable. (Look up the "microvax" sometime.)</p>
+
+<p>The successful tablets are big phones, not small PCs. Teaching a PC to be
+a good phone is actually harder than teaching a phone to be a good PC; we
+understand the old problem space much better. (It's not that it's less
+demanding, but the ways in which it is demanding are old hat and long
+solved. Being a good phone is still tricky.)</p>
+</li>
+
+<li><p>Deployment requires vendor partnerships which are difficult and slow.
+Apple exclusively partnered with AT&T for years to build market share, and
+had much less competition at the time. Google eventually wound up buying
+Motorola to defend itself from the dysfunctional patent environment.
+Microsoft hijacked Nokia by installing one of their own people as CEO, and
+it's done them about as much good as a similar CEO-installation at SGI did
+to get Microsoft into the supercomputer market. (Taking out SGI did reduce
+Microsoft's competition in graphics workstations, but that was a market they
+already had traction in.)</p>
+</li>
+
+<li><p>Finally, Linux has had almost 2 decades of annual "Linux on the Desktop"
+pushes that universally failed, and there's a reason for this. Open source
+development can't do good user interfaces for the same reason Wikipedia can't
+write a novel with a coherent plot. The limitations of the development model
+do not allow for this. The old adage "too many cooks spoil the soup" is not
+a warning about lack of nutrition, it's a warning that aesthetic issues do
+not survive committees. Peer review does not produce blockbuster movies,
+hit songs, or masterpiece paintings. It finds scientific facts, not beauty.</p>
+
+<p>Any time "shut up and show me the code" is not the correct response to
+the problem at hand, open source development melts down into one of three
+distinct failure modes:</p>
+
+<p>1) Endless discussion that never results in actual code, because
+nobody can agree on a single course of action.</p>
+
+<p>2) The project forks itself to death: everybody goes off and codes their
+preferred solution, but it's no easier to agree on a single approach after
+the code exists so the forks never get merged.</p>
+
+<p>3) Delegating the problem to nobody, either by A) separating engine from
+interface and focusing on the engine in hopes that some glorious day somebody
+will write an interface worth using, or B) making the interface so configurable
+that the fact it takes a week to learn what your options are and still has no
+sane defaults is now the end user's problem.</p>
+
+<p>Open source development defeats Brooks' Law by leveraging empirical tests.
+Integrating the results of decoupled development efforts is made possible
+by the ability to unequivocally determine which approaches are best (trusted
+engineers break ties, but it has to be pretty close and the arguments go back
+and forth). Even changing the design and repeatedly ripping out existing
+implementations is doable if everyone can at least retroactively agree that what
+we have now is better than what we used to have, and we should stop fighting
+to go back to the old way.</p>
+
+<p>In the absence of empirical tests, this doesn't work. By their nature,
+aesthetic issues do not have empirical tests for "better" or "worse".
+Chinese food is not "better" than Mexican food. But if you can't
+decide what you're doing (if one chef insists on adding ketchup and another
+bacon and a third ice cream) the end result is an incoherent mess.</p>
+
+<p>The way around this is to have a single author with a clear vision
+in charge of the user interface, who can make aesthetic decisions that are
+coherent rather than "correct". Unfortunately when this does happen, the
+open source community pressures the developer of a successful project to
+give over control of the project to a committee. So the Gecko engine was
+buried in the unusable Mozilla browser, then Galeon forked off from that
+and Mozilla rebased itself on the Galeon fork. Then Firefox forked off
+of that and the Mozilla foundation took over Firefox...</p>
+
+<p>Part of the success of Android is that its user experience is NOT
+community developed. (This isn't just desktop, this is "if the whole thing
+pauses for two seconds while somebody's typing in a phone number, that's
+unacceptable". All the way down to the bare metal, the OS serves the task
+of being a handheld interactive touch screen device running off of battery
+power first, being anything else it _could_ be doing second.)</p>
+</li>
+</ul>
+
 <!--#include file="footer.html" -->