Hey! While you're here, why not check out my open source endeavours?

LineageOS updater app UI tweaks

LineageOS has a pretty fantastic updater app for pulling down OTA updates, but it has one quirk that's likely to confuse typical users: it always displays the currently installed build as an update. This behavior may make sense to developers and power users who might want to reinstall the current version to revert changes. However, to a typical user, it may look like they always have an update or their last update didn't work.

To correct this, I've created a couple of small patches that add an option to hide the current build in the updater. The option is enabled by default, and power users can simply turn it off to restore the old behavior. The patches are available on the ui-tweaks branch of my GitHub fork.

New repo: android-scripts

I've published a new repo, android-scripts, that contains a collection of scripts and environment modulefiles for working with Android and LineageOS.

Gentoo: Catching stray files on install

File collisions can get annoying when installing packages on Gentoo. It's particularly bad when the collision is caused by a file that shouldn't even be there. Recently, I've noticed a number of Python packages that install files directly into /usr. This is usually a side-effect of upstream developers misusing data_files in setup.py and a Gentoo dev then not catching it when writing the ebuild. (I too am guilty of this.) I've added this small function to help catch stray files, and I recommend other devs use something similar:

/etc/portage/bashrc
post_pkg_preinst() {
        STATUS=0
        ebegin "Checking for stray files"
        # ${D} is the install image; nothing should be installed as a
        # regular file directly under /usr.
        pushd "${D}" >/dev/null
        out=$(find usr -maxdepth 1 -type f)
        if [ -n "$out" ]; then
                STATUS=1
                for file in $out; do
                        ewarn "Stray file: /$file"
                done
        fi
        popd >/dev/null
        eend $STATUS
}

Ganglia Web and rrdtool >= 1.5.0

The commands/permissions available in rrdcached changed in version 1.5.0. The recommended setup found on the Ganglia Web wiki is no longer sufficient and will result in empty graphs. Users following that guide with rrdtool >= 1.5.0 should add the FETCH permission to the limited socket for Ganglia Web to function correctly.

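As a sketch of the fix (the socket paths and surrounding options follow the wiki's two-socket layout and may differ on your system), FETCH can be granted on the limited socket via rrdcached's positional -P option, which applies only to the -l sockets that follow it:

```shell
# Example rrdcached invocation - paths are assumptions from the wiki setup.
# The first socket (for gmetad) keeps full permissions; -P then restricts
# the limited socket used by Ganglia Web, now including FETCH.
rrdcached -p /var/run/rrdcached.pid \
  -l unix:/var/run/rrdcached.sock \
  -P FLUSH,STATS,HELP,FETCH \
  -l unix:/var/run/rrdcached.limited.sock \
  -b /var/lib/ganglia/rrds -B
```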
Introducing cq for automating binpkg-multi-instance

I have been tracking the "binpkg-multi-instance" bug in Portage since 2007, when I was looking to use binpkgs for a slew of VMs. They tended to be grouped such that each group needed different USE flags for the same packages - a perfect use case. Now that zmedico's binpkg-multi-instance feature work is in a stable version of Portage (2.2.20), there has been some discussion on the original bug about the best way to use and automate this feature.

A few months ago, after some discussion with cchildress, who has been tracking the bug for the same reason, I started work on a "reserve scheduler". The basic idea is the opposite of what is found in HPC schedulers: rather than scheduling a queue of jobs across many hosts to run in parallel, the scheduler serializes many requests on a single host. I chose a general solution where commands are run remotely rather than some form of IPC. This is riskier but more flexible, and it is intended for use in a closed environment. Most significantly, when a build or job completes, the requester is notified and can execute a script. This is very useful for chaining events, such as installing a freshly built package.

I have a very primitive but working command queue, cq, that can handle build requests: cq on GitHub. I have used it successfully and repeatedly in my testing environment. However, it's not really documented yet, and since it's useful for more than just a build host, its own documentation might take another form. I'll try to document a good use of it with the Portage binpkg-multi-instance feature here.

For this to work, some shared filesystem or other mechanism to deliver packages to clients (NFS/web server/FTP/etc.) is necessary, but I'm assuming that's already in place. Add binpkg-multi-instance to FEATURES in make.conf on all the involved hosts - clients and the build host. Add my junkdrawer overlay and unmask app-admin/cq. This will also pull in an updated version of munge. Once installed, ensure a consistent munge key (/etc/munge/munge.key) is present on all the hosts with safe permissions (400), and start munge.
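Concretely, the per-host setup amounts to something like the following (the init commands are OpenRC-flavored and illustrative; adapt to however you manage services and distribute files):

```shell
# On every involved host - clients and the build host - enable the
# Portage feature in /etc/portage/make.conf.
echo 'FEATURES="${FEATURES} binpkg-multi-instance"' >> /etc/portage/make.conf

# After adding the junkdrawer overlay and unmasking app-admin/cq:
emerge app-admin/cq

# Use one munge key everywhere, lock down its permissions, start munge.
chown munge:munge /etc/munge/munge.key
chmod 400 /etc/munge/munge.key
rc-service munge start
```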

At the time of writing, cq_server does not daemonize. Just run cq_server, optionally with the -h or -p flags for host and port, unless the default 0.0.0.0:48005 is acceptable. (I intend to push changes soon so it will run as a daemon and log to file(s)/syslog.)
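So for now the server is simply run in the foreground:

```shell
# The flags shown here just spell out the defaults; omitting -h and -p
# gives the same result.
cq_server -h 0.0.0.0 -p 48005
```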

On the client side, some scripting is required for automation, but the basic idea is to run cq_client with the following options:

  • -h host
  • -p port (optional)
  • -P script (optional)
  • -E port (optional)
  • -- command string
Multiple -E flags may be specified for things like USE, ABI_X86, etc., while -P can name a local script to run once the build is finished (such as an install script that calls emerge). The script receives two parameters; the first is the internal status from cq (did we succeed in running the command remotely?). I propose a script that will take (or discover) packages to be updated along with their appropriate environment, generate an install script, and then run cq_client with the appropriate options. (That script is left as an exercise, blah, blah, blah...)
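A hypothetical invocation tying the flags together - the host name, package, and environment values are placeholders, and the install script is only a sketch (the second parameter the script receives is not covered here):

```shell
# Sketch of a local install script; $1 is cq's internal status
# (did the remote command run at all).
cat > install.sh <<'EOF'
#!/bin/sh
[ "$1" -eq 0 ] || exit 1              # remote execution failed; bail out
exec emerge -1 --usepkgonly dev-lang/perl
EOF
chmod +x install.sh

# Ask the build host to build (but not install) a binary package with a
# pinned environment, then run install.sh locally when the build finishes.
cq_client -h buildhost -p 48005 \
  -E 'USE=ssl' -E 'ABI_X86=64' \
  -P ./install.sh \
  -- emerge -1B dev-lang/perl
```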

There is still a good amount of work to be done. A few planned items are recorded in the README. Other theorized improvements include:

  1. A hook for incoming commands - when building packages it could be useful for finding duplicate requests in the queue and eliminating them.
  2. Multiple build hosts - the architecture already supports this (for a different reason) but some small changes would be necessary to add options to enable it.

Quick Gentoo dev-lang/perl update

The command below may be helpful to other Gentoo users who prefer to cherry-pick their updates and want to pull in a dev-lang/perl update along with its additional (read: required) dependencies.

root shell
emerge -upD world |
grep -E '^\[ebuild.*(dev-lang/perl|virtual/perl-|dev-perl/)' |
perl -pe 's/^\[.*?\] (.*?) .*$/=$1/' |
xargs emerge -1
Using leds-gpio dynamically

In a blog post (Linux LED Subsystem), Fabio Baltieri details using the leds-gpio driver to quickly add GPIO-connected LEDs to a SoC. His method references nslu2-setup.c as an example implementation, which uses static structures and the platform_add_devices function to add the LED devices. This is fine for an unchanging platform device where the driver will be compiled into the kernel - using static structures makes perfect sense there - but it does have a major drawback for developers.

If you find yourself developing a board and want to load and unload a module rather than constantly rebooting while you write parts of the driver, you will have a big problem: leds-gpio never gets the opportunity to release (during platform_device_unregister) these statically initialized devices. This means they are left out in the cold, and you cannot reload the module successfully. It makes sense: the underlying platform_device_register that platform_add_devices calls is meant for statically initialized devices. Since this memory cannot be freed, there is no sense in assigning the release hook - doing so would only lead to problems.

Rather than statically initializing the devices, they can be created with a combination of platform_device_alloc, platform_device_add_data, and platform_device_add. platform_device_alloc assigns a release hook that frees the memory, so a later call to platform_device_unregister will actually remove the devices. Thus you can do your work in a module that can be cleanly unloaded and loaded again.

Copyright 2018 Daniel M. Weeks
All kinds of people
look all kinds of ways.
You'll never know anything about anybody
just looking all day.