Swapping like crazy

I’ve noticed that my Mac seems to be eating up quite a bit of disk space every now and then, without any action from me. I have MenuMeters installed and realized that this extra disk usage is due to insane swap-file usage. I know one cannot rely completely on the reported RAM usage numbers (“used” vs “free”, given all the other states: “free”, “wired”, “active”, “inactive”), but in my end-user mind I find it insane that while I’m only using 7.1GB of my 16GB of RAM, the OS is using 7.3GB of my hard drive for swap. Use up the fast RAM first, then move memory to the (albeit fast SSD) slower disk. Don’t pre-emptively swap! Yes, I know this is a simplistic view and probably technically incorrect. But it’s annoying that OSX is using my hard drive when it doesn’t need to!

Restarting the Mac solves the immediate swapping issue, and my hard drive recovers the missing 7GB (in my current example; see screenshots below). But after about 3-4 days of uptime, the heavy swapping returns. Since this does not make any sense to me, I want to find the root cause, and I think I have an inkling as to at least some of the swapping: sorting the processes in Activity Monitor by virtual memory, I see that over time the socketfilterfw memory usage grows and grows and grows.

So my solution until Apple fixes their memory leak: every now and then manually disable and enable the firewall in System Preferences to immediately flush the bloated swap. 🙂
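
For the terminal-inclined, the same toggle should be doable without opening System Preferences by driving socketfilterfw directly (the --setglobalstate flag exists on recent OSX versions; check the output of socketfilterfw -h if unsure):

$ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate off
$ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on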

You can see in the screenshots below that I recovered > 7GB disk space used by the memory hog socketfilterfw by restarting the firewall.

before

memoryhog

after

I wonder what would happen if I left my machine on for an extended amount of time (talking many weeks here) without restarting? I bet socketfilterfw would eat up all the free space on my hard drive. Sounds like a challenge: maybe I’ll leave my machine on during my summer vacation and see what happens…

Fix lagging display performance on retina MacBook Pro

This weekend my Mac suddenly started behaving strangely: moving windows around occurred with a nearly psychedelic delay, mission control (aka exposé) was “jerky”, and scrolling was not fun. Forcing the graphics card to the discrete NVIDIA GT 650 instead of the integrated Intel GPU sped things up, but the overall experience still didn’t feel right. Since the onset was sudden I immediately thought: imminent hardware failure! But thankfully that turned out not to be the case.

Scouring forums for answers led me here, which worked for me! The basic idea: delete some preference files (terminal equivalents below the list) and reset the PRAM:

  • Delete /Library/Preferences/com.apple.windowserver.plist
  • Delete ~/Library/Preferences/ByHost/com.apple.windowserver*.plist
  • Shutdown OSX
  • Start up, and before the gray boot screen appears, immediately press and hold the command (⌘), option (⌥), P, and R keys; this resets the PRAM
  • You may have to reset your display preferences (resolution) once you login
  • Done!
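
If you prefer the terminal, the two plist deletions above amount to the following (the system-level file needs sudo):

$ sudo rm /Library/Preferences/com.apple.windowserver.plist
$ rm ~/Library/Preferences/ByHost/com.apple.windowserver*.plist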

2013-08-12: fixed typo in preference plist filename.

Recover a failed TimeMachine backup

I recently received an unpleasant warning message after TimeMachine routinely tried to perform a backup:

Time Machine completed a verification of your backups on “matmos”. To improve reliability, Time Machine must create a new backup for you.
Click Start New Backup to create a new backup. This will remove your existing backup history. This could take several hours.
Click Back Up Later to be reminded tomorrow. Time Machine won’t perform backups during this time.

Googling around for others with the same problem I found quite a few tips (like this one, or this one). The basic idea is to mount the sparsebundle image, run a disk check/repair, and hope for the best. In my case (as you will see in a bit), my sparsebundle appeared to be hosed. My options: lose my old backups or look for a way to recover the old backups. But first up, turn off TimeMachine, and then try to run a standard disk check.

Run disk check/repair

  • Unlock and mount the TimeMachine sparsebundle from the already-mounted server share (of course your server name, network share, and sparsebundle names will not be the same as mine):

    # clear the user-immutable flag so the sparsebundle can be modified
    $ sudo chflags nouchg /Volumes/TimeMachine-David/fünke.sparsebundle
    $ sudo chflags nouchg /Volumes/TimeMachine-David/fünke.sparsebundle/token

    # attach the image without mounting it, skipping the automatic fsck
    $ sudo hdiutil attach -nomount -noverify -readwrite -noautofsck /Volumes/TimeMachine-David/fünke.sparsebundle
    /dev/disk2          	GUID_partition_scheme
    /dev/disk2s1        	EFI
    /dev/disk2s2        	Apple_HFS
    
  • The disk check utility fsck may now be running automatically, so find its PID and kill the process so we can run it manually:

    $ ps auxwww | grep fsck
    # replace PID with the process id of fsck_hfs from the ps output above
    $ kill PID
    
  • Now run fsck with some repair options on the correct disk partition (use the “Apple_HFS” partition as listed in the mount step above, /dev/disk2s2 in my example):

    $ sudo fsck_hfs -dryf /dev/disk2s2
    journal_replay(/dev/disk2s2) returned 0
    ** /dev/rdisk2s2
    	Using cacheBlockSize=32K cacheTotalBlock=65536 cacheSize=2097152K.
       Executing fsck_hfs (version diskdev_cmds-557~393).
    ** Checking Journaled HFS Plus volume.
       Invalid number of allocation blocks
    (4294967295, 0)
    	IVChk - volume header total allocation blocks is greater than device size 
    	volume allocation block count 102374400 device allocation block count 97630464 
    ** The volume could not be verified completely.
    	volume check failed with error 7 
    	volume type is pure HFS+ 
    	primary MDB is at block 0 0x00 
    	alternate MDB is at block 0 0x00 
    	primary VHB is at block 2 0x02 
    	alternate VHB is at block 781043710 0x2e8dc7fe 
    	sector size = 512 0x200 
    	VolumeObject flags = 0x07 
    	total sectors for volume = 781043712 0x2e8dc800 
    	total sectors for embedded volume = 0 0x00 
    CheckHFS returned -1317, fsmodified = 1
    
  • The disk check did not do anything, so let’s unmount the sparsebundle:

    $ sudo hdiutil detach /dev/disk2s2
    
  • You can try running the disk check (fsck) multiple times. Some have reported that doing so does the trick! In my case it didn’t help, and I had to try something else.
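
If you want to give the repeated runs a try, a quick shell loop does it (using /dev/disk2s2 from my example above):

$ for i in 1 2 3; do sudo fsck_hfs -dryf /dev/disk2s2; done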

What to do next?

So, “The volume could not be verified completely” means the disk check is not going to repair my sparsebundle. But one good thing to note: the sparsebundle can still be mounted read-only, so the old backups should still be there. So the plan: make a new sparsebundle, mount it, mount the old sparsebundle, and then copy all files from the old sparsebundle into the new one. Sounds easy, right?

Making a new sparsebundle is not rocket science; copying the files from TimeMachine backups, however, can be quite challenging. I learned quite a bit while trying various methods to copy the files across (Finder, cp, rsync, ditto, etc). TimeMachine is quite ingenious: it uses a combination of file hard links and directory hard links (the latter was a new one to me!) in order to keep the backup size at a minimum. Unfortunately, none of the methods I tried could reconcile the directory hard links: instead of the links being created, the actual directory contents were copied. Furthermore, Apple has made it difficult to work directly with files in TimeMachine backups by making use of sandboxing and an access control “safety net” (see this or this). So I did some more digging and found a great product that can deal with TimeMachine backups, directory hard links, and this safety net: SuperDuper.
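
As an aside, you can see the hard-linking for yourself by comparing inode numbers for an unchanged file across two backups: hard-linked copies share an inode. The paths below are hypothetical, so adjust them to your own backup:

$ stat -f "%i %N" "/Volumes/Time Machine Backups/Backups.backupdb/mymac/2013-06-01-120000/Macintosh HD/Users/david/notes.txt"
$ stat -f "%i %N" "/Volumes/Time Machine Backups/Backups.backupdb/mymac/2013-06-02-120000/Macintosh HD/Users/david/notes.txt"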

Recover contents of failed sparsebundle

Move failed sparsebundle to a new location

I tried restoring the failed sparsebundle to a new sparsebundle while both were on a network drive (connected to my machine via gigabit ethernet), and the copy was painfully slow (it wasn’t finished after 24 hours), meaning the bottleneck was the random access/seek time of my poor, slow network drives. I cancelled the restore operation and moved the failed sparsebundle to an external USB drive.

Create new sparsebundle

With the failed sparsebundle no longer in my TimeMachine network share, I created a new sparsebundle. Since I encrypt my backups, I used TimeMachine to create a new sparsebundle on the network share:

  • Enable TimeMachine, select the network share, select “encrypt backups”, then “use disk”
    select network share in TimeMachine
  • Provide encryption password:
    create sparsebundle encryption key
  • Let the backup run, then cancel after a few minutes. This will create a new sparsebundle on the network share.
    backing up...

Mount both sparsebundles

  • Mount the failed sparsebundle (from the external/USB drive). Unfortunately, you can’t use the paste command in the encryption password field 😦
    field does not allow pasting!
  • Note that the sparsebundle will be mounted read-only (which is just fine):
    Mac OSX can't repair the disk 'Time Machine Backups'. You can still open or copy files on the disk, but you can't save changes to files on the disk. Back up the disk and reformat it as soon as you can.
  • Now that both sparsebundles are mounted and have the same name (Time Machine Backups), we need to make sure we know which is the source (external drive) and which is the destination (network share). A little commandline magic:

    $ mount
    ...
    /dev/disk4s2 on /Volumes/Time Machine Backups (hfs, local, nodev, nosuid, read-only, mounted by david)
    /dev/disk5s2 on /Volumes/Time Machine Backups 1 (hfs, local, nodev, nosuid, journaled, mounted by david)
    
  • So, “Time Machine Backups” is the source (it’s read-only) and “Time Machine Backups 1” is the destination.

Copy files from the failed sparsebundle to the new sparsebundle

  • Open “SuperDuper!”
    Select copy: “Time Machine Backups”
    to: “Time Machine Backups 1”
    using: “Backup – all files”
    SuperDuper
  • Select “Options…” and choose “Smart Update” (this prevents SuperDuper from reformatting the destination sparsebundle, ask me how I know 😉 )
    Smart Update
  • Advanced options are left as the default:
    tm-3-sd3
  • Start the copy process:
    Start copy
  • Let it run for a long time…
    running
    done

Enable TimeMachine

Enable TimeMachine and start the backups. When the backup first started, the “Oldest backup” date was not listed, but when it finished, TimeMachine successfully recognized the oldest backup. Success!
Backing up
done!

Updated 2013-07-06: updated fsck step to not be recursive, can try to run fsck multiple times, thanks to comments!

Update CrashPlan on QNAP

It’s happened to me twice now: CrashPlan stops backing up my files apparently due to a failed software update. I usually don’t know CrashPlan has stopped backing up until I get the weekly email status update. Fortunately, the files that I back up to CrashPlan are not changed often at all, and missing a backup for a couple days isn’t the end of the world.

I have CrashPlan installed on my QNAP (see my previous post about that), and CrashPlan smartly autoupdates its software every now and then. Unfortunately, this autoupdate doesn’t seem to work when CrashPlan is installed on a QNAP. The following instructions should help get CrashPlan updated and running smoothly again.

Connect to the CrashPlan server running on the QNAP by first creating an SSH tunnel to the QNAP and then opening the GUI client locally (connecting to the CrashPlan server through the SSH tunnel). The GUI reports: “CrashPlan Upgrade Failed. CrashPlan failed to apply an upgrade and will try again automatically in one hour…”
crashplanupgradefailed
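
For reference, the tunnel itself is nothing fancy: the CrashPlan engine listens on port 4243 by default, and since my client’s ui.properties has servicePort=4200 (see my previous post), the GUI connects to local port 4200. Something like this, with your own user and hostname:

$ ssh -N -L 4200:localhost:4243 admin@qnap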

How to upgrade (I had version 3.2.1 installed, version 3.4.1 was available):

  • Stop the CrashPlan server:

    $ /share/MD0_DATA/.qpkg/crashplan/cprun.sh stop
    
  • Move the existing CrashPlan installation out of the way:

    $ mv /opt/crashplan /opt/crashplan.bak
    
  • Download the latest linux version of CrashPlan, decompress the tarball, etc

    $ wget http://download.crashplan.com/installs/linux/install/CrashPlan/CrashPlan_3.4.1_Linux.tgz
    $ tar xzvf CrashPlan_3.4.1_Linux.tgz
    $ rm CrashPlan_3.4.1_Linux.tgz
    
  • Edit the CrashPlan install.sh, replace the bash path and add BINSLOC at the top:

    $ cd CrashPlan-install
    $ nano install.sh
    #!/opt/bin/bash
    # use Optware’s bash; BINSLOC tells the installer where to look for binaries on the QNAP
    BINSLOC="/bin /opt/bin /usr/bin /usr/local/bin"
    
  • Now, run the install script:

    $ ./install.sh
    Would you like to switch users and install as root? (y/n) [y] n
      installing as current user
    No Java VM could be found in your path
    Would you like to download the JRE and dedicate it to CrashPlan? (y/n) [y] y
      jre will be downloaded
    
    ...
    
    Do you accept and agree to be bound by the EULA? (yes/no) yes
    
    What directory do you wish to install CrashPlan to? [/root/crashplan] /opt/crashplan
    /opt/crashplan does not exist.  Create /opt/crashplan? (y/n) [y] y
    
    What directory do you wish to store backups in? [/opt/crashplan/manifest]  
    /opt/crashplan/manifest does not exist.  Create /opt/crashplan/manifest? (y/n) [y] 
    
    Your selections:
    CrashPlan will install to: /opt/crashplan
    And store datas in: /opt/crashplan/manifest
    
    Is this correct? (y/n) [y] y
    
    ...
    
    ./install.sh: /opt/crashplan/bin/CrashPlanEngine: /bin/bash: bad interpreter: No such file or directory
    
    CrashPlan has been installed and the Service has been started automatically.
    
  • Oops, need to fix some paths in /opt/crashplan/bin/CrashPlanEngine. Add the BINSLOC at the top of the file, and add the full path to nice:

    #!/opt/bin/bash
    BINSLOC="/bin /opt/bin /usr/bin /usr/local/bin"
    ...
    /opt/bin/nice -n 19 $JAVACOMMON $SRV_JAVA_OPTS -classpath $FULL_CP com.backup42.service.CPService > $TARGETDIR/log/engine_output.log 2> $TARGETDIR/log/engine_error.log &
    
  • Now, move the backed up configuration/cache folders into place:

    $ mv /opt/crashplan.bak/conf /opt/crashplan/
    $ mv /opt/crashplan.bak/cache /opt/crashplan/
    $ mv /opt/crashplan.bak/manifest /opt/crashplan/
    
  • Then start CrashPlan:

    $ /share/MD0_DATA/.qpkg/crashplan/cprun.sh start
    
  • Connect from the client and verify in the GUI that the server is now working:
    crashplanupgradesuccessful
  • And finally, clean up the old, unused CrashPlan backup (mine was 2.1GB, as there seemed to be hundreds of downloaded-but-failed upgrade attempts):

    $ rm -rf /opt/crashplan.bak
    

Happy CrashPlanning!

Move music library and update iTunes database

I’m doing some reorganization of my network shares. My music is saved on the server under its own share, “Music”. In the new scheme, I want the music folder to be located in a subfolder in the share “Multimedia”. This poses a small problem: I have to update iTunes to recognize the new file locations. I’ve got iTunes 10.6.3 on OS X Mountain Lion 10.8.1.

Edit iTunes Music Library.xml

The simple solution is to modify the Location values in the iTunes Music Library.xml file, remove the iTunes Library.itl file, then open iTunes. iTunes will then rebuild the database file based on the XML (the .itl file is the active database file; the .xml file is regenerated from the .itl database). To find and replace all the locations I tried this:

cat ~/Music/iTunes/iTunes\ Music\ Library.xml | perl -pe 's/\/Volumes\/Music\//\/Volumes\/Multimedia\/music\//i' > itunes.xml

Then quickly checked to see if I was missing any other locations:

cat itunes.xml | grep "Location" | grep -v "/Volumes/Multimedia/music"

Then erase iTunes Library.itl and replace iTunes Music Library.xml with this new copy (making backups of the originals first, of course).
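
Spelled out, the file shuffle looks something like this (a sketch, assuming the default iTunes paths and that the itunes.xml generated above landed in your home folder):

$ cd ~/Music/iTunes
$ cp "iTunes Music Library.xml" "iTunes Music Library.xml.bak"
$ mv "iTunes Library.itl" "iTunes Library.itl.bak"
$ cp ~/itunes.xml "iTunes Music Library.xml"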

An unfortunate side effect of rebuilding the .itl based on the xml: the Date Added values for the entire library are reset (probably other values are reset as well). I wanted to move my library files and keep all the metadata intact.

Edit iTunes Library.itl

So, instead of editing the xml file, what about editing the itl file? Unfortunately, the .itl file is a proprietary binary format. Luckily, there are some who have tinkered with the format in order to edit .itl database entries. Enter Tools for iTunes Libraries (titl): excellent! I cloned the Mercurial repository, built the code, and tried to move some files:

$ hg clone https://code.google.com/p/titl/ titl
$ cd titl
$ mvn verify
$ java -Xmx512m -XX:MaxPermSize=256m -jar titl-core/target/titl-core-0.3-SNAPSHOT.jar MoveMusic --use-urls ~/Music/iTunes/iTunes\ Library.itl "file://localhost/Volumes/Music" "file://localhost/Volumes/Multimedia/music"

That resulted in an Exception in thread "main" java.io.EOFException, the exact same issue as this one. I downloaded the patch file that the user thankfully uploaded, applied the patch to the code, and tried again (disabling the now-broken unit tests with -Dmaven.test.skip=true): success! Excellent!

Final step: rename the iTunes Library.itl.processed file to iTunes Library.itl (making a backup of the original file first, of course). iTunes works as expected: music files are found, play counts are still there, “Date Added” values are still there.

Not that I use iTunes very often (or really at all) anymore to play music. Spotify is the scheisse these days! 😉

Updated 2011-12-16: Uploaded the patched + compiled jar (for those of you who want it)

No more annoying password popups for Cisco VPN on OSX Lion (and Mountain Lion)!

I am currently working on a development project for our office in München. Accessing their internal servers requires connection via VPN (I’m working from Stockholm). I’m using the very handy built-in Cisco IPSEC VPN client in OSX and have had some annoying problems which until today I have not been able to solve. I am documenting these configuration changes so I remember what I did, and hopefully it can also help others out there!

The problem

After being connected via VPN for about 48-54 minutes (seems to vary), OSX will throw up a “please enter password” dialog (I can’t remember the exact wording…). After entering the password, the VPN connection stays active for another 48-54 minutes, at which time another password dialog pops up. Lather, rinse, repeat. Not very fun during a standard work day, especially when my application-in-progress likes to crap out as soon as it loses connectivity to those remote servers (and requires lengthy restarts).

The solution (I thought)

After much googling I found this solution, for which I had high hopes (despite the comments from fellow OSX Lion users who couldn’t get the solution to work). In short, that post shows how to grant /usr/libexec/configd access to your keychain in order to squelch the password dialog. Well, unfortunately that solution didn’t work for me either 😦

The working solution (finally!)

After a week or so of still getting that annoying password dialog, I managed to google the correct sequence of terms and finally found a working solution! Over at the Apple forums, a very clever Mr Geordiadis posts a working solution to the problem: modify the racoon configuration files for the VPN connection by tweaking a few settings and increasing the negotiated password timeout from 3600 seconds to 24 hours (perfectly fine for my intended use). I’ve been connected for over 8 hours today and haven’t had a password dialog yet. So excellent! Confirmed that it works on Mountain Lion (10.8) as well.

I hope this information helps you as it helped me!

Steps:

  • Connect to the VPN so the configuration file is generated
  • Create a location for the VPN configuration files

    $ sudo mkdir /etc/racoon/vpn
    
  • Copy the auto-generated configuration file into the new configuration folder:

    $ sudo cp /var/run/racoon/1.1.1.1.conf /etc/racoon/vpn/
    
  • Edit the racoon.conf file:

    $ sudo emacs /etc/racoon/racoon.conf
    
  • Comment out the include line at the end of the file and include the new configuration folder:

    #include "/var/run/racoon/*.conf" ;
    include "/etc/racoon/vpn/*.conf" ;
    
  • Edit the VPN configuration file (a scripted version of the following edits appears after this list):

    $ sudo emacs /etc/racoon/vpn/1.1.1.1.conf
    
    • Disable dead peer detection:

      dpd_delay 0;
      
    • Change proposal check to claim from obey:

      proposal_check claim;
      
    • Change the proposed lifetime in each proposal (24 hours instead of 3600 seconds):

      lifetime time 24 hours;
      
  • Disconnect VPN and reconnect.
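
If you would rather script the three configuration edits than make them by hand, something like the following should work. It bluntly rewrites every matching line, so double-check the result against the steps above (-i.bak keeps a backup of the original):

$ sudo sed -i.bak \
    -e 's/dpd_delay .*/dpd_delay 0;/' \
    -e 's/proposal_check .*/proposal_check claim;/' \
    -e 's/lifetime time .*/lifetime time 24 hours;/' \
    /etc/racoon/vpn/1.1.1.1.conf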

Updated 2012-08-07: added the detailed steps
Updated 2012-08-07: tested on OSX Mountain Lion (10.8)

CrashPlan on QNAP

First up on my QNAP: offsite backup of my data in “the cloud”. QNAP’s recommended and integrated cloud backup provider was a bit too expensive for my taste, especially since I plan to backup nearly 1TB of data. After some serious googling I found the perfect candidate: CrashPlan.

CrashPlan’s software–written in Java–has two components, a server and a GUI client. This makes it a perfect candidate for QNAP. Get the server running in headless mode on my QNAP, then connect to it via the GUI client from my desktop machine.

  • First up with the QNAP, install Optware IPKG.
  • Then follow Cokeman’s excellent guide for CrashPlan on a QNAP.
  • Installed CrashPlan for OSX and modded the ui.properties file to servicePort=4200 as in CrashPlan’s docs (to edit I used TextMate; the relevant line is shown after this list):
    $ mate /Applications/CrashPlan.app/Contents/Resources/Java/conf/ui.properties
    
  • Added a tunnel to my QNAP in SSHKeyChain (an excellent program to manage SSH connections and tunnels):

    SSHKeyChain CrashPlan tunnel configuration

  • Open CrashPlan on the Mac. Because of the ui.properties configuration change, you will now be connecting from CrashPlan’s Mac GUI to the CrashPlan engine on the QNAP. Go through the CrashPlan setup, configure backup options, done!

    Connecting to headless CrashPlan engine via OSX GUI
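
For reference, the ui.properties change mentioned above is a one-liner; if memory serves, the stock file ships with the entry commented out, so uncomment it and set the port (4200 is simply the local tunnel port I chose):

servicePort=4200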

Auf Wiedersehen Frankenstein, hello QNAP

My homemade server running Ubuntu started to experience strange hardware issues (random crashing, etc). Over the course of a year I replaced and fiddled with memory, motherboard, and hard drives, yet I still had random crashes (it would even crash randomly while running memtest, so I ruled out a software issue). Since the server was my home NAS, I began to rethink the idea of maintaining unreliable hardware (and throwing money away left and right).

Out with Frankenstein, enter the QNAP TS-459 Pro II Turbo NAS. Fitted with four WD 2TB drives in RAID-5, my new QNAP has been extremely reliable, quiet, energy efficient (measured 39W at load!) and space-saving! Frankenstein was a huge server tower; the new QNAP is a tiny 18x18x24cm block that rests nicely under a cabinet. And it is quiet enough to sit unenclosed: fan noise is imperceptible, and faint hard drive noise can only sometimes be heard if the room is totally silent. The QNAP is connected to a CyberPower UPS (as are my internet modem and router). The UPS and NAS fit nicely under a cabinet in the living room:

QNAP TS-459 Pro II Turbo NAS and CyberPower DX 600E UPS Green Power

The QNAP’s web GUI is less than perfect (I find it slow, illogical, and limiting) and as I subsequently found out, the command-line configuration is also less than perfect. But the reliability, quietness, and “it just works”-feel of the QNAP so far has me quite happy. That said, going from a completely configurable linux server to a more closed system like the QNAP was going to take some getting used to. I plan on writing a series of posts documenting the changes I make to my new QNAP to customize it to be my perfect little AFP NAS.

Stay tuned, more to come!

Reduce the size of MySQL ibdata1 on OSX

So I finally figured out why my TimeMachine backups were becoming bigger and bigger, 3-4GB backing up every day when I come home from work… It seems that my MySQL database file (/usr/local/mysql/data/ibdata1) keeps getting larger and larger, unnecessarily, even if I delete databases, tables, etc. Even if I only update a few rows in a table in a small database, the ginormous ibdata1 file grows and then gets marked as a candidate for backup by TimeMachine. Ugh.

I did some digging and found this interesting tutorial on how to clean up InnoDB storage files.  Here I’ll explain what I specifically did on my OSX 10.6.5 machine with MySQL v5.1.38.

  1. If you’re not a cowboy, stop MySQL, back up all files, then start MySQL again (I used the System pref to stop/start MySQL; feel free to use the command line instead):

    $ sudo cp -R /usr/local/mysql/data /usr/local/mysql/data.bak
    

  2. Export all data from MySQL:
    $ mysqldump -u root -p --all-databases > alldatabases.sql
    
  3. Drop databases in MySQL (except “mysql” and “information_schema”):
    $ mysql -u root -p
    mysql> show databases;
    mysql> drop database XXXX;
    

    or use this great one-liner to delete all databases, modded a bit so it would work for me:

    # measure twice, cut once... make sure we are deleting what we should be deleting
    $ mysql -u root -p  -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | awk '{print "drop database " $1 ";select sleep(0.1);"}'
    # now delete them
    $ mysql -u root -p  -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | awk '{print "drop database " $1 ";select sleep(0.1);"}' | mysql -uroot -ppassword
    
  4. Stop MySQL again
  5. Add the following to the [mysqld] section in /etc/my.cnf:
    [mysqld]
    # store each InnoDB table in its own .ibd file instead of growing ibdata1
    innodb_file_per_table
    
  6. Remove the files:
    $ sudo rm /usr/local/mysql/data/ibdata1
    $ sudo rm /usr/local/mysql/data/ib_logfile0
    $ sudo rm /usr/local/mysql/data/ib_logfile1
    
  7. Start MySQL again
  8. Reload the databases from the sql dump-file:
    $ mysql -u root -p < alldatabases.sql
    
  9. Verify that your database(s) are working properly (see also the quick check after this list)
  10. Delete the backup
    $ sudo rm -rf /usr/local/mysql/data.bak
    
  11. Done!
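
A quick way to verify that the innodb_file_per_table change took effect (part of the sanity check in step 9): each InnoDB table should now live in its own .ibd file, and the rebuilt ibdata1 should stay small (paths per my OSX install above):

$ ls -lh /usr/local/mysql/data/ibdata1
$ ls -lh /usr/local/mysql/data/*/*.ibd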

After this modification my TimeMachine backups are much more reasonably-sized. Very nice!

Furthermore, I noticed the same infamously large ibdata1 file on our continuous-integration build server at work: it was 40GB! I applied the same modifications as above. That server runs Ubuntu 10.10 and the MySQL files are located in /var/lib/mysql/data, but otherwise the steps are pretty much the same. Even build-server bongo seems snappier now, and the total size of the MySQL data folder is 1GB (insane that there was 39GB of “dead” data in the ibdata1 file…)

Very happy.

AirPrint on Ubuntu

How to print from iPhone/iPad (iOS 4.2) via Ubuntu

I just noticed that with the latest 10.6.5 update, Apple at the last minute disabled support for printing from iOS 4.2 devices to shared printers in OSX. Instead of trying to overwrite new drivers with prerelease versions, I thought that there must be someone, somewhere out there who has figured out how to do this using the fine, free tools available to us. I like open standards, open tools, open source.

From what I understand AirPrint supports printing two ways:

  • Via officially-supported HP ePrint printers that advertise themselves on your subnet
  • Via shared printers from Mac/Windows (this is the part that apparently got axed, at least in the latest 10.6.5 update… don’t know how Windows will fare)

I don’t have a new HP ePrint printer, but I do have a wonderful Ubuntu 10.04 server running avahi (open source Bonjour/mDNS responder)… I wonder if there’s a way to have my server act as an AirPrint device and then send the print job to my networked printer?

Enter this post. Excellent! Set up a Bonjour service, point it to a shared printer, done!

Here’s what I had to do in Ubuntu to get printing to work, YMMV:

  • Install my printer (networked HP Color LaserJet 1515n) using the graphical configuration utility (System->Administration->Printing). Make sure the printer is shared.
  • Updated my /etc/cups/cupsd.conf configuration to allow network access, since the default Ubuntu configuration listens on localhost only (remember to restart the daemons afterwards; see the note after this list):
    # Only listen for connections from the local machine.
    #Listen localhost:631
    #Listen /var/run/cups/cups.sock
    Port 631
    ServerAlias *
    

    and

    # Restrict access to the server...
    <Location />
    Order allow,deny
    Allow @LOCAL
    </Location>
    
  • Tested remote access to Ubuntu’s cups web GUI from my laptop (to make sure machines other than the Ubuntu server had access to cups): http://172.16.0.50:631/printers/
  • Tested that I could print remotely from my laptop to the new shared printer on Ubuntu (simple “Add Printer” in OSX and then print test page)
  • Configured avahi with a new printer service /etc/avahi/services/printer.service. As mentioned in the original link, rp and adminurl are the most important configuration bits.
    <?xml version="1.0" standalone='no'?><!--*-nxml-*-->
    <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
    <service-group>
      <name>HP 1515n</name>
      <service>
        <type>_ipp._tcp</type>
        <subtype>_universal._sub._ipp._tcp</subtype>
        <port>631</port>
        <txt-record>txtver=1</txt-record>
        <txt-record>qtotal=1</txt-record>
        <txt-record>rp=printers/hp-cp1515n</txt-record>
        <txt-record>ty=HP 1515n</txt-record>
        <txt-record>adminurl=http://172.16.0.50:631/printers/hp-cp1515n</txt-record>
        <txt-record>note=HP Color LaserJet cp1515n</txt-record>
        <txt-record>priority=0</txt-record>
        <txt-record>product=virtual Printer</txt-record>
        <txt-record>printer-state=3</txt-record>
        <txt-record>printer-type=0x801046</txt-record>
        <txt-record>Transparent=T</txt-record>
        <txt-record>Binary=T</txt-record>
        <txt-record>Fax=F</txt-record>
        <txt-record>Color=T</txt-record>
        <txt-record>Duplex=T</txt-record>
        <txt-record>Staple=F</txt-record>
        <txt-record>Copies=T</txt-record>
        <txt-record>Collate=F</txt-record>
        <txt-record>Punch=F</txt-record>
        <txt-record>Bind=F</txt-record>
        <txt-record>Sort=F</txt-record>
        <txt-record>Scan=F</txt-record>
        <txt-record>pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/urf</txt-record>
        <txt-record>URF=W8,SRGB24,CP1,RS600</txt-record>
      </service>
    </service-group>
    
  • Printed a page from the iOS simulator
  • Printed a page from my iPhone 3GS with iOS 4.2.1
  • Done!
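
As promised above, a note on restarting: after editing cupsd.conf and adding the avahi service file, both daemons need to pick up the changes. On Ubuntu 10.04 that looks like the following, and avahi-browse (from the avahi-utils package) is a handy way to verify that the printer is actually being advertised:

$ sudo /etc/init.d/cups restart
$ sudo /etc/init.d/avahi-daemon restart
# list the announced IPP services and their TXT records:
$ avahi-browse -rt _ipp._tcp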

I’m not sure if all printers will work out of the box with this configuration, but since my printer supports PostScript I assume it can rasterize pretty much anything iOS will send it. In any case, I didn’t have to configure any filters or print settings. It just worked. Hopefully Apple won’t further cripple AirPrinting by also “patching” iOS so that only HP ePrint devices are supported and it no longer recognizes Bonjour services with subtype _universal._sub._ipp._tcp. We’ll see what happens!

Now all I need is an iPad. And a reason to print.

Updated 2010-11-21: repaired some mangled XML due to WordPress’s less-than-nice HTML-parsing

Updated 2010-11-23: added post subtitle, added screenshot from iPhone