
ikeacluster updates

I’ve made a few updates and overhauls to ikeacluster over the last year. The cluster is now its own layer 3 rack speaking BGP to my home network, with neater cabling, more bandwidth, and some new Avoton motherboards.

https://binaryfury.wann.net/ikeacluster/#updates

Equipment teardowns

A quick list of equipment teardowns and photographs I’ve done, for people who are interested in the internals:

unifi-ssl-header

For the longest time I wondered why Chrome would never save the password for my Ubiquiti UniFi controllers’ web interfaces. It turns out that because the UniFi controller software ships out of the box with an untrusted, self-signed SSL/TLS certificate, Chrome does the smart thing and won’t prompt you to save the password for the page.

The way to fix this (and otherwise let your browser trust the web UI) is to install your own certificate into the UniFi controller keystore, either one bought from a commercial CA or one issued by your own organization’s CA that your browsers trust. Unfortunately there’s a lot of confusion about how to do this; even on Ubiquiti’s help pages there are articles titled for UniFi that are really geared toward EdgeMAX products.

However you wind up with a trusted cert to import, here’s how to do it for UniFi controllers running on Mac OS X or Linux. The procedure is the same on both platforms since UniFi uses Java’s keystore under the hood; only the file paths differ. Ideally you’ll want to script this or set up something like a Chef recipe to manage the files for you, because you’ll need to repeat it whenever the cert expires or the key gets compromised.

Assumptions:

  • You have a way to get a trusted certificate to upload. Self-signing and setting up a CA are not covered here.
  • The UniFi (Java) keystore expects to import certificates in DER (binary) format. If you have certificates in PEM (which is really Base64/ASCII-armored DER), you’ll need to use openssl or similar to convert the PEM files to DER.
  • You’ll need to use the UniFi tool to generate certificate signing requests (CSRs). I didn’t put any time into importing completely new private keys into the keystore; I just signed the CSR that was generated.
  • I run my own wann.net certificate authority (CA) for issuing certs for all of my devices. The browsers on all of my laptops and phones already trust it.
  • You know how to work with the CLI.

Mac OS X

The Java (JAR) contents of the UniFi controller are installed to /Applications/UniFi.app/Contents/Resources/. lib/ace.jar is a UniFi-provided Java tool for manipulating the controller and keystore.

Certificate requests and the keystore are stored in the data/ subdirectory, which is a symlink to a user’s Library/Application Support directory, e.g. /Users/bwann/Library/Application Support/UniFi/data. (This is what preserves controller data between installs.) You must be in the main Resources/ directory before you can work with the UniFi keystore, else the tool gets unhappy with paths and can’t find things.

Generating a certificate request from the UniFi controller

The UniFi controller can generate a CSR for you, and it’ll keep the corresponding key in the local keystore.

# cd /Applications/UniFi.app/Contents/Resources/
# java -jar lib/ace.jar new_cert unifi.wann.net wann.net Fremont CA US
Certificate for unifi.wann.net generated

You should now have CSRs in PEM and DER format in the data/ directory:

# ls -l data/unifi*
-rw-r--r-- 1 root staff 708 Dec 31 14:42 data/unifi_certificate.csr.der
-rw-r--r-- 1 root staff 1042 Dec 31 14:42 data/unifi_certificate.csr.pem
#

Take the CSR (whichever format you prefer) and sign it with your CA.

Converting PEM certificates to DER

If you’re running your own CA, you’ll need to convert your CA’s root certificate to DER format too in order to import it. In my case I always work with PEM certificates, so I need to convert both my newly signed certificate and the root certificate:

# openssl x509 -outform der -in data/wannnet-ca-current-cert.pem -out data/wannnet-ca-current-cert.der
# openssl x509 -outform der -in data/unifi_certificate.cert.pem -out data/unifi_certificate.cert.der

You can store both of these DER files in the data/ directory.
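If you want to sanity-check a conversion before importing it, openssl can read the DER right back out. Here’s a quick stand-in demo using a throwaway self-signed cert; substitute your real data/ files:

```shell
# Throwaway self-signed cert as a stand-in for your real files.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=unifi.wann.net" \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null

# The same PEM -> DER conversion as above:
openssl x509 -outform der -in "$tmp/cert.pem" -out "$tmp/cert.der"

# Read the DER back; the subject should match what was signed.
openssl x509 -inform der -in "$tmp/cert.der" -noout -subject
```

If the subject comes back as garbage or openssl complains about the encoding, the file isn’t really DER and the keystore import will fail.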

Importing the certificates

Use the import_cert argument to ace.jar to import both the root CA and host certificate:

# java -jar lib/ace.jar import_cert data/unifi_certificate.cert.der data/wannnet-ca-current-cert.der
parse wannnet-ca-current-cert.der (DER, 1 certs): EMAILADDRESS=pk@wann.net, OU=wann.net CA, O=wann.net, L=Fremont, ST=California, C=US
parse unifi_certificate.cert.der (DER, 1 certs): CN=unifi.wann.net
Importing signed cert[unifi.wann.net]
... issued by [EMAILADDRESS=pk@wann.net, OU=wann.net CA, O=wann.net, L=Fremont, ST=California, C=US]
Certificates successfuly imported. Please restart the UniFi Controller.

Restart the UniFi controller. Done!
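Since you’ll repeat this whole cycle at every renewal, it’s worth wrapping in a script. A minimal sketch, assuming the file names from the examples above and the macOS install path (swap in /usr/lib/unifi on Linux):

```shell
# Sketch only: convert the renewed PEM cert and CA cert to DER, then import
# both into the UniFi keystore. File names are the examples from this post;
# adjust for your own cert naming scheme.
import_unifi_cert() {
    unifi_dir="/Applications/UniFi.app/Contents/Resources"  # /usr/lib/unifi on Linux
    cd "$unifi_dir" || return 1
    for name in unifi_certificate.cert wannnet-ca-current-cert; do
        openssl x509 -outform der -in "data/${name}.pem" -out "data/${name}.der" || return 1
    done
    java -jar lib/ace.jar import_cert \
        data/unifi_certificate.cert.der data/wannnet-ca-current-cert.der
    # Restart the controller afterwards for the new cert to take effect.
}
```

Drop something like this into a Chef recipe or cron-adjacent script and renewals become a one-liner.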

Linux (CentOS/Debian)

It’s basically the exact same process on CentOS/Debian/Ubuntu, except the paths to the UniFi data are different. On at least Ubuntu the main binaries of the controller are installed to /usr/lib/unifi/, with /usr/lib/unifi/data/ being a symlink to /var/lib/unifi/.

Generating a certificate request from the UniFi controller

# cd /usr/lib/unifi
# java -jar lib/ace.jar new_cert unifi.wann.net wann.net Fremont CA US
Certificate for unifi.wann.net generated

You should now have CSRs in PEM and DER format in the data/ directory:

# ls -l data/unifi*
-rw-r--r-- 1 root root 712 Dec 31 18:04 data/unifi_certificate.csr.der
-rw-r--r-- 1 root root 1050 Dec 31 18:04 data/unifi_certificate.csr.pem

Take the CSR (whichever format you prefer) and sign it with your CA.

Converting PEM certificates to DER

Follow the exact same steps as in the OS X section, using openssl to convert from PEM to DER if necessary.

Importing the certificates

Use the import_cert argument to ace.jar to import both the root CA and host certificate:

# java -jar lib/ace.jar import_cert data/unifi_certificate.cert.der data/wannnet-ca-current-cert.der
parse wannnet-ca-current-cert.der (DER, 1 certs): EMAILADDRESS=pk@wann.net, OU=wann.net CA, O=wann.net, L=Fremont, ST=California, C=US
parse unifi_certificate.cert.der (DER, 1 certs): CN=unifi.wann.net
Importing signed cert[unifi.wann.net]
... issued by [EMAILADDRESS=pk@wann.net, OU=wann.net CA, O=wann.net, L=Fremont, ST=California, C=US]
Certificates successfuly imported. Please restart the UniFi Controller.

Restart the UniFi controller:

service unifi restart

Done!

Using keytool

There’s a Java utility called keytool, usually already on your system, that you can use to view or modify the keystore kept by the UniFi controller. For the sake of compatibility and time I elected to use the import function of lib/ace.jar, but for the #yolo crowd you can play with keytool to make modifications to the keystore directly.

For example, to list the certificates in the keystore file (by default there’s no keystore password):

keytool -list -keystore data/keystore

Verbose listing with certificate details:

keytool -list -v -keystore data/keystore

Speed!

My dedicated server is quite old and ass slow. I finally got around to moving my website elsewhere, running Nginx+HHVM, and now it’s tolerable once again! I can finally enforce 100% HTTPS without killing the CPU.

June rocks

June was such an exciting month and the good news over summer keeps getting better:

  • People finally wise up and take down the Confederate flag (the “rebel flag” where I grew up)
  • The US Supreme Court allows gay marriage (whether or not they hijacked diplomacy is another thing)
  • ESA’s Philae comet lander makes contact with Rosetta after several months of hibernation
  • The Oklahoma Supreme Court says the 10 Commandments monument at the state capitol must come down
  • ARIN basically ran out of IPv4 addresses at the end of June, only /23 and /24s remain
  • 50% of Xfinity/Comcast Facebook users now reach Facebook over IPv6 (tell me again IPv6 will never take off?)
  • The Internet largely absorbed the 2015 leap second without dying
  • My annual physical says I’m not dying, but need to get more exercise
  • I scheduled vacation at the end of July

Operations reading list

I love books. These days I buy most of my books for Kindle, but I still buy paper copies of books I really like and want to keep around. Tech books are notorious for being obsolete a couple of years after printing, but there are still several timeless books I use for reference and would recommend for anyone in UNIX/Linux systems engineering or networking, whether brand new or a jaded veteran. Some are older than others, but here are a few that have served me well:

Systems:

If you deal with the internet you must have a very solid understanding of the protocols involved, from ARP to TCP. By the time you’ve been in the industry for several years, you’ll have encountered problems with every part of the stack covered by this book, along with lower levels such as Ethernet. tcpdump and other packet sniffers will be your best friends and you should use them liberally. My first edition of this book only covers IPv4, but the second edition now covers IPv6, which you should be using!

(a/k/a “APUE”) The Internet is built on UNIX and C. This is more of a reference book than one you’d sit down and read, but I enjoy reading random bits when I’m curious or want more background on something. The book covers in great detail how the UNIX userland environment works with the kernel, giving snippets of C code to show exactly how things like syscalls are implemented under the hood. Ever run strace and wondered what open(), write(), mkdir(), bind(), connect(), fork(), or SIGUSR1 are? This book will show you in simple C code what’s going on.

Three recent additions this past year:

APUE was geared toward a general System V / BSD UNIX audience. This book is very similar to APUE, but geared toward a Linux audience. It goes into the same level of detail as APUE, explaining things in C code. It’s a huge book, coming in at 1,500+ pages, so make room on your bookshelf for it.

Brendan has given many talks and authored several pieces on systems performance, benchmarking, and digging deep into troubleshooting bottlenecks. He wrote the DTrace Toolkit, and if you’ve ever seen the “guy screaming at hard drives” video on YouTube (which shows the effect of vibration on disk latency), that’s him. You can’t change something if you can’t measure it, and this book explains how to get valid data to analyze the performance of applications, CPUs, memory, disks, kernels and networking. It also covers applications in a cloud environment and gives good insight into how virtualized kernels or system calls can impact performance.

In particular, I really like this book because it covers things from both a Linux and Solaris kernel perspective. I’ve used both over my career and while my Solaris is rusty this gives useful comparisons to get me through problems. I’ve heard Brendan speak a couple of times and his slides (and talk) from SCaLE 11x on Linux Performance Analysis are a great read. There are some very useful illustrations that show which tool to use for the job, e.g. in troubleshooting an issue do I use perf? iostat? sar? tcpdump? netstat? nicstat? strace?

I first ran across Sherri Davidoff by listening to her talk at DEF CON 21 about the do-it-yourself cellular sniffer^W IDS, and later found her book. Most systems people are blissfully ignorant of anything beyond the Ethernet interfaces of their servers. That doesn’t cut it anymore in a land of distributed systems, so you need to understand how to troubleshoot issues on the network too. This book is primarily written for doing forensic analysis and gathering evidence of events for an investigation, but there are still a lot of parallels in troubleshooting a production environment. The same techniques for carefully collecting evidence and gathering logs are fantastic for writing up a root cause analysis, so some bad thing doesn’t happen again[tm].

I like this book because it covers traffic and packet analysis, a TL;DR of network protocols in real life, and the various network devices that data can flow through. This is the only practical book I’ve read that explains why you’d want to do flow analysis (e.g. NetFlow, sflow) to detect problems or see application activity, along with examples of using nfdump/NfSen. It covers intrusion detection, snort, switches, routers, firewalls, logging, tunneling, all good stuff.

Networks:

In a previous life I was dedicated to network engineering in a managed hosting environment for a few years, with lots of snowflake customers. I touched a wide swath of gear from multiple vendors: hardware load balancers, VPNs, firewalls, L2/L3 switches, routers, huge L2 domains with hundreds of VLANs. Enough to do the job, but not a master of any. I caused my share of outages with spanning tree before I got a real grasp of what was going on. These books are a bit dated since Cisco IOS isn’t as dominant as it once was (thank god), but they still have useful network knowledge that transfers to other platforms.

My go-to book for Cisco firewalls back in the day. I dealt a lot with all three platforms and it was often quicker to just grab this book than dig around on Cisco’s website for configuration examples. My book is all marked up with notes and bookmarks for packet processing order of operations, NAT and SNAT configuration, failover pairs, and logging. It was good because it usually gave the equivalent PIX, ASA, and FWSM (Cat 6500 Firewall Service Module) commands together when explaining how to configure something.

Oddly absent from this book was any treatment of VPNs; there’s barely a mention of IPsec. I have the companion book “The Complete Cisco VPN Configuration Guide” but was disappointed at its coverage of IPsec and SSL/DTLS VPNs, especially when it came to troubleshooting on firewalls. A good hunk of the book is centered around the Cisco VPN 3000 Concentrator, which is way obsolete now.

This was my savior in learning the guts of layer 2 Ethernet and spanning tree in its various flavors. STP, PVST+, Rapid STP, MST, BPDUs, STP roots, enough trees to make a forest. Then there’s VLANs, VLAN trunking, 802.1q tagging, Q-in-Q, private VLANs and multicast. Then it goes into covering CatOS and IOS on the beloved, trusty workhorse of the 2000s, the Catalyst 6500 series of switches. I never did get that CCNP.

  • Designing Content Switching Solutions, by Naseh and Kahn

This book is positively dated now, but if you find yourself still managing an ancient Cisco load balancer (e.g. CSS 11501, CSM for 6500, or firewall load balancing), this is your book. Beyond this it gets into HTTP/RTSP/streaming/layer 7 load balancing, SSL offloading and global load balancing. Now that I think about it, don’t buy this book. Offloading SSL to a hardware load balancer is a terrible thing you don’t want to do. Your farm of Intel Xeons can handle the crypto overhead much better than a puny RISC processor from 2001. The world is much better now and standard Linux servers are the new load balancer.

It’s a classic that practically everyone in the 1990s learned BGP from. Heck, it even includes a CIDR conversion table in the front flap and explains what NAPs were. Nevertheless, it explains various scenarios and topologies where you’d use BGP internally and externally, and how the protocol behaves to control routes. The world has moved to running MPLS within the backbone, but BGP is still alive and kicking at the edges. In fact, at work we use BGP right down to the rack switch and inject VIPs onto the network via BGP.

Notable mentions:

Sometimes I just want to read a book with Kindle on an airplane or at breakfast, because I’m that kind of guy.

I hate and love Kerberos, mainly because I was clueless when I was tossed into the deep end to support it. I want to love it more because distributed authentication and authorization are super useful in a large systems environment and I don’t know how I’d live without them now, so I bought this to read. So far it doesn’t disappoint in explaining how to set up Kerberos realms, KDCs, slaves, and all that fun stuff.

I don’t put on my DBA hat very often, and I touch MySQL seldom enough that I have to go back and remember how to set up replication. If I supported it again, this would probably be the book I’d be reading.

With EdgeOS 1.6 for the EdgeRouter line, Ubiquiti upgraded the Debian distribution from squeeze to wheezy. Along with a more modern 3.10 kernel this gets us a newer version of Ruby too, 1.9.1. During upgrades the system is blown away so I lost my Chef client. I had problems re-bootstrapping my routers until I finally realized that /etc/apt/sources.list still pointed at squeeze repos. I asked Ubiquiti about this and they say it’s intentional that you have to update sources.list to fetch from the new repo.

How to fail

Stepping back in time before I figured this out, this is what transpired.

When I would try to build gems, things went sideways: the running system was wheezy but apt was trying to install packages from the squeeze distribution, so there were a lot of version conflicts and packages simply refused to install. For the record, these are the sorts of errors I was running into with a repo mismatch (mainly around libc6 and libc6-dev when trying to install ruby):

root@gw2:/home/ubnt# apt-get install ruby ruby-dev git ruby1.8-dev
...
The following packages have unmet dependencies:
 ruby1.8-dev : Depends: libc6-dev but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
root@gw2:/home/ubnt#

Trying to install libc6-dev fails:

root@gw2:/home/ubnt# apt-get install libc6-dev
The following packages have unmet dependencies:
 libc6-dev : Depends: libc6 (= 2.11.3-4) but 2.13-38+deb7u6 is to be installed
             Depends: libc-dev-bin (= 2.11.3-4) but it is not going to be installed
             Recommends: gcc but it is not going to be installed or
                         c-compiler
E: Unable to correct problems, you have held broken packages.
root@gw2:/home/ubnt#

You can pretend the problem doesn’t exist by installing ruby 1.8 without the -dev package, but this will blow up on you later when you try to build gems such as ohai:

root@gw2:/tmp/rubygems-2.4.1# gem install ohai --no-rdoc --no-ri --verbose
...
/usr/lib/ruby/gems/1.8/gems/ffi-1.9.6/spec/ffi/variadic_spec.rb
/usr/lib/ruby/gems/1.8/gems/ffi-1.9.6/spec/spec.opts
Building native extensions.  This could take a while...
/usr/bin/ruby1.8 -r ./siteconf20141213-14984-bl1gm-0.rb extconf.rb
extconf.rb:4:in `require': no such file to load -- mkmf (LoadError)
	from extconf.rb:4
ERROR:  Error installing ohai:
	ERROR: Failed to build gem native extension.

    Building has failed. See above output for more information on the failure.
extconf failed, exit code 1

Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/ffi-1.9.6 for inspection.
Results logged to /usr/lib/ruby/gems/1.8/extensions/mips-linux/1.8/ffi-1.9.6/gem_make.out
root@gw2:/tmp/rubygems-2.4.1#

Aaaaand this fails because mkmf (Ruby MakeMakefile module) is provided by the ruby-dev package we couldn’t install earlier.

root@gw1:/home/ubnt# dpkg-query -L ruby1.9.1-dev | grep mkmf
/usr/lib/ruby/1.9.1/mkmf.rb
root@gw1:/home/ubnt#

So the lesson here is to make sure you’re fetching packages from the correct repo. If you’ve found yourself in this situation, you’ll want to back things out and install the correct versions: first run dpkg --purge ruby1.8 libruby1.8 to remove ruby 1.8, then fix your apt sources and start all over again.
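The sources.list fix itself is just rewriting squeeze to wheezy. A sketch of the recovery, with the sed edit demonstrated against a scratch file so you can eyeball the result first (on the router, the real file is /etc/apt/sources.list and its exact contents will differ):

```shell
# Demonstrate the edit on a scratch copy with made-up squeeze entries;
# run the same sed against /etc/apt/sources.list on the router itself.
cat > /tmp/sources.list <<'EOF'
deb http://http.us.debian.org/debian squeeze main contrib non-free
deb http://security.debian.org/ squeeze/updates main contrib non-free
EOF
sed -i 's/squeeze/wheezy/g' /tmp/sources.list
cat /tmp/sources.list

# Then, on the router:
#   dpkg --purge ruby1.8 libruby1.8   # back out the mismatched packages
#   apt-get update                    # fetch the wheezy package lists
#   apt-get install ruby ruby-dev     # reinstall against wheezy
```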

Chef/Ruby version caveat
One thing worth mentioning here is that you won’t be able to run the latest hotness, Chef client 12. The wheezy distro only has ruby 1.9.1, and Chef 12 requires ruby 2.0. The best I’ve been able to install from rubygems is Ohai v7.4.0 and Chef client 11.16.4.

gem install ohai --no-rdoc --no-ri --verbose -v 7.4.0
gem install chef --no-rdoc --no-ri --verbose -v 11.16.4

In honor of xkcd 979, I’m posting this so future generations of Courier-IMAP users won’t have to Bing for a solution in vain (and hit lots of useless advice). In the process of finally getting around to upgrading my 2008-era courier-imap 4.1.1 setup to the shiny new 4.15 hotness and putting things in Chef templates, I encountered this error in /var/log/maillog:

imapd-ssl: couriertls: /etc/pki/tls/private/blah-certkey.pem: error:0906D06C:PEM routines:PEM_read_bio:no start line

My certificate file has three things in it: my SSL certificate, the intermediate CA certificate, and the private key. After making sure I didn’t have wonky ^M characters, line feeds, or malformed certificate BEGIN/END headers, I started bisecting the old config against my new template. I discovered I was missing the dhparams parameter, which is new in 4.15:

TLS_DHPARAMS="/usr/lib/courier-imap/share/dhparams.pem"

This file is generated by the courier-imap-mkdhparams cronjob. I read the release notes before upgrading but clearly forgot to check for this afterward. I added this to my template, and now Courier-IMAP is a happy camper.
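If you don’t want to wait for the cronjob (or script the upgrade yourself), the file can also be generated by hand with plain openssl. A stand-in demo writing to /tmp; point the output at your actual TLS_DHPARAMS path, and note the bit size here is deliberately tiny just to keep the demo fast:

```shell
# Stand-in: generate DH parameters with openssl. 512 bits only keeps this
# demo quick; use 2048 or more on a real server.
openssl dhparam -out /tmp/dhparams.pem 512 2>/dev/null

# Confirm the file parses as DH parameters:
openssl dhparam -in /tmp/dhparams.pem -noout -check
```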

I recently started using Ubiquiti’s EdgeRouter Lite to replace the routing functionality of my Airport Extreme at home and for some other networking projects. I really like this little box, $99 gets you a 3-port gigabit router and it can really churn through packets. It runs EdgeOS, which is based upon Debian along with the Vyatta routing stack.

EdgeOS provides a nice web UI as well as a CLI shell for editing the configuration. If you’ve never used Vyatta, its configuration and CLI look and feel are just like Juniper’s JunOS. And like JunOS, you can also drop into a system-level bash shell.

I did some poking around and discovered that the router has 2 GB of flash storage (via USB thumb drive internally) and ordinary Debian packages can be installed. Further, once inside a bash shell there is a command-line interface to manipulate the running Vyatta router configuration. This made me wonder, can I run Chef on this thing to manage it?

Historically, doing any sort of scripted management on a router has been a giant pain in the ass, either involving remote expect scripts or XML APIs, because there’s no way to run software directly on the router. Chef expects to run locally on a system and be able to run commands to converge the system to the desired state. If you tried to bolt Chef onto a Cisco IOS router, for example, you’d at the very least need some sort of proxy layer that takes Chef resources, translates them into IOS commands, and runs them remotely. It’s hacky and ugly; I wouldn’t want to run it in production. With an EdgeRouter’s 2 GB of storage and Debian underpinnings, Chef can indeed run directly on the router, eliminating the need for any sort of proxy layer!

The EdgeRouter has a Cavium 64-bit MIPS CPU so the standard omnibus Chef client packages won’t work because they’re intended for i686/x86_64. However Chef can be installed from RubyGems. This is the same way I install Chef on Raspberry Pis running Raspbian (which is ARM-based, a/k/a armv6l), and in fact the same knife bootstrap script I used for Raspbian worked for EdgeOS.

Bootstrapping takes a fresh EdgeOS router and installs Chef on it via ssh from another system. RubyGems packages are installed, the Ohai and Chef client gems are installed, and the Chef client.rb and validation key are copied over. Once this is done, Chef can be run directly on the router.

Bootstrapping EdgeOS

I’ve uploaded my knife bootstrap template to GitHub (https://github.com/bwann/chef/tree/master/knife). Drop this into your ~/.chef/bootstrap directory, then feed it to knife with the -d option. By default the username/password for EdgeOS is ‘ubnt‘ and ‘ubnt‘.

knife bootstrap -d debian7-edgeos-raspbian -x ubnt --sudo 192.168.1.20

Bootstrapping takes several minutes to run, with installing and updating gems taking 100% of a core for a while. Various things are fetched remotely from the internet with this template, such as Debian package updates and RubyGems.

Using Chef

I’ve only just started using Chef on EdgeOS and so far haven’t gotten to manipulating the Vyatta configuration. I imagine this will involve writing providers to handle running commands via shell. This is tricky because there are some config files (e.g. /etc/snmp/snmpd.conf, zebra) that are managed by EdgeOS and changes to them would be overwritten, therefore they’d have to be managed via Vyatta API. There’s documentation for the Vyatta shell API on the unofficial Vyatta wiki: http://www.vyattawiki.net/wiki/Cli-shell-api. Once I figure this out I’ll write up more about it.
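For reference, here’s an untested sketch of what driving the Vyatta configuration from a script (or a Chef execute resource) might look like, using the config command wrapper described on that wiki. Treat the wrapper path and subcommands as assumptions to verify against your EdgeOS release:

```shell
# Hypothetical sketch -- the wrapper path and subcommands are taken from the
# Vyatta wiki, not verified on every EdgeOS release.
set_iface_description() {
    wrapper=/opt/vyatta/sbin/vyatta-cfg-cmd-wrapper
    $wrapper begin
    $wrapper set interfaces ethernet eth0 description "$1"
    $wrapper commit
    $wrapper save
    $wrapper end
}

# Usage (on the router): set_iface_description "uplink to DSL modem"
```

A Chef provider would presumably wrap a sequence like this, with the commit step gated on whether the resource actually changed anything.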

In the meantime I’ve used Chef to do things like this:

  • Install packages, e.g. iftop
  • Manage OpenSSL certificates for my private CA
  • Install OpenTSDB tcollectors for system data collection, e.g. CPU/memory/interface counters

Example ohai output

Ohai reports the platform/platform_family as debian/debian, so there’s no clear indication here that we’re running on a router. So far my way around this in recipes has been keying on node['os_version'].end_with?('UBNT').

  "kernel": {
    "machine": "mips64",
    "name": "Linux",
    "os": "GNU/Linux",
    "release": "2.6.32.13-UBNT",
    "version": "#1 SMP Wed Oct 24 01:08:06 PDT 2012"
  },
  "os_version": "2.6.32.13-UBNT",
  "platform": "debian",
  "platform_family": "debian",
  "platform_version": "6.0.6",
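A tiny illustration of that guard as a recipe helper (the helper name and the stand-in node hash are mine, shaped like the Ohai output above):

```ruby
# Hypothetical helper: EdgeOS doesn't get its own Ohai platform value, but
# the kernel release string ends in "UBNT", so key off that in recipes.
def edgeos?(node)
  node['os_version'].to_s.end_with?('UBNT')
end

# Stand-in node attributes, shaped like the Ohai output above:
node = { 'os_version' => '2.6.32.13-UBNT' }
puts edgeos?(node)   # prints "true"
```

In an actual recipe you’d test node['os_version'] directly inside an `if`/`only_if` rather than defining a method, but the check is the same.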

Other EdgeRouter hacking

A few other people have been hacking on EdgeRouters, even getting FreeBSD running on them. I was most surprised that the internal flash storage is just a removable USB thumb drive sitting in a socket on the board. It has a 2 GB stick in it, which should be easily replaceable with something larger if so desired. The U-Boot bootloader lives in a separate piece of flash on the board.

I found a handy wiki page over at http://www.crashcourse.ca/wiki/index.php/EdgeRouter_Lite which has some good information and links on the internals of the EdgeRouter.

OpenTSDB tcollectors

tcollectors work just fine on the EdgeRouter. I’m using them to collect counters every 15 seconds and push them to TSD. There doesn’t seem to be any noticeable CPU lag when the Chef client or the tcollectors do their thing. All network interfaces are visible to the kernel, so all three Ethernet ports, VPN tunnels, and IPv4-IPv6 tunnels can be measured.

Here’s an example of charting bytes in/out (red, green, left axis) of my DSL connection and CPU idle (blue, right axis):

Balcony plant waterer

I finally reached a critical mass of plants, flowers, and succulents on my apartment balcony, to the point that watering them required multiple trips to the sink. While browsing the hardware store I was eyeing the garden drip irrigation systems and thought, why not? Originally I wanted to use some sort of jug I could fill with water from the bathtub and then gravity-feed out to my plants, but I didn’t find anything off-the-shelf I could use and shelved the idea. Later I was at Harbor Freight and picked up a cheap 12V submersible pump, with the intent of making a little fountain for the garden. I apparently got a defective solar panel, because the one included with the kit didn’t do anything when I plugged in the pump. Fortunately I was going to hook the pump to my existing balcony setup anyway, so no loss.

Eventually the idea came to me to use a Tidy Cat litter jug and this submersible pump to water my flowers. I go through litter every few weeks, and the jugs are sturdy plastic with a hinged lid and handle, designed to hold 38 pounds of clay: perfect for lugging water outside. I took the pump to the hardware store and found that its outlet slipped perfectly into 1/2″ garden sprinkler tubing. I had no idea how I’d wind up using it, so I settled on 1/2″ tubing connecting the pump to a tee just outside the jug. From there it split off to feed a pair of manifolds, which broke out into 1/4″ connectors for feed lines. When I want to refill the jug, I lift the tube and pump out, leaving the rest of the tubing in place.

Each flowerpot has a 1 gallon per hour drip fitting and the planters have 2-3 drip fittings. There’s plenty of pressure to push water along, even with 1/4″ tubing going upwards. However, it’s not automated at all at this point. I still have to plug in the motor for several minutes to water the plants, which still beats going back and forth to the sink. I do have an OpenSprinkler controller I bought at Maker Faire that I’ll likely try next. This will let me automate watering and maybe not have to think about it. My plants have shown an amazing comeback after getting good watering!
