Can you see me now?

This is part three of my smart light adventure: be sure to check out part one and part two.

My (endearingly) luddite kids are going to hate this one even more than the last. Now my lights don’t just listen to my voice, they turn on automatically when they see me show up. Well, sometimes they do. Time will tell if this first attempt at motion sensing will be awesome or annoying, but this Raspberry Pi business just keeps getting cooler and cooler.

There are of course a ton of potential motion-triggered scenarios; I focused my effort on two that I encounter pretty much every day:

  1. Our ancient dog often needs to get up overnight to go to the bathroom (it’s like having a baby but you have to go outside in the rain). It’d be really nice if the lights came on just a little so that I could see where I was going, then turned themselves off after I was back in bed.
  2. When I get up on Fall or Winter mornings, it’s almost always still dark — a small but notable daily gut punch that comes with living at 48°N. It’d be nice if the lights just came on automatically so I could pretend it was sunny.

The “Easy” Way (Ring and IFTTT)

As it turns out, we already have Ring cameras in many of our rooms — the obvious approach was just to tap into those motion signals to trigger the lights. Unfortunately, Ring doesn’t have an open developer program. There are some reverse-engineered libraries out there (e.g., for Python and TypeScript) and they seem fine, but if you’re going to depend on third-party stuff anyway, the existing integration at IFTTT.com is way more convenient. “If This Then That” has been around for years, a nerd’s paradise that enables connections between all sorts of Internet services and resources. IFTTT “applets” connect “IF THIS” conditions (“I get an email” or “the stock market closes” or “Camila Cabello posts to Instagram”) with “THEN THAT” actions (“Play a chime on my Alexa” or “Text me the closing price of MSFT” or “Open the Instagram app on my Roku TV”).

Since Ring is connected to IFTTT, the “IF THIS” part of my scenario is no problem:

  1. Sign up for the IFTTT service. You can create up to three applets for free; beyond that (and for more complex logic) you need to cough up for a “Pro” subscription.
  2. On the IFTTT Ring page, click “Connect” to start an authentication sequence where you grant IFTTT access to your Ring account.
  3. Create a new applet, click the IF THIS “Add” button, and search for and choose Ring.
  4. Click “New Motion Detected” and then pick the correct device in the dropdown.

Halfway there. But the “THEN THAT” side is a little more complicated. Remember the whole discussion last time about getting a message from the Internet back to our home controller app? Same issue here, but now we can just drop a message into the very same queue we created for Alexa. Given the ubiquity of Amazon Web Services, I assumed there would already be a “THEN THAT” action to handle this. Cue the Samuel L. Jackson voiceover — no dice. Building one was pretty easy, though.

Using the same AWS account we created to hold the SQS queue last time, I created an AWS Lambda function using the “microservice-http-endpoint” blueprint. I let it create a new Role for the function, and added an SQS SendMessage policy to that role the same way we did for the Alexa role. I chose “Create an API” with the “HTTP” option and set Security to “Open” (note this does mean there is a URL on the internet that can turn my lights on and off; good luck finding it).
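
For reference, the permission itself is just a tiny JSON policy statement attached to the function’s role — roughly this, with the Resource pointing at your own queue’s ARN (region, account and queue name below are placeholders):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "sqs:SendMessage",
          "Resource": "arn:aws:sqs:REGION:ACCOUNT_ID:YOUR_QUEUE_NAME"
        }
      ]
    }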

Once this was all created, I added some code almost identical to what we used for Alexa; you can find it on github in zwave/lambda/index.js. After deploying the function, I copied the endpoint URL shown in the console, went back to IFTTT, chose the “webhook” action, and pasted in the URL, adding query string parameters for “screenName” and “settingName”.
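
The URL IFTTT ends up calling looks something like this — the host and path come from whatever API Gateway generated for your function, and the screen and setting values are of course your own:

    https://<api-id>.execute-api.<region>.amazonaws.com/default/<function-name>?screenName=<screen>&settingName=<setting>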

Magic! Now whenever my Ring camera detects motion, it tells IFTTT, which calls the Lambda function, which drops a message into the SQS queue, which is picked up by the server in my house, which applies the requested setting to the lights. A little Rube Goldberg, but it works pretty well. Being able to re-use the queue is particularly satisfying; the home control server didn’t have to change at all for this to work.

An entertaining exercise, and it would certainly be nice to reuse motion sensors already in place. But there are a few ways in which this solution just doesn’t meet the brief:

  1. There are a million handoffs; in practice it can take up to ten seconds or even more for messages to make it to the queue. This delay is unacceptable; lights need to come on right away to be useful.
  2. Without adding the Pro subscription, there’s no reasonable way to turn the lights OFF once motion stops. We could handle this within the Lambda function (wait on a timer and then add another message to the queue), but it still wouldn’t be perfect, because there is no “motion ended” trigger from the Ring (we’ll talk more about optimizing this delay later on).
  3. Ring cameras are just expensive, and I don’t have them at all the places I’d eventually like to monitor.
  4. Even if I managed to work around these … I just can’t swallow adding more external services into my lighting solution. Remember, I started this whole journey because I got screwed by a random cloud service. There must be a better way!

The “Awesome” Way (PIR + Pi)

Of course there is. The Raspberry Pi already running our Z-Wave gateway is also an excellent device integration platform. It also turns out that reliable motion sensors are super-cheap. Seems like there’s a solution in there, and a fun one at that. Let’s play and see what happens!

First, the hardware. The most common way for electronics to detect human motion is with Passive Infrared Sensors (PIRs). These sensors respond to the heat signals emitted by all warm-blooded creatures and don’t rely on visible light, so they work reliably in the dark as well. Plus they’re very cheap; I picked up a five-pack of Stemedu HC-SR501 sensors for under $11 at Amazon, just nuts. These sensors add a bunch of logic beyond simple detection; specs are basically:

  • 5V input voltage required
  • 3.3V TTL output (low on quiet, high when motion detected)
  • Variable sensitivity from about 3 – 7 meters.
  • Variable delay (amount of time the signal will remain high) from about 3 seconds – 5 minutes.
  • Single or Repeatable Trigger Mode. In “Single” mode, motion detection starts the delay timer. TTL output remains high through the delay period and then drops low regardless of what happens during the delay. In “Repeatable” mode, the delay restarts with each motion event, only going low when there has been no motion for a full delay period. This second option basically keeps the signal high as long as there is activity within range of the sensor.

This is a pretty ideal match for the Pi, which has a number of General Purpose I/O (GPIO) pins made to directly accept TTL signals, and provides 5V power for attached devices. Repeatable Trigger Mode is also perfect for our “nightlight” use case — we can activate the lights when the sensor goes HIGH and turn them off when the value goes LOW — i.e., after I’ve left the room and gone back to bed.

Wiring it Up

The coolest thing about the Pi platform is its standard set of input and output pins, ready-made for attaching external devices (switches, sensors, lights, cameras, displays, etc.) with minimal additional circuitry. It’s perfect for somebody like me who kind of gets the basics of electronics, but quickly gets lost in the details of really making components work without frying themselves. Which is not to say that you can’t brick your Pi as well; it’s just a little harder.

Because the pins are (mostly) consistent across all Pi models, a huge market has grown up for daughter boards (amusingly also called “Hardware Attached on Top” or HATs) that easily extend the Pi for particular purposes. The RaZberry that was the star of the first Z-Wave post is one of these, which is awesome, except for one thing — the board covers up physical pins 2 and 4, which provide the five volts I need for the PIR sensor. There are many ways to work around this, but I chose to just use the 5V that standard USB ports provide instead. I started with a simple USB power cable and just hacked off the barrel plug to get access to the positive and negative wires. That left just the sensor’s TTL pin to attach. I used a utility knife to cut a little access port in the top of the Pi case, and connected that pin to physical pin 15 on the Pi (the RaZberry only covers physical pins 1-10, leaving a bunch open for other purposes).

The Pi model 3B has a header block attached, so instead of soldering anything, I was able to use Dupont-style crimp pins to make the connections. This was my first time using these, and it took a few tries to get comfortable with the process, but I am a convert! Basically you squish a male or female pin so that it grabs the insulation and contacts the wire, then insert it into a connector housing. The housings can hold many pins in a row — I used a single female pin in a single housing to connect to the Pi, and three female pins in a triple housing to attach to the sensor. I used this crimping tool and found this tutorial invaluable. The only trick is that the SB-28B crimper comes with the jaw plates installed the opposite way from the tool in the video; if you pay attention it’s fine, but you can swap the jaws if you like, which also makes them a little easier for righties to use.

Because the sensor outputs TTL voltage, this setup didn’t require any additional circuitry. Typically with an input sensor like this you’d want to add a pull-down resistor to ensure that voltage “float” doesn’t cause spurious events when the sensor is low. But the Pi comes through for us again — each GPIO pin has internal pull-up and pull-down resistors that can be enabled in software. These resistors, their values and default states cause significant confusion in the Pi community; I found this article to be definitive and super-helpful in figuring out what was up.

Accessing the GPIO Pins in Code

With the sensor powered up and connected, it was time to start coding. Pi4J is the “obvious choice” for Pi GPIO access from my old friend Java, so I dug in there. Pi4J is split into two very distinct major version branches: 1.x is built on the WiringPi library and supports Java 8 (except the very last release, 1.4). Version 2.x replaces WiringPi with PIGPIO and is a significant refactor that requires Java 11. I chose to go with Pi4J v1.3 in order to preserve Java 8 compatibility, and because 2.0 still seems a bit like a work in progress.

It’s worth understanding WiringPi a bit. It is installed by default on the Raspberry Pi OS and includes:

  • gpio, a command-line utility that runs setuid as root and can control the pins and read/write values to them. You can do a ton just with this utility and bash scripts (a quick example follows this list).
  • A C library that provides a programmatic interface to these same functions. For certain functions the library actually shells out to the gpio utility so that it can inherit the setuid behavior.
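
For example, here are a few quick commands to poke at an input pin straight from a shell prompt (pin 3 here is in WiringPi numbering, which we’ll get to in a minute; it’s where my sensor ended up):

    gpio mode 3 in        # configure WiringPi pin 3 as an input
    gpio mode 3 down      # enable the internal pull-down resistor
    gpio read 3           # prints 1 when motion is detected, 0 otherwise
    gpio readall          # dump a table of every pin, its mode and current value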

As is often the case in this world, there is some drama around WiringPi — back in 2019 its creator got (not unreasonably) fed up taking abuse from folks over software he was providing for free, and deprecated the package. That said, it is being maintained unofficially and is deeply embedded in the ecosystem, so it’ll surely be around for some time to come.

Pi4J v1.3 sits on top of the WiringPi C library and inherits much of its vibe. In particular it (by default) uses the “WiringPi” numbering scheme to identify pins. It turns out that there are a few different numbering schemes and folks are often less than clear about which one they’re using. You can get a sense of them all at https://pinout.xyz/ or by running the “pinout” command on your Pi. To wit:

  1. PHYSICAL or BOARD pin numbers just represent the physical location of each pin on the board. When I referred to “physical pins” above, this is the scheme I used. It’s most helpful when trying to find the pins on the actual hardware.
  2. WIRINGPI pin numbers assign a logical number to each pin based on the original Pi documentation. This is the default scheme used in our version of Pi4J; in the code you’ll see me be very explicit about this to try and avoid confusion! These numbers abstract away physical location, so code works across Pi models.
  3. BCM (or sometimes GPIO) pin numbers come from the underlying Broadcom hardware chip, and also provide an abstraction from physical location. Anecdotally this seems like the most common way of referring to pins in software; it’s also now the default scheme that the “gpio” utility uses.

Our PIR sensor is plugged into PHYSICAL pin 15, WIRINGPI pin 3, and BCM pin 22. All at the same time. Awesome.

Listening for Events

The code to actually listen to state change events from our sensor is quite simple and can be found in Motion.java. The object constructor allocates a GpioController, uses it to provision a GpioPinDigitalInput for each sensor we’ve configured (note the use of PinPullResistance.PULL_DOWN here to enable the pull-down resistor as we discussed earlier), and attaches a GpioPinListenerDigital implementation to receive callbacks.

The callback receives a GpioPinDigitalStateChangeEvent each time the sensor signal rises (goes from low to high) or falls (high to low). We translate these events into actions that are described in the next section.

Finally, on close() we call shutdown() on the GpioController to make sure resources are cleaned up and pins are unexported.
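
If you’d rather not wade through the whole class, here’s a minimal stripped-down sketch of that same Pi4J pattern — not the real Motion.java, just the shape of it:

    import com.pi4j.io.gpio.GpioController;
    import com.pi4j.io.gpio.GpioFactory;
    import com.pi4j.io.gpio.GpioPinDigitalInput;
    import com.pi4j.io.gpio.PinPullResistance;
    import com.pi4j.io.gpio.PinState;
    import com.pi4j.io.gpio.RaspiPin;
    import com.pi4j.io.gpio.event.GpioPinListenerDigital;

    public class MotionSketch {
        public static void main(String[] args) throws Exception {
            final GpioController gpio = GpioFactory.getInstance();

            // Pi4J 1.x uses WiringPi numbering by default; GPIO_03 here is
            // WiringPi pin 3 == BCM 22 == physical pin 15, with the internal
            // pull-down resistor enabled so the line doesn't float.
            final GpioPinDigitalInput pir = gpio.provisionDigitalInputPin(
                    RaspiPin.GPIO_03, "pir", PinPullResistance.PULL_DOWN);

            // Callback fires on every state change from the sensor.
            pir.addListener((GpioPinListenerDigital) event -> {
                if (event.getState() == PinState.HIGH) {
                    System.out.println("rising: motion detected");
                } else {
                    System.out.println("falling: no motion for a full delay period");
                }
            });

            // Clean up (unexport pins) when the JVM exits.
            Runtime.getRuntime().addShutdownHook(new Thread(gpio::shutdown));

            // Wait forever; the listener callback does all the work.
            Thread.currentThread().join();
        }
    }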

Putting it all Together

Finally we’re ready to actually turn some lights on or off! The system is configured using the Motion.Config class, which lists each sensor that is connected to the Pi, together with its WiringPi pin number and a set of actions to take when motion is detected.

On a RISING (motion detected) event, each associated Action is handled as follows:

  1. If the StartTimeHHMM config is present and the current time of day is before this value, the action is aborted.
  2. If the EndTimeHHMM config is present and the current time of day is after this value, the action is aborted.
  3. If the OnlyIfOff config is present and true, each light in the configured Screen/Setting is interrogated for its current value. If any lights are on at any brightness level, the action is aborted. This rule attempts to minimize undesired automated actions. For example, if it’s 3am but I’m still awake and reading, I don’t want motion to turn all the lights into “nightlight” mode.
  4. The configured “ActionSettingId” is executed.
  5. If a “QuietSettingId” value is present, it is added to the “QuietActions” list.

On a FALLING (no motion) event, the QuietActions list is searched for any actions tied to the relevant WiringPi pin number. If any are found, they are removed from the list and the appropriate QuietSettingId is executed. As we discussed at the beginning, the sensor is configured for Repeatable Trigger Mode and with a maximum delay (about five minutes). The combined effect is that the QuietSettingId will be executed approximately five minutes after in-range motion ends.
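
To make that concrete, here’s a rough sketch of both the rising and falling paths. The field names mirror the config values described above, but the class itself and the helpers (isAnyLightOn, applySetting) are stand-ins for the real home-control plumbing, not the actual code:

    import java.time.LocalTime;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    public class MotionActions {

        // Stand-in for one Action entry in the config (nulls mean "not configured").
        static class ActionConfig {
            LocalTime startTime;      // StartTimeHHMM
            LocalTime endTime;        // EndTimeHHMM
            boolean onlyIfOff;        // OnlyIfOff
            String screenName;
            String settingName;
            String actionSettingId;   // ActionSettingId
            String quietSettingId;    // QuietSettingId (optional)
        }

        // Pending "turn things back down" work, keyed by WiringPi pin.
        private static class QuietAction {
            final int wiringPiPin;
            final String quietSettingId;
            QuietAction(int pin, String settingId) { wiringPiPin = pin; quietSettingId = settingId; }
        }

        private final List<QuietAction> quietActions = new ArrayList<>();

        void onRising(int pin, List<ActionConfig> actions) {
            LocalTime now = LocalTime.now();
            for (ActionConfig action : actions) {
                // 1 & 2: only act inside the configured time window
                if (action.startTime != null && now.isBefore(action.startTime)) continue;
                if (action.endTime != null && now.isAfter(action.endTime)) continue;
                // 3: don't stomp on lights somebody already turned on
                if (action.onlyIfOff && isAnyLightOn(action.screenName, action.settingName)) continue;
                // 4: apply the configured setting
                applySetting(action.actionSettingId);
                // 5: remember to quiet things down once motion stops
                if (action.quietSettingId != null) {
                    quietActions.add(new QuietAction(pin, action.quietSettingId));
                }
            }
        }

        void onFalling(int pin) {
            // Run (and remove) any quiet actions registered for this sensor's pin.
            Iterator<QuietAction> it = quietActions.iterator();
            while (it.hasNext()) {
                QuietAction qa = it.next();
                if (qa.wiringPiPin == pin) {
                    applySetting(qa.quietSettingId);
                    it.remove();
                }
            }
        }

        // --- stand-ins for the real home-control calls ---
        boolean isAnyLightOn(String screenName, String settingName) { return false; }
        void applySetting(String settingId) { System.out.println("apply setting " + settingId); }
    }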

And can you believe it, it actually works. When I come into our family room between 2am and 5:30am, dim lights come on to help me see where I’m going, and turn off when I get back into bed. From 5:30am to 9am, when I walk into the room the lights come on fully and stay on until I change them manually. Exactly what I wanted, extremely reliable and self-contained. Wooohoo!

The only thing I plan to change about the current setup is to extract the motion sensing stuff into its own standalone application. Rather than configuring Screens and Settings there, I’ll just have the motion app hit well-known URLs on rising and falling signals. This will let me place motion sensors all around the house, away from the Z-Wave gateway. If I back each sensor with a $14 Pi Zero I’m looking at about $17 per unit, which isn’t bad at all. I might also see about Glowforging up some kind of enclosure for the sensor.

But at least for now, that is a wrap on the home lighting adventure. I have learned a ton about how all of this works. I hope you’ve found it interesting and useful as well … please let me know if you find bugs or mistakes, I’d very much appreciate it. Until next time!