Ephemerabot was born into the world this week. Here’s one of their first words:
And here’s their birth certificate.
Ephemerabot checks for and tweets out new scraps of ephemera daily, plus a Throwback Thursday edition every…Thursday.
I’ve been itching to play with the Twitter API for a while. Twitter has the trifecta for making interesting computer-generated content:
- A steady flow of human-made content to programmatically sift through and read
- The ability to write new content in a myriad of ways
- A well-documented and accessible API
Other public forums like Instagram lack the accessible API: the ability for anyone to experiment with the platform.
I decided to try out Ephemerabot first because it’s a one-way stream: just sending simple(ish) tweets out without first reading other people’s.
I already use Airtable to publish records to the Ephemera web app via Airtable’s API. It works great¹ and their API documentation is my go-to example of good documentation. So, Ephemerabot already had a data source (example Airtable base here).
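The “check for new ephemera” step boils down to comparing each record’s `createdTime` (which Airtable’s API returns with every record) against the last time the bot ran. A minimal sketch of that filter; the record shape and `lastChecked` here are simplified stand-ins, not the bot’s actual code:

```javascript
// Airtable's API includes a createdTime ISO timestamp with every record.
// Keep only records created since the bot's last run.
function newScrapsSince(records, lastChecked) {
  return records.filter((r) => new Date(r.createdTime) > lastChecked);
}

// With the real client (npm install airtable), records would come from
// something like:
// new Airtable({ apiKey }).base('app…')('Ephemera').select().all()
```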
How often should I pull from that data source? I’ve landed on the following schedules based on how often I actually upload new ephemera and how many records (200ish) already exist:
- Check for and tweet new ephemera every twelve hours
- Choose one piece of ephemera at random to tweet every Thursday
These are easy to change thanks to packages like node-schedule.
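For reference, the two schedules as node-schedule cron strings. The registration lines are a sketch (the 9 a.m. Thursday time and the handler names are placeholders I picked, not the bot’s real values):

```javascript
// '0 */12 * * *' fires at minute 0 of every 12th hour (00:00 and 12:00).
// '0 9 * * 4' fires at 09:00 every Thursday (day-of-week 4).
// const schedule = require('node-schedule');
// schedule.scheduleJob('0 */12 * * *', tweetNewEphemera);
// schedule.scheduleJob('0 9 * * 4', tweetThrowbackThursday);

// A pure mirror of the Thursday rule, handy for sanity checks:
function isThrowbackSlot(date) {
  return date.getDay() === 4 && date.getHours() === 9 && date.getMinutes() === 0;
}
```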
So, I’ve already got the Airtable base, a working web app with the Airtable API, and a pretty good idea of how I want these Airtable records to be tweeted. Hooking up Twitter was the next challenge, although that was pretty easy thanks to the twit package, a bunch of YouTube videos, and their documentation. The real challenge was tweeting with images.
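Posting an image through twit is a two-step call: upload the media, then attach the returned id to the status. A sketch under those assumptions (`base64Image` and `scrapTitle` are placeholders, and the `buildStatus` truncation helper is my own addition, not something the bot necessarily needs):

```javascript
// Two-step image tweet with twit: upload first, then reference the
// returned media id from the status update. Sketch only.
// const Twit = require('twit');
// const T = new Twit({ consumer_key: '…', consumer_secret: '…',
//                      access_token: '…', access_token_secret: '…' });
// const { data } = await T.post('media/upload', { media_data: base64Image });
// await T.post('statuses/update', {
//   status: buildStatus(scrapTitle),
//   media_ids: [data.media_id_string],
// });

// Keep the status under Twitter's 280-character limit:
function buildStatus(text) {
  return text.length <= 280 ? text : text.slice(0, 279) + '…';
}
```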
Ephemera scraps come in all shapes and sizes. Twitter, however, has very rigid image container dimensions: 1024×512. That’s 2:1. Wider than widescreen.
I don’t want one tweet’s image to be huge because it happens to fit the 2:1 ratio well and another to be tiny because it doesn’t. Here’s what I mean:
You get the gist. The images need to be scaled differently on the X and Y axes in order to appear proportional. This was the hardest part for me.
I settled on the jimp package because its (limited) documentation had a few different methods for scaling and cropping. It also had a built-in .getBase64 method, which produces the format Twitter wants images in.
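Concretely, the scaling step just finds the largest size that fits inside Twitter’s 2:1 frame without distorting the scrap. A sketch of that math (the jimp lines in comments assume a loaded Jimp image called `scrap`):

```javascript
const TARGET_W = 1024; // Twitter's 2:1 image frame
const TARGET_H = 512;

// Largest size that fits inside the frame while preserving the
// scrap's own aspect ratio.
function fitWithin(w, h, maxW = TARGET_W, maxH = TARGET_H) {
  const scale = Math.min(maxW / w, maxH / h);
  return { w: Math.round(w * scale), h: Math.round(h * scale) };
}

// With jimp, roughly:
// const { w, h } = fitWithin(scrap.bitmap.width, scrap.bitmap.height);
// scrap.resize(w, h);
// const base64 = await scrap.getBase64Async(Jimp.MIME_PNG);
```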
I experimented with annotating the image with text. I ultimately decided against it as it seems to visually pollute the ephemera scrap whilst repeating text that’s right above the scrap in a more accessible format.
Speaking of accessibility, I can’t get alt-text to work. Alt-text annotates images so that people with low or no vision have a description read out to them. I’ve kept my commented-out attempt in case anyone is able to help.
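For anyone who wants to dig in: alt text is supposed to go through a third endpoint, media/metadata/create, after the upload and before the status update. The payload shape below matches Twitter’s docs as I understand them; whether it actually gets sent correctly is exactly the part I couldn’t get working:

```javascript
// Payload for Twitter's media/metadata/create endpoint, which attaches
// alt text to an already-uploaded media id.
function altTextPayload(mediaId, description) {
  return { media_id: mediaId, alt_text: { text: description } };
}

// e.g. T.post('media/metadata/create',
//             altTextPayload(id, 'A faded concert ticket stub'));
```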
The alt-text struggle has me thinking: how many people are actually adding alt-text/descriptions to their photos on Twitter, Instagram, and elsewhere? I bet not many. This bot (among others) already does what I had in mind as a future project.
What about a bot that replies to photos with unsolicited image descriptions? Ones that are hilariously bad, like if I tell the bot that every photo it receives (when it could be of humans, a chair, whatever) is of birds. My recent machine learning experiments and the release of new tools like Lobe also have me thinking about making photos from text or drawings from images.
Then again, is this the time to goof off? I’m writing this as all three major TV networks cut their feeds away from the U.S. President making false claims about voter fraud. Last night, his supporters were chanting both “count the votes” and “stop the counting” in Arizona and Michigan respectively.
This election is tight, I think, not because of poor Democratic strategy but because of effective disinformation campaigns. I feel obligated to at least try designing for this if I’m going to goof off with it. My Twitter experiments won’t make or break democracy but I should at least think about tools to bridge the gap between echo chambers and reality.
¹ Although, let me know if you have a better idea.