Posts

Starting to break out from the walled garden

The AOL walled garden in the late ’90s. Image from https://invisibleup.neocities.org/articles/4/

When I was at college, I had a night job in a call centre, providing tech support to AOL’s UK customers. It was the late nineties, and although AOL was never as popular in Europe as it was in North America, the company’s vision of a proprietary walled garden giving an ‘internet-like’ experience was still a compelling option for many.

Of course AOL’s influence waned over the years, leading many of us to think that “the internet wants to be free” and that any attempt by a large corporation to supplant the internet was doomed to failure – a belief seemingly confirmed by the quick rise and fall of services like Bebo and MySpace.

Now I’m not so sure.

The 2010s have seen the aggressive expansion of social networks such as Twitter and Facebook. When you sign up to these services, you feel like you’re a new customer, but of course you’re not: you’re the product. Sophisticated algorithms suck you in to spend more time on these sites, ‘engaging’ more so that you provide more information to be targeted by ever-more tailored (and profitable) ads.

And the effects have become even more chilling as the algorithms that social networks use spill over into society and politics, encouraging us all to become more polarised, more aggressive and more needy.

There is an alternative, and thankfully there are some very smart people working on it. A return to the open web, where we share for the simple joy of sharing, without profit-maximising algorithms getting in the way.

Host your own content

The basic idea of hosting your own content is not that you pretend that social networks don’t exist, but rather that you take control of your own content, under a domain name that you control, and choose what you share with other services.

I’ve hosted my own content on this site for the past twelve years, but for shorter ‘status updates’ I have occasionally turned to social networks.

I’ve created my own ‘microblog’ at https://t.bibby.ie that I will use for short pieces of content. I will use a new service called micro.blog to share those posts to Twitter and (maybe) Facebook, but the content will always originate from my own site. If you don’t want to create your own site, micro.blog can do it for you – yes, it costs money ($5/month), but if we are to take back control of the web, we need to start thinking like customers, not products. The micro.blog service is still invite-only at the moment, but I have a few invite codes if anyone would like to check it out.

I’m grateful that there are people working on products that can help us liberate ourselves from the walled garden, and I’m hopeful there will always be a place for independence on the web, away from the toxic profit-maximising behaviour of commercial social networks.

Make our street great again

To the project team in charge of redesigning Limerick’s main street:

Please remove through traffic from O’Connell Street.

Expecting people to compete with 1 tonne+ hunks of metal dashing from one side of the city to the other is a recipe for disaster. Let’s admit that “shared space” is the “sorry/not sorry” of urban design when it facilitates traffic throughput.

Let’s be ambitious for our city centre: O’Connell Street has enough room for green spaces, playgrounds, exhibition areas, and much much more.

To attract the best to live, work and learn in our city centre requires a city centre worthy of the best.

I genuinely appreciate the effort that the project team have made to consult the public. I know we share the same hopes and dreams for our city, and I hope you can revise your plans to remove through traffic and centre our great street around our city’s most important asset: our people.

Sent as a submission to the O’Connell St project design team, 29th June 2017.

Don’t use the Force, Luke

(tl;dr: force unwrapping in Swift with ! is bad)

At our last iOS Developer meetup in Limerick, my Worst Case Scenario podcast co-host Dave Sims gave an excellent talk on Optionals in Swift. I’m hoping that Dave will put the content online some day as he really managed to provide a simple yet powerful overview of what Optionals are. He also ended up being invited to give his talk to the CocoaHeads meetup in Indianapolis via Skype – check out Worst Case Scenario Episode 36 for the details about how that came about.

Dave mentioned in his talk that force unwrapping Optionals is generally a bad idea. Xcode doesn’t help matters, though: its fix-it suggestions actively encourage programmers to force-unwrap optionals.

This is terrible advice for programmers new (or not-so-new…) to Swift. Force unwrapping a nil optional will cause a runtime crash. The whole point of optionals is to make an entire class of bug – where a nil creeps in and causes havoc down the line – impossible. Force-unwrapping throws all of this away for the dubious upside of saving a line or two (and shutting up the compiler). You can argue that the Apple documentation is pretty clear about force unwrapping being a bad idea, but I feel the tools should encourage best practice. Dave mentioned three common unwrapping techniques, each sketched in code below:

  1. if let
  2. guard let
  3. The ?? nil coalescing operator to provide a default value
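
Here’s a minimal sketch of all three (the names are invented for illustration):

let name: String? = "Luke"

// The force-unwrap pattern in question – crashes at runtime if name is nil:
// let length = name!.count

// 1. if let – the unwrapped value only exists inside the braces
if let unwrapped = name {
    print("Hello, \(unwrapped)")
}

// 2. guard let – unwrap with early exit, keeping the happy path unindented
func greet(_ name: String?) {
    guard let unwrapped = name else {
        return // remember to actually exit the scope here
    }
    print("Hello, \(unwrapped)")
}

// 3. ?? – the nil coalescing operator substitutes a default value
let displayName = name ?? "Anonymous"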

Dave also mentioned Chris Lattner’s recent appearance on the Accidental Tech Podcast, where Swift’s creator explained that the purpose of the guard statement is to allow early exit (Overcast link that jumps directly to that segment), a style he personally favours. I also like this style – it reduces the indentation of the main body of the code. You just have to remember to actually exit the scope, either by asserting or returning a default value.

Moving on

For nearly five years I’ve been running my startup Reg Point of Sale. Last week I sent an email round to all my customers to tell them that I’m shutting the business down.

Fortunately, due to the architecture of the system we developed, our customers will still be able to use their systems for the foreseeable future.

It was a tough decision to make. We didn’t take any outside investment, and we don’t owe money to anyone, but it’s still tough to shut down what really was a labour of love.

At some stage I’d like to write up a more detailed retrospective about what went right, and what went wrong. But for now, if you’re interested in more background, I did talk a bit more in detail about this on the latest episode of the Worst Case Scenario podcast that I host with David Sims and Baz Taylor. The relevant section starts about 33’40” in.

Hey Siri, turn the Christmas lights on

My fellow podcast hosts on Worst Case Scenario had both got smart home gear for Christmas. I was feeling a bit left out so I wanted to hack together a low-budget version. Here it is in action:

Now I can use Siri (or the iOS Home app) to turn on my Christmas lights from anywhere!

Ingredients:

  • Raspberry Pi
  • Relay shield from the gear we bought for the hackathon last year (it’s marked FE_SR1y)
  • Christmas lights

The relay shield has a transistor and a diode built in, so I didn’t need to do any fandaglement with a circuit – I just hooked up the + to 5V on the Pi, the – to ground, and the switching pin to one of the GPIO pins.

On the Pi, I installed the excellent Homebridge which allowed me to create a HomeKit accessory that Siri can talk to. I used the homebridge-gpio-wpi plugin to talk to the GPIO pins.
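
For the curious, the accessory ends up declared in Homebridge’s config.json. Treat this as a rough sketch from memory rather than gospel – the exact accessory name and pin key are whatever the homebridge-gpio-wpi README specifies, and the pin number here is just an example:

"accessories": [
    {
        "accessory": "GPIO",
        "name": "Christmas Lights",
        "pin": 17
    }
]

Whatever you put in "name" is what Siri listens for, so choose something you can say out loud without feeling too ridiculous.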

It’s obviously super fast on my home wifi network but it’s also surprisingly speedy going through the phone network: iPhone -> phone network -> Apple’s servers -> Apple TV -> Raspberry Pi -> lights takes about 1.3 seconds.

I am toying with the idea of creating a low-budget connected home using 433MHz RF remote controlled sockets which I could control from the Pi with a transmitter board. The other two lads on Worst Case Scenario are also expanding their systems based on the Amazon Echo – subscribe and keep up to date with how we’re getting on!

iOS – checking to see if a word with blank letters is valid using NSSet, NSArray, Core Data and SQLite

Annoying Scrabble words like azo are even more annoying when you use a blank

An interesting problem I had recently was to check whether a string was a valid word or not, comparing against a word list with over 170,000 entries.

Checking the string is easy: after loading the words into an NSArray, you can just call containsObject:

NSString *wordToFind = @"the";
BOOL wordIsValid = [wordArray containsObject:wordToFind];

The code above takes 0.0310 seconds on my iPhone 5s.

It’s even faster with an NSSet:

NSString *wordToFind = @"the";
BOOL wordIsValid = [wordSet containsObject:wordToFind];

Using an NSSet is over 3,000x faster than NSArray in this instance: 0.00001s!
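
For reference, building both collections from a bundled text file with one word per line looks something like this (the file name is illustrative):

NSString *path = [[NSBundle mainBundle] pathForResource:@"wordlist" ofType:@"txt"];
NSString *contents = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:NULL];
//one word per line, so splitting on newlines yields the full 170,000-entry list
NSArray *wordArray = [contents componentsSeparatedByString:@"\n"];
NSSet *wordSet = [NSSet setWithArray:wordArray];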

Blankety blank

What if you wanted to search to see if the word was valid using blanks, like in Scrabble? NSPredicate makes this very easy:

NSString *wordToFind = @"th?";
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"SELF LIKE %@",wordToFind];
NSArray *filteredArray = [wordArray filteredArrayUsingPredicate:predicate];

But then I looked at how long it took: 0.44 seconds. Ouch! It’s not acceptable to block the main thread for that long. There’s a similar method for NSSet:

NSString *wordToFind = @"th?";
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"SELF LIKE %@",wordToFind];
NSSet *filteredSet = [wordSet filteredSetUsingPredicate:predicate];

which took 0.47 seconds – even worse!

Benchmarking

At this point I decided to benchmark the results by running the test on 500 randomly selected words, where 5% of the letters were turned into blanks. Here’s a sample from my test word list of 500 words:

unclu?tered
succumbs
piggeries
pseu?opregnant
combat?ve

(I have no idea what ‘pseudopregnant’ means…).

Running the test with 500 different words and measuring the time it took to do each test enabled me to record the mean, minimum, maximum and standard deviation of each method. Unfortunately I had to run these tests on the iOS simulator – turns out there’s a subtle bug when repeatedly running an NSPredicate which causes a huge number of NSComparisonPredicate objects to be malloc’ed and not freed, causing the tests above to blow up due to memory pressure on my 5S.
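
The harness itself is nothing fancy – roughly along these lines, with the standard deviation computed from the collected times afterwards (testWords stands in for my 500-word sample):

NSMutableArray *times = [NSMutableArray array];
for (NSString *wordToCheck in testWords) {
    CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
    //...run the matching method under test here...
    [times addObject:@(CFAbsoluteTimeGetCurrent() - start)];
}
//KVC collection operators give the mean, minimum and maximum directly
NSNumber *mean = [times valueForKeyPath:@"@avg.self"];
NSNumber *min = [times valueForKeyPath:@"@min.self"];
NSNumber *max = [times valueForKeyPath:@"@max.self"];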

Using the simulator, here are the results of the test with filteredArrayUsingPredicate: and filteredSetUsingPredicate:

Method  Average (s)  Min (s)  Max (s)  Std. dev. (s)
filteredArrayUsingPredicate:  0.15 0.12 0.34 0.03
filteredSetUsingPredicate:  0.16  0.14  0.54  0.03

Other in-memory methods

I then tried a litany of other methods on NSSet and NSArray to see if I could come up with something faster.

Using blocks: indexOfObjectPassingTest

    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"SELF LIKE %@",wordToCheck];
    NSUInteger index = [wordArray indexOfObjectPassingTest:^BOOL(id obj, NSUInteger idx, BOOL *stop) {
        return [predicate evaluateWithObject:obj];
    }];
    if(index != NSNotFound)
    {
        //can return the found word using index
    }

Prefiltering NSArray

This method pre-filters the word list by narrowing it down to only words which have the same first letter (unless we have a blank as the first letter) as the word we’re trying to match.

    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"SELF LIKE %@",wordToCheck];
    
    NSString *firstLetter = [wordToCheck substringToIndex:1];
    NSArray *smallerArray;
    //prefiltering on first letter will only work if first letter is not blank
    if([firstLetter isEqualToString:@"?"])
    {
        smallerArray = wordArray;
    }
    else
    {
        NSPredicate *firstLetterPredicate = [NSPredicate predicateWithFormat:@"SELF BEGINSWITH %@",firstLetter];
        smallerArray = [wordArray filteredArrayUsingPredicate:firstLetterPredicate];
    }
    
    NSArray *filteredArray = [smallerArray filteredArrayUsingPredicate:predicate];
    if([filteredArray count] > 0)
    {
        //return matched word here if needed
    }

Prefiltering NSSet

    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"SELF LIKE %@",wordToCheck];
    
    NSString *firstLetter = [wordToCheck substringToIndex:1];
    //prefiltering on first letter will only work if first letter is not blank
    NSSet *smallerSet;
    if([firstLetter isEqualToString:@"?"])
    {
        smallerSet = wordSet;
    }
    else
    {
        NSPredicate *firstLetterPredicate = [NSPredicate predicateWithFormat:@"SELF BEGINSWITH %@",firstLetter];
        smallerSet = [wordSet filteredSetUsingPredicate:firstLetterPredicate];
    }
    
    NSSet *filteredSet = [smallerSet filteredSetUsingPredicate:predicate];
    if([filteredSet count] > 0)
    {
        //return matched word
    }

Using a compound predicate on NSArray

What if we combined the two methods above into a compound predicate – checking that the first letter matches AND that the word is LIKE the word we’re searching for? This might result in a faster match, as we can avoid the expensive LIKE comparison when the first letters don’t match:

    NSString *firstLetter = [wordToCheck substringToIndex:1];
    NSPredicate *predicate;
    //if first letter is blank we have to fall back to using the normal predicate
    if([firstLetter isEqualToString:@"?"])
    {
        predicate = [NSPredicate predicateWithFormat:@"SELF LIKE %@",wordToCheck];
    }
    else
    {
        predicate = [NSPredicate predicateWithFormat:@"SELF BEGINSWITH %@ AND SELF LIKE %@",firstLetter,wordToCheck];
    }
    
    NSArray *filteredArray = [wordArray filteredArrayUsingPredicate:predicate];
    if([filteredArray count] > 0)
    {
        //retrieve matched word here
    }

Using a compound predicate on NSSet

    NSString *firstLetter = [wordToCheck substringToIndex:1];
    NSPredicate *predicate;
    //if first letter is blank we have to fall back to using normal predicate
    if([firstLetter isEqualToString:@"?"])
    {
        predicate = [NSPredicate predicateWithFormat:@"SELF LIKE %@",wordToCheck];
    }
    else
    {
        predicate = [NSPredicate predicateWithFormat:@"SELF BEGINSWITH %@ AND SELF LIKE %@",firstLetter,wordToCheck];
    }
    
    NSSet *filteredSet = [wordSet filteredSetUsingPredicate:predicate];
    if([filteredSet count] > 0)
    {
        //match word here
    }

NSArray concurrent enumeration

NSArray also has a method:

- (void)enumerateObjectsWithOptions:(NSEnumerationOptions)opts 
                         usingBlock:(void (^)(ObjectType obj, NSUInteger idx, BOOL *stop))block;

which allows concurrent enumeration when you pass the NSEnumerationConcurrent option:

    __block NSString *stringToReturn;
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"SELF LIKE %@",wordToCheck];
    [wordArray enumerateObjectsWithOptions:NSEnumerationConcurrent
                                usingBlock:^(id obj, NSUInteger idx, BOOL *stop)  {
                                    if([predicate evaluateWithObject:obj])
                                    {
                                        stringToReturn = obj;
                                        *stop = YES;
                                    }
                                }];
    if(stringToReturn)
    {
        //matched word is in stringToReturn
    }

NSSet concurrent enumeration

NSSet offers an almost identical method, except there is no index parameter as NSSets are unordered:

    __block NSString *stringToReturn;
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"SELF LIKE %@",wordToCheck];
    [wordSet enumerateObjectsWithOptions:NSEnumerationConcurrent
                              usingBlock:^(id obj, BOOL *stop)  {
                                  if([predicate evaluateWithObject:obj])
                                  {
                                      stringToReturn = obj;
                                      *stop = YES;
                                  }
                              }];
    if(stringToReturn)
    {
        //matched word is in stringToReturn
    }

Results

Method  Average (s)  Min (s)  Max (s)  Std. dev. (s)
indexOfObjectPassingTest  0.07 0.0001 0.31 0.04
Prefiltering NSArray  0.07 0.04 0.24 0.03
Prefiltering NSSet 0.08 0.05 0.41 0.03
NSArray compound predicate  0.09 0.06 0.33 0.03
NSSet compound predicate 0.10 0.08 0.26 0.03
NSArray concurrent enumeration  0.10 0.004 0.36 0.06
NSSet concurrent enumeration 0.10 0.01 0.31 0.06

We have managed to do a bit better with some of these methods, but we’re still nowhere near acceptable performance. Maybe Core Data has some under-the-hood optimisations that can help us:

Core Data

I set up a simple Core Data stack, with one entity “Word” which had one attribute “word” (naming things is hard…). Here’s the method to retrieve a matching word in my data controller:

-(Word *)wordMatchingString:(NSString *)stringToMatch
{
    Word *wordToReturn;
    NSFetchRequest *request = [[NSFetchRequest alloc]init];
    //look up the entity description for the Word entity
    NSEntityDescription *e = [[[persistenceController mom]entitiesByName]objectForKey:@"Word"];
    //set the entity description to the fetch request
    [request setEntity:e];
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"word LIKE %@",stringToMatch];
    [request setPredicate:predicate];
    //limit to one result
    [request setFetchLimit:1];
    
    NSError *error;
    //execute the fetch request and store it in an array
    NSArray *result = [[persistenceController mainContext] executeFetchRequest:request error:&error];
    //if not successful, throw an exception with the error
    if(!result)
    {
        [NSException raise:@"Fetch failed" format:@"Reason: %@",[error localizedDescription]];
    }
    
    //return the only object in the array
    wordToReturn = [result lastObject];
    return wordToReturn;
}

And here’s the method to match the word:

    Word *foundWord = [[WSDataController sharedController]wordMatchingString:wordToCheck];
    if(foundWord)
    {
        //the word is valid
    }

The results for this were all over the place:

Method  Average (s)  Min (s)  Max (s)  Std. dev. (s)
Core Data  0.10 0.001 0.76 0.06

I had high hopes for Core Data but it’s just not fast or consistent enough for this application.

Descent into SQLite

I was about to give up at this point. I asked Dave Sims if he had any advice and he suggested implementing a Directed Acyclic Word Graph (DAWG) which would be very efficient for finding matches – despite the opportunity to make “yo dawg” jokes I reckoned it might be a bit above my pay grade for some weekend pottering.

I’d read about some iOS developers preferring to deal directly with SQLite for their persistent storage, rather than using Core Data (which uses SQLite as one of its storage options). Relishing the opportunity to type in all caps, I followed a tutorial to get a basic SQLite setup going (the tutorial has a small error in it, which Xcode will catch with a warning; the solution is to replace the offending line with if (sqlite3_step(compiledStatement) == SQLITE_DONE) { ). Here’s my SQLite query:

    //SQLite uses underscores as the single character wildcard
    NSString *underscoreString = [wordToCheck stringByReplacingOccurrencesOfString:@"?" withString:@"_"];
    NSString *query = [NSString stringWithFormat:@"SELECT word FROM words WHERE word LIKE '%@'",underscoreString];
    NSArray *resultsArray = [[NSArray alloc] initWithArray:[self.dbManager loadDataFromDB:query]];
    if([resultsArray count]>0)
    {
        //the word is valid
    }

This gave the following results:

Method  Average (s)  Min (s)  Max (s)  Std. dev. (s)
SQLite 0.017 0.0014 0.056 0.004

Success!

Using SQLite not only resulted in faster matching, it was also much more reliable than any of the other methods, with a very small standard deviation. Obviously matching a string in an array of 170,000 strings is an edge case, and for most cases any of the other methods would have sufficed.
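
As an aside, if you’d rather skip the tutorial’s wrapper class entirely, the same check against the raw sqlite3 C API looks roughly like this (dbPath, pointing at the words database file, is an assumption). Binding the wildcard string as a parameter also sidesteps the quoting worries of the stringWithFormat: approach above:

#import <sqlite3.h>

sqlite3 *db;
sqlite3_stmt *statement;
BOOL wordIsValid = NO;
if (sqlite3_open([dbPath UTF8String], &db) == SQLITE_OK) {
    const char *sql = "SELECT word FROM words WHERE word LIKE ? LIMIT 1";
    if (sqlite3_prepare_v2(db, sql, -1, &statement, NULL) == SQLITE_OK) {
        //bind the underscore-wildcarded word to the ? placeholder
        sqlite3_bind_text(statement, 1, [underscoreString UTF8String], -1, SQLITE_TRANSIENT);
        //SQLITE_ROW means at least one match was found
        wordIsValid = (sqlite3_step(statement) == SQLITE_ROW);
        sqlite3_finalize(statement);
    }
    sqlite3_close(db);
}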

It’s also worth noting that I was able to run the Core Data and SQLite tests on my own iPhone 5S as they do not suffer the same NSPredicate memory leak as the other methods. Here are the results on the phone:

Method  Average (s)  Min (s)  Max (s)  Std. dev. (s)
Core Data (iPhone 5S) 0.35 0.002 0.78 0.19
SQLite (iPhone 5S) 0.037 0.036 0.042 0.0006

Full results

Method  Average (s)  Min (s)  Max (s)  Std. dev. (s)
filteredArrayUsingPredicate:  0.15 0.12 0.34 0.03
filteredSetUsingPredicate:  0.16  0.14  0.54  0.03
indexOfObjectPassingTest  0.07 0.0001 0.31 0.04
Prefiltering NSArray  0.07 0.04 0.24 0.03
Prefiltering NSSet 0.08 0.05 0.41 0.03
NSArray compound predicate  0.09 0.06 0.33 0.03
NSSet compound predicate 0.10 0.08 0.26 0.03
NSArray concurrent enumeration  0.10 0.004 0.36 0.06
NSSet concurrent enumeration 0.10 0.01 0.31 0.06
Core Data  0.10 0.001 0.76 0.06
SQLite 0.017 0.0014 0.056 0.004
Core Data (iPhone 5S) 0.35 0.002 0.78 0.19
SQLite (iPhone 5S) 0.037 0.036 0.042 0.0006

All hail the new ecosystem wars

Yesterday Google announced a new phone. This event wasn’t unusual – Google have announced many phones in their flagship Nexus line in the past – however this time Google proudly announced that they had designed their own phone, rather than commissioning it from a third-party manufacturer. Now Apple is not the only player who claims to control ‘both the hardware and the software’. Although I develop for Apple platforms, I thought Google’s announcement was intriguing, and if we really are at the start of a new phase in the ecosystem war, it could be really good for pushing the smartphone and related devices forward.

As an aside – we normally discuss events like yesterday’s Google announcement of new phones and other products on our Worst Case Scenario podcast with Baz Taylor and Dave Sims, and on Atlantic 302 with Pat Carroll. Yesterday’s announcement intrigued me, and I wanted to jot down some thoughts ahead of recording our next episodes. So if you think any of the reflections below are wide of the mark, I’d encourage you to take a listen to Worst Case Scenario or Atlantic 302 – chances are that either Baz and Dave or Pat will have picked up on any of my spoofing! We’ll be releasing episodes soon that discuss the Google announcement from a tech perspective and a business strategy perspective respectively; I’ll update this article with links when they drop.

The phone isn’t impressive, but the future might be

On first look, the new Google Pixel phone is embarrassingly similar in design to the iPhone 6/6S (granted, I would say the same about the iPhone 7, but c’mon Google, let’s bring a little design innovation here) but if this is a real push by Google to design phones then it makes sense to start out conservatively, and expand over time. If Google are really serious about this, they could bring some much-needed competition to Apple at the high-end of the smartphone market as they develop future models.

The mid-market is there for the taking

The new Google Pixel phone is identically priced to the iPhone in almost all markets. For many people, in many markets, the pricing of the new iPhone and Google Pixel is far too high. We’re also at the stage where, thanks to large leaps in CPU/GPU/storage speed in the last few years, users don’t necessarily need the latest and greatest technology to have a great smartphone experience. If Google are smart, they will continue to innovate at the high end, but will bring in a line of mid-range smartphones over time. Apple half-heartedly addresses this market with the iPhone SE (a great phone, and Apple’s first mid-market device that isn’t just an older model kept on sale) but is pretty obviously chasing profit margins, not market share. Obviously there are other manufacturers competing at both the mid and the high end of the smartphone market, but the fragmentation of the market, combined with the tendency of carriers and manufacturers to abandon phone models and not deliver software updates, means that there is a real opportunity for Google here.

Apple’s early lead in the connected home market is toast

It frustrates me that Apple came out with a wonderful product for wireless audio a whole twelve years ago with the launch of the AirPort Express. This device allowed you to wirelessly play music from your Mac (and later, when it was released, your iPhone) to your stereo. It gained the ability to play to multiple AirPorts back in 2006, only a year after Sonos had launched their first product (which was expensive and buggy at the start). Since then, Apple have virtually ignored this feature, and certainly failed to build other ‘connected home’ features on top of it. The AirPort Express was last updated in 2012, and doesn’t support modern WiFi standards like 802.11ac (introduced with the iPhone 6). Even more inexcusably, Apple never bothered bringing multi-zone AirPlay to iPhones and iPads, despite the fact that modern iPhones have vastly more power than the Macs that supported it back in 2006. Google have launched a new wifi router which uses modern mesh networking techniques (like Eero and Ubiquiti), as well as expanding their Chromecast range, which, despite not having native iOS integration, is a better buy these days than the four-year-old AirPort Express.

How Google might mess this up

All is not lost for Apple, however. Google have a bad reputation for throwing things at the wall and seeing what sticks, abandoning products and allowing overlapping products to coexist (the mess that is Hangouts, Voice and Allo being a prime example). It’s also interesting that these products were released by Google, as opposed to the Nest subdivision of Alphabet that was supposed to focus on consumer hardware. Google may not be able to overcome its creepy tendency to think of its consumer hardware devices as simply another tool to slurp up information about its users. Apple has a stubbornness that can be a strength when it comes to iterating on products and technologies until they are good enough, especially in the hardware space.

The victor in Google v Apple might be us

Ultimately, competition is good. I have never been tempted by any of the flagship Android offerings over the years, including the latest Google Pixel, but that may change in the future. Apple as a company needs good competition to spur them on. In particular I’m hoping that Google’s WiFi and Chromecast products will force Apple to start competing in this market again. An interesting possibility is that, with Google becoming an integrated phone manufacturer (despite protestations that the hardware and Android teams are completely separate), other Android manufacturers might be tempted to go all in on a third mobile phone operating system – a move that would encourage open standards and might spur even further innovation.

How I edit podcasts

I do much of the post-production editing for the two podcasts that I’m on: Worst Case Scenario with Baz Taylor and David Sims, and Atlantic 302 with Pat Carroll. Some people have been kind enough to compliment the production quality of the podcasts but I’m still frustrated when I hear how good some of my favourite podcasts sound. I’m still learning about this stuff, and I know I have a long way to go, but I thought I would document my current method. Critiques welcome!

I’m indebted to Jason Snell of SixColors who wrote a very detailed guide on how he edits his podcasts. Much of this guide is based on Jason’s recommendations. Like Jason, I use Logic Pro X for editing, specifically for its Strip Silence function which isn’t available in GarageBand. I add an extra pre-production step of noise reduction in Audacity, as an unacceptable level of hiss was creeping into the recordings without this.

Recording

We sometimes record in person, but most of our episodes are recorded remotely. We sit in our own houses and chat over Skype. We don’t actually record the Skype call, instead we each record our own audio locally and then I mix it together later. We have a pre-recording ritual which consists of three steps:

  1. I count us in to hit Record in GarageBand. This means that the audio tracks will start at roughly the same point.
  2. I ask for five seconds of silence at the start of recording. This provides a convenient place at the start of each track to get a sample of background noise for noise reduction.
  3. I turn up the volume on my headphones, hold the headphones to my mic, and ask each host to talk into their microphone separately. This marks a point where the same audio is appearing both on my audio track (through the headphones) and on the host’s track, identical audio which allows me to sync up the tracks. Of course it’s not truly simultaneous as Skype introduces latency, but it’s good enough.

Sharing

The other hosts then send me their GarageBand files. Baz and Dave use the native macOS Mail app, which has a feature called Mail Drop for sending large files. For Atlantic 302, Pat and I have a shared Dropbox folder into which he copies his file. I mention this because the files involved are large: Atlantic 302 is a 30-minute show and our separate audio files are about 270MB. Editing podcasts takes up a lot of disk space! (As a side note, I save my working podcast files in a subfolder of Downloads, and exclude the Downloads folder from Time Machine, to prevent backups getting too big.)

Export

GarageBand has a default voice recording setting, which adds lots of effects such as reverb. The GarageBand interface is a little confusing, so I created a blank ‘patch’ with no effects whatsoever. I apply this patch in the Library before exporting. I’ve uploaded this blank patch in case you want to use it: it needs to be copied to
~/Music/Audio Music Apps/Patches
– you’ll need to create this folder if it doesn’t exist already.

I export each GarageBand file using the Share > Export Song to Disk menu command, and save as an uncompressed 16-bit AIFF file. Rather annoyingly, this step creates a stereo AIFF file even though the recorded track from the microphone is mono. I haven’t bothered to figure out how to change this, so I just continue the rest of my workflow in stereo, and then convert to a mono MP3 file at the end.

Noise reduction

Our recordings tend to contain some hiss. If this hiss isn’t removed, it can be amplified at a later stage when applying compression. I use Audacity’s noise reduction function to pretty aggressively reduce noise in the AIFF files before dragging them in to Logic. I open a blank audio file, hit Cmd-Shift-I and select the AIFF for editing. I find the section of the audio at the start where everyone is silent, and select most of it, checking that there isn’t any breathing or noise that isn’t present throughout the whole recording. Then I select Effect > Noise Reduction and click Get Noise Profile to sample the background noise. I deselect the audio previously selected (important to do this, or the next step will only reduce the noise on the selection) and select Effect > Noise Reduction again. You’ll notice there are lots of parameters to adjust; here are the settings I use, chosen by trial and error:

Noise reduction (dB): 24; Sensitivity: 11.50; Frequency smoothing (bands): 10. The last option, Noise:, should be set to Reduce, not Residue.

This takes a long time, typically nearly two minutes on my 2015 MacBook Pro. I use this time to start exporting the other tracks from GarageBand, and dragging completed files into Logic Pro X. Once the noise reduction has finished, I choose File > Export Audio, and export as another 16-bit AIFF file.

Logic Pro X

Logic Pro X costs €200 in the Irish Mac App Store. I know this is a huge amount – which is why I edited the first few episodes of Worst Case Scenario in Audacity. But Logic has one feature, Strip Silence, which saves a huge amount of time; I became much quicker at editing once I switched.

To start, create a project in Logic. It doesn’t matter what input settings you use, as you will be importing the previously-recorded tracks. Drag your tracks in – you can even drag them all in one go; if you do, just make sure that you ‘create separate tracks’ in the dialog. Once you have done this, you can delete the original empty track that Logic added for you.

Syncing audio tracks

By now you should have your three audio tracks in Logic. To sync them up, I look for the audio at the start of the file where each host speaks into my mic via my headphones (described earlier). I drag each track left and right around this point until the audio is roughly in sync. I find it helpful to zoom in (by pinching the trackpad) to do this; you can sometimes align the tracks visually using the waveforms.

Audio effects and EQ

The Inspector sidebar in Logic, showing the Compressor, EQ and Noise Gate applied to Dave’s audio (left) with the master Compressor on the right

I apply previously saved patches for my co-hosts. You can download these patches: Thomas, Dave, Baz, Pat (you add these to the same folder as the blank patch explained above – Logic and GarageBand use the same plugins in the same folders). These patches contain an EQ (to emphasise and de-emphasise different sound frequencies), a compressor (to make quiet noises louder) and a noise gate (to ignore sound below a certain level). They differ based on a few factors: Baz, Pat and I use the same Pyle dynamic microphone, but Baz and I both have fairly quiet voices, so they need boosting. Dave uses a Rode condenser microphone, which provides a lovely loud sound (the Pyle mics are very quiet) but tends to pick up more background noise. For my own patch, I added an extra DeEsser plug-in, as my ‘s’ sound is very hissy. Finally, I add a master compressor on the output (it’s listed under Dynamics > Compressor) which affects all tracks; I turn up the compression to 3 and leave everything else at the default.

I try to avoid any clipping on the output meter (any positive number is clipping and will be shown in red), and will tweak the individual track compressor if someone’s audio is clipping, normally using the Make Up knob.

Strip Silence

Strip silence has been applied on Dave and Baz’s tracks, but not on mine

Now we get to where Logic earns its €200: the Strip Silence feature. This essentially translates a single continuous audio track into ‘blocks’ where someone is actually talking. This makes editing so much easier. The keyboard shortcut for Strip Silence is Ctrl-X (make sure you have the track highlighted), the settings I use are a tweaked version of Marco Arment’s recommendations – 2%/0.6/0.2/0.3 for those of us using dynamic microphones. I change the threshold to 4% for Dave’s microphone: as a condenser it picks up more background noise. As Marco points out, annoyingly Logic doesn’t remember your choices so you have to manually enter these in for every editing session.

Select following

At this point I delete all the chatter before the show starts, and I start editing proper. The one disadvantage of using Strip Silence is that you are left with a lot of empty sections where nobody is talking. This can be noticeable to the listener (depending on the background noise), so a lot of my editing work involves moving audio ‘back’ to close these gaps. When you are faced with a period of silence, select the next ‘block’ after it, then use Shift-F to select all the blocks of audio that follow. You can then drag the audio back so that it cuts in just after the last person has finished talking.

Editing

As well as removing silences, you’ll often want to cut bits out. In the early days of Worst Case Scenario, I was obsessive about cutting every last ‘um’ and ‘ah’, especially with my own voice. I’ve calmed down a bit now that I’ve got used to listening to my own voice (this took quite a while) and realised that these artefacts appear in everyday speech and the brain is incredibly good at filtering them out.

I do remove any bumps, clicks or other background noise that I think might distract the listener. Often the noise is in a single block, so I can just click and delete. Sometimes you need to divide up a block of audio; the fastest way to do this is to set Logic’s secondary editing tool to be the marquee tool (see graphic). The primary tool should be the pointer tool. The secondary editing tool is invoked by holding down Cmd. You can use the marquee tool to drag over a selection of audio you want to delete, or click once and hit backspace to split a block into two pieces.

Intros

On Atlantic 302 we use a spoken word intro and outro that Pat and I take turns recording at the same time as each episode. For Worst Case Scenario, we use a short noise that Dave created while messing around with cfxr, a little app that creates various sound effects. I turn the input volume down to -13dB because otherwise the master compressor will make it too loud.

Export to iTunes

I prefer to export the song to iTunes as an AIFF (File > Share > Song to iTunes). The only metadata I add is the track name. In iTunes I have the import format set to 64kbps mono (iTunes > Preferences > CD Import Settings > MP3 Encoder > Custom > 128kbps stereo bit rate + mono), so I just convert to MP3 after the AIFF is imported from Logic. Finally, I right-click the MP3 and choose Show In Finder to locate the file.

Announcing Direach, a new WordPress theme aimed at readers

As I was discussing on the last episode of Worst Case Scenario, I’m a bit of a luddite when it comes to the web. I like reading, and I dislike anything that distracts me from that task. Fortunately the Safari web browser comes with a great Reader mode which strips all the cruft from a page and simply displays the text of the main article, nicely formatted for reading (Firefox has this feature too). I use it constantly.

I decided a few years ago that I wanted my own website to be as readable as the special Reader mode in my browser. Every four months or so I would check to see if anyone had created a WordPress theme that could do the job, but I was never able to find anything suitable. So I wrote my own!

Direach (from díreach in Irish, meaning direct) is the fruit of my efforts, and it is the theme running on this site today. It is my effort to create a WordPress theme that is reader-friendly and accessible.

Typography

Direach doesn’t download a custom font; it uses the Georgia typeface, a serif font with a relatively large x-height. Created by Matthew Carter for Microsoft specifically for screen use, it is the same font used by Safari’s Reader mode and is installed on the vast majority of computers, phones and tablets. A default serif font is declared for operating systems that do not ship with Georgia. The default paragraph font on most browsers will render at 18px, increasing to 20px on wider screens. Elements that are not part of the main body of the page have a reduced font size.

Reduced clutter

Almost all WordPress sites have a header at the top containing the site title, subtitle, and the navigation menu. This can take up a large proportion of the screen, especially if the reader is on a mobile device. Direach shifts this content down to the bottom of the article when it is only showing a single post or page. For index pages and the home page, this content is placed at the top. All content is displayed in a single column, apart from two columns of ‘widgets’ at the bottom of the front page. On single item pages/posts, the site title is appended to the article heading.
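
In WordPress template terms, the logic boils down to a conditional along these lines (a simplified sketch – the template part names here are invented, and the real structure in the GitHub repo differs):

<?php
if ( is_singular() ) {
    //single posts and pages: content first, site title and nav afterwards
    get_template_part( 'template-parts/content' );
    get_template_part( 'template-parts/masthead' );
} else {
    //index pages and the home page: masthead on top as usual
    get_template_part( 'template-parts/masthead' );
    get_template_part( 'template-parts/content' );
}
?>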

Small and fast

Not only does Direach not load any external fonts, it doesn’t add any JavaScript either. The markup is relatively clean. Even on my fairly slow US-based shared host, it loads relatively quickly.

Accessibility

Direach is based on the Underscores base theme by Automattic, and so it has excellent support for screen readers.

Future developments

Direach is licensed as GPL v2 or later and the source is available on GitHub. There are a few issues I still need to address: the image in the masthead does not deliver a retina size for devices with high-density screens – despite poking around in the source of get_header_image_tag, I couldn’t find a way to get it to emit srcset attributes, even though this functionality was added in version 4.4.0. I’m not 100% happy with the final design and there are still a few rough edges.

I may submit this theme to wordpress.org in the future after a bit of dogfooding, although I think the design may break one or two of WordPress’s theme guidelines.

Reflections

It took quite a bit of work getting Direach to this state, in particular to get the theme working with some of the weird cases in the WordPress test data set. At various stages I worried whether this theme is too stark – almost Brutalist in nature. I also worry that it gives this site a slightly-too-authoritative tone. But overall I’m happy with the results.

Installing

To install this theme on your own site, go to Direach’s home on GitHub and under Clone or Download choose Download ZIP (your browser may automatically decompress the theme, in which case you will have to recompress it). Upload this theme to your WordPress install via Appearance > Themes > Add New > Upload Theme. You can upload a custom image beside the site title under the Appearance tab of the admin interface.

Three podcasters fix a WordPress plugin bug (with chat logs)

On Worst Case Scenario, the podcast I host with Baz Taylor and Dave Sims, we use a WordPress plugin called Seriously Simple Podcasting to host our podcast feed. It’s a fantastic minimalist plugin with little bloat and works very well for us.

We also installed an add-on called Seriously Simple Stats which would give us download counts for each episode of the podcast.

However, the plugin wasn’t showing the client information properly, and this was annoying Dave:

[screenshot]
The three of us poked at it at various points during the day, and despite not being WordPress experts (and in my case, being very far from a PHP expert), we managed to get it working.

[screenshot]

Podcast app user agent strings

Here are some user agent strings reported by various podcast clients:

  • “AppleCoreMedia/1.0.0.13F69 (iPhone; U; CPU OS 9_3_2 like Mac OS X; en_ie)” – native iOS podcast app, amongst other things
  • “Pocket Casts” – a podcast app that is particularly popular on Android
  • “Overcast/1.0 Podcast Sync (x subscribers; feed-id=y; +http://overcast.fm/)” – from Marco Arment’s Overcast app, which helpfully tells you how many subscribers you have in the agent string
  • “iTunes/12.4.2 (Macintosh; OS X 10.11.5) AppleWebKit/601.6.17” – downloads from the Mac version of iTunes

Here’s the bit of the plugin that detects which client you’re using:

if ( stripos( 'itunes', $user_agent ) !== false ) {
    $referrer = 'itunes';
}

Hm. There was no entry for AppleCoreMedia, which was why downloads from the native podcast app were being detected as ‘Other’. But we tried downloading episodes from iTunes on the Mac and that was still recorded as Other.

In fact, apart from plays from the website, every client was being recorded as ‘Other’ – except for Pocket Casts. What was going on?

When your needle and your haystack are the same size

The stripos($haystack, $needle) function in PHP looks to see if the string $needle is in the string $haystack; if it is, it returns an integer with the position of $needle in $haystack, otherwise it returns false.

Absentmindedly, I typed ‘php’ in a Mac terminal expecting to get an error, but instead got a blank prompt, indicating I could type PHP code enclosed in <?php ?> and hit Ctrl-D to make it run (why does OS X ship with PHP? I have no idea). I tried a smaller test case.

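Something like this, using the iTunes user agent from the list above:

<?php
$user_agent = 'iTunes/12.4.2 (Macintosh; OS X 10.11.5) AppleWebKit/601.6.17';
//arguments the wrong way round, as in the plugin – the needle is bigger than the haystack
var_dump( stripos( 'itunes', $user_agent ) ); // bool(false)
//arguments the right way round
var_dump( stripos( $user_agent, 'itunes' ) ); // int(0)
?>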

Those of you reading have probably had the aha! moment by now, but it took a bit of probing from Baz before I got it:

[screenshot]

And that was also the reason why Pocket Casts was showing up correctly – because the entirety of the user agent string is “Pocket Casts”! Anyhow, swapping the arguments around to read

if ( stripos( $user_agent, 'itunes' ) !== false ) {
    $referrer = 'itunes';
}

and adding AppleCoreMedia to the list got everything working fine. Well, almost.

Double hits from the iOS Podcasts app

We then hit another problem:

[screenshot]

Turns out any downloads from the iOS Podcasts app were being counted twice. It seems like Podcasts first sends a HEAD request for the mp3 file, presumably to get size information so it can populate the progress indicator, with the user agent “Podcasts/2.4”. The app then sends a normal GET request for the mp3 file with the AppleCoreMedia user agent mentioned earlier.

[screenshot]

Being lazy I just added a return statement if we detected the Podcasts user agent:

if ( stripos( $user_agent, 'podcasts/' ) !== false ) {
    return;
}

I submitted it as a pull request and the maintainer merged it within a few hours. It was fun chatting away with Baz and Dave on iMessage while we tried to fix it.