
What is Live Closed Captioning?

By Take1 / February 3rd 2015


Ever wondered how the captions at the bottom of your screen get there so quickly in live TV? What wizardry is taking place that allows you to read what’s said a mere second after it has been said?

Well, that ‘wizardry’ is live captioning. It’s all about the verbatim transcription of speech into text as it happens. For many, this conjures an image of someone tucked away in a news studio frantically typing up the words spoken by the presenter.

This isn’t quite the case…

A stenographer at work

What’s most likely happening is that a stenographer is at work transcribing the dialogue. They use a shorthand keyboard and can often transcribe up to 240 words per minute (the record is 375 wpm!). From there, software translates the shorthand into written English, which is then formatted as a caption and encoded into the broadcast signal as the programme airs. A viewer will usually see these captions two or three seconds after the words are spoken – not ideal, but better than no caption at all.
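As a rough illustration of that formatting step – and only an illustration, since real broadcast systems also handle timing, positioning, colour and the signal encoding itself – here’s a minimal sketch of breaking translated text into short, screen-sized caption lines. The 32-character line length and two-line caption limit are assumptions chosen for the example:

```python
import textwrap

def format_captions(text, max_chars_per_line=32, max_lines=2):
    """Break a chunk of transcribed text into screen-sized captions.

    This only shows the idea of wrapping text into short lines,
    a couple at a time; real caption encoding is far more involved.
    """
    lines = textwrap.wrap(text, width=max_chars_per_line)
    return [lines[i:i + max_lines] for i in range(0, len(lines), max_lines)]

for caption in format_captions("Good evening and welcome to tonight's live coverage."):
    print(" / ".join(caption))
```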

Stenographers are highly trained, and the quality of the captions they create depends on both their skill level and the time they have to prepare. That preparation involves entering names or words that are likely to appear in the dialogue into the software’s ‘dictionary’. They’re also able to create ‘shortforms’ in advance – phrases that are likely to be said during the broadcast – allowing them to input several words with far fewer keystrokes.
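To picture how that preparation pays off, here’s a tiny, hypothetical sketch. Real stenography software maps whole chorded keystrokes rather than plain strings, and every entry below is made up purely for illustration:

```python
# Hypothetical pre-broadcast preparation: these are not real stenography
# strokes, just stand-ins for chorded keystrokes.
prepared_dictionary = {
    "DWNST": "Downing Street",   # a name likely to come up in the news
    "OFKOM": "Ofcom",
}
shortforms = {
    "GDEV": "Good evening and welcome to the programme.",
    "BKWX": "Now it's back to the weather.",
}

def expand(stroke: str) -> str:
    """Return the full text for a prepared stroke, or the stroke unchanged."""
    return shortforms.get(stroke) or prepared_dictionary.get(stroke, stroke)

print(expand("GDEV"))  # one stroke expands to a whole prepared phrase
```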

Voice recognition at work

Stenography isn’t the only method of live captioning, though. The other option is to use voice recognition software – software that understands what’s said and converts it to text.

The technology isn’t accurate enough to work from the broadcast audio directly. Instead, a captioner re-speaks the dialogue into a microphone in real time, in a clear, even tone, producing a similar result to a stenographer. The voice-recognition option is best suited to programmes where only one person speaks at a time, because there’s usually only one captioner repeating the dialogue into the microphone.

Both methods are susceptible to mistakes; an accuracy of around 95% is generally considered an acceptable benchmark.
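To put that 95% figure in perspective, here’s a back-of-the-envelope calculation. The 180-words-per-minute speech rate is simply an assumed example, and real subtitle-quality measures typically weight errors by severity rather than just counting them:

```python
def word_accuracy(total_words: int, errors: int) -> float:
    """Simple word-level accuracy: the share of words captioned correctly."""
    return (total_words - errors) / total_words

# At an assumed 180 words per minute, a 95% benchmark still leaves room
# for roughly nine misrendered words every minute.
print(f"{word_accuracy(total_words=180, errors=9):.0%}")  # -> 95%
```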

Live captioning isn’t restricted to television. A deaf or hard-of-hearing person may need live captioning in work meetings, university lectures, workshops or conferences. You can check out our blog on the importance of captioning here.

More recently, however, a report from Ofcom found that people who are hard of hearing are often left baffled by subtitles. The report touched on live subtitling too, stating that “… [It] entails unavoidable delays which mean that speech and subtitling cannot be completely synchronised. Errors and omissions are also not uncommon.”

So improvements are being called for, with deaf and hard-of-hearing viewers asking broadcasters to measure the quality of their live TV subtitles so that both speed and accuracy can improve.

Still, live captioning is a vital element of TV, making entertainment accessible to everyone. And while it isn’t perfect yet, efforts are being made to improve the way it works.


