Talking Localisation


When talking pictures were first introduced in the US in the late 1920s, unintended consequences abounded for a film industry that was still relatively young but had nonetheless established some generally accepted and effective ways of doing business. For the first time, carpenters couldn’t hammer while cameras rolled, and delivery trucks could no longer grind their gears or rev their engines within range of the expensive sound recording gear with which the studios were now equipped. Most studios solved the problem by filming talkies at night.

One problem that caused a sudden drop in revenue was the distribution of movies to foreign markets, which reportedly accounted for as much as a third of Hollywood studios’ income in the years leading up to the introduction of pictures with sound. For silent films distributed overseas, the need to translate title cards was a simple and cheap “fix it in post” proposition. Infrequent “intertitles” were one thing, but superimposing text over ongoing in-scene dialogue was at least 20 years away. In 1949, British producer J. Arthur Rank introduced a site-specific subtitling method using captions etched on glass; two years later, Belgian filmmakers would introduce modern subtitling by etching captions directly onto film. The prolonged absence of captions not only closed off foreign markets to films, it also rendered movies inaccessible to the deaf and hearing-impaired as soon as talking pictures became the norm.

The dubbing approach that prevailed in Italy throughout most of the rest of the century emerged in the 1930s, as Benito Mussolini fairly quickly grasped the political advantages of controlling and censoring content through taking editorial license with English-language films distributed in Italy. Fascist regimes in Spain and Germany soon followed suit.

But before opportunistic governments got involved, Hollywood studios scrambled to find their own solutions to protect their overseas revenue streams. Many silent-movie-era actors attempting to extend their careers struggled to speak English with confidence on screen; asking them to read their lines in phoneticised foreign dialogue would have been cruel and unusual punishment. The studios’ initial workaround was to translate the dialogue into various languages, bring entirely new casts onto the same sets, and reshoot entire movies in multiple languages for overseas distribution. It was a ridiculously time-consuming and costly proposition, but one the studios apparently deemed necessary, at least as a stopgap measure, to localise their content and keep foreign revenue streams flowing.

Today, localisation remains a critical budgetary line item for content owners delivering shows to diverse and transnational audiences, and it is one whose typical costs had, until recently, remained largely unchanged for a long time. The increasingly prevalent use of AI in content localisation, subtitling, and translation promises to change all of that—particularly through the controversial and ethically fraught use of imitative synthetic voices—as Jake Ward describes in his article on accessibility and localisation in this issue.

One of the most fascinating AI-powered localisation tools I’ve seen in 2024 caught my eye at NAB in April. Designed to identify potential regulatory and “cultural fit” issues and adapt content at multiple levels for international distribution, the multimodal analysis tool Spherex AI uses AI/ML to analyse content based on classified ratings for more than 200 countries, highlight scenes with cultural and audience sensitivities, and recommend risk-reducing adaptations. “With the rapid growth of streaming video and the expansion of global markets, it’s more critical than ever for content to be culturally relevant and compliant with local regulations,” says Spherex CEO and co-founder Teresa Phillips. “The machines take up all the edge cases that we as humans can’t possibly sit around and think about and turn into rules. There are a lot of rules that we have generated, but a lot of countries either are inconsistent in how they apply their own rules or their rules are unpublished or unspoken.”

When it comes to the labours and limits of localisation, the 1920s seem like much longer than a century ago.

Related Articles

Accessibility and Localisation: How AI Can Create More Accessible Content for Larger Audiences

With key streaming services such as Disney+, Amazon, and Netflix trying to drive down production costs across the board, premium content providers have spent considerable time looking at how they can develop or license content that isn't produced in English but can offer global appeal.

Synthetic Scabs Are Awful: Netflix, AI, and the M&E Industry's Ongoing Labor Struggle

Somehow, greenlighting "Joan Is Awful" has made Netflix look oddly actor-strike-simpatico, via its winking endorsement of a show that warns against a writer-less, actor-less "profits without people" media and entertainment future from whose realisation it stands to benefit.