It’s been five years since we released Flicker Free, and flickering from artificial lights is still one of the main reasons creatives download our flicker removal plugin. From music videos and reality shows to episodics on major networks, from small productions to feature-length films, we’ve seen strobing caused by LED and fluorescent lights. It happens all the time, and we’re glad our team could help fix the flicker and see those productions look their best as they get distributed to the public.
Planning a shoot so you can have control of your camera settings, light setup and color balance is still definitely the best way to film no matter what type of videos you are making. However, flickering is a difficult problem to predict and sometimes we just can’t see it happening on set. Maybe it was a light way in the background or an old fluorescent that seemed fine on the small on-set monitor but looked horrible on the 27″ monitor in the edit bay.
Of course, in a perfect world we would take the time to shoot a few minutes of test footage, check it on a full-size monitor, match the frequency of the artificial light to the frame rate of the camera, and make sure the shutter speed is a multiple or division of the AC frequency of the country we are shooting in, making absolutely sure the image looks sharp and is free of flicker. But we all know this is often not possible. In these situations, post-production tools can save the day, and there’s nothing wrong with that!
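The shutter math above is simple enough to sketch. A light on AC power flickers at twice the mains frequency (100 Hz in 50 Hz countries, 120 Hz in 60 Hz countries), so any exposure that lasts a whole number of flicker cycles averages the flicker out. A minimal sketch (the function name is ours, just for illustration):

```python
def flicker_safe_shutter_speeds(mains_hz, max_cycles=4):
    """Return shutter durations (in seconds) that span 1..max_cycles
    full flicker cycles of a light running on mains_hz AC power."""
    flicker_hz = 2 * mains_hz  # lights pulse twice per AC cycle
    return [n / flicker_hz for n in range(1, max_cycles + 1)]

# In a 50 Hz country (most of Europe, Egypt), 1/100 and 1/50 are safe:
print(flicker_safe_shutter_speeds(50))  # [0.01, 0.02, 0.03, 0.04]
# In a 60 Hz country (US, Japan east), 1/120 and 1/60 are the usual picks:
print(flicker_safe_shutter_speeds(60))
```

This is why 1/50 or 1/100 shutter speeds are the standard advice when shooting under artificial light in a 50 Hz country.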
Travel videos are a perfect example of how we sometimes need to lean on post-production plugins to deliver a high-quality finished video. Recently, Handcraft Creative co-owner Raymond Friesen shot beautiful images of the pyramids in Egypt. He was fascinated by the scenery but only had a Sony A73 and a 16-70mm lens with him. After five years of working on set with very well planned shoots, he knew the images wouldn’t be perfect but decided to film anyway. Sure enough, the result was lots of flicker from the older LED lights in the tombs. Nothing that Flicker Free couldn’t fix in post. Here’s a before and after clip:
Spontaneous filmmaking is certainly more likely to need post-production retouches, but we’ve also seen many scripted projects that needed to be rescued by Flicker Free. Filmmaker Emmanuel Tenenbaum told us about two instances where his extensive experience with short films couldn’t stop LED flicker from showing up in his footage. He purchased the plugin a few years ago for “I’m happy to see you”, and used it again to finish and distribute Two Dollars (Deux Dollars), a comedy selected by 85 festivals around the world, winner of 8 awards, broadcast on a dozen TV channels worldwide, and chosen as Vimeo Staff Pick Premiere of the week. Curious why he got flicker while filming Two Dollars? Tenenbaum talked to us about tight deadlines and production challenges in this user story!
Those are just a few examples of flicker from artificial lights that couldn’t be avoided. Our tech support team often receives footage from music videos, marketing commercials, and sports coverage, and seeing Flicker Free remove very annoying, sometimes difficult, flicker in post has been awesome. We’ve posted some other user stories on our website, so check them out! And if you have some awful flickering footage that Flicker Free helped fix, we’d love to see it and give you a shout-out on our social media channels. Email email@example.com with a link to your video clip!
The struggle of making documentary films nowadays is real. Competition is high, and budget limitations can stretch a 6-year deadline into a 10-year production. To make a movie you need money. To get the money you need decent, and sometimes edited, footage to show to funding organizations and production companies. And decent footage, well-recorded audio, and edited pieces cost money to produce. I’ve been facing this problem myself, and through my work at Digital Anarchy I discovered that an automated tool to transcribe footage can be instrumental in making small, low-budget documentary films happen.
In this interview, I talked to filmmaker Chuck Barbee to learn how Transcriptive is helping him edit faster and discussed some tips on getting started with the plugin. Barbee has been in the film and TV business for over 50 years. In 2005, after an impressive career on the commercial side of the business, he moved to California’s Southern Sierras and began producing a series of personal “passion” documentary films. His projects are very heavy on interviews, and the transcription process he had used throughout his career was no longer effective for managing his productions.
Barbee has been using Transcriptive for a month but already considers the plugin a game-changer. Read on to learn how he is using it to make a long-form documentary about the people who created what is known as “The Bakersfield Sound” in country music.
DA: You have worked on a wide variety of productions throughout your career. Besides co-producing, directing, and editing prime-time network specials and series for Lee Mendelson Productions, you also worked as Director of Photography on several independent feature films. In your opinion, how important is the use of transcripts in the editing process?
CB: Transcripts are essential for editing long-form productions because they allow producers, editors, and directors to go through the footage, familiarize themselves with the content, and choose the best bits as a team. Although interview-oriented pieces are more dependent on transcribed content, I truly believe transcripts are helpful no matter what type of motion picture production you are making.
On most of my projects, we made cassette tape copies of the interviews, then had someone manually transcribe them and print hard copies. With film projects, there was never any way to have a time reference in the transcripts unless you wanted to add it manually. With video it became easier to make time-coded transcripts, but both methods were time-consuming and relatively expensive labor-wise. This is the method I had used since the late ’60s, but the sheer volume of interviews on my current projects, and the sense that something better probably existed with today’s technology, prompted me to start looking for automated transcription solutions. That’s when I found Transcriptive.
DA: And what changed now that you are using Artificial Intelligence to transcribe your filmed interviews in Premiere Pro?
CB: I think Transcriptive is a wonderful piece of software. Of course, it is only as good as the diction of the speaker and the clarity of the recording, but the way the whole system works is perfect. I place an interview on the editing timeline, click transcribe and in about 1/3 of the time of the interview I have a digital file of the transcription, with time code references. We can then go through it, highlighting sections we want, or print a hard copy and do the same thing. Then we can open the digital version of the file in Premiere, scroll to the sections that have been highlighted, either in the digital file or the hard copy, click on a word or phrase and then immediately be at that place in the interview. It is a huge time saver and a game-changer.
The workflow has been simplified quite a bit, the transcription costs are down, and the editing process has sped up because we can search and highlight content inside of Premiere or use the transcripts to make paper copies. Our producers prefer to work from a paper copy of the interviews, so we use that TXT or RTF file to make a hard copy. However, Transcriptive can also help to reduce the number of printed materials if a team wants to do all the work digitally, which can be very effective.
DA: What makes you choose between highlighting content in the panel and using printed transcripts? Are there situations where one option works better than the other?
CB: It really depends on producer/editor preference. Some producers want a hard copy because they prefer it to working on a computer. It doesn’t matter much from an editor’s point of view, because it’s no problem to scroll through the text in Transcriptive to find the spots that have been highlighted on the hard copy. All you have to do is look at the timecode next to the highlighted parts and then scroll to that spot in Transcriptive. Highlighting in Transcriptive does mean you are tying up a workstation running Premiere to do it. If you only have one editing workstation running Premiere, it makes more sense to have someone do the highlighting on a printed hard copy, a laptop, or any other computer that isn’t running Premiere.
DA: You mentioned the AI transcription is not perfect, but you would still prefer it to paying for human transcripts or transcribing the interviews yourself. Why do you think automated transcripts are a better solution for your projects?
CB: Transcriptive is amazingly accurate, but it is also quite “literal” and will transcribe what it hears. For example, if someone named “Artie” pronounces his name “RD”, that’s what you’ll get. Also, many of our subjects have moderate to heavy accents, and that does affect accuracy. Another thing I have noticed is that when there is a clear difference between the sound of the subject and the interviewer, Transcriptive separates them quite nicely. However, when they sound alike, it can confuse them. When multiple voices speak simultaneously, Transcriptive also has trouble, but so would a human.
My team needs very accurate transcripts because we want to be able to search through 70 or more of them, looking for the keywords that matter. Still, we don’t find the transcription mistakes to be a problem. Even if you have to go through the interview when it comes back to make corrections, it is far simpler and faster than the manual method and cheaper than the human option. Here’s what we do: right after the transcripts are processed, we go through each one with the interview playing along in sync, making corrections to spelling or phrasing or whatever, especially keywords such as names of people, places, themes, etc. It doesn’t take too much time, and my tip is to do it right after the transcripts come back, while you are watching the footage to become familiar with the content.
DA: Many companies are afraid of incorporating Transcriptive into an ongoing project workflow. What was it like to use our transcription plugin on a long-form documentary film right away?
CB: We have about 70 interviews of anywhere from 30 minutes to one hour each. It is a low-budget project being done by a non-profit called “Citizens Preserving History”. Because of budget limitations, the producers were originally going to use timecode-window DVD copies of the interviews to make notes about which parts to use. They thought the cost of manually typed transcriptions was too high. But as they got into the process, they began to see that typed transcripts were going to be the only way to go. Once we learned about Transcriptive and installed it, it took only a couple of days to do all 70 interviews, and the cost, at 12 cents per minute, is small compared to manual methods.
Transcriptive is very easy to use, and it honestly took almost no time for me to figure out the workflow. Downloading and installation were simple and direct, and the tech support at Digital Anarchy is awesome. I’ve had several technical questions, and my phone calls and emails have been answered promptly by cheerful, knowledgeable people who speak my language clearly and really know what they are doing. They can certainly help quickly if people feel lost or something goes wrong, so I would say do yourself a favor and use Transcriptive on your project!
Here’s a short version of the opening tease for “The Town That Wouldn’t Die”, Episode III of Barbee’s documentary series:
Recently, an increasing number of Transcriptive users have been requesting a way to create burned-in subtitles in After Effects using SRTs from Transcriptive. That got us anarchists excited about building a Free After Effects SRT Importer for Subtitling And Captions.
Captioning videos is more important now than ever. With the growth of mobile and social media streaming, YouTube and Facebook videos are often watched without sound, and subtitles are essential to retain your audience. In addition, the Federal Communications Commission (FCC) has implemented rules for online video that require captions so people with disabilities can fully access media content and actively participate in the lives of their communities.
As a consequence, a lot of companies have style guides for their burned-in subtitles and/or want to do something more creative with the subtitles than what you get with standard 608/708 captions. I mean, how boring is white, monospaced text on a black background? After Effects users can do better.
While Premiere Pro does allow some customization of subtitles, creators get far more control in After Effects, which is often an easier place to build graphics that follow a style guide. However, After Effects doesn’t import SRT files natively, so the SRT Importer will be very useful if you don’t like Premiere’s Caption panel or need subtitles that are more ‘designed’ than standard captions. The script makes it easy to customize subtitles and bring them into Premiere Pro. Here’s how it works:
Windows: C:\Program Files\Adobe\Adobe After Effects CC 2019\Support Files\Scripts\ScriptUI Panels
Mac: /Applications/Adobe After Effects CC 2019/Scripts/ScriptUI Panels
4. Restart AE. The script will show up in After Effects under Window > Transcriptive_Caption.
5. Create a new AE project with nothing in it. Open the panel and set the parameters to match your footage (frame rate, resolution, etc). When you click Apply, it’ll ask for an SRT file. It’ll then create a Comp with the captions in it.
6. Select the text layer and open the Character panel to set the font, font size, etc. Feel free to add a drop shadow, bug, or other graphics.
7. Save that project and import the Comp into Premiere (Import the AE project and select the Comp). If you have a bunch of videos, you can run the script on each SRT file you have and you’ll end up with an AE project with a bunch of comps named to match the SRTs (currently it only supports SRT). Each comp will be named: ‘Captions: MySRT File’. Import all those comps into Premiere.
8. Drop each imported comp into the respective Premiere sequence. Double-check that the captions line up with the audio (same as you would when importing an SRT into Premiere). Then queue the different sequences in AME and render away. (And keep in mind it’s a beta and doesn’t create the black backgrounds yet.)
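If you’ve never looked inside an SRT file, the format the importer reads is very simple: each cue is an index line, a timing line in `HH:MM:SS,mmm --> HH:MM:SS,mmm` form, one or more lines of caption text, and a blank line. A minimal sketch of reading that structure (our own illustrative parser, not the importer’s actual code):

```python
import re

# A tiny two-cue SRT, in the standard SubRip layout.
SAMPLE_SRT = """\
1
00:00:01,000 --> 00:00:03,500
Welcome back to the show.

2
00:00:04,000 --> 00:00:06,250
Let's pick up where we left off.
"""

def parse_srt(text):
    """Parse SRT text into (start_seconds, end_seconds, caption) tuples."""
    def to_seconds(ts):
        h, m, s_ms = ts.split(":")
        s, ms = s_ms.split(",")
        return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000

    cues = []
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        start, end = lines[1].split(" --> ")  # timing is always line 2 of a cue
        cues.append((to_seconds(start), to_seconds(end), "\n".join(lines[2:])))
    return cues

for start, end, caption in parse_srt(SAMPLE_SRT):
    print(f"{start:.3f}-{end:.3f}: {caption}")
```

Any program that exports this layout, Transcriptive included, will work with the importer.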
Although especially beneficial to Transcriptive users, this free After Effects SRT Importer for Subtitling And Captions will work with any SRT from any program. It’s definitely easier than all the steps above make it sound, and it’s available to all and sundry on our website. Give it a try and let us know what you think! Contact: firstname.lastname@example.org
I decided to try Transcriptive way before I became part of the Digital Anarchy family. Just like any other aspiring documentary filmmaker, I knew relying on a crew to get my editing started was not an option. Without funding you can’t pay a crew; without a crew you can’t get funding. I had no money, an idea in my head, some footage shot with the help of friends, and a lot of work to do. Especially when working on your very first feature film.
Besides being an independent filmmaker and social media strategist for DA, I am also an Assistive Technology Trainer for a private company called Adaptive Technology Services. I teach blind and low-vision individuals how to use their phones and computers to rejoin the workforce after vision loss. Since the beginning of my journey as an AT Trainer (I started as a volunteer 6 years ago), I have been using my work to research the subject and prepare for this film.
My movie is about the relationship between the sighted and non-sighted communities. It seeks to establish a dialog between people with and without visual disabilities so we can come together and demystify disabilities for those without them. I know it is an important subject, but right from the beginning of this project I learned how hard it is to gather funds for any disability-related initiative. I had to carefully budget the shoots and define priorities. Paying a post-production crew was not (and still is not) possible. I have to write and cut samples on my own for now. Transcriptive was a way for me to get things moving by myself so I can apply for grants in the near future and start paying producers, editors, camera operators, and sound designers, and get the project going for real. The journey started with transcribing the interviews. Transcriptive did a pretty good job transcribing the audio from the camera, as you can see below. Accuracy got even better when transcribing audio from the mic.
The idea of getting accurate automated transcripts brought a smile to my face. But could artificial intelligence really get the job done for me? I was skeptical, and with good reason. The accuracy for English interviews was pretty impressive; I barely had to edit those. The situation changed as soon as I tried transcribing audio in my native language, Brazilian Portuguese. The AI transcription didn’t just get a bit flaky; it was completely unusable, so I decided not to waste any more time and started doing manual transcriptions.
I have been using Speechmatics for most of my projects because its English accuracy is considerably higher than Watson’s. However, after trying to transcribe Portuguese for the first time, it occurred to me that Speechmatics actually offers Portuguese from Portugal, while Watson transcribes Brazilian Portuguese. I decided to give Watson a try, but the transcription was not much better than the one I got from Speechmatics.
To be fair, the Brazilian Portuguese footage I was transcribing was b-roll recorded with a Rode mic mounted on top of my DSLR, not well-mic’d sit-down interviews. The clips have decent audio but also some background noise, which does not help foreign-language speech-to-text conversion. At the time I had a deadline to meet and was not able to record better audio and compare Speechmatics and Watson Portuguese transcripts. It will be interesting to try again, with more time to compare the two and evaluate whether there are advantages to using Watson for my next batch of footage.
Days after my failed attempt to transcribe Brazilian Portuguese with Speechmatics, I went back to the Transcriptive panel for Premiere, found an option to import my human transcripts, gave it a try, and realized I could still use Transcriptive to speed up my video production workflow. I could still save time by letting Transcriptive assign timecode to the words I transcribed, which would be nearly impossible for me to do on my own. The plugin allowed me to quickly find where things were said in 8 hours of interviews. Having the timecode assigned to each word allowed me to easily search the transcript and jump to that point in my video where I wanted to have a cut, marker, b-roll or transition effect applied.
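The value of per-word timecode is easy to illustrate. A hypothetical sketch of the underlying idea (illustrative data structure only, not Transcriptive’s actual internals): store each transcribed word with its timestamp, then a text search immediately gives you the spot in the timeline to jump to.

```python
# Each entry pairs a timestamp (seconds) with a transcribed word.
# The sample words and times are made up for illustration.
transcript = [
    (12.40, "the"), (12.55, "pyramids"), (12.98, "were"),
    (13.20, "built"), (13.60, "over"), (13.85, "decades"),
]

def find_word(transcript, query):
    """Return the timestamp of every occurrence of query (case-insensitive)."""
    q = query.lower()
    return [t for t, word in transcript if word.lower() == q]

print(find_word(transcript, "pyramids"))  # [12.55]
```

Doing this by hand, assigning a timecode to every word across 8 hours of interviews, is exactly the part that is nearly impossible without automation.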
My movie is still in pre-production and my Premiere project is honestly not that organized yet, so the search capability was also a huge advantage. I have been working on samples to apply for grants, which means I have tons of different sequences, multicam sequences, and markers that now live in folders inside of folders. Before I started working for DA, I was looking for a solution to minimize the mess without having to fully organize it or spend too much money, and Power Search came to the rescue. Also, being able to edit my transcripts inside of Premiere made my life a lot easier.
Last month, talking to a few film clients and friends, I found out most filmmakers still clean up human transcripts. In my case, I go through the transcripts to add punctuation marks and other cues that remind me how eloquent speakers were in a given phrase. Ellipses, question marks, and exclamation points remind me of the tone in which they spoke, allowing me to get paper cuts done faster. I am not sure ASR technology will start inserting punctuation in the future, but it would be very handy for me. Until that is possible, I am grateful Transcriptive now offers a text edit interface, so I can edit my transcripts without leaving Premiere.
For the movie I am making now, I was lucky enough to have a friend willing to help me get this tedious and time-consuming part of the work done, so I am now exporting all my transcripts to Transcriptive.com. The app will allow us to collaborate on the transcripts: she will be helping me all the way from LA, editing them without having to download a whole Premiere project to get the work done.