We occasionally get questions from customers asking why we charge .04/min ($2.40/hr) for transcription (if you pre-pay), when some competitors charge .25/min or even .50/min. Is it lower accuracy? Are you selling our data?
No and no. Ok, but why?
Transcriptive and PowerSearch work best when all your media has transcripts attached to it. Our goal is to make Transcriptive as useful as possible. We hope the less you have to think about the cost of the transcripts, the more media you’ll transcribe… resulting in making Transcriptive and PowerSearch that much more powerful.
The Transcriptive-AI service is equal to, or better than, what other services are using. We’re not tied to one A.I. and we’re constantly evaluating the different A.I. services. We use whatever we think is currently state-of-the-art. Since we do such a high volume, we get good pricing from all the services, so it doesn’t really matter which one we use.
Do we make a ton of money on transcribing? No.
The services that charge .25/min (or whatever) are probably making a fair amount of money on transcribing. We’re all paying about .02/min or less. Give or take, that’s the wholesale/volume price.
If you’re getting your transcripts for free… those transcripts are probably being used for training, especially if the service is keeping track of the edits you make (e.g. YouTube, Otter, etc.). Transcriptive is not sending your edits back to the A.I. service. That’s the important bit if you’re going to train the A.I. Without the corrected version, the A.I. doesn’t know what it got wrong and can’t learn from it.
So, for us, it all comes down to making Transcriptive.com, the Transcriptive Premiere Pro panel, and PowerSearch as useful as possible. To do so, we want the most accurate transcripts and we want them to be as low cost as possible. We know y’all have a LOT of footage. We’d rather reduce the barriers to you transcribing all of it.
We often get asked what the differences are between Transcriptive 2.0 and 1.0. So here is the full list of new features! As always there are a lot of other bug fixes and behind the scenes changes that aren’t going to be apparent to our customers. So this is just a list of features you’ll encounter while using Transcriptive.
NEW FEATURES IN TRANSCRIPTIVE 2.0
Works with clips or sequences: You no longer have to put clips in sequences to get them transcribed. Clips can be transcribed and edited just by selecting them in the Project panel. This opens up many different workflows and is something the new caption system in Premiere can’t do. Watch the tutorial on transcribing clips in Premiere!
A clip selected in the Project panel, with In/Out points set in Transcriptive.
Editing with Text: Clip Mode enables you to search through clips to find sound bites. You can then set IN/OUT points in the transcript and insert them into your edit. This is a powerful way of compiling rough cuts without having to scrub through footage. Watch the Tutorial on editing video using a transcript!
Collaborate by sharing to Transcriptive.com: Collaborate on creating a paper edit by sharing the transcript with your team and editor. Send transcripts or videos from Premiere to Transcriptive.com, letting a client, AE, or producer edit them in a web browser or add Comments or strikethrough text. The transcript can then be sent back to the video editor in Premiere to continue working with it. Watch the tutorial on collaborating in Premiere using Transcriptive.com! There’s also this blog post on collaborative workflows.
Now includes PowerSearch for free! Transcriptive can only search one transcript at a time. With PowerSearch, you can search every clip and sequence in your project! It’s a search engine for Premiere. Search for text and get search results like Google. Click on a result and it jumps to exactly where the dialog is in that clip or sequence. Watch the tutorials on PowerSearch, the search engine for Premiere.
Reduced cost: As low as .04/min! By prepaying for minutes you can get the cost down to .04/min. Why is it so inexpensive? Is it worse than the other services that charge .25 or .50/min? No! We’re just as good or better (don’t take my word for it, run your own comparisons). Transcriptive only works if you’ve transcribed your footage. By keeping the cost of minutes low, hopefully we make it an easy decision to transcribe all your footage and make Transcriptive as useful as possible!
Ability to add comments/notes at any point in the transcript. The new Comments feature lets you add a note to any line of dialog. Incredibly useful if you’re working with someone else and need to share information. It’s also great if you want to make notes for yourself as you’re going through footage.
Strikethrough text: Allows you to strike through text to indicate dialog that should be removed. Of course, you can just delete it, but if you’re working with someone and want them to see what you’ve flagged for deletion, or if you’re not yet sure you want to delete it, strikethrough is an excellent way of identifying that text.
More ‘word processor’ like text editor: A.I. isn’t perfect, even though it’s pretty close in many cases (usually 96-99% accurate with good audio). However, you can correct any mistake you find with the new text editor! It’s quick and easy to use because it works just like a word processor built into Premiere. Watch the tutorial on editing text in Transcriptive!
Align English transcripts for free: If you already have a script, you can sync the text to your audio track at no cost. You’ll get all the benefits of the A.I. (per word timing, searchability, etc) without the cost. It’s a free way of making use of transcripts you already have. Watch the tutorial on syncing transcripts in Premiere!
Adjust timing for words: If you’re editing text and correcting errors the A.I. made, the new words can end up with timecode that doesn’t quite sync with the spoken dialog. This new feature lets you adjust the timecode for any word so it’s precisely aligned with the spoken word.
Ability to save the transcript to any audio or video file: In TS 1.0 the transcript always got saved to the video file. Now you can save it to any file. This is very helpful if you’ve recorded the audio separately and want the transcript linked to that file.
More options for exporting markers: You can set the duration of markers and control what text appears in them.
Profanity filter: **** out words that might be a bit much for tender ears.
More speaker management options: Getting speaker names correct can be critical. There are now more options to control how this feature works.
Additional languages: Transcriptive now supports over 30 languages!
Checks for duplicate transcripts: Reduces the likelihood a clip/sequence will get transcribed twice unnecessarily. Sometimes users will accidentally transcribe the same clip twice. This helps prevent that and save you money!
Lock to prevent editing: This allows other people to view the transcript in Premiere or on Transcriptive.com while preventing them from accidentally making changes.
Sync Transcript to Sequence: Often you’ll get the transcript before you make any edits. As you start cutting and moving things around, the transcript will no longer match the edit. This is a one-click way of regenerating the transcript to match the edit.
Streamlined payment/account workflow: Access multiple speech engines with one account. Choose the one most accurate for your footage.
We’ve been working on Transcriptive for like 3 years now. In that time, the A.I. has heard my voice saying ‘Digital Anarchy’ umpteen million times. So, you would think it would easily get that right by now. As the below transcript from our SRT Importing tutorial shows… not so much. (Dugal Accusatorial? Seriously?)
ALSO, you would think that by now I would have a list of terms that I would copy/paste into Transcriptive’s Glossary field every time I get a transcript for a tutorial. The glossary helps the A.I. determine what ‘vocal sounds’ should be when it translates those sounds into words. Uh, yeah… not so much.
So… don’t be like AnarchyJim. If you have words you know the A.I. probably won’t get: company names, industry jargon, difficult proper names (cool blog post on applying player names to an MLB video here), etc., then use Transcriptive’s glossary (in the Transcribe dialog). It does work. (and somebody should mention that to the guy that designed the product. Oy.)
Overall the A.I. is really accurate and does usually get ‘Digital Anarchy’ correct. So I get lazy about using the glossary. It is a really useful thing…
If you work with a team to deliver high quality videos then you know how important it is to keep everything organized between editors, assistant editors, producers, and everyone else involved in a project. With basically everyone working remotely, the need to keep all data under one account seemed like a big priority for our clients. So Transcriptive for Premiere Pro now allows users to log in to one account in order to share prepaid minutes balances, have access to the same projects on Transcriptive.com, track transcribed jobs to avoid duplicates, and access every invoice in one place.
The licensing for Transcriptive for Premiere Pro has not changed: each license purchased equals one serial number that can be installed on two computers. However, multiple editors can now share the same account and pre-paid minutes if no serial number is attached to the Transcriptive account. It sounds confusing, but it’s a simple process. After the Transcriptive licenses are purchased and the team account is created, you can share the login info with whoever is going to be using Transcriptive. All you need to do is to make sure your team members follow the steps below.
Choose the option to “Click here to register using just your serial number” in the Serial number setup window and enter the unrestricted trial serial number.
Go to the Profile menu in the upper right corner of the panel and use the Transcriptive account credentials to connect the panel to the account created on https://app.transcriptive.com
Following steps 3 and 4 each time Transcriptive is set up will authorize the full version of our Premiere Pro plugin without requiring users to create multiple accounts. This means all editors will be able to use one set of pre-paid minutes, assistant editors can quickly access transcripts in Premiere and on Transcriptive.com without having to ask editors to share them between accounts, and producers have fewer invoices to track each month.
It’s important to keep in mind that having everyone logged into the same account also means they all have access to the account information, including the credit card information and transcripts. If this is a big concern for you, it’s not the only way to use Transcriptive, as transcripts can still be shared between accounts. See this video to learn more! However, using the same account within a team is still the best way to centralize all the info related to Transcriptive.
If you are ready to give this setup a try but have not yet purchased a Transcriptive for Premiere Pro license, please send an email to email@example.com.
We have the initial beta builds of native Silicon versions of Flicker Free and Beauty Box for FCP. FCP is the only released app that is currently Universal and supports Silicon plugins. Samurai Sharpen will be released for FCP/Silicon soon.
Builds for other host apps will be released once they release their Silicon versions. The plan right now is to get the FCP versions solid and that’ll make it more likely the builds for other apps will work out of the gate. Also, I don’t love releasing beta plugins for a beta host app (e.g. Resolve).
Overall they seem in pretty good shape. One caveat is that Analyze Frame doesn’t work in Beauty Box, so you need to manually select the Light and Dark Colors with the color picker. This is not ideal, as it’s not exactly the same thing as using Analyze Frame. But it’s what we’ve got right now. It’s actually more of a problem with FCP’s new FxPlug 4 API, so it won’t be fixed until the next release of FCP.
On that note, I’ll mention that there’s a lot of new stuff going on with the Apple builds. Apple announced the new FxPlug 4 API, which is completely different from FxPlug 3, so it has required a lot of re-working. Eventually the FxPlug 3 plugins will stop working in FCP, so you’ll need the FxPlug 4 builds sooner or later. We’re also finally porting the GPU code to Metal. So look for new builds that incorporate all that for both Silicon and Intel very soon. Apple is keeping us pretty busy.
As you’ve probably heard, Adobe announced a new caption system a few weeks ago. We’ve been fielding a bunch of questions about it and how it affects Transcriptive, so I figured I’d let y’all know what our take on it is, given what we know.
Overall it seems like a great improvement to how Premiere handles captions. Adobe is pretty focused on captions. So that’s mainly what the new system is designed to deal with and it looks impressive. While there is some overlap with Transcriptive in regards to editing the transcript/captions, as far as we can tell there isn’t really anything to help you edit video. And there’s a lot of functionality in Transcriptive that’s designed to help you do that. As such, we’re focused on enhancing those features and adding to that part of the product.
It also looks like it’s only going to work with sequences. It _seems_ that when they add the speech-to-text (it’s not available in the beta yet), it’ll mostly be designed for generating captions for the final edit.
However, being able to transcribe clips and use the transcript to search a clip in the Source panel is one powerful thing Transcriptive lets you do. You can even set in/out points in Transcriptive and then drop that cut into your main sequence.
The ability to send the transcript to a client/AE that doesn’t have Premiere and let them edit it in a web browser is another.
With Transcriptive’s Conform feature, you can take the edited transcript and use it as a Paper Cut. Conform will build a sequence with all the edits.
Along with a bunch of other smaller features, like the ability to add Comments to the transcript.
So… we feel there will still be a lot of value even once the caption system is released. If we didn’t… we would’ve stopped development on it. But we’re still adding features to it… v2.5.1, which lets you add Comments to the transcript, is coming out this week sometime (Dec. 10th, give or take).
One thing we do know is that the caption system will only import/export caption files (e.g. SRT, SCC, etc.). From our perspective, this is not a smart design. It’s one of my annoyances with the current caption system: Transcriptive users have to export a caption file and re-import it into Premiere. It’s not a good workflow, especially when we should just be able to save captions directly to your timeline. Adobe is telling us it’s going to be the same kludgy workflow.
So if that doesn’t sound great to you, you can go to the Adobe site and leave a comment asking for JSON import/export. (URL: https://tinyurl.com/y4hofqoa) Perhaps if they hear from enough people, they’ll add that.
Why would that help us (and you)? When we get a transcript back from the A.I., it’s a rich-data text file (JSON format). It has a lot of information about the words in it. Caption formats are data-poor. It’s kind of like comparing a JPEG to a RAW file: you usually lose a lot of information when you save as a caption format (as you do with a JPEG).
It’ll make it much easier for us and other developers to move data back and forth between the caption system and other tools. For example: If you want someone to make corrections to the Adobe transcript outside of Premiere (on Transcriptive.com for example :-), it’s easier to keep the per-word timecode and metadata with a JSON file.
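To make the JPEG/RAW analogy concrete, here’s a minimal Python sketch of what gets lost in the conversion. The JSON layout below is hypothetical (every A.I. service has its own schema), but the shape of the problem is the same: several fields per word going in, one text blob per cue coming out.

```python
# Toy illustration of "rich JSON in, data-poor caption out". The JSON
# layout below is hypothetical -- real payloads vary by A.I. service.
import json

transcript_json = json.loads("""
{
  "words": [
    {"text": "The",  "start": 1.02, "end": 1.18, "confidence": 0.99, "speaker": "S1"},
    {"text": "pick", "start": 1.18, "end": 1.44, "confidence": 0.97, "speaker": "S1"},
    {"text": "is",   "start": 1.44, "end": 1.55, "confidence": 0.99, "speaker": "S1"},
    {"text": "Alex", "start": 1.55, "end": 1.90, "confidence": 0.95, "speaker": "S1"},
    {"text": "Bohm", "start": 1.90, "end": 2.31, "confidence": 0.88, "speaker": "S1"}
  ]
}
""")

def to_srt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

# Collapsing the words into one SRT cue keeps a start time, an end time,
# and the text. Per-word timing, confidence, and speaker labels are gone.
words = transcript_json["words"]
print("1")
print(f"{to_srt_time(words[0]['start'])} --> {to_srt_time(words[-1]['end'])}")
print(" ".join(w["text"] for w in words))
```

Going the other way (SRT back to rich JSON) can’t recover any of that, which is why a JSON import/export path matters to anyone building on top of the caption system.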
Historically Adobe has had products that were very open. It’s why they have such a robust plugin/third-party ecosystem. So we’re hopeful they continue that by making it easy to access high resolution data from within the caption system or anywhere else data/metadata is being generated.
It’s great that Adobe is adding a better caption workflow and speech-to-text. The main reason Transcriptive isn’t more caption-centric is that we knew Adobe was going to upgrade that sooner or later. But the lack of easy import/export is a bummer. It really doesn’t help us (or any developer) extend the caption system, or help Premiere users that want to use another product in conjunction with the system. As mentioned, it’s still beta, so we’ll see what happens. Hopefully they make it a bit more flexible and open.
Most of the updates we release are free for users that have purchased the most recent version of the plugin. However, because we are not subscription based (we still do that old fashioned perpetual license thing), if you don’t own the latest version of the plugin… you have to upgrade to it.
It requires a TON of work to keep software working with all the changes Apple, Adobe, Nvidia and everyone else keeps making. Most of this work we do for free because they’re small incremental changes. Every time you see Beauty Box v4.0.1 or 4.0.7 or 4.2.4 (the current one)… you can assume a lot of work went into that and you don’t have to pay anything. However, eventually the changes add up or Apple (most of the time it’s Apple) does some crazy thing that means we need to rewrite large portions of the plug-in. In either case, we rev the version number (i.e. 4.x to 5.0) and an upgrade is required.
We do not go back and ‘fix’ older versions of the software. We only update the most recent one. Such is the downside of Perpetual licenses. You can use that license forever, but if your host app or OS changes and that change breaks the version of the plugin you have… you need to upgrade to get a fix.
If one of your clients comes to you with a video you did for them in HD and says ‘hey, I need this in 4K’, would you redo the video for free? Probably not. They have a perpetual license for the HD version. It doesn’t entitle them to new versions of the video forever.
We want to support our customers. The reason we develop this stuff is because it’s awesome to see the cool things you all do with what we throw out there. If we didn’t have to do any work to maintain the software, we wouldn’t charge upgrade fees. Unfortunately, it is a lot of work. We want to support you, but if we go out of business, that’s probably not going to benefit either of us.
Apple may say it only takes two hours to recompile for Silicon and that may be true. But to go from that to a stable plugin that can be used in a professional environment and support different host apps and graphics cards and all that… it’s more like two months or more.
So, that’s why we charge upgrade fees. You’re paying for all the coding, design, and testing that goes into creating a professional product that you can rely on. Not to mention the San Francisco-based support team to help you out with all of it. We’re here to help you be successful. The flipside is we need to do what’s necessary to make sure we’re successful ourselves.
We’re extremely excited about the speed improvements in Flicker Free 2.0! Yes, we have actually seen a 1500% performance increase with 4K footage. On average across all resolutions and computers it’s usually a 300-400% increase, and with 4K footage it’s more like 700-800% on average. Still pretty good!
You can see our performance benchmarks in this Google Doc. And download the benchmark projects for Premiere Pro (700mb) and for Final Cut Pro to run your own tests! (However, you need to run the FF1 sequences with FF1 and the FF2 (FF1 settings) sequences with FF2. If you just turn off the GPU in FF2 you won’t get the same results; they’ll be slower than they would be in FF1.)
However, it’s pretty dependent on your computer and what video editing app you’re using. We’ve been disappointed by MacBook Pros across the board. They’re just really underpowered for the price. If you’re running a MacBook, we highly recommend getting an external GPU enclosure and putting in a high-end AMD card. We’d recommend Nvidia as we do on Windows, but… Apple. Oh well.
It’s possible once we implement Metal (Apple’s technology to replace OpenCL) we’ll see some additional improvements. That’s coming in a free update shortly. In fact, because After Effects/Mac only supports Metal, Flicker Free isn’t accelerated at all in AE. It does great in Premiere which does support OpenCL. (Adobe’s GPU support is really lacking, and frustrating, across their video apps, but that’s a topic for another blog post)
Not every computer ran every test. We changed the benchmark and didn’t have access to every machine to render the additional sequences.
Windows generally saw more improvement than Mac.
FCP saw some really significant gains. It’s much faster/more efficient to get multiple frames in FCP using the GPU than the CPU. 1.0 was really slow in FCP.
The important bit is at the right edge of the spreadsheet where you see the percentages.
We’d love to see you run the benchmarks on your computer. If you do, please send the results to firstname.lastname@example.org. As noted above, you need to run the FF1 sequences with FF1 and the FF2 (FF1 settings) sequences with FF2; if you just turn off the GPU in FF2 you won’t get the same results (they’ll be slower than they would be in FF1).
After Effects isn’t in the benchmark because AE/Mac doesn’t support OpenCL for GPU acceleration.
One of the things Flicker Free 1.0 doesn’t do well is deal with moving cameras or fast-moving subjects. This tends to result in a lot of ghosting: echoes from other frames Flicker Free is analyzing as it tries to remove the flicker (no, people aren’t going to stop talking to you on dating apps because you’re using FF). You can see this in the below video as sort of a motion blur or trails.
Flicker Free 2.0 does a MUCH better job of handling this situation. We’re using optical flow algorithms (what’s used for retiming footage) as well as a better motion detection algorithm to isolate areas of motion while we deflicker the rest of the frame. You can see the results side-by-side below:
Better handling of fast motion, called Motion Compensation, is one of the big new features of 2.0. While the whole plugin is GPU accelerated, Motion Compensation will slow things down significantly. So if you don’t need it, it’s best to leave it off. But when you need it… you really need it and the extra render time is worth the wait. Especially if it’s critical footage and it’s either wait for the render or re-shoot (which might not be so easy if it’s a wedding or sports event!).
We’re getting ready to release 2.0 in the next week or so, so here’s just a bit of a tease of some of the amazing new tech we’ve rolled into it!
Using Transcriptive with multicam sources is something we’ve wanted to implement for a while now. If you are a multicam fan and have been using Transcriptive for Premiere Pro, you know there isn’t a straightforward solution to transcribe Multicam source sequences. But Adobe is adding a way for panels to access multicam sequences correctly. So Transcriptive finally has multicam support!
When we launched Transcriptive 2.0, which gave users the ability to transcribe Clips as well as Sequences, we started thinking that maybe if Transcriptive could treat multicam sources as clips instead of sequences it would be possible to transcribe them using Clip Mode.
Multicam is an odd duck. Technically multicam sources are sequences, but Premiere treats them as clips. Sometimes. It’s a strange implementation that made it impossible for Transcriptive to know what they were. Adobe has made some changes in the newest Premiere Pro build. It’s currently the public beta but should be released soon (14.3.2 when it comes out). The upcoming release of Transcriptive 2.5, which is in BETA, already supports these changes.
Multicam sources can now be transcribed in Clip Mode, allowing users to click on a multicam source in the project window and use the transcript to find the sections they want to add to a sequence. Merged clips seem to work the same way and can also be transcribed in Clip Mode. The transcript will be saved to that merged clip in that project, and the transcript will load when you open that merged clip with Clip Mode on. Here’s a step-by-step of what we are testing:
Create a multicam or merged clip
Use Transcriptive to transcribe it in Clip Mode
Use the transcript to add in and out points and insert those sections into a sequence.
It’s a very simple and standard workflow with some caveats. One thing to keep in mind is that, with a multicam clip, you will want to use the Insert command in the Source Monitor (,) and not in Transcriptive (Ctrl+,). This is because we don’t currently have the ability to detect the active camera when inserting from Transcriptive. If a multicam clip is inserted from Transcriptive, you won’t be able to change the camera in the sequence with Multicam View. So you can add in and out points in either Transcriptive or the Source Monitor, but make sure you insert any sound bites from the Source Monitor and not from the Transcriptive panel.
Another thing to keep in mind is that, if you are using the Transcriptive web app to share transcripts with team members, the multicam functionalities you find in Premiere Pro won’t be available on the web. You can share a Multicam Clip to the web app the same way you share any other clips. However, sharing the clip will use a default camera, and not the active camera. If you want to choose a specific camera to show on the Transcriptive web app, drop the multicam clip into a sequence and share the sequence, so that you can set what camera is uploaded. More on sharing Multicam Sequences to Transcriptive.com to come!
Multicam and Merged clips support are likely to be included in our next Transcriptive 2.5 release. Stay tuned! Questions? Email email@example.com.
The Glossary feature in Transcriptive is one way of increasing the accuracy of the transcripts generated by artificial intelligence services. The A.I. services can struggle with names of people or companies, and it’s a bit of a mixed bag with technical terms or industry jargon. If you have a video with names/words you think the A.I. will have a tough time with, you can enter them into the Glossary field to help the A.I. along.
For example, I grabbed this video of MLB’s top 30 draft picks in 2018:
Obviously there are a lot of names that need to be accurate, and since we know what they are, we can enter them into the Glossary.
As the A.I. creates the transcript, words that sound similar to the names will usually be replaced with the Glossary terms. As always, the A.I. analyzes the sentence structure and makes a call on whether the word it initially came up with fits better in the sentence. So if the Glossary term is ‘Bohm’ and the sentence is ‘I was using a boom microphone’, it probably won’t replace the word. However, if the sentence is ‘The pick is Alex boom’, it will replace it, since the word ‘boom’ makes no sense in that sentence.
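We can’t see inside any of the A.I. services, but the decision they’re making is easy to illustrate. Here’s a toy Python sketch of that two-part call: is the transcribed word close to a glossary term, and does the original word already fit the sentence? The similarity threshold and the context check are invented for illustration; the real services use far more sophisticated language models.

```python
# Toy sketch of the glossary decision: (1) is a transcribed word close
# to a glossary term, and (2) does the original word already fit the
# sentence? The threshold and context check are invented; real A.I.
# services do this with actual language models.
from difflib import SequenceMatcher

GLOSSARY = ["Mize", "Bart", "Bohm"]
COMMON_WORDS = {"boom", "bomb"}  # words that often fit a sentence as-is

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def apply_glossary(words: list[str]) -> list[str]:
    out = []
    for i, word in enumerate(words):
        best = max(GLOSSARY, key=lambda term: similarity(word, term))
        sounds_close = similarity(word, best) >= 0.6
        # Crude stand-in for sentence analysis: only swap a common word
        # when it follows something that looks like a first name.
        follows_name = i > 0 and words[i - 1][:1].isupper()
        if sounds_close and (word.lower() not in COMMON_WORDS or follows_name):
            out.append(best)
        else:
            out.append(word)
    return out

print(apply_glossary("The pick is Alex boom".split()))
# -> ['The', 'pick', 'is', 'Alex', 'Bohm']
print(apply_glossary("I was using a boom microphone".split()))
# -> ['I', 'was', 'using', 'a', 'boom', 'microphone']
```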
Here’s a short sample to give you an idea of the difference. Again, all we did was add in the last names to the Glossary (Mize, Bart, Bohm):
With the Glossary:
The Detroit Tigers select Casey Mize, a right handed pitcher. From Auburn University in Auburn, Alabama. With the second selection of the 2018 MLB draft, the San Francisco Giants select Joey Bart a catcher. A catcher from Georgia Tech in Atlanta, Georgia, with the third selection of a 2018 MLB draft. The Philadelphia Phillies select Alec Bohm, third baseman
Without the Glossary:
The Detroit Tigers select Casey Mys, a right handed pitcher. From Auburn University in Auburn, Alabama. With the second selection of the 2018 MLB draft, the San Francisco Giants select Joey Bahrke, a catcher. A catcher from Georgia Tech in Atlanta, Georgia, with the third selection of a 2018 MLB draft. The Philadelphia Phillies select Alec Bomb. A third baseman
As you can see it corrected the names it should have. If you have names or words that are repeated often in your video, the Glossary can really save you a lot of time fixing the transcript after you get it back. It can really improve the accuracy, so I recommend testing it out for yourself!
It’s also worth trying both Speechmatics and Transcriptive-A.I. Both are improved by the glossary; however, Speechmatics seems to be a bit better with glossary words. Since Transcriptive-A.I. normally has a bit better accuracy, you’ll have to run a test or two to see which will work best for your video footage.
If you have any questions, feel free to hit us up at firstname.lastname@example.org!
Since we announced the bundle between Transcriptive and PowerSearch a few months back, our team has been working even harder to improve the plugin so users can make the most of having transcripts and search engine capabilities inside Premiere Pro. This means we are releasing Transcriptive 2.0.5, which fixes some critical reported bugs, and PowerSearch 2.0: a much faster and more efficient version of our metadata search tool.
Having accurate transcripts available in Premiere is already a big help in speeding up video production workflows, especially while working remotely. (See this previous post about Transcriptive’s sharing capabilities for remote collaboration!) But we truly believe, and have been hearing this from clients as well, that having all the content in your video editing project – especially transcripts! – converted into searchable metadata makes it much easier to find content when you have large amounts of footage, markers, sequences, and media files. And this is why the PowerSearch and Transcriptive combo makes it much easier to find soundbites, different takes of a script, or pinpoint any time a name or place is mentioned.
PowerSearch 1.0 was decently fast but could be slow on larger projects. Our next release makes use of a powerful SQL database to make PowerSearch an order of magnitude faster. The key to PowerSearch is that it indexes an entire Premiere Pro project, much like Google indexes websites, to optimize search performance. An index of hundreds of videos that used to take 10-12 hours to create is now indexed in less than an hour and the same database makes searching all that data significantly faster. Another advantage is the ability to use common search symbols, such as minus signs and quotes, for more precise, accurate searching. For editors with hundreds of hours of video, this can help narrow down searches from hundreds of results to a few dozen.
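PowerSearch’s actual database schema isn’t something we’ve published, but if you’re curious about the general technique, here’s a minimal sketch using SQLite’s FTS5 full-text engine; the table and column names are invented for illustration.

```python
# Minimal sketch of full-text indexing with SQLite FTS5. The schema is
# invented for illustration; it is not PowerSearch's actual database.
import sqlite3

conn = sqlite3.connect(":memory:")
# Columns: item (clip/sequence/marker name), timecode (stored but not
# searched), text (the dialog itself).
conn.execute(
    "CREATE VIRTUAL TABLE transcript_index "
    "USING fts5(item, timecode UNINDEXED, text)"
)
conn.executemany(
    "INSERT INTO transcript_index VALUES (?, ?, ?)",
    [
        ("interview_cam_a.mp4", "00:01:12:04", "the pick is Alex Bohm third baseman"),
        ("interview_cam_b.mp4", "00:04:55:19", "we were using a boom microphone"),
        ("rough_cut_v3",        "00:12:30:00", "Casey Mize a right handed pitcher"),
    ],
)

# Quoted phrases and NOT exclusions come with the FTS engine, much like
# the search symbols mentioned above.
for item, tc, text in conn.execute(
    "SELECT item, timecode, text FROM transcript_index "
    "WHERE transcript_index MATCH ?",
    ('"boom microphone" NOT pitcher',),
):
    print(f"{item} @ {tc}: {text}")
```

The index is built once (the slow part), and every search after that is a cheap lookup, which is where the order-of-magnitude gain comes from.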
PowerSearch still returns search results like any search engine, showing you the search term, the words around it, what clip/sequence/marker it’s in, and the timecode. Clicking on a result will open the clip or sequence and jump straight to the correct timecode in the Source or Program panel.
PowerSearch 2.0 can still be purchased separately and can help your production even if you are getting transcripts from a different source or just want to search markers. However, it is now bundled with Transcriptive: you can get both for $149, while PowerSearch costs $99 on its own. So if you haven’t tried using PowerSearch and Transcriptive together, give it a try! We are constantly working on Transcriptive to add more capabilities, reduce transcription costs, and improve the sharing options now available in the panel. Features like Clip Mode and the new Text Editor go beyond just transcribing media and sequences, and combining them with a much faster PowerSearch makes finding content much faster.
Transcriptive 2.0 users can use their Transcriptive license to activate PowerSearch. Trial licenses for both Transcriptive and PowerSearch are available here and our team would be happy to help if you need support figuring out a workflow for you and your team. Send any questions, concerns, or feedback to email@example.com! We would love to hear from you.
It’s been a long time coming, so we’re pretty excited to announce that Flicker Free 2.0 is in beta! The beta serial number is good until June 30th and will make the plugin fully functional with no watermark. Please contact firstname.lastname@example.org to get added to the beta list and get the serial number.
There are a lot of cool improvements, but the main one is GPU support. On Windows, on average it’s about 350% faster vs. Flicker Free 1.0 with the same settings, but often it’s 500% or more. On Mac, it’s more complicated. Older machines see a bigger increase than newer ones, primarily because they support OpenCL better. Apple is doing what it can to kill OpenCL, so newer machines, which are AMD only, suffer because of it. We are working on a Metal port and that’ll be a free upgrade for 2.0, but it won’t be in the initial release. So on Mac you’re more likely to see a 200% or so increase over FF 1.0. Once the Metal port is finished we expect performance similar to what we’re seeing on Windows. Although, on both platforms it varies a bit depending on your CPU, graphics card, and what you’re trying to render.
The other big improvement is better motion detection, which uses optical flow algorithms. For shots with a moving camera or a lot of movement in the video, this makes a big difference. The downside is that it’s relatively slow. However, if you’re trying to salvage a shot you can’t go and reshoot (e.g. a wedding), it will fix footage that was previously unfixable.
A great example of this is in the footage below. It’s a handheld shot with rolling bands. The camera is moving around Callie, our Director of IT Obsolescence, and this is something that gives 1.0 serious problems. I show the original, what FF 1.0 could do, and what the new FF 2.0 algorithms are capable of. It does a pretty impressive job.
You can download the Premiere project and footage of Callie here:
A couple important things to note… 1) if you’re on Mac, make sure the Mercury Engine is set to OpenCL. We don’t support Metal yet. We’re working on it but for now the Mercury Engine HAS to be set to OpenCL. 2) Unfortunately, Better AND Faster wasn’t doable. So if you want Faster, use the settings for 1.0. This is probably what you’ll usually want. For footage with a lot of motion (e.g. handheld camera), that’s where the 2.0 improvements will really make a difference, but it’s slower. See the ReadMe for more details (I know… nobody reads the ReadMe. But it’s not much longer than this email… you should read it!).
Here’s a benchmark Premiere Pro project that we’d like you to run. It helps to also have Flicker Free 1.0 installed if you have it. If not, just render the FF 2.0 sequences. Please queue everything up in Media Encoder and render everything when you’re not using the machine for something else. Please send the results (just copy the Media Encoder log for the renders: File>Show Log), what graphics card you have, and what processor/speed you have to email@example.com.
Benchmark project with footage (if you’ve already downloaded this, please re-download it as the project has changed):
It’s a lot of work supporting different host apps. Every company has a different API (application programming interface) and they usually work very differently from each other. So development takes a lot of time, as does testing, as does making sure our support staff knows each host app well enough to troubleshoot and help you with any problems.
Our goal with all our software is to provide a product that 1) does what it claims to do as well or better than anything else available, 2) is reasonably bug free and 3) completely supported if you call in with a problem (yes, you can still call us and, no, you won’t be routed to an Indian call center). All of that is expensive. But we pride ourselves on great products with great support at a reasonable cost. By having crossgrades we can do all of the above, since you’re not paying for things you don’t need.
If you create a video for a client in HD and then they tell you they want the video in a vertical format for mobile, do you do it for free? Probably not. While clients might think you just need to re-render it, you know that because you need to make the video compelling in the new format, make sure all text is readable, and countless other small things… it requires a fair amount of work.
That’s the way it is with developing for multiple APIs. So the crossgrade fee covers those costs. And since all of our plugins are perpetual licenses, you don’t have to pay a subscription fee forever to keep using our products.
If we didn’t charge crossgrade fees, we’d include the costs of development for all applications in the initial price of the plugin (which is what some companies do). This way you only pay for what you need. Most customers only use one host application, so this results in a lower initial cost. Only users that require multiple hosts have to pay for them.
And we don’t actually charge per application. For example, After Effects and Premiere use the same API, so if you buy one of our plugins for Adobe, it works in both.
The crossgrades come as a surprise to some customers, but there really are good reasons for them. I wanted you all to understand what they are and how much work goes into our products.
If you’ve been using Speechmatics credits to transcribe in Transcriptive, our transcription plugin for Premiere Pro, you’ve probably noticed that accessing your credits is no longer an option in Transcriptive 2.0.2 and later. Speechmatics is discontinuing the API that we used to support their service in Transcriptive, which means your Speechmatics credentials can no longer be validated inside the Transcriptive panel.
We know a lot of users still have Speechmatics credits and have been working closely with Speechmatics so those credits can be available in your Transcriptive account as soon as possible. Hopefully in the next week or two.
In the meantime, there are a couple of ways users can still transcribe with Speechmatics credits: 1) Use an older version of Transcriptive, like v1.5.2 or v2.0.1. Those should still work for a bit longer but use the older, less accurate API. Or 2) Upload directly on the Speechmatics website and export the transcript as a JSON file to be imported into Transcriptive. It is a fairly simple process and a great temporary solution. Here’s a step-by-step guide:
1. Head to the Speechmatics website – To use your Speechmatics credits, head to www.speechmatics.com and login to your account. Under “What do you want to do?”, choose “Transcription” and select the language of your file.
2. Upload your media file to the Speechmatics website – Speechmatics will give you the option to drag and drop or select your media from a folder on your computer. Choose whichever option works best for you and then click on “Upload”. After the file is uploaded, the transcription will start automatically and you can check the status of the transcription on your “Jobs” list.
3. Download a .JSON file – After the transcription is finished (refresh the page if the status doesn’t change automatically!), click on the Actions icon to access the transcript. You will then have the option to export the transcript as a .JSON file.
4. Import the .JSON file into any version of Transcriptive – Open your Transcriptive panel in Premiere. If you are using Transcriptive 2.0, be sure Clip Mode is turned on. Select the clip you have just transcribed on Speechmatics and click on “Import”. If you are using an older version of Transcriptive, drop the clip into a sequence before choosing “Import”.
You will then have the option to “Choose an Importer”. Select the JSON option and import the Speechmatics file saved on your computer. The transcript will be synced with the clip automatically at no additional charge.
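If you want to sanity-check the downloaded JSON before importing it, a few lines of Python will do. The key names below are placeholders, not Speechmatics’ documented schema; open your actual file and adjust them to match.

```python
# Quick sanity check on a transcript JSON before importing it.
# The key names ("words", "text", "start", "end") are placeholders --
# inspect the file Speechmatics actually gives you and adjust.
import json
import sys

with open(sys.argv[1], encoding="utf-8") as f:
    data = json.load(f)

words = data.get("words", [])
if not words:
    sys.exit("No word entries found -- check the file's structure.")

duration = words[-1]["end"] - words[0]["start"]
print(f"{len(words)} words spanning {duration:.1f} seconds")
print("Opening text:", " ".join(w["text"] for w in words[:12]))
```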
One important thing to know: although Transcriptive v1.x still has Speechmatics as an option and it still works, we would recommend following the steps above to transcribe with Speechmatics credits. The option available in those versions of the panel uses an older version of their API and is less accurate than the new version. So we recommend you transcribe on the Speechmatics website if you want to use your Speechmatics credits now and not wait for them to be transferred.
However, we should have the transfer sorted out very soon, so keep an eye open for an email about it if you have Speechmatics credits. If the email address you use for Speechmatics is different from the one you use for Transcriptive.com, please email firstname.lastname@example.org. We want to make sure we get things synced up so the credits go to the right place!
A lot of you have a ton of footage that you want to transcribe. One of our goals with Transcriptive has been to enable you to transcribe everything that goes into your Premiere project. To search it, to create captions, to easily see what talent is saying, etc. But if you’ve got 100 hours of footage, even at $0.12/min the costs can add up. So…
Transcriptive has a new feature that will help you cut your transcribing costs by 50%. The latest version of our Premiere Pro transcription plugin had already cut the cost of transcribing from $0.12/min to $0.08/min. However, our new prepaid minutes packages go even further, allowing users to purchase transcribing credits in bulk! You can save 50% per minute, transcribing for $2.40/hr or $0.04/min. This applies to both Transcriptive AI and Speechmatics.
The pre-paid minutes option reduces transcription costs to $0.04/min, with minutes purchased in volume for $150 or $500. For small companies and independent editors, the $150 package will make it possible to secure 62.5 hours of transcription without breaking the bank. If you and your team are transcribing large amounts of footage, going for the $500 package will allow you to save even more.
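For anyone who wants the arithmetic behind those package sizes, it follows directly from the $0.04/min rate:

```python
# Arithmetic behind the pre-paid packages at $0.04/min.
RATE = 0.04  # dollars per minute

for dollars in (150, 500):
    minutes = dollars / RATE
    print(f"${dollars} -> {minutes:,.0f} minutes ({minutes / 60:.1f} hours)")
# $150 -> 3,750 minutes (62.5 hours)
# $500 -> 12,500 minutes (208.3 hours)
```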
The credits are good for 24 months, so you don’t need to worry about them expiring.
You don’t HAVE to pre-pay. You can still Pay-As-You-Go for $0.08/min. That’s still really inexpensive for transcription and if you’re happy with that, we’re happy with it too.
However, if you’re transcribing a lot of footage, pre-paying is a great way of getting costs down. It also has other benefits: you don’t need to share your credit card with co-workers and other team members. For bigger companies, production managers, directors, or even an accounting department can be in charge of purchasing the minutes and feeding credits into the Premiere Pro Transcriptive panel, so editors no longer have to worry about the charges submitted to the account holder’s credit card.
Buying the minutes in advance is simple! Go to your Premiere Pro panel, click on your profile icon, choose “Pre-Pay Minutes”, and select the option that best suits your needs. You can also pre-pay credits from your web app account by logging into app.transcriptive.com, opening your “Dashboard”, and clicking on “Buy Minutes”. A pop-up window will ask you to choose the pre-paid minutes package and enter your credit card information. Confirm the purchase and your prepaid minutes will show under “Balance” on your homepage. The prepaid minutes balance will also be visible in your Premiere Pro panel, right next to the cost of the transcription.
Applying purchased credits to your transcription jobs is also a quick and easy process. While submitting a clip or sequence for transcription, Transcriptive will automatically deduct the amount required to transcribe the job from your balance. If the available credit is not enough to transcribe your job, the remaining minutes will be charged to the credit card on file.
The 50% discount on prepaid minutes will only apply to transcribing, but minutes can be used to Align existing transcripts at regular cost. English transcripts can be imported into Transcriptive and aligned to your clips or sequences for free, while text in other languages will align for $0.02/min with Transcriptive AI and $0.04/min with Transcriptive Speechmatics.
If you’re using Transcriptive 1.x, it’s still not exactly a problem, but it does require some hoop jumping: either turn ‘NewWorld’ off (instructions are below) or keep using Premiere Pro 14.0.1. (Eventually ‘Old World’ will not be supported in Premiere and you’ll be forced to upgrade TS. That’s a ways off, though.)
Turning Off New World
Here are the steps to turn off ‘NewWorld’ and have Premiere revert back to using ‘Old World’:
Press Control + F12 or Command + F12. This will bring up Premiere’s Console.
From the Hamburger menu (three lines next to the word ‘Console’), select Debug Database View
Scroll down to ScriptLayerPPro.EnableNewWorld and uncheck the box (setting it to False).
Restart Premiere Pro
When Premiere restarts, NewWorld will be off and Transcriptive 1.x should work normally.
So far there are no new major bugs and relatively few minor ones that we’re aware of when using Transcriptive 2.0.3 with Premiere 14.0.2 (with NewWorld=On). There are also a LOT of other improvements in 2.0.3 that have nothing to do with this.
Adobe actually gave us a pretty good heads up on this. Of course, in true Anarchist fashion, we tested it early on (and things were fine) and then we tested it last week and things were not fine. So it’s been an interesting week and a half scrambling to make sure everything was working by the time Adobe sent 14.0.2 out into the world.
So everything seems to be working well at this point. And if it isn’t, you now know how to turn off all the newfangled stuff until we get our shit together! (But we do actually think things are in good shape.)
Have you ever considered using Transcriptive to build an effective Search Engine Optimization (SEO) strategy and increase the reach of your Social Media videos? Having your footage transcribed right after the shooting can help you quickly scan everything for soundbites that will work for instant social media posts. You can find the terms your audience searches for the most, identify high ranked keywords in your footage, and shape the content of your video based on your audience’s behavior.
According to vlogger and Social Media influencer Jack Blake, being aware of what your audience is doing online is a powerful tool to choose when and where to post your content, but also to decide what exactly to include in your Social Media Videos, which tend to be short and soundbite-like. The content of your media, titles, video descriptions and thumbnails, tags and post mentions should all be part of a strategy built based on what your audience is searching for. And this is why Blake is using Transcriptive not only to save time on editing but also to carefully curate his video content and attract new viewers.
Right after shooting his videos, the vlogger transcribes everything and exports the transcripts as rich text so he can quickly share the content with his team. After that, a copywriter scans through the transcribed audio and identifies content that will bring traffic to the client’s website and increase ROI. “It’s amazing. I transcribe the audio in minutes, edit some small mistakes without having to leave Premiere Pro, and share the content with my team. After that, we can compare the content with our targeted keywords and choose what I should cut. The editing goes quickly and smoothly because the words are already time-stamped and my captions take no time to create. I just export the transcripts as an SRT and it is pretty much done,” explains Blake.
Of course, it all starts with targeting the right keywords and that can be tricky, but there are many analytics and measurement applications offering this service nowadays. If you are just getting started in the whole keyword targeting game, the easiest and most accessible way is connecting your in-site search queries with Google Analytics. This will allow you to get information on how users are interacting with your website, including how much your audience searches, who is performing searches and who is not, and where they begin searching, as well as where they head afterward. Google Analytics will also allow you to find out exactly what people are typing into Google when searching for content on the web.
For Blake, using competitors’ hashtags from Youtube has been very helpful to increase video views. “One of the differentials in my work is that I research my client’s competitors on Youtube and identify the VidIQs (Youtube keyword tags) they have been using on their videos so we can use competitive tagging in our content description and video title. This allows the content I produce for the client to show up when people search for this specific hashtag on Youtube,” he explains. Blake’s team is also using Google Trends, a website that analyzes the popularity of top search queries in Google Search across various regions and languages. It’s a great tool to find out how often a search term is entered into Google’s search engine, compare that to total search volume, and learn how search trends varied within a certain interval of time.
When asked what would be the last thing he would recommend to video makers wanting to boost their video views on Social Media, Blake had no hesitation in choosing captions. “Social media feeds are often very crowded, fast-moving, and competitive. Nobody has time to open the video as full screen, turn the sound on and watch the whole thing, they often watch the videos without sound, and if the captions are not there then your message will not get through. And Transcriptive makes captioning a very easy process,” he says.
It’s been 5 years since we released Flicker Free, and we can say for sure that flickering from artificial lights is still one of the main reasons creatives download our flicker removal plugin. From music videos and reality-based videos to episodics on major networks, from small productions to feature-length films, we’ve seen strobing caused by LED and fluorescent lights. It happens all the time and we are glad our team could help fix flickering and see those productions look their best as they get distributed to the public.
Planning a shoot so you can have control of your camera settings, light setup, and color balance is still definitely the best way to film, no matter what type of videos you are making. However, flickering is a difficult problem to predict and sometimes we just can’t see it happening on set. Maybe it was a light way off in the background, or an old fluorescent that seemed fine on the small on-set monitor but looked horrible on the 27″ monitor in the edit bay.
Of course, in a perfect world we would take our time to shoot a few minutes of test footage, use a full size monitor to check what the footage looks like, match the frame rate of the artificial light to the frame rate of the camera, and make sure the shutter speed is a multiple/division of the AC frequency of the country we are shooting in. Making absolutely sure the image looks sharp and is free of flicker! But we all know this is often not possible. In these situations, post-production tools can save the day and there’s nothing wrong with that!
Travel videos are the perfect example of how sometimes we need to surrender to post-production plugins to have a high-quality finished video. Recently, Handcraft Creative co-owner Raymond Friesen shot beautiful images from pyramids in Egypt. He was fascinated by the scenery but only had a Sony A73 and a 16-70mm lens with him. After working on set for 5 years, with very well planned shoots, he knew the images wouldn’t be perfect but decided to film anyways. Yes, the end result was lots of flicker from older LED lights in the tombs. Nothing that Flicker Free couldn’t fix in post. Here’s a before and after clip:
Spontaneous filmmaking is certainly more likely to need post-production retouches, but we’ve also seen many examples of scripted projects that need to be rescued by Flicker Free. Filmmaker Emmanuel Tenenbaum talked to us about two instances where his extensive experience with short films couldn’t stop LED flicker from showing up in his footage. He purchased the plugin a few years ago for “I’m happy to see you”, and used it again to finish and distribute Two Dollars (Deux Dollars), a comedy selected by 85 festivals around the world, winner of 8 awards, broadcast on a dozen TV channels worldwide, and chosen as Vimeo Staff Pick Premiere of the week. Curious why he got flicker while filming Two Dollars (Deux Dollars)? Tenenbaum talked to us about tight deadlines and production challenges in this user story!
Those are just a few examples of how flickering from artificial lights couldn’t be avoided. Our tech support team often receives footage from music videos, marketing commercials, and sports, and seeing Flicker Free remove very annoying, sometimes difficult, flicker in post has been awesome. We posted some other user story examples on our website, so check them out! And if you have some awful flickering footage that Flicker Free helped fix, we’d love to see it and give you a shout-out on our Social Media channels. Email email@example.com with a link to your video clip!
The struggle of making documentary films nowadays is real. Competition is high, and budget limitations can stretch a 6-year deadline into a 10-year-long production. To make a movie you need money. To get the money you need decent, and sometimes edited, footage to show to funding organizations and production companies. And decent footage, well-recorded audio, and edited pieces cost money to produce. I’ve been facing this problem myself and discovered through my work at Digital Anarchy that finding an automated tool to transcribe footage can be instrumental in making small, low budget documentary films happen.
In this interview, I talked to filmmaker Chuck Barbee to learn how Transcriptive is helping him edit faster and discussed some tips on how to get started with the plugin. Barbee has been in the Film and TV business for over 50 years. In 2005, after an impressive career in the commercial side of the business, he moved to California’s Southern Sierras and began producing a series of personal “passion” documentary films. His projects are very heavy on interviews, and the transcribing process he used throughout his career was no longer effective for managing his productions.
Barbee has been using Transcriptive for a month but already considers the plugin a game-changer. Read on to learn how he is using the plugin to make a long-form documentary about the people who created what is known as “The Bakersfield Sound” in country music.
DA: You have worked in a wide variety of productions throughout your career. Besides co-producing, directing, and editing prime-time network specials and series for Lee Mendelson Productions, you also worked as Director of Photography for several independent feature films. In your opinion, how important is the use of transcripts in the editing process?
CB: Transcripts are essential for editing long-form productions because they allow producers, editors, and directors to go through the footage, get familiar with the content, and choose the best bits of footage as a team. Although interview-oriented pieces are more dependent on transcribed content, I truly believe transcripts are helpful no matter what type of motion picture production you are making.
On most of my projects, we always made cassette tape copies of the interviews, then had someone manually transcribe them and print hard copies. With film projects, there was never any way to have a time reference in the transcripts, unless you wanted to do that manually. With video it was easier to make time-coded transcripts, but both of these methods were time-consuming and relatively expensive labor-wise. This is the method I’ve used since the late ’60s, but the sheer volume of interviews on my current projects and the awareness that something better probably exists with today’s technology prompted me to start looking for automated transcription solutions. That’s when I found Transcriptive.
DA: And what changed now that you are using Artificial Intelligence to transcribe your filmed interviews in Premiere Pro?
CB: I think Transcriptive is a wonderful piece of software. Of course, it is only as good as the diction of the speaker and the clarity of the recording, but the way the whole system works is perfect. I place an interview on the editing timeline, click transcribe and in about 1/3 of the time of the interview I have a digital file of the transcription, with time code references. We can then go through it, highlighting sections we want, or print a hard copy and do the same thing. Then we can open the digital version of the file in Premiere, scroll to the sections that have been highlighted, either in the digital file or the hard copy, click on a word or phrase and then immediately be at that place in the interview. It is a huge time saver and a game-changer.
The workflow has been simplified quite a bit, the transcription costs are down, and the editing process has sped up because we can search and highlight content inside of Premiere or use the transcripts to make paper copies. Our producers prefer to work from a paper copy of the interviews, so we use that TXT or RTF file to make a hard copy. However, Transcriptive can also help to reduce the number of printed materials if a team wants to do all the work digitally, which can be very effective.
DA: What makes you choose between highlighting content in the panel and using printed transcripts? Are there situations where one option works better than the other?
CB: It really depends on producer/editor preference. Some producers might want a hard copy because they prefer that to working on a computer. It really doesn't matter much from an editor's point of view, because it is no problem to scroll through the text in Transcriptive to find the spots that have been highlighted on the hard copy. All you have to do is look at the timecode next to the highlighted parts of the hard copy and then scroll to that spot in Transcriptive. Highlighting in Transcriptive means you are tying up a workstation, with Premiere, to do that. If you only have one editing workstation running Premiere, then it makes more sense to have someone do the highlighting with a printed hard copy, or on a laptop or any other computer which isn't running Premiere.
DA: You mentioned the AI transcription is not perfect, but you would still prefer it to paying for human transcripts or transcribing the interviews yourself. Why do you think the automated transcripts are a better solution for your projects?
CB: Transcriptive is amazingly accurate, but it is also quite "literal" and will transcribe what it hears. For example, if someone named "Artie" pronounces his name "RD", that's what you'll get. Also, many of our subjects have moderate to heavy accents, and that does affect accuracy. Another thing I have noticed is that, when there is a clear difference between the sound of the subject and the interviewer, Transcriptive separates them quite nicely. However, when they sound alike, it can confuse them. When multiple voices speak simultaneously, Transcriptive also has trouble, but so would a human.
My team needs very accurate transcripts because we want to be able to search through 70 or more transcripts, looking for keywords that are important. Still, we don't find the transcription mistakes to be a problem. Even if you have to go through the interview when it comes back to make corrections, it is far simpler and faster than the manual method and cheaper than the human option. Here's what we do: right after the transcripts are processed, we go through each transcript with the interview playing along in sync, making corrections to spelling or phrasing or whatever, especially with keywords such as names of people, places, themes, etc. It doesn't take too much time, and my tip is to do it right after the transcripts are back, while you are watching the footage to become familiar with the content.
DA: Many companies are afraid of incorporating Transcriptive into an ongoing project workflow. What was it like to start using our transcription plugin on a long-form documentary film right away?
CB: We have about 70 interviews of anywhere from 30 minutes to one hour each. It is a low-budget project, being done by a non-profit called "Citizens Preserving History". Because of budget limitations, the producers were originally going to try to use timecode-window DVD copies of the interviews to make notes about which parts to use. They thought the cost of manually typed transcriptions was too much. But as they got into the process, they began to see that typed transcripts were going to be the only way to go. Once we learned about Transcriptive and installed it, it only took a couple of days to do all 70 interviews, and the cost, at 12 cents per minute, is small compared to manual methods.
Transcriptive is very easy to use, and it honestly took almost no time for me to figure out the workflow. The downloading and installation process was simple and direct, and the tech support at Digital Anarchy is awesome. I've had several technical questions, and my phone calls and emails have been answered promptly, by cheerful, knowledgeable people who speak my language clearly and really know what they are doing. They can certainly help quickly if people feel lost or something goes wrong, so I would say do yourself a favor and use Transcriptive in your project!
Here’s a short version of the opening tease for “The Town That Wouldn’t Die”, Episode III of Barbee’s documentary series:
Recently, an increasing number of Transcriptive users have been requesting a way to use After Effects to create burned-in subtitles from SRTs generated by Transcriptive. This got us anarchists excited about making a Free After Effects SRT Importer for Subtitling And Captions.
Captioning videos is more important now than ever before. With the growth of mobile and social media streaming, YouTube and Facebook videos are often watched without sound, and subtitles are essential to keep those videos watchable and retain your audience. In addition, the Federal Communications Commission (FCC) has implemented rules for online video that require subtitles, so people with disabilities can fully access media content and actively participate in the lives of their communities.
As a consequence, a lot of companies have style guides for their burned-in subtitles and/or want to do something more creative with the subtitles than what you get with standard 608/708 captions. I mean, how boring is white, monospaced text on a black background? After Effects users can do better.
While Premiere Pro does allow some customization of subtitles, creators can get greater customization via After Effects. Many companies have style guides or other requirements that specify how their subtitles should look. After Effects can be an easier place to create these types of graphics. However, it doesn’t import SRT files natively so the SRT Importer will be very useful if you don’t like Premiere’s Caption Panel or need subtitles that are more ‘designed’ than what you can get with normal captions. The script makes it easy to customize subtitles and bring them into Premiere Pro. Here’s how it works:
1. Download the SRT Importer from our website.
2. Unzip it.
3. Copy the script into After Effects' ScriptUI Panels folder:
Windows: C:\Program Files\Adobe\Adobe After Effects CC 2019\Support Files\Scripts\ScriptUI Panels
Mac: /Applications/Adobe After Effects CC 2019/Scripts/ScriptUI Panels
4. Restart AE. The panel will show up in After Effects under the Window menu as Transcriptive_Caption.
5. Create a new AE project with nothing in it. Open the panel and set the parameters to match your footage (frame rate, resolution, etc). When you click Apply, it’ll ask for an SRT file. It’ll then create a Comp with the captions in it.
6. Select the text layer and open the Character panel to set the font, font size, etc. Feel free to add a drop shadow, bug, or other graphics.
7. Save that project and import the Comp into Premiere (Import the AE project and select the Comp). If you have a bunch of videos, you can run the script on each SRT file you have and you’ll end up with an AE project with a bunch of comps named to match the SRTs (currently it only supports SRT). Each comp will be named: ‘Captions: MySRT File’. Import all those comps into Premiere.
8. Drop each imported comp into the respective Premiere sequence. Double-check that the captions line up with the audio (same as you would when importing an SRT into Premiere). Then queue the different sequences up in AME and render away. (And keep in mind it's beta and doesn't create the black backgrounds yet.)
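For the curious, here's what the script is actually reading: an SRT file is just numbered blocks of plain text, each with a start/end timecode line. Below is a minimal sketch of that format logic in Python, purely for illustration; it's not the importer's actual code (which is an After Effects script), and the file name is hypothetical.

    import re

    # Matches SRT timecode lines like "00:01:02,500 --> 00:01:05,000"
    TIMECODE = re.compile(
        r"(\d+):(\d+):(\d+)[,.](\d+)\s*-->\s*(\d+):(\d+):(\d+)[,.](\d+)")

    def to_seconds(h, m, s, ms):
        return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

    def parse_srt(text):
        """Yield (start, end, caption) tuples, times in seconds."""
        for block in text.strip().split("\n\n"):
            lines = block.strip().splitlines()
            if len(lines) < 3:
                continue
            m = TIMECODE.search(lines[1])  # line 0 is the block number
            if not m:
                continue
            g = m.groups()
            yield to_seconds(*g[:4]), to_seconds(*g[4:]), "\n".join(lines[2:])

    with open("MySRTFile.srt", encoding="utf-8") as f:
        for start, end, caption in parse_srt(f.read()):
            print(f"{start:8.3f} --> {end:8.3f}  {caption}")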
Although especially beneficial to Transcriptive users, this free After Effects SRT Importer for Subtitling And Captions will work with any SRT from any program. It's definitely easier than all the steps above make it sound, and it is available for all and sundry on our website. Give it a try and let us know what you think! Contact: firstname.lastname@example.org
When cutting together a documentary (or pretty much anything, to be honest), you don’t usually have just a single clip. Usually there are different clips, and different portions of those clips, here, there and everywhere.
Our transcription plugin, Transcriptive, is pretty smart about handling all this. So in this blog post we'll explain what happens if you have total chaos on your timeline, with cuts and clips scattered about willy-nilly.
If you have something like this, with cuts from the same clips scattered out of order on the timeline:
Transcriptive will only transcribe the portions of the clips necessary. Even if the clips are out of order. For example, the ‘Drinks1920’ clip at the beginning might be a cut from the end of the actual clip (let’s say 1:30:00 to 1:50:00) and the Drinks cut at the end might be from the beginning (e.g. 00:10:00 to 00:25:00).
If you transcribe the above timeline, only 10:00-25:00 and 1:30:00-1:50:00 of Drinks1920.mov will be transcribed.
If you Export>Speech Analysis, select the Drinks clip, and then look in the Metadata panel, you’ll see the Speech Analysis for the Drinks clip will have the transcript for those portions of the clip. If you drop those segments of the Drinks clip into any other project, the transcript comes along with it!
The downside to only transcribing the portion of the clip on the timeline is, of course, that the entire clip doesn't get transcribed. That's not a problem for this project and this timeline, but if you want to use the Drinks clip in a different project, the segment you choose to use (say 00:30:00 to 00:50:00) may not have been transcribed.
However, if you drop the clip into another sequence, transcribe a time span that wasn't previously transcribed, and then Export>Speech Analysis, that new transcription will be added to the clip's metadata. It wasn't always this way, so make sure you're using Transcriptive v1.5.2. In previous versions of Transcriptive, if you Export>Speech Analysis to a clip that already has part of a transcript in SA, it'll overwrite any transcripts already there.
So feel free to order your clips any way you want. Transcriptive will make sure all the transcript data gets put into the right places. AND… make sure to Export>Speech Analysis. This will ensure that the metadata is saved with the clip, not just your project.
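Conceptually, figuring out what to transcribe is an interval-merging problem: gather the in/out ranges of every cut of a given clip (in clip time), merge any overlaps, and transcribe only the merged spans. Here's a rough Python sketch of that idea; it's not Transcriptive's actual implementation, just the shape of the logic:

    def merge_spans(spans):
        """Merge overlapping (in, out) ranges, in seconds of clip time,
        so no stretch of the source clip is transcribed twice."""
        merged = []
        for start, end in sorted(spans):
            if merged and start <= merged[-1][1]:
                merged[-1][1] = max(merged[-1][1], end)  # extend the previous span
            else:
                merged.append([start, end])
        return [tuple(span) for span in merged]

    # The two Drinks1920.mov cuts from the example above, out of timeline order:
    # 1:30:00-1:50:00 used at the head of the sequence, 10:00-25:00 at the tail.
    cuts = [(5400, 6600), (600, 1500)]
    print(merge_spans(cuts))  # [(600, 1500), (5400, 6600)] -- only these get transcribed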
Vertical video is here to stay. It still makes me cringe a bit when I see people filming in portrait. Since my early video journalism classes back in Brazil, shooting in landscape was a set rule, and it has always felt natural. Nowadays, however, the reality is that sooner or later a client will ask you to shoot and edit high-quality videos for their social media pages. And social media channels are mainly accessed through smartphones and tablets, which means posting portrait videos will be essential to engage and build a strong audience.
Shooting vertical is easy when you just want to post some footage of your weekend fun, but it requires a change of perspective when the goal is to produce, shoot, and edit professional videos. In that case, it's important to plan for a vertical aspect ratio from the beginning of the process. But what happens when your production is meant to screen across different platforms and also needs to fit vertical aspect ratio requirements? In this case, shooting 4K gives you a lot of flexibility in post.
Most social video is posted at HD resolution, so why 4K? Cropping horizontal video to fit a vertical screen usually leads to very pixelated, low-quality footage. When your frames need to be taller than they are wide, your standard 16:9 frame will need to be dramatically resized to fit the 9:16 smartphone screen, and regular HD resolution won't keep the image sharp and clean. Shooting 4K gives you extra pixels to work with and makes it easy to reposition the frame in post as you wish.
In addition to having more room for reframing, if your original footage has quadruple the resolution you can zoom in cleanly, since you have a much better source to work with. This is a huge advantage because vertical video is all about showing detail so you can make a deeper connection with your audience. 4K gives you the flexibility to adjust efficiently to vertical and square formats, and still preserves the option of a broader image of your subject in our beloved 16:9 standard film and television format.
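The arithmetic backs this up. A 9:16 crop taken at full frame height keeps only about a third of the frame's width, so an HD source comes up short for a 1080x1920 delivery while UHD clears it comfortably. A quick back-of-the-envelope check in Python:

    def vertical_crop(width, height):
        """Size of a 9:16 crop taken at full frame height."""
        return int(height * 9 / 16), height

    for name, w, h in [("HD", 1920, 1080), ("UHD 4K", 3840, 2160)]:
        cw, ch = vertical_crop(w, h)
        fits = "downscales cleanly" if ch >= 1920 else "must be upscaled ~1.8x"
        print(f"{name}: 9:16 crop is {cw}x{ch} -> {fits} for 1080x1920 delivery")
    # HD:     crop is  607x1080 -> has to be blown up, hence the mush
    # UHD 4K: crop is 1215x2160 -> plenty of pixels to reframe and stay sharp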
Of course, you can always just upload a horizontal video to Instagram or Snapchat, but don't expect your audience to take the time to turn their phones around just to watch it. Chances are they will keep holding their phone with one hand and carelessly watch your footage in a small window across the screen. Adjusting to a 9:16 aspect ratio clearly requires a change of perspective and demands that we rethink the way we produce, shoot, and edit video. But isn't that what film school is always trying to teach us?
Formats are changing, vertical streaming is a very strong distribution method, and mobile filmmaking is growing every day. It’s up to us, video makers, to reflect on the changes and find a balance between adjusting to our audiences and not losing image quality. I don’t believe vertical video will ever replace landscape aspect ratios, but I do think it is a solid format for short internet videos so let’s take advantage of it and get ready for the next challenge.
Recently our CEO Jim Tierney invited me to start a podcast for Digital Anarchy. I have a journalism background, and at first the idea did not sound too bad: it would actually be awesome to take the time to chat with industry folks on a regular basis and be paid for it. The challenge began when he said I would do a video podcast, interviewing all these awesome people on camera.
It may sound silly to some people, but the idea of watching myself on camera terrifies me. Believe it or not, to this day I have not watched a video interview I gave at NAB last April. I have only listened to it, and noticing my accent in each answer was enough to make me skip the image part. Since the day Jim invited me to start the "videocast", I have been trying to understand my fear of being on camera and my relationship with my own image. As a media professional, why can't I look at myself on the screen? Digging into that question brought unexpected answers and the need to talk about a problem every woman faces at least once in her life, if not all the time: beauty standards.
Being skinny has always been a prerequisite for being beautiful in my culture. It is difficult, painful, and traumatizing to grow up in Brazil as a not-so-skinny girl. If you are overweight, it means you are also sedentary, unhealthy, and unattractive by proxy. And believe me, you do not need to have much fat to be considered overweight in Brazil. My curly hair also did not help. Although I am from Salvador, which has the largest population of African descent in Brazil, curly hair was not accepted until very recently. I grew up straightening my hair with chemicals and only stopped doing that 4 years ago. It is hard to admit and think back, but looking at my graduation pictures from 10 years ago, looking at the popular girls at school, I realize I was just trying to belong.
I always knew most of my insecurities came from dissatisfaction with the way I look, but I also learned very early on that not feeling pretty does not mean I am not pretty. What it means is that society sets unachievable beauty standards for women, and that I must fight that daily if I want to be productive and help minimize the harm our industry has caused to women. This was enough to deal with my own insecurity and keep me going. What I didn't realize is that it wasn't enough to solve the problem.
Every day the media reminds you of what it means to be beautiful to society: tall, skinny, and mostly white. Black, Latina, and Middle Eastern women are now accepted; they just need to be skinny. It's an old and well-known problem, and although a lot of women are freeing themselves from it, most of us still compare ourselves to the women we see on TV. In my case, I started to notice that those intangible standards can impact not only my eating and exercise habits, what I wear, and how I wear the clothes I buy; they can also influence my behavior and stop me from growing professionally if I don't face them.
What we can do to minimize the harm our industry has already caused to women is clear to me: we must stand up and fight for inclusion, equal rights, and full access to every job position available in the industry. We must include all body types in commercials, magazines, and TV shows. We must have women featured not only as personal-assistant AI voices, but also coding and training the AI technology. However, for those who are already aware of this or working on the big picture, I ask: what can we do to not only free other women but truly free ourselves, and stop silently shaming our own images? I don't fully know the answer, but I will start by producing, editing, and hosting the Digital Anarchy podcast. It will be incredibly difficult, but I can't wait to discuss media-making with you all. Stay tuned! More info coming up soon.
Releasing new products is awesome, but to me, the best part of working for a video/photo plugin company is to see how our clients are using our products day-to-day. From transcription to flicker removal and skin retouching, content creators all over the world are using plugins to create better content and images. There are so many talented content creators making cool stuff out there!
This week we talked to Margarita Monet, lead singer of Edge of Paradise. The band (Dave Bates, guitars; David Ruiz, guitars; Vanya Kapetanovic, bass; Jimmy Lee, drums) has been taking advantage of visual effects to enrich their music and create unique videos. In this interview, Margarita discusses how visual effects are helping to shape Edge of Paradise's identity and explains how she has been using Beauty Box Video to improve the image quality of her videos.
Digital Anarchy: How would you describe the Edge of Paradise music and style?
Monet: Our music has evolved over the years. I would say we started with traditional hard rock and heavy metal, influenced by classic bands like Black Sabbath and Iron Maiden. But our music evolved into something more like cinematic hard rock with an industrial edge. I incorporated the piano and keyboard, which gave some songs a symphonic feel. Our music is very dynamic, with blood-pumping drums and epic choruses driven by heavy guitar riffs. But we also have very melodic and dynamic piano ballads. The upcoming album Universe really showcases what Edge Of Paradise is all about, and we are so excited to share this unique sound we created!
Digital Anarchy: Since the very beginning, your music videos have been full of visual effects. Where do all the VFX ideas come from? Are they mostly done in post-production?
Monet: Most of the visual effects we actually tried to capture on camera and enhance in post, except for one of the lyric videos (Dust To Dust), which was all done in After Effects.
Dust to Dust, 2017:
Usually, ideas came from me, and whoever we were working with helped us bring them to life. We've had to get very creative playing with light and props and building the settings. And as the band grows, our videos get more and more elaborate and we all get more creative. We recently released a music video we shot in Iceland (Face of Fear); that one was directed by Val Rassi and edited by Robyn August. No visual effects there, just scenery captured by an amazing drone pilot, Darren LaFreniere!
Face of Fear, 2019:
Digital Anarchy: How long does it usually take to produce your videos? Is the whole band always involved in each stage of production?
Monet: Depends on the video. Some take about a month, where I come up with an idea and location/setting and we shoot it. Some videos take longer with a lot of planning and it’s a group effort. And there is always something we have to do in between, whether it’s playing shows or touring. Filming usually is a 1-2 day shoot, and we allow about 1-2 months for editing to be done.
We plan as much as possible and try to create beautiful shots for each take. However, things don't always go as planned, or we can't achieve the perfect look we want. That's when visual effects come in handy. Recently we shot a live video of an acoustic version of one of our songs. It was shot in a recording studio and we had some limitations with lighting. I was searching for something I could do to polish up the look and came across Digital Anarchy. 4K cameras create a very high-quality image where all the details are visible, so we decided to try Beauty Box Video. It is such a great tool to polish up the look! Extremely effective and time-efficient.
Digital Anarchy: How is Beauty Box helping you to achieve the look you want on your music videos?
Monet: We put so much effort into creating the settings and the “world” of the video that it’s only expected to have everything look polished and coherent. Sometimes we might have this great shot, but one of our faces looks shiny, or the light is not completely flattering. Beauty Box can fix those issues and allow us to use the shot we want!
Digital Anarchy: What was your first music video as a band and what do you think has changed so far?
Monet: Our first video was Mask; it sounds and looks like a completely different band. We had to start somewhere. It's a well-done video; we had probably the largest crew working on that to date, over 10 people, and we learned a lot from it! It was also a different lineup, so the band was still evolving. But it does not even come close to what we look and sound like now!
Digital Anarchy: Would you say the visual effects applied to photos and videos nowadays are part of the band’s identity?
Monet: Yes, we want to transport people to another world, and we want to do that in our live show as well. That is why we are building our stage show to reflect the imagery of the band when we start touring in support of the upcoming album Universe. Our vision from the beginning was always larger than life so I would say it’s a part of our identity.
I want our content to make a big impact visually. We put so much time and effort into our songs to make sure all our music, from songwriting to production, is the best it can be. We have to do the same with video! And now we can put more time and effort into creating videos that tell great stories; that are visually stunning and are of the highest quality. That is essential to keeping the band growing.
I think the fact that we do have quite a few videos, not just music videos, but promo videos as well, helped us keep building momentum. Especially today, people expect that from you. Being a newer band, especially in the beginning, it was a big challenge and I didn’t know much about video creation, so I had to learn very fast.
Digital Anarchy: Every member of the band is somehow connected to other art forms besides music. How do you think this impacts the aesthetics of the band now?
Monet: I think these days, being in a band is not just about making music, we must create a world that people will want to be a part of. And I love that, I love the visual aspect of it, I love creating a stage show, creating music videos. I make a lot of graphics and art for the band as well, and in a way that helps me with the songwriting, because I can really visualize the world I’m creating. We have a great collection of people, all their skills and ideas come into play when we evolve our world!
Digital Anarchy: After producing and editing so many music videos, what is your favorite visual effect?
Monet: For the last video we worked on with Nick Peterson, he created a really cool effect where he filmed us at different playback speeds/frame rates, which gave certain parts of the video more of a static/robotic feel, while other parts are smooth slow motion. It created a really cool look and gave the video the right dynamics and motion, flowing right with the song. Other effects I've liked in the past include playing with light flares, and an earthquake effect is also great for music videos!
Dave, the rest of the band members, and I are very hands-on nowadays. We have a smaller 2-5 person crew, which helps everything run more smoothly and efficiently. Most of the time we have 1 or 2 days to shoot, and as the videos get more elaborate, we must work fast and get very creative. On the last video we shot with Nick Peterson (Universe), we captured so much in 1 day. It's great to work with people who understand how to maximize the time to capture what we need to achieve the vision!
The trailer for Universe is not ready yet, but here is a sneak peek!
With a solid line-up, Edge of Paradise is working on new music videos and getting ready to release their new album, Universe. Check their website to learn more!
Are you a content creator using Digital Anarchy plugins to produce video materials? Get in touch! We would love to learn more about your work and spread the word.
Unless you've been living under a rock, you know it's March Madness… time for the NCAA Basketball Tournament. These are actually my favorite two weekends of sports all year. I'm not a huge sports guy, but watching all the single-elimination games, rooting for underdogs, the drama, players putting everything they have into these single games… it's really a blast. All the good things about sports.
It's also the time of year that flicker drives me a little crazy. One of the downsides of developing Flicker Free is that I start to see flicker everywhere it happens. And it happens a lot during the NCAA tournament, especially in slow-motion shots. Now, I understand that those happen during live games, and playing them back immediately is more important than removing some flicker. Totally get it.
However, for human interest stories recorded days or weeks before the tournament? Slow motion shots used two days after they happened? C’mon! Spend 5 minutes to re-render it with Flicker Free. Seriously.
Here’s a portion of a story about Gonzaga star Rui Hachimura:
Most of the shots have the camera/light sync problem that Flicker Free is famous for fixing. The original has the rolling-band flicker that's the symptom of this problem; the fix took all of three minutes. I applied Flicker Free, selected the Rolling Bands 4 preset (this is always the best preset to start with), and rendered it. It looks much better.
So if you know anyone at the NCAA in post production, let them know they can take the flicker out of March Madness!
We’ve released PowerSearch 1.0 for Premiere Pro! It’s a new part of the Transcriptive suite of tools that’s essentially a search engine for Premiere letting you search clips, sequences, markers, metadata and captions all in one place.
It streamlines your editing by allowing you to quickly search hours of video for words or phrases. While it works best when used in conjunction with Transcriptive, it plays well with any service that can get transcripts or SRTs (captions) into Premiere Pro. It’s all about helping you find data, we don’t care where the data comes from.
Like any search engine, it displays a list of results. In most cases, clicking on a result takes you to the exact moment the words were spoken, in either the Source panel (clips) or the Timeline panel (sequences). If you've ever been asked to find a 15-second quote and had to dig through 50 hours of footage to find it, you know how valuable a time-saving tool this is.
I decided to try Transcriptive way before I became part of the Digital Anarchy family. Just like any other aspiring documentary filmmaker, I knew relying on a crew to get my editing started was not an option. Without funding you can’t pay a crew; without a crew you can’t get funding. I had no money, an idea in my head, some footage shot with the help of friends, and a lot of work to do. Especially when working on your very first feature film.
Besides being an independent Filmmaker and Social Media strategist for DA, I am also an Assistive Technology Trainer for a private company called Adaptive Technology Services. I teach blind and low vision individuals how to take advantage of technology to use their phones and computers to rejoin the workforce after their vision loss. Since the beginning of my journey as an AT Trainer – I started as a volunteer 6 years ago – I have been using my work to research the subject and prepare for this film.
My movie is about the relationship between the sighted and non-sighted communities. It seeks to establish a dialog between people with and without visual disabilities so we can come together to demystify disabilities to those without them. I know it is an important subject, but right from the beginning of this project I learned how hard it is to gather funds for any disability-related initiative. I had to carefully budget the shoots and define priorities. Paying a post-production crew was not (and still is not) possible. I have to write and cut samples on my own for now. Transcriptive was a way for me to get things moving by myself, so I can apply for grants in the near future, start paying producers, editors, camera operators, and sound designers, and get the project going for real. The journey started with transcribing the interviews. Transcriptive did a pretty good job transcribing the audio from the camera, and accuracy got even better when transcribing audio from the mic.
The idea of getting accurate automated transcripts brought a smile to my face. But could Artificial Intelligence really get the job done for me? I never believed so, and I was right. The accuracy for English interviews was pretty impressive; I barely had to do any editing on those. The situation changed as soon as I tried transcribing audio in my native language, Brazilian Portuguese. The AI transcription didn't just get a bit flaky; it was completely unusable, so I decided not to waste more time and started doing my own manual transcriptions.
I have been using Speechmatics for most of my projects because its accuracy with English is considerably higher than Watson's. However, after trying to transcribe Portuguese for the first time, it occurred to me that Speechmatics actually offers Portuguese from Portugal, while Watson transcribes Portuguese from Brazil. I decided to give Watson a try, but the transcription was not much better than the one I got from Speechmatics.
It is true that the Brazilian Portuguese footage I was transcribing consisted of b-roll clips recorded with a Røde mic mounted on top of my DSLR; they were not well-mic'd sit-down interviews. The clips do have decent audio, but they also have some background noise, which does not help foreign-language speech-to-text conversion. At the time I had a deadline to meet and was not able to record better audio and compare Speechmatics' and Watson's Portuguese transcripts. It will be interesting to give it another try, with more time to compare them further and evaluate whether there are advantages to using Watson for my next batch of footage.
Days after my failed attempt to transcribe Brazilian Portuguese with Speechmatics, I went back to the Transcriptive panel for Premiere, found an option to import my human transcripts, gave it a try, and realized I could still use Transcriptive to speed up my video production workflow. I could still save time by letting Transcriptive assign timecode to the words I transcribed, which would be nearly impossible for me to do on my own. The plugin allowed me to quickly find where things were said in 8 hours of interviews. Having the timecode assigned to each word allowed me to easily search the transcript and jump to that point in my video where I wanted to have a cut, marker, b-roll or transition effect applied.
My movie is still in pre-production and my Premiere project is honestly not that organized yet, so the search capability was also a huge advantage. I have been working on samples to apply for grants, which means I have tons of different sequences, multicam sequences, and markers that now live in folders inside of folders. Before I started working for DA, I was looking for a solution to minimize the mess without having to fully organize it or spend too much money, and PowerSearch came to the rescue. Also, being able to edit my transcripts inside of Premiere made my life a lot easier.
Last month, talking to a few film clients and friends, I found out most filmmakers still clean up human transcripts. In my case, I go through the transcripts to add punctuation marks and other cues that remind me how eloquent speakers were in a given phrase. Ellipses, question marks, and exclamation points remind me of the tone in which they spoke, allowing me to get paper cuts done faster. I am not sure ASR technology will start inserting punctuation anytime soon, but it would be very handy for me. Until that is possible, I am grateful Transcriptive now offers a text-editing interface, so I can edit my transcripts without leaving Premiere.
For the movie I am making now, I was lucky enough to have a friend willing to help me get this tedious and time-consuming part of the work done, so I am now exporting all my transcripts to Transcriptive.com. The app will allow us to collaborate on the transcripts. She will be helping me all the way from LA, editing all the transcripts without having to download a whole Premiere project to get the work done.
For the last 14 years I’ve created the Audio Art Tour for Burning Man. It’s kind of a docent led audio guide to the major art installations out there, similar to an audio guide you might get at a museum.
Burning Man always has a different ‘theme’ and this year it was ‘I, Robot’. I generally try and find background music related to the theme. EDM is big at Burning Man, land of 10,000 DJs, so I could’ve just grabbed some electronic tracks that sounded robotic. Easy enough to do. However I decided to let Artificial Intelligence algorithms create the music! (You can listen to the tour and hear the different tracks)
This turned out to be not so easy, so I’ll break down what I had to do to get seven unique sounding, usable tracks. I had a bit more success with AmperMusic, which is also currently free (unlike Jukedeck), so I’ll discuss that first.
Getting the Tracks
The problem with both services was getting unique-sounding tracks. The A.I. has a tendency to create very similar-sounding music. Even if you select different styles and instruments, you often end up with oddly similar music. This problem is compounded by Amper's inability to render more than about 30 seconds of music.
What I found I had to do was let it generate 30 seconds, either randomly or with me selecting the instruments, repeatedly until I got a 30-second sample I liked. At that point I extended it out to about 3 or 4 minutes and turned off all the instruments but two or three. Amper was usually able to render that out. Then I'd turn off those instruments and turn back on another three. Then render that. Rinse, repeat until you've rendered all the instruments.
Now you’ve got a bunch of individual tracks that you can combine to get your final music track. Combine them in Audition or even Premiere Pro (or FCP or whatever NLE) and you’re good to go. I used that technique to get five of the tracks.
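If you'd rather script the final combine than open a DAW, here's a rough sketch using Python's pydub library (hypothetical stem file names; assumes pydub and ffmpeg are installed). Any NLE or audio editor does the same job:

    from pydub import AudioSegment  # pip install pydub (needs ffmpeg on the system)

    # Hypothetical stem renders from Amper: same piece, 2-3 instruments per pass.
    stem_files = ["stems_drums_bass.wav", "stems_synth_pads.wav", "stems_lead_fx.wav"]

    stems = [AudioSegment.from_file(path) for path in stem_files]
    mix = stems[0]
    for stem in stems[1:]:
        mix = mix.overlay(stem)  # layer each rendered pass on top of the first

    mix.export("final_track.wav", format="wav")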
Jukedeck didn't have the rendering problem, but it REALLY suffered from the 'sameness' problem. It was tough getting something that sounded truly unique. However, I did get a couple of good tracks out of it.
Problems Using Artificial Intelligence
This is another example of A.I. and machine learning that works… sort of. I could have found seven stock music tracks that I liked much faster (which is what I usually do for the Audio Art Tour). The amount of time I spent messing around with these services was significant. Also, if Jukedeck is any indication, a music track from one of these services will cost as much as a stock music track. Just go to Pond5 to see what you can get for the same price, with a much, much wider variety. I don't think living, breathing musicians have much to worry about. At least for now.
That said, I did manage to get seven unique, cool sounding tracks out of them. It took some work, but it did happen.
As with most A.I./ML, it's difficult to see what the future looks like. There have certainly been a ton of advances, but I think in a lot of cases it's some of the low-hanging fruit. We're seeing that with speech-to-text algorithms in Transcriptive, where they're starting to plateau and cluster around the same accuracy levels. The fruit (accuracy) is now pretty high up and improvements are tough. It'll be interesting to see what it takes to break through that. More data? Faster servers? A new approach?
I think music may be similar. It seems like it’s a natural thing for A.I. but it’s deceptively difficult to do in a way that mimics the range and diversity of styles and sounds that many human musicians have. Particularly a human armed with a synth that can reproduce an entire orchestra. We’ll see what it takes to get A.I. music out of the Valley of Sameness.
Two things stood out at NAB this year: 1) Practically every company exhibiting was talking about A.I.-something.
2) VR seemed to have disappeared from vendor booths.
The last couple years at NAB, VR was everywhere. The Dell booth had a VR simulator, Intel had a VR simulator, booths had Oculuses galore and you could walk away with an armful of cardboard glasses… this year, not so much. Was it there? Sure, but it was hardly to be seen in booths. It felt like the year 3D died. There was a pavilion, there were sessions, but nobody on the show floor was making a big deal about it.
In contrast, it seemed like every vendor was trying to attach A.I. to their name, whether they had an A.I. product or not. Not to mention Google, Amazon, Microsoft, IBM, Speechmatics, and every other big vendor of A.I. cloud services having large booths touting how their A.I. was going to change video production forever.
I've talked before about the limitations of A.I., and I think a lot of what was talked about at NAB was really overpromising what A.I. can do. We spent most of the six months after releasing Transcriptive 1.0 developing non-A.I. features to help make the A.I. portion of the product more useful. The release we're announcing today and the next release coming later this month will focus on getting around A.I. transcripts completely by importing human transcripts.
There's a lot of value in A.I. It's an important part of Transcriptive, and for a lot of use cases it's awesome. There are just also a lot of limitations. It's pretty common that you run into the A.I. equivalent of the Uncanny Valley (a CG character that looks *almost* human but ends up looking unnatural and creepy), where A.I. gets you 95% of the way there but it's more work than it's worth to get the final 5%. It's better to just not use it.
You just have to understand when that 95% makes your life dramatically easier and when it’s like running into a brick wall. Part of my goal, both as a product designer and just talking about it, is to help folks understand where that line in the A.I. sand is.
I also don’t buy into this idea that A.I. is on an exponential curve and it’s just going to get endlessly better, obeying Moore’s law like the speed of processors.
When we first launched Transcriptive, we felt it would replace transcriptionists. We’ve been disabused of that notion. ;-) The reality is that A.I. is making transcriptionists more efficient. Just as we’ve found Transcriptive to be making video editors more efficient. We had a lot of folks coming up to us at NAB this year telling us exactly that. (It was really nice to hear. :-)
However, much of the effectiveness of Transcriptive comes more from the tools that we’ve built around the A.I. portion of the product. Those tools can work with transcripts and metadata regardless of whether they’re A.I. or human generated. So while we’re going to continue to improve what you can do with A.I., we’re also supporting other workflows.
Over the next couple months you're going to see a lot of announcements about Transcriptive. Our goal is to leverage the parts of A.I. that really work for video production by building tools and features that amplify those strengths, like PowerSearch, our new panel for searching all the metadata in your Premiere project, and to build bridges to other technology that works better in other areas, such as importing human-created transcripts.
Should be a fun couple months, stay tuned! btw… if you’re interested in joining the PowerSearch beta, just email us at email@example.com.
Addendum: Just to be clear, in one way A.I. is definitely NOT VR. It’s actually useful. A.I. has a lot of potential to really change video production, it’s just a bit over-hyped right now. We, like some other companies, are trying to find the best way to incorporate it into our products because once that is figured out, it’s likely to make editors much more efficient and eliminate some tasks that are total drudgery. OTOH, VR is a parlor trick that, other than some very niche uses, is going to go the way of 3D TV and won’t change anything.
Chief Executive Anarchist
A.I. is definitely changing how editors get transcripts and search video for content. Transcriptive demonstrates that pretty clearly with text. Searching via object recognition is something that also is already happening. But what about actual video editing?
One of the problems A.I. has is finishing. Going the last 10%, if you will. For example, speech-to-text engines at best have an accuracy rate of about 95%. This is about on par with the average human transcriptionist. For general-purpose recordings, human transcriptionists SHOULD be worried.
But for video editing, there are some differences, which are good news. First, and most importantly, errors tend to be cumulative. If a computer is going to edit a video, at the very least it needs to do the transcription and it needs to recognize the imagery (we'll ignore other considerations like style, emotion, and story for the moment). Speech recognition is at best 95% accurate; object recognition is worse. The more layers of A.I. you stack, the more those errors multiply: 95% speech accuracy on top of, say, 90% object recognition leaves only about 86% of decisions grounded in correct inputs (though in some cases one layer might correct another). While it's possible automation will be able to produce a decent rough cut, these errors make it difficult to see automation replacing most of the types of videos that pro editors are typically employed for.
Secondly, if the videos are being made for humans, frequently the humans don't know what they want. Or at least they're not going to be able to communicate it in such a way that a computer will understand and be able to make changes. If you've used Alexa on an Echo, you can see how well A.I. understands humans. In lots of situations, especially literal ones ("find me the best restaurant"), it works fine; in lots of other situations, not so much.
Many times as an editor, the direction you get from clients is subtle, or you have to read between the lines and figure out what they want. It's going to be difficult to get A.I.s to take the way humans usually describe what they want, figure out what they actually mean, and make those changes.
Third… then you get into the whole issue of emotion and storytelling, which I don't think A.I. will do well anytime soon. The Economist recently had an amusing article where it let an A.I. write the article. It was very good at mimicking the style of The Economist, but when it comes to putting together a coherent narrative… ouch.
It’s Not All Good News
There are already phone apps that do basic automatic editing. These are more for consumers who want something quick and dirty. For most of the type of work professional editors get paid for, it's unlikely that what I've seen from the apps will replace humans any time soon. Although I can see how the tech could be used to create rough cuts and the like.
Also, for some types of videos, wedding or music videos perhaps, you can make a pretty solid case that A.I. will be able to put something together soon that looks reasonably professional.
You need training material for neural networks to learn how to edit videos, and thanks to YouTube, Vimeo, and the like, there is an abundance of it. Do a search for 'wedding video' on YouTube: you get 52,000,000 results. 2.3 million people get married in the US every year, and most of the videos from those weddings are online. I don't think finding a few hundred thousand of those that were done by a professional will be difficult. It's probably trivial, actually.
Same with music videos. There IS enough training material for the A.I.s to learn how to do generic editing for many types of videos.
For people that want to pay $49.95 to get their wedding video edited, that option will be there. Probably within a couple years. Have your guests shoot video, upload it and you’re off and running. You’ll get what you pay for, but for some people it’ll be acceptable. Remember, A.I. is very good at mimicking. So the end result will be a very cookie cutter wedding video. However, since many wedding videos are pretty cookie cutter anyways… at the low end of the market, an A.I. edited video may be all ‘Bridezilla on A Budget’ needs. And besides, who watches these things anyways?
Let The A.I. Do The Grunt Work, Not The Editing
The losers in the short term may be assistant editors. Many of the tasks A.I. is good for… transcribing, searching for footage, etc.… are now typically given to assistants. However, it may simply change the types of tasks assistant editors are given. There's a LOT of metadata that needs to be entered and wrangled.
While A.I. is already showing up in many aspects of video production, it feels like having it actually do the editing is quite a ways off. I can see creating A.I. tools that help with editing: Rough cut creation, recommending color corrections or B roll selection, suggesting changes to timing, etc. But there’ll still need to be a person doing the edit.
Time lapse is always challenging… you've got a high-resolution image sequence that can seriously tax your system. Add Flicker Free on top of that, where we're analyzing up to 21 of those high-resolution images, and you can really slow a system down. So I'm going to go over a few tips for speeding things up in Premiere or other video editors.
First off, turn off Render Maximum Depth and Maximum Quality. Maximum Depth is not going to improve the render quality unless your image sequence is HDR and the format you’re saving it to supports 32-bit images. If it’s just a normal RAW or JPEG sequence, it won’t make much of a difference. Render Maximum Quality may make a bit of difference but it will likely be lost in whatever compression you use. Do a test or two to see if you can tell the difference (it does improve scaling) but I rarely can.
RAW: If at all possible, you should shoot your time lapses in RAW. There are some serious benefits, which I go over in detail in this video: Shooting RAW for Time Lapse. The main benefit is that Adobe Camera RAW automatically removes dead pixels. It's a big f'ing deal and it's awesome. HOWEVER… once you've processed them in Adobe Camera RAW, you should convert the image sequence to a movie or JPEG sequence (using very little compression). It will make processing the time lapse sequence (color correction, effects, deflickering, etc.) much, much faster. RAW is awesome for the first pass; after that it'll just bog your system down.
Nest, Pre-comp, Compound… whatever your video editing app calls it, use it. Don’t apply Flicker Free or other de-flickering software to the original, super-high resolution image sequence. Apply it to whatever your final render size is… HD, 4K, etc.
Why? Say you have a 6000×4000 image sequence and you need to deliver an HD clip. If you apply effects to the 6000×4000 sequence, Premiere will have to process almost TWELVE times the number of pixels it would have to process if you applied them to HD-resolution footage: 24 million pixels vs. about 2 million pixels. This can result in a HUGE speed difference when it comes time to render.
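If you want to sanity-check that claim, the math in Python:

    src = 6000 * 4000   # 24,000,000 pixels per frame in the original sequence
    hd = 1920 * 1080    #  2,073,600 pixels per frame at HD
    print(src / hd)     # ~11.6x the pixels to crunch, per frame, per effect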
How do you Nest?
This is Premiere-centric, but the concept applies to After Effects (pre-compose) or FCP (compound) as well. (The rest of this blog post will be explaining how to Nest. If you already understand everything I’ve said, you’re good to go!)
First, take your original image sequence (for example, 6000×4000 pixels) and put it into an HD sequence. Scale the original footage down to fit the HD sequence.
The reason for this is that we want to control how Premiere applies Flicker Free. If we apply it to the 6000×4000 images, Premiere will apply FF and then scale the image sequence. That’s the order of operations. It doesn’t matter if Scale is set to 2%. Flicker Free (and any effect) will be applied to the full 6000×4000 image.
So… we put the big, original images into an HD sequence and do any transformations (scaling, adjusting the position and rotating) here. This usually includes stabilization… although if you’re using Warp Stabilizer you can make a case for doing that to the HD sequence. That’s beyond the scope of this tutorial, but here’s a great tutorial on Warp Stabilizer and Time Lapse Sequences.
Next, we take our HD time lapse sequence and put that inside a different HD sequence. You can do this manually or use the Nest command.
Now we apply Flicker Free to our HD time lapse sequence. That way FF will only have to process the 1920×1080 frames. The original 6000×4000 images are hidden in the HD sequence. To Flicker Free it just looks like HD footage.
Voila! Faster rendering times!
So, to recap:
Turn off Render Maximum Depth and Maximum Quality
Shoot RAW, but apply Flicker Free to a JPEG sequence/Movie
Apply Flicker Free to the final output resolution, not the original resolution
Those should all help your rendering times. Flicker Free still takes some time to render, none of the above will make it real time. However, it should speed things up and make the render times more manageable if you’re finding them to be really excessive.
Using Transcriptive with multicam sequences is not a smooth process and doesn’t really work. It’s something we’re working on coming up with a solution for but it’s tricky due to Premiere’s limitations.
However, while we sort that out, here’s a workaround that is pretty easy to implement. Here are the steps:
1- Take the clip with the best audio and drop it into its own sequence.
2- Transcribe that sequence with Transcriptive.
3- Now replace that clip with the multicam clip.
4- Voila! You have a multicam sequence with a transcript. Edit the transcript and clip as you normally would.
This is not a permanent solution and we hope to make it much more automatic to deal with Premiere’s multicam clips. In the meantime, this technique will let you get transcripts for multicam clips.
Thanks to Todd Drezner at Cohn Creative for suggesting this workaround.
Artificial Intelligence (A.I.) and machine learning are changing how video editors deal with two common problems: 1) How do you get accurate transcriptions for captions or subtitles? And 2) how do you find something in hours of footage if you don't know exactly where it is?
Getting out of the Transcription Dungeon
Kelley Slagle, director, producer, and editor for Cavegirl Productions, has been working on Eye of the Beholder, a documentary on the artists who created the illustrations for the Dungeons & Dragons game. With over 40 hours of interview footage to comb through, searching it all has been made much easier by Transcriptive, a new A.I. plugin for Adobe Premiere Pro.
Imagine having Google for your video project. Turning all the dialog into text makes everything easily searchable (and it supports 28 languages). Not to mention making it easy to create captions and subtitles.
The Dragon of Time And Money
Using a traditional transcription service for 40 hours of footage, you're looking at a minimum of $2400 and a few days to turn it all around. Not exactly cost- or time-effective, especially if you're on a doc budget. But it's a problem for all of us.
Transcriptive helps solve the transcription problem, along with the problems of searching video and creating captions/subtitles. It uses A.I. and machine learning to automatically generate transcripts with up to 95% accuracy and bring them into Premiere Pro. And the cost? About $4/hour (or much less depending on the options you choose). So, 40 hours is $160 vs. $2400. And you'll get all of it back in a few hours.
Yeah, it’s hard to believe.
Read what these three filmmakers have to say and try the Transcriptive demo out on your own footage. It’ll make it much easier to believe.
“We are using Transcriptive to transcribe all of our interviews for EYE OF THE BEHOLDER. The idea of paying a premium for that much manual transcription was daunting. I am in the editing phase now and we are collaborating with a co-producer in New York. We need to share our ideas for edits and content with him, so he is reviewing transcripts generated by Transcriptive and sending us his feedback and vice versa. The ability to get a mostly accurate transcription is fine for us, as we did not expect the engine to know proper names of characters and places in Dungeons & Dragons.” – Kelley Slagle, Cavegirl Productions
Google Your Video Clips and Premiere Project?
Since everything lives right within Premiere, all the dialog is fully searchable. It’s basically a word processor designed for transcripts, where every word has time code. Yep, every word of dialog has time code. Click on the word and jump to that point on the timeline. This means you don’t have to scrub through footage to find something. Search and jump right to it. It’s an amazing way for an editor to find any quote or quip.
As Kelley says, “We are able to find what we need by searching the text or searching the metadata thanks to the feature of saving the markers in our timelines. As an editor, I am now able to find an exact quote that one of my co-producers refers to, or find something by subject matter, and this speeds up the editing process greatly.”
Joy E. Reed of Oh My! Productions, who's directing the documentary 'Ren and Luca', adds, "We use sequence markers to mark up our interviews, so when we're searching for specific words/phrases, we can find them and access them nearly instantly. Our workflow is much smoother now that we've incorporated the Transcriptive markers into our project. We now keep the Markers window open and can hop to our desired areas without having to flip back and forth between our transcript in a text document and Premiere."
Workflow, Captions, and Subtitles
Captions and subtitles are one of the key uses of Transcriptive. You can use it with Premiere's captioning tool or export many different file formats (SRT, SMPTE, SCC, MCC, VTT, etc.) for use in any captioning application.
"We're using Transcriptive to transcribe both sit-down and on-the-fly interviews with our subjects. We also use it to get transcripts of finished projects to create closed captions/subtitles," says Joy. "We can't even begin to say how useful it has been on Ren and Luca and how much time it saves us. The turnaround time to receive the transcripts is SO much faster than when we sent it out to a service. We've had the best luck with Speechmatics. The transcripts are only as accurate as our speakers – we have a teenage boy who tends to mumble, and his stuff has needed more tweaking than some of our other subjects, but it has been great for very clearly recorded material. The time it saves vs. the time you need to tweak for errors is significant."
Transcriptive is fully integrated into Premiere Pro, so you never have to leave the application or pass metadata and files around. This makes creating captions much easier, allowing you to edit each line while playing back the footage. There are also tools and keyboard shortcuts that make the editing much faster than a normal text editor. You then export everything to Premiere's caption tool and use that to put on the finishing touches and deliver the captions with your media.
Another company doing documentary work is Windy Films. They are focused on telling stories of social impact and innovation, and like most doc makers, they are usually on tight budgets and deadlines. Transcriptive has been critical in helping them tell real stories with real people (with lots of real dialog that needs transcribing).
They recently completed a project for Planned Parenthood. The deadline was incredibly tight. Harvey Burrell, filmmaker at Windy, says, "We were trying to beat the senate vote on the healthcare repeal bill. We were editing while driving back from Iowa to Boston. The fact that we could get transcripts back in a matter of hours instead of a matter of days allowed us to get it done on time. We use Transcriptive for everything. The integration into Premiere has been incredible. We've been getting transcripts done for a long time. The workflow was always clunky, particularly having transcripts in a Word document off to one side. Having the ability to click on a word and have Transcriptive take you there in the timeline is one of our favorite features."
Getting Accurate Transcripts using A.I.
Audio quality matters. The better the recording and the more clearly the talent enunciates, the better the transcript. You can get excellent results, around 95% accuracy, with very well recorded audio. That means your talent is well mic’d, there’s not a lot of background noise, and they speak clearly. Even if you don’t have all that, you’ll still usually get very good results as long as the talent is mic’d. Even accents are fine as long as the speaker is clear. Talent that’s off mic, or crosstalk between speakers, will make the transcript less accurate.
Transcriptive lets you sign up with the speech services directly, allowing you to get the best pricing. Most transcription products hide the service they’re using (they’re all using one of the big A.I. services), marking up the cost per minute to as much as .50/min. When you sign up directly, you get Speechmatics for $0.07/min, and Watson gives you the first 1000 minutes free. (Speechmatics is much more accurate, but Watson can be useful.)
So let’s talk about something that’s near and dear to my heart: Fonts.
I recently discovered Adobe TypeKit. I know…some of you are like… ‘You just discovered that?’.
Yeah, yeah… well, in case there are other folks that are clueless about this bit of the Creative Cloud that’s included with your subscription: it’s a massive font library that can be installed on your Creative Cloud machine, much of which is free (well, included in the cost of CC).
Up until a week ago I just figured it was a way for Adobe to sell fonts. I was mistaken. You find the font you like and, more often than not, you click the SYNC button and, boom… font is installed on your machine for use in Photoshop or After Effects or whatever.
Super cool feature of Creative Cloud that if you’re as clued in as I am about everything CC includes… you might not know about. Now you do. :-) Here’s a bit more info from Adobe.
I realize this probably comes off as a bit of an ad for TypeKit, but it really is pretty cool. I just designed a logo using a new font I found there. And since it’s Adobe, the fonts are really high quality, not like what you find on the free font sites I’ve often relied on in the past.
One of the fun challenges of developing graphics software is dealing with the many, varied video cards and GPUs out there. (actually, it’s a total pain in the ass. Hey, just being honest :-)
There are a lot of different video cards out there and they all have their quirks, which are complicated by the different operating systems and host applications. For example: Apple decides they’re going to more or less drop OpenCL in favor of Metal, which means we have to re-write quite a bit of code; Adobe After Effects and Adobe Premiere Pro handle GPUs differently even though it’s the same API; etc., etc. From the end user side of things you might not realize how much development goes into GPU acceleration. It’s a lot.
The latest release of Beauty Box Video for Skin Retouching (v4.1) contains a bunch of fixes for video cards that use OpenCL (AMD, Intel). So if you’re using those cards it’s a worthwhile download. If you’re using Resolve and Nvidia cards, you’ll also want to download it: there’s a bug with CUDA and Resolve, so use Beauty Box in OpenCL mode until we fix it (probably a few weeks away). Fun times in GPU-land.
Just wanted to give you all some insight on how we spend our days around here and what your hard-earned cash goes into when you buy a plugin. You know, just in case you’re under the impression all software developers do is ‘work’ at the beach and drive Ferraris around. We do have fun, but usually it involves nailing the video card of the month to the wall and shooting paintballs at it. ;-)
We here at Digital Anarchy want to make sure you have a wonderful Christmas and there’s no better way to do that than to take videos of family and colleagues and turn them into the Grinch. They’ll love it! Clients, too… although they may not appreciate it as much even if they are the most deserving. So just play it at the office Christmas party as therapy for the staff that has to deal with them.
Our free plugin Ugly Box will make it easy to do! Apply it to the footage, click Make Ugly, and then make them green! This short tutorial shows you how:
You can download the free Ugly Box plugin for After Effects, Premiere Pro, Final Cut Pro, and Avid here:
One of the challenges with stop motion animation is flicker. Lighting varies slightly for any number of reasons causing the exposure of every frame to be slightly different. We were pretty excited when Bix Pix Entertainment bought a bunch of Flicker Free licenses (our deflicker plugin) for Adobe After Effects. They do an amazing kids show for Amazon called Tumble Leaf that’s all stop motion animation. It’s won multiple awards, including an Emmy for best animated preschool show.
Many of us, if not most of us, that do VFX software are wannabe (or just flat out failed ;-) animators. We’re just better at the tech than the art. (exception to the rule: Bob Powell, one of our programmers, who was a TD at Laika and worked on Box Trolls among other things)
So we love stop motion animation. And Bix Pix does an absolutely stellar job with Tumble Leaf. The animation, the detailed set design, the characters… are all off the charts. I’ll let them tell it in their own words (below). But check out the 30 second deflicker example below (view at full screen as the Vimeo compression makes the flicker hard to see). I’ve also embedded their ‘Behind The Scenes’ video at the end of the article. If you like stop motion, you’ll really love the ‘Behind the Scenes’.
Bix Pix Entertainment is an animation studio that specializes in the art of stop-motion animation, and is known for their award-winning show Tumble Leaf on Amazon Prime.
It is not uncommon for an animator to labor for days, sometimes weeks, on a single stop motion shot, working frame by frame. With this process, it is natural to have some light variation between exposures, commonly referred to as ‘flicker’. There are many factors that can cause the shift in lighting. A studio light (or several) may blow out or flare. Voltage and/or power surges can brighten or dim lights over a long shot. Certain types of lights, poor lighting equipment, camera malfunctions, or incorrect camera settings can all contribute. Sometimes an animator might wear a white t-shirt, unintentionally adding fill to the shot, or accidentally stand in front of a light, casting a shadow from his or her body.
The variables are endless. Luckily, these days compositors and VFX artists have fantastic tools to help remove these unwanted light shifts. Removing unwanted light shifts and flicker is a very important and necessary first step when working with stop-motion footage, unless it’s an artistic decision to leave that tell-tale flicker in there. But that’s a rare choice.
Here at Bix Pix we use Adobe After Effects for all of our compositing and clean-up work. Having used four different flicker removal plugins over the years, we have to say Digital Anarchy’s Flicker Free is the fastest, easiest, and most effective flicker removal software we have come across. And it’s quite affordable, too.
During a season of Tumble Leaf we will process between 1600 and 2000 shots, ranging from 3 seconds up to a couple of minutes in length. That’s an average of about 5 hours of footage per season, almost three times the length of a feature film, all on a tight schedule of less than a year with a small team of ten or so VFX artists and compositors. Nearly every shot has an instance of Flicker Free applied to it as an effect. The plugin is fast, simple to use, and reliable; de-flickering can be done in almost real time.
Digital Anarchy’s Flicker Free has saved us thousands of hours of work and reduced overtime and crunch time delays. This not only saves money but frees up artists to do more elaborate effects that we could not do before due to time constraints, allowing them to focus on making their work stand out even more.
Sharpening video can be a bit trickier than sharpening photos. The process is the same, of course: increasing the contrast around edges, which creates the perception of sharpness.
However, because you’re dealing with 30fps instead of a single image, some additional challenges are introduced:
1- Noise is more of a problem.
2- Video is frequently compressed more heavily than photos, so compression artifacts can be a serious problem.
3- Oversharpening is a problem with stills or video, but with video it can create motion artifacts on playback that are visually distracting.
4- It’s more difficult to mask out areas like skin that you don’t want sharpened.
These are problems you’ll run into regardless of the sharpening method. However, probably unsurprisingly, in addition to discussing solutions using regular tools, we also talk about how our Samurai Sharpen plugin can help with them.
Noise in Video Footage
Noise is always a problem, regardless of whether you’re shooting stills or video. However, with video the noise changes from frame to frame, making it a distraction to the viewer if there’s too much of it or it’s too pronounced.
Noise tends to be much more obvious in dark areas, as you can see below where it’s most apparent in the dark, hollow part of the guitar:
Using a mask to protect the darker areas makes it possible to increase the sharpening for the rest of the video frame. Samurai Sharpen has masks built in, so it’s easy in that plugin, but you can do this manually in any video editor or compositing program using keying tools, a mask, and compositing effects.
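If you want a feel for what that mask is doing, here’s a minimal sketch in Python/NumPy (illustrative only; the thresholds and blur radius are made-up values, not any plugin’s defaults): build a mask from luminance so the shadows get little or no sharpening, then blend the sharpened frame back through that mask.

```python
# Sketch: protect dark (noisy) areas from sharpening with a luma mask.
# Illustrative only; thresholds and radius are made-up values.
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen_protect_darks(frame, amount=1.0, radius=2.0,
                          dark_floor=0.15, dark_knee=0.35):
    """frame: float32 RGB array with values in [0, 1]."""
    blurred = gaussian_filter(frame, sigma=(radius, radius, 0))
    sharpened = frame + amount * (frame - blurred)  # basic unsharp mask

    # Luma mask: 0 in the shadows (no sharpening), ramping up to 1
    # once the luminance passes the knee.
    luma = frame @ np.array([0.2126, 0.7152, 0.0722])
    mask = np.clip((luma - dark_floor) / (dark_knee - dark_floor), 0.0, 1.0)

    # Dark areas keep the original; brighter areas get the sharpened version.
    return np.clip(frame + mask[..., None] * (sharpened - frame), 0.0, 1.0)
```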
Many consumer video cameras, including GoPros and some drone cameras, heavily compress footage, especially when shooting 4K.
It’s difficult, and sometimes impossible, to sharpen footage like this. The compression artifacts become very pronounced, since they have edges just like normal features do. Unlike noise, the artifacts are visible in most areas of the footage, although they tend to be more obvious in areas with lots of detail.
In Samurai you can increase the Edge Mask Strength to lessen the impact of sharpening on the artifacts (they often sit in low contrast areas), but depending on how compressed the footage is, you may not want to sharpen it at all.
Sharpening is a local contrast adjustment. It’s just looking at significant edges and sharpening those areas. Oversharpening occurs when there’s too much contrast around the edges, resulting in visible halos.
If you look at the guitar strings and frets in particular, you’ll see a dark halo on the outside of the strings, and the strings themselves are almost white with little detail. Way too much contrast/sharpening. The usual solution is to reduce the sharpening amount.
In Samurai Sharpen you can also adjust the strength of the halos independently. So if the sharpening results in only the dark or light side being oversharpened, you can dial back just that side.
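As a rough sketch of what that kind of independent control is doing (my own NumPy approximation, not Samurai Sharpen’s actual math): split the unsharp-mask detail into its positive part, which brightens and creates light halos, and its negative part, which darkens and creates dark halos, then scale each side separately.

```python
# Sketch: independent control of light vs. dark halos in an unsharp mask.
# My own approximation, NOT Samurai Sharpen's actual algorithm.
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(frame, amount=1.0, radius=2.0,
            light_strength=1.0, dark_strength=1.0):
    """frame: float32 grayscale array in [0, 1] (run per channel for color)."""
    detail = frame - gaussian_filter(frame, sigma=radius)
    # Positive detail brightens edges (light halos); negative detail
    # darkens them (dark halos). Scale each side on its own.
    detail = (np.maximum(detail, 0.0) * light_strength +
              np.minimum(detail, 0.0) * dark_strength)
    return np.clip(frame + amount * detail, 0.0, 1.0)

# If the strings are blowing out to white, dial back just the light side:
# result = sharpen(frame, amount=1.5, light_strength=0.4)
```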
The last thing you usually want to do is sharpen someone’s skin. You don’t want your talent’s skin looking like a dried-up lizard. (well, unless your talent is a lizard. Not uncommon these days with all the ridiculous 3D company mascots)
Especially with 4K and HD, video already shows more skin detail than most people want (hence our Beauty Box Video plugin for digital makeup). If you’re using Unsharp Mask you can use the Threshold parameter; in Samurai, the Edge Mask Strength parameter is a more powerful version of that. Both are good ways of protecting the skin from sharpening. Skin areas tend to be fairly flat contrast-wise, and the Edge Mask generally does a good job of masking them out.
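For the curious, a Threshold-style control boils down to something like this (a tiny NumPy sketch, with the same caveat as above: illustrative, not either plugin’s exact math). Detail whose magnitude falls below the threshold is suppressed, so flat, low-contrast skin passes through untouched while strong edges still get sharpened.

```python
# Sketch: Threshold-style protection for flat areas like skin.
# Illustrative only; not Unsharp Mask's or Samurai's exact math.
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen_with_threshold(frame, amount=1.0, radius=2.0, threshold=0.04):
    """frame: float32 grayscale array in [0, 1]."""
    detail = frame - gaussian_filter(frame, sigma=radius)
    # Low-contrast detail (flat regions like skin) falls below the
    # threshold and is zeroed out; strong edges keep their detail.
    detail = np.where(np.abs(detail) < threshold, 0.0, detail)
    return np.clip(frame + amount * detail, 0.0, 1.0)
```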
Either way, you want to keep an eye on the skin areas, unless you want a lizard. (And if so, you should download our free Ugly Box plugin. ;-)
You can sharpen video, and most video footage will benefit from some sharpening. However, there are numerous issues you can run into, and hopefully this gives you some idea of what you’re up against, whether you’re using Samurai Sharpen for Video or something else.
One problem that users can run into with our Flicker Free deflicker plugin is that it will look across edits when analyzing frames for the correct luminance. The plugin looks backwards as well as forwards to gather frames and does a sophisticated blend of all those frames. So even if you create an edit, say to remove an unwanted camera shift or person walking in front of the camera, Flicker Free will still see those frames.
This is particularly a problem with Detect Motion turned OFF.
The way around this is to Nest the edit (i.e. Pre-compose in AE, create a Compound Clip in FCP) and apply the plugin to the new sequence. The new sequence starts at the first frame of the edit, so Flicker Free can’t see the frames before the edit.
This is NOT something you always have to do. It’s only necessary if the frames before the edit are significantly different from the ones after it (i.e. a completely different scene or some crazy camera movement). 99% of the time it’s not a problem.
This tutorial shows how to solve the problem in Premiere Pro. The technique works the same in other applications; just replace ‘Nesting’ with whatever your host application calls it (pre-composing, making a compound clip, etc.).
We get a lot of questions about how Beauty Box compares to other filters out there for digital makeup. There are a few things to consider when buying any plugin and I’ll go over them here. I’m not going to compare Beauty Box with any filter specifically, but when you download the demo plugin and compare it with the results from other filters, this is what you should be looking at:
Support
Quality of results
Ease of use
Speed
I’ll start with Support because it’s the one thing most people don’t consider. We offer support as good as anyone’s in the industry. You can email or call us (415-287-6069), M-F 10am-5pm PST. In addition, we check email on the weekends and frequently in the evenings on weekdays. Usually you’ll get a response from Tor, our rockstar QA guy, but not infrequently you’ll talk to me as well. It’s not often you get tech support from the guy that designed the software. :-)
Quality of Results
The reason you see Beauty Box used for skin retouching on everything from major tentpole feature films to web commercials is the incredible quality of the digital makeup. Since its release in 2009 as the first plugin to specifically address skin retouching beyond just blurring out skin tones, the quality of the results has been critically acclaimed. We won several awards with version 1.0 and we’ve kept improving it since then. You can see many examples of Beauty Box’s digital makeup here, but we recommend you download the demo plugin and try it yourself.
Things to look for as you compare the results of different plugins:
Skin Texture: Does the skin look realistic? Is some of the pore structure maintained or is everything just blurry? It should, usually, look like regular makeup unless you’re going for a stylized effect.
Skin Color: Is there any change in skin tones?
Temporal Consistency: Does it look the same from frame to frame over time? Are there any noticeable seams where the retouching stops?
Masking: How accurate is the mask of the skin tones? Are there any noticeable seams between skin and non-skin areas? How easy is it to adjust the mask?
Ease of Use
One of the things we strive for with all our plugins is to make it as easy as possible to get great results with very little work on your end. Software should make your life easier.
In most cases, you should be able to click Analyze Frame, adjust the Skin Smoothing amount to dial in the look you want, and be good to go. There are always going to be times when it requires a bit more work, but for basic retouching of video, there’s no easier solution than Beauty Box.
When comparing filters, the thing to look at here is how easy it is to set up the effect and get a good mask of the skin tones. How long does it take, and how accurate is it?
If you’ve used Beauty Box for a while, you know the one complaint we heard about version 1.0 was that it was slow. No more! It’s now fully GPU optimized, and with some of the latest graphics cards you’ll get real time performance, particularly in Premiere Pro. Premiere has added better GPU support, and between that and Beauty Box’s use of the GPU, you can get real time playback of HD pretty easily.
And of course we support many different host apps, which gives you a lot of flexibility in where you can use it. Avid, After Effects, Premiere Pro, Final Cut Pro, Davinci Resolve, Assimilate Scratch, Sony Vegas, and NUKE are all supported.
Hopefully that gives you some things to think about as you’re comparing Beauty Box with other plugins that claim to be as good. All of these things factor into why Beauty Box is so highly regarded and considered to be well worth the price.
Shooting slow motion footage, especially very high speed shots like 240fps or 480fps, results in flicker if you don’t have high quality lights. Stadiums often have low quality industrial lighting, LEDs, or both, resulting in flicker during slow motion shots even at nationally broadcast, high profile sporting events.
I was particularly struck by this watching the NCAA Basketball Tournament this weekend. Seemed like I was seeing flicker on half of the slow motion shots. You can see a few in this video (along with Flicker Free plugin de-flickered versions of the same footage):
The LED lights are most often the problem. They circle the arena and, depending on how bright they are (for example, if the band is turned solid white), they can cast enough light on the players to cause flicker when played back in slow motion. Even if they don’t cast light on the players, they’re visible flickering in the background. Here’s a photo of the lights I’m talking about in Oracle Arena (the white band of light going around the stadium):
While Flicker Free won’t work for live production, it works great on this type of flicker if you can render the footage in a video editing app, as you can see in the example above.
It’s a common problem even for pro sports and high profile sporting events (once you start looking for it, you see it a lot). So if you run into it with your footage, check out the Flicker Free plugin, available for most video editing applications!
Drones are all the rage at the moment, deservedly so, as some of the images and footage being shot with them are amazing.
However, one problem that occurs is that if the drone is shooting with the camera at just the right angle to the sun, shadows from the props cause flickering in the video footage. This can be a huge problem, making the video unusable. It turns out that our Flicker Free plugin does a good job of removing or significantly reducing this flicker. (Of course, this forced us to go out and get a drone. Research, nothing but research!)
Here’s an example video showing exactly what prop flicker is and why it happens:
There are ways to avoid the flicker in the first place: don’t shoot into the sun, keep the camera pointing down, etc. However, sometimes you’re not able to shoot in ideal conditions and you end up with flicker.
Our latest tutorial goes over how to solve the prop flicker issue with our Flicker Free plugin. The technique works in After Effects, Final Cut Pro, Avid, Resolve, etc.; however, the tutorial shows Flicker Free being used in Premiere Pro.
One key way of speeding up the Flicker Free plugin is putting it first in the order of effects. What does this mean? Let’s say you’re using the Lumetri Color Corrector in Premiere. You want to apply Flicker Free first, then apply Lumetri. You’ll see about a 300%+ speed increase vs. doing it with Lumetri first. So it looks like this:
Why the Speed Difference?
Flicker Free has to analyze multiple frames to de-flicker the footage you’re using. It looks at up to 21 frames. If you have Lumetri applied before Flicker Free, it means Lumetri is being applied TWENTY-ONE times for every frame Flicker Free renders. Especially with a slow effect like Lumetri, that will definitely slow everything down.
In fact, on slower machines it can bring Premiere to a grinding halt. Premiere has to render the other effect on 21 frames in order to render just one frame for Flicker Free. In this case, Flicker Free takes up a lot of memory, the other effect can take up a lot of memory, and things start getting ugly fast.
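To put made-up but plausible numbers on it: if an upstream effect takes 40ms per frame, Flicker Free’s 21-frame window turns that into roughly 21 × 40ms = 840ms of upstream rendering for every single output frame, before Flicker Free’s own analysis even starts. (Those timings are assumptions for illustration, not benchmarks.)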
Renders with Happy Endings
So to avoid this problem, just apply Flicker Free before any other effects. This goes for pretty much every video editing app. The render penalty will vary depending on the host app and what effect(s) you have applied. For example, using the Fast Color Corrector in Premiere Pro resulted in a slowdown of only about 10% (vs. Lumetri’s slowdown of 320%). In After Effects the slowdown was about 20% with just the Synthetic Aperture color corrector that ships with AE. However, if you add more filters it can get a lot worse.
Either way, you’ll have much happier render times if you put Flicker Free first.
Hopefully this makes some sense. I’ll go into a few technical details for those that are interested. (Feel free to stop reading if it’s clear you just need to put Flicker Free first. Oh, and here are some other ways of speeding up Flicker Free.)
Flicker Free, like any plugin, has to request frames through the host application’s API. Most plugins, like our Beauty Box Video plugin, only need the current frame. You want to render frame X: Premiere Pro (or Avid, FCP, etc.) loads the frame, renders any plugins, and then displays it. Plugins get rendered in the order you apply them. Fairly straightforward.
The Flicker Free plugin is different. It’s not JUST looking at the current frame. In order to figure out the correct luminance for each pixel (and thus remove flicker), it has to look at pixels both before and after the current frame. This means it has to ask the API for up to 21 frames, analyze them, and return the result to Premiere, which then finishes rendering the current frame.
So the API says, “Yes, I will do your bidding and get those 21 frames. But first, I must render them!” And so it does. If there are no plugins applied to them, this is easy: it just hands Flicker Free the 21 original frames and goes on its merry way. If there are plugins applied, the API has to render those on each frame it gives to Flicker Free. FF has to wait around for all 21 frames to be rendered before it can render the current frame. It waits, which means YOU wait. If you need a long coffee break these renders can be great. If not, they are frustrating.
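If it helps to see the flow, here’s a toy model of it in Python (a sketch of a generic host, not Adobe’s actual API; every name and number is made up):

```python
# Toy model of host/plugin frame requests. NOT Adobe's API;
# all names here are made up for illustration.
WINDOW = 21  # frames Flicker Free analyzes per output frame

class ColorEffect:
    """Stand-in for an upstream effect like a color corrector."""
    renders = 0
    def render(self, frame, n):
        ColorEffect.renders += 1  # count how often the host runs us
        return f"color({frame})"

def render_upstream(upstream, n):
    """The host renders every effect applied before Flicker Free."""
    frame = f"src({n})"
    for fx in upstream:
        frame = fx.render(frame, n)
    return frame

def flicker_free(upstream, n):
    """Flicker Free asks the host for a window of fully rendered frames."""
    window = [render_upstream(upstream, n + off)
              for off in range(-(WINDOW // 2), WINDOW // 2 + 1)]
    return f"deflicker({window[WINDOW // 2]})"

# Color corrector applied BEFORE Flicker Free: it runs 21 times per frame.
flicker_free([ColorEffect()], n=100)
print(ColorEffect.renders)  # 21

# Flicker Free applied FIRST: the corrector runs once, on the deflickered frame.
ColorEffect.renders = 0
ColorEffect().render(flicker_free([], n=100), n=100)
print(ColorEffect.renders)  # 1
```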
If you use After Effects, you may be familiar with pre-comping a layer with effects so that you can use it within a plugin applied to a different layer. That goes through a different portion of the API than when a plugin requests frames programmatically from AE. When a plugin grabs a layer via the layer pop-up, it just gets the original image with no effects applied. But if the plugin actually asks AE for, say, the frame one frame earlier, AE has to render it.
One other thing that affects speed behind the scenes: some apps are better than others at caching the frames plugins ask for. After Effects does this pretty well, Premiere Pro less so. This helps AE get faster render times when using Flicker Free and rendering sequentially. If you’re jumping around the timeline, it matters less.
Hopefully this helps you get better render times from Flicker Free. The KEY thing to remember however, is ALWAYS APPLY FLICKER FREE FIRST!
However, many, if not most, of our customers are like Brian Smith: using Beauty Box for corporate clients or local commercials. They might not be winning Emmy awards for their work, but they’re still producing great videos with, usually, limited budgets. “The time and budget does not usually afford us the ability to bring in a makeup artist. People that aren’t used to being on camera are often very self-conscious, and they cringe at the thought of every wrinkle or imperfection detracting from their message,” said Brian, founder of Ideaship Studios in Tulsa, OK. “Beauty Box has become a critical part of our Final Cut X pipeline because it solves a problem, it’s blazing fast, and it helps give my clients and on-camera talent confidence. They are thrilled with the end result, and that leads to more business for us.”
An Essential Tool for Beauty Work and Retouching
Beauty Box Video has become an essential tool at many small production houses and in-house video departments for retouching makeup-less or badly lit shoots and still ending up with a great looking production. The ability to quickly retouch skin with an automatic mask, without needing to go frame by frame, is important. However, it’s usually the quality of retouching that Beauty Box provides that’s the main selling point.
image courtesy of Ideaship Studios
Beauty Box goes beyond just blurring skin tones. We strive to keep the skin texture and not just mush it up. You want the skin to look like skin, not plastic, which is important for beauty work: taking a few years off talent and offsetting the harshness that HD/4K and video lights can add to someone. The above image of one of Brian’s clients is a good example.
When viewed at full resolution, the wrinkles are softened but not obliterated. The skin is smoothed but still shows pores. The effect is really that of digital makeup, as if you actually had a makeup artist to begin with. You can see this below in the closeup of the two images. Of course, the video compression in the original already has reduced the detail in the skin, but Beauty Box does a nice job of retaining much of what is there.
“On the above image, we did not shoot her to look her best. The key light was a bit too harsh, creating shadows and bringing out the lines. I applied the Beauty Box Video plugin, and the shots were immediately better by an order of magnitude. This was just after simply applying the plugin. A few minutes of tweaking the mask color range and effects sliders really dialed in a fantastic look. I don’t like the idea of hiding flaws. They are a natural and beautiful part of every person. However, I’ve come to realize that bringing out the true essence of a person or performance is about accentuating, not hiding. Beauty Box is a great tool for doing that.” – Brian Smith
Go for Natural Retouching
Of course, you can go too far with it, as with anything. So some skill and restraint is often needed to get the effect of regular makeup and not make the subject look ‘plastic’ or blurred. As Brian says, you want things to look natural.
However, when used appropriately you can get some amazing results, making for happy clients and easing the concerns of folks that aren’t always in front of a camera. (particularly men, since they tend to not want to wear makeup… and don’t realize how much they need it until they see themselves on a 65″ 4K screen. ;-)
One last tip: you can often improve the look of Beauty Box even more by using tracking masks for beauty work, as you can see in the tutorials that link goes to. The ability of these masks to automatically track the points that make up the mask, moving them as your subject moves, is a huge deal for beauty work. It makes it much easier to isolate an area like a cheek or the forehead, just as a makeup artist would.
First off, the important bit: All the current versions of our plugins are updated for El Capitan and should be working, regardless of host application (After Effects, Premiere Pro, Final Cut Pro, Davinci Resolve, etc). So you can go to our demo page:
And download the most recent version of your plugins.
If you haven’t upgraded to El Capitan, I’ll add to the chorus of people saying… don’t. Overall, we’re disappointed by Apple as it continues its march towards making the Mac work like the iPhone, making professional uses more and more obsolete. They’re trying way too hard to make the machines idiot proof, and in the process they’re dumbing down what can be done with them.
One of the latest examples is, of all things, Disk Utility. You can no longer make a RAID with it; you have to use a terminal command. They’ve removed other functionality as well, but for many professional users RAIDs are essential, as is Disk Utility. It’s now been crippled.
Of course, then there’s Final Cut Pro (which has gotten better but still doesn’t feel like a professional app to many people), Photos which replaced Apple’s pro app Aperture, and the Mac Pro trashcan. (kind of sad that when we need a ‘new’ Mac, usually we buy a 2010-12 12-core Mac Pro, they outperform our D500 trashcan)
Apple isn’t alone in this ‘dumbing down’ trend. Just look at the latest releases of Acrobat (which I’ve heard referred to as the Fisher-Price version) and Lightroom.
Note to application developers: just because we’re doing a lot of things with our phones does not mean we want to do everything on them, or have our desktop apps work like phone apps. There’s a difference between simplicity (making the user experience clear and intuitive while retaining the features that make an app powerful) and stupidity (making the app idiot proof).
Anyways, end of rant… I spend a fair amount of time thinking about software usability, since we have to strike that balance between ease of use and power with our own video plugins, and since we use the host applications and OS professionally. So this ‘dumbing down’ concerns me both for my personal use and for having to help DA customers navigate new ‘features’ that affect our photo and video plugins.
Chief Executive Anarchist
We have a new set of tutorials up that will show you how to easily create masks and animate them for Beauty Box. This is extremely useful if you want to limit the skin retouching to just certain areas like the cheeks or forehead.
Traditionally this type of work has been the province of feature films and other big budget productions that had the money and time to hire rotoscopers to create masks frame by frame. New tools built into After Effects and Premiere Pro, or available from third parties for FCP, make this technique accessible to video editors and compositors with much more modest budgets and time constraints.
How Does Retouching Work Traditionally?
In the past someone would have to create a mask on Frame 1 and move forward frame by frame, adjusting the mask on EVERY frame as the actor moved. This was a laborious and time consuming way of retouching video/film. The idea for Beauty Box came from watching a visual effects artist explain his process for retouching a music video of a high profile band of 40-somethings. Frame by frame by tedious frame. I thought there had to be an easier way and a few years later we released Beauty Box.
However, Beauty Box affects the entire image by default. The mask it creates affects all skin areas. This works very well for many uses but if you wanted more subtle retouching… you still had to go frame by frame.
The New Tools!
After Effects and Premiere have some amazing new tools for tracking mask points. You can apply bezier masks that limit the effect of a plugin, like Beauty Box, to just the masked area. The bezier points are ‘tracking’ points, meaning that as the actor moves, the points move with them. It usually works very well, especially for talking head footage where the talent isn’t moving around a lot. It’s a really impressive feature, and it’s available in both AE and Premiere Pro. Here’s a tutorial detailing how it works in Premiere:
After Effects also ships with Mocha, another great tool for doing this type of work. This tutorial shows how to use Mocha and After Effects to control Beauty Box and get some, uh, ‘creative’ skin retouching effects!
The power of Mocha is also available for Final Cut Pro X. It comes as a plugin from CoreMelt, and they were kind enough to do a tutorial explaining how Splice X works with Beauty Box within FCP. It’s another very cool plugin; here’s the tutorial:
We’re excited to announce that Beauty Box Video 4.0 is now available for Avid and OpenFX Apps: Davinci Resolve, Assimilate Scratch, Sony Vegas, NUKE, and more. This is in addition to After Effects, Premiere Pro, and Final Cut Pro which were announced in April.
Beauty Box Video 4.0 adds real time rendering to the high quality, automatic skin retouching that Beauty Box is famous for. It’s not only the best retouching plugin available, it’s now one of the fastest, especially on newer graphics cards like the Nvidia GTX 980. We’re seeing real time or near real time performance in Premiere Pro, Resolve, and FCP. Other apps may not see quite that performance, but they still get a significant speed increase over what was possible in Beauty Box 3.0.
Easily being able to retouch video is becoming increasingly important. HD is everywhere and 4K is widely available, letting viewers see more detail in closeups of talent than ever before. This makes skin and makeup problems much more visible, and being able to apply digital makeup easily is critical for high quality productions.
You can also incorporate masks to limit the retouching to just certain areas like cheeks or the talent’s forehead. (as can be seen in this tutorial using Premiere Pro’s tracking masks)
So head over to digitalanarchy.com for more info and to download a free trial and free tutorials on how to get started and more advanced topics. You’ll be blown away by the ease of use, high quality retouching, and now… speed!
As many of you know, we’ve come out with a real time version of Beauty Box Video. For that to work, it requires a really fast GPU, and we LOVE the GTX 980 (amazing price/performance). Nvidia cards are generally fastest for video apps (Premiere, After Effects, Final Cut Pro, Resolve, etc.), but we are seeing real time performance on the higher end new Mac Pros (or trash cans, dilithium crystals, Jobs’ Urn, or whatever you want to call them).
BUT what if you have an older Mac Pro?
With the newer versions of Mac OS (10.10), in theory, you can put any Nvidia card in them and it should work. Since we have lots of video cards lying around that we’re testing, we wondered if our GTX 980, Titan, and Quadro 5200 would work in our Early 2009 Mac Pro. The answer is… yes!
So, how does it work? For one, you need to be running Yosemite (Mac OS X 10.10).
The GTX 980 is the easier of the two GeForce cards to get running, mainly because of the power needed to drive it. It only needs two six-pin connectors, so you can use the power supply built into the Mac. Usually you’ll need to buy an extra six-pin cable, as the Mac only comes standard with one, but that’s easy enough. The Quadro 5200 has only a single six-pin connector and works well. However, for a single offline workstation, it’s tough to justify the higher price for the extra reliability the Quadros give you (and it’s not as fast as the 980).
The tricky bit about the 980 is that you need to install Nvidia’s web driver. The 980 did not boot up with the default Mac OS driver, even in Yosemite. At least, that’s what happened for us. We have heard reports of it working with the default driver, but I’m not sure how common that is. So you need to install the Nvidia Driver Manager System Pref and, while still using a different video card, set the System Pref to the Web Driver. Like so:
Install those, set it to Web Driver, install the 980, and you should be good to go.
What about the Titan or other more powerful cards?
There is one small problem… the Mac Pro’s power supply isn’t powerful enough to handle the card and doesn’t have the right connectors. The Mac can provide two six-pin power connectors, but the Titan and other top of the line cards require a six-pin and an eight-pin, or even two eight-pin connectors. REMINDER: The GTX 980 and Quadro do NOT need extra power. This is only for cards with an eight-pin connector.
The solution is to buy a bigger power supply and let it sit outside the Mac with the power cables running through the expansion opening in the back.
As long as the power supply is plugged into a grounded outlet, there’s no problem with it being external. I used an EVGA 850W power supply, but I think the 600W would do. The nice thing about these is they come with long cables (about 2 feet or so) which will reach inside the case to the Nvidia card’s power connectors.
One thing you’ll need to do is plug the ‘test’ connector (comes with it) into the external power supply’s motherboard connector. The power supply won’t power on unless you do this.
Otherwise, it should work great! These are very powerful cards and they definitely add a punch to the older Mac Pros. With this setup we had Beauty Box running at about 25fps in Premiere Pro (AE and Final Cut are a bit slower). Not bad for a five-year-old computer, but not real time in this case. On newer machines with the GTX 980 you should get real time playback. It really is a great card for the price.
All of our current plugins have been updated to work with After Effects and Premiere Pro in Creative Cloud 2015. That means Beauty Box Video 4.0.1 and Flicker Free 1.1 are up to date and should work no problem.
What if I have an older plugin like Beauty Box 3.0.9? Do I have to pay for the upgrade?
Yes, you probably need to upgrade, and it is a paid upgrade. After Effects changed the way it renders, and Premiere Pro changed how it handles GPU plugins (of which Beauty Box is one). The key word here is probably. Our experience so far has been mixed: sometimes the plugins work, sometimes not.
– Premiere Pro: Beauty Box 3.0.9 seems to have trouble in Premiere if it’s using the GPU. If you turn ‘UseGPU’ off (at the bottom of the BB parameter list), it seems to work fine, albeit much slower. Premiere Pro did not implement the same re-design that After Effects did, but they did add an API specifically for GPU plugins. So if a plugin doesn’t use the GPU, it should work fine in Premiere. If it uses the GPU, maybe it works, maybe not. Beauty Box seems not to.
– After Effects: Legacy plugins _should_ work but will slow AE down somewhat. In the case of Beauty Box, it seems to work OK, but we have seen some problems. So the bottom line is: try it out in CC 2015. If it works fine, you’re good to go. If not, you need to upgrade. We are not officially supporting 3.0.9 in Creative Cloud 2015.
– The upgrade from 3.0 is $69 and can be purchased HERE.
– The upgrade from 1.0/2.0 is $99 and can be purchased HERE.
The bottom line: try out the older plugins in CC 2015. It’s not a given that they won’t work, even though Adobe is telling everyone they need to update. It is true that you will most likely need to update the plugins for CC 2015, so their advice isn’t bad. However, before paying for upgrades, load the plugins and see how they behave. They might work fine. Of course, Beauty Box 4 is super fast in both Premiere and After Effects, so you might want to upgrade anyways. :-)
We do our best not to force users into upgrades, but since Adobe has rejiggered everything, only the current releases of our products will be rejiggered in turn.
It’s been almost 4 years since the last update of FCP 7. The last officially supported OS was 10.6.8. It’s time to move on, people.
Beauty Box Video 4.0 (due out in a month) will be our first product that does not officially support FCP 7.
It’s a great video editor, but Apple makes it very hard to support older software, especially if you’re trying to run it on newer systems. If FCP 7 is a mission critical app for you, you’re taking a pretty big risk by trying to keep it grinding along. We started seeing a lot of weird behaviors with it on 10.9. I realize people are running it successfully on the new systems, but we feel there are a lot of cracks beneath the surface. Those are only going to get more pronounced with newer OSes.
I know people love their software (hell, there are still people using Media 100), but Premiere Pro, Avid, and even FCP X are all solid alternatives at this point. Those of us that develop software and hardware can’t support stuff that Apple threw under the bus three and a half years ago.
We will continue to support people using Beauty Box 3.0 with FCP 7 on older systems (10.8 and below) but we can’t continue to support it when most likely the problems we’ll be fixing are not caused by our software but by old FCP code breaking on new systems.
What causes Final Cut Pro X to re-render? If you’ve ever wondered why sometimes the orange ‘unrendered’ bar shows up when you make a change and sometimes it doesn’t… I explain it all here. This is something that will be valuable to any FCP user but can be of the utmost importance if you’re rendering Beauty Box, our plugin for doing skin retouching and beauty work on HD/4K video. (Actually we’re hard at work making Beauty Box a LOT faster, so look for an announcement soon!)
Currently, if you’ve applied Beauty Box to a long clip, say 60 minutes, you can be looking at serious render times (this can happen with any non-realtime effect): possibly twelve hours or so on slower computers and video cards, though it can also be a few hours. It just depends on how fast everything is.
Recently we had a user in exactly that situation. They had a .png logo sitting on top of the entire video, used as a bug. They rendered everything out to deliver it, but, of course, the client wanted the bug moved slightly. This caused Final Cut Pro to want to re-render EVERYTHING, meaning the really long Beauty Box render needed to happen all over again. Unfortunately, this is just the way Final Cut Pro works.
Why does it work that way and what can be done about it?
Stephen Smith, a long time videographer, used a recent trip to Italy as an opportunity to hone his time lapse skills. The result is a compilation of terrific time lapse sequences from all over Italy.
He used Flicker Free to deflicker the videos, Premiere Pro and After Effects for editing, and Davinci Resolve for color correction. It’s a great example of how easily Flicker Free fits into pretty much any workflow and produces great results.
Since he was traveling with his wife, the time lapse shoots gave her a chance to explore the areas where he was shooting more thoroughly. That’s not always how it goes; significant others are not always thrilled to be stuck in one place for an hour while you stand around watching your camera take pictures!
Although, he said it did give him an opportunity to watch how aggressive the street vendors were and to meet other folks.
We’re happy he gave us a heads up about the video, which is on Vimeo, or you can see it below. Of course, we’re thrilled he used Flicker Free on it as well. :-)
It’s always cool to see folks posting how they’ve used Beauty Box Video. One of the most common uses is music videos, including for many top artists. Most performers are a little shy about letting it be known they need retouching, so we get pretty excited when something does get posted (even if we don’t know the performer). Daniel Schweinert just posted this YouTube video and blog post breaking down his use of Beauty Box Video (and Mocha) for a music video in After Effects. Pretty cool stuff!