Category Archives: Video Editing Tutorials

Tutorials, tips, and tricks for video editing and visual effects applications, video plugins, video production hardware, or all of the above. Applications we usually cover are After Effects, Premiere Pro, Final Cut Pro, and DaVinci Resolve.

Photoshop’s Generative Fill Isn’t Great, But It Works Well at Fixing Other GenAI Images

One problem with generative AI is that it’s difficult to get exactly what you want. You can often get something that’s good enough, but more often than not you get 90% of the way to what you want, and getting the AI to make the correct changes to get to 100% is daunting.

(Which is why I’m a little skeptical about GenAI for video. For generic B-roll stuff, sure, maybe, but wrangling the correct prompts for a 30-second video that needs to be exactly this or that is going to be difficult, to say the least. It’s hard enough for a single still image.)

Photoshop’s generative AI (called Generative Fill) is pretty subpar compared to some of the more cutting-edge tools (DALL-E, Stability AI, etc.) for creating images from scratch. However, what it does pretty well is extend images, i.e. if you’ve got an image that you want wider, or that needs more head room than it was originally shot with.

OR… if you’ve created something with another AI tool, like DALL-E, as I’ve done here. DALL-E gave me more or less what I wanted but without much of a tail. I spent another 20 minutes or so trying to get DALL-E to give me this fish with a tail before giving up. It really wanted to redo the entire image. So it got frustrating.

This is where Photoshop’s GenAI turned out to be useful. To be fair, they market it as more of a way to extend/improve existing images than as a way to create stuff from scratch. It can create from scratch, but the results often aren’t great. But when it comes to extending images, there’s a big advantage to being in Photoshop… selections!

You can make the canvas wider, select the empty area to the side and type in ‘extend image’. Boom.

Now of course it gave me some other variations that didn’t work at all, but that doesn’t matter. It gave me a great variation that did work.

Also, prompting something like ‘extend image with skeleton of an angler fish’ didn’t work. It was the simpler prompt ‘extend image’ that did the trick.

(Prompts are weird and a whole art unto themselves. Figuring out what the AI is going to respond to takes a LOT of trial and error. And then you still need to get it to do what you want.)

I then selected the other side and it created that easily.

You can see slight seams where the image was extended. When having Photoshop create the extensions, I tried both selecting the empty area by itself and selecting a little of the original image (including feathering the selection). It didn’t make much difference: you got slightly different imagery, but the seams tended to show up no matter what.

The tail was the worst problem, however. There was an obvious change in style from the original to the Photoshop extension.

So I selected just that bit and ran Content Aware Fill a few times to cover up the seam. And that worked reasonably well despite CA Fill not being AI. It’s just sampling from other parts of the image.

Selecting the seam and running Generative Fill (prompt: ‘remove seam’) on it created three variations. Two of the three didn’t work but the third one arguably looks better than CA Fill. But they’re both pretty good. So just realize CA Fill can help touch up slight imperfections as well.

Getting DALL-E, Midjourney, or whatever to give you exactly what you want can be difficult. If you get most of the way there but are having trouble prompting those tools to fill in the details, Photoshop’s Generative Fill may be able to touch things up or extend the image more easily.

Here’s the final image:

Only Beauty Box 5.x Supports Metal GPUs and Apple Silicon

Beauty Box 5.0 and higher supports Metal and Apple Silicon (M1, M2, etc.). This includes the upcoming Beauty Box 6.

However, Beauty Box 4.0 does not support Metal GPU rendering on the Mac. It uses the older OpenCL technology for GPU processing. (On Windows, 4.0 works fine.)

Premiere Pro/After Effects 2022 and later dropped support for OpenCL rendering and only support Metal, on both the M/Silicon chips and Intel Macs. This means Beauty Box 4.0 does not support GPU rendering in the current Intel builds of After Effects or Premiere, and it doesn’t work at all on Silicon Macs.

If you’re experiencing slow rendering in Adobe products on a Mac with Beauty Box 4.0 or it’s not showing up at all, that’s probably why.

So if you have 4.0 and have an Intel Mac, you’ll probably want to upgrade to 5.0.

If you have an M/Silicon Mac you’ll need to upgrade. Although 5.0 was released before the Silicon chips, it’s the only version of Beauty Box that’s been re-written for them.

On Windows, Beauty Box 4.0 should still work fine. Both OpenCL and CUDA (for Nvidia) are still supported by Premiere and After Effects.

If you’re experiencing slow render times in 5.0 on Intel, double-check that Hardware rendering is set to Metal. (On Apple Silicon Macs it’s always set to Metal and you can’t change it.)

In both Premiere and After Effects, go to File > Project Settings > General to change this.

If this is not why you’re having a problem with Beauty Box, try these articles or contact support:

Reasons Plugins Might Not Show in the Effects Menu

The ‘Use GPU’ setting can be turned off in the plugin or in the Beauty Box ‘About’ dialog.

THE PROCESS OF STORYCRAFT

(This is a guest blog post by editor/director Kyle Koch)

I was recently asked about how to craft/edit narrative projects and thought this would be a cool share for my colleagues.
~~~~~~~~~~~~~~~~~~~~~~~~

THE PROCESS OF STORYCRAFT
I have 30+ years under my belt, and much of it has been doc cutting. I can give a few tips.

STORY IS EVERYTHING
Craft a journey that hooks and keeps the audience watching. Weave ever-rising plateaus of insight and intrigue with your interviews and location sound-ups.

CONTENT DRIVES THE PROCESS
In other words, if the story relies heavily on historical footage, you’ll need to craft the story around those assets. Is the content esoteric, philosophical, spiritual? You may need to create support visuals that only come to light from the content itself. Either way, story and timing are the foundation for all the imagery that paints the picture of the narrative.

BE HONEST WITH THE GENRE
Is it a story of humour, drama, or both? When is it taking itself too seriously? When not seriously enough? Through the stages of post this may change, but it’s important to have a solid sense of what the genre is before you start.

TRANSCRIPTS ARE KEY
Being time-efficient with a large amount of material is challenging. Decisions on how to process assets can exponentially impact the time it takes to craft the story. I always start with transcripts of my interviews. Using the script/outline as the window through which to view the content, I identify themes and keywords/phrases, colour code them, and add markers. Finding special moments where the delivery by an interviewee is particularly strong is critical.

KNOW YOUR CONTENT
One of the most important things I do to help me craft story is make sound files that I can use to listen to the source material or the edit while I’m walking the dog, driving, etc. – anything but being in front of the computer. I’ll add markers to the file with my phone (or take screenshots). Ideally, use Frame to add comments so that you can upload them to your sequence as markers.

FIND THE GOLD
Make selects of your content by using stringouts (source sequences). Raise good shots/content up in the video tracks (v1 is the base, v2 good, v3 great, v4 awesome). This process will allow you to get to know the materials and move the project forward as you’re watching assets.

TEXT DRIVEN VIDEO CUTS ARE AMAZING!!!
Jim Tierney of Digital Anarchy was one of the first code warriors to give us this superpower with his tool Transcriptive. Five years ahead of the curve! Adobe has onboarded much of that functionality now, but Jim has been a pioneer of transcription-based editing. Certainly, Transcriptive offers functionality that is deeper than what is included with an Adobe CC account.

I’ll be attending the upcoming Adobe MAX in Los Angeles the week of October 9, 2023. The event has a bunch of good in-person and online sessions on all sorts of topics, including the creative process, AI augmentation, team building, content authentication (DRM), etc. If you’re attending, please say hello.

Feel free to reach out to me directly:
kwk@TrueNorthEntertainment.com

– I look forward to additional insight/tips from my pals in production! 😉

~~~~~~~~~~~~~~
Kyle has been crafting content for 25+ years. He owns a creative agency, True North Entertainment, and is the admin of the largest professional editors group dedicated to Adobe workflow solutions.

http://www.truenorthentertainment.com/
https://www.facebook.com/groups/adobepremierepro/

Skin Detail Smoothing and 4K

Beauty Box’s settings are resolution-dependent. This means the settings you have for HD may not work for 4K. On a basic level, it’s similar to applying a blur. A Gaussian Blur of 1.0 might be too much for a low-res, 640×480 image, but might be almost unnoticeable on a 4K image.
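
To make this concrete, here’s a quick back-of-the-envelope calculation, written as a rough Python sketch. It’s illustrative only: Beauty Box’s smoothing isn’t literally a Gaussian Blur and its internal scaling isn’t necessarily linear, but it shows why a fixed pixel value feels heavy at low resolutions and nearly invisible at 4K.

# The same 1.0-pixel blur radius covers a much smaller fraction of a 4K frame
# than of an SD frame, which is why it looks heavy on SD and subtle on 4K.
blur_radius_px = 1.0
for name, height in [("640x480 (SD)", 480), ("1920x1080 (HD)", 1080), ("3840x2160 (4K)", 2160)]:
    print(f"{name}: blur radius = {blur_radius_px / height:.3%} of frame height")
# -> 0.208% for SD, 0.093% for HD, 0.046% for 4K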

Also, the ‘right’ settings may depend on the framing of the shot. Is the footage a tight close-up where the talent’s face fills most of the frame? Or is it pulled back to show three or four actors? The settings that are ideal for one of those examples probably won’t be ideal for the other.

The default settings for Beauty Box are really designed for HD. And even for HD they may be a bit heavy, depending on the footage.

Often they aren’t the ideal settings for 4K (or 12K or whatever).

So in this post we’ll talk about what to do if you have 4K footage and aren’t getting the look you want.

Mainly I want to focus on Skin Detail Smoothing, as I think it plays a bigger role than most people realize. AND you can set it negative!

Skin Detail Smoothing

As you might expect from the name, this attempts to smooth out smaller features of the skin: pores and other small textures. It provides sort of a second level of smoothing on top of the overall skin smoothing. You generally want this set to a lower value than the Smoothing Amount parameter.

If it’s set too high relative to Smoothing Amount, you can end up with the skin looking somewhat blurry and blotchy. This is due to Skin Detail Smoothing working on smaller areas of the skin. So instead of the overall smoothing, you get a very localized blur which can look blotchy.

So, first off: set Skin Detail Smoothing to a lower value than Smoothing Amount. (Usually: there are no hard and fast rules with this. It’s going to depend on your footage. But most of the time that’s a very good rule of thumb.)

Negative Skin Detail Smoothing

With 4K and higher resolutions it’s sometimes helpful to have a slightly negative value for Skin Detail Smoothing, like -5 or -10. The smoothing algorithms occasionally add too much softness, and a slightly negative value brings back some of the skin texture.

In the example, the area around her nose gets a bit soft and using a negative value, IMO, gives it a better look. The adjustment is pretty subtle but it does have an effect. You may have to download the full res images and compare them in Photoshop to truly see the difference. (click on the thumbnails below to see the full res images)

This definitely isn’t the case for all 4K footage and, as always, you’ll need to dial in the exact settings that work for your footage. But it’s important to know that Skin Detail Smoothing can be set negative and sometimes that’s beneficial.

Of course, I want to emphasize SLIGHTLY negative. Our Ugly Box free plugin makes use of negative Skin Detail Smoothing in a way that won’t make your footage look better. If you set it to -400… it’s good for Halloween but usually your clients won’t like you very much.

Testing A.I. Transcript Accuracy (most recent test)

Periodically we test various AI services to see if we should be using something different on the backend of Transcriptive-A.I. We’re more interested in having the most accurate A.I. than we are in sticking with a particular service (or trying to develop our own). The different services have different costs, which is why Transcriptive Premium costs a bit more; it gives us more flexibility in deciding which service to use.

This latest test will give you a good sense of how the different services compare, particularly in relation to Adobe’s transcription AI that’s built into Premiere.

The Tests

Short Analysis (i.e. TL;DR):

For well-recorded audio, all the A.I. services are excellent. There isn’t a lot of difference between the best and worst A.I… maybe one or two words per hundred. There is a BIG drop-off as audio quality gets worse, and you can really see this with Adobe’s service and the regular Transcriptive-A.I. service.

A 2% difference in accuracy is not a big deal. As you start getting up around 6-7% and higher, the additional time it takes to fix errors in the transcript starts to become really significant. Every additional 1% in accuracy means 3.5 minutes less of clean-up time (for a 30 minute clip). So small improvements in accuracy can make a big difference if you (or your Assistant Editor) need to clean up a long transcript.

So when you see an 8% difference between Adobe and Transcriptive Premium, realize it’s going to take you about 25-30 minutes longer to clean up a 30 minute Adobe transcript.

Takeaway: For high-quality audio, you can use any of the services… Adobe’s free service or the $.04/min TS-AI service. For audio of medium to poor quality, you’ll save yourself a lot of time by using Transcriptive-Premium. (Getting Adobe transcripts into Transcriptive requires jumping through a couple of hoops, Adobe didn’t make it as easy as they could’ve, but it’s not hard. Here’s how to import Adobe transcripts into Transcriptive)

(For more info on how we test, see this blog post on testing AI accuracy)

Long Analysis

When we do these tests, we look at two graphs: 

  1. How each A.I. performed for specific clips
  2. The accuracy curve for each A.I., which shows how it did from its best result to its worst result.

The important thing to realize when looking at the Accuracy Curves (#2 above) is that the corresponding points on each curve are usually different clips. The best clip for one A.I. may not have been the best clip for a different A.I. I find this Overall Accuracy Curve (OAC) to be more informative than the ‘clip-by-clip’ graph. A given A.I. may do particularly well or poorly on a single clip, but the OAC smooths the variation out and you get a better representation of overall performance.

Take a look at the charts for this test (the audio files used are available at the bottom of this post):

Overall accuracy curve for AI Services

All of the A.I. services will fall off a cliff, accuracy-wise, as the audio quality degrades. Any result lower than about 90% accuracy is probably going to be better done by a human. Certainly anything below 80%. At 80% it will very likely take more time to clean up the transcript than to just do it manually from scratch.

The two things I look for in the curve are where it breaks below 95% and where it breaks below 90%. And, of course, how that compares to the other curves. The longer the curve stays above those percentages, the more audio degradation a given A.I. can deal with.

You’re probably thinking, well, that’s just six clips! True, but if you choose six clips with a good range of quality, from great to poor, then the curve will be roughly the same as it would be with more clips. Here’s the full test with about 30 clips:

Accuracy of Adobe vs. Transcriptive, full test results

While the curves look a little different (the regular TS A.I. looks better in this graph), mostly it follows the pattern of the six-clip OAC. And the ‘cliffs’ become more apparent… where a given level of audio causes AI performance to drop to a lower tier. Most of the AIs will stay at a certain accuracy for a while, then drop down, hold there for a bit, drop down again, etc., until the audio degrades so much that the AI basically fails.

Here are the actual test results:

Clip        TS A.I.    Adobe     Speechmatics    TS Premium
Interview   97.2%      97.2%     97.8%           100.0%
Art         97.6%      97.2%     99.5%           97.6%
NYU         91.1%      88.6%     95.1%           97.6%
LSD         92.3%      96.9%     98.0%           97.4%
Jung        89.1%      93.9%     96.1%           96.1%
Zoom        85.5%      80.7%     89.8%           92.8%
Remember: every additional 1% in accuracy means 3.5 minutes less of clean-up time (for a 30 minute clip).

So that’s the basics of testing different A.I.s! Here are the clips we used for the smaller test, to give you an idea of what’s meant by ‘High Quality’ or ‘Poor Quality’. The more jargon, background noise, accents, soft speaking, etc. there is in a clip, the harder it’ll be for the A.I. to produce good results. And you can hear that below. You’ll notice that all the clips are 1 to 1.5 minutes long. We’ve found that as long as the clip is representative of the whole clip it’s taken from, you don’t get any additional info from transcribing the whole thing. An hour-long clip will produce similar results to one minute, as long as that one minute has the same speakers, jargon, background noise, etc.

Any questions or feedback, please leave a note in the Comments section! (or email us at cs@digitalanarchy.com)

‘Art’ test clip
‘Interview’ test clip
‘Jung’ test clip
‘NYU’ test clip
‘LSD’ test clip
‘Zoom’ test clip

Removing flicker from concert videos (or anything with stage lights)

LED lights are everywhere, and nowhere more so than at concerts and other performances. Since they are a common source of flicker when shot with a video camera, it’s something we get asked about fairly regularly.

Other types of stage lights can also be problematic (especially in slow motion), but LED lights are the more common culprit. You can see this clearly in this footage of a band. It’s a slow motion clip shot with an iPhone… which will shoot a few seconds of regular speed video before switching to slo-mo. So the first five seconds are regular speed and the next five are slo-mo:

To remove the flicker we’re using our plugin Flicker Free, which supports After Effects, Premiere, Final Cut Pro, Resolve, and Avid. You can learn more about it and download the trial version here.

The regular lights are fine, but there are some LED lights (the ones with multiple lights in a hexagon) that are flickering. This happens in both the regular-speed and slow-motion portions of the video. You’ll notice, of course, that the flickering is slower in the slo-mo portion. (Mixed frame rates can sometimes be a problem as well, but not in this case.)

Usually this is something Flicker Free can fix pretty easily, and it does so in this case, but there are a few variables present in this video that can sometimes complicate things: it’s a handheld shot (shaky), there are multiple lights, and there are performers (who, luckily, aren’t moving much).

Handheld shot: The camera is moving erratically. This can be a problem for Flicker Free, and it’s something the Motion Compensation checkbox was specifically designed to deal with (in addition to the Detect Motion settings). However, in this case the camera isn’t moving quickly, which is when this really becomes a problem, so we can get away with only having Detect Motion on. Also… with stage performances there is often a lot of movement from the performers. That’s not a problem here, but if there is a lot of performer movement, you’ll likely need to turn Motion Compensation on.

Motion Compensation increases the render time, so if you don’t need to turn it on, then it’s best not to. But some footage will only be fixable with it on, so if the default settings aren’t working, turn on Motion Compensation.

As is often the case, the default settings (the Rolling Bands preset) work great. This is very common with LED lights, as they produce a certain type of flicker that the preset/default handles very well.

Multiple Lights: It’s possible to have multiple lights in the scene that flicker, and do so at different rates. Flicker Free can usually handle this scenario, but sometimes you need to apply two instances of Flicker Free. If you do this, it’s highly recommended not to use Motion Compensation and either turn Detect Motion off or set it to Fast. If you have Motion Compensation on and use two instances of FF, you’ll get exponentially longer render times and you might run out of memory on the GPU causing a crash.

Slow Motion: Slo-mo footage can really slow the flickering down, requiring you to max out Time Radius. Again, this is a setting that can increase render times, so lower values are better if you can get away with them and they still fix the flicker.

This clip was fairly easy: only one light was an LED and flickering, so the default settings worked great. If the default settings don’t work, there are a few other presets to try: Stage Lights, Projection Screen, etc. But even if those don’t work right off the bat, hopefully this gives you some tips on how to fix even the most challenging videos of performances.

Transcriptive OnBoarding: Where to Find Everything

Welcome to Transcriptive Rough Cutter! This page/video will give you an overview of the UI, so you know where to find everything! The five minute video above will give the quickest explanation. But for those that prefer reading, skip past the video and all the info is below it. Happy Transcribing! (and so much more :-)

We’re going to go over the different areas of Transcriptive Rough Cutter, so you know where to find stuff, but we’ll leave deeper explanations for other tutorials. So let’s get started.

Transcriptive Rough Cutter, Editing video with text and searching your entire Premiere project

The first thing to know is the Clip Mode switch. If this is on, then Transcriptive is going to look at the Project panel, and as you select different clips, it will show the transcript for each clip. If you have this turned off, it will be in Sequence Mode and the transcript will be for whichever sequence is currently active. So as you switch to different sequences, it will load the transcript for that sequence.

Transcriptive Rough Cutter Top Navigation and Features

Next up is getting the transcript itself. If you already have a transcript in text format, you can click on the Import dialog and import the transcript in a variety of different ways. But if you need to get the transcript, then you click on Transcribe.

Getting a transcript with Transcriptive

This will allow you to select which speech service to go with. Alignment is also here: if you’ve imported a text-based transcript that does not have timecode, Alignment will analyze the audio and add timecode to your imported text file. You can also select the language. Glossary allows you to type in names or terminology that the A.I. might not necessarily know, which can definitely improve accuracy if you have a lot of unusual terms. So the Transcribe dialog is incredibly important.

The Auto Load button that’s next to Transcribe tells Transcriptive Rough Cutter to automatically load the transcript for whatever you click on. So if you’re down in the Project panel clicking on different clips, Transcriptive will load the transcript for those clips. If you want to lock it to just one clip/sequence… say you’re working in a sequence and always want to see the transcript for that sequence… turn on Sequence Mode (Clip Mode = Off) and set Auto Load = Off. No matter what you do, that transcript will stay loaded. So as you move to different sequences or clips, the transcript that was shown when you turned Auto Load off is always shown.

Editing with text in Transcriptive Rough Cutter

And of course, you can edit text. It works much like a word processor, except every word has timecode. You can just click on words and start typing away. Because every word has timecode, as you click on different words it will move to different points on the Sequence/Clip timeline. You can also change speakers by clicking on the speaker name and selecting from the dropdown menu. On the right-hand side you’ll see three icons. This is where you add Comments, Delete entire blocks of text, or Strike Through text for use with the Rough Cut feature.

And that brings us to the main menu, where you can Manage Speakers. Click on that and it will allow you to add speakers or change their names, and all of that will then be reflected in the dropdown you see in the Text Edit window.

Sync Transcript to Sequence Edit is an important item. If you edit your sequence and delete stuff, it will become out of sync with the transcript. The transcript will have stuff that’s no longer in your sequence. If you select Sync Transcript To Sequence Edit, Transcriptive will go through your sequence and rebuild the transcript to match what you’ve edited out.

You can also Batch Files, which is very important if you have lots of clips that you want to get transcribed. Batch is a very easy way of processing as many clips as you want at the same time. There are multiple ways you can do it: Batch Project will transcribe anything selected in the Project panel, and with Batch Files/Folder you select files from the operating system (kind of like importing files into Premiere). If you need to do a lot of transcribing, this is a VERY important feature.

At the bottom of the Transcriptive Rough Cutter panel you have the ability to Export files, like a text file of the entire transcript. And… there’s the Rough Cut button, which is a key feature of Transcriptive Rough Cutter. It will take a transcript that you have edited and build a sequence based on that transcript. So if you delete text, it will delete that portion of the clip or sequence from the new Rough Cut. This is a feature that requires a bit of explaining, so I definitely encourage you to check out the in-depth tutorial on Rough Cut.

You also have the ability to search. Search is one of the most powerful features of Transcriptive Rough Cutter, along with Power Search, which is the other panel that ships with Transcriptive. Here you can search the entire transcript that’s in the Transcriptive Rough Cutter panel. You can also replace words, but the real power comes from Power Search, which can search the entire project. So if you’re looking for something and you’re not quite sure which transcript or clip it’s from, you can type in the term and get a bunch of search results, much like any other search engine. When you click on one of those results, it will open up that clip and jump right to where that happens. It works for sequences as well. And when you load that up, the transcript will appear in Transcriptive Rough Cutter. Since there’s nothing else like this anywhere in Premiere itself, this is a really powerful way of making use of your transcripts.

If you’d like to share the transcript with another Transcriptive Premiere user, or even a Transcriptive.com user, you can go up to the T icon. That’s where you can start sharing the sequence or clip with another user. And then you have your job management list, which is all of the jobs that you’ve had done. You can reload them if you need to.

And last but not least is the account menu. Here you can buy additional minutes and get to your account settings. Most importantly, this will take you to your dashboard, which shows all your charges, all your invoices, upcoming subscription payments… pretty much everything involving your account. So that’s pretty much the onboarding tutorial. That’s the basics.

Like I said, we have lots of other tutorials that go in depth into all of these features, but this was just a brief overview of where everything is and where to find it. So hopefully you enjoyed that. And like I said, definitely check out the other tutorials and you’ll be up and running.

The Rule of Thirds in Practice

Most of us have heard of the rule of thirds. And probably for most readers of this blog it’s second nature by now. But for those somewhat new to photo/videography or if you just want to see how someone else uses/breaks the rules, I figured a serendipitous photoshoot recently would be a good example.

What is the Rule of Thirds? It’s splitting an image into three parts vertically and horizontally. This can help you create a more pleasing image composition. And like all rules, it’s meant to be bent and broken. Let’s talk about how to use it.

Sometimes you use the rule of thirds while you’re shooting. If you’re doing a portrait, you can pose your model, frame her in camera and take the shot.

Personally, I tend to be more of a wildlife photographer. Birds and whales don’t usually pose for you… you’re just trying to take the shot as fast as f’ing possible while you have the chance! You can crop the photo later to make it fit the rule of thirds (or not).

Recently I was sitting on the balcony of my house and a hawk decided to perch himself right in front of me on a neighbor’s house. So I grabbed the camera for an impromptu photoshoot:

Those are the cropped ‘rule of thirds’ shots. Here are the original shots (which are cool in their own way, showing more of the environment):

Let’s talk about why I cropped them the way I did. First off, look at the cropped images. Did you notice that I’m trying to tell a small story with how they’re cropped? (or, at least, framing things so there’s some context to the sequence of images)

Let’s take a look at the first image. One of the things that makes the Rule of Thirds compelling is that asymmetrical compositions generally look better. But not always! Here we have the ‘hero’ shot of our hawk. I’m introducing him and, as such, he’s pretty much in the center of the frame.

In the next picture, he turns his head to look at something. Now he’s off center, edging toward the left and down. We’re creating space off to the right side of the image. Where is he looking? What is he looking at? I want the viewer to be as curious about that as the hawk is. So I want to add space in the image so you can follow his gaze.

Now he’s preparing to take off! His wings are up and he’s getting ready to fly. I want to add even more space to the right and above him. So I crop the image so he’s split down the middle by the first third line. Because his wings are raised, he’s centered vertically, but he’s still weighted towards the lower third. Hopefully your eye is drawn to where he might be going.

Lift Off! His wings come down and he levitates in preparation to fly. Again, I want the greenery in the shot, so he’s a little lower in the frame than is ideal, but it works. He’s about to take off so having a lot of space up and in the direction he’s going to be flying is all good. (I love this shot… birds of prey are just so amazing) However, usually you don’t want your subject quite so close to the edge. I think it’s a great shot, but you could definitely make the case there’s too much space in the rest of the image. If the trees were closer, I would’ve cropped it differently, but to get them in the image, I had to stretch it a bit towards the upper, right corner. With wildlife you don’t always get to pick your shot!

And he’s off! And… so is this image. Why is this not a great composition? The hawk really should be centered more vertically. He’s a little low in the frame. To correct it, I’d at least move where the wing bends into the upper third.

Bonus tip: the glaring issue with all these photos… well, hopefully you can see it. It’s something easily fixed with Photoshop’s Content Aware Fill. And if you can’t see the problem, perhaps its absence will give you a clue:

So hopefully that’s a good intro on how to use the rule of thirds. It’s really about drawing the eye in the direction the subject is looking or heading towards. And, of course, it’s not a hard and fast ‘rule’. Just one way to think about composing your images.

Transcription Accuracy: Adobe Sensei vs Transcriptive A.I.

Speechmatics, one of the A.I. engines we support, recently released a new speech model which promised much higher accuracy. Transcriptive Rough Cutter now supports that if you choose the Speechmatics option. Also, with Premiere now able to generate transcripts with Adobe Sensei, we get a lot of questions about how it compares to Transcriptive Rough Cutter.

So we figured it was a good time to do a test of the various A.I. speech engines! (Actually we do this pretty regularly, but only occasionally post the results when we feel there’s something newsworthy about them)

You can read about the A.I. testing methodology in this post if you’re interested or want to run your own tests. But, in short, Word Error Rate is what we pay most attention to. It’s simply:

NumberOfWordsMissed / NumberOfWordsInTranscript

where NumberOfWordsMissed = the number of words in the corrected transcript that the A.I. failed to recognize. If instead of  the word ‘Everything’ the A.I. produced ‘Even ifrits sing’, it still missed just one word. In the reverse situation, it would count as three missed words.

We also track punctuation errors, but those can be somewhat subjective, so we put less weight on that.

What’s the big deal between 88% and 93% Accuracy?

Every 1% of additional accuracy means roughly 15% fewer incorrect words. A 30 minute video has, give or take, about 3000 words. So with Speechmatics you’d expect to have, on average, 210 missed words (7% error rate), and with Adobe Sensei you’d have 360 missed words (12% error rate). Every 10 wrong words adds about 1:15 to the clean-up time. So it’ll take about 18 minutes more to clean up that 30 minute transcript if you’re using Adobe Sensei.

Every additional 1% in accuracy means 3.5 minutes less of clean-up time (for a 30 minute clip). So small improvements in accuracy can make a big difference if you (or your Assistant Editor) need to clean up a long transcript.
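
If you want to plug in your own numbers, here’s that math as a rough Python sketch. The ~100 words per minute of speech and 1:15 of clean-up per 10 wrong words are the assumptions from this post; your footage and typing speed will vary.

def extra_cleanup_minutes(accuracy_a, accuracy_b, clip_minutes=30,
                          words_per_minute=100, cleanup_min_per_10_errors=1.25):
    # Estimate how many more minutes of clean-up the less accurate transcript needs.
    words = clip_minutes * words_per_minute
    errors_a = (1 - accuracy_a) * words
    errors_b = (1 - accuracy_b) * words
    return abs(errors_a - errors_b) / 10 * cleanup_min_per_10_errors

# Adobe Sensei (~88%) vs. Speechmatics (~93%) on a 30 minute clip:
print(extra_cleanup_minutes(0.88, 0.93))   # ~18.75 extra minutes of clean-up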

Of course, the above are averages. A really bad recording with lots of words that are difficult to make out will take longer to clean up than a clip with great audio, where you’re just fixing words that are clear to you but that the A.I. got wrong. But the above numbers do give you some sense of what the accuracy value means back in the real world.

The Test Results!

All the A.I.s are great at handling well-recorded audio. If the talent is professionally mic’d and they speak well, you should get 95% or better accuracy. It’s when the audio quality drops off that Transcriptive and Speechmatics really shine (and why we include them in Transcriptive Rough Cutter). And I 100% encourage you to run your own tests with your own audio. Again, this post outlines exactly how we test and you can easily do it yourself.

Speechmatics New is the clear winner, with a couple of first-place finishes, no last-place finishes, and a 93.3% accuracy rate overall (you can find the spreadsheet with results and the audio files further down the post). One caveat… Speechmatics takes about 5x as long to process. So a 30 minute video will take about 3 minutes with Transcriptive A.I. and 15-20 minutes with Speechmatics. If you select Speechmatics in Transcriptive, you’re getting the new A.I. model.

Adobe Sensei is the least accurate, with two last-place finishes and no first places, for 88.3% accuracy overall. Google, which is another A.I. service we evaluate but currently don’t use, is all over the place. Overall it’s 80.6%, but if you remove the worst and best examples, it’s a more pedestrian 90.3%. No idea why it failed so badly on the Bill clip, but it’s a trainwreck. The Bible clip is from a public domain reading of the Bible, which I’m guessing was part of Google’s training corpus. You rarely see that kind of accuracy unless the A.I. was trained on it. Anyways, this inconsistency is why we don’t use it in Transcriptive.

Here are the clips we used for this test:

Bill Clip
Zoom clip
Bible clip
Scifi clip
Flower clip

Here’s the spreadsheet of the results (SM = Speechmatics, Green means best performance, Orange means worst). Again, mostly we’re focused on the Word Accuracy. Punctuation is a secondary consideration:

How do We Test Speech-to-Text Services for Accuracy?

Transcriptive-A.I. doesn’t use a single A.I. service on the backend. We don’t have our own A.I., so like most companies that offer transcription, we use one of the big providers (Google, Watson, Speechmatics, etc.).

We initially started off with Speechmatics as the ‘high quality’ option. And they’re still very good (as you’ll see shortly), but not always. However, since we had so many users that liked them, we still give you the option to use them if you want.

However, we’ve now added Transcriptive-A.I. This uses whatever A.I. service we think is best. It might use Speechmatics, but it might also use one of a dozen other services we test.

Since we encourage users to test Transcriptive-A.I. against any service out there, I’ll give you some insight on how we test the different services and choose which to use behind the scenes.

Usually we take 5-10 audio clips of varying quality that are about one minute long: some very well recorded, some really poorly recorded, and some in between. The goal is to see which A.I. works best overall and which might work better in certain circumstances.

When grading the results, I save out a plain text file with no timecode, speakers, or anything else. I’m only concerned about word accuracy and, to a lesser degree, punctuation accuracy. Word accuracy is the most important thing (IMO). For this purpose, Word 2010 has an awesome Compare function to see the difference between the master transcript (human corrected) and the A.I. transcript. Newer versions of Word might be better for comparing legal documents, but Word 2010 is the best for comparing A.I. accuracy.

Also, let’s talk about the rules for grading the results. You can define what an ‘error’ is however you want, but you have to be consistent about how you apply the definition. Applying the rules consistently matters more than the rules themselves. So here are the rules I use:

1) Every word in the Master transcript that is missed counts as one error. So ‘a reed where’ for ‘everywhere’ is just one error, but ‘everywhere’ for ‘every hair’ is two errors.
2) ah, uh, um are ignored. Some ASRs include them, some don’t. I’ll let ‘a’ go, but if an ‘uh’ should be ‘an’ it’s an error.
3) Commas are 1/2 error and full stops (period, ?) are also 1/2 error but there’s an argument for making them a full error.
4) If words are correct but the ASR tries to separate/merge them (e.g. ‘you’re’ to ‘you are’, ‘got to’ to ‘gotta’, ‘because’ to ’cause) it does not count as an error.

That’s it! We then add up the errors, divide that by the number of words that are in the clip, and that’s the error rate!
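
If it helps to see the arithmetic spelled out, here’s a minimal Python sketch. The word and punctuation tallies still come from comparing the two transcripts by hand using the rules above; this just turns those tallies into a rate.

def error_rate(word_errors, punctuation_errors, total_words):
    # Rule 3: commas and full stops count as half an error each.
    weighted_errors = word_errors + 0.5 * punctuation_errors
    return weighted_errors / total_words

# e.g. 18 missed words and 6 punctuation errors in a 1000-word clip:
rate = error_rate(18, 6, 1000)
print(f"error rate {rate:.1%}, accuracy {1 - rate:.1%}")   # -> error rate 2.1%, accuracy 97.9%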

Upgraded to FCP 10.6? Please Update Your Plugins.

Apple just launched Final Cut Pro 10.6, which has some cool new features, like Face Tracking. Unfortunately, they also introduced a bug or two, one of which prevents our plugins from registering. So… we updated all our plugins to work around the issue. Please go here: https://digitalanarchy.com/demos/psd_mac.html

And you can download the updated version of any plugin you own. You only need to do this if you’re doing a fresh install of the plugins. Updating FCP should not cause the problem. But if you’re re-installing the plugin, then you might need the updated version.

The issue is that the Yellow Question Mark at the top of the Inspector doesn’t open a dialog when it’s clicked. It should open up our registration dialog (or about box) as shown here:

Registration dialog for Beauty Box or any Digital Anarchy plugin

So if you’re clicking on the Question Mark to register and nothing happens… Please update your plugins!

These are free updates if you own the most recent version of the plugin.

If you own an older version and don’t want to upgrade, the licensing dialog DOES work in Motion. It’s only an FCP problem. So if you have Motion, you can apply the plugin there and register it.

Adobe Transcripts and Captions & Transcriptive: Differences and How to Use Them Together

Adobe just released a big new Premiere update that includes their Speech-to-Text service. We’ve had a lot of questions about whether this kills Transcriptive or not (it doesn’t… check out the new Transcriptive Rough Cutter!). So I thought I’d take a moment to talk about some of the differences, similarities, and how to use them together.

The Adobe system is basically what we did for Transcriptive 1.0 in 2017. Transcriptive Rough Cutter, meanwhile, has evolved into an editing and collaboration tool, not just something you use to get transcripts.

The Adobe solution is really geared towards captions. That’s the problem they were trying to solve and you can see this in the fact you can only transcribe sequences. And only one at a time. So if you want captions for your final edit, it’s awesome. If you want to transcribe all your footage so you can search it, pull out selects, etc… it doesn’t do that.

So, in some ways the Transcriptive suite (Transcriptive Rough Cutter, PowerSearch, TS Web App) is more integrated than Adobe’s own service, allowing you to transcribe clips and sequences, and then search, share, or assemble rough cuts with those transcripts. There are a lot of ways using text in the editing process can make life a lot easier for an editor, beyond just creating captions.

Sequences Only

Adobe's Text panel for transcribing sequences

The Adobe transcription service only works for Sequences. It’s really designed for use with the new Caption system they introduced earlier this year.

Transcriptive can transcribe media and sequences, giving the user a lot more flexibility. One example: they can transcribe media first, use that to find soundbites or information in the clips and build a sequence off that. As they edit the sequence, add media, or make changes they can regenerate the transcript without any additional cost. The transcripts are attached to the media… so Transcriptive just looks for which portions of the clips are in the sequence and grabs the transcript for that portion.

Automatic Rough Cut

Rough Cut: There are two ways of assembling a ‘rough cut’ with Transcriptive Rough Cutter. The first is what we’re calling Selects, which is basically what I mention above in the ‘Sequences Only’ paragraph: search for a soundbite, set In/Out points in the transcript of the clip with that soundbite, and insert that portion of the video into a sequence.

Then there’s the Rough Cut feature, where Transcriptive RC will take a transcript that you edit and assemble a sequence automatically: creating edits where you’ve deleted or struck through text and removing the video that corresponds to those text edits. This is not something Adobe can do or has made any indication they will do, so far anyways.

Editing with text in Premiere Pro and Transcriptive Rough Cutter

Collaboration with The Transcriptive Web App

One key difference is the ability to send transcripts to someone that does not have Premiere. They can edit those transcripts in a web browser and add comments, and then send it all back to you. They can even delete portions of the text and you can use the Rough Cut feature to assemble a sequence based on that.

Searching Your Premiere Project

PowerSearch: This separate panel (but included with TS) lets you search every piece of media in your Premiere project that has a transcript in metadata or in clip/sequence markers. Premiere is pretty lacking in the Search department and PowerSearch gives you a search engine for Premiere. It only works for media/sequences transcribed by Transcriptive. Adobe, in their infinite wisdom, made their transcript format proprietary and we can’t read it. So unless you export it out of Premiere and then import it into Transcriptive, PowerSearch can’t read the text unfortunately.

Easier to Export Captions

Transcriptive RC lets you output SRT, VTT, SCC, MCC, SMPTE, or STL just by clicking Export. You can then use these in any other program. With Adobe you can only export SRT, and even that takes multiple steps. (You can get other file formats when you export the rendered movie, but you have to render the timeline to have it generate those.)

I assume Adobe is trying to make it difficult to use the free Adobe transcripts anywhere other than Premiere, but I think it’s a bit shortsighted. You can’t even get the caption file if you render out audio… you have to render a movie. Of course, the workaround is just to turn off all the video tracks and render out black frames. So it’s not that hard to get the captions files, you just have to jump through some hoops.

Sharing Adobe Transcripts with Transcriptive Rough Cutter and Vice Versa

I’ve already written a blog post specifically showing how to use Adobe transcripts with Transcriptive. But, in short… you can use Adobe transcripts in Transcriptive by exporting the transcript as plain text and using Transcriptive’s Alignment feature to sync the text up to the clip or sequence. Every word will have timecode just as if you’d transcribed it in Transcriptive. (This is a free feature.)

AND… If you get your transcript in Transcriptive Rough Cutter, it’s easy to import it into the Adobe Caption system… just Export a caption file format Premiere supports out of Transcriptive RC and import it into Premiere. As mentioned, you can Export SRT, VTT, MCC, SCC, SMPTE, and STL.

Two A.I. Services

Transcriptive Rough Cutter gives you two A.I. services to choose from, allowing you to use whatever works best for your audio. It is also usually more accurate than Adobe’s service, especially on poor quality audio. That said, the Adobe A.I. is good as well, but on a long transcript even a percentage point or two of accuracy will add up to a significant amount of time saved cleaning up the transcript.

Using Adobe Premiere Pro Transcripts and Captions with Transcriptive (updated for Premiere 2022)

In this post we’ll go over how to use transcripts from Premiere’s Text panel with Transcriptive. This would be easier if Adobe exported the transcript with all the timecode data. We’ve asked them to do this, but it will probably carry more weight coming from users, so please feel free to request it from them. Currently it’s not hard, but it does require a couple more steps than it should.

Anyways, once you export the Adobe transcript, you’ll use Transcriptive’s Alignment feature to convert it! Easy and free.

Also, if you’re trying to get captions out of Transcriptive and into Premiere, you can do that with any version of Premiere. Since that’s easy (just Export out of Transcriptive and Import into Premiere), I’ll cover it last.

Getting Transcripts from Adobe Sensei (Premiere’s Text panel) into Transcriptive

You can use either SRTs or Plain Text files to get the transcript into Transcriptive. Usually once the transcript is in Transcriptive you’ll want to run Alignment on it (which is free). This will sync the text up to the audio and give you per-word timecode. If you do this, exporting as a plain text file is better as you’ll be able to keep the speakers. (Adobe SRT export doesn’t support speakers)

However, SRTs have more frequent timestamps, so if Alignment doesn’t work or you want to skip that step, SRTs are better. The per-word timecode may not be perfect, though, as Transcriptive will need to interpolate between timestamps (the sketch below gives a rough idea of what that means).
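
For anyone curious, here’s a rough Python sketch of what interpolating between timestamps means: the words of a cue are simply spread evenly between the cue’s start and end times. This is illustrative only, not Transcriptive’s actual algorithm (Alignment analyzes the audio rather than guessing).

def interpolate_cue(text, start_sec, end_sec):
    # Spread the words of one SRT cue evenly between the cue's start and end times.
    words = text.split()
    step = (end_sec - start_sec) / len(words)
    return [(word, round(start_sec + i * step, 2)) for i, word in enumerate(words)]

print(interpolate_cue("The quick brown fox", 2.3, 9.1))
# -> [('The', 2.3), ('quick', 4.0), ('brown', 5.7), ('fox', 7.4)]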

One advantage of SRTs is that you can use the Transcriptive Adobe Importer, which will import the SRT and automatically align it, making it a bit easier. But it’s not that big of a deal to manually run Alignment. The Importer does not support text files.

Getting the transcript in Premiere

1. Open up the Text panel from the Window menu.

2. You should see three options, one of which is Transcribe Sequence

You can only transcribe sequences with Adobe’s service. If you want to transcribe individual clips, you’ll still need to use Transcriptive. (or get transcripts by dropping each one into a different sequence)

3. With your sequence selected, click the Transcribe Sequence button and Premiere will process it and return the transcript! (This can take a few minutes)

Exporting a Text File in Premiere 2022

Once the transcript is back, go to the menu in the upper, right corner and select Export to Text File. In Premiere 2022 you can do this with either Transcript or Captions selected. In 2021, this only works from Captions. (Export Transcript saves to a proprietary Adobe format that is not readable by third-party plugins, so it has to be a Text File.)

Exporting a Text File in Premiere 2021

Step 1: In Premiere 2021, once the transcript is back, you need to turn it into captions. You cannot export it from the Transcript tab as you can in Premiere 2022. So click the Caption button to convert the transcript into captions.

Step 2: Premiere will create the captions. From the Caption tab, you can export as SRT or Plain Text. Select ‘Export to text file’ and save the file.

Exporting SRTs in Premiere 2022 and 2021

It is basically the same as the steps above for exporting a Text file in Premiere 2021. In both 2022 and 2021 you need to turn the transcript into captions and then Export to SRT File from the Caption menu. (so in Step 2 above, do that instead of Export to Text File)

Note that in Premiere 2022 the ‘create captions’ button is the closed caption icon.

Back in Transcriptive Rough Cutter

1. Going back to Transcriptive, we can now import the Plain Text file. With your sequence or clip selected, click Transcriptive’s Import button and select the Plain Text file.

The settings in Import don’t really matter that much, unless you have Speakers. Since we’re going to use Alignment to get the per-word accurate timecode, the Import timecode settings are mostly moot.

That should bring the text into Transcriptive.

2. Click on the Transcribe Button. When the Transcribe dialog appears, select Alignment from the ‘Transcribe With’ dropdown. This is done offline and it’s free for English! There is an option to align using the A.I. services. However, those are not free. But if you want to align languages other than English that’s the only option currently.

3. Click OK… and Transcriptive will start processing the text and audio of the sequence, adding accurate timecode to the text, just as if you’d transcribed it from scratch!

So that’s how you get the Adobe transcription into Transcriptive!

(If Adobe had just added a feature to export the transcript with the timecode it already had in the Text panel… none of the above would be necessary. But here we are. So you should put in a feature request for that!)

Again, Adobe’s transcription service only works for sequences. So if you have a bunch of clips or media you want to transcribe, the easiest way is to use our Batch Transcribe function. And while Transcriptive’s transcription isn’t free, it’s only $.04/min ($2.40/hr). However, as mentioned, you can drop each clip into a sequence and transcribe them individually that way. Once you’ve done that, you can use our Batch Alignment feature to get all the transcripts into Transcriptive!

Getting Captions from Transcriptive into Premiere’s Caption System

This is an easy one. You can export a variety of different caption formats from Transcriptive: SRT, MCC, SCC, EBL, SMPTE, and more.

1. Click the Export button in Transcriptive. From there you can select the caption format you want to use. SRT and SCC are the common ones.

2. Back in Premiere, Import the caption file into your project. Premiere will automatically recognize it as a caption file. When you drop it onto your sequence, it’ll automatically load into the Caption tab of Premiere’s Text panel.

Easy peasy. That’s all there is to it!

Transcriptive Keyboard Shortcuts

Keyboard Shortcuts are a huge part of Transcriptive and can make working in it much faster/easier. These are for Transcriptive 2.x/3.x. If you’re still using 1.x, please check the manual.

Ctrl + Space: Play / Stop

Undo: Ctrl + Z (Mac and PC)
Redo: Ctrl + Shift + Z

MAC USERS: Mac OS assigns Cmd+Z to the application (Premiere) and we can’t change that.

Editing text:

Ctrl + Left Arrow – Previous Word  |  Ctrl + Right Arrow – Next Word

Merging/Splitting Lines/Paragraphs:
Ctrl + Shift + Up OR [Delete]: Merge Line/paragraph with line above.
Ctrl + Shift + Down OR [Enter]: Split Line/paragraph into two lines.
(These behave slightly differently. ‘Control+Shift+up’ will merge the two lines together no matter where the cursor is. If you’re trying to combine a bunch of lines together, this is very fast. [Delete] uses the cursor position, which has to be at the beginning of the line to merge the lines together.)

Up or Down Arrow: Change Capitalization

Ctrl + Backspace: Delete Word | Ctrl + Delete: Delete Word

Ctrl + Up: Previous Speaker | Ctrl + Down: Next Speaker

Editing Video (Clip Mode only):

Control + i: Set In Point in Source panel
Control + o: Set Out Point in Source panel
Control + , (comma): Insert video segment into active sequence (this does the same thing as , (comma) in the Source panel)
Control + u : Clear In & Out Points (necessary for sharing)

Converting an SRT (or VTT) Caption File to Plain Text File for Free

This is a quick blog post showing you how to use the free Transcriptive trial version to convert any SRT caption file into a text file without timecode or line numbers (which SRTs have). You can do this on Transcriptive.com or if you have Premiere, you can use Transcriptive for Premiere Pro.

This need can come up when you have a caption file (SRT or VTT) but don’t have access to the original transcript. SRT files tend to look like this:

1
00:00:02,299 --> 00:00:09,100
The quick brown fox

2
00:00:09,100 --> 00:00:17,200
hit the gas pedal and

And you might want normal human-readable text so someone can read the dialog without the line numbers and timecode. So this post will show you how to do that with Transcriptive for free!
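
(For context, here’s roughly what that conversion boils down to if you were to script it yourself: a minimal Python sketch that assumes a well-formed SRT file and uses a placeholder path, and is not how Transcriptive does it internally. The rest of this post shows how to get the same result with the free Transcriptive trial, no scripting required.)

import re

def srt_to_text(srt_path):
    # Drop cue numbers and timecode lines, keep only the caption text,
    # and join the short caption lines into one readable block.
    kept = []
    with open(srt_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.isdigit():
                continue
            if re.match(r"\d{2}:\d{2}:\d{2},\d{3} -->", line):
                continue
            kept.append(line)
    return " ".join(kept)

print(srt_to_text("captions.srt"))   # -> "The quick brown fox hit the gas pedal and ..."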

We are, of course, in the business of selling software. So we’d prefer you bought Transcriptive BUT if you’re just looking to convert an SRT (or any caption file) to a text file, the free trial does that well and you’re welcome to use it. (btw, we also have some free plugins for After Effects, Premiere, FCP, and Resolve HERE. We like selling stuff, but we also like making fun or useful free plugins)

Getting The Free Trial License

As mentioned, this works for the Premiere panel or Transcriptive.com, but I’ll be using screenshots from the panel. So if you’re using Transcriptive.com it may look a little bit different.

You do need to create a Transcriptive account, which is free. When the panel first pops up, click the Trial button to start the registration process:

Click the Trial button to start the registration process
You then need to create your account, if you don’t have one. (If you’re using Transcriptive.com, this will look different. You’ll need to manually select the ‘free’ account option.)

Transcriptive Account Creation
Importing the SRT

Once you register the free trial license, you’ll need to import the SRT. If you’re on Transcriptive.com, you’ll need to upload something (could be 10sec of black video, doesn’t matter what, but there has to be some media). If you’re in Premiere, you’ll need to create a Sequence first, make sure Clip Mode is Off (see below) and then you can click IMPORT.

Importing an SRT into Premiere
Once you click Import, you can select SRT from the dropdown. You’ll need to select the SRT file using the file browser (click the circled area below). Then click the Import button at the bottom.

You can ignore all the other options in the SRT Import Window. Since you’re going to be converting this to a plain text file without timecode, none of the other stuff matters.

SRT Import options in Transcriptive

After clicking Import, the Transcriptive panel will look something like this. The text from the SRT file along with all the timecode, speakers, etc:

An editable transcript in Transcriptive


Exporting The Plain Text File

Alright… so how do we extract just the text? Easy! Click the Export button in the lower, left corner. In the dialog that gets displayed, select Plain Text:
Exporting a plain text file in Premiere Pro

The important thing here is to turn OFF ‘Display Timecode’ and ‘Include Speakers’. This will strip out any extra data that’s in the SRT and leave you with just the text. (After you hit the Export button)

That’s it!

Ok, well, since caption files tend to have lines that are 32 characters long you might have a text file that looks like this:

The quick brown fox
hit the gas pedal and

If you want that to look normal, you’ll need to bring it into Word or something and replace the Paragraphs with a Space like this:

replace

And that will give you:

The quick brown fox hit the gas pedal and
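(By the way, if you’d rather script this step, or you have a pile of SRTs to deal with, the conversion really just boils down to ‘throw away the cue numbers and timecode lines, then join the caption lines with spaces’. Here’s a minimal Python sketch of that idea; the filenames are just placeholders:)

```python
def srt_to_text(srt_path, txt_path):
    """Strip cue numbers and timecodes out of an SRT and join the caption lines."""
    with open(srt_path, encoding="utf-8") as f:
        lines = [line.strip() for line in f]

    kept = []
    for line in lines:
        if not line:            # blank separators between cues
            continue
        if line.isdigit():      # cue numbers (1, 2, 3...)
            continue
        if "-->" in line:       # timecode lines
            continue
        kept.append(line)       # actual caption text

    with open(txt_path, "w", encoding="utf-8") as f:
        f.write(" ".join(kept))  # one long, human readable line of text

# srt_to_text("MyCaptions.srt", "MyCaptions.txt")   # hypothetical filenames
```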

And now you have human readable text from an SRT file! A few steps, but pretty easy. Obviously there are lots of other things you can do with SRTs in Transcriptive, but converting the SRT to a plain text file is one that can be done with the free trial. As mentioned, this works with VTT files as well.

So grab the free trial of Transcriptive here and you can do it yourself! You can also request an unrestricted trial by emailing cs@digitalanarchy.com. While this SRT to Plain Text functionality works fine, there are some other limitations if you’re testing out the plugins for transcripts or editing the text.

A.I. Speech-to-Text: How to make sure your data isn’t being used for training

We get a fair number of questions from Transcriptive users who are concerned that the A.I. is going to use their data for training.

First off, in the Transcriptive preferences, if you select ‘Delete transcription jobs from server’ your data is deleted immediately. This will delete everything from the A.I. service’s servers and from the Digital Anarchy servers. So that’s an easy way of making sure your data isn’t kept around and used for anything.

However, generally speaking, the A.I. services don’t get more accurate with user-submitted data, partially because they aren’t getting the ‘positive’ or corrected transcript.

When you edit your transcript we aren’t sending the corrections back to the A.I. (some services are doing this… e.g. if you correct YouTube’s captions, you’re training their A.I.)

So the audio by itself isn’t that useful. What the A.I. needs in order to learn is the audio file, the original transcript AND the corrected transcript. So even if you don’t have the preference checked, it’s unlikely your audio file will be used for training.

This is great if you’re concerned about security BUT it’s less great if you really WANT the A.I. to learn. For example, I don’t know how many videos I’ve submitted over the last 3 years saying ‘Digital Anarchy’. And still to this day I get: Dugal Accusatorial (seriously), Digital Ariki, and other weird stuff. A.I. is great when it works, but sometimes… it definitely does not work. And people want to put this into self-driving cars? Crazy talk right there.

 If you want to help the A.I. out, you can use the Speech-to-Text Glossary (click the link for a tutorial). This still won’t train the A.I., but if the A.I. is uncertain about a word, it’ll help it select the right one.

How does the glossary work? The A.I. analyzes a word sound and then comes up with possible words for that sound. Each word gets a ‘confidence score’. The one with the highest score is the one you see in your transcript. In the case above, ‘Ariki’ might have had a confidence of .6 (on a scale of 0 to 1, so .6 is pretty low) and ‘Anarchy’ might have been .53. So my transcript showed Ariki. But if I’d put Anarchy into the Glossary, then the A.I. would have seen the low confidence score for Ariki and checked if the alternatives matched any glossary terms.
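To make that concrete, here’s a purely hypothetical sketch of that kind of re-ranking in Python. This is not Transcriptive’s actual code, and the 0.8 confidence threshold is made up for illustration:

```python
def pick_word(candidates, glossary, threshold=0.8):
    """candidates: the A.I.'s guesses as (word, confidence) pairs, best first.
    If the top guess is low-confidence and an alternative is in the glossary,
    prefer the glossary term."""
    best_word, best_conf = candidates[0]
    if best_conf >= threshold:
        return best_word                 # the A.I. is confident, leave it alone
    for word, conf in candidates:
        if word.lower() in glossary:
            return word                  # an alternative matches the glossary
    return best_word                     # nothing matched; keep the best guess

glossary = {"anarchy"}
candidates = [("Ariki", 0.60), ("Anarchy", 0.53), ("archery", 0.41)]
print(pick_word(candidates, glossary))   # -> Anarchy
```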

So the Glossary can be very useful with proper names and the like.

But, as mentioned, nothing you do in Transcriptive is training the A.I. The only thing we’re doing with your data is storing it and we’re not even doing that if you tell us not to.

It’s possible that we will add the option in the future to submit training data to help train the A.I. But that’ll be a specific feature and you’ll need to intentionally upload that data.

Dumb A.I., Dumb Anarchist: Using the Transcriptive Glossary

We’ve been working on Transcriptive for like 3 years now. In that time, the A.I. has heard my voice saying ‘Digital Anarchy’ umpteen million times. So, you would think it would easily get that right by now. As the below transcript from our SRT Importing tutorial shows… not so much. (Dugal Accusatorial? Seriously?)

ALSO, you would think that by now I would have a list of terms that I would copy/paste into Transcriptive’s Glossary field every time I get a transcript for a tutorial. The glossary helps the A.I. determine what ‘vocal sounds’ should be when it translates those sounds into words. Uh, yeah… not so much.

So… don’t be like AnarchyJim. If you have words you know the A.I. probably won’t get: company names, industry jargon, difficult proper names (cool blog post on applying player names to an MLB video here), etc., then use Transcriptive’s glossary (in the Transcribe dialog). It does work. (and somebody should mention that to the guy that designed the product. Oy.)

Use the Glossary field in the Transcribe dialog!
Overall the A.I. is really accurate and does usually get ‘Digital Anarchy’ correct. So I get lazy about using the glossary. It is a really useful thing…

A.I. Glossary in Transcriptive

Importing an SRT into Premiere Pro 2020 & 2021

(The above video covers all this as well, but for those who’d rather read, than watch a video… here ya go!)

Getting an SRT file into Premiere is easy!

But then getting it to display correctly is not so easy.

This is mostly fixed in the new caption system that Premiere 2021 has. We’ll go over that in a minute, but first let’s talk about how it works in Premiere Pro 2020. (if you only care about 2021, then jump ahead)

Premiere Pro 2020 SRT Import

1: Like you would import any other file, go to File>Import or Command/Control+I.

2: Select the SRT file you want.

3: It’ll appear in your Project panel.

4: You can drag it onto your timeline as you would any other file.

Now the fun starts.

Enable Captions from the Tool menu
5: From the Tools menu in the Program panel (the wrench icon), make sure Closed Captions are enabled.

5b: Go into Settings and select Open Captions

6: The captions should now display in your Program panel.

7: In many cases, SRT files start off being displayed very small.

You're gonna need bigger captions
Those bigger captions sure look good!

8: USUALLY the easiest way to fix this is to go to the Caption panel and change the point size. You do this by right-clicking on any caption and choosing ‘Select All’. (this is the only way you can select all the captions)

Select all the captions

8b: With all the captions selected, you can then change the Size for all of them. (or change any other attribute for that matter)

9: The other problem that occurs is that Premiere will bring in an SRT file with a 720×486 resolution. Not helpful for a 1080p project. In the lower left corner of the Caption panel you’ll see Import Settings. Click that to make sure it matches your Project settings.

Import settings for captions

Other Fun Tricks: SRTs with Non-Zero Start Times

If your video has an opening without any dialog, your SRT file will usually start with a timecode other than Zero. However, Premiere doesn’t recognize SRTs with non-zero start times. It assumes ALL SRT files start at zero. If yours does not, as in the example below, you will have to move it to match the start of the dialog.

You don’t have to do this with SRTs from Transcriptive. Since we know you’re likely using it in Premiere, we add some padding to the beginning to import it correctly.

Premiere doesn't align the captions with the audio
If your captions start at 05:00, Premiere puts them at 00:00
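If you’re stuck with an SRT from some other tool, you can mimic that padding yourself by prepending a tiny dummy cue at 00:00, so the real captions keep their offsets when Premiere anchors the file at zero. A rough Python sketch, assuming a simple, well-formed SRT (the filenames are placeholders):

```python
def pad_srt(in_path, out_path):
    """Prepend a near-empty cue at 00:00 and renumber the existing cues."""
    with open(in_path, encoding="utf-8") as f:
        original = f.read().strip()

    # A dummy cue at the very start of the file (a single space as its text).
    padding = "1\n00:00:00,000 --> 00:00:00,100\n \n\n"

    renumbered = []
    for block in original.split("\n\n"):        # cues are separated by blank lines
        lines = block.split("\n")
        if lines and lines[0].strip().isdigit():
            lines[0] = str(int(lines[0]) + 1)   # bump each cue number by one
        renumbered.append("\n".join(lines))

    with open(out_path, "w", encoding="utf-8") as f:
        f.write(padding + "\n\n".join(renumbered) + "\n")

# pad_srt("MyCaptions.srt", "MyCaptions_padded.srt")
```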

Importing an SRT file in Premiere 2021: The New Caption System!

(as of this writing, I’m using the beta. You can download the beta by going to the Beta section of Creative Cloud.)

0: If you’re using the beta, you need to enable this feature from the Beta menu. Click on it and select ‘Enable New Captions’.

1: Like you would import any other file, go to File>Import or Command/Control+I.

2: Select the SRT file you want.

3: It’ll appear in your Project panel.

4: You can drag it onto your timeline as you would any other file… BUT

This is where things get different!

4b: Premiere 2021 adds it to a new caption track above the normal timeline. You do need to tell Premiere you want to treat them as Open Captions (or you can select a different option as well)

4c: And Lo! It comes in properly sized! Very exciting.

5: There is no longer a Caption panel. If you want to edit the text of the captions, you need to select the new Text panel (Windows>Text). There you can edit the text, add new captions, etc.

6: To change the look/style of the captions you now need to use the Essential Graphics panel. There you can change the font, size, and other attributes.

Overall it’s a much better captions workflow. From what I’ve seen so far, it works pretty well, but I haven’t used it much. As of this writing it’s still in beta, and regardless, there may be some quirks that show up with heavier use. But for now it looks quite good.

Fixing Flicker in Videos with Lots of Motion – Fast Moving Cameras or Subjects

One of the things Flicker Free 1.0 doesn’t do well is deal with moving cameras or fast moving subjects. This tends to result in a lot of ghosting… echoes from other frames Flicker Free is analyzing as it tries to remove the flicker (no, people aren’t going to stop talking to you on dating apps because you’re using FF). You can see this in the below video as sort of a motion blur or trails.

Flicker Free 2.0 does a MUCH better job of handling this situation. We’re using optical flow algorithms (what’s used for retiming footage) as well as a better motion detection algorithm to isolate areas of motion while we deflicker the rest of the frame. You can see the results side-by-side below:

Better handling of fast motion, called Motion Compensation, is one of the big new features of 2.0. While the whole plugin is GPU accelerated, Motion Compensation will slow things down significantly. So if you don’t need it, it’s best to leave it off. But when you need it… you really need it and the extra render time is worth the wait. Especially if it’s critical footage and it’s either wait for the render or re-shoot (which might not be so easy if it’s a wedding or sports event!).
We’re getting ready to release 2.0 in the next week or so, so this is just a bit of a tease of some of the amazing new tech we’ve rolled into it!

Improving Accuracy of A.I. Transcripts with Custom Vocabulary

The Glossary feature in Transcriptive is one way of increasing the accuracy of the transcripts generated by artificial intelligence services. The A.I. services can struggle with names of people or companies and it’s a bit of a mixed bag with technical terms or industry jargon. If you have a video with names/words you think the A.I. will have a tough time with, you can enter them into the Glossary field to help the A.I. along.

For example, I grabbed this video of MLB’s top 30 draft picks in 2018:

Obviously there are a lot of names that need to be accurate, and since we know what they are, we can enter them into the Glossary.

Transcriptive's Glossary to add custom vocabulary

As the A.I. creates the transcript, words that sound similar to the names will usually be replaced with the Glossary terms. As always, the A.I. analyzes the sentence structure and makes a call on whether the word it initially came up with fits better in the sentence. So if the Glossary term is ‘Bohm’ and the sentence is ‘I was using a boom microphone’, it probably won’t replace the word. However, if the sentence is ‘The pick is Alex boom’, it will replace it, since the word ‘boom’ makes no sense in that sentence.

Here are the resulting transcripts as text files: Using the Glossary and Normal without Glossary

Here’s a short sample to give you an idea of the difference. Again, all we did was add in the last names to the Glossary (Mize, Bart, Bohm):

With the Glossary:

The Detroit Tigers select Casey Mize, a right handed pitcher. From Auburn University in Auburn, Alabama. With the second selection of the 2018 MLB draft, the San Francisco Giants select Joey Bart a catcher. A catcher from Georgia Tech in Atlanta, Georgia, with the third selection of a 2018 MLB draft. The Philadelphia Phillies select Alec Bohm, third baseman

Without the Glossary:

The Detroit Tigers select Casey Mys, a right handed pitcher. From Auburn University in Auburn, Alabama. With the second selection of the 2018 MLB draft, the San Francisco Giants select Joey Bahrke, a catcher. A catcher from Georgia Tech in Atlanta, Georgia, with the third selection of a 2018 MLB draft. The Philadelphia Phillies select Alec Bomb. A third baseman

As you can see it corrected the names it should have. If you have names or words that are repeated often in your video, the Glossary can really save you a lot of time fixing the transcript after you get it back. It can really improve the accuracy, so I recommend testing it out for yourself!
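And if you’ve already gotten a transcript back and just need to patch up a known list of names after the fact, a crude fuzzy-match pass over the text can get you part of the way there. To be clear, this is simple post-processing in Python, not how the A.I.’s glossary works internally, and it will happily ‘fix’ ordinary words that merely resemble a name, so review the output:

```python
import difflib

def fix_names(text, names, cutoff=0.6):
    """Swap words that look like near-misses of known names (e.g. 'Bomb' -> 'Bohm')."""
    fixed = []
    for word in text.split():
        stripped = word.strip(".,!?")
        match = difflib.get_close_matches(stripped, names, n=1, cutoff=cutoff)
        if match and stripped not in names:
            word = word.replace(stripped, match[0])
        fixed.append(word)
    return " ".join(fixed)

names = ["Mize", "Bart", "Bohm"]
print(fix_names("The Philadelphia Phillies select Alec Bomb.", names))
# -> The Philadelphia Phillies select Alec Bohm.
```

It works best when the A.I.’s guess is spelled close to the real name (‘Bomb’ vs. ‘Bohm’); something like ‘Mys’ for ‘Mize’ is too far off for a simple ratio match, which is another reason using the Glossary up front is the better option.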

It’s also worth trying both Speechmatics and Transcriptive-A.I. Both are improved by the glossary, however Speechmatics seems to be a bit better with glossary words. Since Transcriptive-A.I. has a bit better accuracy normally, you’ll have to run a test or two to see which will work best for your video footage.

If you have any questions, feel free to hit us up at cs@digitalanarchy.com!

Transcriptive and 14.x: Why New World Needs to be Off

Update: For Premiere 14.3.2 and above, New World is working pretty well at this point. Adobe has fixed various bugs with it and things are working as they should.

However, we’re still recommending people keep it off if they can. On long transcripts (over 90 minutes or so) New World usually does cause performance problems. But if having it off causes any problems, you can turn it on and Transcriptive should work fine. It just might be a little slow on long transcripts.

Original Post:

There are a variety of problems with Adobe’s new Javascript engine (dubbed New World) that’s part of 14.0.2 and above. Transcriptive 2.0 will now automatically turn it off and you’ll need to restart Premiere. Transcriptive 2.0 will not work otherwise.

If you’re using Transcriptive v1.5.2, please see this blog post for instructions on turning it off manually.

For the most part Transcriptive, our plugin for transcribing in Premiere, is written in Javascript. This relies on Premiere’s ability to process and run that code. In Premiere 14.0.x, Adobe has quietly replaced the very old Extendscript interpreter with a more modern Javascript engine (It’s called ‘NewWorld’ in Adobe parlance and you can read more about it and some of the tech-y details on the Adobe Developer Blog). On the whole, this is a good thing.

However, for any plugin using Javascript, it’s a big, big deal. And, unfortunately, it’s a big, big deal for Transcriptive. There are a number of problems with it that, as of 14.1, break both old and new versions of Transcriptive.

As with most new systems, Adobe fixes a bunch of stuff and breaks a few new things. So we’re hoping that over the next couple of months they work all the kinks out.

There is no downside to turning New World off at this point. Both the old and new Javascript engines are in Premiere, so it’s not a big deal as of now. Eventually they will remove the old one, but we’re not expecting that to happen any time soon.

As always, we will keep you updated.

Fwiw, here’s what you’ll see in Transcriptive if you open it with New World turned on:

Premiere needs to be restarted in order to use Transcriptive
That message can only be closed by restarting Premiere. If New World is on, Transcriptive isn’t usable. So you _must_ restart.

What we’re doing in the background is setting a flag to off. You can see this by pulling up the Debug Console in Premiere. Use Command+F12 (mac) or Control+F12 (windows) to bring up the console and choose Debug Database from the hamburger menu.

You’ll see this:

New World flag set to Off
If you want to turn it back on at some point, this is where you’ll find it. However, as mentioned, there’s no disadvantage to having it off and if you have it on, Transcriptive won’t run.

If you have any questions, please reach out to us at cs@digitalanarchy.com.

Transcriptive: Here’s how to transcribe using your Speechmatics credits for now.

If you’ve been using Speechmatics credits to transcribe in Transcriptive, our transcription plugin for Premiere Pro, then you’ve noticed that accessing your credits in Transcriptive 2.0.2 and later is no longer an option. Speechmatics is discontinuing the API that we used to support their service in Transcriptive, which means your Speechmatics credentials can no longer be validated inside of the Transcriptive panel.

We know a lot of users still have Speechmatics credits and have been working closely with Speechmatics so those credits can be available in your Transcriptive account as soon as possible. Hopefully in the next week or two.

In the meantime, there are a couple of ways users can still transcribe with Speechmatics credits: 1) Use an older version of Transcriptive, like v1.5.2 or v2.0.1. Those should still work for a bit longer but use the older, less accurate API. Or 2) Upload directly on their website and export the transcript as a JSON file to be imported into Transcriptive. It is a fairly simple process and a great temporary solution. Here’s a step-by-step guide:

1. Head to the Speechmatics website – To use your Speechmatics credits, head to www.speechmatics.com and login to your account. Under “What do you want to do?”, choose “Transcription” and select the language of your file. 

Speechmatics_Uploading

2. Upload your media file to the Speechmatics website – Speechmatics will give you the option to drag and drop or select your media from a folder on your computer. Choose whatever option works best for you and then click on “Upload”. After the file is uploaded, the transcription will start automatically and you can check the status of the transcription on your “Jobs” list.  
Speechmatics_Transcribing

3. Download a .JSON file – After the transcription is finished (refresh the page if the status doesn’t change automatically!), click on the Actions icon to access the transcript. You will then have the option to export the transcript as a .JSON file.

Speechmatics_JSON

4. Import the .JSON file into any version of Transcriptive – Open your Transcriptive panel in Premiere. If you are using Transcriptive 2.0, be sure Clip Mode is turned on. Select the clip you have just transcribed on Speechmatics and click on “Import”. If you are using an older version of Transcriptive, drop the clip into a sequence before choosing “Import”.

Transcriptive_Import

You will then have the option to “Choose an Importer”. Select the JSON option and import the Speechmatics file saved on your computer. The transcript will be synced with the clip automatically at no additional charge.

Transcriptive_Json

One important thing to know is that, although Transcriptive v1.x still has Speechmatics as an option and it still works, we would recommend following the steps above to transcribe with Speechmatics credits. The option available in those versions of the panel uses an older version of their API and is less accurate than the new version. So we recommend you transcribe on the Speechmatics website if you want to use your Speechmatics credits now rather than wait for them to be transferred.

However, we should have the transfer sorted out very soon, so keep an eye open for an email about it if you have Speechmatics credits. If the email address you use for Speechmatics is different from the one you use for Transcriptive.com, please email cs@digitalanarchy.com. We want to make sure we get things synced up so the credits go to the right place!

Adobe Premiere 14.0.2 and Transcriptive: What You Need to Know

Adobe has slipped in a pretty huge change into 14.0.2 and it seriously affects Transcriptive, the A.I. transcript plugin for Premiere. I’ll get into the details in a moment, but let me get into the important stuff right off the bat:

  • If you are using Premiere 14.0.2 (the latest release)…
    • And own Transcriptive 2.0: update to v2.0.3, which works with the new Javascript engine.
    • And own Transcriptive 1.x, you have a few options:
      • Upgrade to Transcriptive 2.x
      • Turn ‘NewWorld’ off (instructions are below)
      • Keep using Premiere Pro 14.0.1

For the most part Transcriptive is written in Javascript. This relies on Premiere’s ability to process and run that code. In Premiere 14.0.2, Adobe has quietly replaced the very old Extendscript interpreter with a more modern Javascript engine (It’s called ‘NewWorld’ in Adobe parlance and you can read more about it and some of the tech-y details on the Adobe Developer Blog). On the whole, this is a good thing.

However, for any plugin using Javascript, it’s a big, big deal. And, unfortunately, it’s a big, big deal for Transcriptive. It completely breaks old versions of Transcriptive.

If you’re running Transcriptive 2.x, no problem… we just released v2.0.3 which should work fine with both old and new Javascript Interpreter/engine.

If you’re using Transcriptive 1.x, it’s still not exactly a problem but does require some hoop jumping. (and eventually ‘Old World’ will not be supported in Premiere and you’ll be forced to upgrade TS. That’s a ways off, though.)

Turning Off New World

Here are the steps to turn off ‘NewWorld’ and have Premiere revert back to using ‘Old World’:

  • Press Control + F12 or Command + F12. This will bring up Premiere’s Console.
  • From the Hamburger menu (three lines next to the word ‘Console’), select Debug Database View
  • Scroll down to ScriptLayerPPro.EnableNewWorld and uncheck the box (setting it to False).
  • Restart Premiere Pro

When Premiere restarts, NewWorld will be off and Transcriptive 1.x should work normally.

Screenshot of Premiere's Debug console
So far there are no new major bugs and relatively few minor ones that we’re aware of when using Transcriptive 2.0.3 with Premiere 14.0.2 (with NewWorld=On). There are also a LOT of other improvements in 2.0.3 that have nothing to do with this.

Adobe actually gave us a pretty good heads up on this. Of course, in true Anarchist fashion, we tested it early on (and things were fine) and then we tested it last week and things were not fine. So it’s been an interesting week and a half scrambling to make sure everything was working by the time Adobe sent 14.0.2 out into the world.

So everything seems to be working well at this point. And if it isn’t, you now know how to turn off all the new fangled stuff until we get our shit together! (but we do actually think things are in good shape)

Testing The Accuracy of Artificial Intelligence (A.I.) Services

When A.I. works, it can be amazing. BUT you can waste a lot of time and money when it doesn’t work. Garbage in, garbage out, as they say. But what is ‘garbage’ and how do you know it’s garbage? That’s one of the things, hopefully, I’ll help answer.

Why Even Bother?

It’s a bit tedious to do the testing, but being able to identify the most accurate service will save you a lot of time in the long run. Cleaning up inaccurate transcripts, metadata, or keywords is far more tedious and problematic than doing a little testing up front. So it really is time well spent.

One caveat… There are a lot of potential ways to use A.I., and this is only going to cover Speech-to-Text because that’s what I’m most familiar with due to Transcriptive and getting A.I. transcripts in Premiere. But if you understand how to evaluate one use, you should, more or less, be able to apply your evaluation method to others. (i.e. for testing audio you want varying audio quality among your samples; for testing images you want varying quality (low light, blurriness, etc.) among your samples)

At Digital Anarchy, we’re constantly evaluating a basket of A.I. services to determine what to use on the backend of Transcriptive. So we’ve had to come up with a methodology to fairly test how accurate they are. Most of the people reading this are in a bit of a different situation… testing solutions from various vendors that use A.I. instead of testing the A.I. directly. However, since different vendors use different A.I. services, this methodology will still be useful for you in comparing the accuracy of the A.I. at the core of the solutions. There are, of course, other features of a given solution that may affect your decision to go with one or the other, but at least you’ll be able to compare accuracy objectively.

Here’s an outline of our method:

  1. Always use new files that haven’t been processed before by any of the A.I. services.
  2. Keep them short. (1-2min)
  3. Choose files of varying quality.
  4. Use a human transcription service to create the ‘test master’ transcript.
    • Have someone do a second pass to correct any human errors.
  5. Create a set of rules for what counts as an error (or a 1/2 error, or two errors) for both words and punctuation.
    • If you change them halfway through the test, you need to re-test everything.
  6. Apply them consistently. If something is ambiguous, create a rule for how it will be handled and always apply it that way.
  7. Compare the results and may the best bot win.

May The Best Bot Win: Visualizing

Accuracy rates for different A.I. services

The main chart compares each engine on a specific file (i.e. File #1, File #2, etc), using both word and punctuation accuracy. This is really what we use to determine which is best, as punctuation matters. It also shows where each A.I. has strengths and weaknesses. The second, smaller chart shows each service from best result to worst result, using only word accuracy. Every A.I. will eventually fall off a cliff in terms of accuracy. This chart shows you the ‘profile’ for each service and can be a slightly clearer way of seeing which is best overall, ignoring specific files.

First it’s important to understand how A.I. works. Machine Learning is used to ‘train’ an algorithm. Usually millions of bits of data that have been labeled by humans are used to train it. In the case of Speech-to-Text, these bits are audio files with human transcripts. This allows the A.I. to identify which audio waveforms, the word sounds, go with which bits of text. Once the algorithm has been trained, we can then send audio files to it and it makes its best guess as to which word each waveform corresponds to.

A.I. algorithms are very sensitive to what they’ve been trained on. The further you get away from what they’ve been trained on, the more inaccurate they are. For example, you can’t use an English A.I. to transcribe Spanish. Likewise, if an A.I. has been trained on perfectly recorded audio with no background noise, as soon as you add in background noise it goes off the rails. In fact, the accuracy of every A.I. eventually falls off a cliff. At that point it’s more work to clean it up than to just transcribe it manually.

Always Use New Files

Any time you submit a file to an A.I. it’s possible that the A.I. learns from that file. So you really don’t want to use the same file over and over and over again. To ensure you’re getting unbiased results it’s best to use new files every time you test.

Keep The Test Files Short

First off, comparing transcripts is tedious. Short transcripts are better than long ones. Secondly, if the two minutes you select is representative of an hour long clip, that’s all you need. Transcribing and comparing the entire hour won’t tell you anything more about the accuracy. The accuracy of two minutes is usually the same as the accuracy of the hour.

Of course, if you’re interviewing many different people over that hour in different locations, with different audio quality (lots of background noise, no background noise, some with accents, etc)… two minutes won’t be representative of the entire hour.

Choose Files of Varying Quality

This is critical! You have to choose files that are representative of the files you’ll be transcribing. Test files with different levels of background noise, different speakers, different accents, different jargon… whatever issues typically occur in the dialog in your videos. ** This is how you’ll determine what ‘garbage’ means to the A.I. **

Use Human Transcripts for The ‘Test Master’

Send out the files to get transcribed by a person. And then have someone within your org (or you) go over them for errors. There usually are some, especially when it comes to jargon or names (turns out humans aren’t perfect either! I know… shocker.). These transcripts will be what you compare the A.I. transcripts against, so they need to be close to perfect. If you change something after you start testing, you need to re-test the transcripts you’ve already tested.
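Counting the errors is still a human job, since you’re applying your own rules, but a quick script can at least flag where an A.I. transcript diverges from the ‘test master’ so you’re not eyeballing every single word. A minimal sketch using Python’s difflib (the sample strings are just for illustration):

```python
import difflib

def diff_words(master_text, ai_text):
    """Print word-level differences between the 'test master' and an A.I. transcript.
    Punctuation stays attached to the words, so punctuation misses show up too."""
    master = master_text.split()
    ai = ai_text.split()
    matcher = difflib.SequenceMatcher(None, master, ai)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            print(f"{tag}: master={' '.join(master[i1:i2])!r}  ai={' '.join(ai[j1:j2])!r}")

diff_words("The pick is Alec Bohm, a third baseman.",
           "The pick is Alec Bomb. A third baseman.")
# replace: master='Bohm, a'  ai='Bomb. A'
```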

Create A Set of Rules And Apply Them Consistently

You need to figure out what you consider one error, a 1/2 error or two errors. In most cases it doesn’t matter exactly what you decide to do, only that you do it consistently. If a missing comma is 1/2 an error, great! But it ALWAYS has to be a 1/2 error. You can’t suddenly make it a full error just because you think it’s particularly egregious. You want to take judgement out of the equation as much as possible. If you’re making judgement calls, it’s likely you’ll choose the A.I. that most resembles how you see the world. That may not be the best A.I. for your customers. (OMG… they used an Oxford Comma! I hate Oxford commas! That’s at least TWO errors!).

And NOW… The Moment You’ve ALL Been Waiting For…

Add up the errors, divide that by the number of words, put everything into a spreadsheet… and you’ve got your winner!
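If you want to see the arithmetic spelled out, here’s the whole scoring step in a few lines of Python. The tallies and the half-point-per-punctuation-miss rule are just examples; plug in whatever rules you settled on above:

```python
# Hypothetical tallies for one test file, using our own rules:
# a wrong/missing word = 1 error, a punctuation miss = 0.5 error.
word_errors = 12
punctuation_errors = 7
total_words = 350            # word count of the human 'test master' transcript

weighted_errors = word_errors * 1.0 + punctuation_errors * 0.5
accuracy = 1 - weighted_errors / total_words
print(f"Accuracy: {accuracy:.1%}")   # -> Accuracy: 95.6%
```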

It’s a bit tedious to do the testing, but being able to identify the most accurate service will save you a lot of cleanup time in the long run. So it really is time well spent.

Hopefully this post has given you some insights into how to test whatever type of A.I. services you’re looking into using. And, of course, if you haven’t checked out Transcriptive, our A.I. transcript plugin for Premiere Pro, you need to! Thanks for reading and please feel free to ask questions in the comment section below!

Using After Effects to create burned-in subtitles from SRTs

Recently, an increasing number of Transcriptive users have been requesting a way of using After Effects to create burned-in subtitles using SRTs from Transcriptive. This got us anarchists excited about making a Free After Effects SRT Importer for Subtitling And Captions.

Captioning videos is more important now than ever before. With the growth of mobile and Social Media streaming, YouTube and Facebook videos are often watched without sound, so subtitles are essential to retain your audience and keep those videos watchable. In addition to that, the Federal Communications Commission (FCC) has implemented rules for online video that require subtitles so people with disabilities can fully access media content and actively participate in the lives of their communities.

As a consequence, a lot of companies have style guides for their burned-in subtitles and/or want to do something more creative with the subtitles than what you get with standard 608/708 captions. I mean, how boring is white, monospaced text on a black background? After Effects users can do better.

While Premiere Pro does allow some customization of subtitles, creators can get greater customization via After Effects. Many companies have style guides or other requirements that specify how their subtitles should look. After Effects can be an easier place to create these types of graphics. However, it doesn’t import SRT files natively so the SRT Importer will be very useful if you don’t like Premiere’s Caption Panel or need subtitles that are more ‘designed’ than what you can get with normal captions. The script makes it easy to customize subtitles and bring them into Premiere Pro. Here’s how it works:

  1. Go to our registration page.
  2. Download the .jsxbin file.
  3. Put it here:
  • Windows: C:\Program Files\Adobe\Adobe After Effects CC 2019\Support Files\Scripts\ScriptUI Panels
  • Mac: /Applications/Adobe After Effects CC 2019/Scripts/ScriptUI Panels

folder location

4. Restart AE. It’ll show up in After Effects under Window>Transcriptive_Caption.

select panel

5. Create a new AE project with nothing in it. Open the panel and set the parameters to match your footage (frame rate, resolution, etc). When you click Apply, it’ll ask for an SRT file. It’ll then create a Comp with the captions in it.

import SRT

6. Select the text layer and open the Character panel to set the font, font size, etc. Feel free to add a drop shadow, bug or other graphics.

character style

7. Save that project and import the Comp into Premiere (Import the AE project and select the Comp). If you have a bunch of videos, you can run the script on each SRT file you have and you’ll end up with an AE project with a bunch of comps named to match the SRTs (currently it only supports SRT). Each comp will be named: ‘Captions: MySRT File’. Import all those comps into Premiere.

import comp

8. Drop each imported comp into the respective Premiere sequence. Double-check that the captions line up with the audio (same as you would when importing an SRT into Premiere). Queue the different sequences up in AME and, once they’re all queued, render away. (and keep in mind it’s beta and doesn’t create the black backgrounds yet)

Although it’s especially beneficial to Transcriptive users, this free After Effects SRT Importer for Subtitling And Captions will work with any SRT from any program. It’s definitely easier than all the steps above make it sound, and it’s available to all and sundry on our website. Give it a try and let us know what you think! Contact: sales@digitalanarchy.com

Your transcripts are out of order! This whole timeline’s out of order!

When cutting together a documentary (or pretty much anything, to be honest), you don’t usually have just a single clip. Usually there are different clips, and different portions of those clips, here, there and everywhere.

Our transcription plugin, Transcriptive, is pretty smart about handling all this. So in this blog post we’ll explain what happens if you have total chaos on your timeline with cuts and clips scattered about willy nilly.

If you have something like this:

Premiere Pro Timeline with multiple clips
Transcriptive will only transcribe the portions of the clips necessary. Even if the clips are out of order. For example, the ‘Drinks1920’ clip at the beginning might be a cut from the end of the actual clip (let’s say 1:30:00 to 1:50:00) and the Drinks cut at the end might be from the beginning (e.g. 00:10:00 to 00:25:00).

If you transcribe the above timeline, only 10:00-25:00 and 1:30:00-1:50:00 of Drinks1920.mov will be transcribed.

If you Export>Speech Analysis, select the Drinks clip, and then look in the Metadata panel, you’ll see the Speech Analysis for the Drinks clip will have the transcript for those portions of the clip. If you drop those segments of the Drinks clip into any other project, the transcript comes along with it!

The downside to _only_ transcribing the portion of the clip on the timeline is, of course, that the entire clip doesn’t get transcribed. Not a problem for this project and this timeline, but if you want to use the Drinks clip in a different project, the segment you choose to use (say 00:30:00 to 00:50:00) may not have been previously transcribed.

If you want the entire clip transcribed, we recommend using Batch Transcribe.

However, if you drop the clip into another sequence, transcribe a time span that wasn’t previously transcribed, and then Export>Speech Analysis, that new transcription will be added to the clip’s metadata. It wasn’t always this way, so make sure you’re using Transcriptive v1.5.2. If you’re in a previous version of Transcriptive and you Export>Speech Analysis to a clip that already has part of a transcript in SA, it’ll overwrite any transcripts already there.

So feel free to order your clips any way you want. Transcriptive will make sure all the transcript data gets put into the right places. AND… make sure to Export>Speech Analysis. This will ensure that the metadata is saved with the clip, not just your project.

Someone Tell The NCAA about Flicker Free

Unless you’ve been living under a rock, you know it’s March Madness… time for the NCAA Basketball Tournament. This is actually my favorite two weekends of sports a year. I’m not a huge sports guy, but watching all the single elimination games, rooting for underdogs, the drama, players putting everything they have into these single games… it’s really a blast. All the good things about sport.

It’s also the time of year that flicker drives me a little crazy. One of the downsides of developing Flicker Free is that I start to see flicker everywhere it happens. And it happens a lot during the NCAA tournament, especially in slow motion shots. Now, I understand that those are during live games and playing it back immediately is more important than removing some flicker. Totally get it.

However, for human interest stories recorded days or weeks before the tournament? Slow motion shots used two days after they happened? C’mon! Spend 5 minutes to re-render it with Flicker Free. Seriously.

Here’s a portion of a story about Gonzaga star Rui Hachimura:

Most of the shots have the camera/light sync problem that Flicker Free is famous for fixing. The original has the rolling band flicker that’s the symptom of this problem; the fixed version took all of three minutes. I applied Flicker Free, selected the Rolling Bands 4 preset (this is always the best preset to start with) and rendered it. It looks much better.

So if you know anyone at the NCAA in post production, let them know they can take the flicker out of March Madness!

Downloading The Captions Facebook or YouTube Creates

So you’ve uploaded your video to Facebook or YouTube and you’d like to import the captions they automatically generate with Artificial Intelligence into Transcriptive. This can be a good, FREE way of getting a transcript.

Transcriptive imports SRT files, so… all you need is an SRT file from those services. That’s easy peasy with YouTube, you just go to the Captions section and download>SRT.

Screenshot of where to download an SRT file of YouTube Captions
Download the SRT and you’re done. Import the SRT into Transcriptive with ‘Combine Lines into Paragraphs’ turned on… Easy, free transcription.

With Facebook it’s more difficult as they don’t let you just download an SRT file. Or any file for that matter. So you need to get tricky.

Open Facebook in Firefox and go to Web Developer>Network. This will open the inspector at the bottom of your browser window.

Firefox's web developer tool, the Network tab
Which will give you something that looks like this:

Using the Network tab to get a Facebook caption file
Go to the Facebook video you want to get the caption file for.

Once the video starts playing, type SRT into the Filter field (as shown above)

This _should_ show an XHR file. (we’ve seen instances where it doesn’t, not sure why. So this might not work for every video)

Right Click on it, select Copy>Copy URL (as shown above)

Open a new Tab and paste in the URL.

You should now be asked to download a file. Save this as an SRT file (e.g. MyVideo.srt).

Import the SRT into Transcriptive with ‘Combine Lines into Paragraphs’ turned on… Easy, free transcription.

So that’s it. This worked as of this writing. It’s entirely possible Facebook will make a change at some point preventing this, but for now, it’s a good way of getting free transcriptions.

You can also do this in other browsers, I’m just using Firefox as an example.

Photographing Lightning during The Day or Night with a DSLR

Capturing lightning using a neutral density filter and long exposure

As many of you know, I’m an avid time lapse videographer, and the original purpose of our Flicker Free filter was time lapse. I needed a way to deflicker all those night to day and day to night time lapses. I also love shooting long exposure photos.

As it turns out, this was pretty good experience to have when it came to capturing a VERY rare lightning storm that came through San Francisco late last year.

Living in San Francisco, you’re lucky if you see more than 3 or 4 lightning bolts a year. Very different from the lightning storms I saw in Florida when I lived there for a year. However, we were treated to a decidedly Florida-esque lightning storm last September. Something like 800 lightning strikes over a few hours. It was a real treat and gave me a chance to try and capture lightning! (in a camera)

The easiest way to capture lightning is to just flip your phone’s camera into video mode and point it in the direction you hope the lightning is going to be. Get the video and then pull out a good frame. This works… but video frames are usually heavily compressed and much lower resolution than a photo.

I wanted to use my 30mp Canon 5DmarkIV to get photos, not the iPhone’s mediocre video camera.

Problems, Problems, Problems

To get the 5D to capture lightning, I needed at the very least: 1) a tripod and 2) an intervalometer.

Lightning happens fast. Like, speed of light fast. Until you try and take a picture of it, you don’t realize exactly how fast. If you’re shooting video (30fps), the bolt will happen over 2, maybe 3 frames. If you’ve got a fancy 4K (or 8K!) camera that will shoot 60 or 120fps, that’s not a bad place to start.

However, if you’re trying to take advantage of your 5D’s 6720 × 4480 sensor… you’re not going to get the shot handholding it and manually pressing the shutter. Not going to happen. Cloudy with a chance of boring-ass photos.

So set the camera up on a tripod and plug in your intervalometer. You can use the built-in one, but an external one gives you more options. You want the intervalometer firing as fast as possible, but the fastest it will go is once every second. During the day, that’s not going to work.

Lightning And Daylight

The storm started probably about an hour before sunset. It was cloudy, but there was still a fair amount of light.

At first I thought, “once every second should be good enough”. I was wrong. Basically, the lightning had to happen the exact moment the camera took the picture. Possible, but the odds are against you getting the shot.

As mentioned, I like shooting long exposures. Sometimes at night but often during the day. To achieve this, I have several neutral density filters which I stack on top of each other. They worked great for this. I stacked a couple of .9 ND filters on the lens, bringing it down 6 stops (each .9 ND filter cuts about 3 stops, so two of them let in only about 1/64th of the light). This was enough to let me have a 1/2 sec. shutter speed.

1/2 sec. shutter speed and 1 sec. intervals… I’ve now got a 50/50 chance of getting the shot… assuming the camera is pointed in the direction of the lightning. Luckily it was striking so often that I could make a good guess as to the area it was going to be in. As you can see from the above shot, I got some great shots out of it.

Night Lightning

Photographing lightning at night with a Canon 5D

To the naked eye, it was basically night. So with a 2 second exposure and a 2 second interval… as long as the lightning happened where the camera was pointed, I was good to go. (it wasn’t quite night, so with the long exposure you got the last bits of light from sunset) I did not need the neutral density filters as it was pretty dark.

By this point the storm had moved. The lightning was less consistent and a bit further away. So I had to zoom in a bit, reducing the odds of getting the shot. But luck was still with me and I got a few good shots in this direction as well.

I love trying to capture stuff you can’t really see with the naked eye, whether it’s using time lapse to see how clouds move or long exposure to see water flow patterns. Experimenting with capturing lightning was a blast. Just wish we saw more of it here in SF!

So hopefully this gave you some ideas about how to capture lightning, or anything else that moves fast, next time you have a chance!

Speeding Up De-flickering of Time Lapse Sequences in Premiere

Time lapse is always challenging… you’ve got a high resolution image sequence that can seriously tax your system. Add Flicker Free on top of that… where we’re analyzing up to 21 of those high resolution images… and you can really slow a system down. So I’m going to go over a few tips for speeding things up in Premiere or other video editor.

First off, turn off Render Maximum Depth and Maximum Quality. Maximum Depth is not going to improve the render quality unless your image sequence is HDR and the format you’re saving it to supports 32-bit images. If it’s just a normal RAW or JPEG sequence, it won’t make much of a difference. Render Maximum Quality may make a bit of difference but it will likely be lost in whatever compression you use. Do a test or two to see if you can tell the difference (it does improve scaling) but I rarely can.

RAW: If at all possible you should shoot your time lapses in RAW. There are some serious benefits which I go over in detail in this video: Shooting RAW for Time Lapse. The main benefit is that Adobe Camera RAW automatically removes dead pixels. It’s a big f’ing deal and it’s awesome. HOWEVER… once you’ve processed them in Adobe Camera RAW, you should convert the image sequence to a movie or JPEG sequence (using very little compression). It will make processing the time lapse sequence (color correction, effects, deflickering, etc.) much, much faster. RAW is awesome for the first pass, after that it’ll just bog your system down.

Nest, Pre-comp, Compound… whatever your video editing app calls it, use it. Don’t apply Flicker Free or other de-flickering software to the original, super-high resolution image sequence. Apply it to whatever your final render size is… HD, 4K, etc.

Why? Say you have a 6000×4000 image sequence and you need to deliver an HD clip. If you apply effects to the 6000×4000 sequence, Premiere will have to process TWELVE times the amount of pixels it would have to process if you applied it to HD resolution footage. 24 million pixels vs. 2 million pixels. This can result in a HUGE speed difference when it comes time to render.

How do you Nest?

This is Premiere-centric, but the concept applies to After Effects (pre-compose) or FCP (compound) as well. (The rest of this blog post will be explaining how to Nest. If you already understand everything I’ve said, you’re good to go!)

First, take your original image sequence (for example, 6000×4000 pixels) and put it into an HD sequence. Scale the original footage down to fit the HD sequence.

Hi-Res images inside an HD sequence
The reason for this is that we want to control how Premiere applies Flicker Free. If we apply it to the 6000×4000 images, Premiere will apply FF and then scale the image sequence. That’s the order of operations. It doesn’t matter if Scale is set to 2%. Flicker Free (and any effect) will be applied to the full 6000×4000 image.

So… we put the big, original images into an HD sequence and do any transformations (scaling, adjusting the position and rotating) here. This usually includes stabilization… although if you’re using Warp Stabilizer you can make a case for doing that to the HD sequence. That’s beyond the scope of this tutorial, but here’s a great tutorial on Warp Stabilizer and Time Lapse Sequences.

Next, we take our HD time lapse sequence and put that inside a different HD sequence. You can do this manually or use the Nest command.

Apply Flicker Free to the HD sequence, not the 6000x4000 images
Now we apply Flicker Free to our HD time lapse sequence. That way FF will only have to process the 1920×1080 frames. The original 6000×4000 images are hidden in the HD sequence. To Flicker Free it just looks like HD footage.

Voila! Faster rendering times!

So, to recap:

  • Turn off Render Maximum Depth and Maximum Quality
  • Shoot RAW, but apply Flicker Free to a JPEG sequence/Movie
  • Apply Flicker Free to the final output resolution, not the original resolution

Those should all help your rendering times. Flicker Free still takes some time to render, none of the above will make it real time. However, it should speed things up and make the render times more manageable if you’re finding them to be really excessive.

Flicker Free is available for Premiere Pro, After Effects, Final Cut Pro, Avid, Resolve, and Assimilate Scratch. It costs $149. You can download a free trial of Flicker Free here.

Getting transcripts for Premiere Multicam Sequences

Using Transcriptive with multicam sequences is not a smooth process and doesn’t really work. It’s something we’re working on a solution for, but it’s tricky due to Premiere’s limitations.

However, while we sort that out, here’s a workaround that is pretty easy to implement. Here are the steps:

1- Take the clip with the best audio and drop it into its own sequence.
Using A.I. to transcribe Premiere Multicam Sequences
2- Transcribe that sequence with Transcriptive.
3- Now replace that clip with the multicam clip.
Transcribing multicam in Adobe premiere pro

4- Voila! You have a multicam sequence with a transcript. Edit the transcript and clip as you normally would.

This is not a permanent solution and we hope to make it much more automatic to deal with Premiere’s multicam clips. In the meantime, this technique will let you get transcripts for multicam clips.

Thanks to Todd Drezner at Cohn Creative for suggesting this workaround.

Creating the Grinch on Video Footage with The Free Ugly Box Plugin

We here at Digital Anarchy want to make sure you have a wonderful Christmas and there’s no better way to do that than to take videos of family and colleagues and turn them into the Grinch. They’ll love it! Clients, too… although they may not appreciate it as much even if they are the most deserving. So just play it at the office Christmas party as therapy for the staff that has to deal with them.

Our free plugin Ugly Box will make it easy to do! Apply it to the footage, click Make Ugly, and then make them green! This short tutorial shows you how:

You can download the free Ugly Box plugin for After Effects, Premiere Pro, Final Cut Pro, and Avid here:

https://digitalanarchy.com/register/register_ugly.php

Of course, if you want to make people look BETTER, there’s always Beauty Box to help you apply a bit of digital makeup. It makes retouching video easy, get more info on it here:

https://digitalanarchy.com/beautyVID/main.html

Sharpening Video Footage


Sharpening video can be a bit trickier than sharpening photos. The process is the same of course… increasing the contrast around edges which creates the perception of sharpness.

However, because you’re dealing with 30fps instead of a single image, some additional challenges are introduced:

1- Noise is more of a problem.
2- Video is frequently compressed more heavily than photos, so compression artifacts can be a serious problem.
3- Oversharpening is a problem with stills or video, but with video it can also create motion artifacts on playback that are visually distracting.
4- It’s more difficult to mask out areas like skin that you don’t want sharpened.

These are problems you’ll run into regardless of the sharpening method. However, probably unsurprisingly, in addition to discussing solutions using regular tools, we do talk about how our Samurai Sharpen plugin can help with them.

Noise in Video Footage

Noise is always a problem regardless of whether you’re shooting stills or videos. However, with video the noise changes from frame to frame making it a distraction to the viewer if there’s too much or it’s too pronounced.

Noise tends to be much more obvious in dark areas, as you can see below where it’s most apparent in the dark, hollow part of the guitar:

You can use Samurai Sharpen to avoid sharpening noise in video footage

Using a mask to protect the darker areas makes it possible to increase the sharpening for the rest of the video frame. Samurai Sharpen has masks built-in, so it’s easy in that plugin, but you can do this manually in any video editor or compositing program by using keying tools, building a mask and compositing effects.

Compression Artifacts

Many consumer video cameras, including GoPros and some drone cameras, heavily compress footage. Especially when shooting 4K.

It can be difficult to sharpen video that's been heavily compressed

It’s difficult, and sometimes impossible, to sharpen footage like this. The compression artifacts become very pronounced, since they tend to have edges like normal features. Unlike noise, the artifacts are visible in most areas of the footage, although they tend to be more obvious in areas with lots of detail.

In Samurai you can increase the Edge Mask Strength to lessen the impact of sharpening on the artifacts (they’re often low contrast), but depending on how compressed the footage is, you may not want to sharpen it at all.

Oversharpening

Sharpening is a local contrast adjustment. It’s just looking at significant edges and sharpening those areas. Oversharpening occurs when there’s too much contrast around the edges, resulting in visible halos.

Too much sharpening of video can result in visible halos
Especially if you look at the guitar strings and frets, you’ll see a dark halo on the outside of the strings and the strings themselves are almost white with little detail. Way too much contrast/sharpening. The usual solution is to reduce the sharpening amount.

In Samurai Sharpen you can also adjust the strength of the halos independently. So if the sharpening results in only the dark or light side being oversharpened, you can dial back just that side.

Sharpening Skin

The last thing you usually want to do is sharpen someone’s skin. You don’t want your talent’s skin looking like a dried-up lizard. (well, unless your talent is a lizard. Not uncommon these days with all the ridiculous 3D company mascots)

Sharpening video can result in skin looking rough

Especially with 4K and HD, video is already showing more skin detail than most people want (hence the reason for our Beauty Box Video plugin for digital makeup). If you’re using UnSharp Mask you can use the Threshold parameter, or in Samurai the Edge Mask Strength parameter is a more powerful version of that. Both are good ways of protecting the skin from sharpening. The skin area tends to be fairly flat contrast-wise and the Edge Mask generally does a good job of masking the skin areas out.
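For what it’s worth, the classic unsharp mask with a threshold boils down to something like the sketch below (Python with OpenCV and NumPy). Samurai’s Edge Mask is a more sophisticated take on the same idea, so treat this purely as an illustration; the parameter values and filenames are arbitrary:

```python
import cv2
import numpy as np

def unsharp_mask(frame, amount=1.0, radius=2.0, threshold=8):
    """Classic unsharp mask: boost the difference between the frame and a blurred
    copy, but only where that difference is big enough to be a real edge."""
    blurred = cv2.GaussianBlur(frame, (0, 0), radius)
    diff = frame.astype(np.float32) - blurred.astype(np.float32)

    # Threshold: ignore small differences (flat areas like skin, and most noise).
    mask = (np.abs(diff) > threshold).astype(np.float32)

    sharpened = frame.astype(np.float32) + amount * diff * mask
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# frame = cv2.imread("guitar_frame.png")   # hypothetical still from the footage
# cv2.imwrite("sharpened.png", unsharp_mask(frame, amount=0.8))
```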

Either way, you want to keep an eye on the skin areas, unless you want a lizard. (and if so, you should download our free Ugly Box plugin. ;-)

Wrap Up

You can sharpen video and most video footage will benefit from some sharpening. However, there are numerous issues that you run into and hopefully this gives you some idea of what you’re up against whether you’re using Samurai Sharpen for Video or something else.

Thoughts on The Mac Pro and FCP X


There’s been some talk of the imminent demise of the Mac Pro. The Trash Can is getting quite long in the tooth… it was overpriced and underpowered to begin with and is now pretty out of date. Frankly, it’d be nice if Apple just killed it and moved on. It’s not where they make their money and it’s clear they’re not that interested in making machines for the high end video production market. At the very least, it would mean we (Digital Anarchy) wouldn’t have to buy Trash Can 2.0 just for testing plugins. I’m all for not buying expensive machines we don’t have any use for.

But if they kill off the Mac Pro, what does that mean for FCP X? Probably nothing. It’s equally clear the FCP team still cares about pro video. There were multiple folks from the FCP team at NAB this year, talking to people and showing off FCP at one of the sub-conferences. They also continue to add pro-level features.

That said, they may care as much (maybe even more) about the social media creators… folks doing YouTube, Facebook, and other types of social media creation. There are a lot of them. A lot more than folks doing higher end video stuff, and these creators are frequently using iPhones to capture and the Mac to edit. They aren’t ‘pro editors’ and I think that demographic makes up a good chunk of FCP users. It’s certainly the folks that Apple, as a whole, is going after in a broader sense.

If you don’t think these folks are a significant focus for Apple overall, just look at how much emphasis they’ve put on the camera in the iPhone 6 & 7… 240fps video, dual lenses, RAW shooting, etc. To say nothing of all the billboards with nothing but a photo ‘taken with the iPhone’. Everyone is a media creator now and ‘Everyone’ is more important to Apple than ‘Pro Editors’.

The iMacs are more than powerful enough for those folks and it wouldn’t surprise me if Apple just focused on them. Perhaps coming out with a couple very powerful iMacs/MacBook Pros as a nod to professionals, but letting the MacPro fade away.

Obviously, as with all things Apple, this is just speculation. However, given the lack of attention professionals have gotten over the last half decade, maybe it’s time for Apple to just admit they have other fish to fry.

Tutorial: Removing Flicker from Edited Video Footage


One problem that users can run into with our Flicker Free deflicker plugin is that it will look across edits when analyzing frames for the correct luminance. The plugin looks backwards as well as forwards to gather frames and does a sophisticated blend of all those frames. So even if you create an edit, say to remove an unwanted camera shift or person walking in front of the camera, Flicker Free will still see those frames.

This is particularly a problem with Detect Motion turned OFF.

The way around this is to Nest (i.e. Pre-compose (AE), Compound Clip (FCP)) the edit and apply the plugin to the new sequence. The new sequence will start at the first frame of the edit and Flicker Free won’t be able to see the frames before the edit.
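To make the ‘looks backwards and forwards’ part concrete, here’s a tiny Python sketch of the window of frames a temporal filter like this samples. The function and numbers are made up for illustration (this isn’t how Flicker Free is actually written); the point is that nesting moves the first frame the host will hand the plugin up to the edit point, so the window gets clamped there and can no longer reach back into the footage you cut out:

    def sampled_frames(current, radius, clip_start=0):
        # The window reaches `radius` frames back and forward, but never before clip_start.
        start = max(clip_start, current - radius)
        return list(range(start, current + radius + 1))

    # On the timeline, 2 frames past an edit at source frame 100, the window
    # still reaches back into the frames you removed (97-99):
    sampled_frames(current=102, radius=5, clip_start=0)    # [97, 98, ... 107]

    # Nested/pre-composed, the new sequence starts at the edit, so the window is clamped:
    sampled_frames(current=102, radius=5, clip_start=100)  # [100, 101, ... 107]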

This is NOT something you always have to do. It’s only if the frames before the edit are significantly different than the ones after it (i.e. a completely different scene or some crazy camera movement). 99% of the time it’s not a problem.

This tutorial shows how to solve the problem in Premiere Pro. The technique works the same in other applications; just replace ‘Nesting’ with whatever your host application calls it (pre-composing, making a compound clip, etc).

Comparing Beauty Box to Other Video Plugins for Skin Retouching/Digital Makeup

We get a lot of questions about how Beauty Box compares to other filters out there for digital makeup. There are a few things to consider when buying any plugin and I’ll go over them here. I’m not going to compare Beauty Box with any filter specifically, but when you download the demo plugin and compare it with the results from other filters, this is what you should be looking at:

  • Quality of results
  • Ease of use
  • Speed
  • Support

Support

I’ll start with Support because it’s one thing most people don’t consider. We offer support as good as anyone’s in the industry. You can email or call us (415-287-6069), M-F 10am-5pm PST. In addition, we also check email on the weekends and frequently in the evenings on weekdays. Usually you’ll get a response from Tor, our rockstar QA guy, but not infrequently you’ll talk to me as well. It’s not often you get tech support from the guy that designed the software. :-)

Quality of Results

The reason you see Beauty Box used for skin retouching on everything from major tentpole feature films to web commercials is the incredible quality of the digital makeup. Since its release in 2009 as the first plugin to specifically address skin retouching beyond just blurring out skin tones, the quality of the results has been critically acclaimed. We won several awards with version 1.0 and we’ve kept improving it since then. You can see many examples here of Beauty Box’s digital makeup, but we recommend you download the demo plugin and try it yourself.

Things to look for as you compare the results of different plugins:

Skin Texture: Does the skin look realistic? Is some of the pore structure maintained or is everything just blurry? It should, usually, look like regular makeup unless you’re going for a stylized effect.
Skin Color: Is there any change in skin tones?
Temporal Consistency: Does it look the same from frame to frame over time? Are there any noticeable seams where the retouching stops?
Masking: How accurate is the mask of the skin tones? Are there any noticeable seams between skin and non-skin areas? How easy is it to adjust the mask?

Ease of Use

One of the things we strive for with all our plugins is to make it as easy as possible to get great results with very little work on your end. Software should make your life easier.

In most cases, you should be able to click on Analyze Frame, make an adjustment to the Skin Smoothing amount to dial in the look you want and be good to go. There are always going to be times when it requires a bit more work but for basic retouching of video, there’s no easier solution than Beauty Box.

When comparing filters, the thing to look for here is how easy it is to set up the effect and get a good mask of the skin tones. How long does it take and how accurate is it?

Speed

If you’ve used Beauty Box for a while, you know that the only complaint we heard about version 1.0 was that it was slow. No more! It’s now fully GPU optimized and with some of the latest graphics cards you’ll get real time performance, particularly in Premiere Pro. Premiere has added better GPU support, and between that and Beauty Box’s use of the GPU, you can get real time playback of HD pretty easily.

And of course we support many different host apps, which gives you a lot of flexibility in where you can use it. Avid, After Effects, Premiere Pro, Final Cut Pro, Davinci Resolve, Assimilate Scratch, Sony Vegas, and NUKE are all supported.

Hopefully that gives you some things to think about as you’re comparing Beauty Box with other plugins that claim to be as good. All of these things factor into why Beauty Box is so highly regarded and considered to be well worth the price.

Back Care for Video Editors Part 3: Posture Exercises: The Good and The Bad


Posture Exercises: The Good and The Bad

There are a lot of books out there on how to deal with back pain. Most of them are relatively similar and have good things to say. Most of them also have minor problems, but overall, with a little guidance from a good physical therapist, they’re very useful.

Editing video while sitting on ice is rather unusual. You don’t need to sit on ice to get good posture!

The two I’ve been using are:

Back RX by Vijay Vad

8 Steps to a Pain Free Back (Gokhale Method)

Both have some deficiencies but overall are good and complement each other. I’ll talk about the good stuff first and get into my problems with them later (mostly minor issues).

There’s also another book, Healing Back Pain, which I’m looking into and which says some valuable things. It posits that the main cause of the pain is not actually structural (disc problems, arthritis, etc) but in most cases caused by stress and the muscles tensing. I’ll do a separate post on it as I think the mind plays a significant role and this book has some merit.

BackRX

Back RX is a series of exercise routines designed to strengthen your back. It pulls from Yoga, Pilates, and regular physical therapy for inspiration. If you do them on a regular basis, you’ll start improving the strength in your abs and back muscles which should help relieve pain over the long term.

backRX

As someone that’s done Yoga for quite some time, partially in response to the repetitive stress problems I had from using computers, I found the routines very natural. Even if you haven’t done Yoga, the poses are mostly easy, many of them have you lying on the floor, and are healthy for your back. You won’t find the deep twisting and bending poses you might be encouraged to do at a regular yoga studio.

It also encourages mind/body awareness and focuses a bit on breathing exercises. The book doesn’t do a great job of explaining how to do this, so if you’re not already a yoga practitioner or don’t have a meditation practice you’ll need some guidance. The exercises have plenty of value even if you don’t get into that part of it. However, mindfulness is important. Here are a few resources on using meditation for chronic pain:

Full Catastrophe Living
Mindfulness Based Stress Reduction
You Are Not Your Pain

Gokhale Method

The 8 Steps to a Pain Free Back (Gokhale Method) is another good book that takes a different approach. BackRX provides exercise routines you can do in about 20 minutes. The Gokhale Method shows modifications to the things we do all the time… lying, sitting, standing, bending, etc. These are modifications you’re supposed to make throughout the day.

She has something of a backstory about how doctors these days don’t know what a spine should look like and how people had differently shaped spines in the past. In a nutshell, the argument is that because we’ve become so much more sedentary over the last 100 years (working in offices, couch potato-ing, etc) our spines are less straight, and doctors now think this excessively curved spine is ‘normal’. I’m very skeptical of this as some of her claims are easily debunked (more on that later). However, that doesn’t take away from the value of the exercises. Whether you buy into her marketing or not, she’s still promoting good posture and that’s the important bit.

Some of her exercises are similar to those in other posture books. Others are novel. They may not all resonate with you, but I’ve found several to be quite useful.

Some good posture advice if you’re sitting in front of a computer.
All of the exercises focus on lengthening the spine and provide ways to hold that posture above and beyond the usual ‘Sit up straight!’. She sells a small cushion that mounts on the back of your chair. I’ve found this useful, if only because it constantly reminds me not to slump in my Steelcase chair (slumping completely offsets why you spent the money on a fancy chair). It prevents me from leaning back in the chair, which is the first step to slumping, and it does help keep your back a bit more straight. There are some chairs that are not well designed, and the cushion helps with those too.

In both books, there’s an emphasis on stretching your spine and strengthening your ab/core muscles and back muscles. BackRX focuses more on the strengthening, Gokhale focuses more on the stretching.

But ultimately they only work if you’re committed to doing them over the long term. You also have to be vigilant about your posture. If you’re in pain, this isn’t hard, as your back will remind you with pain whenever you’re not doing things correctly. It’s harder if you’re just trying to develop good habits and you’re not in pain already.

Most people don’t think about this at all, which is why 80% of the US population will develop back pain problems at some point. So even if you only read the Gokhale book and just work on bending/sitting/walking better you’ll be ahead of the game.

So what are the problems with the books?

Both the Gokhale Method and BackRX have some issues. (again, these don’t really detract from the exercises in the book… but before you run out and tell your doctor his medical school training is wrong, you might want to consider these points)

Gokhale makes many claims in her book. Most of them involve how indigenous cultures sit/walk/etc and how little back pain there is in those cultures. These are not easily testable. However, she makes other claims that can be tested. For one, she shows a drawing of a spine from around 1900 and a drawing that she claims is from a recent anatomy book. She puts this forth as evidence that spines used to look different and that modern anatomy books don’t show spines the way they’re supposed to look. This means modern doctors are being taught incorrectly and thus don’t know what a spine should look like. The reality is that modern anatomy books show spines that look nothing like her example, which is just a horrible drawing of a spine. In fact, illustrations of ‘abnormal’ spines are closer to what she has in her book.

Also, most of the spine illustrations from old anatomy books are pretty similar to modern illustrations. On average the older illustrations _might_ be slightly straighter than modern illustrations, but mostly they look very similar.

She also shows some pictures of statues to illustrate everyone in ancient times walked around with a straight back. She apparently didn’t take Art History in college and doesn’t realize these statues from 600 BC are highly stylized and were built like that because they lacked the technology to sculpt more lifelike statues. So, No, everyone in ancient Greece did not ‘walk like an Egyptian’.

BackRX has a different issue. Many of the photos they show of proper poses are correct for the Back, BUT not for the rest of the body. A common pose called Tree Pose is shown with the foot against the knee, similar to this photo:

How not to do tree pose: don’t put your foot on your opposite knee. This risks injury to the knee! The foot should be against the side of the upper thigh.

Likewise, sitting properly at a desk is shown with good back posture, but with the forearms and wrists positioned in such a way as to ensure that the person will get carpal tunnel syndrome. These are baffling photos for a book discussing how to take care of your body.

Most of the exercises in this book are done lying down and are fine. For sitting and standing poses I recommend googling the exercise to make sure it’s shown correctly. For example, google ‘tree pose’ and compare the pictures to what’s in the book.

Overall they’re both good books despite the problems. The key thing is to listen to your body. Not everything that’s offered will work for you, so you need to experiment a bit. This includes working with your mind, which definitely has an effect on pain and how you deal with it.

Computers and Back Care part 2: Forward Bending


Go to Part 1 in the Back Care series

Most folks know how to pick up a heavy box. Squat down, keep your back reasonably flat and upright and use your legs to lift.

However, most folks do not know how to plug in a power cord. (as the below photo shows)

How to bend forward if you're plugging in a power cord

Forward bending puts a great deal of stress on your back and we do it hundreds of times a day. Picking up your keys, putting your socks on, plugging in a power cord, and on and on. This is why people frequently throw their backs out sneezing or picking up some insignificant thing off the floor like keys or clothing.

While normally these don’t cause much trouble, the hundreds of bends a day add up. Especially if you sit in a chair all day and are beating up your back with a bad chair or bad posture. Over time all of it weakens your back, degrades discs, and causes back pain.

So what to do?

There are a couple books I can recommend. Both have some minor issues but overall they’re very good. I’ll talk about them in detail in Part 3 of this series.

Back RX by Vijay Vad
8 Steps To a Pain Free Back by Esther Gokhale

Obviously for heavy objects, keep doing what you’re probably already doing: use your legs to lift.

But you also want to use your legs to pick up almost any object; the same technique works for small objects as well. That said, all the squatting can be a bit tough on the knees, so let’s talk about hip hinging.

Woman hinging from the hips in a way that puts less pressure on your back.
(The image shows a woman stretching, but she’s doing it with a good hip hinge. Since it’s a stretch, it’s, uh, a bit more exaggerated than you’d do picking something up. Not a perfect image for this post, but we’ll roll with it.)

Imagine your hip as a door hinge. Your upright back as the door and your legs as the wall. Keep your back mostly flat and hinge at the hips. Tilting your pelvis instead of bending your back. Then bend your legs to get the rest of the way to the floor. This puts less strain on your back and not as much strain on your knees as going into a full squat. Also, part of it is to engage your abs as you’re hinging. Strong abs help maintain a strong back.

Directions on how to hip hinge, showing a good posture

There’s some disagreement on the best way to do this. Some say bend forward (with your knees slightly bent) until you feel a stretch in your hamstrings, then bend your knees. I usually hinge the back and bend the knees at the same time. This feels better for my body, but everyone is different so try it both ways. There is some truth that the more length you have in your hamstrings, the more you can hinge. However, since most people, especially those that sit a lot, have tight hamstrings, it’s just easier to hinge and bend at the same time.

But the really important bit is to be mindful of when you’re bending, regardless of how you do it. Your back isn’t going to break just from some forward bending, but the more you’re aware of how often you bend, and the more often you do it correctly, the better off you’ll be.

This also applies to just doing regular work, say fixing a faucet or something where you have to be lower to the ground. If you can squat and keep a flat back instead of bending over to do the work, you’ll also be better off.

If this is totally new to you, then your back may feel a little sore as you use muscles you aren’t used to using. This is normal and should go away. However, it’s always good to check in with your doctor and/or physical therapist when doing anything related to posture.

In Part 3 I’ll discuss the books I mentioned above and some other resources for exercises and programs.

Taking Care of Your Back for Video Editors, Part 1: The Chair


Software developers, like video editors, sit a lot. I’ve written before about my challenges with repetitive stress problems and how I dealt with them (awesome chair, great ergonomics, and a Wacom tablet). Those problems are more about my wrists, shoulders, and neck.

I fully admit to ignoring everyone’s advice about sitting properly and otherwise taking care of my back, so I expect you’ll probably ignore this (unless you already have back pain). But you shouldn’t. And maybe some of you will listen and get some tips to help you avoid having to take a daily diet of pain meds just to get through a video edit.

Video editors need good posture

I’ve also always had problems with my back. The first time I threw it out I was 28, playing basketball. Then add in being physically active in a variety of other ways… martial arts, snowboarding, yoga, etc… my back has taken some beatings over the years. And then you factor in working at a job for the last 20 years that has me sitting a lot.

And not sitting very well for most of those 20 years. Hunched over a keyboard and slouching in your chair at the same time is a great way of beating the hell out of your back and the rest of your body. But that was me.

So, after a lot of pain and an MRI showing a couple degraded discs, I’m finally taking my back seriously. This is the first of several blog posts detailing some of the things I’ve learned and what I’m doing for my back. I figure it might help some of you all.

I’ll start with the most obvious thing: Your chair. Not only your chair BUT SITTING UPRIGHT IN IT. It doesn’t help you to have a $1000 chair if you’re going to slouch in it. (which I’m known to be guilty of)

A fully adjustable chair can help video editors reduce back pain

The key thing about the chair is that it’s adjustable in as many ways as possible. This way you can set it up perfectly for your body, which is key. Personally, I have a Steelcase chair which I like, but most high end chairs are very configurable and come in different sizes. (I’m not sure the ‘ball chair’ is going to be good for video editing, but some people love them for normal office work) There are also adjustable standing desks, which allow you to alternate between sitting and standing, which is great. Being in any single position for too long is stressful on your body.

The other key thing is your posture. Actually sitting in the chair correctly. There are slightly different opinions  on what is precisely the best sitting posture (see Part 3 for more on this), but generally, the illustration below is a good upright position. Feet on the ground, knees at right angles, butt all the way back with some spine curvature, but not too much, the shoulders slightly back and the head above the shoulders (not forward as we often do, which puts a lot of strain on the neck. If you keep leaning in to see your monitor, get glasses or move the monitor closer!).

It can also help to keep your abdominal muscles engaged to prevent too much curvature in the spine. This can be a little bit of work, but if you’re paying attention to your posture, it should come naturally as you maintain the upright position.

You want to sit upright in your chair for good back health.
There’s a little bit of disagreement on how much curvature you should have while sitting. Some folks recommend even less than what you see above. We’ll talk more about it in Part 3.

One other important thing is to take breaks, either walk around or stretch. Sitting for long periods really puts a lot of stress on your discs and is somewhat unnatural for your body, as your ancestors probably weren’t doing a lot of chair sitting. Getting up to walk, do a midday yoga class, or just doing a little stretching every 45 minutes or so will make a big difference. This is one of the reasons a standing desk is helpful.

So that’s it for part 1. Get yourself a good chair and learn how to sit in it! It’ll greatly help you keep a healthy, happy back.

In Part 2 we’ll discuss picking up your keys, sneezing, and other dangers to back health lurking in plain sight.

The Problem of Slow Motion Flicker during Big Sporting Events: NCAA Tournament


Shooting slow motion footage, especially very high speed shots like 240fps or 480fps, results in flicker if you don’t have high quality lights. Stadiums often have low quality industrial lighting, LEDs, or both, resulting in flicker during slow motion shots even on nationally broadcast, high profile sporting events.

I was particularly struck by this watching the NCAA Basketball Tournament this weekend. It seemed like I was seeing flicker on half of the slow motion shots. You can see a few in this video (along with Flicker Free plugin de-flickered versions of the same footage):

To see how to get rid of the flicker you can check out our tutorial on removing flicker from slow motion sports.

The LED lights are most often the problem. They circle the arena and, depending on how bright they are (for example, if the band is turned solid white), they can cast enough light on the players to cause flicker when played back in slow motion. Even if they don’t cast light on the players, they’re visible in the background flickering. Here’s a photo of the lights I’m talking about in Oracle Arena (the white band of light going around the stadium):

Deflickering stadium lights can be done with Flicker Free

While Flicker Free won’t work for live production, it works great for removing this type of flicker if you can render the footage in a video editing app, as you can see in the original example.

It’s a common problem even for pro sports or high profile sporting events (once you start looking for it, you see it a lot). So if you run into it with your footage, check out the Flicker Free plugin, available for most video editing applications!

Tips on Photographing Whales – Underwater and Above


I’ve spent the last 7 years going out to Maui during the winter to photograph whales. Hawaii is the migration destination of the North Pacific Humpback Whales. Over the course of four months, it’s estimated that about 12,000 whales migrate from Alaska to Hawaii. During the peak months, Jan 15 to March 15 or so, there are probably 6,000+ whales around Hawaii. This creates a really awesome opportunity to photograph them as they are EVERYWHERE.

Many of the boats that go out are small, zodiac type boats. This allows you to hang over the side if you’ve got an underwater camera. Very cool if they come up to the boat, as this picture shows! (you can’t dive with them as it’s a national sanctuary for the whales)

A photographer can hang over the side of a boat to get underwater photos of the humpback whales.

The result is shots like this below the water:

Photographing whales underwater is usually done hanging over the side of a boat.

Or above the water:

A beautiful shot of a whale breaching in Maui

So ya wanna be whale paparazzi? Here are a few tips on getting great photographs of whales:

1- Patience: Most of the time the whales are below the water surface and out of range of an underwater camera. There’s a lot of ‘whale waiting’ going on. It may take quite a few trips before a whale gets close enough to shoot underwater. To capture the above the water activity you really need to pay attention. Frequently it happens very quickly and is over before you can even get your camera up if you’re distracted by talking or looking at photos on your camera. Stay present and focused.

2- Aperture Priority mode: Both above and below the water I set the camera to Aperture Priority and set the lowest aperture I can, getting it as wide open as possible. You want as fast of a shutter speed as possible (for 50 ton animals they can move FAST!) and setting it to the widest aperture will do that. You also want that nice depth of field a low fstop will give you.

3- AutoFocus: You have to have autofocus turned on. The action happens too fast to focus manually. Also, use AF points that are calculated on both the horizontal and vertical axes. Not all AF points are created the same.

4- Lenses: For above the water, a 100mm-400mm is a good lens for the distance the boats usually tend to stay from the whales. It’s not great if the whales come right up to the boat… but that’s when you bust out your underwater camera with a very wide angle or fisheye lens. With underwater photography, at least in Maui, you can only photograph the whales if they come close to the boat. You’re not going to be able to operate a zoom lens hanging over the side of a boat, so set a pretty wide focal length when you put it into the housing. I’ve got a 12-17mm Tokina fisheye and usually set it to about 14mm. This means the whale has to be within about 10 feet of the boat to get a good shot. But due to underwater visibility, that’s pretty much the case no matter what lens you have on the camera.

5- Burst Shooting: Make sure you set the camera to burst mode. The more photos the camera can take when you press and hold the shutter button the better.

6- Luck: You need a lot of luck. But part of luck is being prepared to take advantage of the opportunities that come up. So if you get a whale that’s breaching over and over, stay focused with your camera ready because you don’t know where he’s going to come up. Or if a whale comes up to the boat make sure that underwater camera is ready with a fully charged battery, big, empty flash card and you know how to use the controls on the housing. (trust me… most of these tips were learned the hard way)

Many whale watches will mostly consist of ‘whale waiting’. But if you stay present and your gear is set up correctly, you’ll be in great shape to capture those moments when you’re almost touched by a whale!

A whale photographed just out of arm’s reach. The whale is just about touching the camera.

Avoiding Prop Flicker when Shooting Drone Video Footage


We released a new tutorial showing how to remove prop flicker, so if you have flicker problems on drone footage, check that out. (It’s also at the bottom of this post)

But what if you want to avoid prop flicker altogether? Here’s a few tips:

But first, let’s take a look at what it is. Here’s an example video:

1- Don’t shoot in such a way that the propellers are between the sun and the camera. The reason prop flicker happens is the props are casting shadows onto the lens. If the sun is above and in front of the lens, that’s where you’ll get the shadows and the flicker. (shooting sunrise or sunset is fine because the sun is below the props)

1b- Turning the camera just slightly from the angle generating the flicker will often get rid of the flicker. You can see this in the tutorial below on removing the flicker.

2- Keep the camera pointed down slightly. It’s more likely to catch the shadows if it’s pointing straight out from the drone at 90 degrees (parallel to the props). Tilt it down a bit, 10 or 20 degrees, and that helps a lot.

3- I’ve seen lens hoods for the cameras. Sounds like they help, but I haven’t personally tried one.

Unfortunately, sometimes you have to shoot something in such a way that you can’t avoid the prop flicker. In that case, using a plugin like Flicker Free allows you to eliminate or reduce the flicker problem. You can see how to deflicker videos with prop flicker in the tutorial below.

Removing Flicker from Drone Video Footage caused by Prop Flicker


Drones are all the rage at the moment, deservedly so as some of the images and footage  being shot with them are amazing.

However, one problem that occurs is that if the drone is shooting with the camera at the right angle to the sun, shadows from the props cause flickering in the video footage. This can be a huge problem, making the video unusable. It turns out that our Flicker Free plugin is able to do a good job of removing or significantly reducing this problem. (of course, this forced us to go out and get one. Research, nothing but research!)

Here’s an example video showing exactly what prop flicker is and why it happens:

There are ways around getting the flicker in the first place: Don’t shoot into the sun, have the camera pointing down, etc. However, sometimes you’re not able to shoot with ideal conditions and you end up with flicker.

Our latest tutorial goes over how to solve the prop flicker issue with our Flicker Free plugin. The technique works in After Effects, Final Cut Pro, Avid, Resolve, etc. However the tutorial shows Flicker Free being used in Premiere Pro.

The full tutorial is below. You can even download the original flickering drone video footage and AE/Premiere project files by clicking here.

Speeding Up Flicker Free: The Order You Apply Plugins in Your Video Editing App


One key way of speeding up the Flicker Free plugin is putting it first in the order of effects. What does this mean? Let’s say you’re using the Lumetri Color Corrector in Premiere. You want to apply Flicker Free first, then apply Lumetri. You’ll see about a 300+% speed increase vs. doing it with Lumetri first. So it looks like this:

Apply Flicker Free first in your video editing application to increase the rendering speed.

Why the Speed Difference?

Flicker Free has to analyze multiple frames to de-flicker the footage you’re using. It looks at up to 21 frames. If you have the effect applied before Flicker Free it means Lumetri is being applied TWENTY ONE times for every frame Flicker Free renders. And especially with a slow effect like Lumetri that will definitely slow everything down.

In fact, on slower machines it can bring Premiere to a grinding halt. Premiere has to render the other effect on 21 frames in order to render just one frame for Flicker Free. In this case, Flicker Free takes up a lot of memory, the other effect can take up a lot of memory, and things start getting ugly fast.

Renders with Happy Endings

So to avoid this problem, just apply Flicker Free before any other effects. This goes for pretty much every video editing app. The render penalty will vary depending on the host app and what effect(s) you have applied. For example, using the Fast Color Corrector in Premiere Pro resulted in a slow down of only about 10% (vs. Lumetri and a slow down of 320%). In After Effects the slow down was about 20% with just the Synthetic Aperture color corrector that ships with AE. However, if you add more filters it can get a lot worse.

Either way, you’ll have much happier render times if you put Flicker Free first.

Hopefully this makes some sense. I’ll go into a few technical details for those that are interested. (Feel free to stop reading if it’s clear you just need to put Flicker Free first) (oh, and here are some other ways of speeding up Flicker Free)

Technical Details

With all host applications, Flicker Free, like all plugins, has to request frames through the host application API. With most plugins, like the Beauty Box Video plugin, the plugin only needs to request the current frame. You want to render frame X: Premiere Pro (or Avid, FCP, etc) has to load the frame, render any plugins and then display it. Plugins get rendered in the order you apply them. Fairly straightforward.

The Flicker Free plugin is different. It’s not JUST looking at the current frame. In order to figure out the correct luminance for each pixel (thus removing flicker) it has to look at pixels both before and after the current frame. This means it has to ask the API for up to 21 frames, analyze them, return the result to Premiere, which then finishes rendering the current frame.

So the API says, “Yes, I will do your bidding and get those 21 frames. But first, I must render them!”. And so it does. If there are no plugins applied to them, this is easy. It just hands Flicker Free the 21 original frames and goes on its merry way. If there are plugins applied, the API has to render those on each frame it gives to Flicker Free. FF has to wait around for all 21 frames to be rendered before it can render the current frame. It waits, therefore that means YOU wait. If you need a long coffee break these renders can be great. If not, they are frustrating.
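As a back-of-the-envelope illustration (this is just the arithmetic, not the real host API), here’s why the order matters so much. Anything applied before Flicker Free has to be rendered on every frame in the temporal window; anything applied after it only touches the finished frame:

    TIME_WINDOW = 21  # frames Flicker Free may request per output frame

    def color_corrector_passes(applied_before_flicker_free):
        # How many times the other effect renders for ONE output frame.
        if applied_before_flicker_free:
            return TIME_WINDOW  # rendered on every frame Flicker Free requests
        return 1                # rendered once, on the finished frame

    color_corrector_passes(True)   # 21 passes per output frame -> slow
    color_corrector_passes(False)  # 1 pass per output frame -> fast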

If you use After Effects you may be familiar with pre-comping a layer with effects so that you can use it within a plugin applied to a different layer. This goes through a different portion of the API than when a plugin requests frames programmatically from AE. In the case of a layer in the layer pop-up the plugin just gets the original image with no effects applied. If the plugin actually asks AE for the frame one frame before it, AE has to render it.

One other thing that affects speed behind the scenes… some apps are better at caching frames that plugins ask for than other apps. After Effects does this pretty well, Premiere Pro less so. So this helps AE have faster render times when using Flicker Free and rendering sequentially. If you’re jumping around the timeline then this matters less.

Hopefully this helps you get better render times from Flicker Free. The KEY thing to remember however, is ALWAYS APPLY FLICKER FREE FIRST!

Happy Rendering!

Beauty Work for Corporate Video


We love to talk about how Beauty Box Video is used on feature films by the likes of Local Hero Post and Park Road Post Production, or on broadcast TV by NBC or Fox. That’s the big, sexy stuff.

However, many, if not most, of our customers are like Brian Smith. Using Beauty Box for corporate clients or local commercials. They might not be winning Emmy awards for their work but they’re still producing great videos with, usually, limited budgets.   “The time and budget does not usually afford us the ability to bring in a makeup artist.  People that aren’t used to being on camera are often very self-conscious, and they cringe at the thought of every wrinkle or imperfection detracting from their message.”, said Brian, Founder of Ideaship Studios in Tulsa, OK. “Beauty Box has become a critical part of our Final Cut X pipeline because it solves a problem, it’s blazing fast, and it helps give my clients and on-camera talent confidence.  They are thrilled with the end result, and that leads to more business for us.”

An Essential Tool for Beauty Work and Retouching

Beauty Box Video has become an essential tool at many small production houses or in-house video departments to retouch makeup-less/bad lighting situations and still end up with a great looking production. The ability to quickly retouch skin with an automatic mask without needing to go frame by frame is important. However, it’s usually the quality of retouching that Beauty Box provides that’s the main selling point.

Example of Brian Smith’s skin retouching for a corporate client. Image courtesy of Ideaship Studios.

Beauty Box goes beyond just blurring skin tones. We strive to keep the skin texture and not just mush it up. You want to have the effect of the skin looking like skin, not plastic, which is important for beauty work. Taking a few years off talent and offsetting the harshness that HD/4K and video lights can add to someone. The above image of one of Brian’s clients is a good example.

When viewed at full resolution, the wrinkles are softened but not obliterated. The skin is smoothed but still shows pores. The effect is really that of digital makeup, as if you actually had a makeup artist to begin with. You can see this below in the closeup of the two images. Of course, the video compression in the original already has reduced the detail in the skin, but Beauty Box does a nice job of retaining much of what is there.

Closeup of the skin texture retained by Beauty Box

” On the above image, we did not shoot her to look her best. The key light was a bit too harsh, creating shadows and bringing out the lines.  I applied the Beauty Box Video plugin, and the shots were immediately better by an order of magnitude.  This was just after simply applying the plugin.  A few minutes of tweaking the mask color range and effects sliders really dialed in a fantastic look. I don’t like the idea of hiding flaws.  They are a natural and beautiful part of every person.  However, I’ve come to realize that bringing out the true essence of a person or performance is about accentuating, not hiding.  Beauty Box is a great tool for doing that.” – Brian Smith

Go for Natural Retouching

Of course, you can go too far with it, as with anything. So some skill and restraint is often needed to get the effect of regular makeup and not make the subject look ‘plastic’ or blurred. As Brian says, you want things to look natural.

However, when used appropriately you can get some amazing results, making for happy clients and easing the concerns of folks that aren’t always in front of a camera. (particularly men, since they tend to not want to wear makeup… and don’t realize how much they need it until they see themselves on a 65″ 4K screen. ;-)

One last tip: you can often improve the look of Beauty Box even more by using tracking masks for beauty work, as you can see in the tutorials that link goes to. The ability of these masks to automatically track the points that make up the mask and move them as your subject moves is a huge deal for beauty work. It makes it much easier to isolate an area like a cheek or the forehead, just as a makeup artist would.

Removing Flicker from Stadium Lights in Slow Motion Football Video

One problem you see a lot is flickering from stadium lights when football or other sports are played back in slow motion. You’ll even see it during the NFL playoffs. Stadium lights tend to be low quality lights and the brightness fluctuates. You can’t see it normally, but play video back at 240fps… and flicker is everywhere.

Aaron at Griffin Wing Video Productions ran into this problem shooting video of the high school football championship at the North Carolina State stadium. It was a night game and he got some great slomo shots shooting with the Sony FS700, but a ton of flicker from the stadium lights.

Let’s take a look at a couple of his examples and break down how our Flicker Free plugin fixed the problem for him.

First example is just a player turning his head as he gazes down on the field. There’s not a lot of fast movement so this is relatively easy. Here are the Flicker Free plugin parameters from within After Effects (although it works the same if you’re using Premiere, FCP, Avid, etc.)

Video footage of a football player with flickering lights.
Notice that ‘Detect Motion’ is turned off, and note the settings for Sensitivity and Time Radius. We’ll discuss those in a moment.

Here’s a second example of a wide receiver catching the football. Here there’s a lot more action (even in slow motion), so the plugin needs different settings to compensate for that motion. Here’s the before/after video footage:

Here are the Flicker Free plugin settings:

Football player catching ball under flickering lights

So, what’s going on? You’ll notice that Detect Motion is off. Detect Motion tries to eliminate the ghosting (see below for an example) that can happen when removing flicker from a bunch of frames. (FF analyzes multiple frames to find the correct luminance for each pixel. But ghosts or trails can appear if the pixel is moving) Unfortunately it also reduces the flicker removal capabilities. The video footage we have of the football team has some pretty serious flicker so we need Detect Motion off.

With Detect Motion off we need to worry about ghosting. This means we need to reduce the Time Radius to a relatively low value.

Time Radius tells Flicker Free how many frames to look at before and after the current frame. So if it’s set to 5, it’ll analyze 11 frames: the current frame, 5 before it, and 5 after it. The more frames you analyze, the greater the chance objects will have moved in other frames… resulting in ghosting.
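If it helps to picture it, here’s a bare-bones NumPy sketch of a simple average over that window. This is not Flicker Free’s actual algorithm (the real blend is considerably more sophisticated), but it shows how a Time Radius of 5 becomes an 11 frame window, and why anything moving inside that window smears into ghosting when you average it:

    import numpy as np

    def naive_deflicker(frames, index, time_radius=5):
        # frames: list of 2D luminance arrays; average the window around `index`
        start = max(0, index - time_radius)
        end = min(len(frames), index + time_radius + 1)  # 2 * radius + 1 frames when not clipped
        return np.stack(frames[start:end]).mean(axis=0)  # plain averaging = ghosting on motion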

With the player gazing down at the field, there’s not a lot of motion, just the turning of his head. So we can get away with a Time Radius of 5 and a Sensitivity of 3. (More about Sensitivity in a moment)

The video with the receiver catching the ball has a LOT more motion. Each frame is very different from the next. So there’s a good chance of ghosting. Here we’ve set Time Radius to 3, so it’s analyzing a total of 7 frames, and set Sensitivity to 10. A Time Radius of 3 is about as low as you can realistically go. In this case it works and the flicker is gone. (As you can see in the above video)

Here’s an example of the WRONG settings and what ‘ghosting’ looks like:

Blurry Video Caused by incorrect Flicker Free settings

Sensitivity is, more or less, how large of an area the Flicker Free plugin analyzes. Usually I start with a low value like 3 and increase it to find the value that works best. Frequently a setting of 3 works best, as lower values reduce the flicker more. However, low values can also result in more ghosting, so if you have a lot of motion, sometimes 5 or 10 works better. For the player turning his head, 3 was fine. For the receiver we needed to increase it to 10.

So that’s a breakdown of how to get rid of flicker from stadium lights! Thanks to Aaron at Griffin Wing Video Productions for the footage. You can see their final video documenting the High School Football Championship on YouTube.

And you can also view more Flicker Free tutorials if you need additional info on how to get the most out of the Flicker Free plugin in After Effects, Premiere Pro, Final Cut Pro, Avid, or Resolve.

Easy Ways of Animating Masks for Use with Beauty Box in After Effects, Premiere, and Final Cut Pro


We have a new set of tutorials up that will show you how to easily create masks and animate them for Beauty Box. This is extremely useful if you want to limit the skin retouching to just certain areas like the cheeks or forehead.

Traditionally this type of work has been the province of feature films and other big budget productions that had the money and time to hire rotoscopers to create masks frame by frame. New tools built into After Effects and Premiere Pro, or available from third parties for FCP, make this technique accessible to video editors and compositors with much more modest budgets and tighter time constraints.

Using Masks that track the video to animate them with Beauty Box for more precise retouching

How Does Retouching Work Traditionally?

In the past someone would have to create a mask on Frame 1 and  move forward frame by frame, adjusting the mask on EVERY frame as the actor moved. This was a laborious and time consuming way of retouching video/film. The idea for Beauty Box came from watching a visual effects artist explain his process for retouching a music video of a high profile band of 40-somethings. Frame by frame by tedious frame. I thought there had to be an easier way and a few years later we released Beauty Box.

However, Beauty Box affects the entire image by default. The mask it creates affects all skin areas. This works very well for many uses but if you wanted more subtle retouching… you still had to go frame by frame.

The New Tools!

After Effects and Premiere have some amazing new tools for tracking mask points. You can apply bezier masks that limit the effect of a plugin, like Beauty Box, to just the masked area. The bezier points are ‘tracking’ points, meaning that as the actor moves, the points move with him. It usually works very well, especially for talking head type footage where the talent isn’t moving around a lot. It’s a really impressive feature and it’s available in both AE and Premiere Pro. Here’s a tutorial detailing how it works in Premiere:

After Effects also ships with Mocha AE, another great tool for doing this type of work. This tutorial shows how to use Mocha and After Effects to control Beauty Box and get some, uh, ‘creative’ skin retouching effects!

The power of Mocha is also available for Final Cut Pro X. It’s available as a plugin from CoreMelt, and they were kind enough to do a tutorial explaining how SliceX works with Beauty Box within FCP. It’s another very cool plugin; here’s the tutorial:

Using a Nvidia GTX 980 (or Titan or Quadro) in a Mac Pro


As many of you know, we’ve come out with a real time version of Beauty Box Video. In order for that to work, it requires a really fast GPU and we LOVE the GTX 980. (Amazing price/performance) Nvidia cards are generally fastest  for video apps (Premiere, After Effects, Final Cut Pro, Resolve, etc) but we are seeing real time performance on the higher end new Mac Pros (or trash cans, dilithium crystals, Job’s Urn or whatever you want to call them).

BUT what if you have an older Mac Pro?

With the newer versions of Mac OS (10.10), in theory, you can put any Nvidia card in them and it should work. Since we have lots of video cards lying around that we’re testing, we wondered if our GTX 980, Titan and Quadro 5200 would work in our Early 2009 Mac Pro. The answer is…

Nvidia GTX GPU in Mac Pro

YES!!!

So, how does it work? For one you need to be running Yosemite (Mac OS X 10.10)

A GTX 980 is the easier of the two GeForce cards, mainly because of the power needed to drive it. It only needs two six-pin connectors, so you can use the power supply built into the Mac. Usually you’ll need to buy an extra six-pin cable, as the Mac only comes standard with one, but that’s easy enough. The Quadro 5200 has only a single 6-pin connector and works well. However, for a single offline workstation, it’s tough to justify the higher price for the extra reliability the Quadros give you. (and it’s not as fast as the 980)

The tricky bit about the 980 is that you need to install Nvidia’s web driver. The 980 did not boot up with the default Mac OS driver, even in Yosemite. At least, that’s what happened for us. We have heard reports of it working with the default driver, but I’m not sure how common that is. So you need to install the Nvidia Driver Manager System Pref and, while still using a different video card, set the System Pref to the Web Driver. Like so:

Set this to Web Driver to use the GTX 980

You can download the Mac Nvidia Web Drivers here:

For 10.10.2

For 10.10.3

For 10.10.4

Install those, set it to Web Driver, install the 980, and you should be good to go.

What about the Titan or other more powerful cards?

There is one small problem… the Mac Pro’s power supply isn’t powerful enough to handle the card and doesn’t have the connectors. The Mac can have two six pin power connectors, but the Titan and other top of the line cards require a 6 pin and an 8 pin or even two 8-pin connectors. REMINDER: The GTX 980 and Quadro do NOT need extra power. This is only for cards with an 8-pin connector.

The solution is to buy a bigger power supply and let it sit outside the Mac with the power cables running through the expansion opening in the back.

As long as the power supply is plugged into a grounded outlet, there’s no problem with it being external. I used an EVGA 850W power supply, but I think the 600W would do. The nice thing about these is they come with long cables (about 2 feet or so) which will reach inside the case to the Nvidia card’s power connectors.

Mac Pro external power supply

One thing you’ll need to do is plug the ‘test’ connector (comes with it) into the external power supply’s motherboard connector. The power supply won’t power on unless you do this.

Otherwise, it should work great! These are very powerful cards and they definitely add a punch to the Mac Pros. With this setup we had Beauty Box running at about 25fps in Premiere Pro (AE and Final Cut are a bit slower). Not bad for a five year old computer, but not real time in this case. On newer machines with the GTX 980 you should be getting real time playback. It really is a great card for the price.

Creating GIFs from Video: The 4K Animated GIF?


I was at a user group recently and a video editor from a large ad agency was talking about the work he does.

‘Web video’ encompasses many things, especially when it comes to advertising. The editor mentioned that he is constantly being asked to create GIF animations from the video he’s editing. The video may go on one site, but the GIF animation will be used on another one. So while one part of the industry is trying to push 4K and 8K, another part is going backwards to small animated GIFs for Facebook ads and the like.

Online advertising is driving the trend, and it’s probably something many editors deal with daily… creating super high resolution for the broadcast future (which may be over the internet), but creating extremely low res versions for current web based ads.

Users want high resolution when viewing content, but ads that aren’t in the video stream (like traditional ads) can slow down a user’s web browsing experience and cause them to bounce if the file size is too big.

Photoshop for Video?

Photoshop’s timeline is pretty useless for traditional video editing. However, for creating these animated GIFs, it works very well. Save out the frames or short video clip you want to make into a GIF, import them into Photoshop and lay them out on the Timeline, like you would video clips in an editing program. Then select Save For Web… and save it out as a GIF. You can even play back the animation in the Save for Web dialog. It’s a much better workflow for creating GIFs than any of the traditional video editors have.

So, who knew? An actual use for the Photoshop Timeline. You too can create 4K animated GIFs! ;-)
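If you’d rather script it than round-trip through Photoshop, a few lines of Python with the Pillow library will also turn a folder of exported frames into a GIF. The folder name, frame size, and timing below are placeholders, but the point is the same: a GIF is just a sequence of frames with a per-frame duration.

    from pathlib import Path
    from PIL import Image

    frames = [Image.open(p) for p in sorted(Path("exported_frames").glob("*.png"))]
    frames = [f.resize((f.width // 4, f.height // 4)) for f in frames]  # GIFs want to be small

    frames[0].save(
        "clip.gif",
        save_all=True,
        append_images=frames[1:],
        duration=83,  # milliseconds per frame, roughly 12fps
        loop=0,       # loop forever
    )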

One particularly good example of an animated GIF. Rule #1 for GIFs: every animated GIF needs a flaming guitar.

Odyssey 7Q+ .wav Problem – How to Fix It and Import It into Your Video Editor


We have a Sony FS700 hanging around the Digital Anarchy office for shooting slow motion and 4K footage to test with our various plugins (we develop video plugins for Premiere Pro, After Effects, Avid, Final Cut Pro, Resolve, etc., etc.). In order to get 4K out of the camera we had to buy an Odyssey 7Q+ from Convergent Designs (don’t you love how all these cameras are ‘4K capable’, meaning if you want 4K, it’s another $2500+? Yay for marketing.)

(btw… if you don’t care about the back story, and just want to know how to import a corrupted .wav file into a video editing app, then just jump to the last couple paragraphs. I won’t hold it against you. :-)

The 7Q+ overall is a good video recorder and we like it a lot, but we recently ran into a problem. One of the videos we shot didn’t have sound. It had sound when played back on the 7Q+, but when you imported it into any video editing application: no audio.

The 7Q+ records 4K as a series of .dng files with a sidecar .wav file for the audio. The wav file had the appropriate size as if it had audio data (it wasn’t a 1Kb file or something) but importing into FCP, Premiere Pro, Quicktime, or Windows Media Player showed no waveform and no audio.

Convergent Designs wasn’t particularly helpful. The initial suggestion was to ‘rebuild’ the SSD drives. This was suggested multiple times, as if it was unimaginable this wouldn’t fix it and/or I was an idiot not doing it correctly. The next suggestion was to buy file recovery software. This didn’t really make sense either. The .dng files making up the video weren’t corrupted, the 7Q+ could play it back, and the file was there with the appropriate size. It seemed more likely that the 7Q+ wrote the file incorrectly, in which case file recovery software would do nothing.

So Googling around for people with similar problems I discovered 1) at least a couple other 7Q users have had the same problem and 2) there were plenty of non-7Q users with corrupted .wav files. One technique for the #2 folks was to pull them into VLC Media Player. Would this work for the 7Q+?

YES! Pull it into VLC, then save it out as a different .wav (or whatever) file. It then imported and played back correctly. Video clip saved and I didn’t need to return the 7Q+ to Convergent and lose it for a couple weeks.
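For what it’s worth, if you hit something similar, a quick sanity check before reaching for VLC is to see whether the file’s header even parses. Python’s built-in wave module will do that. This is purely a diagnostic sketch (the clip name is made up) and it won’t repair anything, but it tells you whether you’re dealing with a mangled header or something worse:

    import wave

    def check_wav(path):
        # Try to read the WAV header and report basic info, or the failure.
        try:
            with wave.open(path, "rb") as w:
                print(path, w.getnchannels(), "ch,", w.getframerate(), "Hz,", w.getnframes(), "frames")
        except wave.Error as err:
            print(path, "header not readable:", err)

    check_wav("A001_audio.wav")  # hypothetical clip name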

Other than this problem the Odyssey 7Q+ has been great… but this was a pretty big problem. Easily fixed though thanks to VLC.

4K Showdown! New MacPro vs One Nvidia GTX 980


For NAB this year we finally bought into the 4K hype and decided to have one of our demo screens be a 4K model, showing off Beauty Box Video and Flicker Free in glorious 4K.

NAB Booth Beauty Box Video and Flicker Free in 4k
The Digital Anarchy NAB Booth

So we bought a 55” 4K Sony TV to do the honors. We quickly realized if we wanted to use it for doing live demos we would need a 4K monitor as well. (We could have just shown the demo reel on it) For live demos you need to mirror the computer monitor onto the TV. An HD monitor upscaled on the 4K TV looked awful, so a 4K monitor it was (we got a Samsung 28″, gorgeous monitor).

Our plan was to use our Mac Pro for this demo station. We wanted to show off the plugins in Adobe’s AE/Premiere apps and Apple’s Final Cut Pro. Certainly our $4000 middle of the line Mac Pro with two AMD D500s could drive two 4K screens. Right?

We were a bit dismayed to discover that it would drive the screens, but at the cost of slowing the machine down to the point of being unusable. Not good.

For running Beauty Box in GPU accelerated mode, our new favorite video card for GPU performance is Nvidia’s GTX 980. The price/performance ratio is just amazing. So we figured we’d plug the two 4K screens into our generic $900 Costco PC that had the GTX 980 in it and see what kind of performance we’d get out of it.

Not only did the 980 drive the monitors, it still ran Beauty Box Video in real time within Premiere Pro. F’ing amazing for a $550 video card.

The GTX 980 single handedly knocked out the Mac Pro and two AMD D500s. Apple should be embarrassed.

I will note that for rendering and using the apps, the Mac Pro is about on par with the $900 PC + 980. I still would expect more performance from Apple’s $4000 machine but at least it’s not an embarrassment.

iPhone 6 vs Sony FS700: Comparison of Slow Motion Modes (240fps and Higher)


Comparing slow motion modes of the iPhone 6 vs the Sony FS700

The Sony FS700 is a $6000 video camera that can shoot HD up to 960fps or 4K at 60fps. It’s an excellent camera that can shoot some beautiful imagery, especially at 240fps (the 960fps footage really isn’t all that, however).

The iPhone 6 is a $700 phone with a video camera that shoots at 240fps. I thought it’d be pretty interesting to compare the iPhone 6 to the Sony FS700. I mean, the iPhone couldn’t possibly compare to a video camera that is dedicated to shooting high speed video, right? Well, ok, yes, you’re right. Usually. But surprisingly, the iPhone 6 holds its own in many cases and, if you have a low budget production, it could be a solution for you.

Let’s compare them.

kickboxing at 240fps

First the caveats:

1: The FS700 shoots 1080p, the iPhone shoots 720p. Obviously if your footage HAS to be 1080p, then the iPhone is a no go. However, there are many instances where 720p is more than adequate.

2: The iPhone has no tripod mount. So you need something like this Joby phone tripod:

3: You can’t play the iPhone movies created in slow motion on Windows. The Windows version of QuickTime does not support the feature. They can be converted with a video editing app, but this is a really annoying problem for Windows users trying to shoot with the iPhone. The Sony movies play fine on a Mac or Windows machine.

4: The iPhone will automatically try to focus and adjust brightness. This is the biggest problem with the iPhone; if you’re going to shoot with it, you HAVE to take this into account. We’ll discuss it a lot more in this article.

5: The iPhone does let you zoom while recording, but it’s not an optical zoom, so the result is lower quality than the non-zoomed image. With the FS700 you can change lenses, put on a sweet zoom lens, and zoom in to your heart’s content. But that’s one of the things you pay the big bucks for. We did not use the iPhone’s zoom feature for any of these shots, so in some cases the iPhone is a bit wider than the FS700 equivalent.
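Regarding caveat 3, a quick script can handle the conversion if you don’t want to open an editing app just to re-wrap the file. This is a minimal sketch using ffmpeg (my choice of tool; the caveat above only mentions a video editing app), with hypothetical filenames:

```python
# Convert an iPhone slow-motion .mov into an H.264/AAC .mp4 that standard
# Windows players can handle. Filenames are placeholders; ffmpeg must be
# installed and on your PATH.
import subprocess

def convert_for_windows(src: str, dst: str) -> None:
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-c:a", "aac", dst],
        check=True,
    )

if __name__ == "__main__":
    convert_for_windows("iphone_240fps.mov", "iphone_240fps.mp4")
```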

 

The Egg

Our first example is a falling egg. The FS700 footage is obviously better in this case.

The iPhone does very poorly in low light. You can see this in the amount of noise on the black background. It’s very distracting. Particularly since the egg portion IS well lit. Also, you’ll notice that the highlight on the egg is completely blown out.

Unfortunately, there’s nothing you can do about this except light better. One of the problems with the iPhone is the automatic brightness adjustment. It shows up here in the blown out highlight, with no way to adjust the exposure. You get what you get, so you NEED to light perfectly.

In the video there’s also an example of the FS700 shooting at 480fps. The 960fps mode of the FS700 is pretty lacking, but the 480fps does produce pretty good footage. For something like the egg, the 480fps has a better look since the breaking of the egg happens so fast. Even the 240fps isn’t fast enough to really capture it.

All the footage is flickering as well. This is a bit more obvious with the FS700 because there’s no noise in the background. The 480fps footage has been de-flickered with Flicker Free. Compare it with the 240fps to see the difference.

 

The MiniVan

In this case we have a shot of some cars a bit before sunset. This works out much better for the iPhone, but not perfectly. It’s well lit, which seems to be the key for the iPhone.

Overall, the iPhone does a decent job; however, it has one problem. As the black van passes by, the iPhone auto-adjusts the brightness. You can see the effect this has by looking at the ‘iPhone 6’ text in the video. The text doesn’t change color, but the asphalt certainly does, making the text look like it’s changing. The adjustment does make the van look better, but it changes the exposure of the whole scene. NOT something you want if you’re shooting for professional use.

With the FS700, on the other hand, we can lock the aperture and shutter speed. This means we keep a consistent look throughout the video. You would expect this from a pro video camera, so no surprise there. It’s doing what it should be doing.

However, if you were to plan for the iPhone’s limitation in advance and not have a massive dark object enter your scene, you would end up with a pretty good slow motion shot. The iPhone is a bit softer than the Sony, but it still looks good!

Also note that when the FS700 shoots at 480fps, its footage is much softer as well. Shooting at 480fps has some advantages, for example the wheels don’t have anywhere near as much motion blur as in the 240fps footage, but the overall shot is noticeably lower quality, with the bushes in the background much softer than in the 240fps footage.

 

The Plane! The Plane!

Next to the runway at LAX, there’s a small park where you can lie in the grass and watch planes come in about 40 feet overhead as they’re about to land. If you’ve never seen the underbelly of an A380 up close, it’s pretty awesome. We did not see that when doing this comparison, but we did see some other cool stuff!

Most notably, we saw the problem with the iPhone’s inability to lock focus. Since the camera has nothing to focus on before the plane arrives, when the plane enters the frame it’s completely out of focus, and the iPhone 6 can’t refocus in the few seconds it’s overhead, so the whole shot is blurry.

Compare that to the FS700 where we can get focus on one plane and when the next one comes in, we’re in focus and capture a great shot.

The iPhone completely failed this test, so the Sony footage is the hands-down winner.

 

The Kickboxer

One last example where the iPhone performs adequately.

The only real problem with this shot is the amount of background noise. As mentioned, the iPhone doesn’t do a great job in low light, so there’s a lot of noise on the black background. Because of the flimsy phone tripod, the iPhone footage also shakes a lot more. Overall, though, the footage is OK and would probably look much better if we’d used a white background. This footage also has a flicker problem, and we again used Flicker Free on the 480fps footage to remove it. You’ll notice the detail of the foot and the chalk particles is quite good on the iPhone. Not as good as the FS700, but that’s not really what we’re asking.

We want to know if Apple’s iPhone 6 can produce slow-motion, 240fps video that’s good enough for an indie film or some other low-budget production (or even a high-budget production where you have a situation in which you don’t want to, or can’t, put a $6000 camera). If you consider the caveats about the iPhone not being able to lock focus, the auto-adjusting brightness, and shooting in 720p, I think the answer is yes. If you take all that into account and plan for it, the footage can look great. (But, yeah… I’m not trading in my FS700 either. ;-)

Samsung Galaxy S5 Does NOT Shoot 240fps. It Shoots 120fps and Plays It Back at 15fps.


Apple’s iPhone 6 and the Samsung Galaxy S5 both shoot 240fps (or so you might think… 1/8th speed at 30fps is 240fps). Since we make Flicker Free, a plugin that removes flicker that occurs when shooting at 240fps, I thought it’d be cool to do a comparison of the two phones and post the results.

However, there was a problem. The footage from the Galaxy S5 seemed to be faster than the iPhone’s. After looking into a number of possibilities, including user error, I noticed that all the S5 footage was playing back in QuickTime Player at 15fps. Could it be that the Samsung S5 was actually shooting at 120fps and playing it back at 15fps to fake 240fps? (120fps slowed to 15fps is still 1/8th speed, so at a glance it looks the same as 240fps played back at 30fps.) Say it’s not so! Yep, it’s so.

To confirm this, I pulled up a stopwatch app and recorded it with the Galaxy S5 at 1/8th speed (which should be 240fps if you assume 30fps playback like normal video cameras). You can see the result here:

If the S5 were truly shooting at 240fps, the frame count over one second of the stopwatch should be 240. It’s not. It’s 120. If you don’t trust me and want to see for yourself, the original footage from the S5 can be downloaded HERE: www.digitalanarchy.com/downloads/samsung_120fps.zip
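If you’d rather interrogate the file than film a stopwatch, ffprobe (part of ffmpeg) will report the frame rate a clip was actually written with. This is a sketch of that kind of check, not what we did for the article, and the filename is a placeholder:

```python
# Print the frame rates a video file claims, using ffprobe.
# r_frame_rate is the declared rate; avg_frame_rate is frames / duration.
# A true 240fps clip should report 240/1, not 120/1 or 15/1.
import subprocess

def reported_frame_rates(path: str) -> str:
    result = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=r_frame_rate,avg_frame_rate",
            "-of", "default=noprint_wrappers=1",
            path,
        ],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # "galaxy_s5_slowmo.mp4" is a hypothetical filename.
    print(reported_frame_rates("galaxy_s5_slowmo.mp4"))
```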

Overall, very disappointing. It’s a silly trick to fake super slow motion. It’s hardly shocking that Samsung would use a bit of sleight of hand on the specs of their device, but still. Cheesy.

 

You might ask why this makes a difference, since the footage still plays back really slowly. If you’re trying to use it in a video editor and mix it with footage that IS shot at 30fps (or 24fps), the 15fps video will appear to stutter. Also, from an image quality standpoint, where you really see the problem is in the detail and motion blur, as you can see in this example:

iPhone 6 vs Samsung Galaxy S5 at 240fps

Also, the overall image quality of the iPhone was superior. But that’s something I’ll talk about when I actually compare them! That’s coming up next!

How Final Cut Pro X Caches Render Files (and how to prevent Beauty Box from re-rendering)

What causes Final Cut Pro X to re-render? If you’ve ever wondered why sometimes the orange ‘unrendered’ bar shows up when you make a change and sometimes it doesn’t… I explain it all here. This is something that will be valuable to any FCP user but can be of the utmost importance if you’re rendering Beauty Box, our plugin for doing skin retouching and beauty work on HD/4K video. (Actually we’re hard at work making Beauty Box a LOT faster, so look for an announcement soon!)

Currently, if you’ve applied Beauty Box to a long clip, say 60 minutes, you can be looking at serious render times (this can happen with any non-realtime effect): possibly twelve hours or so on slower computers and video cards, though it may only be a few hours, depending on how fast everything is.

FCP showing that footage is unrendered

Recently we had a user in exactly that situation. They had a .png logo sitting on top of the entire video, used as a bug. They rendered everything out to deliver it, but, of course, the client wanted the bug moved slightly. That caused Final Cut Pro to re-render EVERYTHING, meaning the really long Beauty Box render had to happen all over again. Unfortunately, this is just the way Final Cut Pro works.

Why does it work that way and what can be done about it?

Continue reading How Final Cut Pro X Caches Render Files (and how to prevent Beauty Box from re-rendering)

Why Doesn’t FCP X Support Image Sequences for Time Lapse (among other reasons)

In the process of putting together a number of tutorials on time lapse (particularly stabilizing it), I discovered that FCP X does not import image sequences. If you import 1500 sequentially numbered images, it imports them as 1500 separate images. This is a pretty huge fail on the part of FCP. Since it is a video application, I would expect it to do what every other video application does and recognize the image sequence as VIDEO. Even PHOTOSHOP is smart enough to let you import a series of images as an image sequence and treat it as a video file. (And, no, you should not be using the caveman-like video tools in Photoshop for much of anything, but I’m just sayin’ it imports the sequence correctly.)

There are ways to get around this, mainly using some other app or QuickTime to turn the image sequence into a video file. I recommend shooting RAW for time lapse, which means you have to pull the RAW sequence into one of the Adobe apps (Lightroom, After Effects, Premiere) for color correction anyway. It would be much nicer if FCP just handled it correctly without having to jump through the Adobe apps, but once you’re in the Adobe system, you might as well stay there, IMO.
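If you do need to get a finished sequence of frames into FCP X without going through the Adobe apps, one scripted option is to wrap them with ffmpeg. This is a rough sketch under some assumptions of mine (frames named frame_0001.jpg and so on, a 24fps timeline, ProRes as the intermediate codec), not a workflow from the post:

```python
# Wrap a numbered image sequence into a single ProRes .mov that FCP X
# will import as one clip. Pattern, frame rate, and codec are assumptions.
import subprocess

def sequence_to_movie(pattern: str, fps: int, dst: str) -> None:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-framerate", str(fps),
            "-i", pattern,              # e.g. "frame_%04d.jpg"
            "-c:v", "prores_ks",        # ffmpeg's ProRes encoder
            "-pix_fmt", "yuv422p10le",  # 10-bit 4:2:2, typical for ProRes 422
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    sequence_to_movie("frame_%04d.jpg", 24, "timelapse.mov")
```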

No, I’m not a FCP X hater. I just like my apps to work the way they should… just as I tore into Premiere and praised FCP for their .f4v (Flash video) support in this blog post.

Time Lapse image sequence in Final Cut Pro failing to load as a single video file

 

What’s wrong with this picture?

Why does Final Cut Pro handle Flash Video f4v files better than Premiere Pro?

First off, if you want Flash’s .f4v files to work in FCP X, you need to change the extension to .mp4. So myfile.f4v becomes myfile.mp4.
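If you have a whole folder of saved streams, renaming them one by one gets old fast. Here’s a small sketch that batch-renames the extension (the folder name is hypothetical):

```python
# Rename every .f4v file in a folder to .mp4 so FCP X will accept it.
# This only changes the extension; the file's contents are untouched.
from pathlib import Path

def rename_f4v_to_mp4(folder: str) -> None:
    for src in Path(folder).glob("*.f4v"):
        dst = src.with_suffix(".mp4")
        if not dst.exists():  # don't overwrite an existing file
            src.rename(dst)
            print(f"{src.name} -> {dst.name}")

if __name__ == "__main__":
    rename_f4v_to_mp4("saved_streams")  # placeholder folder name
```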

I’ve been doing some streaming lately with Ustream. It’s a decent platform, but I’m not particularly in love with it (and it’s expensive). Anyways, if you save the stream to a file, it saves it as a Flash Video file (.f4v). The file itself plays back fine. However, if you pull it into Premiere Pro for editing, adding graphics, etc., PPro can’t keep the audio in sync. Adobe… WTF? It’s your file format!

Final Cut Pro X does not have this problem. As mentioned, you need to change the file extension to .mp4, but otherwise it handles it beautifully.

Even if you pull the renamed file into Premiere, it still loses the audio sync. So it’s just a complete fail on Adobe’s part. FCP does a terrific job of handling this even on long programs like this 90 minute panel discussion.

Here’s the Final Cut Pro file, saved out to a Quicktime file and then uploaded to YouTube:

Here’s the Premiere Pro video, also saved out to QuickTime and uploaded. You’ll notice it starts out OK, but then quickly loses audio sync. This is typical in my tests: the longer the video, the more out of sync it gets. In this 30-second example it’s not too far out of sync, but it’s there.

Breaking Down Using Beauty Box in a Music Video

It’s always cool to see folks posting how they’ve used Beauty Box Video. One of the most common uses is music videos, including for many top artists. Most performers are a little shy about letting it be known they need retouching, so we get pretty excited when something does get posted (even if we don’t know the performer). Daniel Schweinert just posted this YouTube video and blog post breaking down his use of Beauty Box Video (and Mocha) for a music video in After Effects. Pretty cool stuff!

Here’s a link to his blog with more information:
http://schweinert.com/blog/files/49c88ab71626af3ecef80da0a92c9677-47.html