Have you ever considered using Transcriptive to build an effective Search Engine Optimization (SEO) strategy and increase the reach of your Social Media videos? Having your footage transcribed right after shooting can help you quickly scan everything for soundbites that will work for instant social media posts. You can find the terms your audience searches for the most, identify high-ranking keywords in your footage, and shape the content of your video based on your audience’s behavior.
According to vlogger and Social Media influencer Jack Blake, being aware of what your audience is doing online is a powerful tool to choose when and where to post your content, but also to decide what exactly to include in your Social Media Videos, which tend to be short and soundbite-like. The content of your media, titles, video descriptions and thumbnails, tags and post mentions should all be part of a strategy built based on what your audience is searching for. And this is why Blake is using Transcriptive not only to save time on editing but also to carefully curate his video content and attract new viewers.
Right after shooting his videos, the vlogger transcribes everything and exports the transcripts as rich text so he can quickly share the content with his team. After that, a copywriter scans through the transcribed audio and identifies content that will bring traffic to the client’s website and increase ROI. “It’s amazing. I transcribe the audio in minutes, edit some small mistakes without having to leave Premiere Pro, and share the content with my team. After that, we can compare the content with our targeted keywords and choose what I should cut. The editing goes quickly and smoothly because the words are already time-stamped and my captions take no time to create. I just export the transcripts as an SRT and it is pretty much done,” explains Blake.
Of course, it all starts with targeting the right keywords, and that can be tricky, but there are many analytics and measurement applications offering this service nowadays. If you are just getting started in the whole keyword targeting game, the easiest and most accessible way is connecting your site search queries with Google Analytics. This will allow you to get information on how users are interacting with your website, including how much your audience searches, who is performing searches and who is not, and where they begin searching, as well as where they head afterward. Google Analytics will also allow you to find out what exactly people are typing into Google when searching for content on the web.
For Blake, using competitors’ hashtags from YouTube has been very helpful to increase video views. “One of the differentiators of my work is that I research my client’s competitors on YouTube and identify the vidIQ tags (YouTube keyword tags) they have been using on their videos so we can use competitive tagging in our content description and video title. This allows the content I produce for the client to show up when people search for this specific hashtag on YouTube,” he explains. Blake’s team is also using Google Trends, a website that analyzes the popularity of top search queries in Google Search across various regions and languages. It’s a great tool to find out how often a search term is entered in Google’s search engine, compare that to total search volume, and learn how search trends varied within a certain interval of time.
When asked what would be the last thing he would recommend to video makers wanting to boost their video views on Social Media, Blake had no hesitation in choosing captions. “Social media feeds are often very crowded, fast-moving, and competitive. Nobody has time to open the video as full screen, turn the sound on and watch the whole thing, they often watch the videos without sound, and if the captions are not there then your message will not get through. And Transcriptive makes captioning a very easy process,” he says.
It’s been 5 years since we released Flicker Free, and we can say for sure that flickering from artificial lights is still one of the main reasons creatives download our flicker removal plugin. From music videos and reality-based videos to episodics on major networks, small productions to feature-length films, we’ve seen strobing caused by LED and fluorescent lights. It happens all the time, and we are glad our team could help fix flickering and see those productions look their best as they get distributed to the public.
Planning a shoot so you can have control of your camera settings, light setup and color balance is still definitely the best way to film no matter what type of videos you are making. However, flickering is a difficult problem to predict and sometimes we just can’t see it happening on set. Maybe it was a light way in the background or an old fluorescent that seemed fine on the small on-set monitor but looked horrible on the 27″ monitor in the edit bay.
Of course, in a perfect world we would take our time to shoot a few minutes of test footage, use a full size monitor to check what the footage looks like, match the frame rate of the artificial light to the frame rate of the camera and make sure the shutter speed is a multiple/division of the AC frequency of the country we are shooting in. Making absolutely sure the image looks sharp and is free of flicker! But we all know this is often not possible. In these situations, post-production tools can save the day and there’s nothing wrong with that!
Travel videos are the perfect example of how sometimes we need to surrender to post-production plugins to have a high-quality finished video. Recently, Handcraft Creative co-owner Raymond Friesen shot beautiful images of the pyramids in Egypt. He was fascinated by the scenery but only had a Sony A73 and a 16-70mm lens with him. After working on set for 5 years, with very well planned shoots, he knew the images wouldn’t be perfect but decided to film anyway. Yes, the end result was lots of flicker from older LED lights in the tombs. Nothing that Flicker Free couldn’t fix in post. Here’s a before and after clip:
Spontaneous filmmaking is certainly more likely to need post-production retouches, but we’ve also seen many examples of scripted projects that need to be rescued by Flicker Free. Filmmaker Emmanuel Tenenbaum talked to us about two instances where his extensive experience with short films couldn’t stop LED flicker from showing up in his footage. He purchased the plugin a few years ago for “I’m happy to see you”, and used it again to finish and distribute Two Dollars (Deux Dollars), a comedy selected by 85 festivals around the world, winner of 8 awards, broadcast on a dozen TV channels worldwide and chosen as Vimeo Staff Pick Premiere of the week. Curious why he got flicker while filming Two Dollars (Deux Dollars)? Tenenbaum talked to us about tight deadlines and production challenges in this user story!
Those are just a few examples of how flicker from artificial lights couldn’t be avoided. Our tech support team often receives footage from music videos, marketing commercials, and sports coverage, and seeing Flicker Free remove very annoying, sometimes difficult, flicker in post has been awesome. We posted some other user story examples on our website, so check them out! And if you have some awful flickering footage that Flicker Free helped fix, we’d love to see it and give you a shout-out on our Social Media channels. Email firstname.lastname@example.org with a link to your video clip!
The struggle of making documentary films nowadays is real. Competition is high, and budget limitations can stretch a 6-year timeline into a 10-year production. To make a movie you need money. To get the money you need decent, and sometimes edited, footage to show to funding organizations and production companies. And decent footage, well-recorded audio, as well as edited pieces cost money to produce. I’ve been facing this problem myself and discovered through my work at Digital Anarchy that finding an automated tool to transcribe footage can be instrumental in making small and low budget documentary films happen.
In this interview, I talked to filmmaker Chuck Barbee to learn how Transcriptive is helping him to edit faster and discussed some tips on how to get started with the plugin. Barbee has been in the Film and TV business for over 50 years. In 2005, after an impressive career in the commercial side of the Film and TV business, he moved to California’s Southern Sierras and began producing a series of personal “passion” documentary films. His projects are very heavy on interviews, and the transcribing process he used all throughout his career was no longer effective to manage his productions.
Barbee has been using Transcriptive for a month, but already considers the plugin a game-changer. Read on to learn how he is using it to make a long-form documentary about the people who created what is known as “The Bakersfield Sound” in country music.
DA: You have worked in a wide variety of productions throughout your career. Besides co-producing, directing, and editing prime-time network specials and series for Lee Mendelson Productions, you also worked as Director of Photography for several independent feature films. In your opinion, how important is the use of transcripts in the editing process?
CB: Transcripts are essential to edit long-form productions because they allow producers, editors, and directors to go through the footage, get familiarized with the content, and choose the best bits of footage as a team. Although interview-oriented pieces are more dependent on transcribed content, I truly believe transcripts are helpful no matter what type of motion picture productions you are making.
On most of my projects, we always made cassette tape copies of the interviews, then had someone manually transcribe them and print hard copies. With film projects, there was never any way to have a time reference in the transcripts, unless you wanted to do that manually. Then with video, it was easier to make time-coded transcripts, but both of these methods were time-consuming and relatively expensive, labor-wise. This is the method I’ve used since the late ’60s, but the sheer volume of interviews on my current projects and the awareness that something better probably exists with today’s technology prompted me to start looking for automated transcription solutions. That’s when I found Transcriptive.
DA: And what changed now that you are using Artificial Intelligence to transcribe your filmed interviews in Premiere Pro?
CB: I think Transcriptive is a wonderful piece of software. Of course, it is only as good as the diction of the speaker and the clarity of the recording, but the way the whole system works is perfect. I place an interview on the editing timeline, click transcribe and in about 1/3 of the time of the interview I have a digital file of the transcription, with time code references. We can then go through it, highlighting sections we want, or print a hard copy and do the same thing. Then we can open the digital version of the file in Premiere, scroll to the sections that have been highlighted, either in the digital file or the hard copy, click on a word or phrase and then immediately be at that place in the interview. It is a huge time saver and a game-changer.
The workflow has been simplified quite a bit, the transcription costs are down, and the editing process has sped up because we can search and highlight content inside of Premiere or use the transcripts to make paper copies. Our producers prefer to work from a paper copy of the interviews, so we use that TXT or RTF file to make a hard copy. However, Transcriptive can also help to reduce the number of printed materials if a team wants to do all the work digitally, which can be very effective.
DA: What makes you choose between highlighting content in the panel and using printed transcripts? Are there situations where one option works better than the other?
CB: It really depends on producer/editor choices. Some producers might want to have a hard copy because they prefer that to working on a computer. It really doesn’t matter much from an editor’s point of view because it is no problem to scroll through the text in Transcriptive to find the spots that have been highlighted on the hard copy. All you have to do is look at the timecode next to the highlighted parts of a hard copy and then scroll to that spot in Transcriptive. Highlighting in Transcriptive means you are tying up a workstation, with Premiere, to do that. If you only have one editing workstation running Premiere, then it makes more sense to have someone do the highlighting with a printed hard copy or on a laptop or any other computer which isn’t running Premiere.
DA: You mentioned the AI transcription is not perfect, but you would still prefer that than paying for human transcripts or transcribing the interviews yourself. Why do you think the automated transcripts are a better solution for your projects?
CB: Transcriptive is amazingly accurate, but it is also quite “literal” and will transcribe what it hears. For example, if someone named “Artie” pronounces his name “RD”, that’s what you’ll get. Also, many of our subjects have moderate to heavy accents and that does affect accuracy. Another thing I have noticed is that, when there is a clear difference between the sound of the subject and the interviewer, Transcriptive separates them quite nicely. However, when they sound alike, it can confuse them. When multiple voices speak simultaneously, Transcriptive also has trouble, but so would a human.
My team needs very accurate transcripts because we want to be able to search through 70 or more transcripts, looking for keywords that are important. Still, we don’t find the transcription mistakes to be a problem. Even if you have to go through the interview when it comes back to make corrections, it is far simpler and faster than the manual method and cheaper than the human option. Here’s what we do: right after the transcripts are processed, we go through each transcript with the interviews playing along in sync, making corrections to spelling or phrasing or whatever, especially with keywords such as names of people, places, themes, etc. It doesn’t take too much time and my tip is that you do it right after the transcripts are back, while you are watching the footage to become familiar with the content.
DA: Many companies are afraid of incorporating Transcriptive into an ongoing project workflow. How was the process of bringing our transcription plugin into a long-form documentary film right away?
CB: We have about 70 interviews of anywhere from 30 minutes to one hour each. It is a low budget project, being done by a non-profit called “Citizens Preserving History”. The producers were originally going to try to use time-code-window DVD copies of the interviews to make notes about which parts of the interviews to use because of budget limitations. They thought the cost of doing manually typed transcriptions was too much. But as they got into the process they began to see that typed transcripts were going to be the only way to go. Once we learned about Transcriptive and installed it, it only took a couple of days to do all 70 interviews, and the cost, at 12 cents per minute, is small compared to manual methods.
Transcriptive is very easy to use and it honestly took almost no time for me to figure out the workflow. The downloading and installation process was simple and direct and the tech support at Digital Anarchy is awesome. I’ve had several technical questions and my phone calls and emails have been answered promptly, by cheerful, knowledgeable people who speak my language clearly and really know what they are doing. They can certainly help quickly if people feel lost or something goes wrong, so I would say do yourself a favor and use Transcriptive in your project!
Here’s a short version of the opening tease for “The Town That Wouldn’t Die”, Episode III of Barbee’s documentary series:
Recently, an increasing number of Transcriptive users have been requesting a way to use After Effects to create burned-in subtitles from Transcriptive SRTs. This got us anarchists excited about making a Free After Effects SRT Importer for Subtitling And Captions.
Captioning videos is more important now than ever before. With the growth of mobile and Social Media streaming, YouTube and Facebook videos are often watched without sound, and subtitles are essential to keep your videos watchable and retain your audience. In addition to that, the Federal Communications Commission (FCC) has implemented rules for online video that require subtitles so people with disabilities can fully access media content and actively participate in the lives of their communities.
As a consequence, a lot of companies have style guides for their burned-in subtitles and/or want to do something more creative with the subtitles than what you get with standard 608/708 captions. I mean, how boring is white, monospaced text on a black background? After Effects users can do better.
While Premiere Pro does allow some customization of subtitles, creators can get greater customization via After Effects. Many companies have style guides or other requirements that specify how their subtitles should look. After Effects can be an easier place to create these types of graphics. However, it doesn’t import SRT files natively so the SRT Importer will be very useful if you don’t like Premiere’s Caption Panel or need subtitles that are more ‘designed’ than what you can get with normal captions. The script makes it easy to customize subtitles and bring them into Premiere Pro. Here’s how it works:
Windows: C:\Program Files\Adobe\Adobe After Effects CC 2019\Support Files\Scripts\ScriptUI Panels
Mac: /Applications/Adobe After Effects CC 2019/Scripts/ScriptUI Panels
4. Restart AE. The panel will show up in After Effects under Window > Transcriptive_Caption.
5. Create a new AE project with nothing in it. Open the panel and set the parameters to match your footage (frame rate, resolution, etc). When you click Apply, it’ll ask for an SRT file. It’ll then create a Comp with the captions in it.
6. Select the text layer and open the Character panel to set the font, font size, etc. Feel free to add a drop shadow, bug or other graphics.
7. Save that project and import the Comp into Premiere (Import the AE project and select the Comp). If you have a bunch of videos, you can run the script on each SRT file you have and you’ll end up with an AE project with a bunch of comps named to match the SRTs (currently it only supports SRT). Each comp will be named: ‘Captions: MySRT File’. Import all those comps into Premiere.
8. Drop each imported comp into the respective Premiere sequence. Double-check that the captions line up with the audio (same as you would when importing an SRT into Premiere). Queue the different sequences up in AME and render away once they’re all queued. (And keep in mind it’s a beta and doesn’t create the black backgrounds yet.)
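For the curious, SRT itself is a very simple plain-text format: numbered blocks, each with a `HH:MM:SS,mmm --> HH:MM:SS,mmm` time range followed by the caption text. As a rough illustration of the kind of parsing any SRT importer has to do (this is a hypothetical sketch, not the script's actual code):

```python
import re

def parse_srt(text):
    """Parse SRT text into a list of (start_seconds, end_seconds, caption) tuples."""
    entries = []
    # Blocks are separated by blank lines: index line, time range line, caption lines
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        m = re.match(r"(\d+):(\d+):(\d+),(\d+) --> (\d+):(\d+):(\d+),(\d+)", lines[1])
        if not m:
            continue
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000
        end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000
        entries.append((start, end, "\n".join(lines[2:])))
    return entries

sample = """1
00:00:01,000 --> 00:00:03,500
Hello, world!

2
00:00:04,000 --> 00:00:06,000
Second caption"""

captions = parse_srt(sample)
print(captions[0])  # (1.0, 3.5, 'Hello, world!')
```

Each parsed entry maps directly onto a text layer with in and out points in the comp, which is essentially what the importer builds for you.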
Although especially beneficial to Transcriptive users, this free After Effects SRT Importer for Subtitling And Captions will work with any SRT from any program. It’s definitely easier than all the steps above make it sound, and it’s available to all and sundry on our website. Give it a try and let us know what you think! Contact: email@example.com
When cutting together a documentary (or pretty much anything, to be honest), you don’t usually have just a single clip. Usually there are different clips, and different portions of those clips, here, there and everywhere.
Our transcription plugin, Transcriptive, is pretty smart about handling all this. So in this blog post we’ll explain what happens if you have total chaos on your timeline with cuts and clips scattered about willy nilly.
If you have something like this:
Transcriptive will only transcribe the portions of the clips necessary. Even if the clips are out of order. For example, the ‘Drinks1920’ clip at the beginning might be a cut from the end of the actual clip (let’s say 1:30:00 to 1:50:00) and the Drinks cut at the end might be from the beginning (e.g. 00:10:00 to 00:25:00).
If you transcribe the above timeline, only 10:00-25:00 and 1:30:00-1:50:00 of Drinks1920.mov will be transcribed.
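Transcriptive's internals aren't public, but the bookkeeping this implies can be sketched in a few lines: collect each cut's in/out points per source clip, then merge overlapping ranges so no span is transcribed twice. A hypothetical Python illustration:

```python
def merge_ranges(ranges):
    """Merge overlapping (in, out) ranges so each span is transcribed only once."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous range: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Two cuts of Drinks1920.mov on the timeline, out of order (times in seconds):
# 1:30:00-1:50:00 at the head of the sequence, 00:10:00-00:25:00 at the tail
cuts = [(90 * 60, 110 * 60), (10 * 60, 25 * 60)]
print(merge_ranges(cuts))  # [(600, 1500), (5400, 6600)]
```

The merged list is exactly the set of spans that would be sent off for transcription, regardless of the order the cuts appear on the timeline.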
If you Export>Speech Analysis, select the Drinks clip, and then look in the Metadata panel, you’ll see the Speech Analysis for the Drinks clip will have the transcript for those portions of the clip. If you drop those segments of the Drinks clip into any other project, the transcript comes along with it!
The downside to _only_ transcribing the portion of the clip on the timeline is, of course, that the entire clip doesn’t get transcribed. Not a problem for this project and this timeline, but if you want to use the Drinks clip in a different project, the segment you choose to use (say 00:30:00 to 00:50:00) may not have been previously transcribed.
However, if you drop the clip into another sequence, transcribe a time span that wasn’t previously transcribed and then Export>Speech Analysis, that new transcription will be added to the clip’s metadata. It wasn’t always this way, so make sure you’re using Transcriptive v1.5.2. If you’re in a previous version of Transcriptive and you Export>Speech Analysis to a clip that already has part of a transcript in SA, it’ll overwrite any transcripts already there.
So feel free to order your clips any way you want. Transcriptive will make sure all the transcript data gets put into the right places. AND… make sure to Export>Speech Analysis. This will ensure that the metadata is saved with the clip, not just your project.
Vertical Video is here to stay. It still makes me cringe a bit when I see people filming portrait. Since my early video journalism classes back in Brazil, shooting landscape ratio was a set rule that has always felt natural. However, nowadays the reality is that, sooner or later, a client will ask you to shoot and edit high-quality videos for their Social Media pages. And Social Media channels are mainly accessed through smartphones and tablets, which means posting portrait videos will be essential to engage and build a strong audience.
Shooting vertical is easy when you just want to post some footage of your weekend fun, but it requires a change of perspective when the goal is to produce, shoot and edit professional videos. In that case, it’s important to plan for the vertical aspect ratio from the beginning of the process. But what happens when your production is meant to screen across different platforms and needs to fit vertical aspect ratio requirements? In this case, shooting 4K gives you a lot of flexibility in post.
Most social video is posted at HD resolution, so why 4K? Cropping horizontal video to fit a vertical screen usually leads to very pixelated, low-quality footage. When your frames need to be taller than they are wide, your standard 16:9 frame will need to be dramatically cropped to fit the 9:16 smartphone screen, and regular HD resolution won’t allow the image to stay sharp and clean. Shooting 4K will give you extra pixels to work with and make it easy to reposition the frame in post as you wish.
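To put rough numbers on that: UHD 4K is 3840x2160, so the widest possible 9:16 crop is 1215x2160, comfortably above a typical 1080x1920 vertical delivery. An HD source only yields a 607x1080 crop that has to be blown up about 1.8x. A quick sketch of the arithmetic (the 1080x1920 delivery size is an assumption for illustration):

```python
def vertical_crop_width(source_w, source_h, target_ratio=9 / 16):
    """Widest crop of a landscape frame that fits a vertical aspect ratio."""
    return int(source_h * target_ratio)

# Assumed 9:16 delivery size for a phone screen
DELIVERY_W, DELIVERY_H = 1080, 1920

for name, (w, h) in {"HD": (1920, 1080), "UHD 4K": (3840, 2160)}.items():
    crop_w = vertical_crop_width(w, h)
    upscale = DELIVERY_W / crop_w  # >1 means the crop must be enlarged, losing sharpness
    print(f"{name}: crop {crop_w}x{h}, scale factor to delivery {upscale:.2f}")
```

The 4K crop actually gets scaled down to delivery size (a factor below 1), which is why it stays sharp, while the HD crop has to be upscaled.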
In addition to having more room for reframing, if your original footage has four times the pixels you can zoom in cleanly, since you have a much better source to work with. This is a huge advantage because vertical video is all about showing detail so you can make a deeper connection with your audience. 4K will give you the flexibility to efficiently adjust to vertical and square formats, and still preserve the option of a broader image of your subject in our beloved 16:9 standard film and television format.
Of course, you can always just upload a horizontal video to Instagram or Snapchat, but don’t expect your audience to take the time to turn their phones around just to watch your video. Chances are they will keep holding their phone with one hand and carelessly watch your footage in a small window across the screen. It’s obvious that adjusting to the 9:16 aspect ratio requires a change of perspective and demands that we rethink the way we produce, shoot and edit video. But isn’t that what film school is always trying to teach us?
Formats are changing, vertical streaming is a very strong distribution method, and mobile filmmaking is growing every day. It’s up to us, video makers, to reflect on the changes and find a balance between adjusting to our audiences and not losing image quality. I don’t believe vertical video will ever replace landscape aspect ratios, but I do think it is a solid format for short internet videos so let’s take advantage of it and get ready for the next challenge.
Recently our CEO Jim Tierney invited me to start a podcast for Digital Anarchy. I have a journalism background and at first, the idea did not sound too bad: it would actually be awesome to take the time to chat with industry folks on a regular basis and be paid for it. The challenge began when he said I would do a video podcast, interviewing all these awesome people on camera.
It may sound silly to some people, but the idea of watching myself on camera terrifies me. Believe it or not, to this day I have not watched a video interview I gave at NAB last April. I have only listened to it, and noticing my accent in each answer was enough to make me skip the image part. Since the day Jim invited me to start the “videocast”, I have been trying to understand my fear of being on camera and my relationship with my own image. As a media professional, why can’t I look at myself on the screen? Digging into that question brought unexpected answers and the need to talk about a problem every woman faces at least once in their lives, if not all the time: beauty standards.
Being skinny has always been a prerequisite to be beautiful in my culture. It is difficult, painful, and traumatizing to grow up in Brazil as a not-so-skinny girl. If you are overweight it means you are also sedentary, unhealthy and unattractive by proxy. And believe me, you do not need to have much fat to be considered overweight in Brazil. My curly hair also did not help. Although I am from Salvador, the city with the largest population of African descent in Brazil, curly hair was not a thing until very recently. I grew up straightening my hair with chemicals and only stopped doing that 4 years ago. It is hard to admit and think back, but looking at my graduation pictures from 10 years ago, looking at the popular girls at school, I realized I was just trying to belong.
I always knew most of my insecurities came from dissatisfaction with the way I look, but I also learned very early on that not feeling pretty does not mean I am not pretty. What it means is that society sets unachievable beauty standards for women and that I must fight that daily if I want to be productive and help to minimize the harm our industry has caused to women. This was enough to deal with my own insecurity and keep me going. What I didn’t realize is that it wasn’t enough to solve the problem.
Every day the media reminds you of what it means to be beautiful to society: tall, skinny, and mostly white. Black, Latina, and Middle Eastern women are now accepted; they just need to be skinny. It’s an old and well-known problem, and although a lot of women are freeing themselves from it, most of us still compare ourselves to the women we see on TV. In my case, I started to notice that those intangible standards can impact not only my eating and exercising habits, what I wear and how I wear the clothes I buy, but can also influence my behavior and stop me from growing professionally if I don’t face it.
What we can do to minimize the harm our industry has already caused to women is clear to me: we must stand up and fight for inclusion, equal rights, and full access to every job position available in the industry. We must include all body types in commercials, magazines, and TV shows. We must have women featured not only as personal assistant AI voices, but also coding and training the AI technology. However, for those who are already aware of this or working on solving the big picture, I ask: what can we do to not only free other women but truly free ourselves and stop silently shaming our own images? I don’t fully know the answer, but I will start by producing, editing and hosting the Digital Anarchy podcast. It will be incredibly difficult, but I can’t wait to discuss media-making with you all. Stay tuned! More info coming up soon.
Releasing new products is awesome, but to me, the best part of working for a video/photo plugin company is to see how our clients are using our products day-to-day. From transcription to flicker removal and skin retouching, content creators all over the world are using plugins to create better content and images. There are so many talented content creators making cool stuff out there!
This week we talked to Margarita Monet, lead singer of Edge of Paradise. The band (Dave Bates and David Ruiz on guitars, Vanya Kapetanovic on bass, and Jimmy Lee on drums) has been taking advantage of visual effects to enrich their music and create unique videos. In this interview, Margarita discusses how visual effects are helping to shape Edge of Paradise’s identity and explains how she has been using Beauty Box Video to improve the image quality of her videos.
Digital Anarchy: How would you describe the Edge of Paradise music and style?
Monet: Our music has evolved over the years. I would say we started with traditional hard rock and heavy metal, influenced by classic bands like Black Sabbath and Iron Maiden. But our music evolved into something more of a cinematic hard rock with an industrial edge. I incorporated the piano and keyboard, which gave some songs a symphonic feel. Our music is very dynamic, with blood-pumping drums and epic choruses, all driven by heavy guitar riffs. But we also have very melodic and dynamic piano ballads. The upcoming album Universe really showcases what Edge Of Paradise is all about, and we are so excited to share this unique sound we created!
Digital Anarchy: Since the very beginning, your music videos have been full of visual effects. Where do all the VFX ideas come from? Are they mostly done in post-production?
Monet: Most of the visual effects we actually tried to capture on camera and enhance in post, except for one of the lyric videos (Dust To Dust), which was done entirely in After Effects.
Dust to Dust, 2017:
Usually, ideas came from me, and whoever we were working with helped us bring them to life. We’ve had to get very creative playing with light, with props, building the settings. And as the band grows our videos get more and more elaborate and we all get more creative. We recently released a music video we shot in Iceland (Face of Fear), that one was directed by Val Rassi and edited by Robyn August. No visual effects there, just all scenery captured by an amazing drone pilot Darren LaFreniere!
Face of Fear, 2019:
Digital Anarchy: How long does it usually take to produce your videos? Is the whole band always involved in each stage of production?
Monet: Depends on the video. Some take about a month, where I come up with an idea and location/setting and we shoot it. Some videos take longer with a lot of planning and it’s a group effort. And there is always something we have to do in between, whether it’s playing shows or touring. Filming usually is a 1-2 day shoot, and we allow about 1-2 months for editing to be done.
We plan as much as possible and try to create beautiful shots for each take. However, things don’t always go as planned or we can’t achieve the perfect look we want. That’s when visual effects come in handy. Recently we shot a live video of an acoustic version of one of our songs. It was shot in a recording studio and we had some limitations with lighting. I was searching for something I could do to polish up the look and came across Digital Anarchy. Shooting with 4K cameras creates a very high-quality image where all the details are visible, so we decided to try Beauty Box Video. It is such a great tool to polish up the look! Extremely effective and time-efficient.
Digital Anarchy: How is Beauty Box helping you to achieve the look you want on your music videos?
Monet: We put so much effort into creating the settings and the “world” of the video that it’s only expected to have everything look polished and coherent. Sometimes we might have this great shot, but one of our faces looks shiny, or the light is not completely flattering. Beauty Box can fix those issues and allow us to use the shot we want!
Digital Anarchy: What was your first music video as a band and what do you think has changed so far?
Monet: Our first video was Mask; it sounds and looks like a completely different band. We had to start somewhere. It’s a well-done video; we had probably the largest crew working on that to date, over 10 people, and we learned a lot from it! It was also a different lineup, so the band was still evolving. But it does not even come close to what we look and sound like now!
Digital Anarchy: Would you say the visual effects applied to photos and videos nowadays are part of the band’s identity?
Monet: Yes, we want to transport people to another world, and we want to do that in our live show as well. That is why we are building our stage show to reflect the imagery of the band when we start touring in support of the upcoming album Universe. Our vision from the beginning was always larger than life so I would say it’s a part of our identity.
I want our content to make a big impact visually. We put so much time and effort into our songs to make sure all our music, from songwriting to production, is the best it can be. We have to do the same with video! And now we can put more time and effort into creating videos that tell great stories; that are visually stunning and are of the highest quality. That is essential to keeping the band growing.
I think the fact that we do have quite a few videos, not just music videos, but promo videos as well, helped us keep building momentum. Especially today, people expect that from you. Being a newer band, especially in the beginning, it was a big challenge and I didn’t know much about video creation, so I had to learn very fast.
Digital Anarchy: Every member of the band is somehow connected to other art forms besides music. How do you think this impacts the aesthetics of the band now?
Monet: I think these days, being in a band is not just about making music, we must create a world that people will want to be a part of. And I love that, I love the visual aspect of it, I love creating a stage show, creating music videos. I make a lot of graphics and art for the band as well, and in a way that helps me with the songwriting, because I can really visualize the world I’m creating. We have a great collection of people, all their skills and ideas come into play when we evolve our world!
Digital Anarchy: After producing and editing so many music videos, what is your favorite visual effect?
Monet: The last video we worked on with Nick Peterson, he created a really cool effect where he filmed us at different playback speeds/frame rates that gave certain parts of the video more of a static/robotic feel, while other parts are smooth slow motion. It created a really cool effect and gave the video the right dynamics and motion that flows right with the song. Some other effects I’ve liked in the past were light flares, and the earthquake effect is also great for music videos!
Dave, I, and the rest of the band members are very hands-on nowadays. We have a smaller 2-5 person crew, which helps everything run more smoothly and efficiently. Most of the time we have 1 or 2 days to shoot, and as the videos get more elaborate, we must work fast and get very creative. The last video we shot with Nick Peterson (Universe), we captured so much in 1 day. It’s great to work with people who understand how to maximize the time to capture what we need to achieve the vision!
The trailer for Universe is not yet ready, but here is a sneak peek!
With a solid line-up, Edge of Paradise is working on new music videos and getting ready to release their new album, Universe. Check their website to learn more!
Are you a content creator using Digital Anarchy plugins to produce video materials? Get in touch! We would love to learn more about your work and spread the word.
Unless you’ve been living under a rock, you know it’s March Madness… time for the NCAA Basketball Tournament. This is actually my favorite two weekends of sports a year. I’m not a huge sports guy, but watching all the single elimination games, rooting for underdogs, the drama, players putting everything they have into these single games… it’s really a blast. All the good things about sport.
It’s also the time of year that flicker drives me a little crazy. One of the downsides of developing Flicker Free is that I start to see flicker everywhere it happens. And it happens a lot during the NCAA tournament, especially in slow motion shots. Now, I understand that those are during live games and playing them back immediately is more important than removing some flicker. Totally get it.
However, for human interest stories recorded days or weeks before the tournament? Slow motion shots used two days after they happened? C’mon! Spend 5 minutes to re-render it with Flicker Free. Seriously.
Here’s a portion of a story about Gonzaga star Rui Hachimura:
Most of the shots have the camera/light sync problem that Flicker Free is famous for fixing. The original has the rolling-band flicker that’s the symptom of this problem; the fixed version took all of three minutes. I applied Flicker Free, selected the Rolling Bands 4 preset (this is always the best preset to start with), and rendered it. It looks much better.
So if you know anyone at the NCAA in post production, let them know they can take the flicker out of March Madness!
We’ve released PowerSearch 1.0 for Premiere Pro! It’s a new part of the Transcriptive suite of tools that’s essentially a search engine for Premiere letting you search clips, sequences, markers, metadata and captions all in one place.
It streamlines your editing by allowing you to quickly search hours of video for words or phrases. While it works best when used in conjunction with Transcriptive, it plays well with any service that can get transcripts or SRTs (captions) into Premiere Pro. It’s all about helping you find data; we don’t care where the data comes from.
Like any search engine, it displays a list of results. In most cases, clicking on a result takes you to the exact moment the words were spoken, in either the Source panel (clips) or the Timeline panel (sequences). If you’ve ever been asked to find a 15-second quote and had to dig through 50 hours of footage to find it, you know how valuable a time-saving tool this is.
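Under the hood, this kind of search is possible because every caption entry carries a timestamp. As a rough illustration of the idea (not PowerSearch’s actual implementation), here’s a minimal Python sketch that parses SRT-style caption text and returns the timecodes where a phrase occurs:

```python
import re

def parse_srt(srt_text):
    """Parse SRT caption text into (start_time, text) entries."""
    entries = []
    # SRT blocks are separated by blank lines: index, timing line, then text
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.strip().splitlines()
        if len(lines) < 3:
            continue
        start = lines[1].split(" --> ")[0].strip()
        text = " ".join(lines[2:])
        entries.append((start, text))
    return entries

def search_captions(entries, phrase):
    """Return (timestamp, caption) pairs whose text contains the phrase."""
    phrase = phrase.lower()
    return [(t, txt) for t, txt in entries if phrase in txt.lower()]

# Hypothetical caption data for illustration
srt = """1
00:00:01,000 --> 00:00:04,000
Welcome to the show.

2
00:01:15,500 --> 00:01:18,000
That was an amazing dunk!
"""

entries = parse_srt(srt)
hits = search_captions(entries, "amazing")
print(hits)  # [('00:01:15,500', 'That was an amazing dunk!')]
```

Once every word or caption has a timecode attached, jumping to the matching spot in a timeline is just a matter of seeking to that timestamp.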
I decided to try Transcriptive way before I became part of the Digital Anarchy family. Just like any other aspiring documentary filmmaker, I knew relying on a crew to get my editing started was not an option. Without funding you can’t pay a crew; without a crew you can’t get funding. I had no money, an idea in my head, some footage shot with the help of friends, and a lot of work to do. Especially when working on your very first feature film.
Besides being an independent Filmmaker and Social Media strategist for DA, I am also an Assistive Technology Trainer for a private company called Adaptive Technology Services. I teach blind and low vision individuals how to take advantage of technology to use their phones and computers to rejoin the workforce after their vision loss. Since the beginning of my journey as an AT Trainer – I started as a volunteer 6 years ago – I have been using my work to research the subject and prepare for this film.
My movie is about the relationship between the sighted and non-sighted communities. It seeks to establish a dialog between people with and without visual disabilities so we can come together to demystify disabilities to those without them. I know it is an important subject, but right from the beginning of this project I learned how hard it is to gather funds for any disability-related initiative. I had to carefully budget the shoots and define priorities. Paying a post-production crew was not (and still is not) possible. I have to write and cut samples on my own for now. Transcriptive was a way for me to get things moving by myself so I can apply for grants in the near future and start paying producers, editors, camera operators, sound designers, and get the project going for real. The journey started with transcribing the interviews. Transcriptive did a pretty good job transcribing the audio from the camera, as you can see below. Accuracy got even better when transcribing audio from the mic.
The idea of getting accurate automated transcripts brought a smile to my face. But could Artificial Intelligence really get the job done for me? I never fully believed so, and I was right. The accuracy for English interviews was pretty impressive; I barely had to do any editing on those. The situation changed as soon as I tried transcribing audio in my native language, Brazilian Portuguese. The AI transcription didn’t just get a bit flaky; it was completely unusable, so I decided not to waste more time and started doing my own manual transcriptions.
I have been using Speechmatics for most of my projects because the accuracy is considerably higher than Watson with English. However, after trying to transcribe in Portuguese for the first time, it occurred to me Speechmatics actually offers Portuguese from Portugal while Watson transcribes Portuguese from Brazil. I decided to give Watson a try, but the transcription was not much better than the one I got from Speechmatics.
It is true the Brazilian Portuguese footage I was transcribing was b-roll clips recorded with a Rode mic placed on top of my DSLR. They were not well-mic’d sit-down interviews. The clips do have decent audio, but also some background noise that does not help foreign-language speech-to-text conversion. At the time I had a deadline to meet and was not able to record better audio and compare Speechmatics and Watson Portuguese transcripts. It will be interesting to give it another try, with more time to further compare and evaluate whether there are advantages to using Watson for my next batch of footage.
Days after my failed attempt to transcribe Brazilian Portuguese with Speechmatics, I went back to the Transcriptive panel for Premiere, found an option to import my human transcripts, gave it a try, and realized I could still use Transcriptive to speed up my video production workflow. I could still save time by letting Transcriptive assign timecode to the words I transcribed, which would be nearly impossible for me to do on my own. The plugin allowed me to quickly find where things were said in 8 hours of interviews. Having the timecode assigned to each word allowed me to easily search the transcript and jump to that point in my video where I wanted to have a cut, marker, b-roll or transition effect applied.
My movie is still in pre-production and my Premiere project is honestly not that organized yet, so the search capability was also a huge advantage. I have been working on samples to apply for grants, which means I have tons of different sequences, multicam sequences, and markers that now live in folders inside of folders. Before I started working for DA I was looking for a solution to minimize the mess without having to fully organize it or spend too much money, and PowerSearch came to the rescue. Also, being able to edit my transcripts inside of Premiere made my life a lot easier.
Last month, talking to a few film clients and friends, I found out most filmmakers still clean up human transcripts. In my case, I go through the transcripts to add punctuation marks and other things that will remind me how eloquent speakers were in a given phrase. Ellipses, question marks and exclamation points remind me of the tone they spoke in, allowing me to get paper cuts done faster. I am not sure ASR technology will start inserting punctuation in the future, but it would be very handy for me. Until that is a possibility, I am grateful Transcriptive now offers a text-editing interface, so I can edit my transcripts without leaving Premiere.
For the movie I am making now, I was lucky enough to have a friend willing to help me get this tedious and time-consuming part of the work done, so I am now exporting all my transcripts to Transcriptive.com. The app will allow us to collaborate on the transcript. She will be helping me all the way from LA, editing all the transcripts without having to download a whole Premiere project to get the work done.
For the last 14 years I’ve created the Audio Art Tour for Burning Man. It’s kind of a docent led audio guide to the major art installations out there, similar to an audio guide you might get at a museum.
Burning Man always has a different ‘theme’ and this year it was ‘I, Robot’. I generally try and find background music related to the theme. EDM is big at Burning Man, land of 10,000 DJs, so I could’ve just grabbed some electronic tracks that sounded robotic. Easy enough to do. However I decided to let Artificial Intelligence algorithms create the music! (You can listen to the tour and hear the different tracks)
This turned out to be not so easy, so I’ll break down what I had to do to get seven unique sounding, usable tracks. I had a bit more success with AmperMusic, which is also currently free (unlike Jukedeck), so I’ll discuss that first.
Getting the Tracks
The problem with both services was getting unique-sounding tracks. The A.I. has a tendency to create very similar-sounding music. Even if you select different styles and instruments, you often end up with oddly similar music. This problem is compounded by Amper’s inability to render more than about 30 seconds of music.
What I found I had to do was let it generate 30 seconds randomly or with me selecting the instruments. I did this repeatedly until I got a 30 second sample I liked. At which point I extended it out to about 3 or 4 minutes and turned off all the instruments but two or three. Amper was usually able to render that out. Then I’d turn off those instruments and turn back on another three. Then render that. Rinse, repeat until you’ve rendered all the instruments.
Now you’ve got a bunch of individual tracks that you can combine to get your final music track. Combine them in Audition or even Premiere Pro (or FCP or whatever NLE) and you’re good to go. I used that technique to get five of the tracks.
Jukedeck didn’t have the rendering problem but it REALLY suffered from the ‘sameness’ problem. It was tough getting something that really sounded unique. However, I did get a couple good tracks out of it.
Problems Using Artificial Intelligence
This is another example of A.I. and Machine Learning that works… sort of. I could have found seven stock music tracks that I like much faster (this is what I usually do for the Audio Art Tour). The amount of time it took me messing around with these services was significant. Also, if Jukedeck is any indication, a music track from one of these services will cost as much as a stock music track. Just go to Pond5 to see what you can get for the same price. With a much, much wider variety. I don’t think living, breathing musicians have much to worry about. At least for now.
That said, I did manage to get seven unique, cool sounding tracks out of them. It took some work, but it did happen.
As with most A.I./ML, it’s difficult to see what the future looks like. There have certainly been a ton of advances, but I think in a lot of cases, it’s some of the low hanging fruit. We’re seeing that with speech-to-text algorithms in Transcriptive, where they’re starting to plateau and cluster around the same accuracy levels. The fruit (accuracy) is now pretty high up and improvements are tough. It’ll be interesting to see what it takes to break through that. More data? Faster servers? A new approach?
I think music may be similar. It seems like it’s a natural thing for A.I. but it’s deceptively difficult to do in a way that mimics the range and diversity of styles and sounds that many human musicians have. Particularly a human armed with a synth that can reproduce an entire orchestra. We’ll see what it takes to get A.I. music out of the Valley of Sameness.
1) Practically every company exhibiting was talking about A.I.-something.
2) VR seemed to have disappeared from vendor booths.
The last couple years at NAB, VR was everywhere. The Dell booth had a VR simulator, Intel had a VR simulator, booths had Oculuses galore and you could walk away with an armful of cardboard glasses… this year, not so much. Was it there? Sure, but it was hardly to be seen in booths. It felt like the year 3D died. There was a pavilion, there were sessions, but nobody on the show floor was making a big deal about it.
In contrast, it seemed like every vendor was trying to attach A.I. to their name, whether they had an A.I. product or not. Not to mention, Google, Amazon, Microsoft, IBM, Speechmatics and every other big vendor of A.I. cloud services having large booths touting how their A.I. was going to change video production forever.
I’ve talked before about the limitations of A.I. and I think a lot of what was talked about at NAB was really over-promising what A.I. can do. We spent most of the six months after releasing Transcriptive 1.0 developing non-A.I. features to help make the A.I. portion of the product more useful. The release we’re announcing today and the next release coming later this month will focus on getting around A.I. transcripts completely by importing human transcripts.
There’s a lot of value in A.I. It’s an important part of Transcriptive and for a lot of use cases it’s awesome. There are just also a lot of limitations. It’s pretty common that you run into the A.I. equivalent of the Uncanny Valley (a CG character that looks *almost* human but ends up looking unnatural and creepy), where A.I. gets you 95% of the way there but it’s more work than it’s worth to get the final 5%. It’s better to just not use it.
You just have to understand when that 95% makes your life dramatically easier and when it’s like running into a brick wall. Part of my goal, both as a product designer and just talking about it, is to help folks understand where that line in the A.I. sand is.
I also don’t buy into this idea that A.I. is on an exponential curve and it’s just going to get endlessly better, obeying Moore’s law like the speed of processors.
When we first launched Transcriptive, we felt it would replace transcriptionists. We’ve been disabused of that notion. ;-) The reality is that A.I. is making transcriptionists more efficient. Just as we’ve found Transcriptive to be making video editors more efficient. We had a lot of folks coming up to us at NAB this year telling us exactly that. (It was really nice to hear. :-)
However, much of the effectiveness of Transcriptive comes more from the tools that we’ve built around the A.I. portion of the product. Those tools can work with transcripts and metadata regardless of whether they’re A.I. or human generated. So while we’re going to continue to improve what you can do with A.I., we’re also supporting other workflows.
Over the next couple months you’re going to see a lot of announcements about Transcriptive. Our goal is to leverage the parts of A.I. that really work for video production by building tools and features that amplify those strengths, like PowerSearch our new panel for searching all the metadata in your Premiere project, and build bridges to other technology that works better in other areas, such as importing human created transcripts.
Should be a fun couple months, stay tuned! btw… if you’re interested in joining the PowerSearch beta, just email us at firstname.lastname@example.org.
Addendum: Just to be clear, in one way A.I. is definitely NOT VR. It’s actually useful. A.I. has a lot of potential to really change video production, it’s just a bit over-hyped right now. We, like some other companies, are trying to find the best way to incorporate it into our products because once that is figured out, it’s likely to make editors much more efficient and eliminate some tasks that are total drudgery. OTOH, VR is a parlor trick that, other than some very niche uses, is going to go the way of 3D TV and won’t change anything.
Chief Executive Anarchist
A.I. is definitely changing how editors get transcripts and search video for content. Transcriptive demonstrates that pretty clearly with text. Searching via object recognition is something that also is already happening. But what about actual video editing?
One of the problems A.I. has is finishing. Going the last 10% if you will. For example, speech-to-text engines, at best, have an accuracy rate of about 95% or so. This is about on par with the average human transcriptionist. For general purpose recordings, human transcriptionists SHOULD be worried.
But for video editing, there are some differences, which is good news. First, and most importantly, errors tend to be cumulative. So if a computer is going to edit a video, at the very least, it needs to do the transcription and it needs to recognize the imagery (we’ll ignore other considerations like style, emotion, and story for the moment). Speech recognition is at best 95% accurate, and object recognition is worse. The more layers of A.I. you stack, the more those errors will usually multiply (in some cases there might be improvement, though). While it’s possible automation will be able to produce a decent rough cut, these errors make it difficult to see automation replacing most of the types of videos that pro editors are typically employed for.
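To see why stacked A.I. stages hurt, multiply their accuracies. A tiny illustrative calculation (the exact percentages are assumptions for the sake of the example, not measurements):

```python
# Assumed per-stage accuracies for independent A.I. stages
speech_accuracy = 0.95   # speech-to-text at its best, per the post
object_accuracy = 0.90   # object recognition, assumed somewhat worse

# If the stages fail independently, accuracies compound multiplicatively
combined = speech_accuracy * object_accuracy
print(round(combined, 3))  # 0.855 -- worse than either stage alone
```

Add a third or fourth stage and the combined accuracy keeps dropping, which is the cumulative-error problem in a nutshell.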
Secondly, if the videos are being done for humans, frequently the humans don’t know what they want. Or at least they’re not going to be able to communicate it in such a way that a computer will understand and be able to make changes. If you’ve used Alexa or Echo, you can see how well A.I. understands humans. Lots of situations, especially literal ones (find me the best restaurant), it works fine, lots of other situations, not so much.
Many times as an editor, the direction you get from clients is subtle or you have to read between the lines and figure out what they want. It’s going to be difficult to get A.I.s to take the way humans usually describe what they want, figure out what they actually want and make those changes.
Third… then you get into the whole issue of emotion and storytelling, which I don’t think A.I. will do well anytime soon. The Economist recently had an amusing article where it let an A.I. write the article. The result is here. Very good at mimicking the style of the Economist but when it comes to putting together a coherent narrative… ouch.
It’s Not All Good News
There are already phone apps that do basic automatic editing. These are more for consumers that want something quick and dirty. For most of the type of stuff professional editors get paid for, it’s unlikely what I’ve seen from the apps will replace humans any time soon. Although, I can see how the tech could be used to create rough cuts and the like.
Also, for some types of videos, wedding or music videos perhaps, you can make a pretty solid case that A.I. will be able to put something together soon that looks reasonably professional.
You need training material for neural networks to learn how to edit videos. Thanks to YouTube, Vimeo and the like, there is an abundance of training material. Do a search for ‘wedding video’ on YouTube. You get 52,000,000 results. 2.3 million people get married in the US every year. Most of the videos from those weddings are online. I don’t think finding a few hundred thousand of those that were done by a professional will be difficult. It’s probably trivial actually.
Same with music videos. There IS enough training material for the A.I.s to learn how to do generic editing for many types of videos.
For people that want to pay $49.95 to get their wedding video edited, that option will be there. Probably within a couple years. Have your guests shoot video, upload it and you’re off and running. You’ll get what you pay for, but for some people it’ll be acceptable. Remember, A.I. is very good at mimicking. So the end result will be a very cookie cutter wedding video. However, since many wedding videos are pretty cookie cutter anyways… at the low end of the market, an A.I. edited video may be all ‘Bridezilla on A Budget’ needs. And besides, who watches these things anyways?
Let The A.I Do The Grunt Work, Not The Editing
The losers in the short term may be assistant editors. Many of the tasks A.I. is good for… transcribing, searching for footage, etc.… are now typically given to assistants. However, it may simply change the types of tasks assistant editors are given. There’s a LOT of metadata that needs to be entered and wrangled.
While A.I. is already showing up in many aspects of video production, it feels like having it actually do the editing is quite a ways off. I can see creating A.I. tools that help with editing: Rough cut creation, recommending color corrections or B roll selection, suggesting changes to timing, etc. But there’ll still need to be a person doing the edit.
Time lapse is always challenging… you’ve got a high resolution image sequence that can seriously tax your system. Add Flicker Free on top of that… where we’re analyzing up to 21 of those high resolution images… and you can really slow a system down. So I’m going to go over a few tips for speeding things up in Premiere or other video editor.
First off, turn off Render Maximum Depth and Maximum Quality. Maximum Depth is not going to improve the render quality unless your image sequence is HDR and the format you’re saving it to supports 32-bit images. If it’s just a normal RAW or JPEG sequence, it won’t make much of a difference. Render Maximum Quality may make a bit of difference but it will likely be lost in whatever compression you use. Do a test or two to see if you can tell the difference (it does improve scaling) but I rarely can.
RAW: If at all possible you should shoot your time lapses in RAW. There are some serious benefits which I go over in detail in this video: Shooting RAW for Time Lapse. The main benefit is that Adobe Camera RAW automatically removes dead pixels. It’s a big f’ing deal and it’s awesome. HOWEVER… once you’ve processed them in Adobe Camera RAW, you should convert the image sequence to a movie or JPEG sequence (using very little compression). It will make processing the time lapse sequence (color correction, effects, deflickering, etc.) much, much faster. RAW is awesome for the first pass, after that it’ll just bog your system down.
Nest, Pre-comp, Compound… whatever your video editing app calls it, use it. Don’t apply Flicker Free or other de-flickering software to the original, super-high resolution image sequence. Apply it to whatever your final render size is… HD, 4K, etc.
Why? Say you have a 6000×4000 image sequence and you need to deliver an HD clip. If you apply effects to the 6000×4000 sequence, Premiere will have to process TWELVE times the amount of pixels it would have to process if you applied it to HD resolution footage. 24 million pixels vs. 2 million pixels. This can result in a HUGE speed difference when it comes time to render.
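The arithmetic behind that claim, as a quick sanity check:

```python
# Pixel counts behind the "twelve times" claim (resolutions from the post)
src_w, src_h = 6000, 4000    # original time lapse frames
out_w, out_h = 1920, 1080    # HD delivery resolution

src_pixels = src_w * src_h   # 24,000,000 pixels per frame
out_pixels = out_w * out_h   # 2,073,600 pixels per frame
print(round(src_pixels / out_pixels, 1))  # 11.6, i.e. roughly 12x the work
```

Every effect applied before the downscale pays that roughly 12x cost on every frame, which is why nesting first makes such a difference.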
How do you Nest?
This is Premiere-centric, but the concept applies to After Effects (pre-compose) or FCP (compound) as well. (The rest of this blog post will be explaining how to Nest. If you already understand everything I’ve said, you’re good to go!)
First, take your original image sequence (for example, 6000×4000 pixels) and put it into an HD sequence. Scale the original footage down to fit the HD sequence.
The reason for this is that we want to control how Premiere applies Flicker Free. If we apply it to the 6000×4000 images, Premiere will apply FF and then scale the image sequence. That’s the order of operations. It doesn’t matter if Scale is set to 2%. Flicker Free (and any effect) will be applied to the full 6000×4000 image.
So… we put the big, original images into an HD sequence and do any transformations (scaling, adjusting the position and rotating) here. This usually includes stabilization… although if you’re using Warp Stabilizer you can make a case for doing that to the HD sequence. That’s beyond the scope of this tutorial, but here’s a great tutorial on Warp Stabilizer and Time Lapse Sequences.
Next, we take our HD time lapse sequence and put that inside a different HD sequence. You can do this manually or use the Nest command.
Now we apply Flicker Free to our HD time lapse sequence. That way FF will only have to process the 1920×1080 frames. The original 6000×4000 images are hidden in the HD sequence. To Flicker Free it just looks like HD footage.
Voila! Faster rendering times!
So, to recap:
Turn off Render Maximum Depth and Render Maximum Quality
Shoot RAW, but apply Flicker Free to a JPEG sequence/Movie
Apply Flicker Free to the final output resolution, not the original resolution
Those should all help your rendering times. Flicker Free still takes some time to render, none of the above will make it real time. However, it should speed things up and make the render times more manageable if you’re finding them to be really excessive.
Using Transcriptive with multicam sequences is not a smooth process and doesn’t really work. We’re working on a solution, but it’s tricky due to Premiere’s limitations.
However, while we sort that out, here’s a workaround that is pretty easy to implement. Here are the steps:
1- Take the clip with the best audio and drop it into its own sequence.
2- Transcribe that sequence with Transcriptive.
3- Now replace that clip with the multicam clip.
4- Voila! You have a multicam sequence with a transcript. Edit the transcript and clip as you normally would.
This is not a permanent solution and we hope to make it much more automatic to deal with Premiere’s multicam clips. In the meantime, this technique will let you get transcripts for multicam clips.
Thanks to Todd Drezner at Cohn Creative for suggesting this workaround.
Artificial Intelligence (A.I.) and machine learning are changing how video editors deal with some common problems. 1) how do you get accurate transcriptions for captions or subtitles? And 2) how do you find something in hours of footage if you don’t know exactly where it is?
Getting out of the Transcription Dungeon
Kelley Slagle, director, producer, and editor for Cavegirl Productions, has been working on Eye of the Beholder, a documentary on the artists who created the illustrations for the Dungeons & Dragons game. With over 40 hours of interview footage to comb through, searching it all has been made much easier by Transcriptive, a new A.I. plugin for Adobe Premiere Pro.
Imagine having Google for your video project. Turning all the dialog into text makes everything easily searchable (and it supports 28 languages). Not to mention making it easy to create captions and subtitles.
The Dragon of Time And Money
Using a traditional transcription service for 40 hours of footage, you’re looking at a minimum of $2400 and a few days to turn it all around. Not exactly cost or time effective. Especially if you’re on a doc budget. However, it’s a problem for all of us.
Transcriptive helps solve the transcription problem, along with the problems of searching video and creating captions/subtitles. It uses A.I. and machine learning to automatically generate transcripts with up to 95% accuracy and bring them into Premiere Pro. And the cost? About $4/hour (or much less depending on the options you choose). So, 40 hours is $160 vs. $2400. And you’ll get all of it back in a few hours.
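The savings work out like this (rates taken from the figures above; actual pricing varies with the options you choose):

```python
hours = 40               # footage to transcribe
service_rate = 60.0      # traditional service: $2400 minimum for 40 hours
ai_rate = 4.0            # Transcriptive: about $4 per hour of footage

print(hours * service_rate)  # 2400.0
print(hours * ai_rate)       # 160.0
```

That 15x difference is per project, so it compounds quickly across a documentary’s worth of interviews.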
Yeah, it’s hard to believe.
Read what these three filmmakers have to say and try the Transcriptive demo out on your own footage. It’ll make it much easier to believe.
“We are using Transcriptive to transcribe all of our interviews for EYE OF THE BEHOLDER. The idea of paying a premium for that much manual transcription was daunting. I am in the editing phase now and we are collaborating with a co-producer in New York. We need to share our ideas for edits and content with him, so he is reviewing transcripts generated by Transcriptive and sending us his feedback and vice versa. The ability to get a mostly accurate transcription is fine for us, as we did not expect the engine to know proper names of characters and places in Dungeons & Dragons.” – Kelley Slagle, Cavegirl Productions
Google Your Video Clips and Premiere Project?
Since everything lives right within Premiere, all the dialog is fully searchable. It’s basically a word processor designed for transcripts, where every word has time code. Yep, every word of dialog has time code. Click on the word and jump to that point on the timeline. This means you don’t have to scrub through footage to find something. Search and jump right to it. It’s an amazing way for an editor to find any quote or quip.
As Kelley says, “We are able to find what we need by searching the text or searching the metadata thanks to the feature of saving the markers in our timelines. As an editor, I am now able to find an exact quote that one of my co-producers refers to, or find something by subject matter, and this speeds up the editing process greatly.”
Joy E. Reed of Oh My! Productions, who’s directing the documentary, ‘Ren and Luca’ adds, “We use sequence markers to mark up our interviews, so when we’re searching for specific words/phrases, we can find them and access them nearly instantly. Our workflow is much smoother once we’ve incorporated the Transcriptive markers into our project. We now keep the Markers window open and can hop to our desired areas without having to flip back and forth between our transcript in a text document and Premiere.”
Workflow, Captions, and Subtitles
Captions and subtitles are one of the key uses of Transcriptive. You can use it with Premiere’s captioning tool or export many different file formats (SRT, SMPTE, SCC, MCC, VTT, etc.) for use in any captioning application.
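For what it’s worth, SRT (the simplest of those formats) is just plain text: numbered cues, a start/end timecode line, then the caption text. Here’s a minimal sketch of writing one, using invented cue data — this isn’t Transcriptive’s exporter, just an illustration of what the format looks like:

```python
# Sketch of the SRT caption format: numbered cues, a
# "HH:MM:SS,mmm --> HH:MM:SS,mmm" time range, then the caption text.
# The cue data at the bottom is made up for illustration.

def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp, e.g. 3.5 -> 00:00:03,500."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    """cues: list of (start_sec, end_sec, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

print(to_srt([(0.0, 2.5, "We use sequence markers"),
              (2.5, 5.0, "to mark up our interviews.")]))
```

Since every word in a Transcriptive transcript already carries timecode, generating cues like these is mostly a matter of grouping words into lines.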
“We’re using Transcriptive to transcribe both sit-down and on-the-fly interviews with our subjects. We also use it to get transcripts of finished projects to create closed captions/subtitles,” says Joy. “We can’t even begin to say how useful it has been on Ren and Luca and how much time it saves us. The turnaround time to receive the transcripts is SO much faster than when we sent it out to a service. We’ve had the best luck with Speechmatics. The transcripts are only as accurate as our speakers – we have a teenage boy who tends to mumble, and his stuff has needed more tweaking than some of our other subjects, but it has been great for very clearly recorded material. The time it saves vs the time you need to tweak for errors is significant.”
Transcriptive is fully integrated into Premiere Pro, so you never have to leave the application or pass metadata and files around. This makes creating captions much easier, allowing you to easily edit each line while playing back the footage. There are also tools and keyboard shortcuts to make the editing much faster than a normal text editor. You then export everything to Premiere’s caption tool and use that to put on the finishing touches and deliver the captions with your media.
Another company doing documentary work is Windy Films. They are focused on telling stories of social impact and innovation, and like most doc makers are usually on tight budgets and deadlines. Transcriptive has been critical in helping them tell real stories with real people (with lots of real dialog that needs transcribing).
They recently completed a project for Planned Parenthood. The deadline was incredibly tight. Harvey Burrell, filmmaker at Windy, says, “We were trying to beat the senate vote on the healthcare repeal bill. We were editing while driving back from Iowa to Boston. The fact that we could get transcripts back in a matter of hours instead of a matter of days allowed us to get it done on time. We use Transcriptive for everything. The integration into Premiere has been incredible. We’ve been getting transcripts done for a long time. The workflow was always a bit clunky, particularly having transcripts in a Word document off to one side. Having the ability to click on a word and just have Transcriptive take you there in the timeline is one of our favorite features.”
Getting Accurate Transcripts using A.I.
Audio quality matters. The better the recording and the more clearly the talent enunciates, the better the transcript. You can get excellent results, around 95% accuracy, with very well recorded audio. That means your talent is well mic’d, there’s not a lot of background noise, and they speak clearly. Even if you don’t have that, you’ll still usually get very good results as long as the talent is mic’d. Even accents are ok as long as they speak clearly. Talent that’s off mic, or crosstalk, will make the transcript less accurate.
Transcriptive lets you sign up with the speech services directly, allowing you to get the best pricing. Most transcription products hide the service they’re using (they’re all using one of the big A.I. services), marking up the cost per minute to as much as $0.50/min. When you sign up directly, you get Speechmatics for $0.07/min. And Watson gives you the first 1000 minutes free. (Speechmatics is much more accurate, but Watson can be useful.)
So let’s talk about something that’s near and dear to my heart: Fonts.
I recently discovered Adobe TypeKit. I know…some of you are like… ‘You just discovered that?’.
Yeah, yeah… well, in case there are other folks that are clueless about this bit of the Creative Cloud that’s included with your subscription: It’s a massive font library that can be installed on your Creative Cloud machine… much of which is free (well, included in the cost of CC).
Up until a week ago I just figured it was a way for Adobe to sell fonts. I was mistaken. You find the font you like and, more often than not, you click the SYNC button and, boom… font is installed on your machine for use in Photoshop or After Effects or whatever.
It’s a super cool feature of Creative Cloud that, if you’re as clued in as I am about everything CC includes… you might not know about. Now you do. :-) Here’s a bit more info from Adobe.
I realize this probably comes off as a bit of an ad for TypeKit, but it really is pretty cool. I just designed a logo using a new font I found there. And since it’s Adobe, the fonts are of really high quality, not like what you find on free font sites (which is what I’ve relied on for many uses).
One of the fun challenges of developing graphics software is dealing with the many, varied video cards and GPUs out there. (actually, it’s a total pain in the ass. Hey, just being honest :-)
There are a lot of different video cards out there and they all have their quirks, which are complicated by the different operating systems and host applications. For example, Apple decides to more or less drop OpenCL in favor of Metal, which means we have to re-write quite a bit of code; Adobe After Effects and Adobe Premiere Pro handle GPUs differently even though it’s the same API; etc., etc. From the end user side of things you might not realize how much development goes into GPU Acceleration. It’s a lot.
The latest release of Beauty Box Video for Skin Retouching (v4.1) contains a bunch of fixes for video cards that use OpenCL (AMD, Intel). So if you’re using those cards it’s a worthwhile download. If you’re using Resolve and Nvidia cards, you’ll also want to download it, as there’s a bug with CUDA and Resolve; you’ll want to use Beauty Box in OpenCL mode until we fix the CUDA bug (probably a few weeks away). Fun times in GPU-land.
Just wanted to give you all some insight on how we spend our days around here and what your hard earned cash goes into when you buy a plugin. You know, just in case you’re under the impression all software developers do is ‘work’ at the beach and drive Ferraris around. We do have fun, but usually it involves nailing the video card of the month to the wall and shooting paintballs at it. ;-)
We here at Digital Anarchy want to make sure you have a wonderful Christmas and there’s no better way to do that than to take videos of family and colleagues and turn them into the Grinch. They’ll love it! Clients, too… although they may not appreciate it as much even if they are the most deserving. So just play it at the office Christmas party as therapy for the staff that has to deal with them.
Our free plugin Ugly Box will make it easy to do! Apply it to the footage, click Make Ugly, and then make them green! This short tutorial shows you how:
You can download the free Ugly Box plugin for After Effects, Premiere Pro, Final Cut Pro, and Avid here:
One of the challenges with stop motion animation is flicker. Lighting varies slightly for any number of reasons causing the exposure of every frame to be slightly different. We were pretty excited when Bix Pix Entertainment bought a bunch of Flicker Free licenses (our deflicker plugin) for Adobe After Effects. They do an amazing kids show for Amazon called Tumble Leaf that’s all stop motion animation. It’s won multiple awards, including an Emmy for best animated preschool show.
Many of us, if not most of us, that do VFX software are wannabe (or just flat out failed ;-) animators. We’re just better at the tech than the art. (exception to the rule: Bob Powell, one of our programmers, who was a TD at Laika and worked on Box Trolls among other things)
So we love stop motion animation. And Bix Pix does an absolutely stellar job with Tumble Leaf. The animation, the detailed set design, the characters… are all off the charts. I’ll let them tell it in their own words (below). But check out the 30 second deflicker example below (view at full screen as the Vimeo compression makes the flicker hard to see). I’ve also embedded their ‘Behind The Scenes’ video at the end of the article. If you like stop motion, you’ll really love the ‘Behind the Scenes’.
Bix Pix Entertainment is an animation studio that specializes in the art of stop-motion animation, and is known for their award-winning show Tumble Leaf on Amazon Prime.
It is not uncommon for an animator to labor for days, sometimes weeks, on a single stop motion shot, working frame by frame. With this process, it is natural to have some light variations between each exposure, commonly referred to as ‘flicker’. There are many factors that can cause the shift in lighting. For instance, a studio light or lights may blow out or solar flare. Voltage and/or power surges can brighten or dim lights over a long shot. Certain types of lights, poor lighting equipment, camera malfunctions, or incorrect camera settings can all contribute. Sometimes an animator might wear a white t-shirt, unintentionally adding fill to the shot, or accidentally stand in front of a light, casting a shadow from his or her body.
The variables are endless. Luckily, these days compositors and VFX artists have fantastic tools to help, and removing unwanted light shifts and flicker is a very important and necessary first step when working with stop-motion footage. Unless, by chance, it’s an artistic decision to leave that tell-tale flicker in there. But that’s rare.
Here at Bix Pix we use Adobe After Effects for all of our compositing and clean-up work. Having used 4 different flicker removal plugins over the years, we have to say Digital Anarchy’s Flicker Free is the fastest, easiest and most effective flicker removal software we have come across. And also quite affordable.
During a season of Tumble Leaf we will process between 1600 and 2000 shots, averaging between 3 seconds and up to a couple minutes in length. That is an average of about 5 hours of footage per season, almost three times the length of a feature film, all on a tight schedule of less than a year with a small team of ten or so VFX artists and compositors. Nearly every shot has an instance of Flicker Free applied to it as an effect. The plugin is so fast, simple to use, and reliable that de-flickering can be done in almost real time.
Digital Anarchy’s Flicker Free has saved us thousands of hours of work and reduced overtime and crunch time delays. This not only saves money but frees up artists to do more elaborate effects that we could not do before due to time constraints, allowing them to focus on making their work stand out even more.
Sharpening video can be a bit trickier than sharpening photos. The process is the same of course… increasing the contrast around edges which creates the perception of sharpness.
However, because you’re dealing with 30fps instead of a single image, some additional challenges are introduced:
1- Noise is more of a problem.
2- Video is frequently compressed more heavily than photos, so compression artifacts can be a serious problem.
3- Oversharpening is a problem with stills or video, but with video it can create motion artifacts on playback that are visually distracting.
4- It’s more difficult to mask out areas like skin that you don’t want sharpened.
These are problems you’ll run into regardless of the sharpening method. However, probably unsurprisingly, in addition to discussing the solutions using regular tools, we do talk about how our Samurai Sharpen plugin can help with them.
Noise in Video Footage
Noise is always a problem regardless of whether you’re shooting stills or video. However, with video the noise changes from frame to frame, making it a distraction to the viewer if there’s too much or it’s too pronounced.
Noise tends to be much more obvious in dark areas, as you can see below where it’s most apparent in the dark, hollow part of the guitar:
Using a mask to protect the darker areas makes it possible to increase the sharpening for the rest of the video frame. Samurai Sharpen has masks built-in, so it’s easy in that plugin, but you can do this manually in any video editor or compositing program by using keying tools, building a mask and compositing effects.
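The dark-area protection idea can be sketched as a simple luminance mask: full sharpening where the frame is bright, none in the shadows, and a ramp in between. Here’s a toy version in plain Python — the threshold values are made up, and Samurai’s built-in mask is more sophisticated than this:

```python
def shadow_protect_mask(luma, low=0.10, high=0.30):
    """Per-pixel sharpening weight in [0, 1]: 0 in deep shadows
    (luma <= low), 1 in bright areas (luma >= high), with a linear
    ramp in between. luma is a flat list of 0..1 values; the
    thresholds are illustrative, not Samurai Sharpen's actual ones."""
    return [min(1.0, max(0.0, (y - low) / (high - low))) for y in luma]

def apply_masked(original, sharpened, mask):
    """Blend per pixel: original where mask == 0, fully sharpened
    where mask == 1. Noisy shadow pixels are left untouched."""
    return [o + m * (s - o) for o, s, m in zip(original, sharpened, mask)]
```

In practice you’d do this per-pixel on the whole frame (e.g. with NumPy or the host app’s keying/masking tools), but the principle is the same: multiplying the sharpening delta by the mask means the dark hollow of the guitar stays unsharpened while the rest of the frame gets the full effect.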
Many consumer video cameras, including GoPros and some drone cameras, heavily compress footage, especially when shooting 4K.
It’s difficult, and sometimes impossible, to sharpen footage like this. The compression artifacts become very pronounced, since they tend to have edges like normal features. Unlike noise, the artifacts are visible in most areas of the footage, although they tend to be more obvious in areas with lots of detail.
In Samurai you can increase the Edge Mask Strength to lessen the impact of sharpening on the artifacts (they’re often low contrast), but depending on how compressed the footage is you may not want to sharpen it at all.
Sharpening is a local contrast adjustment. It’s just looking at significant edges and sharpening those areas. Oversharpening occurs when there’s too much contrast around the edges, resulting in visible halos.
If you look at the guitar strings and frets, you’ll see a dark halo on the outside of the strings, and the strings themselves are almost white with little detail. Way too much contrast/sharpening. The usual solution is to reduce the sharpening amount.
In Samurai Sharpen you can also adjust the strength of the halos independently. So if the sharpening results in only the dark or light side being oversharpened, you can dial back just that side.
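The local-contrast idea behind all of this is the classic unsharp-mask formula: sharpened = original + amount × (original − blurred). A one-dimensional toy version shows exactly where the halos come from (real sharpeners work on 2D frames with a Gaussian blur; the crude box blur here is just for illustration):

```python
def box_blur(signal, radius=1):
    """Crude box blur over a list of values; stands in for the
    Gaussian blur a real sharpening filter would use."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, amount=1.0):
    """sharpened = original + amount * (original - blurred).
    Raising `amount` exaggerates the edge difference, which is
    what produces the dark/light halos discussed above."""
    blurred = box_blur(signal)
    return [o + amount * (o - b) for o, b in zip(signal, blurred)]
```

Run it on a step edge like [0, 0, 1, 1] and the output undershoots below 0 on the dark side and overshoots above 1 on the bright side. Those over/undershoots are the halos — oversharpening is just too much of them, and adjusting the dark and light sides independently (as Samurai allows) means dialing back only the half that overshoots.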
The last thing you usually want to do is sharpen someone’s skin. You don’t want your talent’s skin looking like a dried-up lizard. (well, unless your talent is a lizard. Not uncommon these days with all the ridiculous 3D company mascots)
Especially with 4K and HD, video is already showing more skin detail than most people want (hence the reason for our Beauty Box Video plugin for digital makeup). If you’re using UnSharp Mask you can use the Threshold parameter, or in Samurai the Edge Mask Strength parameter is a more powerful version of that. Both are good ways of protecting the skin from sharpening. The skin area tends to be fairly flat contrast-wise and the Edge Mask generally does a good job of masking the skin areas out.
Either way, you want to keep an eye on the skin areas, unless you want a lizard. (and if so, you should download our free Ugly Box plugin. ;-)
You can sharpen video and most video footage will benefit from some sharpening. However, there are numerous issues that you run into and hopefully this gives you some idea of what you’re up against whether you’re using Samurai Sharpen for Video or something else.
One problem that users can run into with our Flicker Free deflicker plugin is that it will look across edits when analyzing frames for the correct luminance. The plugin looks backwards as well as forwards to gather frames and does a sophisticated blend of all those frames. So even if you create an edit, say to remove an unwanted camera shift or person walking in front of the camera, Flicker Free will still see those frames.
This is particularly a problem with Detect Motion turned OFF.
The way around this is to Nest (i.e. Pre-compose (AE), Compound Clip (FCP)) the edit and apply the plugin to the new sequence. The new sequence will start at the first frame of the edit and Flicker Free won’t be able to see the frames before the edit.
This is NOT something you always have to do. It’s only if the frames before the edit are significantly different than the ones after it (i.e. a completely different scene or some crazy camera movement). 99% of the time it’s not a problem.
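If it helps to picture why nesting works, here’s a toy model of the analysis window (assuming a 21-frame window for illustration; the real plugin’s sampling and blending are far more sophisticated):

```python
def analysis_window(frame, clip_start, clip_end, radius=10):
    """Frames a deflicker pass would sample around `frame`: up to
    `radius` before and after (21 total with radius=10), clamped to
    the clip boundaries. Nesting an edit effectively moves
    clip_start/clip_end to the edit points, so frames outside the
    edit simply no longer exist for the plugin."""
    return [f for f in range(frame - radius, frame + radius + 1)
            if clip_start <= f <= clip_end]

# Un-nested: an edit starting at frame 100 still samples frames
# 90..99 from before the cut, because the plugin sees the whole clip.
print(analysis_window(100, clip_start=0, clip_end=500)[:6])

# Nested: the new sequence starts at the edit, so frames before 100
# are simply not there to be sampled.
print(analysis_window(100, clip_start=100, clip_end=500)[:6])
```

Since each frame’s luminance gets corrected toward a blend of its window, a window that reaches across a cut into a very different scene pollutes that blend — which is exactly what nesting prevents.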
This tutorial shows how to solve the problem in Premiere Pro. The technique works the same in other applications; just replace ‘Nesting’ with whatever your host application calls it (pre-composing, making a compound clip, etc.).
We get a lot of questions about how Beauty Box compares to other filters out there for digital makeup. There are a few things to consider when buying any plugin, and I’ll go over them here. I’m not going to compare Beauty Box with any filter specifically, but when you download the demo plugin and compare it with the results from other filters this is what you should be looking at:
Support
Quality of results
Ease of use
I’ll start with Support because it’s one thing most people don’t consider. We offer support as good as anyone’s in the industry. You can email or call us (415-287-6069), M-F 10am-5pm PST. In addition, we also check email on the weekends and frequently in the evenings on weekdays. Usually you’ll get a response from Tor, our rockstar QA guy, but not infrequently you’ll talk to me as well. It’s not often you get tech support from the guy that designed the software. :-)
Quality of Results
The reason you see Beauty Box used for skin retouching on everything from major tentpole feature films to web commercials is the incredible quality of the digital makeup. Since its release in 2009 as the first plugin to specifically address skin retouching beyond just blurring out skin tones, the quality of the results has been critically acclaimed. We won several awards with version 1.0 and we’ve kept improving it since then. You can see many examples of Beauty Box’s digital makeup here, but we recommend you download the demo plugin and try it yourself.
Things to look for as you compare the results of different plugins:
Skin Texture: Does the skin look realistic? Is some of the pore structure maintained or is everything just blurry? It should, usually, look like regular makeup unless you’re going for a stylized effect.
Skin Color: Is there any change in skin tones?
Temporal Consistency: Does it look the same from frame to frame over time? Are there any noticeable seams where the retouching stops?
Masking: How accurate is the mask of the skin tones? Are there any noticeable seams between skin and non-skin areas? How easy is it to adjust the mask?
Ease of Use
One of the things we strive for with all our plugins is to make it as easy as possible to get great results with very little work on your end. Software should make your life easier.
In most cases, you should be able to click on Analyze Frame, make an adjustment to the Skin Smoothing amount to dial in the look you want and be good to go. There are always going to be times when it requires a bit more work but for basic retouching of video, there’s no easier solution than Beauty Box.
When comparing filters, the thing to look for here is how easy is it to setup the effect and get a good mask of the skin tones? How long does it take and how accurate is it?
If you’ve used Beauty Box for a while, you know that the only complaint we heard about version 1.0 was that it was slow. No more! It’s now fully GPU optimized, and with some of the latest graphics cards you’ll get real time performance, particularly in Premiere Pro. Premiere has added better GPU support, and between that and Beauty Box’s use of the GPU, you can get real time playback of HD pretty easily.
And of course we support many different host apps, which gives you a lot of flexibility in where you can use it. Avid, After Effects, Premiere Pro, Final Cut Pro, Davinci Resolve, Assimilate Scratch, Sony Vegas, and NUKE are all supported.
Hopefully that gives you some things to think about as you’re comparing Beauty Box with other plugins that claim to be as good. All of these things factor into why Beauty Box is so highly regarded and considered to be well worth the price.
Shooting slow motion footage, especially very high speed shots like 240fps or 480fps, results in flicker if you don’t have high quality lights. Stadiums often have low quality industrial lighting, LEDs, or both, resulting in flicker during slow motion shots even on nationally broadcast, high profile sporting events.
I was particularly struck by this watching the NCAA Basketball Tournament this weekend. Seemed like I was seeing flicker on half of the slow motion shots. You can see a few in this video (along with Flicker Free plugin de-flickered versions of the same footage):
The LED lights are most often the problem. They circle the arena and, depending on how bright they are (for example, if the band is turned solid white), they can cast enough light on the players to cause flicker when played back in slow motion. Even if they don’t cast light on the players, they’re visible in the background flickering. Here’s a photo of the lights I’m talking about in Oracle Arena (the white band of light going around the stadium):
While Flicker Free won’t work for live production, it works great for de-flickering this type of flicker if you can render it in a video editing app, as you can see in the original example.
It’s a common problem even for pro sports and high profile sporting events (once you start looking for it, you see it a lot). So if you run into it with your footage, check out the Flicker Free plugin, available for most video editing applications!
Drones are all the rage at the moment, deservedly so as some of the images and footage being shot with them are amazing.
However, one problem that occurs is that if the drone is shooting with the camera at the right angle to the sun, shadows from the props cause flickering in the video footage. This can be a huge problem, making the video unusable. It turns out that our Flicker Free plugin is able to do a good job of removing or significantly reducing this problem. (of course, this forced us to go out and get one. Research, nothing but research!)
Here’s an example video showing exactly what prop flicker is and why it happens:
There are ways around getting the flicker in the first place: Don’t shoot into the sun, have the camera pointing down, etc. However, sometimes you’re not able to shoot with ideal conditions and you end up with flicker.
Our latest tutorial goes over how to solve the prop flicker issue with our Flicker Free plugin. The technique works in After Effects, Final Cut Pro, Avid, Resolve, etc. However the tutorial shows Flicker Free being used in Premiere Pro.
One key way of speeding up the Flicker Free plugin is putting it first in the order of effects. What does this mean? Let’s say you’re using the Lumetri Color Corrector in Premiere. You want to apply Flicker Free first, then apply Lumetri. You’ll see about a 300+% speed increase vs. doing it with Lumetri first. So it looks like this:
Why the Speed Difference?
Flicker Free has to analyze multiple frames to de-flicker the footage you’re using. It looks at up to 21 frames. If you have the effect applied before Flicker Free, it means Lumetri is being applied TWENTY-ONE times for every frame Flicker Free renders. And especially with a slow effect like Lumetri, that will definitely slow everything down.
In fact, on slower machines it can bring Premiere to a grinding halt. Premiere has to render the other effect on 21 frames in order to render just one frame for Flicker Free. In this case, Flicker Free takes up a lot of memory, the other effect can take up a lot of memory, and things start getting ugly fast.
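A back-of-the-envelope model makes the penalty obvious. The millisecond figures below are invented for illustration — only the 21-frame window comes from the behavior described above:

```python
WINDOW = 21  # frames Flicker Free analyzes per output frame

def cost_per_frame(upstream_effect_ms, flicker_free_ms=50):
    """Estimated render time for one output frame when an effect is
    applied BEFORE Flicker Free: the host must render that effect on
    all WINDOW frames the plugin requests. Times are illustrative
    placeholders, not benchmarks."""
    return flicker_free_ms + WINDOW * upstream_effect_ms

# Flicker Free first: no upstream effect to multiply; the other
# effect renders once on the finished frame afterwards.
fast_order = cost_per_frame(0) + 30   # FF first, then a 30 ms effect
slow_order = cost_per_frame(30)       # 30 ms effect first, then FF
print(fast_order, slow_order)         # 80 680
```

The exact ratio depends on the host and the effect, as the real-world numbers in this post show, but the upstream effect’s cost is always multiplied by the window size — which is why Flicker Free should come first.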
Renders with Happy Endings
So to avoid this problem, just apply Flicker Free before any other effects. This goes for pretty much every video editing app. The render penalty will vary depending on the host app and what effect(s) you have applied. For example, using the Fast Color Corrector in Premiere Pro resulted in a slowdown of only about 10% (vs. Lumetri and a slowdown of 320%). In After Effects the slowdown was about 20% with just the Synthetic Aperture color corrector that ships with AE. However, if you add more filters it can get a lot worse.
Either way, you’ll have much happier render times if you put Flicker Free first.
Hopefully this makes some sense. I’ll go into a few technical details for those that are interested. (Feel free to stop reading if it’s clear you just need to put Flicker Free first) (oh, and here are some other ways of speeding up Flicker Free)
With all host applications, Flicker Free, like all plugins, has to request frames through the host application API. With most plugins, like the Beauty Box Video plugin, the plugin only needs to request the current frame. You want to render frame X: Premiere Pro (or Avid, FCP, etc) has to load the frame, render any plugins and then display it. Plugins get rendered in the order you apply them. Fairly straightforward.
The Flicker Free plugin is different. It’s not JUST looking at the current frame. In order to figure out the correct luminance for each pixel (thus removing flicker) it has to look at pixels both before and after the current frame. This means it has to ask the API for up to 21 frames, analyze them, return the result to Premiere, which then finishes rendering the current frame.
So the API says, “Yes, I will do your bidding and get those 21 frames. But first, I must render them!”. And so it does. If there are no plugins applied to them, this is easy. It just hands Flicker Free the 21 original frames and goes on its merry way. If there are plugins applied, the API has to render those on each frame it gives to Flicker Free. FF has to wait around for all 21 frames to be rendered before it can render the current frame. It waits, therefore that means YOU wait. If you need a long coffee break these renders can be great. If not, they are frustrating.
If you use After Effects you may be familiar with pre-comping a layer with effects so that you can use it within a plugin applied to a different layer. This goes through a different portion of the API than when a plugin requests frames programmatically from AE. In the case of a layer in the layer pop-up the plugin just gets the original image with no effects applied. If the plugin actually asks AE for the frame one frame before it, AE has to render it.
One other thing that affects speed behind the scenes… some apps are better at caching frames that plugins ask for than other apps. After Effects does this pretty well, Premiere Pro less so. So this helps AE have faster render times when using Flicker Free and rendering sequentially. If you’re jumping around the timeline then this matters less.
Hopefully this helps you get better render times from Flicker Free. The KEY thing to remember however, is ALWAYS APPLY FLICKER FREE FIRST!
However, many, if not most, of our customers are like Brian Smith, using Beauty Box for corporate clients or local commercials. They might not be winning Emmy awards for their work but they’re still producing great videos with, usually, limited budgets. “The time and budget does not usually afford us the ability to bring in a makeup artist. People that aren’t used to being on camera are often very self-conscious, and they cringe at the thought of every wrinkle or imperfection detracting from their message,” said Brian, Founder of Ideaship Studios in Tulsa, OK. “Beauty Box has become a critical part of our Final Cut X pipeline because it solves a problem, it’s blazing fast, and it helps give my clients and on-camera talent confidence. They are thrilled with the end result, and that leads to more business for us.”
An Essential Tool for Beauty Work and Retouching
Beauty Box Video has become an essential tool at many small production houses or in-house video departments to retouch makeup-less/bad lighting situations and still end up with a great looking production. The ability to quickly retouch skin with an automatic mask without needing to go frame by frame is important. However, it’s usually the quality of retouching that Beauty Box provides that’s the main selling point.
image courtesy of Ideaship Studios
Beauty Box goes beyond just blurring skin tones. We strive to keep the skin texture and not just mush it up. You want the effect of the skin looking like skin, not plastic, which is important for beauty work: taking a few years off the talent and offsetting the harshness that HD/4K and video lights can add to someone. The above image of one of Brian’s clients is a good example.
When viewed at full resolution, the wrinkles are softened but not obliterated. The skin is smoothed but still shows pores. The effect is really that of digital makeup, as if you actually had a makeup artist to begin with. You can see this below in the closeup of the two images. Of course, the video compression in the original already has reduced the detail in the skin, but Beauty Box does a nice job of retaining much of what is there.
“On the above image, we did not shoot her to look her best. The key light was a bit too harsh, creating shadows and bringing out the lines. I applied the Beauty Box Video plugin, and the shots were immediately better by an order of magnitude. This was just after simply applying the plugin. A few minutes of tweaking the mask color range and effects sliders really dialed in a fantastic look. I don’t like the idea of hiding flaws. They are a natural and beautiful part of every person. However, I’ve come to realize that bringing out the true essence of a person or performance is about accentuating, not hiding. Beauty Box is a great tool for doing that.” – Brian Smith
Go for Natural Retouching
Of course, you can go too far with it, as with anything. So some skill and restraint is often needed to get the effect of regular makeup and not make the subject look ‘plastic’ or blurred. As Brian says, you want things to look natural.
However, when used appropriately you can get some amazing results, making for happy clients and easing the concerns of folks that aren’t always in front of a camera. (particularly men, since they tend to not want to wear makeup… and don’t realize how much they need it until they see themselves on a 65″ 4K screen. ;-)
One last tip, you can often easily improve the look of Beauty Box even more by using tracking masks for beauty work, as you can see in the tutorials that link goes to. The ability of these masks to automatically track the points that make up the mask and move them as your subject moves is a huge deal for beauty work. It makes it much easier to isolate an area like a cheek or the forehead, just as a makeup artist would.
First off, the important bit: All the current versions of our plugins are updated for El Capitan and should be working, regardless of host application (After Effects, Premiere Pro, Final Cut Pro, Davinci Resolve, etc). So you can go to our demo page:
And download the most recent version of your plugins.
If you haven’t upgraded to El Capitan, I’ll add to the chorus of people saying… Don’t. Overall we’re disappointed by Apple as it continues its march towards making the Mac work like the iPhone, leaving professional users more and more behind. They’re trying way too hard to make the machines idiot-proof and in the process dumbing down what can be done with them.
One of the latest examples is, of all things, Disk Utility. You can no longer make a RAID with it; you have to use a terminal command. They’ve removed other functionality as well, but for many professional users RAIDs are essential, as is Disk Utility. It’s now been crippled.
Of course, then there’s Final Cut Pro (which has gotten better but still doesn’t feel like a professional app to many people), Photos, which replaced Apple’s pro app Aperture, and the Mac Pro trash can. (Kind of sad that when we need a ‘new’ Mac, we usually buy a 2010–12 12-core Mac Pro; they outperform our D500 trash can.)
Apple isn’t alone in this ‘dumbing down’ trend. Just look at the latest releases of Acrobat (which I’ve heard referred to as the Fisher-Price version) and Lightroom.
Note to application developers: just because we’re doing a lot of things with our phones does not mean we want to do everything on them or have our desktop apps work like phone apps. There’s a difference between simplicity (making the user experience clear and intuitive while retaining the features that make an app powerful) and stupidity, i.e. making the apps idiot-proof.
Anyways, end of rant… I spend a fair amount of time thinking about software usability, since we have to strike that balance between ease of use and power in our own video plugins, and we use the host applications and OS professionally. So this ‘dumbing down’ concerns me both for my personal use and for having to help DA customers navigate new ‘features’ that affect our photo and video plugins.
Chief Executive Anarchist
We have a new set of tutorials up that will show you how to easily create masks and animate them for Beauty Box. This is extremely useful if you want to limit the skin retouching to just certain areas like the cheeks or forehead.
Traditionally this type of work has been the province of feature films and other big-budget productions that had the money and time to hire rotoscopers to create masks frame by frame. New tools built into After Effects and Premiere Pro, or available from third parties for FCP, make this technique accessible to video editors and compositors with much more modest budgets and time constraints.
How Does Retouching Work Traditionally?
In the past someone would have to create a mask on Frame 1 and move forward frame by frame, adjusting the mask on EVERY frame as the actor moved. This was a laborious and time-consuming way of retouching video and film. The idea for Beauty Box came from watching a visual effects artist explain his process for retouching a music video for a high-profile band of 40-somethings. Frame by frame by tedious frame. I thought there had to be an easier way, and a few years later we released Beauty Box.
However, Beauty Box affects the entire image by default. The mask it creates affects all skin areas. This works very well for many uses but if you wanted more subtle retouching… you still had to go frame by frame.
The New Tools!
After Effects and Premiere have some amazing new tools for tracking mask points. You can apply bezier masks that limit the effect of a plugin, like Beauty Box, to just the masked area. The bezier points are ‘tracking’ points, meaning that as the actor moves, the points move with them. It usually works very well, especially for talking-head footage where the talent isn’t moving around a lot. It’s a really impressive feature, and it’s available in both AE and Premiere Pro. Here’s a tutorial detailing how it works in Premiere:
After Effects also ships with Mocha Pro, another great tool for doing this type of work. This tutorial shows how to use Mocha and After Effects to control Beauty Box and get some, uh, ‘creative’ skin retouching effects!
The power of Mocha is also available for Final Cut Pro X. It’s available as a plugin from CoreMelt, and they were kind enough to do a tutorial explaining how SliceX works with Beauty Box within FCP. It’s another very cool plugin; here’s the tutorial:
We’re excited to announce that Beauty Box Video 4.0 is now available for Avid and OpenFX Apps: Davinci Resolve, Assimilate Scratch, Sony Vegas, NUKE, and more. This is in addition to After Effects, Premiere Pro, and Final Cut Pro which were announced in April.
Beauty Box Video 4.0 adds real time rendering to the high quality, automatic skin retouching that Beauty Box is famous for. It’s not only the best retouching plugin available but it’s now one of the fastest, especially on newer graphics cards like the Nvidia GTX 980. We’re seeing real time or near real time performance in Premiere Pro, Resolve, and FCP. Other apps may not see quite that performance but they still get a significant speed increase over what was possible in Beauty Box 3.0.
Easily being able to retouch video is becoming increasingly important. HD is everywhere and 4K is widely available, letting viewers see more detail in closeups of talent than ever before. That makes skin and makeup problems much more visible, and being able to apply digital makeup easily is critical for high-quality productions.
You can also incorporate masks to limit the retouching to just certain areas like cheeks or the talent’s forehead. (as can be seen in this tutorial using Premiere Pro’s tracking masks)
So head over to digitalanarchy.com for more info and to download a free trial and free tutorials on how to get started and more advanced topics. You’ll be blown away by the ease of use, high quality retouching, and now… speed!
As many of you know, we’ve come out with a real-time version of Beauty Box Video. For that to work, it requires a really fast GPU, and we LOVE the GTX 980 (amazing price/performance). Nvidia cards are generally fastest for video apps (Premiere, After Effects, Final Cut Pro, Resolve, etc.), but we are also seeing real-time performance on the higher-end new Mac Pros (or trash cans, dilithium crystals, Jobs’s urn, or whatever you want to call them).
BUT what if you have an older Mac Pro?
With the newer versions of Mac OS (10.10), in theory, you can put any Nvidia card in them and it should work. Since we have lots of video cards lying around that we’re testing, we wondered if our GTX 980, Titan and Quadro 5200 would work in our Early 2009 Mac Pro. The answer is…
So, how does it work? For one, you need to be running Yosemite (Mac OS X 10.10).
The GTX 980 is the easier of the two GeForce cards to install, mainly because of the power needed to drive it. It needs only two six-pin connectors, so you can use the power supply built into the Mac. Usually you’ll need to buy an extra six-pin cable, as the Mac only comes standard with one, but that’s easy enough. The Quadro 5200 has only a single six-pin connector and works well. However, for a single offline workstation, it’s tough to justify the higher price for the extra reliability the Quadros give you (and it’s not as fast as the 980).
The tricky bit about the 980 is that you need to install Nvidia’s web driver. The 980 did not boot up with the default Mac OS driver, even in Yosemite. At least, that’s what happened for us. We have heard reports of it working with the default driver, but I’m not sure how common that is. So you need to install the Nvidia Driver Manager System Pref and, while still using a different video card, set the System Pref to the Web Driver, like so:
Install those, set it to Web Driver, install the 980, and you should be good to go.
What about the Titan or other more powerful cards?
There is one small problem… the Mac Pro’s power supply isn’t powerful enough to handle the card and doesn’t have the connectors. The Mac can provide two six-pin power connectors, but the Titan and other top-of-the-line cards require a six-pin and an eight-pin, or even two eight-pin connectors. REMINDER: The GTX 980 and Quadro do NOT need extra power. This is only for cards with an eight-pin connector.
The solution is to buy a bigger power supply and let it sit outside the Mac with the power cables running through the expansion opening in the back.
As long as the power supply is plugged into a grounded outlet, there’s no problem with it being external. I used an EVGA 850W power supply, but I think the 600W would do. The nice thing about these is they come with long cables (about two feet or so) that will reach inside the case to the Nvidia card’s power connectors.
One thing you’ll need to do is plug the ‘test’ connector (it comes with the unit) into the external power supply’s motherboard connector. The power supply won’t power on unless you do this.
Otherwise, it should work great! These are very powerful cards and definitely add punch to the Mac Pros. With this setup we had Beauty Box running at about 25fps in Premiere Pro (AE and Final Cut are a bit slower). Not bad for a five-year-old computer, but not real time in this case. On newer machines with the GTX 980 you should get real-time playback. It really is a great card for the price.
All of our current plugins have been updated to work with After Effects and Premiere Pro in Creative Cloud 2015. That means Beauty Box Video 4.0.1 and Flicker Free 1.1 are up to date and should work no problem.
What if I have an older plugin like Beauty Box 3.0.9? Do I have to pay for the upgrade?
Yes, you probably need to upgrade, and it is a paid upgrade. After Effects changed the way it renders, and Premiere Pro changed how it handles GPU plugins (of which Beauty Box is one). The key word here is probably. Our experience so far has been mixed: sometimes the plugins work, sometimes not.
– Premiere Pro: Beauty Box 3.0.9 seems to have trouble in Premiere if it’s using the GPU. If you turn ‘UseGPU’ off (at the bottom of the Beauty Box parameter list), it seems to work fine, albeit much slower. Premiere Pro did not implement the same re-design that After Effects did, but it did add an API specifically for GPU plugins. So if a plugin doesn’t use the GPU, it should work fine in Premiere. If it uses the GPU, maybe it works, maybe not. Beauty Box, it seems, does not.
– After Effects: Legacy plugins _should_ work but will slow AE down somewhat. In the case of Beauty Box, it seems to work OK, but we have seen some problems. So the bottom line is: try it out in CC 2015. If it works fine, you’re good to go. If not, you need to upgrade. We are not officially supporting 3.0.9 in Creative Cloud 2015.
– The upgrade from 3.0 is $69 and can be purchased HERE.
– The upgrade from 1.0/2.0 is $99 and can be purchased HERE.
The bottom line is: try out the older plugins in CC 2015. It’s not a given that they won’t work, even though Adobe is telling everyone they need to update. It is true that you will most likely need to update the plugins for CC 2015, so their advice isn’t bad. However, before paying for upgrades, load the plugins and see how they behave. They might work fine. Of course, Beauty Box 4 is super fast in both Premiere and After Effects, so you might want to upgrade anyways. :-)
We do our best not to force users into upgrades, but since Adobe has rejiggered everything, only the current releases of our products will be rejiggered in turn.
It’s been almost 4 years since the last update of FCP 7. The last officially supported OS was 10.6.8. It’s time to move on, people.
Beauty Box Video 4.0 (due out in a month) will be our first product that does not officially support FCP 7.
It’s a great video editor, but Apple makes it very hard to support older software, especially if you’re trying to run it on newer systems. If FCP 7 is a mission-critical app for you, you’re taking a pretty big risk by trying to keep it grinding along. We started seeing a lot of weird behaviors with it on 10.9. I realize people are running it successfully on the new systems, but we feel there are a lot of cracks beneath the surface. Those are only going to get more pronounced with newer OSes.
I know people love their software (hell, there are still people using Media 100), but Premiere Pro, Avid, and even FCP X are all solid alternatives at this point. Those of us who develop software and hardware can’t support stuff that Apple threw under the bus three and a half years ago.
We will continue to support people using Beauty Box 3.0 with FCP 7 on older systems (10.8 and below), but we can’t keep supporting it when the problems we’d be fixing are most likely caused not by our software but by old FCP code breaking on new systems.
What causes Final Cut Pro X to re-render? If you’ve ever wondered why sometimes the orange ‘unrendered’ bar shows up when you make a change and sometimes it doesn’t… I explain it all here. This is something that will be valuable to any FCP user but can be of the utmost importance if you’re rendering Beauty Box, our plugin for doing skin retouching and beauty work on HD/4K video. (Actually we’re hard at work making Beauty Box a LOT faster, so look for an announcement soon!)
Currently, if you’ve applied Beauty Box to a long clip, say 60 minutes, you can be looking at serious render times (this can happen with any non-realtime effect): possibly twelve hours or so on slower computers and video cards. (It can also be a few hours; it just depends on how fast everything is.)
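Those estimates are easy to sanity-check yourself. Here's a rough sketch of the arithmetic; the frame rates and per-frame render speeds below are illustrative assumptions, not measured numbers. The total render time is just the clip's frame count divided by how many frames per second your machine can churn through:

```python
# Back-of-the-envelope render time for a non-realtime effect.
# Illustrative numbers only: clip frame rate and render speed
# vary widely with hardware and host application.

def render_hours(clip_minutes, clip_fps, render_fps):
    """Total frames divided by effective render speed, in hours."""
    total_frames = clip_minutes * 60 * clip_fps
    return total_frames / render_fps / 3600

# A 60-minute clip at 30 fps is 108,000 frames.
# Rendering at 2.5 fps takes 12 hours; at 10 fps, 3 hours.
print(render_hours(60, 30, 2.5))  # 12.0
print(render_hours(60, 30, 10))   # 3.0
```

Which is why having to re-render an hour-long timeline, as in the story below, can cost you a working day.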
Recently we had a user in exactly that situation. They had a logo in .png format sitting on top of the entire video, used as a bug. They rendered everything out to deliver it, but, of course, the client wanted the bug moved slightly. This caused Final Cut Pro to re-render EVERYTHING, meaning the really long Beauty Box render needed to happen again as well. Unfortunately, this is just the way Final Cut Pro works.
Why does it work that way and what can be done about it?
Stephen Smith, a long-time videographer, used a recent trip to Italy as an opportunity to hone his time-lapse skills. The result is a compilation of terrific time-lapse sequences from all over Italy.
He used Flicker Free to deflicker the videos, Premiere Pro and After Effects for editing, and DaVinci Resolve for color correction. It’s a great example of how easily Flicker Free fits into pretty much any workflow and produces great results.
Since he was traveling with his wife, the shoots gave her a chance to explore the areas more thoroughly while he worked. This is not always the case: significant others are not always thrilled to be stuck in one place for an hour while you stand around watching your camera take pictures!
Although, he said it did give him an opportunity to watch how aggressive the street vendors were and to meet other folks.
We’re happy he gave us a heads up about the video, which is on Vimeo, or you can see it below. Of course, we’re thrilled he used Flicker Free on it as well. :-)
It’s always cool to see folks posting how they’ve used Beauty Box Video. One of the most common uses is music videos, including for many top artists. Most performers are a little shy about letting it be known they need retouching, so we get pretty excited when something does get posted (even if we don’t know the performer). Daniel Schweinert just posted this YouTube video and blog post breaking down his use of Beauty Box Video (and Mocha) for a music video in After Effects. Pretty cool stuff!