Category Archives: Art & Technology

Posts about art, technology, or both, particularly as they relate to video production, slow motion, time lapse, and other topics.

Interview: The importance of transcripts in documentary filmmaking

Green Screen shoot for the interview with the sons of Bakersfield Sound legend Bill Woods. L to R: Tammie Barbee, Glenda Rankin (Producer), Jim Woods, Bill Woods, Jr., Dianne Sharman. Hidden by microphone, unknown.

 

The struggle of making documentary films nowadays is real. Competition is high, and budget limitations can stretch a 6-year deadline into a 10-year-long production. To make a movie you need money. To get the money you need decent, and sometimes edited, footage to show to funding organizations and production companies. And decent footage, well-recorded audio, and edited pieces all cost money to produce. I’ve been facing this problem myself and discovered through my work at Digital Anarchy that an automated tool to transcribe footage can be instrumental in making small, low-budget documentary films happen.

In this interview, I talked to filmmaker Chuck Barbee to learn how Transcriptive is helping him edit faster and to pick up some tips on getting started with the plugin. Barbee has been in the film and TV business for over 50 years. In 2005, after an impressive career on the commercial side of the business, he moved to California’s Southern Sierras and began producing a series of personal “passion” documentary films. His projects are very heavy on interviews, and the transcription process he had used throughout his career was no longer effective for managing his productions.

Barbee has been using Transcriptive for a month, but already considers the plugin a game-changer. Read on to learn how he is using it to make a long-form documentary about the people who created what is known as “The Bakersfield Sound” in country music.

Chuck Barbee in his editing suite. A scene from his documentary project, “Wild West Country,” is on the large screen.

 

DA: You have worked on a wide variety of productions throughout your career. Besides co-producing, directing, and editing prime-time network specials and series for Lee Mendelson Productions, you also worked as Director of Photography for several independent feature films. In your opinion, how important is the use of transcripts in the editing process?

CB: Transcripts are essential for editing long-form productions because they allow producers, editors, and directors to go through the footage, get familiar with the content, and choose the best bits as a team. Although interview-oriented pieces are more dependent on transcribed content, I truly believe transcripts are helpful no matter what type of motion picture production you are making.

On most of my projects, we always made cassette tape copies of the interviews, then had someone manually transcribe them and print hard copies.  With film projects, there was never any way to have a time reference in the transcripts, unless you wanted to do that manually. Then with video, it was easier to make time-coded transcripts, but both of these methods were time-consuming and relatively expensive labor-wise. This is the method I’d used since the late ’60s, but the sheer volume of interviews on my current projects and the awareness that something better probably exists with today’s technology prompted me to start looking for automated transcription solutions. That’s when I found Transcriptive.

DA: And what changed now that you are using Artificial Intelligence to transcribe your filmed interviews in Premiere Pro?

CB: I think Transcriptive is a wonderful piece of software.  Of course, it is only as good as the diction of the speaker and the clarity of the recording, but the way the whole system works is perfect.  I place an interview on the editing timeline, click transcribe and in about 1/3 of the time of the interview I have a digital file of the transcription, with time code references.  We can then go through it, highlighting sections we want, or print a hard copy and do the same thing. Then we can open the digital version of the file in Premiere, scroll to the sections that have been highlighted, either in the digital file or the hard copy, click on a word or phrase and then immediately be at that place in the interview.  It is a huge time saver and a game-changer.

The workflow has been simplified quite a bit, the transcription costs are down, and the editing process has sped up because we can search and highlight content inside of Premiere or use the transcripts to make paper copies. Our producers prefer to work from a paper copy of the interviews, so we use that TXT or RTF file to make a hard copy. However, Transcriptive can also help to reduce the number of printed materials if a team wants to do all the work digitally, which can be very effective. 

Transcriptive panel open in Premiere, showing the transcript of an interview with Tommy Hays, one of the original musicians who helped to create the Bakersfield Sound. Now in his 80s, Tommy continues to perform regularly in the Bakersfield area, including venues such as Buck Owens’ “Crystal Palace”.

 

DA: What makes you choose between highlighting content in the panel and using printed transcripts? Are there situations where one option works better than the other?

CB: It really depends on producer/editor choices.  Some producers might want a hard copy because they prefer that to working on a computer.  It really doesn’t matter much from an editor’s point of view, because it is no problem to scroll through the text in Transcriptive to find the spots that have been highlighted on the hard copy.  All you have to do is look at the timecode next to the highlighted parts of a hard copy and then scroll to that spot in Transcriptive. Highlighting in Transcriptive means you are tying up a workstation, with Premiere, to do that.  If you only have one editing workstation running Premiere, then it makes more sense to have someone do the highlighting with a printed hard copy, or on a laptop or any other computer which isn’t running Premiere.

DA: You mentioned the AI transcription is not perfect, but you would still prefer it to paying for human transcripts or transcribing the interviews yourself. Why do you think the automated transcripts are a better solution for your projects?

CB: Transcriptive is amazingly accurate, but it is also quite “literal” and will transcribe what it hears.  For example, if someone named “Artie” pronounces his name “RD”, that’s what you’ll get. Also, many of our subjects have moderate to heavy accents and that does affect accuracy.  Another thing I have noticed is that, when there is a clear difference between the sound of the subject and the interviewer, Transcriptive separates them quite nicely.  However, when they sound alike, it can confuse them. When multiple voices speak simultaneously, Transcriptive also has trouble, but so would a human.

My team needs very accurate transcripts because we want to be able to search through 70 or more transcripts, looking for keywords that are important. Still, we don’t find the transcription mistakes to be a problem. Even if you have to go through the interview when it comes back to make corrections, it is far simpler and faster than the manual method and cheaper than the human option.  Here’s what we do: right after the transcripts are processed, we go through each transcript with the interviews playing along in sync, making corrections to spelling or phrasing or whatever, especially with keywords such as names of people, places, themes, etc. It doesn’t take too much time, and my tip is to do it right after the transcripts are back, while you are watching the footage to become familiar with the content.

Chuck Barbee shooting interview with Tommy Hays at the Kern County Museum.

DA: Many companies are afraid of incorporating Transcriptive into an ongoing project workflow. What was it like to use our transcription plugin on a long-form documentary film right away?

CB: We have about 70 interviews of anywhere from 30 minutes to one hour each.  It is a low-budget project, being done by a non-profit called “Citizens Preserving History”. Because of budget limitations, the producers were originally going to use time-code-window DVD copies of the interviews to make notes about which parts to use. They thought the cost of manually typed transcriptions was too much.  But as they got into the process they began to see that typed transcripts were going to be the only way to go. Once we learned about Transcriptive and installed it, it only took a couple of days to do all 70 interviews, and the cost, at 12 cents per minute, is small compared to manual methods.

Transcriptive is very easy to use and it honestly took almost no time for me to figure out the workflow.  The downloading and installation process was simple and direct, and the tech support at Digital Anarchy is awesome.  I’ve had several technical questions and my phone calls and emails have been answered promptly, by cheerful, knowledgeable people who speak my language clearly and really know what they are doing. They can certainly help quickly if people feel lost or something goes wrong, so I would say do yourself a favor and use Transcriptive in your project!

Here’s a short version of the opening tease for “The Town That Wouldn’t Die”, Episode III of Barbee’s documentary series:

https://www.youtube.com/embed/Py19MFCBvk0

More about Chuck Barbee’s work: https://www.barbeefilm.com

To learn more about Transcriptive and download a Free Trial license visit https://digitalanarchy.com/transcribe-video/transcriptive.html. Questions? Get in touch with carla@digitalanarchy.com.

 

The Role of Concept Art in Film And Games

I recently attended E3, the industry conference for all things games, and while the games, booths, and general spectacle are always cool, one of my favorite parts of the show is a quiet, out-of-the-way corner with the Into The Pixel gallery of game concept art.

I’ve always thought concept artists don’t get the recognition they deserve, both in film and especially in games. They play an important role in defining the look and feel of the final film or game. And much of the art is truly beautiful. It’s much faster/cheaper to do a series of sketches and then paintings to create the look, than to build a set (even a virtual one) and make endless changes to that.

We always talk about developers and 3D artists, but often forget that the beginning of the creation process often starts with pen, ink, and digital paint. Here’s a few images (top to bottom: God of War, Jose Cabrera; Control, Oliver Odmark; APOC, Krist Miha) from Into The Pixel (click the link to see more):


My first exposure to concept art was back when I was about 10 and a complete Star Wars nut. Star Wars had just come out and one of the things I purchased (ok, my parents purchased) was a portfolio of reproductions of Ralph McQuarrie’s concept art. It was fascinating to see what the initial ideas were, what changed, and what remained the same. It’s truly one of the best pieces of Star Wars memorabilia that I own. And, even now, the art is still fabulous.

As George Lucas himself said of Ralph, “Ralph was the first person I hired to help me envision Star Wars. His genial contribution, in the form of unequaled production paintings, propelled and inspired all of the cast and crew of the original Star Wars trilogy. It’s really a testament to how important he was that there’s such a connection between a lot of those iconic images and the movie scenes. The way he illustrated them were an influence on those characters and how they acted. When words could not convey my ideas, I could always point to one of Ralph’s illustrations and say ‘Do it like this.’”

I think my favorite ones are still from A New Hope. Many of these were done prior to Lucas pitching Fox on the movie, so they show what some of the ideas were when there was only a rough script. They were needed to convey his vision for the Star Wars universe to people who had _no idea_ what he was going on about. And they turned out to be critical in Fox’s decision to green-light the film.

All in all, a pretty big testament to the importance of concept artists.

Some of his images are below, so check them out.  More can be found on starwars.com and elsewhere on the interwebs. If you know of a concept artist that does great work, please feel free to put their name and website in the comments below.

Ralph McQuarrie's Star Wars Art

Artificial Intelligence Gone Bad

There are plenty of horrible things A.I. might be able to do in the future. And this MIT article lists six potential problem areas in the very near future, which are legit to varying degrees. (Although, this is more a list of humans behaving badly than A.I. per se)

However, most people don’t realize exactly how rudimentary (i.e. dumb) A.I. is in its current state. This is part of the problem with the MIT list.  The technology is prone to biases, false positives, difficulty with simple situations, etc., etc.  The problem is more humans trying to make use of and/or make critical decisions based on immature technology.

For those of us that work with it regularly, we see all the limitations on a daily basis, so the idea of A.I. taking over the world is a bit laughable. In fact,  you can see it daily yourself on your phone.

Take the auto-suggest feature on the iPhone. You would think the Natural Language Processing could take a phrase like ‘Glad you’re feeling b…’ and suggest things like better, beautiful or whatever. Not so hard, right?

Er, no.

When artificial intelligence can't handle basic things

How often does ‘glad’, ‘feeling’ and ‘bad’ appear in the same sentence? And you want to let A.I. drive your car?

We’ve got a ways to go.

Unless, of course, it’s a human problem again and there are a bunch of asshats out there that are glad you’re feeling bad. Oh, wait… it’s the internet. Right.

Depression, Suicide and Being A Creative

While there’s less stigma attached to depression than there used to be, it’s still not always accepted or people have a hard time understanding it.

Many creatives, probably more than you think, struggle with depression.

In the last six months I’ve talked to a lot of people that don’t understand what chronic depression is like. This includes giving a talk at the USC film school to graduate and undergraduate students about being a creative and dealing with depression (Thanks Norman Hollyn!). I attended a funeral for a friend who committed suicide about six months ago, and last week an uncle of a co-worker killed himself. Even at my friend’s funeral, someone giving a speech said, ‘he was bi-polar, but it wasn’t like he was depressed and down-and-out’. As if being depressed and acting like a derelict were the same thing.

 

This blog post is:

1) an attempt to give folks that don’t deal with chronic depression a better understanding of it, how it manifests and, maybe, what to do about it (both as a sufferer and someone that cares about someone suffering).

2)  I know that many people who identify as ‘creative’ struggle with similar issues and I want you to know you are not alone. It’s a lonely disease, we isolate ourselves and feel isolated by it. Nevertheless, you are not alone.

And 3)  I want to start the discussion both for those suffering and those trying to understand and help those suffering. It doesn’t help anyone to not talk about it. Let’s de-stigmatize it.

 

My Struggle

I’ve struggled with depression and suicidal thoughts for almost 40 years, since my early teens.  Please realize this post is talking from my own experience, what I’ve learned from therapists and what’s worked for me. I’m not a therapist. If you suffer from depression it’s usually very beneficial to see a therapist or psychologist. It’s really important you have help. I also encourage those of you who are therapists, or if you have struggled with depression to talk about your experiences and what’s been helpful (or not) for you. Please post in the comments!

Let’s start off by attempting to talk about what it’s like to be depressed. Or at least how it manifests for me. Everyone is different but my experience can give you some insight into the disease.

On a daily basis, as I have for almost as long as I can remember, I have a voice inside me telling me I’m worthless, unloveable, and that life is not worth living. All the time. Most of the time, that voice is just barely audible background noise, easily dismissed. But on some days it’s the sound and fury of a hurricane. On those days suicide becomes a tangible thing. I’ll talk more about that in a moment.

The rest of the time, dismissing the voice takes time and energy. It can suck the joy out of successes and it magnifies failures. It is a weight that I constantly struggle against. This is despite the fact that I have what most people would consider a pretty good life.

I’m fully aware I’m blessed… I run a successful company that I started, I have much love and support around me, a good partner. And yet…

The awareness that I have so much to be grateful for often makes it harder. On top of the depression, guilt and shame are piled on for knowing that I have all these good things yet I’m still depressed. The depression becomes like teflon. Rationally I’m aware of the love and support around me. I know such things exist. But they roll off the darkness like beads of water, unable to be absorbed to the depths where they would help. The feelings can’t be internalized.

I know I SHOULD be grateful but I can’t manifest it. Which just increases the frustration and pain.

I realize all this sounds pretty bleak. Probably bleaker than it actually is a lot of the time.  Remember that often the thoughts are mostly background noise. They definitely have a bit of a dampening effect but I can still feel happy or joyful or neutral or whatever. I don’t usually have a problem moving through the world like everyone else. That said, on the bad days, the above description doesn’t come close to capturing the depths of the darkness. How dark the thoughts have to be to make suicide a viable option. But it can get there.

 

So what should you do?

If you want to help someone that’s deeply depressed, perhaps even suicidal, you have to meet the person where they’re at, NOT where you want them to be. Even if they say they’re suicidal. Accept that depression is an illness and hear them out. LISTEN to them. Acknowledge what they are feeling. Make them feel heard. Make them feel loved… by listening, by asking gentle questions (how did that make you feel? Why do you think it affected you like that? Is there anything that would make it better?, etc.), by making time for them, by being non-judgemental. Let them tell their story. But also be part of the conversation. Don’t just ruminate with them. Try to move the conversation forward.

However, it may be hard to get them to engage. Realize that there’s a lot of non-verbal things happening… Depression is more, and perhaps much more, something you feel in your body than something that’s in your head. So hugs without words are sometimes the best things. Offer to go out and get them their favorite food or bring them soup. Of course, you can just ask them what they need.

You’re not going to solve it. All you can do is support them in solving it for themselves.

If they are suicidal, you need to accept the fact that suicide is a viable option. Just because you don’t want it to happen doesn’t mean it can’t or won’t happen. If someone believes suicide is an option and you tell them that it’s not, you’re making it more likely. You’re invalidating their opinion, invalidating what they’re feeling. By doing so you’re confirming that they mean nothing. And, again, be careful about how you tell them what they have to live for.  They are probably very well aware of the things that they _should_ feel grateful for.

In truth, if you suspect someone is depressed you should consult a therapist. I am not a therapist. I’m just relating my own struggle with chronic depression, and every person’s struggle is different. Everyone’s reasons for being depressed are different… in many cases, it’s not chronic but event driven (a divorce, death, getting fired, etc.). Listening is always a good strategy but a therapist will be able to offer better advice for the exact situation.

The other thing to know is that often those of us that have dealt with depression for a long time are good at putting a brave face on it. It may not be obvious we’re depressed. Which is why suicide often comes as a shock. Just because outwardly someone is successful and seems to have it together doesn’t mean they aren’t suffering and struggling underneath it all. In a lot of cases, it’s up to the depressed person to realize they are not alone and that they can get help.

If YOU struggle with depression…

This is a lonely and difficult struggle. Particularly when you’re younger and you’re still learning what it is and what might help but it’s difficult at any age. You have to find the strength of will to pull yourself out of it enough to either help yourself or reach out and take the hands of those offering to help.

As mentioned, see a therapist or psychologist. It really does help to talk things out. Often a therapist can help you see things and patterns you can’t see for yourself.

One of the important things is to get out of the house. If you can at least find the strength to go be depressed in a park, a makerspace, gym, mall, whatever… you’ll find it helps. Go somewhere and do something you enjoy. Especially if you can connect with a friend, but I’ve found just being in a place where there are other people helps. If being away from people works better for you, at least try not to just stay in bed or on the couch. Take a walk in a secluded park or something.

Connect with people. Even though it seems like no one cares, you’ll find if you reach out, you have friends who do care and will help.

There are other things that can help as well. They tend to be somewhat different for each person but it’s important to find what those things are. For some people it’s art or music or just sitting in the sun. Meditation can also be a form of therapy, especially with a good teacher.

I think many creatives forget why they started doing art in the first place. Make sure you’re creating art outside of your job. Doing art you love just for the sake of the art. It can be a huge outlet and expression of what you’re feeling. It really is important to make time for it.

For myself, exercise, particularly yoga these days, has always been the best anti-depressant. However, as I’ve gotten older and injuries have become more frequent, I’ve come to rely on anti-depressant medications a bit more. Getting injured is a double whammy… I get depressed about not being able to do something I love doing and, at the same time, my main coping mechanism for dealing with depression is taken away.

Medications are a mixed bag. Not all of them work and some can actually make things worse. So it’s important to monitor your state of mind when you initially start taking them. If a medication makes you feel worse, stop immediately and consult your psychiatrist. You may have to try a few different ones to find what works for you. In my case, after much resistance, I was finally convinced to start taking Cymbalta regularly (a next-generation Prozac-like drug). It’s actually been quite helpful. Who knew?

 

There is no easy answer.

What I’ve said here is meant to help and guide folks. However, it’s mostly based on my personal experience. It is not the be-all, end-all. If you have other insights, please share them in the comments. I would love to hear other things that have worked for other people. We’re all different; men sometimes have different challenges than women, as do different age groups, etc., etc. There is not one solution.

Whatever the solution is, it requires work.

But it can’t hurt to talk about it and realize we’re not alone. To know that it’s ok to be depressed. It happens. It’s an illness and needs to be treated as such. If it’s chronic, then it comes and goes. Sometimes stronger, sometimes less so. By exploring meditation, seeing a therapist, taking medication or whatever works for you, hopefully we learn how to deal with it better over time. But even after almost 40 years and all the above things I’ve talked about… I still have incredibly dark days. I still have a voice that says I’m worthless and wants to drag me down. For myself and many people, this doesn’t just disappear.

As one of my therapists said… it’s like driving a bus. Those parts of you, those passengers, are on the bus whether you like it or not. At some point you have to accept the passengers. Once you accept them, you realize they are part of you, but they AREN’T you. They don’t define you. (it’s not easy to get to that realization and some days, you’re still going to believe that voice. It happens.)

So let’s talk. Be open about our experiences, what’s helpful, what’s not. Hopefully we can further de-stigmatize depression and make everyone realize that sometimes asking for help is the most courageous thing you’ll ever do.

 

Downloading The Captions Facebook or YouTube Creates

So you’ve uploaded your video to Facebook or YouTube and you’d like to import the captions they automatically generate with Artificial Intelligence into Transcriptive. This can be a good, FREE way of getting a transcript.

Transcriptive imports SRT files, so… all you need is an SRT file from those services. That’s easy peasy with YouTube: just go to the Captions section and choose Download > SRT.

Screenshot of where to download an SRT file of YouTube captions.

Download the SRT and you’re done. Import the SRT into Transcriptive with ‘Combine Lines into Paragraphs’ turned on… Easy, free transcription.
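If you’d rather script the YouTube side of this, here’s a minimal sketch using the yt-dlp Python package (my choice of tool, not something the post covers — the video URL is a placeholder):

```python
import yt_dlp

# Hypothetical URL -- replace with your own video.
url = "https://www.youtube.com/watch?v=VIDEO_ID"

opts = {
    "skip_download": True,      # we only want the caption file, not the video
    "writeautomaticsub": True,  # grab the auto-generated captions
    "subtitleslangs": ["en"],
    "subtitlesformat": "srt",   # YouTube may only offer vtt; convert if needed
}

with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download([url])
```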

With Facebook it’s more difficult as they don’t let you just download an SRT file. Or any file for that matter. So you need to get tricky.

Open Facebook in Firefox and go to Web Developer > Network. This will open the inspector at the bottom of your browser window.

Firefox’s web developer tool, the Network tab.

You’ll get something that looks like this:

Using the Network tab to get a Facebook caption file.

Go to the Facebook video you want to get the caption file for.

Once the video starts playing, type SRT into the Filter field (as shown above)

This _should_ show an XHR file. (we’ve seen instances where it doesn’t, not sure why. So this might not work for every video)

Right Click on it, select Copy>Copy URL (as shown above)

Open a new Tab and paste in the URL.

You should now be asked to download a file. Save this as an SRT file (e.g. MyVideo.srt).
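Alternatively, once you’ve copied the URL, you can skip the new tab and grab the file with a couple lines of Python — a minimal sketch, with the URL as a placeholder for whatever you copied out of the Network tab:

```python
import requests

# Placeholder -- paste the XHR URL you copied from Firefox's Network tab.
srt_url = "https://video.xx.fbcdn.net/your-copied-url"

resp = requests.get(srt_url, timeout=30)
resp.raise_for_status()  # fail loudly if Facebook refuses the request

with open("MyVideo.srt", "wb") as f:
    f.write(resp.content)
```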

Import the SRT into Transcriptive with ‘Combine Lines into Paragraphs’ turned on… Easy, free transcription.

So that’s it. This worked as of this writing. It’s entirely possible Facebook will make a change at some point preventing this, but for now, it’s a good way of getting free transcriptions.

You can also do this in other browsers, I’m just using Firefox as an example.

Re-discovering Fractals with Frax

My first software job was with MetaTools doing Quality Assurance (where I was KPTJim, if you were online in those days). They made Kai’s Power Tools Photoshop plugins (KPT), Bryce, Final Effects After Effects plugins, Goo, and a lot of other cool graphics software.  Texture Anarchy, a Photoshop plugin we give away for free these days, was directly inspired by a KPT plugin called Texture Explorer. Also part of KPT was something called Fractal Explorer.

I was a huge fan of KPT (which was part of the reason I applied for, and got, the job) and particularly Fractal Explorer. I’d spend a lot of time just fiddling with it, exploring how to make amazing graphics with mathematics. Then, after finding something I liked, I’d let my Mac Quadra 650 spend all night rendering it.

The original KPT Fractal Explorer, circa 1993.

My love of creating graphics algorithmically shows up in a lot of early Digital Anarchy plugins for After Effects and Photoshop. Not so much these days since we’re more focused on video editing than graphics creation. Perhaps because of what we’re focused on now, I almost forgot how much I love playing with graphics and fractals in particular.

I rediscovered that when I accidentally came across Frax. This is an iPad app created by Kai (of KPT fame) and Ben Weiss, who was one of the lead engineers at MetaTools and responsible for a lot of the code behind KPT.

OMG. It’s f’ing fun (at least for someone that likes to geek out on fractals).  Amazingly fast and the pro version (all of $9) has a ton of control. Really a fantastic app. I still have no idea how you could use the below images in the real world. But it’s fun and the images are beautiful.

The only thing I wish they’d let you do is edit the gradients. There are like 250 of them, but fractals can be very sensitive to where colors show up, and being able to change the gradient would be really helpful. But that’s a minor complaint. Otherwise I highly recommend shelling out the $9 and losing yourself in a fractal exploration. Some of my own explorations are below…

Images created by Jim Tierney with Frax.

Using A.I. to Create Music with Ampermusic and Jukedeck

For the last 14 years I’ve created the Audio Art Tour for Burning Man. It’s kind of a docent-led audio guide to the major art installations out there, similar to an audio guide you might get at a museum.

Burning Man always has a different ‘theme’ and this year it was ‘I, Robot’. I generally try to find background music related to the theme. EDM is big at Burning Man, land of 10,000 DJs, so I could’ve just grabbed some electronic tracks that sounded robotic. Easy enough to do. However, I decided to let Artificial Intelligence algorithms create the music! (You can listen to the tour and hear the different tracks)

This turned out to be not so easy, so I’ll break down what I had to do to get seven unique sounding, usable tracks. I had a bit more success with AmperMusic, which is also currently free (unlike Jukedeck), so I’ll discuss that first.

Getting the Tracks

The problem with both services was getting unique-sounding tracks. The A.I. has a tendency to create very similar-sounding music. Even if you select different styles and instruments you often end up with oddly similar music. This problem is compounded by Amper’s inability to render more than about 30 seconds of music.

Using Artificial Intelligence and machine learning to create music

What I found I had to do was let it generate 30 seconds randomly or with me selecting the instruments. I did this repeatedly until I got a 30 second sample I liked. At which point I extended it out to about 3 or 4 minutes and turned off all the instruments but two or three. Amper was usually able to render that out. Then I’d turn off those instruments and turn back on another three. Then render that. Rinse, repeat until you’ve rendered all the instruments.

Now you’ve got a bunch of individual tracks that you can combine to get your final music track. Combine them in Audition or even Premiere Pro (or FCP or whatever NLE) and you’re good to go. I used that technique to get five of the tracks.

Jukedeck didn’t have the rendering problem but it REALLY suffered from the ‘sameness’ problem. It was tough getting something that really sounded unique. However, I did get a couple good tracks out of it.

Problems Using Artificial Intelligence

This is another example of A.I. and Machine Learning that works… sort of. I could have found seven stock music tracks that I like much faster (this is what I usually do for the Audio Art Tour).  The amount of time it took me messing around with these services was significant. Also, if Jukedeck is any indication, a music track from one of these services will cost as much as a stock music track. Just go to Pond5 to see what you can get for the same price. With a much, much wider variety. I don’t think living, breathing musicians have much to worry about. At least for now.

That said, I did manage to get seven unique, cool sounding tracks out of them. It took some work, but it did happen.

As with most A.I./ML, it’s difficult to see what the future looks like. There have certainly been a ton of advances, but I think in a lot of cases it’s some of the low-hanging fruit. We’re seeing that with speech-to-text algorithms in Transcriptive, where they’re starting to plateau and cluster around the same accuracy levels. The fruit (accuracy) is now pretty high up and improvements are tough. It’ll be interesting to see what it takes to break through that. More data? Faster servers? A new approach?

I think music may be similar. It seems like it’s a natural thing for A.I. but it’s deceptively difficult to do in a way that mimics the range and diversity of styles and sounds that many human musicians have. Particularly a human armed with a synth that can reproduce an entire orchestra. We’ll see what it takes to get A.I. music out of the Valley of Sameness.

 

Photographing Lightning during The Day or Night with a DSLR

Capturing lightning using a neutral density filter and long exposure

As many of you know, I’m an avid time lapse videographer, and the original purpose of our Flicker Free filter was time lapse. I needed a way to deflicker all those night to day and day to night time lapses. I also love shooting long exposure photos.

As it turns out, this was pretty good experience to have when it came to capturing a VERY rare lightning storm that came through San Francisco late last year.

Living in San Francisco, you’re lucky if you see more than 3 or 4 lightning bolts a year. Very different from the lightning storms I saw in Florida when I lived there for a year. However, we were treated to a decidedly Florida-esque lightning storm last September. Something like 800 lightning strikes over a few hours. It was a real treat and gave me a chance to try and capture lightning! (in a camera)

The easiest way to capture lightning is to just flip your phone’s camera into video mode and point it in the direction you hope the lightning is going to be. Get the video and then pull out a good frame. This works… but video frames are usually heavily compressed and much lower resolution than a photo.

I wanted to use my 30mp Canon 5DmarkIV to get photos, not the iPhone’s mediocre video camera.

Problems, Problems, Problems

To get the 5D to capture lightning, I needed at the very least: 1) a tripod and 2) an intervalometer.

Lightning happens fast. Like, speed of light fast. Until you try and take a picture of it, you don’t realize exactly how fast. If you’re shooting video (30fps), the bolt will happen over 2, maybe 3 frames. If you’ve got a fancy 4K (or 8K!) camera that will shoot 60 or 120fps, that’s not a bad place to start.

However, if you’re trying to take advantage of your 5D’s 6720 × 4480 sensor… you’re not going to get the shot handholding it and manually pressing the shutter. Not going to happen. Cloudy with a chance of boring-ass photos.

So set the camera up on a tripod and plug in your intervalometer. You can use the built-in one, but the external one gives you more options. You want the intervalometer firing as fast as possible, but that means only once every second. During the day, that’s not going to work.

Lightning And Daylight

The storm started probably about an hour before sunset. It was cloudy, but there was still a fair amount of light.

At first I thought, “once every second should be good enough”. I was wrong. Basically, the lightning had to happen the exact moment the camera took the picture. Possible, but the odds are against you getting the shot.

As mentioned, I like shooting long exposures. Sometimes at night but often during the day. To achieve this, I have several neutral density filters which I stack on top of each other. They worked great for this. I stacked a couple .9 ND filters on the lens, bringing it down 6 stops. This was enough to let me have a 1/2 sec. shutter speed.

1/2 sec. shutter speed and 1 sec. intervals… I’ve now got a 50/50 chance of getting the shot… assuming the camera is pointed in the direction of the lightning. Luckily it was striking so often that I could make a good guess as to the area it was going to be in.  As you can see from the above shot, I got some great shots out of it.
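If you want to sanity-check the exposure math, here’s the back-of-the-envelope version in Python. The 1/125 sec. base exposure is my assumption for that fading overcast light; your meter will give you the real number:

```python
# Each 0.3 of ND density cuts one stop, so a .9 ND filter is 3 stops
# and two stacked .9s are 6 stops.
base_shutter = 1 / 125   # assumed unfiltered exposure, in seconds
stops = 2 * 3            # two .9 ND filters, 3 stops each

filtered_shutter = base_shutter * 2 ** stops
print(f"shutter with ND: {filtered_shutter:.2f} sec")  # ~0.51, call it 1/2 sec

# A 1/2 sec. exposure fired every 1 sec. means the shutter is open about
# half the time -- roughly the 50/50 chance mentioned above.
interval = 1.0           # intervalometer firing once per second
print(f"odds of catching a strike: {filtered_shutter / interval:.0%}")
```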

Night Lightning

Photographing lightning at night with a Canon 5D

To the naked eye, it was basically night. So with a 2 second exposure and a 2 second interval… as long as the lightning happened where the camera was pointed, I was good to go. (it wasn’t quite night, so with the long exposure you got the last bits of light from sunset) I did not need the neutral density filters as it was pretty dark.

By this point the storm had moved. The lightning was less consistent and a bit further away. So I had to zoom in a bit, reducing the odds of getting the shot. But luck was still with me and I got a few good shots in this direction as well.

I love trying to capture stuff you can’t really see with the naked eye, whether it’s using time lapse to see how clouds move or long exposure to see water flow patterns. Experimenting with capturing lightning was a blast. Just wish we saw more of it here in SF!

So hopefully this gave you some ideas about how to capture lightning, or anything else that moves fast, next time you have a chance!

Just Say No to A.I. Chatbots

For all the developments in artificial intelligence, one of the consistently worst uses of it is with chatbots. Those little ‘Chat With Us’ side bars on many websites. Since we’re doing a lot with artificial intelligence (A.I.) in Transcriptive and in other areas, I’ve gotten very familiar with how it works and what the limitations are. It starts to be easy to spot where it’s being used, especially when it’s used badly.

So A.I. chatbots, which really don’t work well, have become a bit of a pet peeve of mine. If you’re thinking about using them for your website, you owe it to yourself to click around the web and see how often ‘chatting’ gets you a usable answer. It’s usually just frustrating. You go a few rounds with a cheery chatbot before getting to what you were going to do in the first place… send a message that will be replied to by a human. Total waste of time and it doesn’t answer the questions.

Do you trust cheery, know-nothing chatbots with your customers?

The main problem is that chatbots don’t know when to quit. I get it that some businesses receive the same question over and over… where are you located? what are your hours? Ok, fine, have a chatbot act as an FAQ. But the chatbot needs to quickly hand off the conversation to a real person if the questions go beyond what you could have in an FAQ. And frankly, an FAQ would be better than trying to fake out people with your A.I. chatbot. (honesty and authenticity matter, even on the web)

A.I. is just not great at reading comprehension. It can usually get the gist of things, which I think is useful for analytics and business intelligence. But this doesn’t allow it to respond with any degree of accuracy or intelligence. For responding to customer queries it produces answers that are sort of close… but mostly unusable. So, the result is frustrated customers.

Take a recent experience with Audi. I’m looking at buying a new car and am interested in one of their SUVs. I went onto an Audi dealer site to inquire about a used one they had. I wanted to know 1) was it actually in stock and 2) how much of the original warranty was left since it was a 2017? There was a button to send a message which I was originally going to use but decided to try the chat button that was bouncing up and down getting my attention.

So, I asked those questions in the chat. If it had been a real person, they definitely could have answered #1 and probably #2, even if they were just an assistant. But no, I ended up in the same place I would’ve been if I’d just clicked ‘send a message’ in the first place. But first, I had to get through a bunch of generic answers that didn’t answer any of my questions and just dragged me around in circles. This is not a good way to deal with customers if you’re trying to sell them a $40,000 car.

And don’t get me started on Amazon’s chatbots. (and emailbots for that matter)

It’s also funny to notice how chatbots try to make you think they’re human, with misspelled words and faux emotions. I’ve had a chatbot admonish me with ‘I’m a real person…’ when I called it a chatbot. It then followed that with another generic answer that didn’t address my question. The Pinocchio chatbot… You’re not a real boy, not a real person, and you don’t get to pass Go and collect $200. (The real salesperson I eventually talked to confirmed it was a chatbot.)

I also had one threaten to end the chat if I didn’t watch my language, which was not aimed at the chatbot. I just said, “I just want this to f’ing work”. A little generic frustration. However, after it told me to watch my language, I went from frustrated to kind of pissed. So much for artificial intelligence having emotional intelligence. Getting faux-insulted over something almost any real human would recognize as low grade frustration, is not going to make customers happier.

I think A.I. has some amazing uses, Transcriptive makes great use of A.I. but it also has a LOT of shortcomings. All of those shortcomings are glaringly apparent when you look at chatbots. There are, of course, many companies trying to create conversational A.I. but so far the results have been pretty poor.

Based on what I’ve seen developing products with A.I., I think it’s likely it’ll be quite a while before conversational A.I. is a good experience on a regular basis. You should think very hard about entrusting your customers to it. A web form or FAQ is going to be better than a frustrating experience with a ‘sales person’.

Not sure what this has to do with video editing. Perhaps just another example of why A.I. is going to have a hard time editing anything that requires comprehending the content. Furthering my belief that A.I. isn’t going to replace most video editors any time soon.

What Exactly is Adobe TypeKit?

So let’s talk about something that’s near and dear to my heart: Fonts.

I recently discovered Adobe TypeKit. I know…some of you are like… ‘You just discovered that?’.

Yeah, yeah… well, in case there are other folks that are clueless about this bit of the Creative Cloud that’s included with your subscription: It’s a massive font library that can be installed on your Creative Cloud machine… much of which is free (well, included in the cost of CC).

Up until a week ago I just figured it was a way for Adobe to sell fonts. I was mistaken. You find the font you like and, more often than not, you click the SYNC button and, boom… font is installed on your machine for use in Photoshop or After Effects or whatever.

Super cool feature of Creative Cloud that, if you’re as clued in as I am about everything CC includes, you might not know about. Now you do. :-) Here’s a bit more info from Adobe.

I realize this probably comes off as a bit of an ad for TypeKit, but it really is pretty cool. I just designed a logo using a new font I found there. And since it’s Adobe, the fonts are of really high quality, not like what you find on free font sites (which is what I’ve relied on for many uses).

F’ing GPUs

One of the fun challenges of developing graphics software is dealing with the many, varied video cards and GPUs out there. (actually, it’s a total pain in the ass. Hey, just being honest :-)

There are a lot of different video cards out there and they all have their quirks. These are complicated by the different operating systems and host applications… for example, Apple decides they’re going to more or less drop OpenCL in favor of Metal, which means we have to re-write quite a bit of code; Adobe After Effects and Adobe Premiere Pro handle GPUs differently even though it’s the same API; etc. etc. From the end user side of things you might not realize how much development goes into GPU acceleration. It’s a lot.

The latest release of Beauty Box Video for Skin Retouching (v4.1) contains a bunch of fixes for video cards that use OpenCL (AMD, Intel). So if you’re using those cards it’s a worthwhile download. If you’re using Resolve and Nvidia cards, you also want to download it as there’s a bug with CUDA and Resolve and you’ll want to use Beauty Box in OpenCL mode until we fix the CUDA bug. (Probably a few weeks away) Fun times in GPU-land.

4.1 is a free update for users of the 4.0 plugin. Download the demo and it should automatically remove the older version and recognize your serial number.

Just wanted to give you all some insight on how we spend our days around here and what your hard earned cash goes into when you buy a plugin. You know, just in case you’re under the impression all software developers do is ‘work’ at the beach and drive Ferraris around. We do have fun, but usually it involves nailing the video card of the month to the wall and shooting paintballs at it. ;-)

Creating the Grinch on Video Footage with The Free Ugly Box Plugin

We here at Digital Anarchy want to make sure you have a wonderful Christmas and there’s no better way to do that than to take videos of family and colleagues and turn them into the Grinch. They’ll love it! Clients, too… although they may not appreciate it as much even if they are the most deserving. So just play it at the office Christmas party as therapy for the staff that has to deal with them.

Our free plugin Ugly Box will make it easy to do! Apply it to the footage, click Make Ugly, and then make them green! This short tutorial shows you how:

You can download the free Ugly Box plugin for After Effects, Premiere Pro, Final Cut Pro, and Avid here:

https://digitalanarchy.com/register/register_ugly.php

Of course, if you want to make people look BETTER, there’s always Beauty Box to help you apply a bit of digital makeup. It makes retouching video easy, get more info on it here:

https://digitalanarchy.com/beautyVID/main.html

De-flickering Bix Pix’s Stop Motion Animation Show ‘Tumble Leaf’ with Flicker Free


One of the challenges with stop motion animation is flicker. Lighting varies slightly for any number of reasons causing the exposure of every frame to be slightly different. We were pretty excited when Bix Pix Entertainment bought a bunch of Flicker Free licenses (our deflicker plugin) for Adobe After Effects. They do an amazing kids show for Amazon called Tumble Leaf that’s all stop motion animation. It’s won multiple awards, including an Emmy for best animated preschool show.

Many of us, if not most of us, that do VFX software are wannabe (or just flat out failed ;-) animators. We’re just better at the tech than the art. (exception to the rule: Bob Powell, one of our programmers, who was a TD at Laika and worked on Box Trolls among other things)

So we love stop motion animation. And Bix Pix does an absolutely stellar job with Tumble Leaf. The animation, the detailed set design, the characters… are all off the charts. I’ll let them tell it in their own words (below). But check out the 30 second deflicker example below (view at full screen as the Vimeo compression makes the flicker hard to see). I’ve also embedded their ‘Behind The Scenes’ video at the end of the article. If you like stop motion, you’ll really love the ‘Behind the Scenes’.

From the Bix Pix folks themselves… breaking down how they use Flicker Free  in their Adobe After Effects workflow:

——————————————————————-

Using Digital Anarchy’s Flicker Free at Bix Pix

Bix Pix Entertainment is an animation studio that specializes in the art of stop-motion animation, and is known for their award-winning show Tumble Leaf on Amazon Prime.

It is not uncommon for an animator to labor for days, sometimes weeks, on a single stop motion shot, working frame by frame. With this process, it is natural to have some light variation between exposures, commonly referred to as ‘flicker’. There are many factors that can cause the shift in lighting. For instance, a studio light or lights may blow out or flare. Voltage and/or power surges can brighten or dim lights over a long shot. Certain types of lights, poor lighting equipment, camera malfunctions, or incorrect camera settings can all contribute. Sometimes an animator might wear a white t-shirt, unintentionally adding fill to the shot, or accidentally stand in front of a light, casting a shadow from his or her body.

The variables are endless. Luckily these days compositors and VFX artists have fantastic tools to help remove these unwanted light shifts. Removing unwanted light shifts and flicker is a very important and necessary first step when working with stop-motion footage. Unless by chance it’s an artistic decision to leave that tell-tale flicker in there. But that is a rare decision that does not come about often.

Here at Bix Pix we use Adobe After Effects for all of our compositing and clean-up work. Having used 4 different flicker removal plugins over the years, we have to say Digital Anarchy’s Flicker Free is the fastest, easiest, and most effective flicker removal software we have come across. And also quite affordable.

During a season of Tumble Leaf we will process between 1600 and 2000 shots, averaging between 3 seconds and a couple minutes in length. That is an average of about 5 hours of footage per season, almost three times the length of a feature film, with a tight schedule of less than a year and a small team of ten or so VFX artists and compositors. Nearly every shot has an instance of Flicker Free applied to it as an effect. The plugin is so fast, simple to use, and reliable that de-flickering can be done in almost real time.

Digital Anarchy’s Flicker Free has saved us thousands of hours of work and reduced overtime and crunch time delays. This not only saves money but frees up artists to do more elaborate effects that we could not do before due to time constraints, allowing them to focus on making their work stand out even more.

If you are shooting stop-motion animation and require flicker free footage, this is the plugin to use.

———————————————–

For a breakdown of how they do Tumble Leaf, you should definitely check out the Behind the Scenes video!

I even got to meet the lead character, Fig! My niece and nephew (4 and 6) were very impressed. :-)

Hanging out with Fig at BixPix Entertainment

Cheers,
Jim Tierney
Chief Executive Anarchist
Digital Anarchy

Tutorial: Removing Flicker from Edited Video Footage


 

One problem that users can run into with our Flicker Free deflicker plugin is that it will look across edits when analyzing frames for the correct luminance. The plugin looks backwards as well as forwards to gather frames and does a sophisticated blend of all those frames. So even if you create an edit, say to remove an unwanted camera shift or a person walking in front of the camera, Flicker Free will still see those frames.

This is particularly a problem with Detect Motion turned OFF.

The way around this is to Nest (i.e. Pre-compose (AE), Compound Clip (FCP)) the edit and apply the plugin to the new sequence. The new sequence will start at the first frame of the edit and Flicker Free won’t be able to see the frames before the edit.

This is NOT something you always have to do. It’s only if the frames before the edit are significantly different than the ones after it (i.e. a completely different scene or some crazy camera movement). 99% of the time it’s not a problem.

This tutorial shows how to solve the problem in Premiere Pro. The technique works the same in other applications; just replace ‘Nesting’ with whatever your host application calls it (pre-composing, making a compound clip, etc).

Is The iPhone A Real Camera?

For whatever reason I’ve seen several articles/posts over the last few days about whether you can be a photo/videographer with a camera phone. Usually the argument is that just because the iPhone (or whatever) can take the occasional good video/pictures, it doesn’t make you a good videographer. Of course not. Neither does a 5Dm4 or an Arri Alexa.

Camera phones can be used for professional video.

But what if you have a good eye and are a decent videographer? I think a lot of the hand wringing comes from people that have spent a lot of money on gear and are seeing people get great shots with their phone. It’s not going to change. The cameras in a lot of phones are really good and if you have a bit of skill, it can go a long way. You can check out this blog post comparing the iPhone’s slow motion video capabilities to a Sony FS700. The 10x price difference doesn’t beget a 10x quality difference.

There is obviously a place for long or fast lenses that you need a real camera for. There are definitely shots you won’t get with a phone. However, there are definitely shots you can get with a phone that you can’t get with your big, fancy camera. Partially just because you ALWAYS have your phone and partially because of the size. Sometimes the ability to spontaneously shoot is a huge advantage.

Then you add something like Dave Basaluto’s iOgrapher device and you’ve got a video camera capable of some great stuff, especially for stock or B roll.

There are issues for sure. Especially with these devices trying to shoot 4K, like a GoPro. It doesn’t matter how well lit and framed the shot is because it’s often got massive compression artifacts.

Overall though, the cameras are impressive and if you’ve got the skills, you can consistently get good to great shots.

What’s this got to do with Digital Anarchy? Absolutely nothing. We just like cool cameras no matter what form they take.  :-)

(and, yes, I’m looking forward to getting the new 5D mark4. It was finally time to upgrade the Digital Anarchy DSLR)

VR: Because Porn! (and Siggraph and other stuff)

Over the last few months I’ve been to NAB, E3, and Siggraph and seen a bunch of VR stuff.

Most VR people with their headsets.

One panel discussion about VR filmmaking was notable for the amount of time spent talking about all the problems VR has and how, once they solve this or that major, non-trivial problem, VR will be awesome! One of these problems is that, as one of the panelists pointed out, anything over 6-8 minutes doesn’t seem to work. I’m supposed to run out and buy VR headsets for a bunch of shorts? Seriously?

E3 is mostly about big game companies and AAA game titles. However, if you go to a dark, back corner of the show floor you’ll find a few rows of small 10×20 booths. It was here that I finally found a VR experience that lived up to expectations! Porn. Yes, there was a booth at E3 showing hardcore VR porn. (I wonder if they told E3 what they were showing?)

One of my favorite statistics ever is that adult, pay-per-view movies in hotel rooms are watched, on average, for about 12 minutes. Finally! A use case for VR that matches up perfectly to its many limitations. You don’t need to worry about the narrative and no one is going to watch it for more than 12 minutes. Perfect. I’m sure the hot, Black Friday special at Walmart will be the Fleshlight/Oculus Rift bundle.

Surely There Are Other Uses Besides Porn?

Ok, sure, there are. I just haven’t found them to be compelling enough to justify all the excitement VR is getting. One booth at Siggraph was showing training on how to fix damaged power lines. This included a pole with sensors on the end of it that gave haptic (vibrations) feedback to the trainee and controlled the virtual pole in the VR environment. There are  niche uses like this that are probably viable.

There are, of course, games, which are VR’s best hope for getting into the mainstream. These are MUCH more compelling in the wide open space of a tradeshow than I think they’re going to be in someone’s living room. For the rank and file gamer who doesn’t want to spend $8K on a body suit to run around the living room in… sitting on the couch with a headset is probably going to be less than an awesome experience after the novelty wears off. (and we don’t want to see the average gamer in a body suit. Really. We don’t.)

And then there are VR films. There was a pretty good 5 minute film called Giant being shown at Siggraph, basically the story of parents and an 8-year-old daughter in a basement in a war zone. You sat on a stool that could vibrate, strapped on the headset, and you were sitting in a corner of this basement. It was pretty intense.

However, the vibrating stool that let you feel the bombs being dropped probably added more to the experience than the VR did. I think it would have been more intense as a regular film. The problem with VR is that you can’t do close-ups or multiple cameras, so a regular film would have been able to capture the emotions of the actors better. And since it’s VR, my tendency was to look around the basement rather than focus on what was happening in the scene. There was very little of interest in the basement besides the actors, so the freedom to look around was just a big distraction.

So if your idea of a good time is watching game cinematics, which is what it felt like, then VR films are for you. And that was a good VR experience. Most VR film work I’ve seen either 1) is incredibly bland without a focal point or 2) uses the simulation of an intimate space to shock you (Giant was guilty of this to some degree). The novelty of that is going to wear off as fast as a 3D axe thrown at the screen.

There are good uses for VR. They just don’t justify the hype and excitement people are projecting onto it. For all the money that’s pouring into it, it’s disappointing that the demos most companies are still showing (and expecting you to be excited about) are just 360 environments. “But look! There are balloons falling from the sky! Isn’t it cool?!” Uh… yeah. Got any porn?

Computers and Back Care part 2: Forward Bending


Go to Part 1 in the Back Care series

Most folks know how to pick up a heavy box. Squat down, keep your back reasonably flat and upright and use your legs to lift.

However, most folks do not know how to plug in a power cord (as the photo below shows).

How to bend forward if you're plugging in a power cord

Forward bending puts a great deal of stress on your back and we do it hundreds of times a day. Picking up your keys, putting your socks on, plugging in a power cord, and on and on. This is why people frequently throw their backs out sneezing or picking up some insignificant thing off the floor like keys or clothing.

While normally these don’t cause much trouble, the hundreds of bends a day add up. Especially if you sit in a chair all day and are beating up your back with a bad chair or bad posture. Over time all of it weakens your back, degrades discs, and causes back pain.

So what to do?

There are a couple books I can recommend. Both have some minor issues but overall they’re very good. I’ll talk about them in detail in Part 3 of this series.

Back RX by Vijay Vad
8 Steps To a Pain Free Back by Esther Gokhale

Obviously for heavy objects, keep doing what you’re probably already doing: use your legs to lift.

But you also want to use your legs to pick up almost any object; the same technique works for small objects as well. That said, all the squatting can be a bit tough on the knees, so let’s talk about hip hinging.

Woman hinging from the hips in a way that puts less pressure on your back

(The image shows a woman stretching, but she’s doing it with a good hip hinge. Since it’s a stretch, it’s, uh, a bit more exaggerated than you’d do picking something up. Not a perfect image for this post, but we’ll roll with it.)

Imagine your hip as a door hinge: your upright back is the door and your legs are the wall. Keep your back mostly flat and hinge at the hips, tilting your pelvis instead of bending your back. Then bend your legs to get the rest of the way to the floor. This puts less strain on your back and not as much strain on your knees as going into a full squat. Also, engage your abs as you’re hinging; strong abs help maintain a strong back.

Directions on how to hip hinge, showing a good posture

There’s some disagreement on the best way to do this. Some say to bend forward (with your knees slightly bent) until you feel a stretch in your hamstrings, then bend your knees. I usually hinge the back and bend the knees at the same time. This feels better for my body, but everyone is different, so try it both ways. It’s true that the more length you have in your hamstrings, the more you can hinge. However, since most people, especially those who sit a lot, have tight hamstrings, it’s often easier to hinge and bend at the same time.

But the really important bit is to be mindful of when you’re bending, regardless of how you do it. Your back isn’t going to break just from some forward bending, but the more aware you are of how often you bend, and the more often you do it correctly, the better off you’ll be.

This also applies to just doing regular work, say fixing a faucet or something where you have to be lower to the ground. If you can squat and keep a flat back instead of bending over to do the work, you’ll also be better off.

If this is totally new to you, then your back may feel a little sore as you use muscles you aren’t used to using. This is normal and should go away. However, it’s always good to check in with your doctor and/or physical therapist when doing anything related to posture.

In Part 3 I’ll discuss the books I mentioned above and some other resources for exercises and programs.

We Live in A Tron Universe: NASA, Long Exposure Photography and the Int’l Space Station


I’m a big fan of long exposure photography (and time lapse, and slow motion, etc. etc. :-). I’ve done some star trail photography from the top of Haleakala in Maui. 10,000 feet up on a rock in the middle of the Pacific is a good place for it! So I was pretty blown away by some of the images released by NASA that were shot by astronaut Don Pettit.

Long Exposure photos of star trails from space

I think these have been up for a while, they were shot in 2012, but it’s the first I’ve seen of them. Absolutely beautiful imagery, although they make space look like the TRON universe. These were all shot as 30-second exposures and then combined, as Don says:

“My star trail images are made by taking a time exposure of about 10 to 15 minutes. However, with modern digital cameras, 30 seconds is about the longest exposure possible, due to electronic detector noise effectively snowing out the image. To achieve the longer exposures I do what many amateur astronomers do. I take multiple 30-second exposures, then ‘stack’ them using imaging software, thus producing the longer exposure.”
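If you want to try the same stacking trick on your own star shots, the usual approach is a “lighten” (brightest-pixel) blend across the frames. Here’s a minimal Python sketch, assuming a folder of aligned 30-second exposures (the folder and file names are hypothetical):

```python
import glob
import numpy as np
from PIL import Image

# "Lighten" stacking: keep the brightest value ever seen at each pixel,
# which merges a series of short exposures into continuous star trails.
stack = None
for path in sorted(glob.glob('exposures/*.jpg')):  # hypothetical folder
    frame = np.asarray(Image.open(path))
    stack = frame if stack is None else np.maximum(stack, frame)

Image.fromarray(stack).save('star_trails.jpg')
```

Dedicated tools like StarStaX do the same thing with more options, but the core idea really is this simple.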

You can see the entire 36 photo set on Flickr.

Having done 10- or 15-minute long exposures myself, I’d say the images are noisy but not that bad. I wonder if being in space causes the camera sensors to pick up more noise. If anyone knows, feel free to leave a comment.

If you’re stuck doing star photography from good ol’ planet Earth, then noise reduction software helps. You also want to shoot RAW, as most RAW software will automatically remove dead pixels, which are particularly annoying in astrophotography.

But the space station photos are really amazing, so head over to Flickr and check them out! They’re not totally public domain, they can’t be used commercially, but you can download the high-res versions of the photos and print or share them as you see fit. Here are a few more to whet your appetite:

The shots were created in Photoshop by combining multiple 30 second exposure photos

Amazing TRON like photos taken from the space station

Tips on Photographing Sports – Sneaking a Lens In and Other Stories


I love photographing sports. It’s a lot like shooting wildlife or humpback whales in many ways: it requires a lot of patience and quick shooting skills.

Unfortunately, I’m usually limited to shooting from the stands, which makes the process a little harder, but if you can get good seats you can make it work. As it happens, I recently got third row seats to the Golden State Warriors game against the Lakers. So here are a few tips for getting great shots if you can’t actually get a press pass.

Depth of field is always important when photographing sports

The first thing you need to check is how long of a lens you’re allowed to bring in. In this case it was 3″ or less, so that’s what needs to be attached to the camera. (See the end of the article for some ‘other’ suggestions.)

I ended up using a 100mm f/2 lens for these shots, which is exactly 3″. You want as fast a lens as possible. You’re not going to be able to use a flash, so you’re reliant on the stadium lighting, which isn’t particularly bright. f/2.8 is really the minimum, and even then you’ll have the ISO higher than you’d like. Like wildlife, the action moves fast, so the wider the aperture, the faster the shutter speed you’ll have and the sharper the shots will be.

The minimum shutter speed is probably about 1/500 and you’d like 1/2000 or higher. Hence the need for an f/2 or f/2.8 lens. Otherwise, the action shots, where you really want it to be sharp, will be a bit blurry.
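To put some rough numbers behind that, here’s the standard exposure-value relation in a few lines of Python. The arena brightness is an assumption (meter your own venue), but it shows why a fast lens plus a high ISO is what gets you to 1/2000:

```python
def max_shutter(ev100, f_number, iso):
    """Longest shutter speed (seconds) for a correct exposure,
    from the standard relation: f^2 / t = 2^EV100 * (ISO / 100)."""
    return f_number ** 2 / (2 ** ev100 * iso / 100)

# Assume a typical indoor arena is very roughly EV 8 at ISO 100.
t = max_shutter(ev100=8, f_number=2.0, iso=3200)
print(f"1/{1 / t:.0f} sec")  # ~1/2048 -- right around the 1/2000 target
```

Run the same numbers at f/2.8 and you only get to about 1/1000, which is why the aperture matters so much.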

Seat placement matters. Obviously you want to be as close as possible, but you also want to be at the ends of the court/field. That’s where most of the action happens. Center court seats may be great for watching the game, but behind-the-goal seats get you up close and personal for half of the action. Much better for photography, and hence one of the reasons the press photogs are on the baseline.

Photographing basketball is best from the baseline

What if you’re not happy with a 3″ lens? Well, you COULD give a friend a larger lens and let them try to smuggle it in. Since it’s not attached to the camera, most of the security people don’t recognize it as a camera lens. Just say it’s, you know, a binocular or something (monocular? ;-). Usually it works; the worst that happens is you have to go back to the car and store it. You’re not trying to break the rules, you’re, uh, helping train the security staff.

If you do manage to get a larger lens in, don’t expect to be able to use it much. One of the ushers will eventually spot it (especially if it’s a big white Canon L lens) and call you on it. You’ll have to swap it for the other lens (or risk getting kicked out). Wait until the game is well underway before trying to use it.

Of course, the basic tips apply… Shoot RAW, make sure you have large, empty memory cards, a fully charged battery, don’t spill beer on the camera, etc., etc. But the critical component is getting close to the end of the court and having a very fast shutter speed (which usually means a very wide aperture).

Shooting RAW is soooo critical. It’ll give you some flexibility to adjust the exposure and do some sharpening. Since you’ll probably have a relatively high ISO, the noise reduction capabilities are important as well. Always shoot RAW.

If you’re a photographer who loves sports, it is definitely fun to get good seats and work on your sports shooting skills. It can be a bit expensive to do on a regular basis, though!

Fast Shutter Speed and very wide aperture is critical for shooting sports

 

Tips on Photographing Whales – Underwater and Above


I’ve spent the last 7 years going out to Maui during the winter to photograph whales. Hawaii is the migration destination of the North Pacific humpback whales. Over the course of four months, it’s estimated that about 12,000 whales migrate from Alaska to Hawaii. During the peak months, Jan 15 to March 15 or so, there are probably 6,000+ whales around Hawaii. This creates a really awesome opportunity to photograph them, as they are EVERYWHERE.

Many of the boats that go out are small, zodiac type boats. This allows you to hang over the side if you’ve got an underwater camera. Very cool if they come up to the boat, as this picture shows! (you can’t dive with them as it’s a national sanctuary for the whales)

A photographer can hang over the side of a boat to get underwater photos of the humpback whales.

The result is shots like this below the water:

Photographing whales underwater is usually done hanging over the side of a boat.

Or above the water:

A beautiful shot of a whale breaching in Maui

So ya wanna be whale paparazzi? Here are a few tips on getting great photographs of whales:

1- Patience: Most of the time the whales are below the surface and out of range of an underwater camera. There’s a lot of ‘whale waiting’ going on. It may take quite a few trips before a whale gets close enough to shoot underwater. To capture the above-the-water activity you really need to pay attention. Frequently it happens very quickly and is over before you can even get your camera up if you’re distracted by talking or looking at photos on your camera. Stay present and focused.

2- Aperture Priority mode: Both above and below the water I set the camera to Aperture Priority and the lowest f-stop I can, getting the aperture as wide open as possible. You want as fast a shutter speed as possible (for 50-ton animals they can move FAST!) and setting the widest aperture will do that. You also get that nice shallow depth of field a low f-stop gives you.

3- Autofocus: You have to have autofocus turned on. The action happens too fast to focus manually. Also, use AF points that are calculated on both the horizontal and vertical axes (cross-type points). Not all AF points are created equal.

4- Lenses: For above the water, a 100-400mm is a good lens for the distance the boats usually keep from the whales. It’s not great if the whales come right up to the boat… but that’s when you bust out your underwater camera with a very wide angle or fisheye lens. With underwater photography, at least in Maui, you can only photograph the whales if they come close to the boat. You’re not going to be able to operate a zoom lens hanging over the side of a boat, so set a pretty wide focal length when you put it into the housing. I’ve got a 12-17mm Tokina fisheye and usually set it to about 14mm. This means the whale has to be within about 10 feet of the boat to get a good shot. But due to underwater visibility, that’s pretty much the case no matter what lens you have on the camera.

5- Burst Shooting: Make sure you set the camera to burst mode. The more photos the camera can take when you press and hold the shutter button the better.

6- Luck: You need a lot of luck. But part of luck is being prepared to take advantage of the opportunities that come up. So if you get a whale that’s breaching over and over, stay focused with your camera ready because you don’t know where he’s going to come up. Or if a whale comes up to the boat make sure that underwater camera is ready with a fully charged battery, big, empty flash card and you know how to use the controls on the housing. (trust me… most of these tips were learned the hard way)

Many whale watches will consist mostly of ‘whale waiting’. But if you stay present and your gear is set up correctly, you’ll be in great shape to capture those moments when you’re almost touched by a whale!

A whale photographed just out of arm’s reach, all but touching the camera.

Don’t Go To Art School, Especially for Video/Film/VFX


I’ve written about this before, but Forbes recently wrote a couple of awesome pieces taking down San Francisco’s Academy of Art, really spelling out why for-profit art schools are such an overpriced scam. And they are.

Rule #1: Don’t go into massive debt to get an art degree

The ‘Starving Artist’ is a thing. Don’t compound it with debt.

For-profit schools will promise you anything to get you to take out a federally backed student loan. You can’t bankrupt yourself out of that loan, so it’s guaranteed money for the school. They couldn’t care less if you succeed. They will certainly promote the few students who do succeed in a big way, but most end up like our former admin assistant: an Academy of Art photography degree, a ton of debt, and a $15/hr job as an admin assistant.

And those who are successful would be successful anywhere, because they have the right mix of work ethic, skills, and talent. Especially the work ethic.

There are amazing instructors even at community colleges. I’m going to do another post soon profiling City College of San Francisco and its excellent broadcast department, with a great studio: full switcher and control room, 4K cameras, greenscreen and all of it. Misha Antonich, the head of the department, has set up a great program for all things broadcast. We hired our QA/Tech Support guy out of there. (Tor, who some of you have probably talked with)

So don’t get caught up in the supposed ‘prestige’ (i.e. marketing budget) of a for-profit school or other expensive school. It’s an illusion. Expensive tuition does not mean better results. You’ll do just fine at a community or state college. Ultimately, it’s your work ethic and demo reel that will make you successful.

Rule #2: Work ethic and internships

You’ll learn more in 3 months of an internship than a year in school. It’s also something that will stand out on your resume MUCH more than where you went to school. Make it happen.

The jobs you’ve had are what sells you. Spending $100K on a filmmaking or VFX degree is usually just a good way to get entry level jobs. There are much cheaper ways to get entry level jobs.

To get internships (and entry level jobs), you’ll need to do a lot of work on your own. But if you’re really into editing, VFX, or whatever, this should be something you WANT to do. You should be totally into the type of work you’re trying to get. If you’re working on a personal project and you look up and realize it’s 4am because you’ve completely lost track of time… that’s a really good indication you’re doing the right thing.

So dig through as many online tutorials as you can, do lots of personal projects, get together with other students and do cool stuff. It’ll all get you to the point of having a reel you can use to get internships.

One caveat: Just because someone is teaching it doesn’t mean they’re right. With editing or visual effects there are usually 10 different ways of doing anything, and they’re all correct depending on the situation. For example, you’ll find the occasional colorist throwing an online hissy fit over digital beauty work done with Beauty Box because they think it’s putting beauty artists out of work (yes, I’ve actually had an online argument about this), or that it’s not true beauty work, or whatever. However, you can use Beauty Box in many workflows, and we have many excellent colorists who use Beauty Box for beauty work on feature films, high end music videos, and national commercials. But some folks have _their_ way of doing something and feel that’s the only way. Don’t be like that. Be flexible and you’ll be a better artist (not to mention being able to work with different time/budget constraints).

Rule #3: Networking and self promotion

The other benefit of internships is you get to meet people. This is critical.

Of course, there are many other ways of meeting people. Go to user groups, join professional meetups, anything where you can meet folks that are doing what you want to do. It’s a good way to get internships, jobs, and good advice.

And you need to promote yourself. Most artists don’t get into doing art because they enjoy sales, but that is the business side to the industry. You need to talk about yourself or, at least, what you’ve been doing. Make sure you have a business card, a web site with your demo reel on it, and examples of your work on your phone.

The business side is every bit as important as your work when it comes to being successful. ALL schools tend to gloss over this. Art majors don’t want to take business classes. If you’re going to succeed, it’s critical that you understand the business side.

Rule #4: Persistence

Don’t give up and definitely follow up. If someone introduces you to someone that has a job/intern opening, follow up with them. Make sure they know you’re interested. Ask them if they need any additional information and don’t be afraid to ask for an interview. People want to hire folks that are proactive and show a willingness to make an effort. It matters. A lot.

Even if there’s not a job involved, most people are willing to help you. But you have to be proactive about it. Don’t be annoying, but if you’ve interacted with them and gotten their card don’t be afraid to send them the occasional email updating them on new projects or things you’ve completed.

So skip the high priced art school. Go to a community college or state college, go through every tutorial you can online, meet folks, do your own projects, get internships, and meet as many people as you can. That’s how you get the skills and contacts that will make you successful. Just get out there and do it. Get an entry level job (you’re going to get one anyways, degree or no) and work your way up.

A school is just a good place to get feedback, get some project ideas, and meet like-minded students. It doesn’t matter if you spend $100/credit or $1000.

Here’s another good article on the film school debate, rising film school costs, and the ever dropping costs of pro camera equipment.

 

 

Wacom Tablets and Repetitive Stress Injuries


I’ve written about this before, but Thanksgiving came along this year and I left on a 5-day, two-city trip without my Wacom tablet. Which reminded me exactly why I’m thankful for the tablet.

The downside to running Digital Anarchy is that I don’t really get many days off. Usually I’m working in some capacity at least a couple hours a day, even on vacations. For trips (like Thanksgiving) that involve plane flights and other downtime, it’s usually a lot more than two hours. (Not really complaining, just pointing out that it’s a thing. There’s plenty of awesome stuff about being Chief Executive Anarchist and coming up with cool video plugins for y’all.)

I’ve used a Wacom tablet as a mouse replacement since around 2003. I used to run a user group called Bay Area Motion Graphics (BAMG). Because another DA employee and I had RSI problems, I got a variety of ‘ergonomic’ devices and had DA folks and BAMG members try them out. BAMG was mostly video editors and motion graphics artists, to give you some idea of who was using them.

Wacom tablet used with Digital Anarchy Video Plugins

Extra space on your keyboard drawer, yes. Clean desk, no.

We swapped around the weird looking keyboards, joystick mouse things, trackballs, tablets, and other oddments. We then got together and decided which devices seemed to offer relief to the most people.

One of the devices that stood out, especially for me, was the Wacom tablet. Once you get used to using it as a mouse replacement it’s really an awesome device. I have multiple tablets and use them constantly in the office and while traveling. It makes using the computer much less painful.

That’s in stark contrast to the last few days. No tablet, so I’ve been forced to use the track pads on the two computers I carry around. My wrists immediately started to ache and tingle. Not good. It’s amazing that for the most part I have no problems when using the tablets, but after a couple days of not using them, much of the pain comes back. Of course, RSI is a whole-body thing. Not only do my wrists hurt, but I’m in a less ergonomic position (f’ing hotel chairs), so my shoulders and back hurt as well.

Why are the Wacom tablets so effective for helping with RSI? I’m not sure, to be honest. But I feel that 1) you’re holding the pen as you would a normal pen. This is a skill you’ve been working on since you were a small child and the muscle memory is very strong. And 2) you’re not just using one body part over and over again (like your index finger on a mouse). You’re using your whole hand, wrist, and arm, which distributes the stress over a greater area.

Whatever the case, for me, the tablets have been a godsend. It takes some time to get really familiar with them, but it’s been well worth it for me. Of course, it’s just one part of having an ergonomic workstation but it’s a big one (a great chair is another big one). Your health is critical. Take care of yourself.

Using an Nvidia GTX 980 (or Titan or Quadro) in a Mac Pro


As many of you know, we’ve come out with a real time version of Beauty Box Video. In order for that to work, it requires a really fast GPU, and we LOVE the GTX 980 (amazing price/performance). Nvidia cards are generally fastest for video apps (Premiere, After Effects, Final Cut Pro, Resolve, etc.), but we are seeing real time performance on the higher end new Mac Pros (or trash cans, dilithium crystals, Jobs’ Urn, or whatever you want to call them).

BUT what if you have an older Mac Pro?

With the newer versions of Mac OS (10.10), in theory, you can put any Nvidia card in them and it should work. Since we have lots of video cards lying around that we’re testing, we wondered if our GTX 980, Titan and Quadro 5200 would work in our Early 2009 Mac Pro. The answer is…

Nvidia GTX GPU in Mac Pro

YES!!!

So, how does it work? For one, you need to be running Yosemite (Mac OS X 10.10).

The GTX 980 is the easier of the two GeForce cards, mainly because of the power needed to drive it. It only needs two six-pin connectors, so you can use the power supply built into the Mac. Usually you’ll need to buy an extra six-pin cable, as the Mac only comes standard with one, but that’s easy enough. The Quadro 5200 has only a single six-pin connector and works well. However, for a single offline workstation, it’s tough to justify the higher price for the extra reliability the Quadros give you (and it’s not as fast as the 980).

The tricky bit about the 980 is that you need to install Nvidia’s web driver. The 980 did not boot up with the default Mac OS driver, even in Yosemite. At least, that’s what happened for us. We have heard reports of it working with the default driver, but I’m not sure how common that is. So you need to install the Nvidia Driver Manager System Pref and, while still using a different video card, set it to the Web Driver. Like so:

Set this to Web Driver to use the GTX 980

You can download the Mac Nvidia Web Drivers here:

For 10.10.2

For 10.10.3

For 10.10.4

Install those, set it to Web Driver, install the 980, and you should be good to go.

What about the Titan or other more powerful cards?

There is one small problem… the Mac Pro’s power supply isn’t powerful enough to handle the card and doesn’t have the connectors. The Mac can supply two six-pin power connectors, but the Titan and other top-of-the-line cards require a six-pin and an eight-pin, or even two eight-pin connectors. REMINDER: The GTX 980 and Quadro do NOT need extra power. This is only for cards with an eight-pin connector.

The solution is to buy a bigger power supply and let it sit outside the Mac with the power cables running through the expansion opening in the back.

As long as the power supply is plugged into a grounded outlet, there’s no problem with it being external. I used an EVGA 850W power supply, but I think the 600W would do. The nice thing about these is they come with long cables (about 2 feet or so) that will reach inside the case to the Nvidia card’s power connectors.

Mac Pro external power supply

One thing you’ll need to do is plug the ‘test’ connector (comes with it) into the external power supply’s motherboard connector. The power supply won’t power on unless you do this.

Otherwise, it should work great! These are very powerful cards and definitely add a punch to the Mac Pros. With this setup we had Beauty Box running at about 25fps in Premiere Pro (AE and Final Cut are a bit slower). Not bad for a five-year-old computer, but not real time in this case. On newer machines with the GTX 980 you should be getting real time playback. It really is a great card for the price.

Creating GIFs from Video: The 4K Animated GIF?


I was at a user group recently and a video editor from a large ad agency was talking about the work he does.

‘Web video’ encompasses many things, especially when it comes to advertising. The editor mentioned that he is constantly being asked to create GIF animations from the video he’s editing. The video may go on one site, but the GIF animation will be used on another. So while one part of the industry is trying to push 4K and 8K, another part is going backwards to small animated GIFs for Facebook ads and the like.

Online advertising is driving the trend, and it’s probably something many editors deal with daily… creating super high resolution for the broadcast future (which may be over the internet), but creating extremely low res versions for current web based ads.

Users want high resolution when viewing content, but ads that aren’t in the video stream (like traditional ads) can slow down a user’s web browsing experience and cause them to bounce if the file size is too big.

Photoshop for Video?

Photoshop’s timeline is pretty useless for traditional video editing. However, for creating these animated GIFs, it works very well. Save out the frames or short video clip you want to make into a GIF, import them into Photoshop and lay them out on the Timeline, like you would video clips in an editing program. Then select Save For Web… and save it out as a GIF. You can even play back the animation in the Save for Web dialog. It’s a much better workflow for creating GIFs than any of the traditional video editors have.
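If you’d rather script it than click through Photoshop, the same workflow can be approximated in a few lines of Python using the imageio library (the file names and frame rate here are hypothetical):

```python
import imageio.v2 as imageio  # pip install imageio imageio-ffmpeg

# Read a short clip and write it back out as an animated GIF.
reader = imageio.get_reader('clip.mp4')
frames = [frame[::2, ::2] for frame in reader]   # halve the resolution
imageio.mimsave('clip.gif', frames, duration=0.08)  # ~12fps; GIFs rarely need full frame rate
```

You give up Photoshop’s palette controls, but for batch work it’s hard to beat.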

So, who knew? An actual use for the Photoshop Timeline. You too can create 4K animated GIFs! ;-)

animated GIF

One particularly good example of an animated GIF. Rule #1 for GIFs: every animated GIF needs a flaming guitar.

Odyssey 7Q+ .wav Problem – How to Fix It and Import It into Your Video Editor


We have a Sony FS700 hanging around the Digital Anarchy office for shooting slow motion and 4K footage to test with our various plugins. (We develop video plugins for Premiere Pro, After Effects, Avid, Final Cut Pro, Resolve, etc., etc.) In order to get 4K out of the camera we had to buy an Odyssey 7Q+ from Convergent Design. (Don’t you love how all these cameras are ‘4K capable’, meaning if you want 4K, it’s another $2500+? Yay for marketing.)

(btw… if you don’t care about the back story, and just want to know how to import a corrupted .wav file into a video editing app, then just jump to the last couple paragraphs. I won’t hold it against you. :-)

The 7Q+ overall is a good video recorder and we like it a lot, but we recently ran into a problem. One of the videos we shot didn’t have sound. It had sound when played back on the 7Q+, but when you imported it into any video editing application: no audio.

The 7Q+ records 4K as a series of .dng files with a sidecar .wav file for the audio. The .wav file had the appropriate size, as if it had audio data (it wasn’t a 1KB file or something), but importing it into FCP, Premiere Pro, QuickTime, or Windows Media Player showed no waveform and no audio.

Convergent Design wasn’t particularly helpful. The initial suggestion was to ‘rebuild’ the SSD drives. This was suggested multiple times, as if it were unimaginable that this wouldn’t fix it and/or I was an idiot not doing it correctly. The next suggestion was to buy file recovery software. This didn’t really make sense either. The .dng files making up the video weren’t corrupted, the 7Q+ could play the clip back, and the file was there with the appropriate size. It seemed more likely that the 7Q+ wrote the file incorrectly, in which case file recovery software would do nothing.

So, Googling around for people with similar problems, I discovered 1) at least a couple of other 7Q users have had the same problem, and 2) there were plenty of non-7Q users with corrupted .wav files. One technique for the #2 folks was to pull the files into VLC Media Player. Would this work for the 7Q+?

YES! Pull it into VLC, then save it out as a different .wav (or whatever) file. It then imported and played back correctly. Video clip saved and I didn’t need to return the 7Q+ to Convergent and lose it for a couple weeks.
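For what it’s worth, if VLC isn’t handy, the same kind of rescue can sometimes be done by grafting the raw PCM data onto a fresh header. Here’s a sketch of that idea in Python, assuming the sample data itself is intact and you know (or can guess) the recording settings; 48 kHz / 24-bit / stereo is assumed here:

```python
import wave

HEADER_SIZE = 44  # standard PCM .wav header; a corrupted one may differ

# Skip the (presumably broken) header and keep the raw sample data.
with open('broken.wav', 'rb') as f:
    pcm = f.read()[HEADER_SIZE:]

# Write the samples into a brand new, well-formed .wav container.
with wave.open('fixed.wav', 'wb') as out:
    out.setnchannels(2)       # stereo (assumption)
    out.setsampwidth(3)       # 24-bit = 3 bytes per sample (assumption)
    out.setframerate(48000)   # 48 kHz (assumption)
    out.writeframes(pcm)
```

If the result plays as static or at the wrong pitch, one of the guessed settings is wrong; adjust and try again.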

Other than this problem the Odyssey 7Q+ has been great… but this was a pretty big problem. Easily fixed though thanks to VLC.

4K Showdown! New Mac Pro vs One Nvidia GTX 980


For NAB this year we finally bought into the 4K hype and decided to have one of our demo screens be a 4K model, showing off Beauty Box Video and Flicker Free in glorious 4K.

NAB Booth Beauty Box Video and Flicker Free in 4k
The Digital Anarchy NAB Booth

So we bought a 55” 4K Sony TV to do the honors. We quickly realized that if we wanted to use it for live demos, we would need a 4K monitor as well. (We could have just shown the demo reel on it.) For live demos you need to mirror the computer monitor onto the TV. An HD monitor upscaled on the 4K TV looked awful, so a 4K monitor it was (we got a Samsung 28″, a gorgeous monitor).

Our plan was to use our Mac Pro for this demo station. We wanted to show off the plugins in Adobe’s AE/Premiere apps and Apple’s Final Cut Pro. Certainly our $4000 middle-of-the-line Mac Pro with two AMD D500s could drive two 4K screens. Right?

We were a bit dismayed to discover that it would drive the screens, but at the cost of slowing the machine down to the point of being unusable. Not good.

For running Beauty Box in GPU accelerated mode, our new favorite video card for GPU performance is Nvidia’s GTX 980. The price/performance ratio is just amazing. So we figured we’d plug the two 4K screens into our generic $900 Costco PC that had the GTX 980 in it and see what kind of performance we’d get out of it.

Not only did the 980 drive the monitors, it still ran Beauty Box Video in real time within Premiere Pro. F’ing amazing for a $550 video card.

The GTX 980 single handedly knocked out the Mac Pro and two AMD D500s. Apple should be embarrassed.

I will note that, for rendering and general use of the apps, the Mac Pro is about on par with the $900 PC + 980. I would still expect more performance from Apple’s $4000 machine, but at least it’s not an embarrassment.

FCP 7 Is Dead. It’s Time to Move On.

It’s been almost 4 years since the last update of FCP 7. The last officially supported OS was 10.6.8. It’s time to move on, people.

Beauty Box Video 4.0 (due out in a month) will be our first product that does not officially support FCP 7.

It’s a great video editor, but Apple makes it very hard to support older software, especially if you’re trying to run it on newer systems. If FCP 7 is a mission critical app for you, you’re taking a pretty big risk by trying to keep it grinding along. We started seeing a lot of weird behaviors with it on 10.9. I realize people are running it successfully on the new systems, but we feel there are a lot of cracks beneath the surface. Those are only going to get more pronounced with newer OSes.

I know people love their software, hell, there are still people using Media 100, but Premiere Pro, Avid, and even FCP X are all solid alternatives at this point. Those of us who develop software and hardware can’t support stuff that Apple threw under the bus three and a half years ago.

We will continue to support people using Beauty Box 3.0 with FCP 7 on older systems (10.8 and below), but we can’t keep supporting it when the problems we’d be fixing are most likely caused not by our software but by old FCP code breaking on new systems.

iPhone 6 vs Sony FS700: Comparison of Slow Motion Modes (240fps and Higher)


Comparing slow motion modes of the iPhone 6 vs the Sony FS700

The Sony FS700 is a $6000 video camera that can shoot HD up to 960fps or 4K at 60fps. It’s an excellent camera that can shoot some beautiful imagery, especially at 240fps (the 960fps footage really isn’t all that, however).

The iPhone 6 is a $700 phone with a video camera that shoots at 240fps. I thought it’d be pretty interesting to compare the iPhone 6 to the Sony FS700. I mean, the iPhone couldn’t possibly compare to a video camera that is dedicated to shooting high speed video, right? Well, ok, yes, you’re right. Usually. But surprisingly, the iPhone 6 holds its own in many cases, and if you have a low budget production, it could be a solution for you.

Let’s compare them.

kickboxing at 240fps

First the caveats:

1: The FS700 shoots 1080p, the iPhone shoots 720p. Obviously if your footage HAS to be 1080p, then the iPhone is a no go. However, there are many instances where 720p is more than adequate.

2: The iPhone has no tripod mount. So you need something like this Joby phone tripod:

3: You can’t play the iPhone movies created in slow motion on Windows. The Windows version of QuickTime does not support the feature. They can be converted with a video editing app, but this is a really annoying problem for Windows users trying to shoot with the iPhone. The Sony movies play fine on a Mac or Windows machine.

4: The iPhone will automatically try to focus and adjust brightness. This is the biggest problem with the iPhone. If you’re going to shoot with the iPhone you HAVE to consider this. We’ll discuss it a lot more in this article.

5: The iPhone does let you zoom and record, but it’s not an optical zoom, so it’s lower quality than the non-zoomed image. With the FS700 you can change lenses, put on a sweet zoom lens, and zoom in to your heart’s content. But that’s one of the things you pay the big bucks for. We did not use the iPhone’s zoom feature for any of these shots, so in some cases the iPhone is a bit wider than the FS700 equivalent.

 

The Egg

Our first example is a falling egg. The FS700 footage is obviously better in this case.

The iPhone does very poorly in low light. You can see this in the amount of noise on the black background. It’s very distracting, particularly since the egg portion IS well lit. Also, you’ll notice that the highlight on the egg is completely blown out.

Unfortunately, there’s nothing you can do about this except light better. One of the problems with the iPhone is the automatic brightness adjustment. It shows up here in the blown out highlight, with no way to adjust the exposure. You get what you get, so you NEED to light perfectly.

In the video there’s also an example of the FS700 shooting at 480fps. The 960fps mode of the FS700 is pretty lacking, but the 480fps does produce pretty good footage. For something like the egg, the 480fps has a better look since the breaking of the egg happens so fast. Even the 240fps isn’t fast enough to really capture it.

All the footage flickers as well. This is a bit more obvious with the FS700 because there’s no noise in the background. The 480fps footage has been de-flickered with Flicker Free; compare it with the 240fps to see the difference.
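For the curious, the textbook way to smooth out this kind of exposure flicker is to normalize each frame’s average brightness against a rolling average of its neighbors. Flicker Free does quite a bit more than this under the hood, but here’s a toy Python sketch of the basic idea:

```python
import numpy as np

def deflicker(frames, radius=2):
    """Scale each frame so its mean brightness matches a rolling
    average of nearby frames, smoothing frame-to-frame flicker.
    `frames` is a list of uint8 numpy arrays."""
    means = [f.mean() for f in frames]
    out = []
    for i, frame in enumerate(frames):
        window = means[max(0, i - radius): i + radius + 1]
        gain = np.mean(window) / means[i]
        out.append(np.clip(frame * gain, 0, 255).astype(np.uint8))
    return out
```

Real deflicker tools are smarter about localized flicker and clipped highlights, but this is the core concept.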

 

The MiniVan

In this case we have a shot of some cars a bit before sunset. This works out much better for the iPhone, but not perfectly. It’s well lit, which seems to be the key for the iPhone.

Overall, the iPhone does a decent job, but it has one problem. As the black van passes by, the iPhone auto-adjusts the brightness. You can see the effect this has by looking at the ‘iPhone 6’ text in the video. The text doesn’t change color, but the asphalt certainly does, making the text look like it’s changing. This does make the van look better, but it changes the exposure of the whole scene. NOT something you want if you’re shooting for professional uses.

With the FS700, on the other hand, we can fix the aperture and shutter speed. This means we keep a consistent look throughout the video. You would expect this with a pro video camera, so no surprise there. It’s doing what it should be doing.

However, if you were to plan for the iPhone’s limitation in advance and not have a massive dark object enter your scene, you would end up with a pretty good slow motion shot. The iPhone is a bit softer than the Sony, but it still looks good!

Also note that when the FS700 is shooting at 480fps, it is much softer as well. This has some advantages; for example, the wheels don’t have anywhere near as much motion blur as in the 240fps footage. But the overall shot is noticeably lower quality, with the bushes in the background being much softer than in the 240fps footage.

 

The Plane! The Plane!

Next to the runway at LAX, there’s a small park where you can lie in the grass and watch planes come in about 40 feet above as they’re about to land. If you’ve never seen the underbelly of an A380 up close, it’s pretty awesome. We did not see that when doing this comparison, but we did see some other cool stuff!

Most notably we saw the problem with the iPhone’s inability to lock focus. Since the camera has nothing to focus on, when the plane enters the frame it’s completely out of focus. The iPhone 6 can’t resolve it in the few seconds it’s overhead, so the whole scene is blurry.

Compare that to the FS700 where we can get focus on one plane and when the next one comes in, we’re in focus and capture a great shot.

The iPhone completely failed this test, so the Sony footage is the hands-down winner.

 

The Kickboxer

One last example where the iPhone performs adequately.

The only problem with this shot is the amount of background noise. As mentioned, the iPhone doesn’t do a great job in low light, so there’s a lot of noise on the black background. Because of the flimsy phone tripod, it shakes a lot more as well. However, overall the footage is ok and would probably look much better if we’d used a white background. This footage also had a flicker problem, and we used Flicker Free again on the 480fps footage to remove it. You’ll notice the detail of the foot and chalk particles is quite good on the iPhone. Not as good as the FS700, but that’s not really what we’re asking.

We want to know if Apple’s iPhone 6 can produce slow motion, 240fps video that’s good enough for an indie film or some other low budget production (or even a high budget production with a situation where you don’t want to, or can’t, risk a $6000 camera). If you consider the caveats about the iPhone not being able to lock focus, the auto-adjusting brightness, and shooting in 720p, I think the answer is yes. If you take all that into consideration and plan for it, the footage can look great. (But, yeah… I’m not trading in my FS700 either. ;-)

Samsung Galaxy S5 Does NOT Shoot 240fps. It Shoots 120fps and Plays It Back at 15fps.


Apple’s iPhone 6 and the Samsung Galaxy S5 both shoot 240fps (or so you might think… 1/8th-speed playback at 30fps works out to 240fps capture). Since we make Flicker Free, a plugin that removes the flicker that occurs when shooting at 240fps, I thought it’d be cool to do a comparison of the two phones and post the results.

However, there was a problem. The footage from the Galaxy S5 seemed to be faster than the iPhone’s. After looking into a number of possibilities, including user error, I noticed that all the S5 footage was playing back in QuickTime Player at 15fps. Could it be that the Samsung S5 was actually shooting at 120fps and playing it back at 15fps to fake 240fps? Say it’s not so! Yep, it’s so.

To confirm this, I took a stopwatch app and recorded it with the Galaxy S5 at 1/8th speed (which should be 240fps if you assume 30fps playback, like normal video cameras). You can see the result here:

If the S5 was truly shooting at 240fps, over one second the frame count should be 240. It’s not. It’s 120. If you don’t trust me and want to see for yourself, the original footage from the S5 can be downloaded HERE: www.digitalanarchy.com/downloads/samsung_120fps.zip
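If you want to check a clip yourself without eyeballing QuickTime, the container’s frame rate is easy to read programmatically. A quick sketch with OpenCV (the file name is hypothetical):

```python
import cv2  # pip install opencv-python

cap = cv2.VideoCapture('samsung_slowmo.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
count = cap.get(cv2.CAP_PROP_FRAME_COUNT)
print(f"playback fps: {fps:.0f}, frames: {count:.0f}")

# True 240fps capture flagged for 30fps playback = 8x slow motion.
# 120fps capture flagged for 15fps playback *looks* just as slow,
# but has half the frames -- hence the stutter when conformed to 30fps.
```

The frame count over a known stretch of real time is the giveaway: one second of action should yield 240 frames, not 120.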

Overall, very disappointing. It’s a silly trick to fake super slow motion. It’s hardly shocking that Samsung would use a bit of sleight of hand on the specs of their device, but still. Cheesy.

 

You might ask why this makes a difference, since it’s still playing really slowly. If you’re trying to use it in a video editor and mixing it with footage that IS shot at 30fps (or 24fps), the 15fps video will appear to stutter. Also, from an image quality standpoint, where you really see the problem is in the detail and motion blur. As you can see in this example:

iphone 6 vs samsung galaxy s5 240fps

Also, the overall image quality of the iPhone was superior. But that’s something I’ll talk about when I actually compare them! That’s coming up next!

VFX Students: Get Ready to Work for $600/mo.

I was talking with the owner of a mid-sized effects house in LA last weekend. They’ve always done most of their work wherever they could get subsidies to pay for part of the salaries… Canada, Singapore, etc.

However, the staff for a new production is in Indonesia, where the artists are making $600/mo. They’re already doing production work and it may not be top tier, but it’s good.

Prices for VFX work have been going down for quite a while and it’s probably not going to stop. Yes, there are still jobs in the US, but the trend is moving towards countries where staff can be had for a lot less. The effort to unionize may help, but probably not as much as folks think. An electrician has to be on set. Most VFX work doesn’t require that. It can be done anywhere.

So, where does that leave students? I don’t have a lot of respect for the schools promising careers in VFX. They don’t mention the state of the industry while they’re happily telling students how to fill out the government loan forms. The end result is that you have students graduating these places with a lot of debt and not a lot of job opportunities.

There are jobs for the top graduates, but it’s been my experience that these students would be better off doing online training (www.fxphd.com for example), working on their own projects and getting an internship. They’re probably going to excel no matter where they’re at. These are, of course, the folks that get featured in ‘Alumni Stories’. But instead of ‘Alumni Stories’ I’d much rather see the percentage of ex-students working full time in the VFX industry. The reason you don’t see that statistic is that it’d be pretty depressing.

So if you’re thinking about a career in VFX, before you sign up for $20,000/yr in debt, consider the $600/mo the VFX artists are making in Indonesia. There are other ways to break into the industry than an expensive school. As an artist you may not want to think about finances, but I can assure you… once you have to start paying that back, you’ll be thinking a lot about it.

Time Lapsing Around Italy with Flicker Free

Stephen Smith, a long time videographer, used a recent trip to Italy as an opportunity to hone his time lapse skills. The result is a compilation of terrific time lapse sequences from all over Italy.

He used Flicker Free to deflicker the videos, Premiere Pro and After Effects for editing, and DaVinci Resolve for color correction. It’s a great example of how easily Flicker Free fits into pretty much any workflow and produces great results.

 

Italy Time Lapse from Stephen Smith on Vimeo.

 

Since he was traveling with his wife, the shoots gave her a chance to explore the areas more thoroughly while he worked. This is not always the case. Significant others are not always thrilled to be stuck in one place for an hour while you stand around watching your camera take pictures!

Although, he said it did give him an opportunity to watch how aggressive the street vendors were and to meet other folks.

We’re happy that he gave us a heads-up about the video, which you can watch above or on Vimeo. Of course, we’re thrilled he used Flicker Free on it as well. :-)

 

Why does Final Cut Pro handle Flash Video f4v files better than Premiere Pro?

First off, if you want Flash’s .f4v files to work in FCP X, you need to change the extension to .mp4. So myfile.f4v becomes myfile.mp4
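If you have a whole folder of saved streams, the rename is trivial to batch. A two-line Python sketch (the folder name is hypothetical):

```python
from pathlib import Path

# The .f4v container is MP4-compatible, so only the extension needs to change.
for f4v in Path('streams').glob('*.f4v'):
    f4v.rename(f4v.with_suffix('.mp4'))
```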

I’ve been doing some streaming lately with Ustream. It’s a decent platform, but I’m not particularly in love with it (and it’s expensive). Anyways, if you save the stream to a file, it saves it as a Flash Video file (.f4v). The file itself plays back fine. However, if you pull it into Premiere Pro for editing, adding graphics, etc., PPro can’t keep the audio in sync. Adobe… WTF? It’s your file format!

Final Cut Pro X does not have this problem. As mentioned, you need to change the file extension to .mp4, but otherwise it handles it beautifully.

Even if you pull the renamed file into Premiere, it still loses audio sync. So it’s just a complete fail on Adobe’s part. FCP does a terrific job of handling this, even on long programs like this 90-minute panel discussion.

Here’s the Final Cut Pro file, saved out to a QuickTime file and then uploaded to YouTube:

Here’s the Premiere Pro video, also saved out to QuickTime and uploaded. You’ll notice it starts out ok but then quickly loses audio sync. This is typical in my tests: the longer the video, the more out of sync it gets. In this 30-second example it’s not too far out of sync, but it’s there.