Keyboard Shortcuts are a huge part of Transcriptive and can make working in it much faster/easier. These are for Transcriptive 2.x/3.x. If you’re still using 1.x, please check the manual.
Ctrl + Space: Play / Stop
Undo: Ctrl + Z (Mac and PC)
Redo: Ctrl + Shift + Z
MAC USERS: Mac OS assigns Cmd+Z to the application (Premiere) and we can’t change that.
Ctrl + Left Arrow – Previous Word | Ctrl + Right Arrow – Next Word
Ctrl + Shift + Up OR [Delete]: Merge Line/paragraph with line above.
Ctrl + Shift + Down OR [Enter]: Split Line/paragraph into two lines.
(These behave slightly differently. ‘Control+Shift+up’ will merge the two lines together no matter where the cursor is. If you’re trying to combine a bunch of lines together, this is very fast. [Delete] uses the cursor position, which has to be at the beginning of the line to merge the lines together.)
Up or Down Arrow: Change Capitalization
Ctrl + Backspace: Delete Word | Ctrl + Delete: Delete Word
Control + i: Set In Point in Source panel
Control + o: Set Out Point in Source panel
Control + , (comma): Insert video segment into active sequence (this does the same thing as , (comma) in the Source panel)
This is a quick blog post showing you how to use the free Transcriptive trial version to convert any SRT caption file into a text file without timecode or line numbers (which SRTs have). You can do this on Transcriptive.com or if you have Premiere, you can use Transcriptive for Premiere Pro.
This can occur because you have a caption file (SRT or VTT) but don’t have access to the original transcript. SRT files tend to look like this:
00:00:02,299 --> 00:00:09,100
The quick brown fox
00:00:09,100 --> 00:00:17,200
hit the gas pedal and
And you might want normal human readable text so someone can read the dialog, without the line numbers and timecode. So this post will show you how to do that with Transcriptive for free!
We are, of course, in the business of selling software. So we’d prefer you bought Transcriptive BUT if you’re just looking to convert an SRT (or any caption file) to a text file, the free trial does that well and you’re welcome to use it. (btw, we also have some free plugins for After Effects, Premiere, FCP, and Resolve HERE. We like selling stuff, but we also like making fun or useful free plugins)
Getting The Free Trial License
As mentioned, this works for the Premiere panel or Transcriptive.com, but I’ll be using screenshots from the panel. So if you’re using Transcriptive.com it may look a little bit different.
You do need to create a Transcriptive account, which is free. When the panel first pops up, click the Trial button to start the registration process:
You then need to create your account, if you don’t have one. (If you’re using Transcriptive.com, this will look different. You’ll need to manually select the ‘free’ account option.)
Importing the SRT
Once you register the free trial license, you’ll need to import the SRT. If you’re on Transcriptive.com, you’ll need to upload something (could be 10sec of black video, doesn’t matter what, but there has to be some media). If you’re in Premiere, you’ll need to create a Sequence first, make sure Clip Mode is Off (see below) and then you can click IMPORT.
Once you click Import, you can select SRT from the dropdown. You’ll need to select the SRT file using the file browser (click the circled area below). Then click the Import button at the bottom.
You can ignore all the other options in the SRT Import Window. Since you’re going to be converting this to a plain text file without timecode, none of the other stuff matters.
After clicking Import, the Transcriptive panel will look something like this, with the text from the SRT file along with all the timecode, speakers, etc.:
Exporting The Plain Text File
Alright… so how do we extract just the text? Easy! Click the Export button in the lower-left corner. In the dialog that gets displayed, select Plain Text:
The important thing here is to turn OFF ‘Display Timecode’ and ‘Include Speakers’. This will strip out any extra data that’s in the SRT and leave you with just the text. (After you hit the Export button)
Ok, well, since caption files tend to have lines that are 32 characters long, you might have a text file that looks like this:
The quick brown fox
hit the gas pedal and
If you want that to look normal, you’ll need to bring it into Word or something and replace the Paragraphs with a Space like this:
And that will give you:
The quick brown fox hit the gas pedal and
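If you'd rather script the whole conversion and skip the Word find-and-replace step, here's a rough Python sketch of what's happening under the hood: strip the cue numbers and timecodes, then join the short caption lines with spaces. (This is just an illustration of the process, not part of Transcriptive itself.)

```python
import re

def srt_to_text(srt_contents):
    """Strip cue numbers and timecodes from SRT text, returning plain prose."""
    kept = []
    for line in srt_contents.splitlines():
        line = line.strip()
        # Skip blank lines and bare cue numbers (e.g. "1", "2", ...)
        if not line or line.isdigit():
            continue
        # Skip timecode lines like "00:00:02,299 --> 00:00:09,100"
        if re.match(r"\d{2}:\d{2}:\d{2},\d{3}\s*-->", line):
            continue
        kept.append(line)
    # Join the short caption lines into readable text
    return " ".join(kept)

srt = """1
00:00:02,299 --> 00:00:09,100
The quick brown fox

2
00:00:09,100 --> 00:00:17,200
hit the gas pedal and
"""
print(srt_to_text(srt))  # The quick brown fox hit the gas pedal and
```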
And now you have human readable text from an SRT file! A few steps, but pretty easy. Obviously there are lots of other things you can do with SRTs in Transcriptive, but converting an SRT to a plain text file is one that can be done with the free trial. As mentioned, this works with VTT files as well.
So grab the free trial of Transcriptive here and you can do it yourself! You can also request an unrestricted trial by emailing firstname.lastname@example.org. While this SRT to Plain Text functionality works fine, there are some other limitations if you’re testing out the plugins for transcripts or editing the text.
We occasionally get questions from customers asking why we charge .04/min ($2.40/hr) for transcription (if you pre-pay), when some competitors charge .25/min or even .50/min. Is it lower accuracy? Are you selling our data?
No and no. Ok, but why?
Transcriptive and PowerSearch work best when all your media has transcripts attached to it. Our goal is to make Transcriptive as useful as possible. We hope the less you have to think about the cost of the transcripts, the more media you’ll transcribe… resulting in making Transcriptive and PowerSearch that much more powerful.
The Transcriptive-AI service is equal to, or better than, what other services are using. We’re not tied to one A.I. and we’re constantly evaluating the different A.I. services. We use whatever we think is currently state-of-the-art. Since we do such a high volume, we get good pricing from all the services, so it doesn’t really matter which one we use.
Do we make a ton of money on transcribing? No.
The services that charge .25/min (or whatever) are probably making a fair amount of money on transcribing. We’re all paying about .02/min or less. Give or take, that’s the wholesale/volume price.
If you’re getting your transcripts for free… those transcripts are probably being used for training, especially if the service is keeping track of the edits you make (e.g. YouTube, Otter, etc.). Transcriptive is not sending your edits back to the A.I. service. That’s the important bit if you’re going to train the A.I. Without the corrected version, the A.I. doesn’t know what it got wrong and can’t learn from it.
So, for us, it all comes down to making Transcriptive.com, the Transcriptive Premiere Pro panel, and PowerSearch as useful as possible. To do so, we want the most accurate transcripts and we want them to be as low cost as possible. We know y’all have a LOT of footage. We’d rather reduce the barriers to you transcribing all of it.
We often get asked what the differences are between Transcriptive 2.0 and 1.0. So here is the full list of new features! As always there are a lot of other bug fixes and behind the scenes changes that aren’t going to be apparent to our customers. So this is just a list of features you’ll encounter while using Transcriptive.
NEW FEATURES IN TRANSCRIPTIVE 2.0
Works with clips or sequences: You no longer have to have clips in sequences to get them transcribed. Clips can be transcribed and edited just by selecting them in the Project panel. This opens up many different workflows and is something the new caption system in Premiere can’t do. Watch the tutorial on transcribing clips in Premiere
A clip selected in the Project panel. Setting In/Out points in TS!
Editing with Text: Clip Mode enables you to search through clips to find sound bites. You can then set IN/OUT points in the transcript and insert them into your edit. This is a powerful way of compiling rough cuts without having to scrub through footage. Watch the Tutorial on editing video using a transcript!
Collaborate by Sharing/Send/receive to Transcriptive.com: Collaborate on creating a paper edit by sharing the transcript with your team and editor. Send transcripts or videos from Premiere to Transcriptive.com, letting a client, AE, or producer edit them in a web browser or add Comments or strike-through text. The transcript can then be sent back to the video editor in Premiere to continue working with it. Watch the tutorial on collaborating in Premiere using Transcriptive.com! There’s also this blog post on collaborative workflows.
Now includes PowerSearch for free! Transcriptive can only search one transcript at a time. With PowerSearch, you can search every clip and sequence in your project! It’s a search engine for Premiere. Search for text and get search results like Google. Click on a result and it jumps to exactly where the dialog is in that clip or sequence. Watch the tutorials on PowerSearch, the search engine for Premiere.
Reduced cost: By prepaying minutes you can get the cost down to .04/min! Why is it so inexpensive? Is it worse than the other services that charge .25 or .50/min? No! We’re just as good or better (don’t take my word, run your own comparisons). Transcriptive only works if you’ve transcribed your footage. By keeping the cost of minutes low, hopefully we make it an easy decision to transcribe all your footage and make Transcriptive as useful as possible!
Ability to add comments/notes at any point in the transcript. The new Comments feature lets you add a note to any line of dialog. Incredibly useful if you’re working with someone else and need to share information. It’s also great if you want to make notes for yourself as you’re going through footage.
Strikethrough text: Allows you to strikethrough text to indicate dialog that should be removed. Of course, you can just delete it but if you’re working with someone and you want them to see what you’ve flagged for deletion OR if you’re just unsure if you want to definitely delete it, strikethrough is an excellent way of identifying that text.
More ‘word processor’ like text editor: A.I. isn’t perfect, even though it’s pretty close in many cases (usually 96-99% accurate with good audio). However, you can correct any mistake you find with the new text editor! It’s quick and easy to use because it works just like a word processor built into Premiere. Watch the tutorial on editing text in Transcriptive!
Align English transcripts for free: If you already have a script, you can sync the text to your audio track at no cost. You’ll get all the benefits of the A.I. (per word timing, searchability, etc) without the cost. It’s a free way of making use of transcripts you already have. Watch the tutorial on syncing transcripts in Premiere!
Adjust timing for words: If you’re editing text and correcting any errors the A.I. might have made it can result in the new words having timecode that doesn’t quite sync with the spoken dialog. This new feature lets you adjust the timecode for any word so it’s precisely aligned with the spoken word.
Ability to save the transcript to any audio or video file: In TS 1.0 the transcript always got saved to the video file. Now you can save it to any file. This is very helpful if you’ve recorded the audio separately and want the transcript linked to that file.
More options for exporting markers: You can set the duration of markers and control what text appears in them.
Profanity filter: **** out words that might be a bit much for tender ears.
More speaker management options: Getting speaker names correct can be critical. There are now more options to control how this feature works.
Additional languages: Transcriptive now supports over 30 languages!
Checks for duplicate transcripts: Reduces the likelihood a clip/sequence will get transcribed twice unnecessarily. Sometimes users will accidentally transcribe the same clip twice. This helps prevent that and save you money!
Lock to prevent editing: This allows other people to view the transcript in Premiere or on Transcriptive.com and prevent them from accidentally making changes.
Sync Transcript to Sequence: Often you’ll get the transcript before you make any edits. As you start cutting and moving things around, the transcript will no longer match the edit. This is a one-click way of regenerating the transcript to match the edit.
Streamlined payment/account workflow: Access multiple speech engines with one account. Choose the one most accurate for your footage.
We get a fair number of questions from Transcriptive users that are concerned the A.I. is going to use their data for training.
First off, in the Transcriptive preferences, if you select ‘Delete transcription jobs from server’ your data is deleted immediately. This will delete everything from the A.I. service’s servers and from the Digital Anarchy servers. So that’s an easy way of making sure your data isn’t kept around and used for anything.
However, generally speaking, the A.I. services don’t get more accurate with user submitted data. Partially because they aren’t getting the ‘positive’ or corrected transcript.
When you edit your transcript we aren’t sending the corrections back to the A.I. (some services are doing this… e.g. if you correct YouTube’s captions, you’re training their A.I.)
So the audio by itself isn’t that useful. What the A.I. needs in order to learn is the audio file, the original transcript AND the corrected transcript. So even if you don’t have the preference checked, it’s unlikely your audio file will be used for training.
This is great if you’re concerned about security BUT it’s less great if you really WANT the A.I. to learn. For example, I don’t know how many videos I’ve submitted over the last 3 years saying ‘Digital Anarchy’. And still to this day I get: Dugal Accusatorial (seriously), Digital Ariki, and other weird stuff. A.I. is great when it works, but sometimes… it definitely does not work. And people want to put this into self-driving cars? Crazy talk right there.
If you want to help the A.I. out, you can use the Speech-to-Text Glossary (click the link for a tutorial). This still won’t train the A.I., but if the A.I. is uncertain about a word, it’ll help it select the right one.
How does the glossary work? The A.I. analyzes a word sound and then comes up with possible words for that sound. Each word gets a ‘confidence score’. The one with the highest score is the one you see in your transcript. In the case above, ‘Ariki’ might have had a confidence of .6 (on a scale of 0 to 1, so .6 is pretty low) and ‘Anarchy’ might have been .53. So my transcript showed Ariki. But if I’d put Anarchy into the Glossary, then the A.I. would have seen the low confidence score for Ariki and checked if the alternatives matched any glossary terms.
So the Glossary can be very useful with proper names and the like.
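Conceptually, the glossary check works something like the sketch below. (This is a simplified illustration only; the real service's thresholds and internals aren't public, and the function and threshold here are made up for the example.)

```python
def pick_word(candidates, glossary, threshold=0.7):
    """candidates: list of (word, confidence) pairs, best guess first.
    If the top guess is low-confidence, prefer a candidate in the glossary."""
    best_word, best_conf = candidates[0]
    # High confidence: keep the A.I.'s original pick
    if best_conf >= threshold:
        return best_word
    # Low confidence: see if any alternative matches a glossary term
    for word, _conf in candidates:
        if word in glossary:
            return word
    return best_word

# 'Ariki' scored .6, with 'Anarchy' as a .53 alternative
print(pick_word([("Ariki", 0.6), ("Anarchy", 0.53)], glossary={"Anarchy"}))  # Anarchy
print(pick_word([("Ariki", 0.6), ("Anarchy", 0.53)], glossary=set()))        # Ariki
```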
But, as mentioned, nothing you do in Transcriptive is training the A.I. The only thing we’re doing with your data is storing it and we’re not even doing that if you tell us not to.
It’s possible that we will add the option in the future to submit training data to help train the A.I. But that’ll be a specific feature and you’ll need to intentionally upload that data.
We’ve been working on Transcriptive for like 3 years now. In that time, the A.I. has heard my voice saying ‘Digital Anarchy’ umpteen million times. So, you would think it would easily get that right by now. As the below transcript from our SRT Importing tutorial shows… not so much. (Dugal Accusatorial? Seriously?)
ALSO, you would think that by now I would have a list of terms that I would copy/paste into Transcriptive’s Glossary field every time I get a transcript for a tutorial. The glossary helps the A.I. determine what ‘vocal sounds’ should be when it translates those sounds into words. Uh, yeah… not so much.
So… don’t be like AnarchyJim. If you have words you know the A.I. probably won’t get: company names, industry jargon, difficult proper names (cool blog post on applying player names to an MLB video here), etc., then use Transcriptive’s glossary (in the Transcribe dialog). It does work. (and somebody should mention that to the guy that designed the product. Oy.)
Overall the A.I. is really accurate and does usually get ‘Digital Anarchy’ correct. So I get lazy about using the glossary. It is a really useful thing…
(The above video covers all this as well, but for those who’d rather read, than watch a video… here ya go!)
Getting an SRT file into Premiere is easy!
But then getting it to display correctly is not so easy.
This is mostly fixed in the new caption system that Premiere 2021 has. We’ll go over that in a minute, but first let’s talk about how it works in Premiere Pro 2020. (if you only care about 2021, then jump ahead)
Premiere Pro 2020 SRT Import
1: Like you would import any other file, go to File>Import or Command/Control+I.
2: Select the SRT file you want.
3: It’ll appear in your Project panel.
4: You can drag it onto your timeline as you would any other file.
Now the fun starts.
5: From the Tools menu in the Program panel (the wrench icon), make sure Closed Captions are enabled.
5b: Go into Settings and select Open Captions
6: The captions should now display in your Program panel.
7: In many cases, SRT files start off being displayed very small.
Those bigger captions sure look good!
8: USUALLY the easiest way to fix this is to go to the Caption panel and change the point size. You do this by right-clicking on any caption and choosing ‘Select All’. (This is the only way you can select all the captions.)
8b: With all the captions selected, you can then change the Size for all of them. (or change any other attribute for that matter)
9: The other problem that occurs is that Premiere will bring in an SRT file with a 720×486 resolution. Not helpful for a 1080p project. In the lower left corner of the Caption panel you’ll see Import Settings. Click that to make sure it matches your Project settings.
Other Fun Tricks: SRTs with Non-Zero Start Times
If your video has an opening without any dialog, your SRT file will usually start with a timecode other than Zero. However, Premiere doesn’t recognize SRTs with non-zero start times. It assumes ALL SRT files start at zero. If yours does not, as in the example below, you will have to move it to match the start of the dialog.
You don’t have to do this with SRTs from Transcriptive. Since we know you’re likely using it in Premiere, we add some padding to the beginning to import it correctly.
If your captions start at 05:00, Premiere puts them at 00:00
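Under the hood, that "padding" just means offsetting every timecode in the file. If you want to do the same thing yourself to an SRT from another tool, here's a rough Python sketch (assuming the standard HH:MM:SS,mmm SRT timecode format; this is a workaround illustration, not something Transcriptive requires):

```python
import re

# Matches SRT timecodes like 00:00:05,000
TC = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def shift_srt(srt_text, offset_seconds):
    """Add offset_seconds to every HH:MM:SS,mmm timecode in an SRT."""
    def bump(m):
        h, mnt, s, ms = (int(g) for g in m.groups())
        total = h * 3600 + mnt * 60 + s + ms / 1000 + offset_seconds
        ms_out = round((total - int(total)) * 1000)
        h_out, rem = divmod(int(total), 3600)
        m_out, s_out = divmod(rem, 60)
        return f"{h_out:02d}:{m_out:02d}:{s_out:02d},{ms_out:03d}"
    return TC.sub(bump, srt_text)

print(shift_srt("00:00:05,000 --> 00:00:07,500", 2))
# 00:00:07,000 --> 00:00:09,500
```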
Importing an SRT file in Premiere 2021: The New Caption System!
(as of this writing, I’m using the beta. You can download the beta by going to the Beta section of Creative Cloud.)
0: If you’re using the beta, you need to enable this feature from the Beta menu. Click on it and select ‘Enable New Captions’.
1: Like you would import any other file, go to File>Import or Command/Control+I.
2: Select the SRT file you want.
3: It’ll appear in your Project panel.
4: You can drag it onto your timeline as you would any other file… BUT
This is where things get different!
4b: Premiere 2021 adds it to a new caption track above the normal timeline. You do need to tell Premiere you want to treat them as Open Captions (or you can select a different option as well)
4c: And Lo! It comes in properly sized! Very exciting.
5: There is no longer a Caption panel. If you want to edit the text of the captions, you need to select the new Text panel (Windows>Text). There you can edit the text, add new captions, etc.
6: To change the look/style of the captions you now need to use the Essential Graphics panel. There you can change the font, size, and other attributes.
Overall it’s a much better captions workflow. So far, from what I’ve seen it works pretty well. But I haven’t used it much. As of this writing it’s still in beta and regardless there may be some quirks that show up with heavier use. But for now it looks quite good.
As you’ve probably heard, Adobe announced a new caption system a few weeks ago. We’ve been fielding a bunch of questions about it and how it affects Transcriptive, so I figured I’d let y’all know what our take on it is, given what we know.
Overall it seems like a great improvement to how Premiere handles captions. Adobe is pretty focused on captions. So that’s mainly what the new system is designed to deal with and it looks impressive. While there is some overlap with Transcriptive in regards to editing the transcript/captions, as far as we can tell there isn’t really anything to help you edit video. And there’s a lot of functionality in Transcriptive that’s designed to help you do that. As such, we’re focused on enhancing those features and adding to that part of the product.
It also looks like it’s only going to work with sequences. It _seems_ that when they add the speech-to-text (it’s not available in the beta yet), it’s mostly designed for generating captions for the final edit.
However, being able to transcribe clips and use the transcript to search the clip in the Source panel is one powerful feature Transcriptive gives you. You can even set in/out points in Transcriptive and then drop that cut into your main sequence.
The ability to send the transcript to a client/AE that doesn’t have Premiere and let them edit it in a web browser is another.
With Transcriptive’s Conform feature, you can take the edited transcript and use it as a Paper Cut. Conform will build a sequence with all the edits.
Along with a bunch of other smaller features, like the ability to add Comments to the transcript.
So… we feel there will still be a lot of value even once the caption system is released. If we didn’t… we would’ve stopped development on it. But we’re still adding features to it… v2.5.1, which lets you add Comments to the transcript, is coming out this week sometime (Dec. 10th, give or take).
One thing we do know is that the caption system will only import/export caption files (i.e. SRT, SCC, etc). From our perspective, this is not a smart design. It’s one of my annoyances with the current caption system. Transcriptive users have to export a caption file and re-import that into Premiere. It’s not a good workflow, especially when we should just be able to save captions directly to your timeline. Adobe is telling us it’s going to be the same kludgy workflow.
So if that doesn’t sound great to you, you can go to the Adobe site and leave a comment asking for JSON import/export. (URL: https://tinyurl.com/y4hofqoa) Perhaps if they hear from enough people, they’ll add that.
Why would that help us (and you)? When we get a transcript back from the A.I., it’s a rich-data text file (JSON format). It has a lot of information about the words in it. Caption formats are data poor. It’s kind of like comparing a JPEG to a RAW file. You usually lose a lot of information when you save as a caption format (as you do with a JPEG).
It’ll make it much easier for us and other developers to move data back and forth between the caption system and other tools. For example: If you want someone to make corrections to the Adobe transcript outside of Premiere (on Transcriptive.com for example :-), it’s easier to keep the per-word timecode and metadata with a JSON file.
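To make the JPEG/RAW comparison concrete, here's roughly what per-word data looks like in a JSON transcript versus what survives in an SRT caption. (The field names below are purely illustrative; this is not the actual Transcriptive or Adobe schema.)

```python
# Illustrative per-word transcript data (field names are made up for this example)
words = [
    {"text": "The",   "start": 2.299, "end": 2.51, "confidence": 0.98, "speaker": "S1"},
    {"text": "quick", "start": 2.51,  "end": 2.84, "confidence": 0.95, "speaker": "S1"},
    {"text": "brown", "start": 2.84,  "end": 3.10, "confidence": 0.97, "speaker": "S1"},
    {"text": "fox",   "start": 3.10,  "end": 3.42, "confidence": 0.99, "speaker": "S1"},
]

# Collapsing to an SRT caption keeps only the text and one time range for the
# whole caption -- per-word timing, confidence, and speaker data are all lost.
caption_text = " ".join(w["text"] for w in words)
caption_range = (words[0]["start"], words[-1]["end"])
print(caption_text, caption_range)  # The quick brown fox (2.299, 3.42)
```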
Historically Adobe has had products that were very open. It’s why they have such a robust plugin/third-party ecosystem. So we’re hopeful they continue that by making it easy to access high resolution data from within the caption system or anywhere else data/metadata is being generated.
It’s great Adobe is adding a better caption workflow and speech-to-text. The main reason Transcriptive isn’t more caption-centric is we knew Adobe was going to upgrade that sooner or later. But the lack of easy import/export is a bummer. It really doesn’t help us (or any developer) extend the caption system or help Premiere users that want to use another product in conjunction with the system. As mentioned, it’s still beta, so we’ll see what happens. Hopefully they make it a bit more flexible and open.
The Glossary feature in Transcriptive is one way of increasing the accuracy of the transcripts generated by artificial intelligence services. The A.I. services can struggle with names of people or companies, and it’s a bit of a mixed bag with technical terms or industry jargon. If you have a video with names/words you think the A.I. will have a tough time with, you can enter them into the Glossary field to help the A.I. along.
For example, I grabbed this video of MLB’s top 30 draft picks in 2018:
Obviously there are a lot of names that need to be accurate, and since we know what they are, we can enter them into the Glossary.
As the A.I. creates the transcript, words that sound similar to the names will usually be replaced with the Glossary terms. As always, the A.I. analyzes the sentence structure and makes a call on whether the word it initially came up with fits better in the sentence. So if the Glossary term is ‘Bohm’ and the sentence is ‘I was using a boom microphone’, it probably won’t replace the word. However, if the sentence is ‘The pick is Alex boom’, it will replace it, since the word ‘boom’ makes no sense in that sentence.
Here’s a short sample to give you an idea of the difference. Again, all we did was add in the last names to the Glossary (Mize, Bart, Bohm):
With the Glossary:
The Detroit Tigers select Casey Mize, a right handed pitcher. From Auburn University in Auburn, Alabama. With the second selection of the 2018 MLB draft, the San Francisco Giants select Joey Bart a catcher. A catcher from Georgia Tech in Atlanta, Georgia, with the third selection of a 2018 MLB draft. The Philadelphia Phillies select Alec Bohm, third baseman
Without the Glossary:
The Detroit Tigers select Casey Mys, a right handed pitcher. From Auburn University in Auburn, Alabama. With the second selection of the 2018 MLB draft, the San Francisco Giants select Joey Bahrke, a catcher. A catcher from Georgia Tech in Atlanta, Georgia, with the third selection of a 2018 MLB draft. The Philadelphia Phillies select Alec Bomb. A third baseman
As you can see it corrected the names it should have. If you have names or words that are repeated often in your video, the Glossary can really save you a lot of time fixing the transcript after you get it back. It can really improve the accuracy, so I recommend testing it out for yourself!
It’s also worth trying both Speechmatics and Transcriptive-A.I. Both are improved by the glossary; however, Speechmatics seems to be a bit better with glossary words. Since Transcriptive-A.I. has a bit better accuracy normally, you’ll have to run a test or two to see which will work best for your video footage.
If you have any questions, feel free to hit us up at email@example.com!
Update: For Premiere 14.3.2 and above, New World is working pretty well at this point. Adobe has fixed various bugs with it and things are working as they should.
However, we’re still recommending people keep it off if they can. On long transcripts (over 90 minutes or so), New World usually does cause performance problems. But if having it off causes any problems, you can turn it on and Transcriptive should work fine. It just might be a little slow on long transcripts.
If you’re using Transcriptive v1.5.2, please see this blog post for instructions on turning it off manually.
As with most new systems, Adobe fixes a bunch of stuff and breaks a few new things. So we’re hoping over the next couple months they work all the kinks out and it all sorts itself out.
As always, we will keep you updated.
Fwiw, here’s what you’ll see in Transcriptive if you open it with New World turned on:
That message can only be closed by restarting Premiere. If New World is on, Transcriptive isn’t usable. So you _must_ restart.
What we’re doing in the background is setting a flag to off. You can see this by pulling up the Debug Console in Premiere. Use Command+F12 (mac) or Control+F12 (windows) to bring up the console and choose Debug Database from the hamburger menu.
You’ll see this:
If you want to turn it back on at some point, this is where you’ll find it. However, as mentioned, there’s no disadvantage to having it off and if you have it on, Transcriptive won’t run.
If you have any questions, please reach out to us at firstname.lastname@example.org.
If you’ve been using Speechmatics credits to transcribe in Transcriptive, our transcription plugin for Premiere Pro, then you’ve noticed that accessing your credits in Transcriptive 2.0.2 and later is no longer an option. Speechmatics is discontinuing the API that we used to support their service in Transcriptive, which means your Speechmatics credentials can no longer be validated inside of the Transcriptive panel.
We know a lot of users still have Speechmatics credits and have been working closely with Speechmatics so those credits can be available in your Transcriptive account as soon as possible. Hopefully in the next week or two.
In the meantime, there are a couple ways users can still transcribe with Speechmatics credits. 1) Use an older version of Transcriptive like v1.5.2 or v2.0.1. Those should still work for a bit longer but use the older, less accurate API. Or 2) Upload directly on their website and export the transcript as a JSON file to be imported into Transcriptive. It is a fairly simple process and a great temporary solution. Here’s a step-by-step guide:
1. Head to the Speechmatics website – To use your Speechmatics credits, head to www.speechmatics.com and login to your account. Under “What do you want to do?”, choose “Transcription” and select the language of your file.
2. Upload your media file to the Speechmatics website – Speechmatics will give you the option to drag and drop or select your media from a folder on your computer. Choose whatever option works best for you and then click on “Upload”. After the file is uploaded, the transcription will start automatically and you can check the status of the transcription on your “Jobs” list.
3. Download a .JSON file – After the transcription is finished (refresh the page if the status doesn’t change automatically!), click on the Actions icon to access the transcript. You will then have the option to export the transcript as a .JSON file.
4. Import the .JSON file into any version of Transcriptive – Open your Transcriptive panel in Premiere. If you are using Transcriptive 2.0, be sure Clip Mode is turned on. Select the clip you have just transcribed on Speechmatics and click on “Import”. If you are using an older version of Transcriptive, drop the clip into a sequence before choosing “Import”.
You will then have the option to “Choose an Importer”. Select the JSON option and import the Speechmatics file saved on your computer. The transcript will be synced with the clip automatically at no additional charge.
One important thing to know: although Transcriptive v1.x still has Speechmatics as an option and it still works, we’d recommend following the steps above instead. The option available in those versions of the panel uses an older version of their API, which is less accurate than the new one. So transcribe on the Speechmatics website if you want to use your Speechmatics credits now rather than waiting for them to be transferred.
However, we should have the transfer sorted out very soon, so keep an eye open for an email about it if you have Speechmatics credits. If the email address you use for Speechmatics is different than the one you use for Transcriptive.com, please email email@example.com. We want to make sure we get things synced up so the credits go to the right place!
Or you must turn ‘NewWorld’ off (instructions are below)
Or keep using Premiere Pro 14.0.1
If you’re using Transcriptive 1.x, it’s still not exactly a problem but does require some hoop jumping. (and eventually ‘Old World’ will not be supported in Premiere and you’ll be forced to upgrade TS. That’s a ways off, though.)
Turning Off New World
Here are the steps to turn off ‘NewWorld’ and have Premiere revert back to using ‘Old World’:
Press Control + F12 or Command + F12. This will bring up Premiere’s Console.
From the Hamburger menu (three lines next to the word ‘Console’), select Debug Database View
Scroll down to ScriptLayerPPro.EnableNewWorld and uncheck the box (setting it to False).
Restart Premiere Pro
When Premiere restarts, NewWorld will be off and Transcriptive 1.x should work normally.
So far there are no new major bugs and relatively few minor ones that we’re aware of when using Transcriptive 2.0.3 with Premiere 14.0.2 (with NewWorld=On). There are also a LOT of other improvements in 2.0.3 that have nothing to do with this.
Adobe actually gave us a pretty good heads up on this. Of course, in true Anarchist fashion, we tested it early on (and things were fine) and then we tested it last week and things were not fine. So it’s been an interesting week and a half scrambling to make sure everything was working by the time Adobe sent 14.0.2 out into the world.
So everything seems to be working well at this point. And if things aren’t, you now know how to turn off all the newfangled stuff until we get our shit together! (But we do actually think things are in good shape.)
When cutting together a documentary (or pretty much anything, to be honest), you don’t usually have just a single clip. Usually there are different clips, and different portions of those clips, here, there and everywhere.
Our transcription plugin, Transcriptive, is pretty smart about handling all this. So in this blog post we’ll explain what happens if you have total chaos on your timeline with cuts and clips scattered about willy nilly.
If you have something like this:
Transcriptive will only transcribe the portions of the clips necessary. Even if the clips are out of order. For example, the ‘Drinks1920’ clip at the beginning might be a cut from the end of the actual clip (let’s say 1:30:00 to 1:50:00) and the Drinks cut at the end might be from the beginning (e.g. 00:10:00 to 00:25:00).
If you transcribe the above timeline, only 10:00-25:00 and 1:30:00-1:50:00 of Drinks1920.mov will be transcribed.
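Conceptually, what Transcriptive is doing here is transcribing only the union of the source-time ranges that actually appear on the timeline. Here’s a small interval-merging sketch of that idea — a hypothetical illustration, not Transcriptive’s actual code — with ranges expressed in seconds of source time:

```python
def merge_ranges(ranges):
    """Merge overlapping or touching (start, end) source-time ranges,
    so each portion of a clip only needs to be transcribed once."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous range: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(r) for r in merged]

# Two cuts from Drinks1920.mov (00:10:00-00:25:00 and 1:30:00-1:50:00)
# plus a third cut that overlaps the first one:
merge_ranges([(600, 1500), (5400, 6600), (1400, 1600)])
# -> [(600, 1600), (5400, 6600)]
```

The overlapping cuts collapse into a single range, which is why reusing the same stretch of a clip multiple times on the timeline doesn’t cost you extra transcription minutes.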
If you Export>Speech Analysis, select the Drinks clip, and then look in the Metadata panel, you’ll see the Speech Analysis for the Drinks clip will have the transcript for those portions of the clip. If you drop those segments of the Drinks clip into any other project, the transcript comes along with it!
The downside to _only_ transcribing the portion of the clip on the timeline is, of course, that the entire clip doesn’t get transcribed. Not a problem for this project and this timeline, but if you want to use the Drinks clip in a different project, the segment you choose to use (say 00:30:00 to 00:50:00) may not have been transcribed yet.
However, if you drop the clip into another sequence, transcribe a time span that wasn’t previously transcribed, and then Export>Speech Analysis, that new transcription will be added to the clip’s metadata. It wasn’t always this way, so make sure you’re using Transcriptive v1.5.2. If you’re on a previous version of Transcriptive and you Export>Speech Analysis to a clip that already has part of a transcript in SA, it’ll overwrite any transcripts already there.
So feel free to order your clips any way you want. Transcriptive will make sure all the transcript data gets put into the right places. AND… make sure to Export>Speech Analysis. This will ensure that the metadata is saved with the clip, not just your project.
Premiere Pro CS6 has the ability to turn speech into text and put it into the Speech Analysis metadata. You can still use it in any version of Premiere Pro.
In Premiere CS6 you can right+click on a piece of footage and select ‘Analyze Content’. This would turn all the speech into text. Adobe removed it in later versions of Creative Cloud but all that infrastructure is still in Premiere Pro CC 2018 (and other versions) and this post will tell you how to make use of it with, and without, Transcriptive, our plugin for transcribing video.
First off, if you have Creative Cloud, you still have access to CS6 (or CC). You can download it and use that to turn all your speech to text. This will get saved with your file and when you import it into Premiere 2018, all the text will be in the Speech Analysis field of the Metadata panel. This is very handy as you can use the text with the Source panel to set in and out points and edit with text.
To get older versions of Premiere, go to the Creative Cloud app and find Premiere Pro. Click the menu button (or down arrow) and select ‘Other Versions’. You can install all the way back to CS6.
Once CS6 is installed, you can import the footage, right+click and select ‘Analyze Content’. It takes some time to do this, but once it’s done, you’ll have all the speech turned into text in Speech Analysis. Import the clips into the version of Premiere you’re using and all that text will show up in the Metadata panel. Voila! It’s not an awesome interface for editing the text (and it needs a lot of editing as it’s not very accurate, which is why Adobe removed it) but it’s there.
Transcriptive can also use that data. If you’re using Transcriptive and drop the footage into a sequence, it’ll pull the text from the Speech Analysis field.
As mentioned, the CS6 speech-to-text isn’t very accurate, which you can see below. So it’s usually worth it to pay a few cents a minute to get a good A.I. transcript or $1.25/min to get human transcripts (which Transcriptive can import).
However, if you want free, then the CS6 trick is one way of doing it. Or you could use YouTube and import their captions into Transcriptive. It’s free, easy and we have a great tutorial that shows you how to get YouTube captions into Premiere!
1) Practically every company exhibiting was talking about A.I.-something.
2) VR seemed to have disappeared from vendor booths.
The last couple years at NAB, VR was everywhere. The Dell booth had a VR simulator, Intel had a VR simulator, booths had Oculuses galore and you could walk away with an armful of cardboard glasses… this year, not so much. Was it there? Sure, but it was hardly to be seen in booths. It felt like the year 3D died. There was a pavilion, there were sessions, but nobody on the show floor was making a big deal about it.
In contrast, it seemed like every vendor was trying to attach A.I. to their name, whether they had an A.I. product or not. Not to mention, Google, Amazon, Microsoft, IBM, Speechmatics and every other big vendor of A.I. cloud services having large booths touting how their A.I. was going to change video production forever.
I’ve talked before about the limitations of A.I. and I think a lot of what was talked about at NAB was really over-promising what A.I. can do. We spent most of the six months after releasing Transcriptive 1.0 developing non-A.I. features to help make the A.I. portion of the product more useful. The release we’re announcing today and the next release, coming later this month, will focus on getting around A.I. transcripts completely by importing human transcripts.
There’s a lot of value in A.I. It’s an important part of Transcriptive and for a lot of use cases it’s awesome. There are just also a lot of limitations. It’s pretty common that you run into the A.I. equivalent of the Uncanny Valley (a CG character that looks *almost* human but ends up looking unnatural and creepy), where A.I. gets you 95% of the way there but it’s more work than it’s worth to get the final 5%. It’s better to just not use it.
You just have to understand when that 95% makes your life dramatically easier and when it’s like running into a brick wall. Part of my goal, both as a product designer and just talking about it, is to help folks understand where that line in the A.I. sand is.
I also don’t buy into this idea that A.I. is on an exponential curve and it’s just going to get endlessly better, obeying Moore’s law like the speed of processors.
When we first launched Transcriptive, we felt it would replace transcriptionists. We’ve been disabused of that notion. ;-) The reality is that A.I. is making transcriptionists more efficient. Just as we’ve found Transcriptive to be making video editors more efficient. We had a lot of folks coming up to us at NAB this year telling us exactly that. (It was really nice to hear. :-)
However, much of the effectiveness of Transcriptive comes more from the tools that we’ve built around the A.I. portion of the product. Those tools can work with transcripts and metadata regardless of whether they’re A.I. or human generated. So while we’re going to continue to improve what you can do with A.I., we’re also supporting other workflows.
Over the next couple months you’re going to see a lot of announcements about Transcriptive. Our goal is to leverage the parts of A.I. that really work for video production by building tools and features that amplify those strengths, like PowerSearch, our new panel for searching all the metadata in your Premiere project, and by building bridges to other technology that works better in other areas, such as importing human-created transcripts.
Should be a fun couple months, stay tuned! btw… if you’re interested in joining the PowerSearch beta, just email us at firstname.lastname@example.org.
Addendum: Just to be clear, in one way A.I. is definitely NOT VR. It’s actually useful. A.I. has a lot of potential to really change video production, it’s just a bit over-hyped right now. We, like some other companies, are trying to find the best way to incorporate it into our products because once that is figured out, it’s likely to make editors much more efficient and eliminate some tasks that are total drudgery. OTOH, VR is a parlor trick that, other than some very niche uses, is going to go the way of 3D TV and won’t change anything.
Chief Executive Anarchist
Using Transcriptive with multicam sequences is not a smooth process and doesn’t really work. We’re working on a solution, but it’s tricky due to Premiere’s limitations.
However, while we sort that out, here’s a workaround that is pretty easy to implement. Here are the steps:
1- Take the clip with the best audio and drop it into its own sequence.
2- Transcribe that sequence with Transcriptive.
3- Now replace that clip with the multicam clip.
4- Voila! You have a multicam sequence with a transcript. Edit the transcript and clip as you normally would.
This is not a permanent solution and we hope to make it much more automatic to deal with Premiere’s multicam clips. In the meantime, this technique will let you get transcripts for multicam clips.
Thanks to Todd Drezner at Cohn Creative for suggesting this workaround.
Wherein Jim Tierney rants and opines about After Effects, Premiere Pro, Final Cut Pro, and other nonsense