Saturday, January 24, 2009

Important notes about DV

DV is weird in a number of ways.

DV is often 4:3 but stored with data aspect ratio of 720x480. 720x480 is *NOT* a ratio of 4:3. DV has non-square pixels and expects to be displayed at 640x480 which *IS* 4:3 ratio. I am unclear whether this is due to legacy technology, or whether there is actually extra horizontal data in DV that goes to waste unless output is displayed at > 640 columns.
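A quick sanity check of the numbers above (nothing DV-specific here, just arithmetic): if 720 stored columns are meant to fill the same width as 640 square pixels, each DV pixel must be 8/9 as wide as it is tall.

```python
# Arithmetic implied above: 720 stored columns displayed as 640 square pixels.
stored_w, stored_h = 720, 480    # DV frame size on disk ("data aspect ratio")
display_w, display_h = 640, 480  # intended 4:3 display size

par = display_w / stored_w       # width of one stored pixel, in square pixels
print(round(par, 3))             # 0.889, i.e. 8/9 - pixels narrower than square

assert display_w / display_h == 4 / 3   # display shape IS 4:3
assert stored_w / stored_h == 3 / 2     # stored shape is 3:2, not 4:3
```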

DV is interlaced. Interlacing is actually quite common - it was a necessary tactic to maximize motion and resolution in early video tech. However, handling interlacing is of utmost importance. FCP displays DV footage at the correct 4:3 ratio but will export a file that quicktime plays back 720 pixels wide, making everything short and fat. When converting you'll often have two options - whether the input is interlaced (and should be de-interlaced) and whether the output should be interlaced.

http://www.dvcreators.net/interlacing/ is a good introduction

NOTE: besides the various methods of interlacing, there are two fundamentally distinct cases - a case like analogue cameras where alternate fields were recorded at different times (60 samples become 60 fields in 30 frames) and cases where the source was progressive, so both fields in a frame are from the same moment in time - such as a 24p film source being interlaced for DVD. In the case of a film, it is possible to deinterlace and get back the original complete frames. In the case of 60i video, you can deinterlace to 60 frames at half height, which keeps the full time resolution. You can then stretch the height by 2 to get 60p - although you can't recover the other half of the lines, you aren't really throwing away data. The result may be a little soft due to scaling, but that's unavoidable. The most common method seems to be throwing away odds or evens, which converts 60i to 30p - losing time resolution, with the same picture quality as above (half height stretched to twice as tall). In the case of minimal motion or a fixed camera, motion adaptive deinterlacing can identify parts of the screen which are still and combine both fields in those areas, to achieve full vertical resolution. Moving areas can be stretched. This method should work for 60 or 30 frame rate output.
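The field bookkeeping above is easy to get confused by, so here is a toy sketch of the three operations described - split to half-height fields, stretch back to full height, and the throw-away-a-field method. Frames are just lists of rows; the names are mine, not FCP's.

```python
def split_fields(frame):
    """60i -> two half-height fields per frame (keeps full time resolution)."""
    return frame[0::2], frame[1::2]   # even-numbered rows, odd-numbered rows

def stretch(field):
    """Double the height by repeating each line - 'half height stretched to
    twice as tall'. A real converter would interpolate, hence the softness."""
    return [line for row in field for line in (row, row)]

def discard_deinterlace(frame):
    """The common method: keep one field, throw the other away. 60i -> 30p,
    losing half the time resolution and half the vertical detail."""
    even_field, _odd_field = split_fields(frame)
    return stretch(even_field)

frame = ["even0", "odd0", "even1", "odd1"]
assert split_fields(frame) == (["even0", "even1"], ["odd0", "odd1"])
assert discard_deinterlace(frame) == ["even0", "even0", "even1", "even1"]
```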

DVD content should indicate the format well enough for player to handle both cases.

Field dominance applies only to interlaced footage. DV "field dominance" is "lower". Most everything else is "upper". If you are working with/to/from progressive video you might have to choose "none". Beware that Compressor presets may have field dominance set to upper, if you are compressing from DV you might need to change it.

CHATTING WITH FRIENDS

eyescratch: FCP will "render" by copying untouched frames. if you don't motion or color correct or timestretch, simple whole-frame editing will be lossless up to point of export. when you deinterlace you throw away half the data. only do titles once you are in progressive!

wait - does that mean every time i render a sequence with motion / filters, I'm taking another hit quality-wise? will final export render work through all the embedded sequences and filters to render each frame fresh? can i control this? do I need to set seq compression to "uncompressed" to avoid it? if i clear all render files before export does that ensure no intermediate loss? if i have interlaced footage in an interlaced seq but the clip has motion, will rendering de- and re-interlace or does it process each field? what is the domain for video filters?

seej: one problem at a time. de-interlace as its own step, don't rely on software to automatically handle it cuz then you don't have control - double true if it is also handling resolution, aspect ratio, etc. this implies export from FCP uncompressed/png interlaced, then deinterlace to an uncompressed, then attack aspect ratio etc.

pepper pad

The pepper pad is a linux-based wifi touchscreen tablet. Compared to newer sleek devices it may seem clunky, but the screen is a nice size for touch interactivity and the odd split qwerty keyboard offers convenient tactile control.

Pepper pad can run VNC remote desktop client, so it can be used as a wireless touchscreen interface to any software on any platform.

In my own projects I used the pad to wirelessly control GDAM - dj mixing from around the room (tactile keys mapped to transport, looping, and effects functions), sequencers and break resequencing that anyone could try.

For Share I used the pad as a touchscreen interface to open source DAW Ardour running on an OSX tower. Musicians plugged into our mixing system from stage boxes situated around the venue, and I could monitor levels and adjust settings from wherever they were.

Friday, January 23, 2009

Free windows audio production setup

I use GDAM on linux for most music composition, and the excellent open-source DAW Ardour for mixing. However I found the need for a Windows setup for creating specific parts.

I bought some sample sets in REX format and some VSTi instruments (the excellent Raging Guitars), and needed a platform to work with them - in particular, creating a project with guitar, bass, and drums set up for rapid song prototyping and exploiting my commercial content. Linux does offer some VST support, but I didn't want to reconfigure my Linux gig box or spend time struggling with system configuration so I decided to look within the wealth of Windows audio platforms.

I didn't want to cough up for a big commercial suite, so I tried some free and shareware programs. I was working with "Music Studio Independence" but couldn't create multiple instances of VSTi, the looping wasn't perfect so it would drift vs other audio sources, and the demo's time limit got in the way.

A friend turned me onto Reaper, a shareware DAW with unrestricted demo. I had no problem creating a bunch of tracks each with their own VSTi driven by MIDI. I'm quite likely to pay for this software as it seems to meet all my needs.

For REX playback I used UVI Workstation, an excellent free REX player. Reaper creates a bunch of output channels, but I didn't see how it was possible to send to the top half, and by default all loops send to the main output - so for most instances I delete the other tracks. In UVI select the "Loop" tab then the "Map" tab; there is an icon of a note score that you can drag to the timeline so that you can edit the MIDI to rearrange the break. You can load 8 REX files into 8 banks (each assigned to a consecutive MIDI channel), but after dragging the MIDI into the timeline you'll need to open the sequence, select all, and set the event channel to match the bank containing the loop. As far as I can tell, all the MIDI patterns in different channels need to be in the main timeline track for their events to drive the plugin.

Alieno is a free VSTi synth with a bunch of presets. The presets have abstract names so it is hard to find what you need, but there are some nice complex pads hidden within.

iblit is a free synth with some nice bass presets, a wide pitch-bend range, and the ability to map pitch bend differently for each synth component. I use this to generate sliding basslines.

TAL-Bassline is a nice free emulation of synth hardware, I use this for acid-style and arpeggiated basses.

DSK MiniDrumZ is a free VSTi with drums sounds from a handful of vintage devices, useful for rapid prototyping or retro sounds.

Now I have a Reaper project which is a template for prototyping new songs or generating a single guitar/bass/organ part for an existing song.

REAPER NOTES

right click in track area -> new track from virtual instrument. opens in record mode. select midi input. may need to toggle the speaker icon (track monitor) a few times to get midi flowing to plugin.

multiple MIDI tracks that send to one synth: create virtual instrument. select that track, right click, add track. new track i/o button, add send, send to virtual instrument track, audio-none

UVI WORKSTATION NOTES

Browse button to open browser. Select slot. Browse for sample. Load samples in each slot. Rex files will play at track tempo - while loading samples and when the track is in play! In Main, select a slot, hit Loop, then the Map button. Rex will no longer play. Drag and drop the note sequence into the Reaper track. Now when the track is played, each slice will play back in time to recreate the loop.

Dropped midi data seems to always be channel 1, so after dropping midi data from a slot > 1, open the midi segment from track by double-clicking, select all, right click, and set midi channel to match the slot number.
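The channel lives in the low nibble of each MIDI status byte, so "set midi channel to match the slot" is a one-nibble rewrite. A hypothetical sketch on raw message bytes - the function name and byte-level approach are mine, not anything from UVI or Reaper:

```python
def set_channel(msg: bytes, channel: int) -> bytes:
    """Rewrite a channel-voice message (status 0x80-0xEF) onto channel 0-15."""
    status = msg[0]
    if 0x80 <= status <= 0xEF:                       # channel-voice messages only
        return bytes([(status & 0xF0) | (channel & 0x0F)]) + msg[1:]
    return msg                                       # system messages untouched

note_on_ch1 = bytes([0x90, 60, 100])      # note-on, middle C, channel 1 (0-based 0)
note_on_ch3 = set_channel(note_on_ch1, 2) # move to channel 3 (0-based 2)
assert note_on_ch3 == bytes([0x92, 60, 100])
```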

Sometimes a new instance of workstation is not audible even though track meter shows activity. alt-r to open reaper routing matrix, make sure it is sending to master.

Advanced button lets you send each slot to a different output track. Selecting Main again seems to send to *two* tracks, so mute one. If using one track for all loops, you can delete the other outputs to keep your project clean.

Monday, January 19, 2009

Final Cut Pro / DV / Aspect Ratio / YouTube / DVD

A bunch of notes from my struggles with FCP related to aspect ratio and frame rate.

DV has data aspect ratio of 720x480 but with non-square pixels, so it displays at 640x480. Within FCP, DV is handled properly and circles in your footage appear perfectly round. If you "export with current settings" you get a .mov file full of native DV, the same data and quality as within FCP. However even the Quicktime player will play your video back at screen resolution of 720x480, wider than it should, and circles will appear squashed. Placing the quicktime player over the FCP viewer zoomed to 100%, the extra 80 pixels of width are clear.

Encountering a dv .mov which displayed too wide, I re-exported it in H.264, medium bandwidth, and under resolution selected "DV (4:3)". The resulting file plays at the correct ratio in quicktime.


The re-exported file shows "720x480" in the Finder; QT -> Info shows "720x480 (640x480)". Interestingly, the DV export shows identical resolution information, but displays at 720 screen pixels in QT. Also, the H.264 encode with square pixels is much sharper on the title frame than the DV version... an effect of the square pixels?


Why does DV use non-square pixels? Presumably legacy reasons... perhaps related to eye and brain perception (eg interlacing is horizontal).

So isn't it correct, if preparing a file for computer use, to reencode to 640x480 with square pixels? Not if you can help it. True, if the file will be displayed at 640x480 on a computer monitor, then the video will unavoidably be resized. But videos are often watched zoomed fullscreen, with a display size much greater than 720. Having the full 720 columns of video data allows the fullscreen image to be sharper. And if you resize to 640x480 square pixels, you are throwing away data, plus any reencode reduces quality. So better to keep the full data as long as possible. (TODO: is it true that horizontal data in DV is extra dense, or is there some interlace-type effect which means it is actually half density?)

How to make sure your files play correctly? The file data needs to be set so that a player can tell the difference between the data aspect ratio and the screen aspect ratio. The player needs to honor the flag. Therefore you are at the mercy of the player. However HDV footage is often (always?) stored in a distorted aspect ratio, and most players correctly play HDV footage, so is this too much to expect?

INTERLACING

Interlacing refers to the process of transmitting a video frame 1/2 at a time, odd-numbered rows alternated with even-numbered rows. De-interlacing is the process of generating progressive (non-interlaced, complete) frames from interlaced data. A read of the most excellent site http://100fps.com/ shows how complex this one aspect of video can actually be, and the examples at the top of the page will convince you of the importance of getting interlacing right.


DV is commonly (always?) interlaced. FCP does a good job of de-interlacing within the app, but you have to be careful on export.




EXPORTING FOR YOUTUBE

YouTube now exhorts their users to leave files at highest resolution and quality, as long as they fit within the 1GB limit. They remind users that every re-encode reduces quality, and even suggest leaving footage direct from HD cam untouched if possible. So if your DV export from FCP is under 1GB, of course you should upload it... right? Why re-encode?

However, in a late 2008 forum post, user epontius claims that YouTube will not handle non-square pixels correctly, and that it will do a low-quality job shrinking the video for the standard size YouTube player.

http://help.youtube.com/group/youtube-howto/browse_thread/thread/1ba7424dff529a23/7f3e3e7604343193?hide_quotes=no

Hard to believe in the HD era that one should reduce resolution, and it flies in the face of YouTube's statements... but nonetheless I am concerned and can't find confirmation that SD DV is handled correctly.

Here lezro44 describes his process of reimporting dv codec video into a new FCP project to change to square pixels, then exporting from there.

http://help.youtube.com/group/youtube-howto/browse_thread/thread/84ba6bd4a3ce98a9?fwc=1

Try: export from FCP with "current settings" but a manual override of aspect ratio, to produce DV that QT displays correctly? Export -> Use: Current Settings -> change to Custom... -> change Aspect Ratio from "NTSC DV 3:2" to "NTSC 4:3", Pixel Aspect Ratio to Square, QuickTime Movie Settings -> Advanced -> Scan -> Progressive. In the other tab I set Motion Effects quality to Best. TODO: left field dominance as "Even"... good?

Note: it takes twice as long to export as "current settings"

Results: The title is not as sharp as the h.264 reencode (in fact i think it is just as bad as the direct dv export, except tightened 10% horizontally) but deinterlacing is great, none of the jaggies visible in the native export. The nature of the compression is different, I can sense it is more square. Seems like a good candidate for reencode for web. TODO: encode to h.264 and compare sharpness etc to the encode from interlace.

Try: further attempts to convert from progressive, square dv

Results: still interlaced!

Try: "quicktime conversion" to h.264 and mp4 at highest quality, under video -> settings -> size check the "deinterlace source" box *even though source is not interlaced!*

Results: non-interlaced output! however, one bit of footage in my project had "problematic" interlacing with a missing row near the bottom. In FCP and exported progressive DV the artifact had disappeared... but now it is back again in encodes!

Try: MPEG Streamclip, export to mp4, h.264 encoder full quality, FIELD DOMINANCE LOWER, deselect the interlace options as dialogue suggests.

Result: non-interlaced, but problem footage scanline reappears!!

Try: add de-interlace filter to problem line video within FCP (TODO: details). Also add desaturate just so I can be sure everything is re-rendered.

NOTE: can't find the contrast / black level filters I added to clean up line footage! Seq embed problem.

Result: 640 prog export looks great - but it always does.

Try: new prog 640 clip -> quicktime player -> file -> export -> movie to quicktime movie -> options... -> video settings -> highest bitrate. movie settings -> size -> check the "deinterlace source" box *even though source is not interlaced!*

Result: looks like it worked, no interlacing and no artifacts in line feed.

Try: exporting according to dvcreators guide (above) EXCEPT - current size showed <640 !?! so override to VGA.

Result: dvcreators version indeed looks sharper on title frame, but bassbot footage is all re-interlaced! Gaaah!


EXPORTING FOR DVD

NOTE: the FCP -> DVD Studio toolchain works well, so export a "current settings" dv .mov for DVD use, and use the progressive square version for computer formats. TODO: isn't a progressive scan DVD better? research.

Here is a guide for simple FCP->DVDSP workflow that doesn't address progressive video. http://fcproducer.com/2007/04/make-a-final-cut-pro-movie-into-mpeg-2-for-dvd-studio-pro/

An HD->SD workflow http://www.produxion.net/tag/final-cut-pro/

Suggests compressing to progressive mpg2 to get a DVD that looks good on computer and TV. It seems to be the same process I use - how does it work for this guy? http://www.geniusdv.com/weblog/archives/a_fix_for_chopiness_in_dvd_video.php

User **DONOTDELETE** claims high-end players often screw up 30p (29.97p?) http://www.2-popforums.com/forums/showthread.php?t=51948

This guide to handling film footage is well-written and touches on related issues http://www.lafcpug.org/Tutorials/basic_video_to_film.html

dvcreators.net suggests exporting to png frames for best quality - but isn't this just a high-bandwidth version of the identical output you get from a DV render? Or does a DV export with any editing or filters actually introduce a re-compression? Makes sense that an edit export is re-compressed, although many people recommend "export with current settings" as "no video loss" - but if it compressed to DV there is a slight loss they might not detect... rendering motion / deinterlace / aspect ratio / square pixels to a lossless format would seem to avoid a reencode. http://www.dvcreators.net/how-do-i-export-a-high-quality-movie/ somewhere they have a thumbnail comparison of the soft text from default output vs the sharp text of their method.

TODO: Wikipedia says mpeg2 allows 720 but not 640... so I shouldn't convert from 640 export then? http://en.wikipedia.org/wiki/MPEG-2


There are two opinions about interlacing re: FCP and DVDSP

1) Don't ever de-interlace, let the interlace continue through the workflow, produce an interlaced DVD, it will look fine on TV's

2) De-interlace at some point in the process, produce a progressive DVD

Old displays relied on interlacing. DV footage comes interlaced from the camera. Computer displays are progressive. DVDs can include either interlaced or progressive video. Interlaced video can be de-interlaced at time of playback *if your player supports it*. Reportedly new versions of the OSX DVD Player include deinterlacing so that interlaced DVDs appear smooth. The big issues are the quality of the deinterlace (depending which step executes it) and the demands on hardware. De-interlacing is a tricky problem if you're trying to keep the best possible time and visual resolution. The 100fps site details many approaches. Some hardware has fantastic deinterlace abilities, beyond what the FCP toolchain offers, when outputting to a progressive display. Other hardware will display lots of interlace jaggies.

However I am with group 2) because all hardware is capable of re-interlacing the video if necessary for display - and unlike de-interlacing, interlacing has one correct answer (ignoring field order etc) and there is no voodoo to achieve better results. Therefore it is better to do the best possible deinterlacing during production - there may be a handful of systems which could have done a better job during playback, but you demolish the worst-case scenario. More and more people view DVD on progressive displays and the commercial DVD market sells progressive scan as a feature, so it is past the time of producing interlaced DVD's.
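Re-interlacing really is mechanical: ignoring field order (as above), you just weave alternating rows from two consecutive progressive frames back into one interlaced frame. A toy sketch with frames as lists of rows:

```python
def interlace(frame_a, frame_b):
    """Weave two full progressive frames into one interlaced frame:
    even-numbered rows from frame_a, odd-numbered rows from frame_b."""
    woven = []
    for even_row, odd_row in zip(frame_a[0::2], frame_b[1::2]):
        woven += [even_row, odd_row]
    return woven

a = ["a0", "a1", "a2", "a3"]   # progressive frame at time t
b = ["b0", "b1", "b2", "b3"]   # progressive frame at time t + 1/60
assert interlace(a, b) == ["a0", "b1", "a2", "b3"]
```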

The question becomes where in the toolchain to deinterlace. You can

1) de-interlace option in DVD Studio Pro

2) export to progressive scan DV file, import to DVDSP

3) apply builtin or 3rd party deinterlace filter to your final FCP sequence before exporting it

I opted to try 2 - my project and sequences are all interlaced, but when exporting I changed the settings to progressive. I actually tried this method because it is invisible... no de-interlace settings to tweak, no changes to my FCP project, just part of the output process. It seems like it should be "correct" based on what it knows of DV. If the results are good I won't have to worry about any details or test with various deinterlace settings.

There is also the question of whether to mpeg compress FCP output or let DVDSP do it on import. I prefer compress on export, just to be able to crank up the bitrate for short videos and also to avoid the encoding hit if I use the clip in more than one DVD or rebuild a DVD from scratch.

Try: Export -> Custom settings -> (ntsc dv 2:3, dv pixel ratio) advanced -> progressive, 4:3

Notes: fast export

Results: title is soft as native, QT displays it extra wide (TODO: need to specify 4:3 at export?) Perfectly progressive, looks good other than squashedness. Remains to be seen if ratio comes out correct on DVD.

Try: Compressor on "for dvd", best for 90 min, field dominance: progressive, 4:3

Results: output is mega-interlaced despite selecting "progressive" from the popup... is it possible selecting "progressive" caused a double process and re-interlacing? Guess I have to try without forcing progressive and compare.

Try: Still believe in the progressive export. Try compressor with "90 min best" no funky options.

Result: m2v is still interlaced.



Try: Interlaced output into Compressor with "field dominance" -> progressive.

NOTE: FCP and DVDSP are both part of Final Cut Studio, so they could well be using identical de-interlacing.



A BUNCH OF TESTS with the goal of a nice progressive output for use on web etc

(all of this is with an extra de-interlace in the troublesome line footage)

set up a four-second clip

from progressive sequence:

1) export quicktime, custom, 720 NTSC DV 3:2, CCIR601 PAR, no field dominance, advanced -> progressive 4:3

2) export quicktime, custom, 640 NTSC DV 4:3, square PAR, no field dominance, advanced -> progressive 4:3

3) export -> quicktime conversion -> settings -> png, non-interlaced; size -> 640 VGA, don't de-interlace

1 and 2 titles are soft and pixellated but all footage is clean. 1 is extra wide. 3 is interlaced in all footage (except the line video?). The interlacing seems more extreme and clean than usual, as if the PNG output was being interlaced despite my settings.


from standard dv interlaced sequence:

1) export quicktime, custom, 720 NTSC DV 3:2, CCIR601 PAR, no field dominance, advanced -> progressive 4:3 (no deinterlace option, trust FCP to know how to handle)

2) export quicktime, custom, 640 NTSC DV 4:3, square PAR, no field dominance, advanced -> progressive 4:3 (ditto above)

3) export -> quicktime conversion -> settings -> png, non-interlaced; size -> 640 VGA, do de-interlace source video

2 titles are mungy and hair seems less precise as if subject to same munging TODO: compare hair resolution with best result for deficiencies. 3 titles are sharp but video is interlaced to hell, as above super sharp.

4) like 3 but skip deinterlace of source

still interlaced output, can't detect any clear difference to 3

5) interlaced png without deinterlacing source

result: same

6) interlaced png, deinterlace source

result: same

7) export -> quicktime -> 640 NTSC 4:3 square, qv compressor = uncompressed 8 bit

result: same interlace problem!

NOTE: can set PNG compressor here - save size only?

8) prog png, deinterlace source, swap fields TODO: deinterlace as filter to control order?

9) above without deinterlace source

created "output filter test" interlaced seq and drag 4 sec interlaced seq into it. right-click, show in viewer, video filters -> video -> shift fields

1) 8 bit uncompressed (aka test 7 above)

result same as above - interlaced output file



A BUNCH OF TESTS with the goal of a clean progressive output. because it is intended for DVD I leave it 720.


Q: export quicktime "field dominance" to set output or needs to match sequence? is it ignored when "progressive" is set under "advanced"?

NOTE: seems true that NTSC DVD is NTSC meaning PAR is non-square.

Q: to preserve best data/time res wouldn't it be better to leave interlaced even for a computer display? or at least to 60fps half-height video?

NOTE: de-interlacing often means throwing away the odd or even field, which softens the image because half the vertical info is discarded!

TIP: option-double click on item in browser to open it in its own viewer window

NOTE: changing the length of a nested sequence will change its length within the parent sequence, rippling later clips to prevent gaps/overlaps. But this only happens if you nested the whole sequence *without in/out points set*, or set the length of the nested seq within the parent seq. Does this mean i can spoil it by fucking with the in/out points??? dangerous!

NOTE: all rendered frames are written to disk even if render is cancelled.

PROGRESSIVE DVD:

edited sequence interlaced. made a 720 ntsc 3:2 dv progressive seq, nest my edit seq in it. add deinterlace filter to nested clip. verify with canvas at 100% that it is progressive. add titles. export to quicktime movie

general tab: 720 3:2 ntsc dv, ntsc PAR, field dominance none, uncompressed 8-bit
processing tab: high precision YUV, superwhite, best motion quality

note: 10-bit is an option, but some filters don't support it, and the source is 8-bit anyway, and i don't have subtle gradients.

note: superwhite because benton said so. this preserves details in the whites (prevent brightness clipping) but makes everything ~8% darker. except.. this supposedly only involves RGB material and I don't have any so... no need to worry.

export to .mov, import to dvdsp. settings -> best 90 min all. delete one of the audios. click on mpeg setting and poke around inspector. in GOP tab set the GOP size as short as possible, maximize i-frames - eg IBBP 7

Q: what about mpeg2 encoding at one-pass CBR set to 8M+ since my project is very short? multi-pass only helps with VBR data allocation (http://www.afterdawn.com/glossary/terms/multipass.cfm) so for short projects forcing max bitrate is best.

video bitrate: "Maximum video bit rate is 9.8 Mbps" (by which he seems to mean V+A) "LPCM is fixed at 1536 Kbps" = 8.2 Mbps for video, but round down to 8.0 because people tend to.
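Spelling out that budget arithmetic (the subtraction actually comes out at 8.26, which gets truncated to 8.2 and then rounded down to 8.0 in practice):

```python
max_mux_mbps = 9.8          # DVD-Video ceiling for video + audio together
lpcm_mbps = 1536 / 1000     # stereo LPCM audio, fixed at 1536 Kbps
video_budget = max_mux_mbps - lpcm_mbps
print(round(video_budget, 3))   # 8.264 -> call it 8.2, round down to 8.0
```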

MARKERS: FCP manual says that FCP will automatically insert compression markers at edits, but that certainly doesn't work with an edit nested in a title/export sequence. Need to test an edit sequence export to see if markers even work there.

DVD encode: could I force all iframes at 8.0M, would that look OK, is it legal? accomplish this with a compression marker at every frame?

Make a new setting based on "DVD 90 min best", change to progressive CBR 8Mbps, GOP IP closed 6 IPPPPP. Save as "DVD Best".

TIP: option-\ to play one frame at a time, not real-time but slo-mo preview. useful if you can't play full quality real time

Compression markers tell the encoder to put a full mpeg frame just after each edit, to avoid artifacts due to difference-frames across edits. But you have to be careful to preserve them all the way to the encoder. How to preserve them through to Compressor? Can i automatically add one at every edit point? FCP knows...

how do compression markers interact with nesting?

FCP docs say it adds compression markers automatically at edit point (if exporting with correct settings) but does this work in nested sequences? manual doesn't say.

can I do mpeg with all i-frames for a 5-minute dvd? does that exceed max bitrate without throttling frame quality?
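A rough feasibility check for the all-I-frame idea, with assumed numbers (8.0 Mbps video, 29.97 fps): at a fixed bitrate every frame gets roughly a 33 KB budget, and whether that is enough for a decent I-frame every frame is exactly the open question.

```python
video_mbps = 8.0                           # assumed fixed CBR video rate
fps = 29.97                                # NTSC frame rate
bits_per_frame = video_mbps * 1e6 / fps    # ~267 kbit per frame
kb_per_frame = bits_per_frame / 8 / 1000   # budget if EVERY frame is an I-frame
print(round(kb_per_frame, 1))              # 33.4
```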

Q: can i examine a .mov text track to determine which markers are present? can i look at file size difference between export 1 and export 2 files to determine if markers are present?

NOTE: on liberty dvd export final 1, zoom-in at beginning of liberty looks all interlaced!!! does this prove that i need de-interlaced sequences and fix each source asap? but it doesn't happen once the song starts even tho we are still zoomed in... only under title?

TODO: record timestamped midi performance. could have used for liberty fx to match visuals.



http://dvd-hq.info/bitrate_calculator.php


Strategy: split fields to convert 60i to 60p - same quality loss but twice the data and time detail. can convert first and edit in 60p. need to convert back to 30p or 60i at the end, which means that we'll end up throwing half away. but could make a 60fps file for computer viewing. do HDTV's deinterlace to 60p? Hey... can take my interlaced edit, export a copy to interlaced uncompressed, split fields, reimport and title in a 60p sequence. Use that source for high-speed

NESTING:
http://www.kenstone.net/fcp_homepage/basic_nest.html
also see FCP manual "sequence to sequence editing"

Monday, January 5, 2009

logical / hierarchical editing in FCP

I'm editing a music video. I have dozens of camera angles from a dozen performances of the song, shot at various locations.

Each angle is a file named by event and source: "warper - elsa" was shot by elsa at a performance at the warper event.

I make a "clip sequence" called "warper elsa" which gets that single angle, time-cropped to the relevant song. I double click the instance of "warper - elsa" file in the "warper elsa" sequence to open it. I then fix up anything on the clip level - color correction, scale and position for fixed cameras that are too wide, perhaps automate zoom and light balance to counteract any shortcomings in the raw footage.

Each performance of the song is slightly different, this part or that extended for extra bars. Therefore, I create an "alignment sequence" named "warper elsa aligned" with the studio track. I drag the clip sequence "warper elsa" into the alignment sequence "warper elsa aligned" to create a reference. I then cut up the clip and align it with the studio audio track, matching verse to verse and breakdown to breakdown.

After repeating for each angle, i have a bunch of alignment sequences which are all synched with studio track.

Now I create an "all-up" sequence. I embed all the alignment sequences into the all-up sequence. I ctrl-click each instance of an alignment sequence and "open in viewer", then use the motion tab to move it into a 9-up position: x = +-245, y = +-160. Too many angles to play more than a second in real time, but i can render it and it serves as a nice reference, a way to view all angles when looking for edits.
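Where those offsets come from, roughly: clips scaled to a third of a 720x480 canvas tile at centers one third of the canvas apart, i.e. +-240 / +-160 (the +-245 above was presumably nudged by hand for a little spacing). A sketch generating the nine positions:

```python
canvas_w, canvas_h = 720, 480
dx, dy = canvas_w / 3, canvas_h / 3     # 240.0, 160.0 between tile centers
positions = [(col * dx, row * dy)
             for row in (-1, 0, 1) for col in (-1, 0, 1)]
print(positions[0], positions[4], positions[8])
# (-240.0, -160.0) (0.0, 0.0) (240.0, 160.0)
```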

I also create one or more "edit sequences". Edit sequence also gets an embed of each alignment sequence. But rather than using motion for all-up, I cut out bits of the sequences to reveal each in the appropriate places.

Now I have an end-to-end project for editing this song.

The idea is that if two angles have excessively different color balance, I can color correct their clip sequences. If the mouth doesn't quite match the voice, or the headbanging starts two bars before the music cue, I can open the alignment sequence for that angle and nudge the segments by a few frames or swap bars.

PROBLEM: if i double-click a clip sequence in the browser to open it in the timeline, then open the underlying clip in the viewer and desaturate it, it appears b/w in the clip sequence. However the alignment sequence stays in color. Double-clicking the instance of clip sequence in the alignment sequence timeline, it opens in the same tab as the clip sequence - except without the desaturate filter. Here it gets a little weird - the instance of clip sequence embedded in the alignment sequence is not a reference, it appears to be a copy. It may share a timeline tab with the original clip sequence, but it is a copy - made at the time of embedding, presumably.

IT GETS ODDER: So if dragging one sequence into another makes an embedded copy (rather than reference) then any changes made to the alignment sequence should not be reflected in the all-up sequence. But in fact, testing with a desaturate on a different angle, all-up *does* honor filters added to instance of clip sequence in alignment sequence (although i have to re-render before I see it).

The only obvious difference is the level of indirection - in failure case desaturate is applied to an instance of a clip in a sequence - in success case the filter is applied to an instance of sequence in another sequence. If I put each clip in a "file sequence" then embed that in the clip sequence, would that be a workaround?

Test: the alignment sequence has a bunch of cut-up bits of the clip sequence, except each bit is a copy of the sequence. Is it one copy or many? Double-click a chunk of the clip sequence in the alignment sequence to open it in a timeline tab. Add a desaturate to the instance of the clip in this copy (which occupies the same tab as the original clip sequence). Back in the alignment sequence, the filter is honored. It is also honored in the all-up.



Current strategy: delete the clip sequence and solve clip-level problems in the tab that opens when double-clicking the instance of the clip sequence in the alignment sequence (which isn't lost even after the clip sequence is deleted from the browser). This reduces clutter in the bin, a nice side effect.

I have trouble re-rendering the all-up seq after filtering the clip seq. Sometimes rendering works. Fix: toggle visibility.

Double check: add the filter in the original clip seq, then force an update in the alignment seq. Nope, still not honored.

Note: fixes made in the alignment seq are definitely honored in both edits. So that is workable.

According to this page, when "nesting", changes made to the original sequence are reflected in the sequence which nests it: http://www.scottsimmons.tv/articles/nesting.html


My all-up and edit sequence strategy is working well. But to edit efficiently I need to have the edit seq open in the timeline/canvas, and I need to be able to view the all-up seq in sync. Multiclip editing allows something like this; you can set the viewer to 4-up or 9-up. But multiclips are limiting: you can only use clips, not sequences, so there is no nesting of source angles. And I can't find a way to play a second sequence in sync:

can't make a 2nd viewer or canvas window

if playhead sync is set to open, the viewer automatically loads the clip/seq currently visible in the timeline. I need to view the all-up constantly, in sync with my current edit. I wish I could make playhead sync follow the time without changing the clip... that feature is foiling my plans!

Tried putting the all-up on top in the sequence (so it would auto-select in the viewer with open sync), but I couldn't find a way to keep it from being visible in the canvas. Can't use a filter because those show up in the viewer. Tried changing the composite mode, but that is sloooow... plus all modes leave the top layer visible, plus the viewer shows the same composite as the canvas, not just the top layer!

Best workaround: put the all-up seq as the top layer in the edit seq, make that top layer invisible, open it in the viewer, and set the viewer playhead sync to gang. You can't watch both at the same time, but you can play the all-up (in viewer) or the edit (in canvas) back-to-back without any other GUI clicking.

NOTE: the edit seq has only one copy of the audio, because two copies is too loud. The result is that when opening the instance-of-all-up-in-edit-seq in the viewer, it plays silent. But it is possible to drag the all-up seq from the browser to the viewer; as long as sync is set to gang, it'll catch up to activity in the edit timeline and be playable on its own with audio. You have to manipulate the timeline and click play in the appropriate window, and you can't watch both angles in sync, but this does allow you to view the preview track and the edit back-to-back. You can use ` to create matching markers on the viewer and canvas so you can play back the track from the same moment.

NOTE: there seems to be some funny business with relative position matching. To get things started in sync, I had to put both viewer and timeline at (synchronized) zero, then set gang, then play in the timeline.

WAIT: in gang mode the viewer will catch up with the canvas, but the canvas won't keep up with the viewer; the viewer plays on its own. It doesn't lose sync, in the sense that playing the canvas at a different location will cause the viewer to catch up. But even with viewer and canvas both ganged, the canvas will not follow the viewer.

NOTE: the "Liberty Edit" seq had one instance of a ref to "flow ilan line feed", but that was cut up a bit for creative purposes. It seems like 90% of the cut-up bits open one copy of the seq, while one bit opens a fresh copy. Even after fixing black levels for all the hand segments of line feed, one bit of the edit showed black problems. Double-clicking this segment in the edit seq opened a "flow ilan line feed" copy that didn't show any of the edits I had previously done to set black levels. The problem segment wasn't at -100% speed and had no other conspicuous attribute.