The ProcessSmilWorkflowHandler edits media files based on the description in a SMIL file. The SMIL file is typically generated by the video editor, but it can also be constructed externally and uploaded. It contains the names of one or more source tracks and a list of selected clips, defined by in/out points (in milliseconds) in those source tracks. The operation concatenates the clips from the source tracks according to the in/out points and encodes the result into multiple target videos using a list of encoding profiles. Optionally, the target videos are also tagged with the names of the encoding profiles used.
The video editor produces a SMIL file and, by default, also encodes one set of edited videos in an intermediate format, which is used for segmentation and then as the source for generating multiple delivery formats. This workflow operation bypasses the generation of those temporary targets and generates the delivery formats directly. Subsequent workflow operations can then select the highest-quality source medium by tags and flavors. This saves the encoding time of one full-length set of videos and allows concurrent processing of multiple independent ffmpeg operations.
To use this operation with the editor, the following must be added to the editor workflow operation to bypass the video editor encoding:
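Based on the note at the end of this document, the relevant key is the editor operation's "skip-processing" configuration. A minimal sketch (the surrounding operation attributes are assumptions):

```xml
<operation id="editor" fail-on-error="true" exception-handler-workflow="error"
    description="Waiting for user to review / video edit recording">
  <configurations>
    <!-- Skip the editor's own encoding pass; process-smil will encode instead -->
    <configuration key="skip-processing">true</configuration>
  </configurations>
</operation>
```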
Currently, there is only one transition type: "fade to black". The edited video fades in from black, with a fade-out/fade-in at each clip transition and a fade-out at the end. The transition duration is a 2-second fade, configured in org.opencastproject.composer.impl.ComposerServiceImpl.cfg. In the future, each transition may become configurable as a SMIL element.
The SMIL file can use more than one source video, but the caller must ensure that the dimensions of all the source videos are the same. This workflow generates one independent ffmpeg operation per SMIL paramgroup (i.e., per source), regardless of the number of target outputs.
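For orientation, a SMIL file of this shape might look like the following sketch. The exact element and attribute names produced by the editor are assumptions here, but the structure — a paramGroup per source and clips with in/out points in milliseconds — follows the description above:

```xml
<smil xmlns="http://www.w3.org/ns/SMIL" version="3.0">
  <head>
    <!-- one paramGroup per source track -->
    <paramGroup id="pg-presenter">
      <param name="track-src" value="presenter-source.mp4"/>
    </paramGroup>
  </head>
  <body>
    <par>
      <!-- clip 1: 0s to 10s of the source -->
      <video src="presenter-source.mp4" paramGroup="pg-presenter"
             clipBegin="0ms" clipEnd="10000ms"/>
    </par>
    <par>
      <!-- clip 2: 30s to 45s; clips are concatenated in order -->
      <video src="presenter-source.mp4" paramGroup="pg-presenter"
             clipBegin="30000ms" clipEnd="45000ms"/>
    </par>
  </body>
</smil>
```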
This workflow can handle each source flavor selector independently, e.g., each source selector can have its own set of encoding profiles, target tags, and target flavors. The parameters for each configuration, such as flavor, are separated into sections by ";". Each source media selector can have its own set of encoding profile ids (one for each target recording) and its own set of target tags and flavors, defined as comma-delimited lists.
As an example, consider presenter/source and presentation/source as the uploaded media:
One source selector means that all the matching recordings will be processed the same way.
Two different source selectors separated by semicolons mean that all the matching recordings in the first selector will be processed according to the parameters in the first section of each configuration value (such as encoding profiles), and all the matching recordings in the second selector will be processed according to the parameters in the second section.
Each source selector can have only one corresponding section in each set of values. The use of the semicolon is optional; if it is absent, there is only one section. If there is only one source selector but multiple sections in the parameters, the sections are collapsed into one and apply to all the source flavors in the source selector. "N to N" means that each section has its own processing configuration. "1 to N" or "N to 1" means that all the sections are processed the same way, but "M to N" where M ≠ N will result in an error.
```xml
<configuration key="target-flavors">*/preview</configuration>
<configuration key="encoding-profiles">mp4-low.http;mp4-vga-medium</configuration>
```
All targets are flavored the same way. Using the example above, the media are encoded with "mp4-low.http" and "mp4-vga-medium" respectively, and the targets are flavored as "presenter/preview" and "presentation/preview".
```xml
<configuration key="target-tags">engage-streaming,rss,atom;engage-download,rss,atom</configuration>
<configuration key="encoding-profiles">mp4-medium.http;mp4-vga-medium</configuration>
```
Each section is tagged individually. Using the example above, presenter/preview is encoded with "mp4-medium.http" and tagged with "engage-streaming", "rss", and "atom"; presentation/preview is encoded with "mp4-vga-medium" and tagged with "engage-download", "rss", and "atom".
Now suppose presenter/work is to be encoded with "mp4-low.http,mp4-medium.http" and presentation/work with "mp4-vga-medium,mp4-medium.http"; the target media are flavored as "presenter/delivery" and "presentation/delivery" respectively, and all targets are tagged with "engage" and "archive" in addition to the names of the encoding profiles used. The configuration for this scenario is shown below.
This workflow supports HLS adaptive streaming by: 1) using only H.264/HEVC encodings in the encoding profiles, and 2) adding a special encoding profile, "multiencode-hls", to the list of encoding profiles. HLS playlists are generated as part of the encoding process. Each mp4 is a fragmented MP4. A variant playlist is created for each mp4, and a master playlist is used to access all the different qualities.
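For illustration, a generated master playlist referencing two variant playlists might look like the following sketch. The file names, bandwidths, and codec strings are assumptions; the tag format follows RFC 8216 (linked below):

```
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-STREAM-INF:BANDWIDTH=900000,RESOLUTION=640x480,CODECS="avc1.4d401e,mp4a.40.2"
presenter-low.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1800000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
presenter-medium.m3u8
```

The player reads the master playlist, then switches between the variant playlists based on the declared BANDWIDTH values — which is why explicit bitrates in the encoding profiles matter.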
To make sure that stream switching works as expected, state the bitrates explicitly for each of the mp4 encoding profiles used. For advice on how to pick bitrates, see: https://developer.apple.com/documentation/http_live_streaming/hls_authoring_specification_for_apple_devices
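As a sketch, an encoding profile with explicit bitrates might look like the following. The suffix key format matches the examples later in this document, but the exact ffmpeg options and values shown here are assumptions:

```properties
profile.mp4-medium.http.name = mp4 medium
profile.mp4-medium.http.suffix = -medium.mp4
# Explicit target, maximum, and buffer bitrates so HLS stream
# switching behaves predictably
profile.mp4-medium.http.ffmpeg.command = -i #{in.video.path} \
  -c:v libx264 -b:v 1500k -maxrate 1500k -bufsize 3000k \
  -c:a aac -b:a 128k #{out.dir}/#{out.name}#{out.suffix}
```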
For more details on HLS, see:

- https://tools.ietf.org/html/rfc8216
- https://tools.ietf.org/html/draft-pantos-http-live-streaming-23
Without HLS, the configuration will look like the following.
|Configuration Key|Example|Description|
|-----------------|-------|-----------|
|smil-flavor|smil/smil|Specifies the flavor of the SMIL file|
|source-flavors|presenter/work;presentation/work|Which media should be encoded|
|target-flavors|*/delivery|Specifies the flavor of the new media|
|target-tags|engage,archive|Specifies the tags of the new media|
|encoding-profiles|mp4-low.http,mp4-med.http;mp4-vga-med,mp4-med.http|Profiles for each source flavor|
|tag-with-profile|true (default: false)|Target media are tagged with the corresponding encoding profile id|
With HLS, the encoding profiles will look like the following.
|Configuration Key|Example|Description|
|-----------------|-------|-----------|
|encoding-profiles|mp4-low.http,mp4-med.http,multiencode-hls;mp4-vga-med,mp4-med.http,multiencode-hls|Profiles|
The parameters in the table above will look like this as a workflow operation.
```xml
<operation id="process-smil"
    fail-on-error="true"
    exception-handler-workflow="error"
    description="Encoding presenter (camera) video to Flash download">
  <configurations>
    <configuration key="smil-flavor">smil/cutting</configuration>
    <configuration key="source-flavors">presenter/work;presentation/work</configuration>
    <configuration key="target-flavors">*/delivery</configuration>
    <configuration key="target-tags">engage,archive</configuration>
    <configuration key="encoding-profiles">mp4-low.http,mp4-medium.http;mp4-vga-medium,mp4-medium.http</configuration>
    <configuration key="tag-with-profile">true</configuration>
  </configurations>
</operation>
```
With HLS, the encoding-profiles line will look like:
```xml
<configuration key="encoding-profiles">mp4-low.http,mp4-medium.http,multiencode-hls;mp4-vga-medium,mp4-medium.http,multiencode-hls</configuration>
```
Each encoding section generates all the target media in one ffmpeg call by incorporating relevant parts of each encoding profile command using complex filters.
Care must be taken that no complex filters are used in the encoding profiles for this workflow, as they can cause a conflict and ffmpeg will fail. Simple filters (e.g., -vf, -af, -filter:v, -filter:a) can be used.
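As a sketch (the profile id and ffmpeg options here are assumptions), a profile using only a simple per-stream filter is safe, because the operation itself builds the complex filter graph when it merges the profiles into one ffmpeg call:

```properties
# OK: a simple per-stream filter (-vf); the workflow can fold this
# into the complex filter graph it generates itself
profile.mp4-vga-medium.http.ffmpeg.command = -i #{in.video.path} \
  -vf scale=640:480 -c:v libx264 -c:a aac #{out.dir}/#{out.name}#{out.suffix}

# NOT OK: using -filter_complex in a profile conflicts with the
# complex filter that process-smil generates, and ffmpeg will fail
```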
Encoded target recordings are distinguished by their suffixes, so it is important that all the encoding profiles used have distinct suffixes, or the target video tagging can be wrong. For example:
```properties
profile.mp4-vga-medium.http.suffix = -vga-medium.mp4
profile.mp4-medium.http.suffix = -medium.mp4
```
- If using this operation to process SMIL files generated by the editor in the same workflow, be sure to set the "skip-processing" key in the editor operation to true.