Advanced search

Media (1)

Keyword: - Tags - / biomaping

Other articles (50)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded to OGG (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (7899)

  • ffmpeg avcodec_encode_video2 hangs when using Quick Sync h264_qsv encoder

    11 January 2017, by Mike Simpson

    When I use the mpeg4 or h264 encoders, I am able to successfully encode images to make a valid AVI file using the API for ffmpeg 3.1.0. However, when I use the Quick Sync encoder (h264_qsv), avcodec_encode_video2 will hang some of the time. I found that when using images that are 1920x1080, it was rare that avcodec_encode_video2 would hang. When using 256x256 images, it was very likely that the function would hang.

    I have created the test code below that demonstrates the hang of avcodec_encode_video2. The code will create a 1000 frame, 256x256 AVI with a bit rate of 400000. The frames are simply allocated, so the output video should just be green frames.

    The problem was observed using Windows 7 and Windows 10, using the 32-bit or 64-bit test application.

    If anyone has any idea how I can avoid the avcodec_encode_video2 hang, I would be very grateful! Thanks in advance for any assistance.

    extern "C"
    {
    #ifndef __STDC_CONSTANT_MACROS
    #define __STDC_CONSTANT_MACROS
    #endif
    #include "avcodec.h"
    #include "avformat.h"
    #include "swscale.h"
    #include "avutil.h"
    #include "imgutils.h"
    #include "opt.h"
    #include
    }

    #include <iostream>


    // Globals
    AVCodec* m_pCodec = NULL;
    AVStream *m_pStream = NULL;
    AVOutputFormat* m_pFormat = NULL;
    AVFormatContext* m_pFormatContext = NULL;
    AVCodecContext* m_pCodecContext = NULL;
    AVFrame* m_pFrame = NULL;
    int m_frameIndex;

    // Output format
    AVPixelFormat m_pixType = AV_PIX_FMT_NV12;
    // Use for mpeg4
    //AVPixelFormat m_pixType = AV_PIX_FMT_YUV420P;

    // Output frame rate
    int m_frameRate = 30;
    // Output image dimensions
    int m_imageWidth = 256;
    int m_imageHeight = 256;
    // Number of frames to export
    int m_frameCount = 1000;
    // Output file name
    const char* m_fileName = "c:/test/test.avi";
    // Output file type
    const char* m_fileType = "AVI";
    // Codec name used to encode
    const char* m_encoderName = "h264_qsv";
    // use for mpeg4
    //const char* m_encoderName = "mpeg4";
    // Target bit rate
    int m_targetBitRate = 400000;

    void addVideoStream()
    {
       m_pStream = avformat_new_stream( m_pFormatContext, m_pCodec );
       m_pStream->id = m_pFormatContext->nb_streams - 1;
       m_pStream->time_base = m_pCodecContext->time_base;
       m_pStream->codec->pix_fmt = m_pixType;
       m_pStream->codec->flags = m_pCodecContext->flags;
       m_pStream->codec->width = m_pCodecContext->width;
       m_pStream->codec->height = m_pCodecContext->height;
       m_pStream->codec->time_base = m_pCodecContext->time_base;
       m_pStream->codec->bit_rate = m_pCodecContext->bit_rate;
    }

    AVFrame* allocatePicture( enum AVPixelFormat pix_fmt, int width, int height )
    {
       AVFrame *frame;

       frame = av_frame_alloc();

       if ( !frame )
       {
           return NULL;
       }

       frame->format = pix_fmt;
       frame->width  = width;
       frame->height = height;

       int checkImage = av_image_alloc( frame->data, frame->linesize, width, height, pix_fmt, 32 );

       if ( checkImage < 0 )
       {
           return NULL;
       }

       return frame;
    }

    bool initialize()
    {
       AVRational frameRate;
       frameRate.den = m_frameRate;
       frameRate.num = 1;

       av_register_all();

       m_pCodec = avcodec_find_encoder_by_name(m_encoderName);

       if( !m_pCodec )
       {
           return false;
       }

       m_pCodecContext = avcodec_alloc_context3( m_pCodec );
       m_pCodecContext->width = m_imageWidth;
       m_pCodecContext->height = m_imageHeight;
       m_pCodecContext->time_base = frameRate;
       m_pCodecContext->gop_size = 0;
       m_pCodecContext->pix_fmt = m_pixType;
       m_pCodecContext->codec_id = m_pCodec->id;
       m_pCodecContext->bit_rate = m_targetBitRate;

       av_opt_set( m_pCodecContext->priv_data, "+CBR", "", 0 );

       return true;
    }

    bool startExport()
    {
       m_frameIndex = 0;
       char fakeFileName[512];
       int checkAllocContext = avformat_alloc_output_context2( &m_pFormatContext, NULL, m_fileType, fakeFileName );

       if ( checkAllocContext < 0 )
       {
           return false;
       }

       if ( !m_pFormatContext )
       {
           return false;
       }

       m_pFormat = m_pFormatContext->oformat;

       if ( m_pFormat->video_codec != AV_CODEC_ID_NONE )
       {
           addVideoStream();

           int checkOpen = avcodec_open2( m_pCodecContext, m_pCodec, NULL );

           if ( checkOpen < 0 )
           {
               return false;
           }

           m_pFrame = allocatePicture( m_pCodecContext->pix_fmt, m_pCodecContext->width, m_pCodecContext->height );                
           if( !m_pFrame )
           {
               return false;
           }
           m_pFrame->pts = 0;
       }

       int checkOpen = avio_open( &m_pFormatContext->pb, m_fileName, AVIO_FLAG_WRITE );
       if ( checkOpen < 0 )
       {
           return false;
       }

       av_dict_set( &(m_pFormatContext->metadata), "title", "QS Test", 0 );

       int checkHeader = avformat_write_header( m_pFormatContext, NULL );
       if ( checkHeader < 0 )
       {
           return false;
       }

       return true;
    }

    int processFrame( AVPacket& avPacket )
    {
       avPacket.stream_index = 0;
       avPacket.pts = av_rescale_q( m_pFrame->pts, m_pStream->codec->time_base, m_pStream->time_base );
       avPacket.dts = av_rescale_q( m_pFrame->pts, m_pStream->codec->time_base, m_pStream->time_base );
       m_pFrame->pts++;

       int retVal = av_interleaved_write_frame( m_pFormatContext, &avPacket );
       return retVal;
    }

    bool exportFrame()
    {
       int success = 1;
       int result = 0;

       AVPacket avPacket;

       av_init_packet( &avPacket );
       avPacket.data = NULL;
       avPacket.size = 0;

       fflush(stdout);

       std::cout << "Before avcodec_encode_video2 for frame: " << m_frameIndex << std::endl;
       success = avcodec_encode_video2( m_pCodecContext, &avPacket, m_pFrame, &result );
       std::cout << "After avcodec_encode_video2 for frame: " << m_frameIndex << std::endl;

       if( result )
       {
           success = processFrame( avPacket );
       }

       av_packet_unref( &avPacket );

       m_frameIndex++;
       return ( success == 0 );
    }

    void endExport()
    {
       int result = 0;
       int success = 0;

       if (m_pFrame)
       {
           while ( success == 0 )
           {
               AVPacket avPacket;
               av_init_packet( &avPacket );
               avPacket.data = NULL;
               avPacket.size = 0;

               fflush(stdout);
               success = avcodec_encode_video2( m_pCodecContext, &avPacket, NULL, &result );

               if( result )
               {
                   success = processFrame( avPacket );
               }
               av_packet_unref( &avPacket );

               if (!result)
               {
                   break;
               }
           }
       }

       if (m_pFormatContext)
       {
           av_write_trailer( m_pFormatContext );

           if( m_pFrame )
           {
               av_frame_free( &m_pFrame );
           }

           avio_closep( &m_pFormatContext->pb );
           avformat_free_context( m_pFormatContext );
           m_pFormatContext = NULL;
       }
    }

    void cleanup()
    {
       if( m_pFrame || m_pCodecContext )
       {
           if( m_pFrame )
           {
               av_frame_free( &amp;m_pFrame );
           }

           if( m_pCodecContext )
           {
               avcodec_close( m_pCodecContext );
               av_free( m_pCodecContext );
           }
       }
    }

    int main()
    {
       bool success = true;
       if (initialize())
       {
           if (startExport())
           {
               for (int loop = 0; loop < m_frameCount; loop++)
               {
                   if (!exportFrame())
                   {
                       std::cout << "Failed to export frame\n";
                       success = false;
                       break;
                   }
               }
               endExport();
           }
           else
           {
               std::cout << "Failed to start export\n";
               success = false;
           }

           cleanup();
       }
       else
       {
           std::cout << "Failed to initialize export\n";
           success = false;
       }

       if (success)
       {
           std::cout << "Successfully exported file\n";
       }
       return 1;
    }
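As an aside on the timestamp handling: processFrame above relies on av_rescale_q to convert the frame pts from the codec time base (1/30 of a second per tick here) into the stream time base. Ignoring FFmpeg's overflow-safe 64-bit arithmetic, the conversion is plain fraction math; a minimal sketch (the name rescale_q is mine, not FFmpeg's):

```python
from fractions import Fraction

def rescale_q(ts, src_tb, dst_tb):
    # Convert timestamp ts from time base src_tb to time base dst_tb,
    # rounding to the nearest integer tick (av_rescale_q's default
    # rounding mode is nearest; it also guards against 64-bit overflow,
    # which this sketch does not).
    return round(ts * Fraction(*src_tb) / Fraction(*dst_tb))

# Frame 30 in a 1/30 time base is the 1-second mark,
# i.e. tick 15360 in a 1/15360 stream time base.
print(rescale_q(30, (1, 30), (1, 15360)))
```

This is why the packet pts/dts stay consistent even though the muxer may pick a stream time base that differs from the codec's.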
  • Issue after video rotation: how to fix?

    2 April 2015, by Vahagn

    I have the following code to rotate a video:

    OpenCVFrameConverter.ToIplImage converter2 = new OpenCVFrameConverter.ToIplImage();

    for (int i = firstIndex; i <= lastIndex; i++) {
       long t = timestamps[i % timestamps.length] - startTime;
       if (t >= 0) {
           if (t > recorder.getTimestamp()) {
               recorder.setTimestamp(t);
           }
           Frame g = converter2.convert(rotate(converter2.convertToIplImage(images[i % images.length]), 90));
           recorder.record(g);
       }
    }

    images[i] is a Frame in JavaCV.
    Afterwards the video has green lines.

    UPDATE
    Conversion function

    /*
    * Copyright (C) 2015 Samuel Audet
    *
    * This file is part of JavaCV.
    *
    * JavaCV is free software: you can redistribute it and/or modify
    * it under the terms of the GNU General Public License as published by
    * the Free Software Foundation, either version 2 of the License, or
    * (at your option) any later version (subject to the "Classpath" exception
    * as provided in the LICENSE.txt file that accompanied this code).
    *
    * JavaCV is distributed in the hope that it will be useful,
    * but WITHOUT ANY WARRANTY; without even the implied warranty of
    * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    * GNU General Public License for more details.
    *
    * You should have received a copy of the GNU General Public License
    * along with JavaCV.  If not, see <http://www.gnu.org/licenses/>.
    */

    package com.example.vvardanyan.ffmpeg;

    import org.bytedeco.javacpp.BytePointer;
    import org.bytedeco.javacpp.Pointer;

    import java.nio.Buffer;

    import static org.bytedeco.javacpp.opencv_core.CV_16S;
    import static org.bytedeco.javacpp.opencv_core.CV_16U;
    import static org.bytedeco.javacpp.opencv_core.CV_32F;
    import static org.bytedeco.javacpp.opencv_core.CV_32S;
    import static org.bytedeco.javacpp.opencv_core.CV_64F;
    import static org.bytedeco.javacpp.opencv_core.CV_8S;
    import static org.bytedeco.javacpp.opencv_core.CV_8U;
    import static org.bytedeco.javacpp.opencv_core.CV_MAKETYPE;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_16S;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_16U;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_32F;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_32S;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_64F;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_8S;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_8U;
    import static org.bytedeco.javacpp.opencv_core.IplImage;
    import static org.bytedeco.javacpp.opencv_core.Mat;

    /**
    * A utility class to map data between {@link Frame} and {@link IplImage} or {@link Mat}.
    * Since this is an abstract class, one must choose between two concrete classes:
    * {@link ToIplImage} or {@link ToMat}.
    *
    * @author Samuel Audet
    */
    public abstract class OpenCVFrameConverter<F> extends FrameConverter<F> {
       IplImage img;
       Mat mat;

       public static class ToIplImage extends OpenCVFrameConverter<IplImage> {
           @Override public IplImage convert(Frame frame) { return convertToIplImage(frame); }
       }

       public static class ToMat extends OpenCVFrameConverter<Mat> {
           @Override public Mat convert(Frame frame) { return convertToMat(frame); }
       }

       public static int getFrameDepth(int depth) {
           switch (depth) {
               case IPL_DEPTH_8U:  case CV_8U:  return Frame.DEPTH_UBYTE;
               case IPL_DEPTH_8S:  case CV_8S:  return Frame.DEPTH_BYTE;
               case IPL_DEPTH_16U: case CV_16U: return Frame.DEPTH_USHORT;
               case IPL_DEPTH_16S: case CV_16S: return Frame.DEPTH_SHORT;
               case IPL_DEPTH_32F: case CV_32F: return Frame.DEPTH_FLOAT;
               case IPL_DEPTH_32S: case CV_32S: return Frame.DEPTH_INT;
               case IPL_DEPTH_64F: case CV_64F: return Frame.DEPTH_DOUBLE;
               default: return -1;
           }
       }

       public static int getIplImageDepth(Frame frame) {
           switch (frame.imageDepth) {
               case Frame.DEPTH_UBYTE:  return IPL_DEPTH_8U;
               case Frame.DEPTH_BYTE:   return IPL_DEPTH_8S;
               case Frame.DEPTH_USHORT: return IPL_DEPTH_16U;
               case Frame.DEPTH_SHORT:  return IPL_DEPTH_16S;
               case Frame.DEPTH_FLOAT:  return IPL_DEPTH_32F;
               case Frame.DEPTH_INT:    return IPL_DEPTH_32S;
               case Frame.DEPTH_DOUBLE: return IPL_DEPTH_64F;
               default:  return -1;
           }
       }
       static boolean isEqual(Frame frame, IplImage img) {
           return img != null && frame != null && frame.image != null && frame.image.length > 0
                   && frame.imageWidth == img.width() && frame.imageHeight == img.height()
                   && frame.imageChannels == img.nChannels() && getIplImageDepth(frame) == img.depth()
                   && new Pointer(frame.image[0]).address() == img.imageData().address()
                   && frame.imageStride * Math.abs(frame.imageDepth) / 8 == img.widthStep();
       }
       public IplImage convertToIplImage(Frame frame) {
           if (frame == null) {
               return null;
           } else if (frame.opaque instanceof IplImage) {
               return (IplImage)frame.opaque;
           } else if (!isEqual(frame, img)) {
               int depth = getIplImageDepth(frame);
               img = depth < 0 ? null : IplImage.createHeader(frame.imageWidth, frame.imageHeight, depth, frame.imageChannels)
                       .imageData(new BytePointer(new Pointer(frame.image[0].position(0)))).widthStep(frame.imageStride * Math.abs(frame.imageDepth) / 8);
           }
           return img;
       }
       public Frame convert(IplImage img) {
           if (img == null) {
               return null;
           } else if (!isEqual(frame, img)) {
               frame = new Frame();
               frame.imageWidth = img.width();
               frame.imageHeight = img.height();
               frame.imageDepth = getFrameDepth(img.depth());
               frame.imageChannels = img.nChannels();
               frame.imageStride = img.widthStep() * 8 / Math.abs(frame.imageDepth);
               frame.image = new Buffer[] { img.createBuffer() };
               frame.opaque = img;
           }
           return frame;
       }

       public static int getMatDepth(Frame frame) {
           switch (frame.imageDepth) {
               case Frame.DEPTH_UBYTE:  return CV_8U;
               case Frame.DEPTH_BYTE:   return CV_8S;
               case Frame.DEPTH_USHORT: return CV_16U;
               case Frame.DEPTH_SHORT:  return CV_16S;
               case Frame.DEPTH_FLOAT:  return CV_32F;
               case Frame.DEPTH_INT:    return CV_32S;
               case Frame.DEPTH_DOUBLE: return CV_64F;
               default:  return -1;
           }
       }
       static boolean isEqual(Frame frame, Mat mat) {
           return mat != null && frame != null && frame.image != null && frame.image.length > 0
                   && frame.imageWidth == mat.cols() && frame.imageHeight == mat.rows()
                   && frame.imageChannels == mat.channels() && getMatDepth(frame) == mat.depth()
                   && new Pointer(frame.image[0]).address() == mat.data().address()
                   && frame.imageStride * Math.abs(frame.imageDepth) / 8 == (int)mat.step();
       }
       public Mat convertToMat(Frame frame) {
           if (frame == null) {
               return null;
           } else if (frame.opaque instanceof Mat) {
               return (Mat)frame.opaque;
           } else if (!isEqual(frame, mat)) {
               int depth = getMatDepth(frame);
               mat = depth < 0 ? null : new Mat(frame.imageHeight, frame.imageWidth, CV_MAKETYPE(depth, frame.imageChannels),
                       new Pointer(frame.image[0].position(0)), frame.imageStride * Math.abs(frame.imageDepth) / 8);
           }
           return mat;
       }
       public Frame convert(Mat mat) {
           if (mat == null) {
               return null;
           } else if (!isEqual(frame, mat)) {
               frame = new Frame();
               frame.imageWidth = mat.cols();
               frame.imageHeight = mat.rows();
               frame.imageDepth = getFrameDepth(mat.depth());
               frame.imageChannels = mat.channels();
               frame.imageStride = (int)mat.step() * 8 / Math.abs(frame.imageDepth);
               frame.image = new Buffer[] { mat.createBuffer() };
               frame.opaque = mat;
           }
           return frame;
       }
    }
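Worth noting about the converter above: it deliberately carries OpenCV's widthStep (bytes per row, which may include alignment padding) through to Frame.imageStride (stride in samples). Rotation code that assumes each row is exactly width * channels samples long is one common way to end up with skewed rows or colored stripes in the output. A toy sketch of the stride arithmetic used in convertToIplImage and isEqual (the numbers are illustrative, not taken from the question):

```python
def stride_in_samples(width_step_bytes, image_depth_bits):
    # Mirrors: frame.imageStride = img.widthStep() * 8 / abs(frame.imageDepth)
    return width_step_bytes * 8 // abs(image_depth_bits)

def stride_in_bytes(image_stride, image_depth_bits):
    # Mirrors the isEqual check:
    # frame.imageStride * abs(frame.imageDepth) / 8 == img.widthStep()
    return image_stride * abs(image_depth_bits) // 8

# An 85-pixel-wide, 3-channel, 8-bit image needs 255 bytes per row,
# but rows may be padded to a 4-byte boundary, i.e. 256 bytes:
stride = stride_in_samples(256, 8)   # 256 samples, one more than 85 * 3
assert stride_in_bytes(stride, 8) == 256
```

If the rotated image's stride is not propagated the same way, each row starts slightly offset from where the consumer expects, which matches the "green lines" symptom described above.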
  • ffmpeg: How can a MOV with a transparent background be created?

    25 March 2017, by Mat

    I’m trying, with no success at all, to convert the green background pixels into transparent ones and output the result as a clip with ffmpeg. N.b. I do not want to lay the clip over anything; I’m not having a problem with that. What I want is a clip with a transparent background for the OpenShot video editor (whose chromakey filter doesn’t work satisfactorily).

    What I have tried (amongst 1 zillion other things over the last 15 hrs.) was

    ffmpeg.exe -i in.mov -vf chromakey=0x008001:0.115:0.0 -c:v qtrle out.mov

    but the pixels simply would not be transparent. Seemingly, nothing happens. I reckon the filter is ok, because it works fine in a complex chain (overlaying a background image).

    The output of ffprobe -show_streams -show_format for out.mov is as follows:

    [STREAM]
    index=0
    codec_name=qtrle
    codec_long_name=QuickTime Animation (RLE) video
    profile=unknown
    codec_type=video
    codec_time_base=1/30
    codec_tag_string=rle
    codec_tag=0x20656c72
    width=1920
    height=1080
    coded_width=1920
    coded_height=1080
    has_b_frames=0
    sample_aspect_ratio=1:1
    display_aspect_ratio=16:9
    pix_fmt=bgra
    level=-99
    color_range=N/A
    color_space=unknown
    color_transfer=unknown
    color_primaries=unknown
    chroma_location=unspecified
    field_order=progressive
    timecode=N/A
    refs=1
    id=N/A
    r_frame_rate=30/1
    avg_frame_rate=30/1
    time_base=1/15360
    start_pts=0
    start_time=0.000000
    duration_ts=54789
    duration=3.566992
    bit_rate=822383192
    max_bit_rate=N/A
    bits_per_raw_sample=N/A
    nb_frames=107
    nb_read_frames=N/A
    nb_read_packets=N/A
    DISPOSITION:default=1
    DISPOSITION:dub=0
    DISPOSITION:original=0
    DISPOSITION:comment=0
    DISPOSITION:lyrics=0
    DISPOSITION:karaoke=0
    DISPOSITION:forced=0
    DISPOSITION:hearing_impaired=0
    DISPOSITION:visual_impaired=0
    DISPOSITION:clean_effects=0
    DISPOSITION:attached_pic=0
    DISPOSITION:timed_thumbnails=0
    TAG:language=eng
    TAG:handler_name=DataHandler
    TAG:encoder=Lavc57.64.101 qtrle
    [/STREAM]
    [STREAM]
    index=1
    codec_name=aac
    codec_long_name=AAC (Advanced Audio Coding)
    profile=LC
    codec_type=audio
    codec_time_base=1/44100
    codec_tag_string=mp4a
    codec_tag=0x6134706d
    sample_fmt=fltp
    sample_rate=44100
    channels=2
    channel_layout=stereo
    bits_per_sample=0
    id=N/A
    r_frame_rate=0/0
    avg_frame_rate=0/0
    time_base=1/44100
    start_pts=926
    start_time=0.020998
    duration_ts=157481
    duration=3.570998
    bit_rate=132103
    max_bit_rate=132103
    bits_per_raw_sample=N/A
    nb_frames=153
    nb_read_frames=N/A
    nb_read_packets=N/A
    DISPOSITION:default=1
    DISPOSITION:dub=0
    DISPOSITION:original=0
    DISPOSITION:comment=0
    DISPOSITION:lyrics=0
    DISPOSITION:karaoke=0
    DISPOSITION:forced=0
    DISPOSITION:hearing_impaired=0
    DISPOSITION:visual_impaired=0
    DISPOSITION:clean_effects=0
    DISPOSITION:attached_pic=0
    DISPOSITION:timed_thumbnails=0
    TAG:language=eng
    TAG:handler_name=DataHandler
    [/STREAM]
    [FORMAT]
    filename=out.mov
    nb_streams=2
    nb_programs=0
    format_name=mov,mp4,m4a,3gp,3g2,mj2
    format_long_name=QuickTime / MOV
    start_time=0.000000
    duration=3.567000
    size=366708874
    bit_rate=822447712
    probe_score=100
    TAG:major_brand=qt
    TAG:minor_version=512
    TAG:compatible_brands=qt
    TAG:encoder=Lavf57.56.101
    [/FORMAT]
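(Sanity check on the dump above: the overall bit_rate reported in [FORMAT] is just the file size in bits divided by the duration, which is consistent with the numbers shown:)

```python
size_bytes = 366_708_874   # size= from the [FORMAT] section above
duration_s = 3.567         # duration= from the same section
computed = size_bytes * 8 / duration_s
# agrees with the reported bit_rate=822447712 to within rounding
assert abs(computed - 822_447_712) < 10_000
```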

    I have a "sample" clip which shows the behaviour I want, with the following stream and information :

    [STREAM]
    index=0
    codec_name=qtrle
    codec_long_name=QuickTime Animation (RLE) video
    profile=unknown
    codec_type=video
    codec_time_base=1/24
    codec_tag_string=rle
    codec_tag=0x20656c72
    width=1920
    height=1080
    coded_width=1920
    coded_height=1080
    has_b_frames=0
    sample_aspect_ratio=0:1
    display_aspect_ratio=0:1
    pix_fmt=bgra
    level=-99
    color_range=N/A
    color_space=unknown
    color_transfer=unknown
    color_primaries=unknown
    chroma_location=unspecified
    field_order=progressive
    timecode=N/A
    refs=1
    id=N/A
    r_frame_rate=24/1
    avg_frame_rate=24/1
    time_base=1/12288
    start_pts=0
    start_time=0.000000
    duration_ts=74760
    duration=6.083984
    bit_rate=49226848
    max_bit_rate=N/A
    bits_per_raw_sample=N/A
    nb_frames=146
    nb_read_frames=N/A
    nb_read_packets=N/A
    DISPOSITION:default=1
    DISPOSITION:dub=0
    DISPOSITION:original=0
    DISPOSITION:comment=0
    DISPOSITION:lyrics=0
    DISPOSITION:karaoke=0
    DISPOSITION:forced=0
    DISPOSITION:hearing_impaired=0
    DISPOSITION:visual_impaired=0
    DISPOSITION:clean_effects=0
    DISPOSITION:attached_pic=0
    DISPOSITION:timed_thumbnails=0
    TAG:language=eng
    TAG:handler_name=DataHandler
    TAG:encoder=Lavc57.24.102 qtrle
    [/STREAM]
    [STREAM]
    index=1
    codec_name=aac
    codec_long_name=AAC (Advanced Audio Coding)
    profile=LC
    codec_type=audio
    codec_time_base=1/48000
    codec_tag_string=mp4a
    codec_tag=0x6134706d
    sample_fmt=fltp
    sample_rate=48000
    channels=2
    channel_layout=stereo
    bits_per_sample=0
    id=N/A
    r_frame_rate=0/0
    avg_frame_rate=0/0
    time_base=1/48000
    start_pts=0
    start_time=0.000000
    duration_ts=293856
    duration=6.122000
    bit_rate=53537
    max_bit_rate=128000
    bits_per_raw_sample=N/A
    nb_frames=288
    nb_read_frames=N/A
    nb_read_packets=N/A
    DISPOSITION:default=1
    DISPOSITION:dub=0
    DISPOSITION:original=0
    DISPOSITION:comment=0
    DISPOSITION:lyrics=0
    DISPOSITION:karaoke=0
    DISPOSITION:forced=0
    DISPOSITION:hearing_impaired=0
    DISPOSITION:visual_impaired=0
    DISPOSITION:clean_effects=0
    DISPOSITION:attached_pic=0
    DISPOSITION:timed_thumbnails=0
    TAG:language=eng
    TAG:handler_name=DataHandler
    [/STREAM]
    [FORMAT]
    filename=templateOK.mov
    nb_streams=2
    nb_programs=0
    format_name=mov,mp4,m4a,3gp,3g2,mj2
    format_long_name=QuickTime / MOV
    start_time=0.000000
    duration=6.144000
    size=37478506
    bit_rate=48800138
    probe_score=100
    TAG:major_brand=qt
    TAG:minor_version=512
    TAG:compatible_brands=qt
    TAG:encoder=Lavf57.25.100
    [/FORMAT]

    and I simply am not able to spot the relevant difference.

    The input, output and the working template can be found here.

    (The security issue you might see when clicking the link comes from the server certificate being self-signed. You can accept a temporary exception. Btw: the ridiculous file size of the output file will be the next nut to crack. Probably something about compression.)
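On that file-size point: the ffprobe dumps identify the codec as QuickTime Animation (RLE), so frames that are mostly uniform (like a flat chroma background) compress to almost nothing, while noisy frames barely compress at all; that alone could plausibly account for the gap between the 37 MB template and the 366 MB output. A toy run-length encoder illustrates the effect (this is not the actual qtrle bitstream format, just the general idea):

```python
def rle(data):
    # Toy run-length encoding: a list of (value, run_length) pairs.
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

flat_row = [0x00] * 1920                    # uniform background row
noisy_row = [i % 2 for i in range(1920)]    # alternating pixels

assert len(rle(flat_row)) == 1       # whole row collapses to one run
assert len(rle(noisy_row)) == 1920   # no compression at all
```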