Advanced search

Media (0)

Keyword: - Tags - /xmlrpc

No media matching your criteria is available on the site.

Other articles (69)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document in SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieving the technical information of the file's audio and video streams; generating a thumbnail: extracting a (...)

On other sites (11142)

  • JSmpeg is not playing audio from websocket stream

    5 June 2023, by Nik

     I am trying to stream RTSP to a web browser using ffmpeg through a WebSocket relay written in Node.js, taken from https://github.com/phoboslab/jsmpeg. In the browser I am using JSMpeg to display the RTSP stream; the video plays fine, but the audio does not.

     The ffmpeg command:

    ffmpeg -rtsp_transport tcp -i rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4 
       -f mpegts -c:v mpeg1video -c:a mp2 http://127.0.0.1:8081/stream_from_ffmpeg/


    


     The Node.js WebSocket relay:

     // Use the websocket-relay to serve a raw MPEG-TS over WebSockets. You can use
     // ffmpeg to feed the relay. ffmpeg -> websocket-relay -> browser
     // Example:
     // node websocket-relay yoursecret 8081 8082
     // ffmpeg -i <some input> -f mpegts http://localhost:8081/yoursecret

     var fs = require('fs'),
         http = require('http'),
         WebSocket = require('ws');

     if (process.argv.length < 3) {
         console.log(
             'Usage: \n' +
             'node websocket-relay.js <secret> [<stream-port> <websocket-port>]'
         );
         process.exit();
     }

     var STREAM_SECRET = process.argv[2],
         STREAM_PORT = process.argv[3] || 8081,
         WEBSOCKET_PORT = process.argv[4] || 8082,
         RECORD_STREAM = false;

     // Websocket Server
     var socketServer = new WebSocket.Server({port: WEBSOCKET_PORT, perMessageDeflate: false});
     socketServer.connectionCount = 0;
     socketServer.on('connection', function(socket, upgradeReq) {
         socketServer.connectionCount++;
         console.log(
             'New WebSocket Connection: ',
             (upgradeReq || socket.upgradeReq).socket.remoteAddress,
             (upgradeReq || socket.upgradeReq).headers['user-agent'],
             '('+socketServer.connectionCount+' total)'
         );
         socket.on('close', function(code, message){
             socketServer.connectionCount--;
             console.log(
                 'Disconnected WebSocket ('+socketServer.connectionCount+' total)'
             );
         });
     });
     socketServer.broadcast = function(data) {
         socketServer.clients.forEach(function each(client) {
             if (client.readyState === WebSocket.OPEN) {
                 client.send(data);
             }
         });
     };

     // HTTP Server to accept incoming MPEG-TS Stream from ffmpeg
     var streamServer = http.createServer( function(request, response) {
         var params = request.url.substr(1).split('/');

         if (params[0] !== STREAM_SECRET) {
             console.log(
                 'Failed Stream Connection: '+ request.socket.remoteAddress + ':' +
                 request.socket.remotePort + ' - wrong secret.'
             );
             response.end();
         }

         response.connection.setTimeout(0);
         console.log(
             'Stream Connected: ' +
             request.socket.remoteAddress + ':' +
             request.socket.remotePort
         );
         request.on('data', function(data){
             socketServer.broadcast(data);
             if (request.socket.recording) {
                 request.socket.recording.write(data);
             }
         });
         request.on('end',function(){
             console.log('close');
             if (request.socket.recording) {
                 request.socket.recording.close();
             }
         });

         // Record the stream to a local file?
         if (RECORD_STREAM) {
             var path = 'recordings/' + Date.now() + '.ts';
             request.socket.recording = fs.createWriteStream(path);
         }
     })
     // Keep the socket open for streaming
     streamServer.headersTimeout = 0;
     streamServer.listen(STREAM_PORT);

     console.log('Listening for incoming MPEG-TS Stream on http://127.0.0.1:'+STREAM_PORT+'/<secret>');
     console.log('Awaiting WebSocket connections on ws://127.0.0.1:'+WEBSOCKET_PORT+'/');


     The front-end code:


     <html>
       <head>
         <script src="jsmpeg.min.js"></script>
       </head>
       <body>
         <canvas id="video-canvas"></canvas>
         <script>
           let url;
           let player;
           let canvas = document.getElementById("video-canvas");
           let ipAddr = "127.0.0.1:8082";
           window.onload = async () => {
             url = `ws://${ipAddr}`;
             player = new JSMpeg.Player(url, { canvas: canvas, });
           };
         </script>
       </body>
     </html>


     The above code works fine and plays the video, but no audio is playing.

     Things I tried:


     I changed the audio context state inside the player object from suspended to running:


     player.audioOut.context.onstatechange = async () => {
         console.log("Event triggered by audio");

         // note: this compares the AudioContext object itself to "suspended";
         // the state string actually lives in player.audioOut.context.state
         if (player.audioOut.context === "suspended") {
             await player.audioOut.context.resume();
         }
     }


  • libavcodec: how to encode with the h264 codec, in an mp4 container, with controllable frame rate and bitrate (through C code)

    26 May 2016, by musimbate

     I am trying to record the screen of a PC, encode the recorded frames with the h264 encoder,
     and wrap them into an mp4 container. I want to do this because this Super User link http://superuser.com/questions/300897/what-is-a-codec-e-g-divx-and-how-does-it-differ-from-a-file-format-e-g-mp/300997#300997 suggests it gives a good trade-off between size and quality of the output file.

    The application I am working on should allow users to record a few hours of video and have the minimum output file size with decent quality.

     The code I have cooked up so far allows me to record and save .mpg (container) files with the mpeg1video encoder.

     Running:

     ffmpeg -i test.mpg

     on the output file gives the following output:

    [mpegvideo @ 028c7400] Estimating duration from bitrate, this may be inaccurate
    Input #0, mpegvideo, from 'test.mpg':
     Duration: 00:00:00.29, bitrate: 104857 kb/s
       Stream #0:0: Video: mpeg1video, yuv420p(tv), 1366x768 [SAR 1:1 DAR 683:384], 104857 kb/s, 25 fps, 25 tbr, 1200k tbn, 25 tbc

    I have these settings for my output :

    const char * filename="test.mpg";
       int codec_id= AV_CODEC_ID_MPEG1VIDEO;
       AVCodec *codec11;
       AVCodecContext *outContext= NULL;
       int got_output;
       FILE *f;
       AVPacket pkt;
       uint8_t endcode[] = { 0, 0, 1, 0xb7 };

       /* put sample parameters */
       outContext->bit_rate = 400000;
       /* resolution must be a multiple of two */
       outContext->width=pCodecCtx->width;
       outContext->height=pCodecCtx->height;
       /* frames per second */
       outContext->time_base.num=1;
       outContext->time_base.den=25;
       /* emit one intra frame every ten frames
        * check frame pict_type before passing frame
        * to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
        * then gop_size is ignored and the output of encoder
        * will always be I frame irrespective to gop_size
        */
       outContext->gop_size = 10;
       outContext->max_b_frames = 1;
       outContext->pix_fmt = AV_PIX_FMT_YUV420P;

     When I change int codec_id= AV_CODEC_ID_MPEG1VIDEO to int codec_id= AV_CODEC_ID_H264, I get a file that does not play in VLC.

    I have read that writing the

    uint8_t endcode[] = { 0, 0, 1, 0xb7 };

     array at the end of the file when encoding is finished makes it a legitimate mpeg file. It is written like this:

    fwrite(endcode, 1, sizeof(endcode), f);
       fclose(f);

     in my code. Should I do the same thing when I change my encoder to AV_CODEC_ID_H264?
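     For what it's worth, the 0, 0, 1, 0xb7 sequence end code is specific to MPEG-1/2 elementary streams; appending it to a raw H.264 dump does not produce an .mp4, which is likely why VLC refuses the file. Getting a real mp4 container requires the libavformat muxer instead of fwrite(). A rough, untested sketch against the same-era API as the code below (reusing its codec11, outContext and pkt):

```c
/* Sketch (untested): mux encoded packets into MP4 with libavformat. */
AVFormatContext *ofmt = NULL;
avformat_alloc_output_context2(&ofmt, NULL, NULL, "test.mp4");
AVStream *vst = avformat_new_stream(ofmt, codec11);
avcodec_copy_context(vst->codec, outContext);  /* carry encoder parameters */
vst->time_base = outContext->time_base;
avio_open(&ofmt->pb, "test.mp4", AVIO_FLAG_WRITE);
avformat_write_header(ofmt, NULL);

/* for each encoded packet, instead of fwrite(): */
av_packet_rescale_ts(&pkt, outContext->time_base, vst->time_base);
pkt.stream_index = vst->index;
av_interleaved_write_frame(ofmt, &pkt);

/* after flushing the delayed frames: */
av_write_trailer(ofmt);
avio_closep(&ofmt->pb);
```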

    I am capturing using gdi input like this :

    AVDictionary* options = NULL;
       //Set some options
       //grabbing frame rate
       av_dict_set(&options,"framerate","30",0);
       AVInputFormat *ifmt=av_find_input_format("gdigrab");
       if(avformat_open_input(&amp;pFormatCtx,"desktop",ifmt,&amp;options)!=0){
           printf("Couldn't open input stream.\n");
           return -1;
           }

     I want to be able to lower my grabbing rate to optimize the output file size,
     but when I change it to 20, for example, I get a video that plays too fast. How do
     I get a video that plays at normal speed with frames captured at 20 fps, or any
     lower frame rate?

    While recording I get the following output on the standard error output :

    [gdigrab @ 00cdb8e0] Capturing whole desktop as 1366x768x32 at (0,0)
    Input #0, gdigrab, from '(null)':
     Duration: N/A, start: 1420718663.655713, bitrate: 1006131 kb/s
       Stream #0:0: Video: bmp, bgra, 1366x768, 1006131 kb/s, 29.97 tbr, 1000k tbn, 29.97 tbc
    [swscaler @ 00d24120] Warning: data is not aligned! This can lead to a speedloss
    [mpeg1video @ 00cdd160] AVFrame.format is not set
    [mpeg1video @ 00cdd160] AVFrame.width or height is not set
    [mpeg1video @ 00cdd160] AVFrame.format is not set
    [mpeg1video @ 00cdd160] AVFrame.width or height is not set
    [mpeg1video @ 00cdd160] AVFrame.format is not set

     How do I get rid of these errors in my code?
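     Those messages come from the encode side, not the grab: av_frame_alloc() leaves a frame's format and geometry unset, and avpicture_fill() only attaches the buffer. A likely fix (untested sketch against the code below) is to fill them in on outframe before encoding:

```c
/* Sketch (untested): set these after av_frame_alloc()/avpicture_fill(). */
outframe->format = outContext->pix_fmt;   /* AV_PIX_FMT_YUV420P */
outframe->width  = outContext->width;
outframe->height = outContext->height;
```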

     In summary:
     1) How do I encode h264 video wrapped in an mp4 container?

     2) How do I capture at lower frame rates and still play
     the encoded video at normal speed?

     3) How do I set the format (and which format? it depends on the codec)
     and the width and height info on the frames I write?

    The code I am using in its entirety is shown below

    extern "C"
    {
    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
    #include "libswscale/swscale.h"
    #include "libavdevice/avdevice.h"


    #include <libavutil/opt.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/common.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/samplefmt.h>
    //SDL
    #include "SDL.h"
    #include "SDL_thread.h"
    }

    //Output YUV420P
    #define OUTPUT_YUV420P 0
    //'1' Use Dshow
    //'0' Use GDIgrab
    #define USE_DSHOW 0

    int main(int argc, char* argv[])
    {

       //1.WE HAVE THE FORMAT CONTEXT
       //THIS IS FROM THE DESKTOP GRAB STREAM.
       AVFormatContext *pFormatCtx;
       int             i, videoindex;
       AVCodecContext  *pCodecCtx;
       AVCodec         *pCodec;

       av_register_all();
       avformat_network_init();

       //ASSIGN STH TO THE FORMAT CONTEXT.
       pFormatCtx = avformat_alloc_context();

       //Register Device
       avdevice_register_all();
       //Windows
    #ifdef _WIN32
    #if USE_DSHOW
       //Use dshow
       //
       //Need to Install screen-capture-recorder
       //screen-capture-recorder
       //Website: http://sourceforge.net/projects/screencapturer/
       //
       AVInputFormat *ifmt=av_find_input_format("dshow");
       //if(avformat_open_input(&pFormatCtx,"video=screen-capture-recorder",ifmt,NULL)!=0){
       if(avformat_open_input(&pFormatCtx,"video=UScreenCapture",ifmt,NULL)!=0){
           printf("Couldn't open input stream.\n");
           return -1;
       }
    #else
       //Use gdigrab
       AVDictionary* options = NULL;
       //Set some options
       //grabbing frame rate
       av_dict_set(&options,"framerate","30",0);
       //The distance from the left edge of the screen or desktop
       //av_dict_set(&options,"offset_x","20",0);
       //The distance from the top edge of the screen or desktop
       //av_dict_set(&options,"offset_y","40",0);
       //Video frame size. The default is to capture the full screen
       //av_dict_set(&options,"video_size","640x480",0);
       AVInputFormat *ifmt=av_find_input_format("gdigrab");
       if(avformat_open_input(&pFormatCtx,"desktop",ifmt,&options)!=0){
           printf("Couldn't open input stream.\n");
           return -1;
       }

    #endif
    #endif//FOR THE WIN32 THING.

       if(avformat_find_stream_info(pFormatCtx,NULL)<0)
       {
           printf("Couldn't find stream information.\n");
           return -1;
       }
       videoindex=-1;
       for(i=0; i<pFormatCtx->nb_streams; i++)
           if(pFormatCtx->streams[i]->codec->codec_type
                   ==AVMEDIA_TYPE_VIDEO)
           {
               videoindex=i;
               break;
           }
       if(videoindex==-1)
       {
           printf("Didn't find a video stream.\n");
           return -1;
       }
       pCodecCtx=pFormatCtx->streams[videoindex]->codec;
       pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
       if(pCodec==NULL)
       {
           printf("Codec not found.\n");
           return -1;
       }
       if(avcodec_open2(pCodecCtx, pCodec,NULL)<0)
       {
           printf("Could not open codec.\n");
           return -1;
       }

       //THIS IS WHERE YOU CONTROL THE FORMAT(THROUGH FRAMES).
       AVFrame *pFrame;

       pFrame=av_frame_alloc();

       int ret, got_picture;

       AVPacket *packet=(AVPacket *)av_malloc(sizeof(AVPacket));

       //TRY TO INIT THE PACKET HERE
        av_init_packet(packet);


       //Output Information-----------------------------
       printf("File Information---------------------\n");
       av_dump_format(pFormatCtx,0,NULL,0);
       printf("-------------------------------------------------\n");


    //<<--FOR WRITING MPG FILES
       //<<--START:PREPARE TO WRITE YOUR MPG FILE.

       const char * filename="test.mpg";
       int codec_id= AV_CODEC_ID_MPEG1VIDEO;



       AVCodec *codec11;
       AVCodecContext *outContext= NULL;
       int got_output;
       FILE *f;
       AVPacket pkt;
       uint8_t endcode[] = { 0, 0, 1, 0xb7 };

       printf("Encode video file %s\n", filename);

       /* find the mpeg1 video encoder */
       codec11 = avcodec_find_encoder((AVCodecID)codec_id);
       if (!codec11) {
           fprintf(stderr, "Codec not found\n");
           exit(1);
       }

       outContext = avcodec_alloc_context3(codec11);
       if (!outContext) {
           fprintf(stderr, "Could not allocate video codec context\n");
           exit(1);
       }

       /* put sample parameters */
       outContext->bit_rate = 400000;
       /* resolution must be a multiple of two */

       outContext->width=pCodecCtx->width;
       outContext->height=pCodecCtx->height;


       /* frames per second */
       outContext->time_base.num=1;
       outContext->time_base.den=25;

       /* emit one intra frame every ten frames
        * check frame pict_type before passing frame
        * to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
        * then gop_size is ignored and the output of encoder
        * will always be I frame irrespective to gop_size
        */
       outContext->gop_size = 10;
       outContext->max_b_frames = 1;
       outContext->pix_fmt = AV_PIX_FMT_YUV420P;

       if (codec_id == AV_CODEC_ID_H264)
           av_opt_set(outContext->priv_data, "preset", "slow", 0);

       /* open it */
       if (avcodec_open2(outContext, codec11, NULL) < 0) {
           fprintf(stderr, "Could not open codec\n");
           exit(1);
       }

       f = fopen(filename, "wb");
       if (!f) {
           fprintf(stderr, "Could not open %s\n", filename);
           exit(1);
       }


       AVFrame *outframe = av_frame_alloc();
       int nbytes = avpicture_get_size(outContext->pix_fmt,
                                      outContext->width,
                                      outContext->height);

       uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

      //ASSOCIATE THE FRAME TO THE ALLOCATED BUFFER.
       avpicture_fill((AVPicture*)outframe, outbuffer,
                      AV_PIX_FMT_YUV420P,
                      outContext->width, outContext->height);

       SwsContext* swsCtx_ ;
       swsCtx_= sws_getContext(pCodecCtx->width,
                               pCodecCtx->height,
                               pCodecCtx->pix_fmt,
                               outContext->width, outContext->height,
                               outContext->pix_fmt,
                               SWS_BICUBIC, NULL, NULL, NULL);


       //HERE WE START PULLING PACKETS FROM THE SPECIFIED FORMAT CONTEXT.
       while(av_read_frame(pFormatCtx, packet)>=0)
       {
           if(packet->stream_index==videoindex)
           {
               ret= avcodec_decode_video2(pCodecCtx,
                                            pFrame,
                                         &got_picture,packet );
            if(ret < 0)
               {
                   printf("Decode Error.\n");
                   return -1;
               }
               if(got_picture)
               {

               sws_scale(swsCtx_, pFrame->data, pFrame->linesize,
                     0, pCodecCtx->height, outframe->data,
                     outframe->linesize);


            av_init_packet(&pkt);
            pkt.data = NULL;    // packet data will be allocated by the encoder
            pkt.size = 0;


            ret = avcodec_encode_video2(outContext, &pkt, outframe, &got_output);
            if (ret < 0) {
                  fprintf(stderr, "Error encoding frame\n");
                  exit(1);
                 }

               if (got_output) {
                   printf("Write frame %3d (size=%5d)\n", i, pkt.size);
                   fwrite(pkt.data, 1, pkt.size, f);
                    av_free_packet(&pkt);
                  }

               }
           }

           av_free_packet(packet);
       }//THE LOOP TO PULL PACKETS FROM THE FORMAT CONTEXT ENDS HERE.



       //
       /* get the delayed frames */
       for (got_output = 1; got_output; i++) {
           //fflush(stdout);

        ret = avcodec_encode_video2(outContext, &pkt, NULL, &got_output);
        if (ret < 0) {
               fprintf(stderr, "Error encoding frame\n");
               exit(1);
           }

           if (got_output) {
               printf("Write frame %3d (size=%5d)\n", i, pkt.size);
               fwrite(pkt.data, 1, pkt.size, f);
            av_free_packet(&pkt);
           }
       }



       /* add sequence end code to have a real mpeg file */
       fwrite(endcode, 1, sizeof(endcode), f);
       fclose(f);

       avcodec_close(outContext);
       av_free(outContext);
       //av_freep(&frame->data[0]);
       //av_frame_free(&frame);

       //THIS WAS ADDED LATER
       av_free(outbuffer);

       avcodec_close(pCodecCtx);
       avformat_close_input(&pFormatCtx);

       return 0;
    }

    Thank you for your time.

  • ffmpeg: stream copy from .mxf into an NLE-compatible format

    9 June 2013, by David

     Because my NLE software does not support the .mxf files from the Canon XF100, I need to convert them into a supported format.

     As far as I know, mxf files are just another container format for mpeg2 streams, so it would be really nice to extract the streams and place them into another container (without re-encoding).

     I think ffmpeg can do this – correct me if I'm wrong – by running the following command:

    ffmpeg -i in.mxf -vcodec copy out.m2ts (or .ts, .mts, ...)

     ffmpeg finishes without errors after about 2 seconds (in.mxf is about 170 MB):

    c:\video>c:\ffmpeg\bin\ffmpeg -i in.MXF -vcodec copy out.m2ts
    ffmpeg version N-53680-g0ab9362 Copyright (c) 2000-2013 the FFmpeg developers
     built on May 30 2013 12:14:03 with gcc 4.7.3 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-av
    isynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enab
    le-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetyp
    e --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --ena
    ble-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-l
    ibopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libsp
    eex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-
    amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --
    enable-libxvid --enable-zlib
     libavutil      52. 34.100 / 52. 34.100
     libavcodec     55. 12.102 / 55. 12.102
     libavformat    55.  8.100 / 55.  8.100
     libavdevice    55.  2.100 / 55.  2.100
     libavfilter     3. 73.100 /  3. 73.100
     libswscale      2.  3.100 /  2.  3.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  3.100 / 52.  3.100
    Guessed Channel Layout for  Input Stream #0.1 : mono
    Guessed Channel Layout for  Input Stream #0.2 : mono
     Input #0, mxf, from 'in.MXF':
     Metadata:
       uid             : 1bb23c97-6205-4800-80a2-e00002244ba7
       generation_uid  : 1bb23c97-6205-4800-8122-e00002244ba7
       company_name    : CANON
       product_name    : XF100
       product_version : 1.00
       product_uid     : 060e2b34-0401-010d-0e15-005658460100
       modification_date: 2013-01-06 11:05:02
       timecode        : 01:42:14:22
     Duration: 00:00:28.32, start: 0.000000, bitrate: 51811 kb/s
       Stream #0:0: Video: mpeg2video (4:2:2), yuv422p, 1920x1080 [SAR 1:1 DAR 16:9
    ], 25 fps, 25 tbr, 25 tbn, 50 tbc
       Stream #0:1: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s
       Stream #0:2: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s
     Output #0, mpegts, to 'out.m2ts':
     Metadata:
       uid             : 1bb23c97-6205-4800-80a2-e00002244ba7
       generation_uid  : 1bb23c97-6205-4800-8122-e00002244ba7
       company_name    : CANON
       product_name    : XF100
       product_version : 1.00
       product_uid     : 060e2b34-0401-010d-0e15-005658460100
       modification_date: 2013-01-06 11:05:02
       timecode        : 01:42:14:22
       encoder         : Lavf55.8.100
       Stream #0:0: Video: mpeg2video, yuv422p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-3
    1, 25 fps, 90k tbn, 25 tbc
       Stream #0:1: Audio: mp2, 48000 Hz, mono, s16, 128 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
     Stream #0:1 -> #0:1 (pcm_s16le -> mp2)
    Press [q] to stop, [?] for help
    frame=  532 fps=0.0 q=-1.0 size=  143511kB time=00:00:21.25 bitrate=55314.1kbits
    frame=  561 fps=435 q=-1.0 size=  151254kB time=00:00:22.42 bitrate=55242.0kbits
    frame=  586 fps=314 q=-1.0 size=  158021kB time=00:00:23.41 bitrate=55288.0kbits
    frame=  609 fps=255 q=-1.0 size=  164182kB time=00:00:24.34 bitrate=55235.4kbits
    frame=  636 fps=217 q=-1.0 size=  171463kB time=00:00:25.42 bitrate=55235.1kbits
    frame=  669 fps=194 q=-1.0 size=  180133kB time=00:00:26.72 bitrate=55226.3kbits
    frame=  699 fps=173 q=-1.0 size=  188326kB time=00:00:27.92 bitrate=55256.6kbits
    frame=  708 fps=169 q=-1.0 Lsize=  190877kB time=00:00:28.30 bitrate=55233.6kbit
    s/s
    video:172852kB audio:442kB subtitle:0 global headers:0kB muxing overhead 10.1461
    18%

     Unfortunately, the output file is displayed correctly only by VLC.
     My NLE software (CyberLink PowerDirector) can open the file, but most of the picture is green. Only a few pixels on the left edge show the original video:

    output file

     Any ideas how to solve that problem? Is there a better way to use .mxf files in NLE software without native support?

     Thanks in advance.