
Other articles (22)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is version 0.2 or higher. If necessary, contact your MédiaSpip administrator to find out.

  • Organizing by category

    17 May 2013

    In MédiaSPIP, a section has two names: category and section (rubrique).
    The various documents stored in MédiaSPIP can be filed in different categories. A category can be created by clicking on "publier une catégorie" (publish a category) in the publish menu at the top right (after logging in). A category can itself be filed inside another category, so you can build a whole tree of categories.
    The next time a document is published, the newly created category will be offered (...)

  • MediaSpip themes

    4 June 2013

    Three themes are provided with MédiaSPIP out of the box. MédiaSPIP users can add further themes as needed.
    MediaSPIP themes
    Three themes were initially developed for MediaSPIP: * SPIPeo: the default MédiaSPIP theme. It highlights the site presentation and the most recent media documents (the sort order can be changed: title, popularity, date). * Arscenic: the theme used on the official project site, notable for the red banner at the top of the page. The structure (...)

On other sites (4211)

  • What is the correct command using GStreamer to convert an mp4 to a .264 file?

    25 March 2020, by hilow

    My English is poor, I am sorry!

    I use ffmpeg and gstreamer to convert an mp4 file to the .264 format, but the output files are different.
    My questions are:

    • 1. Why are the resulting files different?

    • 2. What does the level reported by gst-discoverer-1.0 mean?

      With ffmpeg it is Codec:     video/x-h264, ...... level=(string)1.2.

      With gstreamer it is Codec:     video/x-h264, ...... level=(string)3.

    • 3. How do I use gstreamer to produce the correct .264 file?

    The original video file comes from https://github.com/notedit/media-server-go-demo/blob/master/video-mixer/public/big_buck_bunny.mp4.

    Commands:

    gst-launch-1.0 filesrc location=big_buck_bunny.mp4 ! \
       qtdemux name=demux \
         demux.video_0 ! queue ! \
         decodebin ! \
         videoconvert ! \
         videoscale ! \
         videorate ! \
         video/x-raw,width=320,height=240,framerate=15/1,pixel-aspect-ratio=1/1,level=1.2 ! \
         x264enc bframes=0 byte-stream=true bitrate=9000 ! \
         filesink location=videogst.264

    ffmpeg -i big_buck_bunny.mp4 -f h264 -vcodec libx264 -s 320x240 -bf 0 -r 15 videoffmpeg.264

    Output files:

    -rw-r--r-- 1 xxx staff 1.9M 3 25 13:39 videoffmpeg.264
    -rw-r--r-- 1 xxx staff 17M 3 25 13:40 videogst.264

    Video codec:

    xxx@xxxs-MacBook-Pro resource %  gst-discoverer-1.0 videoffmpeg.264 -v
    Analyzing file:///Users/xxx/tool/resource/videoffmpeg.264
    Done discovering file:///Users/xxx/tool/resource/videoffmpeg.264

    Topology:
     video: video/x-h264, width=(int)320, height=(int)240, framerate=(fraction)15/1, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, profile=(string)high, level=(string)1.2, codec_data=(buffer)0164000cffe100176764000cacb20283f420000003002000000303c1e2854901000668ebc3cb22c0
       Tags:
         视频编码: H.264 (High Profile)

       Codec:
         video/x-h264, width=(int)320, height=(int)240, framerate=(fraction)15/1, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, profile=(string)high, level=(string)1.2, codec_data=(buffer)0164000cffe100176764000cacb20283f420000003002000000303c1e2854901000668ebc3cb22c0
       Additional info:
         None
       Stream ID: 349989c8845fcc23360fb0ab02ea7510051b926669bf8f3862879823fbab6daf
       Width: 320
       Height: 240
       Depth: 24
       Frame rate: 15/1
       Pixel aspect ratio: 1/1
       Interlaced: false
       Bitrate: 0
       Max bitrate: 0

    Properties:
     Duration: 0:01:32.995000000
     Seekable: yes
     Live: no
     Tags:
         视频编码: H.264 (High Profile)


    xxx@xxxs-MacBook-Pro resource % gst-discoverer-1.0 videogst.264 -v
    Analyzing file:///Users/xxx/tool/resource/videogst.264
    Done discovering file:///Users/xxx/tool/resource/videogst.264

    Topology:
     video: video/x-h264, pixel-aspect-ratio=(fraction)1/1, width=(int)320, height=(int)240, framerate=(fraction)15/1, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, profile=(string)high, level=(string)3, codec_data=(buffer)0164001effe1001d6764001eacb20283f602d4180416940000030004000003007a3c58b92001000568ebccb22c
       Tags:
         视频编码: H.264 (High Profile)

       Codec:
         video/x-h264, pixel-aspect-ratio=(fraction)1/1, width=(int)320, height=(int)240, framerate=(fraction)15/1, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, profile=(string)high, level=(string)3, codec_data=(buffer)0164001effe1001d6764001eacb20283f602d4180416940000030004000003007a3c58b92001000568ebccb22c
       Additional info:
         None
       Stream ID: fb99f4104b347e5682d52c0bd65bcee91b765e42f89ce2e3553be5d6d743a666
       Width: 320
       Height: 240
       Depth: 24
       Frame rate: 15/1
       Pixel aspect ratio: 1/1
       Interlaced: false
       Bitrate: 0
       Max bitrate: 0

    Properties:
     Duration: 0:01:45.505000000
     Seekable: yes
     Live: no
     Tags:
         视频编码: H.264 (High Profile)
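
    A likely explanation (a hedged guess based on x264's documented behaviour, not verified against these exact files): the level printed by gst-discoverer-1.0 is the H.264 level signalled in the stream's SPS, and x264 picks the lowest level whose limits the encode fits. x264enc's bitrate property is expressed in kbit/s, so bitrate=9000 requests roughly 9 Mbit/s, far beyond the level 1.2 bitrate ceiling, which pushes the stream up to level 3; it also explains why the GStreamer file is 17M while the ffmpeg one is 1.9M, since ffmpeg's default for libx264 is CRF 23 rather than a fixed bitrate. The level=1.2 field in the video/x-raw caps has no effect, because level is an H.264 caps field, not a raw-video one. Something closer to the ffmpeg result should come from not forcing such a high bitrate, for example:

    # Untested sketch: keep the bitrate below the level 1.2 ceiling (the property is in kbit/s)
    # and let x264 choose the level itself; it should then come out around level=(string)1.2.
    gst-launch-1.0 filesrc location=big_buck_bunny.mp4 ! \
       qtdemux name=demux \
         demux.video_0 ! queue ! \
         decodebin ! \
         videoconvert ! \
         videoscale ! \
         videorate ! \
         video/x-raw,width=320,height=240,framerate=15/1,pixel-aspect-ratio=1/1 ! \
         x264enc bframes=0 byte-stream=true bitrate=400 ! \
         filesink location=videogst.264
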
  • How to use the FFmpeg API to decode into client-allocated memory

    25 March 2020, by VorpalSword

    I’m trying to use the FFmpeg API to decode into a buffer defined by the client program by following the tips in this question, but using the new pattern for decoding instead of the now deprecated avcodec_decode_video2 function.

    If my input file is an I-frame only format, everything works great. I’ve tested with a .mov file encoded with v210 (uncompressed).

    However, if the input is a long-GoP format (I’m trying with H.264 high profile 4:2:2 in an mp4 file) I get the following pleasingly psychedelic/impressionistic result:

    Crowd run. On acid!

    There’s clearly something motion-vectory going on here!

    And if I let FFMPEG manage its own buffers with the H.264 input by not overriding AVCodecContext::get_buffer2, I can make a copy from the resulting frame to my desired destination buffer and get good results.

    Here’s my decoder method; _frame and _codecCtx are object members of type AVFrame* and AVCodecContext* respectively. They get alloc’d and init’d in the constructor.

           virtual const DecodeResult decode(const rv::sz_t toggle) override {
           _toggle = toggle & 1;
           using Flags_e = DecodeResultFlags_e;
           DecodeResult ans(Flags_e::kNoResult);

           AVPacket pkt;   // holds compressed data
           ::av_init_packet(&pkt);
           pkt.data = NULL;
           pkt.size = 0;
           int ret;

           // read the compressed frame to decode
           _err = av_read_frame(_fmtCtx, &pkt);
           if (_err < 0) {
               if (_err == AVERROR_EOF) {
                   ans.set(Flags_e::kEndOfFile);
                   _err = 0; // we can safely ignore EOF errors
                   return ans;
               } else {
                   baleOnFail(__PRETTY_FUNCTION__);
               }
           }

           // send (compressed) packets to the decoder until it produces an uncompressed frame
           do {

               // sender
               _err = ::avcodec_send_packet(_codecCtx, &pkt);
               if (_err < 0) {
                   if (_err == AVERROR_EOF) {
                       _err = 0; // EOFs are ok
                       ans.set(Flags_e::kEndOfFile);
                       break;
                   } else {
                       baleOnFail(__PRETTY_FUNCTION__);
                   }
               }

               // receiver
               ret = ::avcodec_receive_frame (_codecCtx, _frame);
               if (ret == AVERROR(EAGAIN)) {
                   continue;
               } else if (ret == AVERROR_EOF) {
                   ans.set(Flags_e::kEndOfFile);
                   break;
               } else if (ret < 0) {
                   _err = ret;
                   baleOnFail(__PRETTY_FUNCTION__);
               } else {
                   ans.set(Flags_e::kGotFrame);
               }

               av_packet_unref (&pkt);

           } while (!ans.test(Flags_e::kGotFrame));        

           //packFrame(); <-- used to copy to client image

           return ans;
       }

    And here’s my override for get_buffer2

           int getVideoBuffer(struct AVCodecContext* ctx, AVFrame* frm) {
           // ensure frame pointers are all null
           if (frm->data[0] || frm->data[1] || frm->data[2] || frm->data[3]){
               ::strncpy (_errMsg, "non-null frame data pointer detected.", AV_ERROR_MAX_STRING_SIZE);
               return -1;
           }

           // get format descriptor, ensure it's valid.
           const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(static_cast<AVPixelFormat>(frm->format));
           if (!desc) {
               ::strncpy (_errMsg, "Pixel format descriptor not available.", AV_ERROR_MAX_STRING_SIZE);
               return AVERROR(EINVAL);
           }

           // for Video, extended data must point to the same place as data.
           frm->extended_data = frm->data;

           // set the data pointers to point at the Image data.
           int chan = 0;
           IMG* img = _imgs[_toggle];
           // initialize active channels
           for (; chan < 3; ++chan) {
               frm->buf[chan] =  av_buffer_create (
                static_cast<uint8_t*>(img->begin(chan)),
                   rv::unsigned_cast<int>(img->size(chan)),
                   Player::freeBufferCallback, // callback does nothing
                reinterpret_cast<void*>(this),
                   0 // i.e. AV_BUFFER_FLAG_READONLY is not set
               );
               frm->linesize[chan] = rv::unsigned_cast<int>(img->stride(chan));
               frm->data[chan] = frm->buf[chan]->data;
           }
           // zero out inactive channels
           for (; chan < AV_NUM_DATA_POINTERS; ++chan) {
               frm->data[chan] = NULL;
               frm->linesize[chan] = 0;
           }
           return 0;
       }

    I can reason that the codec needs to keep reference frames in memory and so I’m not really surprised that this isn’t working, but I’ve not been able to figure out how to have it deliver clean decoded frames to client memory. I thought that AVFrame::key_frame would have been a clue, but, after observing its behaviour in gdb, it doesn’t provide a useful trigger for when to allocate AVFrame::bufs from the buffer pool and when they can be initialized to point at client memory.

    Grateful for any help!
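
    As noted above, letting FFmpeg manage its own buffers and copying the finished frame into the destination does work. A rough sketch of that copy step (untested; copyFrameToClient, dst and dstSize are hypothetical names for the caller's packed destination buffer) could look like this:

           // Untested sketch: copy a decoded AVFrame into caller-owned memory after
           // avcodec_receive_frame(), instead of overriding get_buffer2(). The decoder
           // keeps its own reference frames, so the copy happens once per output frame.
           #include <cstdint>
           extern "C" {
           #include <libavutil/error.h>
           #include <libavutil/frame.h>
           #include <libavutil/imgutils.h>
           }

           static int copyFrameToClient(const AVFrame* frame, uint8_t* dst, int dstSize) {
               const AVPixelFormat fmt = static_cast<AVPixelFormat>(frame->format);
               const int needed = ::av_image_get_buffer_size(fmt, frame->width, frame->height, 1);
               if (needed < 0 || needed > dstSize)
                   return AVERROR(EINVAL);
               // Packs the frame's planes (honouring their linesize padding) into dst.
               return ::av_image_copy_to_buffer(dst, dstSize,
                                                frame->data, frame->linesize,
                                                fmt, frame->width, frame->height, 1);
           }

    Calling something like this right after avcodec_receive_frame() succeeds (where packFrame() is commented out above) avoids handing the decoder client memory that it may still need as a reference for later frames.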

  • FFmpeg memory usage

    30 October 2018, by mbutan

    I’m using the latest FFmpeg library to blend together 4 different input videos.
    To accomplish this I do some basic "filter_complex" operations for the video and "amix" for the audio.
    After about 1 minute of processing, the process is killed with SIGKILL, most likely because it runs out of memory. Checking memory usage with the "top" tool shows that, at the moment of the crash, 90% of all available memory is allocated.

    My Kubernetes pool has 30 GB of RAM and 8 CPUs available. It seems strange that a Full HD job consumes 30 GB of memory within 1 minute of work.

    I’m wondering if there is any way to optimize or limit the memory usage.

    FFmpeg version: 4.0.2
    System: Linux
    Encoder: h.264
    Format: 1920x1080

    FFmpeg output
    https://gist.github.com/mbutan/51f832a99d0edf0b09af934d1934971e

    Code snippet:

             ffmpeg \
     -i /tmp/a3ddcc11-9819-4bef-8e8d-156342aa68df.mp4 \
     -itsoffset 3 -i /tmp/c87d7e8f-c9e7-4fbe-b845-e6cd6d6ac7bb.mp4 \
     -itsoffset 3.199 -i /tmp/250cb6e8-8daf-4c5b-88b3-4b6cfb02834b.mp4 \
     -itsoffset 37.52 -i /tmp/24e466e1-c1e0-4797-b88a-09e2a9f5f673.mp4 \
     -itsoffset 68.04 -i /tmp/3e0e0e62-82e4-4d6a-881a-119d7c72cf9f.mp4 \
     -itsoffset 415.188 -i /tmp/02ca91d5-f0c1-4140-ba12-fa445f09ddf6.mp4 \
     -i /tmp/1.png -i /tmp/2.png -i /tmp/3.png \
     -y -filter_complex pad=1920:1080:color=black [base];[6] scale=1920:1080 [background];\
     [0:v] scale=1740:980,pad=1740:980:(ow-iw)/2:(oh-ih)/2 [main_0];[base][main_0] overlay=90:50:enable=between'(t,0,3)' [tmp_4];\
     [1:v] scale=1740:980,pad=1740:980:(ow-iw)/2:(oh-ih)/2 [main_1];[tmp_4][main_1] overlay=90:50:enable=between'(t,3,29.5)' [tmp_6];\
     [2:v] scale=1740:980,pad=1740:980:(ow-iw)/2:(oh-ih)/2 [main_2];[tmp_6][main_2] overlay=90:50:enable=between'(t,29.5,37.5)' [tmp_8];\
     [3:v] scale=1740:980,pad=1740:980:(ow-iw)/2:(oh-ih)/2 [main_3];[tmp_8][main_3] overlay=90:50:enable=between'(t,37.5,58)' [tmp_10];\
     [2:v] scale=1740:980,pad=1740:980:(ow-iw)/2:(oh-ih)/2 [main_4];[tmp_10][main_4] overlay=90:50:enable=between'(t,58,68)' [tmp_12];\
     [4:v] scale=1740:980,pad=1740:980:(ow-iw)/2:(oh-ih)/2 [main_5];[tmp_12][main_5] overlay=90:50:enable=between'(t,68,414.5)' [tmp_14];\
     [2:v] scale=1740:980,pad=1740:980:(ow-iw)/2:(oh-ih)/2 [main_6];[tmp_14][main_6] overlay=90:50:enable=between'(t,414.5,415.5)' [tmp_16];\
     [5:v] scale=1740:980,pad=1740:980:(ow-iw)/2:(oh-ih)/2 [main_7];[tmp_16][main_7] overlay=90:50:enable=between'(t,415.5,416.248)' [tmp_18];\
     [tmp_18][background] overlay=0:0 [tmp_19];[7] scale=306.66666666666663:190[user_shadow_0];\
     [tmp_19][user_shadow_0] overlay=807:842:enable=between'(t,3.199,30.833)' [shadow_output_0_0];\
     [2:v] scale=266.66666666666663:150:force_original_aspect_ratio=decrease, pad=266.66666666666663:150:(ow-iw)/2:(oh-ih)/2:black [user_0_3199];\
     [shadow_output_0_0][user_0_3199] overlay=827:862:enable=between'(t,3.199,30.833)' [tmp_23];[7] scale=306.66666666666663:190[user_shadow_0];\
     [tmp_23][user_shadow_0] overlay=807:842:enable=between'(t,37.52,57.965)' [shadow_output_0_0];\
     [2:v] scale=266.66666666666663:150:force_original_aspect_ratio=decrease, pad=266.66666666666663:150:(ow-iw)/2:(oh-ih)/2:black [user_0_37520];\
     [shadow_output_0_0][user_0_37520] overlay=827:862:enable=between'(t,37.52,57.965)' [tmp_27];[7] scale=306.66666666666663:190[user_shadow_0];\
     [tmp_27][user_shadow_0] overlay=807:842:enable=between'(t,68.04,414.648)' [shadow_output_0_0];\
     [2:v] scale=266.66666666666663:150:force_original_aspect_ratio=decrease, pad=266.66666666666663:150:(ow-iw)/2:(oh-ih)/2:black [user_0_68040];\
     [shadow_output_0_0][user_0_68040] overlay=827:862:enable=between'(t,68.04,414.648)' [tmp_31];\
     [0:v] scale=1920:1080 [intro];[tmp_31][intro] overlay=0:0:enable=between'(t,0,3.25)' [tmp_32];\
     [tmp_32][0:v] overlay='if(lte((t-3.25)*18432,w),(t-3.25)*18432,w)':0:enable=between'(t,3.25,4.45)' [tmp_33];[5:v] scale=1920:1080 [end];\
     [tmp_33][end] overlay=0:0:enable=between'(t,414.75,418)' [tmp_34];\
     [tmp_34][5:v] overlay='if(lte((t-418)*18432,w),(t-418)*18432,w)':0:enable=between'(t,418,419.2)' [outVideo];\
     [0:a]adelay=1|1 [audio_0];\
     [1:a]adelay=3001|3001 [audio_1];\
     [2:a]adelay=3200|3200 [audio_2];\
     [3:a]adelay=37521|37521 [audio_3];\
     [4:a]adelay=68041|68041 [audio_4];\
     [5:a]adelay=415189|415189 [audio_5];\
     [audio_0][audio_1][audio_2][audio_3][audio_4][audio_5] amix=inputs=6:duration=longest \
     [outAudio] -map [outVideo] -map [outAudio] -framerate 30 -g 60 -video_size 1920x1080 -vcodec libx264 -acodec libfdk_aac -profile:a aac_he -b:a 192k -ac 2 -ar 44100 -max_muxing_queue_size 9999 -threads 0 -strict experimental -preset fast /tmp/53a89b44-fd56-4c5b-adb9-a5695d52e5d2_output.mp4