Other articles (104)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFMpeg, the main encoder, which transcodes almost every type of video and audio file into formats playable on the Internet (see this tutorial for its installation); Oggz-tools, inspection tools for ogg files; Mediainfo, which retrieves information from most video and audio formats;
    Complementary and optional binaries: flvtool2: (...)
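    As an illustration of the transcoding role described above, here is a generic FFMpeg invocation (the file names and quality settings are illustrative examples, not SPIPmotion's actual command):

```shell
# Generic example only: transcode a source video into Theora/Vorbis in
# an ogg container, one kind of web-playable output mentioned above.
ffmpeg -i source.avi -c:v libtheora -q:v 6 -c:a libvorbis -q:a 4 output.ogv
```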

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (12697)

  • Malformed header from CGI script

    13 January 2014, by user3188518

    This message is driving me crazy:

    Malformed header from CGI script:
    ffmpeg version 1.2.1 Copyright (c) 2000-2013 the FFmpeg developers
     built on May 10 2013 16:31:05 with gcc 4.8.0 (GCC) 20130502 (prerelease)
     configuration: --prefix=/usr --disable-debug --disable-static --enable-avresample --enable-dxva2 --enable-fontconfig --enable-gpl --enable-libass --enable-libbluray --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-postproc --enable-runtime-cpudetect --enable-shared --enable-vdpau --enable-version3 --enable-x11grab
     libavutil      52. 18.100 / 52. 18.100
     libavcodec     54. 92.100 / 54. 92.100
     libavformat    54. 63.104 / 54. 63.104
     libavdevice    54.  3.103 / 54.  3.103
     libavfilter     3. 42.103 /  3. 42.103
     libswscale      2.  2.100 /  2.  2.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  2.100 / 52.  2.100
    Guessed Channel Layout for  Input Stream #0.0 : stereo
    Input #0, wav, from '/projekt/aplikacja/app/92ed9478ecfa4a4dfb176f417d4ef66c/2014-01-12-220148_sample_1.wav':
     Duration: 00:02:35.62, bitrate: 1411 kb/s
       Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Output #0, wav, to '/projekt/aplikacja/app/webroot/files/preview/2014-01-12-234038_52d327f6b2939.wav':
     Metadata:
       ISFT            : Lavf54.63.104
       Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, 1411 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    size=     864kB time=00:00:05.01 bitrate=1411.3kbits/s    
    video:0kB audio:864kB subtitle:0 global headers:0kB muxing overhead 0.009042%
    Status: 302
    Location: http://www.example.org/
    Content-type: text/html

    After executing this function:

  public function createpreview() {
      if ($this->request->is('post')) {
          // collect the ids of the tracks selected in the form
          foreach ($this->request->data['TrackId'] as $key => $value) {
              $TrackId[] = $key;
          }
          $this->Track->recursive = -1;
          $view = $this->Track->find('all', array(
              'conditions' => array(
                  'Track.id' => $TrackId
              )
          ));
          ini_set('date.timezone', 'Europe/London');

          foreach ($view as $v) {
              // parse the track duration out of ffmpeg's log (stderr merged via 2>&1)
              $comand_1 = 'ffmpeg -i ' . APP . '92ed9478ecfa4a4dfb176f417d4ef66c' . DS . $v['Track']['filename'] . ' 2>&1';
              $time_data = shell_exec($comand_1);
              $search = '/Duration: (.*?),/';
              $duration = preg_match($search, $time_data, $matches, PREG_OFFSET_CAPTURE, 3);
              $time = explode('.', $matches[1][0]);
              $time = explode(':', $time[0]);
              $seconds = $time[0] * 3600 + $time[1] * 60 + $time[2];
              if ($seconds >= 30) {
                  // cut a 5-second preview starting at 00:00:25
                  $now = date('Y-m-d-His');
                  $prew_file_name = $now . '_' . uniqid() . '.wav';
                  $comand_2 = 'ffmpeg -ss 00:00:25.000 -analyzeduration 99999999 -i ' . APP . '92ed9478ecfa4a4dfb176f417d4ef66c' . DS . $v['Track']['filename'] . ' -t 5 -c:v copy -c:a copy ' . WWW_ROOT . 'files' . DS . 'preview' . DS . $prew_file_name;
                  $t = shell_exec($comand_2);
                  $this->Track->updateAll(array('Track.preview' => "'.$prew_file_name.'"), array('Track.id' => $v['Track']['id']));
              }
          }
      }
      header('Location: http://www.example.org/');
      // $this->redirect(array('controller'=>'albums', 'action'=>'menage_index'));
  }

    As far as I know, shell_exec($comand_2) causes the error: after I comment it out, the redirect works correctly. Googling this didn't give me answers. The weird part is that the previews created by shell_exec($comand_2) are fine; I just can't get the page to redirect me. I tried the non-Cake way, but it doesn't work either.

    What am I doing wrong?
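    One thing worth checking, based on the log above: $comand_1 appends 2>&1, but $comand_2 does not. ffmpeg writes its entire log to stderr, and shell_exec() only captures stdout, so the preview command's log escapes to the web server's CGI stream ahead of the Location header, which matches the "Malformed header from CGI script" output shown. A minimal stand-in demonstration of the difference (using echo instead of ffmpeg):

```shell
# Stand-in for ffmpeg, which logs to stderr. Without 2>&1 the message
# bypasses command substitution and leaks to the parent's output stream;
# in a CGI context it would arrive before any HTTP headers.
leaked=$(echo "ffmpeg log line" 1>&2)

# With 2>&1 appended (as in $comand_1), the log is captured instead.
captured=$( { echo "ffmpeg log line" 1>&2; } 2>&1 )

echo "without 2>&1: [$leaked]"
echo "with 2>&1:    [$captured]"
```

    If that is the cause, appending ' 2>&1' (or ' >/dev/null 2>&1' to discard the log) to $comand_2 should let the header() redirect go through.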

  • "matches no streams" issue in ffmpeg complex_filter

    17 December 2023, by Nimderp

    My goal is to do this with ffmpeg complex filters:

    I have a 1080p background video and would like to slide in some overlay graphics and text. So my input is the video and 2 jpg files. The files should slide in to a position over the background video, fade in at the same time, stay a few seconds, and then fade out. Additionally, a text should be displayed.

    My main problem is that it works with only one input jpg, but when I add the second block I get this error:

    [fc#0 @ 000001bf9d0d0c00] Stream specifier 'merge1C:v' in filtergraph description [1:v]format=pix_fmts=yuva420p,scale=765:1083,fade=in:st=0:d=1:alpha=1 [overlay1A], [0:v][overlay1A] overlay=x='if(lte(-w+(t)*1065,300),-w+(t)*1065,300)':y=300:enable='between(t,0,1):shortest=1'[merge1A]; [1:v]format=pix_fmts=yuva420p,scale=765:1083,fade=out:st=3:d=1:alpha=1 [overlay1B], [merge1A][overlay1B] overlay=300:300:enable='between(t,1,4)':shortest=1[merge1B]; [merge1B] drawtext=alpha=if(lt(t\,0.3)\,0\,if(lt(t\,1.3)\,(t-0.3)/1\,if(lt(t\,3)\,1\,if(lt(t\,4)\,(1-(t-3))/1\,0)))):fontcolor=ffffff:fontsize=64:text=test:x=200:y=200 [merge1C]; [2:v]format=pix_fmts=yuva420p,scale=765:1083,fade=in:st=4:d=1:alpha=1 [overlay2A], [merge1C:v][overlay2A] overlay=x='if(lte(-w+(t)*1065,300),-w+(t)*1065,300)':y=300:enable='between(t,4,5):shortest=1'[merge2A]; [2:v]format=pix_fmts=yuva420p,scale=765:1083,fade=out:st=7:d=1:alpha=1 [overlay2B], [merge2A][overlay2B] overlay=300:300:enable='between(t,5,8)':shortest=1[merge2B]; [merge2B] drawtext=alpha=if(lt(t\,4.3)\,0\,if(lt(t\,5.3)\,(t-4.3)/1\,if(lt(t\,7)\,1\,if(lt(t\,8)\,(1-(t-7))/1\,0)))):fontcolor=ffffff:fontsize=64:text=test:x=200:y=200 [merge2C]; matches no streams.
Error initializing complex filters: Invalid argument

    My ffmpeg command looks like this:

    ffmpeg -i concat-video.mp4  -loop 1 -i card1.jpg -loop 1 -i card2.jpg -filter_complex "[1:v]format=pix_fmts=yuva420p,scale=765:1083,fade=in:st=0:d=1:alpha=1 [overlay1A], [0:v][overlay1A] overlay=x='if(lte(-w+(t)*1065,300),-w+(t)*1065,300)':y=300:enable='between(t,0,1):shortest=1'[merge1A]; [1:v]format=pix_fmts=yuva420p,scale=765:1083,fade=out:st=3:d=1:alpha=1 [overlay1B], [merge1A][overlay1B] overlay=300:300:enable='between(t,1,4)':shortest=1[merge1B]; [merge1B] drawtext=alpha=if(lt(t\,0.3)\,0\,if(lt(t\,1.3)\,(t-0.3)/1\,if(lt(t\,3)\,1\,if(lt(t\,4)\,(1-(t-3))/1\,0)))):fontcolor=ffffff:fontsize=64:text=test:x=200:y=200 [merge1C]; [2:v]format=pix_fmts=yuva420p,scale=765:1083,fade=in:st=4:d=1:alpha=1 [overlay2A], [merge1C:v][overlay2A] overlay=x='if(lte(-w+(t)*1065,300),-w+(t)*1065,300)':y=300:enable='between(t,4,5):shortest=1'[merge2A]; [2:v]format=pix_fmts=yuva420p,scale=765:1083,fade=out:st=7:d=1:alpha=1 [overlay2B], [merge2A][overlay2B] overlay=300:300:enable='between(t,5,8)':shortest=1[merge2B]; [merge2B] drawtext=alpha=if(lt(t\,4.3)\,0\,if(lt(t\,5.3)\,(t-4.3)/1\,if(lt(t\,7)\,1\,if(lt(t\,8)\,(1-(t-7))/1\,0)))):fontcolor=ffffff:fontsize=64:text=test:x=200:y=200 [merge2C];" -pix_fmt yuva420p -map "[merge2C]" output.mp4

    or here formatted a little for better readability:

    ffmpeg -i concat-video.mp4  -loop 1 -i card1.jpg -loop 1 -i card2.jpg -filter_complex "

# start of block for first input jpg
[1:v]format=pix_fmts=yuva420p,scale=765:1083,fade=in:st=0:d=1:alpha=1 [overlay1A],
[0:v][overlay1A] overlay=x='if(lte(-w+(t)*1065,300),-w+(t)*1065,300)':y=300:enable='between(t,0,1):shortest=1'[merge1A];
[1:v]format=pix_fmts=yuva420p,scale=765:1083,fade=out:st=3:d=1:alpha=1 [overlay1B],
[merge1A][overlay1B] overlay=300:300:enable='between(t,1,4)':shortest=1[merge1B];
[merge1B] drawtext=alpha=if(lt(t\,0.3)\,0\,if(lt(t\,1.3)\,(t-0.3)/1\,if(lt(t\,3)\,1\,if(lt(t\,4)\,(1-(t-3))/1\,0)))):fontcolor=ffffff:fontsize=64:text=test:x=200:y=200 [merge1C];

# start of block for second input jpg
[2:v]format=pix_fmts=yuva420p,scale=765:1083,fade=in:st=4:d=1:alpha=1 [overlay2A],
[merge1C:v][overlay2A] overlay=x='if(lte(-w+(t)*1065,300),-w+(t)*1065,300)':y=300:enable='between(t,4,5):shortest=1'[merge2A];
[2:v]format=pix_fmts=yuva420p,scale=765:1083,fade=out:st=7:d=1:alpha=1 [overlay2B],
[merge2A][overlay2B] overlay=300:300:enable='between(t,5,8)':shortest=1[merge2B];
[merge2B] drawtext=alpha=if(lt(t\,4.3)\,0\,if(lt(t\,5.3)\,(t-4.3)/1\,if(lt(t\,7)\,1\,if(lt(t\,8)\,(1-(t-7))/1\,0)))):fontcolor=ffffff:fontsize=64:text=test:x=200:y=200 [merge2C];

" 
-pix_fmt yuva420p -map "[merge2C]" output.mp4

    Hope somebody can give me a hint, it's driving me nuts :D

    I tried several syntax changes, but always got an error.

    The issue seems to be the chain between the output "merge1C" (which includes the fade, slide, text, and fade-out of the first jpg overlay) and the second block for the other jpg.
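    A hint, offered as a sketch rather than a tested fix: stream specifiers such as :v apply to input files (1:v means "the video stream of input 1"), not to filtergraph link labels. merge1C is a link label, so [merge1C:v] matches no streams, exactly as the error says; referencing it bare as [merge1C] should let the second block connect to the first. Two smaller points that may be worth checking as well: in two of the overlay lines, shortest=1 sits inside the quotes of the enable option, making it part of the enable expression instead of an overlay option, and the graph ends with a ';' after [merge2C], which leaves an empty trailing filter segment before the closing quote.

```
# corrected opening of the second block: the link label is referenced bare,
# and shortest=1 sits outside the quoted enable expression
[merge1C][overlay2A] overlay=x='if(lte(-w+(t)*1065,300),-w+(t)*1065,300)':y=300:enable='between(t,4,5)':shortest=1 [merge2A];
```

    The last line of the graph would then end with [merge2C] and no semicolon, so that -map "[merge2C]" can find the label.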
  • Trim Overlay memory issue

    4 July 2019, by Костянтин Тюртюбек

    Greetings fellow FFmpeg users,

    We are experiencing a strange memory leak issue (or maybe we are just doing something really wrong) and need some directions on how to debug it.

    What we are trying to achieve:

    Process a conference recording that includes multiple user streams, each in its own separate file (all files are mp4/opus).

    • Make a dynamic scene from a set of recordings, based on their volume level at set point of time.
    • The scene must include two parts : smaller grid of all the participants videos, bigger grid of currently talking people. Something like Google Hangouts or Skype does in their applications.

    What went wrong:

    • The memory footprint started skyrocketing unpredictably during montage

    What we are using:

    The first FFmpeg command reads filter_complex_script from a file and adds a drawbox as a talking indication on each video source file when its volume is over a set threshold.

    The second FFmpeg command reads filter_complex_script from a file and:

    • takes an input file (using 0:v),
    • trims a part of it, when the user was talking,
    • scales it according to the amount of concurrently talking users,
    • pads to that resolution (in case the user video is smaller)

    The filter_complex command using SELECT:

    [0]select='between(t, 1, 2)',  scale=762:428:force_original_aspect_ratio=decrease,pad=763:429:(ow-iw)/2:(oh-ih)/2[stream-0-workspace-scale-1-1];

    [block-2-grid][stream-0-workspace-scale-1-1]
    overlay=repeatlast=1:shortest=0:x=10:y=316:enable='between(t, 1, 2)'
    [block-2-workspace-1];

    The filter_complex command using TRIM:

    [input-file-tag]
    trim=start=#{start}:duration=#{duration},
    setpts=PTS-STARTPTS,
    scale=#{w-1}:#{h-1}:force_original_aspect_ratio=decrease,
    pad=#{w}:#{h}:(ow-iw)/2:(oh-ih)/2
    [input-file-trimmed];

    [previous-block-tag]
    overlay=repeatlast=1:shortest=0:x=#{x}:y=#{y}:enable='between(t, #{from}, #{to})'
    [next-block-tag]
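    One direction that may reduce the footprint, sketched below with hypothetical file names (grid.mp4, user0.mp4): seek on the input side with -ss/-t so that each input decodes only the segment you need. trim and select inside the graph first decode the whole file and then discard frames, and every intermediate overlay in a long chain buffers frames of its own, so keeping the graph short (for example by rendering the montage in segments and concatenating them afterwards) keeps the buffering bounded.

```
# Sketch with hypothetical inputs: cut the 1s-2s segment of a speaker video
# at the demuxer level, then overlay it on the grid. setpts shifts the
# seeked segment (which restarts at t=0) back to t=1 on the output timeline
# so the enable window still matches.
ffmpeg -i grid.mp4 \
       -ss 1 -t 1 -i user0.mp4 \
       -filter_complex "[1:v]scale=762:428:force_original_aspect_ratio=decrease,pad=763:429:(ow-iw)/2:(oh-ih)/2,setpts=PTS-STARTPTS+1/TB[s0];[0:v][s0]overlay=repeatlast=1:shortest=0:x=10:y=316:enable='between(t,1,2)'[out]" \
       -map "[out]" -y block.mp4
```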

    We have tried going the TRIM way and the SELECT way. The problem is, both take insane amounts of RAM during execution.

    Examples and more description:

    • Let's assume that only two of the five inputs have the volume above a
      certain threshold from second two to five.

    • We are trying to display only them according to some overlay math.

    • Cropped commands in human-readable form: https://pastebin.com/YwrnRgnA

    • The full FFmpeg command is way too long to read through, which is the reason we started using filter_complex_script and loading it from a file.

    • Sometimes one block of a video conference may have up to 300+
      intermediate overlays, which leads to the memory issue described. We were expecting the memory footprint to be similar to the size of the input files, or maybe two to three times higher. However, we reach 15 GB of RAM usage within the first two minutes of montage, while the input files are no bigger than 200 MB.

    What we have done in terms of debugging:

    • We had been using split at first, but quickly figured out that split
      does in fact copy each input and load it in memory, so we had to
      ditch that approach.

    • As a matter of fact, we moved to using the input files themselves, so
      the problem lies elsewhere.

    • To clarify, we have split our ffmpeg command into two separate ones.
      The first one overlays the talking-box animation using drawbox, as well
      as the user avatar and name. It outputs new video files which we then
      use in the command described above as direct input file tags, like 0:v,
      1:v, etc.

    Thank you for taking time reading through our issue.
    We sure hope that you can help us narrow it down.
    Please feel free to ask for any additional information or descriptions if needed.

    Have a good day!