

Other articles (92)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What are the new features?
    In terms of software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news item creation form.
    News item creation form: for a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)

On other sites (12475)

  • How to configure proc_open "pipes" for ffmpeg stdin/stderr on Windows?

    10 September 2018, by GDP

    Firstly, I've spent the week googling and trying variations of dozens and dozens of answers for Unix, but it's been a complete bust. I need an answer for Windows, so this is not a duplicate of the Unix equivalents.

    We're trying to create a scheduled task that will process a queue of tasks in PHP and maintain an array of up to 10 ffmpeg instances at a time. I've tried exec, shell_exec and proc_open, with and without start /B, without any "complete" luck.
    I'm also quite certain that it has to do with setting up the descriptorspec and pipes (which I'm completely unfamiliar with), and here's why:

    Per https://trac.ffmpeg.org/wiki/PHP,

    The part that says ">/dev/null" will redirect the standard OUTPUT
    (stdout) of the ffmpeg instance to /dev/null (effectively ignoring the
    output) and "2>/dev/null" will redirect the standard ERROR (stderr) to
    /dev/null (effectively ignoring any error log messages). These two can
    be combined into a shorter representation: ">/dev/null 2>&1". If you
    like, you can read more about I/O Redirection.

    An important note should be mentioned here. The ffmpeg command-line
    tool uses stderr for output of error log messages and stdout is
    reserved for possible use of pipes (to redirect the output media
    stream generated from ffmpeg to some other command line tool). That
    being said, if you run your ffmpeg in the background, you’ll most
    probably want to redirect the stderr to a log file, to be able to
    check it later.

    One more thing to take care about is the standard INPUT (stdin).
    Command-line ffmpeg tool is designed as an interactive utility that
    accepts user’s input (usually from keyboard) and reports the error log
    on the user’s current screen/terminal. When we run ffmpeg in the
    background, we want to tell ffmpeg that no input should be accepted
    (nor waited for) from the stdin. We can tell this to ffmpeg, using I/O
    redirection again: "</dev/null".

    echo "Starting ffmpeg...\n\n";
    echo shell_exec("ffmpeg -y -i input.avi output.avi null >/dev/null 2>/var/log/ffmpeg.log &");
    echo "Done.\n";

    This example actually uses shell_exec, though we want to use proc_open so that we can use a loop to check if the process has completed or not.

    Here's a basic sample loop of what I've tried. The problem is that the actual ffmpeg processing completes, but the process hangs, "waiting for something". When I debug, step out of the loop and terminate the process after a few minutes, the ffmpeg output is written and the script carries on. (From the command line, ffmpeg takes less than a minute to complete.)

    $descriptorspec = array(
       array('pipe', 'r'),   // stdin
       array('pipe', 'w'),   // stdout
       array('pipe', 'w'),   // stderr
    );
    $pipes = null;
    $cwd = null;
    $env = null;
    // Note: 'start /B' hands ffmpeg off to a cmd.exe wrapper, so $process tracks that wrapper
    // rather than ffmpeg itself, and the three pipes requested above are never read or closed
    $process = proc_open('start /B ffmpeg.exe -i input.mov output.mp4 -nostdin', $descriptorspec, $pipes, $cwd, $env);
    $status = proc_get_status($process);
    while ($status['running']) {
       sleep(60);   // poll once a minute
       $status = proc_get_status($process);
    }
    proc_terminate($process);

    Also, as documented at ffmpeg Main-options:

    Enable interaction on standard input. On by default unless standard
    input is used as an input. To explicitly disable interaction you need
    to specify -nostdin.

    The -nostdin option sounds like it addresses my problem, but it has no apparent effect. Every Unix solution I've found still appears to require some form of shell redirection to be added, such as </dev/null or 2>&1.

    So, with that somewhat exhaustive prologue, can someone explain how to properly configure the proc_open function for the way ffmpeg.exe interacts with I/O? If there is a better or more appropriate approach, I'm happy to take it, but the important thing is to be able to loop through an array of processes and check whether they're complete, so that other, faster processes can finish in the meantime.
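
    A minimal sketch of one possible configuration follows (not from the original post; it assumes ffmpeg.exe is on the PATH, and the ffmpeg.log path is hypothetical). Since ffmpeg writes its log to stderr continuously, a pipe that is opened but never read can fill up and stall the process, so this sketch redirects the standard streams to files and the Windows NUL device instead of using pipes at all:

    $descriptorspec = array(
       0 => array('file', 'NUL', 'r'),          // stdin: Windows null device, ffmpeg never waits for keyboard input
       1 => array('file', 'NUL', 'w'),          // stdout: discarded (only used when piping media out of ffmpeg)
       2 => array('file', 'ffmpeg.log', 'a'),   // stderr: append ffmpeg's log messages to a file (hypothetical path)
    );
    $pipes = array();                           // stays empty, since no 'pipe' descriptors are requested
    $process = proc_open('ffmpeg.exe -nostdin -y -i input.mov output.mp4', $descriptorspec, $pipes, null, null);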

    UPDATE
    After exhaustive R&D, it seems that the I/O is not the issue in making this happen (the -nostdin option seems to work as advertised). The premise of my design was to use proc_get_status() to determine when ffmpeg was finished. The flaw in that approach is that apparently that does NOT return the actual PID of the ffmpeg process...it returns the parent PID. So, when proc_get_status() returned that the video conversion was complete, it was in fact still running, not hung. This was further complicated by testing on larger video files. The larger the video, the longer the "residual" time was that it took to actually finish — the I/O wasn’t the issue - watching the Parent PID instead of the child PID was the problem. So, without getting into much lower level system internals with Windows, this doesn’t appear to be possible with PHP directly. I’ve decided to abandon this approach, but hopefully this discovery will save someone else some time and trouble.

  • FFMPEG - concatenating mp4s from different sources - unable to stop "Non-monotonous DTS in output stream" warning

    7 August 2018, by Sam P

    I need to concatenate mp4 files from different sources, which means some of the variables are out of my control, such as timebase, aspect ratio and encoding. To get around this, I re-encode and attempt to standardise the files before concatenating them. Unfortunately, despite this I get Non-monotonous DTS in output stream warnings during the concatenation stage, and the output video always seems to have broken audio/video syncing by the last segment.

    I know there are a lot of other questions out there about resolving the warning above, but I've been through them all and reviewed the documentation, and unfortunately I've still been unable to solve it.

    The thing I don't understand is: if I have mp4s from different sources, what exactly do I need to do to ensure that the files will always concatenate together neatly?

    What I’ve tried so far

    The script I'm using to standardise the mp4 files before concatenation is the following (it adjusts resolution, frame rate, timebase, audio bitrate, video bitrate, audio codec and video codec):

    ffmpeg -y -i $1 -vf 'scale=1280:720:force_original_aspect_ratio=1,pad=1280:720:(ow-iw)/2:(oh-ih)/2' -r 30 -video_track_timescale 90000 -b:a 128K -b:v 1200K -c:a aac -c:v libx264 $2

    Here's the ffprobe output for two of the files; there are some differences, but I'm not sure whether they are significant:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'intro.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf58.12.100
     Duration: 00:00:08.98, start: 0.000000, bitrate: 1210 kb/s
       Stream #0:0(eng): Video: h264 (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1069 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 132 kb/s (default)
       Metadata:
         handler_name    : SoundHandler

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'middle.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf58.12.100
     Duration: 00:00:59.72, start: 0.000000, bitrate: 1200 kb/s
       Stream #0:0(und): Video: h264 (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1063 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
       Metadata:
         handler_name    : SoundHandler

    They all have normal video and audio at this point.

    After that I concatenate them and add a watermark using the following (it sucks that I need to re-encode here):

     ffmpeg -y \
       -f concat \
       -safe 0 \
       -i $INFILES \
       -c:v libx264 \
       -c:a copy \
       -preset fast \
       -vf drawtext=enable="'between(t, $DRAW_TEXT_DELAY, $DRAW_TEXT_DURATION)': fontfile=$FONT_DIR/$FONT: text='$TEXT': fontcolor=$FONTCOLOR: fontsize=$FONTSIZE: $POSITION" \
       $OUTFILE

    INFILES is a path to a text file formatted like:

    file /usr/src/app/data/test/out/intro.mp4
    file /usr/src/app/data/test/out/middle.mp4
    file /usr/src/app/data/test/out/outro.mp4

    What am I missing here? Is there a way to debug this further?

  • Basic use of FFMPEG in Python (Django Rest Framework)

    31 July 2018, by Feed Git

    I'm writing a viewset in Django Rest Framework to convert an image sequence into a video using FFMPEG, but I don't understand how to do it. I tried, but without success. Please help me with my code.

    My code:

    class VideoRenderSerializer(ModelSerializer):
       imageSequence = ListField(required=False, child=FileField(max_length=100000, allow_empty_file=False, use_url=False))

       class Meta:
           model = Video
           fields = [
               'imageSequence',
           ]

    class VideoRenderView(ModelViewSet):
       queryset = Video.objects.all()
       serializer_class = VideoRenderSerializer

       def render_video(self, request):
           # The uploaded files are collected here but never passed to ffmpeg below
           imgSequences = request.FILES.getlist('imageSequence')
           # This command loops a single hard-coded image.png for 30 seconds and writes to C:\output.mp4
           render_from_img_sequences_cmd = 'ffmpeg -loop 1 -i image.png -vf format=yuv420p -t 30 C:\output.mp4'
           # subprocess.call returns ffmpeg's exit code (non-zero means failure), not a file path,
           # and stdout/stderr are discarded, so ffmpeg's error message is never seen
           ffmpegresult = subprocess.call(render_from_img_sequences_cmd, shell=True, stdout=open(os.devnull, "w"), stderr=subprocess.STDOUT)
           return Response(ffmpegresult, status=200)

    The output is 1, and I can't find the output file!