Advanced search

Media (0)

Keyword: - Tags - /interaction

No media matching your criteria is available on the site.

Other articles (63)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Submit enhancements and plugins

    13 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the core MediaSPIP functionality will be considered.
    You can use the development discussion list to request help with creating a plugin. As MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.

On other sites (10027)

  • How to configure proc_open "pipes" for ffmpeg stdin/stderr on Windows?

    10 September 2018, by GDP

    Firstly, I’ve spent the week googling and trying variations of dozens and dozens of answers for Unix, but it’s been a complete bust; I need an answer for Windows, so this is not a duplicate of the Unix equivalents.

    We’re trying to create a scheduled task that will process a queue of tasks in PHP and maintain an array of up to 10 ffmpeg instances at a time. I’ve tried exec, shell_exec and proc_open, coupled with/without start /B, without any "complete" luck.
    I’m also quite certain that it has to do with setting up the descriptorspec and pipes (which I’m completely unfamiliar with), and here’s why:

    Per https://trac.ffmpeg.org/wiki/PHP,

    The part that says ">/dev/null" will redirect the standard OUTPUT
    (stdout) of the ffmpeg instance to /dev/null (effectively ignoring the
    output) and "2>/dev/null" will redirect the standard ERROR (stderr) to
    /dev/null (effectively ignoring any error log messages). These two can
    be combined into a shorter representation: ">/dev/null 2>&1". If you
    like, you can read more about I/O Redirection.

    An important note should be mentioned here. The ffmpeg command-line
    tool uses stderr for output of error log messages and stdout is
    reserved for possible use of pipes (to redirect the output media
    stream generated from ffmpeg to some other command line tool). That
    being said, if you run your ffmpeg in the background, you’ll most
    probably want to redirect the stderr to a log file, to be able to
    check it later.

    One more thing to take care about is the standard INPUT (stdin).
    Command-line ffmpeg tool is designed as an interactive utility that
    accepts user’s input (usually from keyboard) and reports the error log
    on the user’s current screen/terminal. When we run ffmpeg in the
    background, we want to tell ffmpeg that no input should be accepted
    (nor waited for) from the stdin. We can tell this to ffmpeg, using I/O
    redirection again: "</dev/null".

    echo "Starting ffmpeg...\n\n";
    echo shell_exec("ffmpeg -y -i input.avi output.avi </dev/null >/dev/null 2>/var/log/ffmpeg.log &");
    echo "Done.\n";

    This example actually uses shell_exec, though we want to use proc_open so that we can use a loop to check if the process has completed or not.
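
    As a Windows aside: cmd.exe supports the same redirection operators, but the null device is NUL rather than /dev/null, and there is no trailing "&" for backgrounding. A rough, untested equivalent of the wiki example (run synchronously, which is exactly why proc_open is wanted instead; the log file name is only illustrative) would be:

    echo "Starting ffmpeg...\n\n";
    echo shell_exec("ffmpeg -y -i input.avi output.avi <NUL >NUL 2>ffmpeg.log");
    echo "Done.\n";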

    Here’s a basic sample loop of what I’ve tried. The problem in executing this is that the actual ffmpeg processing completes, but the process hangs, "waiting for something". When I debug, step out of the loop and terminate the process after a few minutes, the ffmpeg output is written and the script carries on. (From the command line, ffmpeg takes less than a minute to complete.)

    $descriptorspec = array(
       array('pipe', 'r'),   // stdin
       array('pipe', 'w'),   // stdout
       array('pipe', 'w'),   // stderr
    );
    $pipes = null;
    $cwd = null;
    $env = null;
    $process = proc_open('start /B ffmpeg.exe -i input.mov output.mp4 -nostdin', $descriptorspec, $pipes, $cwd, $env);
    $status = proc_get_status($process);
    while($status['running']) {
       sleep (60);
       $status = proc_get_status($process);
    }
    proc_terminate($process);
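
    One direction that seems worth trying (an untested sketch rather than a confirmed fix; the log file name and the use of 'bypass_shell' are my own additions) is to avoid pipes entirely and hand ffmpeg real file handles, so nothing can block on an unread pipe buffer: stdin from the NUL device, stdout discarded, stderr appended to a log file. 'bypass_shell' is the Windows-only proc_open option that skips the intermediate cmd.exe:

    $descriptorspec = array(
       0 => array('file', 'NUL', 'r'),         // stdin fed from the null device, nothing to wait for
       1 => array('file', 'NUL', 'w'),         // stdout discarded (no pipe that could fill up)
       2 => array('file', 'ffmpeg.log', 'a'),  // stderr (ffmpeg's log output) appended to a file
    );
    $pipes = null;
    $process = proc_open(
       'ffmpeg.exe -nostdin -y -i input.mov output.mp4',
       $descriptorspec,
       $pipes,
       null,
       null,
       array('bypass_shell' => true)           // Windows only: launch the exe without wrapping it in cmd.exe
    );
    while (($status = proc_get_status($process)) && $status['running']) {
       sleep(10);
    }
    proc_close($process);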

    Also, as documented at ffmpeg Main-options:

    Enable interaction on standard input. On by default unless standard
    input is used as an input. To explicitly disable interaction you need
    to specify -nostdin.

    The -nostdin option seems to indicate that it addresses my problem, but it has no apparent effect. All the Unix solutions I’ve found still appear to require some form of added redirection, such as </dev/null or 2>&1.

    So, with that somewhat exhaustive prologue, can someone explain how to properly configure the proc_open function to satisfy how ffmpeg.exe interacts with I/O? If there is a better or more appropriate approach, I’m happy to do that, but the important thing is to be able to loop through an array of processes to check if they’re complete, so that other faster processes can complete in the meantime.

    UPDATE
    After exhaustive R&D, it seems that the I/O is not the issue (the -nostdin option works as advertised). The premise of my design was to use proc_get_status() to determine when ffmpeg was finished. The flaw in that approach is that it apparently does NOT return the actual PID of the ffmpeg process... it returns the parent PID.
    So, when proc_get_status() reported that the video conversion was complete, it was in fact still running, not hung. This was further complicated by testing on larger video files: the larger the video, the longer the "residual" time it took to actually finish. The I/O wasn’t the issue; watching the parent PID instead of the child PID was the problem. So, without getting into much lower-level system internals on Windows, this doesn’t appear to be possible with PHP directly. I’ve decided to abandon this approach, but hopefully this discovery will save someone else some time and trouble.
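
    A quick way to observe that mismatch (an illustrative sketch only; it assumes a single ffmpeg.exe instance is running) is to compare what proc_get_status() reports with what Windows itself lists:

    $status = proc_get_status($process);
    echo "proc_open reports PID {$status['pid']}\n";                     // parent/wrapper PID
    echo shell_exec('tasklist /FI "IMAGENAME eq ffmpeg.exe" /FO LIST');  // the actual ffmpeg.exe process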

  • FFmpeg libavcodec decoder then re-encode video issue

    11 November 2018, by maxhap

    I’m trying to use the libavcodec library in FFmpeg to decode and then re-encode an h264 video.

    I have the decoding part working (it renders to an SDL window fine), but when I try to re-encode the frames I get bad data in the re-encoded video’s samples.

    Here is a cut down code snippet of my encode logic.

    EncodeResponse H264Codec::EncodeFrame(AVFrame* pFrame, StreamCodecContainer* pStreamCodecContainer, AVPacket* pPacket)
    {
       int result = 0;

       result = avcodec_send_frame(pStreamCodecContainer->pEncodingCodecContext, pFrame);

       if(result < 0)
       {
           return EncodeResponse::Fail;
       }

       while (result >= 0)
       {
           result = avcodec_receive_packet(pStreamCodecContainer->pEncodingCodecContext, pPacket);

           // If the encoder needs more frames to create a packet, then return and wait for
           // this method to be called again when a new frame is presented.
           // Else check if we have failed to encode for some reason.
           // Else a packet has successfully been returned, then write it to the file.
           if (result == AVERROR(EAGAIN) || result == AVERROR_EOF)
           {
               // Higher-level logic decodes the next frame from the source
               // video, then calls this method again.
               return EncodeResponse::SendNextFrame;
           }
           else if (result < 0)
           {
               return EncodeResponse::Fail;
           }
           else
           {
               // Prepare packet for muxing.
               if (pStreamCodecContainer->codecType == AVMEDIA_TYPE_VIDEO)
               {
                   av_packet_rescale_ts(m_pPacket, pStreamCodecContainer->pEncodingCodecContext->time_base,
                                        m_pDecodingFormatContext->streams[pStreamCodecContainer->streamIndex]->time_base);
               }

               m_pPacket->stream_index = pStreamCodecContainer->streamIndex;

               int result = av_interleaved_write_frame(m_pEncodingFormatContext, m_pPacket);

               av_packet_unref(m_pPacket);
           }
       }

       return EncodeResponse::EncoderEndOfFile;
    }

    Strange behaviour I notice is that, before I get the first packet from avcodec_receive_packet, I have to send 50+ frames to avcodec_send_frame.

    I built a debug build of FFmpeg and, stepping into the code, I notice that AVERROR(EAGAIN) is returned by avcodec_receive_packet because of the following in x264_encoder_encode in encoder.c:

       if( h->frames.i_input <= h->frames.i_delay + 1 - h->i_thread_frames )
       {
           /* Nothing yet to encode, waiting for filling of buffers */
           pic_out->i_type = X264_TYPE_AUTO;
           return 0;
       }

    For some reason my codec context (h) never has any frames. I have spent a long time trying to debug ffmpeg and to determine what I’m doing wrong, but I have reached the limit of my video codec knowledge (which is little).
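
    From what I have read since, that initial gap is expected: x264 buffers frames for lookahead and frame threading, so the first packets only appear after a few dozen frames have been sent, and whatever is still buffered only comes out when the encoder is drained at end of stream. A minimal draining sketch using the same names as my code above (FlushEncoder itself is a hypothetical helper, not part of my current code; I rescale to the output stream's time_base here, since that is what the muxer expects after avformat_write_header):

    void H264Codec::FlushEncoder(StreamCodecContainer* pStreamCodecContainer, AVPacket* pPacket)
    {
        // Passing nullptr puts the encoder into draining mode, so the frames it has
        // buffered for lookahead/B-frames/threads are emitted as packets.
        avcodec_send_frame(pStreamCodecContainer->pEncodingCodecContext, nullptr);

        while (avcodec_receive_packet(pStreamCodecContainer->pEncodingCodecContext, pPacket) == 0)
        {
            av_packet_rescale_ts(pPacket, pStreamCodecContainer->pEncodingCodecContext->time_base,
                                 m_pEncodingFormatContext->streams[pStreamCodecContainer->streamIndex]->time_base);
            pPacket->stream_index = pStreamCodecContainer->streamIndex;
            av_interleaved_write_frame(m_pEncodingFormatContext, pPacket);
            av_packet_unref(pPacket);
        }

        // Finalise the container only after the encoder has been fully drained.
        av_write_trailer(m_pEncodingFormatContext);
    }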

    I’m testing this with a video that has no audio to reduce complication.

    I have created a cut-down version of my application and provided a self-contained project (with ffmpeg and SDL built as dependencies). Hopefully this can help anyone willing to help me :).

    Project Link
    https://github.com/maxhap/video-codec


    After looking into encoder initialisation, I found that I have to set the AV_CODEC_FLAG_GLOBAL_HEADER flag on the codec context before calling avcodec_open2:

    pStreamCodecContainer->pEncodingCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    This change led to the re-encoded moov box looking much healthier (I used MP4Box.js to parse it). However, the video still does not play correctly: the output has grey frames at the start when played in VLC and won’t play in other players.

    I have since tried creating an encoding context via the sample code, rather than using my decoding codec parameters. This fixed the bad-data/encoding issue. However, my DTS times are scaling to huge numbers.

    Here is my new codec init

    if (pStreamCodecContainer->codecType == AVMEDIA_TYPE_VIDEO)
    {
       pStreamCodecContainer->pEncodingCodecContext->height = pStreamCodecContainer->pDecodingCodecContext->height;
       pStreamCodecContainer->pEncodingCodecContext->width = pStreamCodecContainer->pDecodingCodecContext->width;
       pStreamCodecContainer->pEncodingCodecContext->sample_aspect_ratio = pStreamCodecContainer->pDecodingCodecContext->sample_aspect_ratio;

       /* take first format from list of supported formats */
       if (pStreamCodecContainer->pEncodingCodec->pix_fmts)
       {
           pStreamCodecContainer->pEncodingCodecContext->pix_fmt = pStreamCodecContainer->pEncodingCodec->pix_fmts[0];
       }
       else
       {
           pStreamCodecContainer->pEncodingCodecContext->pix_fmt = pStreamCodecContainer->pDecodingCodecContext->pix_fmt;
       }

       /* video time_base can be set to whatever is handy and supported by encoder */      
       pStreamCodecContainer->pEncodingCodecContext->time_base = av_inv_q(pStreamCodecContainer->pDecodingCodecContext->framerate);
       pStreamCodecContainer->pEncodingCodecContext->sample_aspect_ratio = pStreamCodecContainer->pDecodingCodecContext->sample_aspect_ratio;
    }
    else
    {
       pStreamCodecContainer->pEncodingCodecContext->channel_layout = pStreamCodecContainer->pDecodingCodecContext->channel_layout;
       pStreamCodecContainer->pEncodingCodecContext->channels =
           av_get_channel_layout_nb_channels(pStreamCodecContainer->pEncodingCodecContext->channel_layout);

       /* take first format from list of supported formats */
       pStreamCodecContainer->pEncodingCodecContext->sample_fmt = pStreamCodecContainer->pEncodingCodec->sample_fmts[0];
       pStreamCodecContainer->pEncodingCodecContext->time_base = AVRational{ 1, pStreamCodecContainer->pEncodingCodecContext->sample_rate };
    }

    Any ideas why my DTS time is re-scaling incorrectly?


    I managed to fix the DTS scaling by using the time_base value directly from the decoding stream.

    So

    pStreamCodecContainer->pEncodingCodecContext->time_base = m_pDecodingFormatContext->streams[pStreamCodecContainer->streamIndex]->time_base;

    Instead of

    pStreamCodecContainer->pEncodingCodecContext->time_base = av_inv_q(pStreamCodecContainer->pDecodingCodecContext->framerate);

    I will create an answer based on all my findings.
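
    To summarise how I now understand the time_base plumbing (a condensed sketch; the output-stream assignment reflects my own setup and is an assumption, not code quoted above): the encoder's time_base, the output stream's time_base and the av_packet_rescale_ts() call have to agree with each other, because avformat_write_header() is allowed to change a stream's time_base when it writes the header.

    // The encoder keeps the input stream's time_base...
    pStreamCodecContainer->pEncodingCodecContext->time_base =
        m_pDecodingFormatContext->streams[pStreamCodecContainer->streamIndex]->time_base;

    // ...and every encoded packet is rescaled from the encoder's time_base to
    // whatever time_base the output stream ended up with after avformat_write_header().
    av_packet_rescale_ts(pPacket,
                         pStreamCodecContainer->pEncodingCodecContext->time_base,
                         m_pEncodingFormatContext->streams[pStreamCodecContainer->streamIndex]->time_base);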

  • cbs_h264: Fix handling of auxiliary pictures

    7 November 2018, by Andreas Rheinhardt
    cbs_h264: Fix handling of auxiliary pictures
    

    The earlier code used the most recent non-auxiliary slice to determine
    whether an auxiliary slice has the syntax of an IDR slice, even when
    the most recent slice came from a redundant frame. Now only
    slices of the primary coded picture are used, as the specifications
    mandate.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@googlemail.com>

    • [DH] libavcodec/cbs_h264_syntax_template.c