Advanced search

Media (91)

Other articles (18)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04
    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)

  • Submit enhancements and plugins

    13 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the core MediaSPIP functionality will be considered.
    You can use the development discussion list to ask for help with creating a plugin. As MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including:
      • critique of existing features and functions
      • articles contributed by developers, administrators, content producers and editors
      • screenshots to illustrate the above
      • translations of existing documentation into other languages
    To contribute, subscribe to the project users’ mailing (...)

On other sites (3636)

  • FFmpeg - Delay Only Video Stream Of Audio Linked dshow Input

    15 April 2018, by Nimble

    I’m having a slight issue trying to sync my audio and video to within an acceptable margin of error. Here is my command:

    ffmpeg -y -thread_queue_size 9999 -indexmem 9999 -guess_layout_max 0 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M ^
    -framerate 100 -pixel_format nv12 -i video="Video (00 Pro Capture HDMI 4K+)":audio="SPDIF/ADAT (1+2) (RME Fireface UC)" ^
    -map 0:0,0:1 -map 0:1 -flags +cgop -force_key_frames expr:gte(t,n_forced*2) -c:v h264_nvenc -preset llhp -pix_fmt nv12 ^
    -b:v 250M -minrate 250M -maxrate 250M -bufsize 250M -c:a aac -ar 44100 -b:a 384k -ac 2 -r 100 -af "aresample=async=250" ^
    -vsync 1 -max_muxing_queue_size 9999 -f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 ^
    C:\Users\djcim\Videos\PC\PC\PC%02d.ts

    My problem is that the video comes in slightly ahead of the audio. I can use -itsoffset, but then I have to call the video and audio as separate inputs, since -itsoffset offsets both the audio and the video of the input it precedes. While that may seem the obvious solution, it causes inconsistent audio synchronization whenever the audio isn’t opened together with the video: if the two aren’t called at the same time, the video can end up ahead or behind by a 2-3 frame margin. When I call them at the same time, the video consistently comes in 2 frames ahead of the audio, every time. I just need a way to delay only the video stream, without delaying the audio, while keeping both audio and video linked from the beginning. I’ve tried this with no luck:

    ffmpeg -y -thread_queue_size 9999 -indexmem 9999 -guess_layout_max 0 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M ^
    -framerate 200 -pixel_format nv12 -i video="Video (00 Pro Capture HDMI 4K+)":audio="SPDIF/ADAT (1+2) (RME Fireface UC)" ^
    -flags +cgop -force_key_frames expr:gte(t,n_forced*2) -c:v h264_nvenc -preset llhp -pix_fmt nv12 -b:v 250M ^
    -minrate 250M -maxrate 250M -bufsize 250M -c:a aac -ar 44100 -b:a 384k -ac 2 -r 100 ^
    -filter_complex "[0:v] setpts=PTS-STARTPTS+.032/TB [v]; [0:a] asetpts=PTS-STARTPTS, aresample=async=250 [a]" -map [v] ^
    -map [a] -vsync 1 -max_muxing_queue_size 9999 -f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 ^
    C:\Users\djcim\Videos\PC\PC\PC%02d.ts

    Just like with -itsoffset, both the video and the audio end up delayed. You can delay only the audio with adelay, but there doesn’t seem to be an equivalent filter for delaying video (one possible workaround is sketched below).

    Any help or advice would be greatly appreciated.
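
    One direction worth contrasting with the attempt above (this sketch is not from the original post): shift timestamps on the video branch only and leave the audio stream completely unfiltered, so that only the video moves relative to the audio. The 0.020 offset (2 frames at 100 fps) and the trimmed-down encoder options are assumptions:

    rem delay only the video by 20 ms; the audio keeps its original timestamps
    ffmpeg -y -f dshow -video_size 3440x1440 -framerate 100 -pixel_format nv12 ^
    -i video="Video (00 Pro Capture HDMI 4K+)":audio="SPDIF/ADAT (1+2) (RME Fireface UC)" ^
    -filter_complex "[0:v]setpts=PTS+0.020/TB[v]" -map "[v]" -map 0:a ^
    -c:v h264_nvenc -preset llhp -c:a aac -ar 44100 -b:a 384k -ac 2 out.ts

    Unlike the asetpts=PTS-STARTPTS variant above, this leaves the audio timestamps untouched, so the relative shift applies to the video alone.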

  • avcodec/options_table: Add SMPTE ST428-1 colour primaries (CIE 1931 XYZ) to command line options

    1 September 2015, by Kevin Wheatley
    avcodec/options_table: Add SMPTE ST428-1 colour primaries (CIE 1931 XYZ) to command line options
    

    Signed-off-by: Kevin Wheatley <kevin.j.wheatley@gmail.com>
    Reviewed-by: Hendrik Leppkes <h.leppkes@gmail.com>
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] libavcodec/options_table.h
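
    In practical terms (an illustration, not part of the commit message), the change makes the new primaries selectable by name on the command line; a hedged example, assuming the option value is registered as smpte428_1 as the commit title suggests, with placeholder filenames:

    ffmpeg -i input.mov -c:v libx264 -color_primaries smpte428_1 output.mov
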
  • How to use the FFmpeg API to overlay a watermark with a filter?

    6 September 2022, by Leon Lee

    OS: Ubuntu 20.04

    FFmpeg: 4.4.0

    Test video:

    Input #0, hevc, from './videos/akiyo_352x288p25.265':
      Duration: N/A, bitrate: N/A
      Stream #0:0: Video: hevc (Main), yuv420p(tv), 352x288, 25 fps, 25 tbr, 1200k tbn, 25 tbc

    Test watermark:

    a 200×200 PNG (200.png)

    I copied the official FFmpeg filtering example.

    It compiles and runs with no errors, but I can’t see the watermark being added.

    Here is my code:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>
    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>

    int open_input_file(AVFormatContext *fmt, AVCodecContext **codecctx, AVCodec *codec, const char *filename, int index)
    {
        int ret = 0;
        *codecctx = avcodec_alloc_context3(codec);
        ret = avcodec_parameters_to_context(*codecctx, fmt->streams[index]->codecpar);
        if (ret < 0)
        {
            printf("avcodec_parameters_to_context error,ret:%d\n", ret);
            return -1;
        }

        // open the decoder
        ret = avcodec_open2(*codecctx, codec, NULL);
        if (ret < 0)
        {
            printf("avcodec_open2 error,ret:%d\n", ret);
            return -2;
        }
        printf("pix:%d\n", (*codecctx)->pix_fmt);
        return ret;
    }

    int init_filter(AVFilterContext **buffersrc_ctx, AVFilterContext **buffersink_ctx, AVFilterGraph **filter_graph, AVStream *stream, AVCodecContext *codecctx, const char *filter_desc)
    {
        int ret = -1;
        char args[512];
        const AVFilter *buffersrc = avfilter_get_by_name("buffer");
        const AVFilter *buffersink = avfilter_get_by_name("buffersink");

        AVFilterInOut *input = avfilter_inout_alloc();
        AVFilterInOut *output = avfilter_inout_alloc();

        enum AVPixelFormat pix_fmts[] = {AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE};

        if (!output || !input || !*filter_graph)
        {
            ret = -1;
            printf("avfilter_graph_alloc/avfilter_inout_alloc error,ret:%d\n", ret);
            goto end;
        }
        snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                 codecctx->width, codecctx->height, codecctx->pix_fmt,
                 stream->time_base.num, stream->time_base.den,
                 codecctx->sample_aspect_ratio.num, codecctx->sample_aspect_ratio.den);
        ret = avfilter_graph_create_filter(buffersrc_ctx, buffersrc, "in", args, NULL, *filter_graph);
        if (ret < 0)
        {
            printf("avfilter_graph_create_filter buffersrc error,ret:%d\n", ret);
            goto end;
        }

        ret = avfilter_graph_create_filter(buffersink_ctx, buffersink, "out", NULL, NULL, *filter_graph);
        if (ret < 0)
        {
            printf("avfilter_graph_create_filter buffersink error,ret:%d\n", ret);
            goto end;
        }
        ret = av_opt_set_int_list(*buffersink_ctx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
        if (ret < 0)
        {
            printf("av_opt_set_int_list error,ret:%d\n", ret);
            goto end;
        }

        /*
         * The buffer source output must be connected to the input pad of
         * the first filter described by filter_desc; since the first
         * filter input label is not specified, it is set to "in" by default.
         */
        output->name = av_strdup("in");
        output->filter_ctx = *buffersrc_ctx;
        output->pad_idx = 0;
        output->next = NULL;

        /*
         * The buffer sink input must be connected to the output pad of
         * the last filter described by filter_desc; since the last
         * filter output label is not specified, it is set to "out" by default.
         */
        input->name = av_strdup("out");
        input->filter_ctx = *buffersink_ctx;
        input->pad_idx = 0;
        input->next = NULL;

        if ((ret = avfilter_graph_parse_ptr(*filter_graph, filter_desc, &input, &output, NULL)) < 0)
        {
            printf("avfilter_graph_parse_ptr error,ret:%d\n", ret);
            goto end;
        }

        if ((ret = avfilter_graph_config(*filter_graph, NULL)) < 0)
        {
            printf("avfilter_graph_config error,ret:%d\n", ret);
            goto end;
        }
    end:
        avfilter_inout_free(&input);
        avfilter_inout_free(&output);
        return ret;
    }

    int main(int argc, char **argv)
    {
        int ret;
        const char *filter_descr = "drawbox=x=100:y=100:w=100:h=100:color=pink@0.5"; // OK
        //const char *filter_descr = "movie=200.png[wm];[in][wm]overlay=10:10[out]"; //Test
        //const char *filter_descr = "scale=640:360,transpose=cclock";
        AVFormatContext *pFormatCtx = NULL;
        AVCodecContext *pCodecCtx = NULL;
        AVFilterContext *buffersink_ctx;
        AVFilterContext *buffersrc_ctx;
        AVFilterGraph *filter_graph;
        AVCodec *codec;
        int video_stream_index = -1;

        AVPacket packet;
        AVFrame *pFrame = NULL;
        AVFrame *pFrame_out = NULL;
        filter_graph = avfilter_graph_alloc();
        FILE *fp_yuv = fopen("test.yuv", "wb+");
        ret = avformat_open_input(&pFormatCtx, argv[1], NULL, NULL);
        if (ret < 0)
        {
            printf("avformat_open_input error,ret:%d\n", ret);
            return -1;
        }

        ret = avformat_find_stream_info(pFormatCtx, NULL);
        if (ret < 0)
        {
            printf("avformat_find_stream_info error,ret:%d\n", ret);
            return -2;
        }

        ret = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
        if (ret < 0)
        {
            printf("av_find_best_stream error,ret:%d\n", ret);
            return -3;
        }
        // remember the video stream index
        video_stream_index = ret;

        av_dump_format(pFormatCtx, 0, argv[1], 0);
        if ((ret = open_input_file(pFormatCtx, &pCodecCtx, codec, argv[1], video_stream_index)) < 0)
        {
            printf("open_input_file error,ret:%d\n", ret);
            goto end;
        }

        if ((ret = init_filter(&buffersrc_ctx, &buffersink_ctx, &filter_graph, pFormatCtx->streams[video_stream_index], pCodecCtx, filter_descr)) < 0)
        {
            printf("init_filter error,ret:%d\n", ret);
            goto end;
        }
        pFrame = av_frame_alloc();
        pFrame_out = av_frame_alloc();
        while (1)
        {
            if ((ret = av_read_frame(pFormatCtx, &packet)) < 0)
                break;

            if (packet.stream_index == video_stream_index)
            {
                ret = avcodec_send_packet(pCodecCtx, &packet);
                if (ret < 0)
                {
                    printf("avcodec_send_packet error,ret:%d\n", ret);
                    break;
                }

                while (ret >= 0)
                {
                    ret = avcodec_receive_frame(pCodecCtx, pFrame);
                    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                    {
                        break;
                    }
                    else if (ret < 0)
                    {
                        printf("avcodec_receive_frame error,ret:%d\n", ret);
                        goto end;
                    }

                    pFrame->pts = pFrame->best_effort_timestamp;

                    /* push the decoded frame into the filtergraph */
                    ret = av_buffersrc_add_frame_flags(buffersrc_ctx, pFrame, AV_BUFFERSRC_FLAG_KEEP_REF);
                    if (ret < 0)
                    {
                        printf("av_buffersrc_add_frame_flags error,ret:%d\n", ret);
                        break;
                    }

                    /* pull filtered frames from the filtergraph */
                    while (1)
                    {
                        ret = av_buffersink_get_frame(buffersink_ctx, pFrame_out);
                        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                            break;
                        if (ret < 0)
                            goto end;
                        if (pFrame_out->format == AV_PIX_FMT_YUV420P)
                        {
                            /* write the Y, U and V planes row by row, honouring linesize padding */
                            for (int i = 0; i < pFrame_out->height; i++)
                                fwrite(pFrame_out->data[0] + pFrame_out->linesize[0] * i, 1, pFrame_out->width, fp_yuv);
                            for (int i = 0; i < pFrame_out->height / 2; i++)
                                fwrite(pFrame_out->data[1] + pFrame_out->linesize[1] * i, 1, pFrame_out->width / 2, fp_yuv);
                            for (int i = 0; i < pFrame_out->height / 2; i++)
                                fwrite(pFrame_out->data[2] + pFrame_out->linesize[2] * i, 1, pFrame_out->width / 2, fp_yuv);
                        }
                        av_frame_unref(pFrame_out);
                    }
                    av_frame_unref(pFrame);
                }
            }
            av_packet_unref(&packet);
        }
    end:
        // release everything we allocated
        av_frame_free(&pFrame);
        av_frame_free(&pFrame_out);
        avfilter_graph_free(&filter_graph);
        avcodec_free_context(&pCodecCtx);
        avformat_close_input(&pFormatCtx);
        fclose(fp_yuv);
        return ret < 0 ? ret : 0;
    }

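    Two things may help when debugging this (not from the original post). First, in the code as posted the active filter_descr is the drawbox test string, while the movie=200.png[wm];[in][wm]overlay=10:10[out] line that would actually draw the watermark is commented out. Second, the same overlay graph can be checked from the command line before wiring it into the API; a minimal sketch, assuming the file paths quoted above:

    # hypothetical command-line check of the same overlay (paths from the post)
    ffmpeg -i ./videos/akiyo_352x288p25.265 -i 200.png \
           -filter_complex "[0:v][1:v]overlay=10:10" -pix_fmt yuv420p out.mp4

    If the command-line overlay works, the difference is most likely in the filter description passed to avfilter_graph_parse_ptr rather than in the graph-building code itself.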