Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (13)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image, or text; only a single document can be linked to a "media" article;

  • Possible deployments

    31 January 2010

    Two types of deployment can be considered, depending on two aspects: the intended installation method (standalone or as a farm); the expected number of daily encodes and the expected traffic;
    Video encoding is a heavy process that consumes enormous amounts of system resources (CPU and RAM), and all of this must be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • Managing object creation and editing rights

    8 February 2011

    By default, many features are restricted to administrators, but each remains independently configurable to change the minimum status required to use it, notably: writing content on the site, adjustable in the form-template settings; adding notes to articles; adding captions and annotations to images;

On other sites (3756)

  • ERROR "Tag [3][0][0][0] incompatible with output codec id '86016' (mp4a)" while writing headers for output mp4 file from a UDP stream

    8 May 2023, by lokit khemka

    I have a running UDP stream that I am simulating with this FFmpeg command:

    ffmpeg -stream_loop -1 -re -i ./small_bunny_1080p_60fps.mp4 -v 0 -vcodec mpeg4 -f mpegts udp://127.0.0.1:23000

    The video file is obtained from this GitHub repository: https://github.com/leandromoreira/ffmpeg-libav-tutorial

    I keep getting the error when I call avformat_write_header. The output format is mp4, the output video codec is AV1, and the output audio codec is the same as the input audio codec.

    I tried to create a minimal reproducible example; I think it is still not completely minimal, but it reproduces the exact error.

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/timestamp.h>
    #include <libavutil/opt.h>
    #include <libswscale/swscale.h>
    /* Standard C headers; the names were garbled in the original post.
       These cover the calls used below (an assumption). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #include <stdarg.h>
    #include <inttypes.h>

    typedef struct StreamingContext{
        AVFormatContext* avfc;
        const AVCodec *video_avc;
        const AVCodec *audio_avc;
        AVStream *video_avs;
        AVStream *audio_avs;
        AVCodecContext *video_avcc;
        AVCodecContext *audio_avcc;
        int video_index;
        int audio_index;
        char* filename;
        struct SwsContext *sws_ctx;
    }StreamingContext;


    typedef struct StreamingParams{
        char copy_video;
        char copy_audio;
        char *output_extension;
        char *muxer_opt_key;
        char *muxer_opt_value;
        char *video_codec;
        char *audio_codec;
        char *codec_priv_key;
        char *codec_priv_value;
    }StreamingParams;

    void logging(const char *fmt, ...)
    {
        va_list args;
        fprintf(stderr, "LOG: ");
        va_start(args, fmt);
        vfprintf(stderr, fmt, args);
        va_end(args);
        fprintf(stderr, "\n");
    }

    int fill_stream_info(AVStream *avs, const AVCodec **avc, AVCodecContext **avcc)
    {
        *avc = avcodec_find_decoder(avs->codecpar->codec_id);
        *avcc = avcodec_alloc_context3(*avc);
        if (avcodec_parameters_to_context(*avcc, avs->codecpar) < 0)
        {
            logging("Failed to fill Codec Context.");
            return -1;
        }
        avcodec_open2(*avcc, *avc, NULL);
        return 0;
    }

    int open_media(const char *in_filename, AVFormatContext **avfc)
    {
        *avfc = avformat_alloc_context();
        if (avformat_open_input(avfc, in_filename, NULL, NULL) != 0)
        {
            logging("Failed to open input file %s", in_filename);
            return -1;
        }

        if (avformat_find_stream_info(*avfc, NULL) < 0)
        {
            logging("Failed to get Stream Info.");
            return -1;
        }
        return 0;
    }

    int prepare_decoder(StreamingContext *sc)
    {
        for (int i = 0; i < (int)sc->avfc->nb_streams; i++)
        {
            if (sc->avfc->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
            {
                sc->video_avs = sc->avfc->streams[i];
                sc->video_index = i;

                if (fill_stream_info(sc->video_avs, &sc->video_avc, &sc->video_avcc))
                {
                    return -1;
                }
            }
            else if (sc->avfc->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
            {
                sc->audio_avs = sc->avfc->streams[i];
                sc->audio_index = i;

                if (fill_stream_info(sc->audio_avs, &sc->audio_avc, &sc->audio_avcc))
                {
                    return -1;
                }
            }
            else
            {
                logging("Skipping Streams other than Audio and Video.");
            }
        }
        return 0;
    }

    int prepare_video_encoder(StreamingContext *encoder_sc, AVCodecContext *decoder_ctx, AVRational input_framerate,
                              StreamingParams sp, int scaled_frame_width, int scaled_frame_height)
    {
        encoder_sc->video_avs = avformat_new_stream(encoder_sc->avfc, NULL);
        encoder_sc->video_avc = avcodec_find_encoder_by_name(sp.video_codec);
        if (!encoder_sc->video_avc)
        {
            logging("Cannot find the Codec.");
            return -1;
        }

        encoder_sc->video_avcc = avcodec_alloc_context3(encoder_sc->video_avc);
        if (!encoder_sc->video_avcc)
        {
            logging("Could not allocate memory for Codec Context.");
            return -1;
        }

        av_opt_set(encoder_sc->video_avcc->priv_data, "preset", "fast", 0);
        if (sp.codec_priv_key && sp.codec_priv_value)
            av_opt_set(encoder_sc->video_avcc->priv_data, sp.codec_priv_key, sp.codec_priv_value, 0);

        encoder_sc->video_avcc->height = scaled_frame_height;
        encoder_sc->video_avcc->width = scaled_frame_width;
        encoder_sc->video_avcc->sample_aspect_ratio = decoder_ctx->sample_aspect_ratio;

        if (encoder_sc->video_avc->pix_fmts)
            encoder_sc->video_avcc->pix_fmt = encoder_sc->video_avc->pix_fmts[0];
        else
            encoder_sc->video_avcc->pix_fmt = decoder_ctx->pix_fmt;

        encoder_sc->video_avcc->bit_rate = 2 * 1000 * 1000;

        encoder_sc->video_avcc->time_base = av_inv_q(input_framerate);
        encoder_sc->video_avs->time_base = encoder_sc->video_avcc->time_base;

        if (avcodec_open2(encoder_sc->video_avcc, encoder_sc->video_avc, NULL) < 0)
        {
            logging("Could not open the Codec.");
            return -1;
        }
        avcodec_parameters_from_context(encoder_sc->video_avs->codecpar, encoder_sc->video_avcc);
        return 0;
    }


    int prepare_copy(AVFormatContext *avfc, AVStream **avs, AVCodecParameters *decoder_par)
    {
        *avs = avformat_new_stream(avfc, NULL);
        avcodec_parameters_copy((*avs)->codecpar, decoder_par);
        return 0;
    }

    int encode_video(StreamingContext *decoder, StreamingContext *encoder, AVFrame *input_frame)
    {
        if (input_frame)
            input_frame->pict_type = AV_PICTURE_TYPE_NONE;

        AVPacket *output_packet = av_packet_alloc();

        int response = avcodec_send_frame(encoder->video_avcc, input_frame);

        while (response >= 0)
        {
            response = avcodec_receive_packet(encoder->video_avcc, output_packet);
            if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
            {
                break;
            }

            output_packet->stream_index = decoder->video_index;
            output_packet->duration = encoder->video_avs->time_base.den / encoder->video_avs->time_base.num / decoder->video_avs->avg_frame_rate.num * decoder->video_avs->avg_frame_rate.den;

            av_packet_rescale_ts(output_packet, decoder->video_avs->time_base, encoder->video_avs->time_base);
            response = av_interleaved_write_frame(encoder->avfc, output_packet);
        }

        av_packet_unref(output_packet);
        av_packet_free(&output_packet);

        return 0;
    }

    int remux(AVPacket **pkt, AVFormatContext **avfc, AVRational decoder_tb, AVRational encoder_tb)
    {
        av_packet_rescale_ts(*pkt, decoder_tb, encoder_tb);
        if (av_interleaved_write_frame(*avfc, *pkt) < 0)
        {
            logging("Error while copying Stream Packet.");
            return -1;
        }
        return 0;
    }

    int transcode_video(StreamingContext *decoder, StreamingContext *encoder, AVPacket *input_packet, AVFrame *input_frame)
    {
        int response = avcodec_send_packet(decoder->video_avcc, input_packet);
        while (response >= 0)
        {
            response = avcodec_receive_frame(decoder->video_avcc, input_frame);

            if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
            {
                break;
            }
            if (response >= 0)
            {
                if (encode_video(decoder, encoder, input_frame))
                    return -1;
            }

            av_frame_unref(input_frame);
        }
        return 0;
    }

    int main(int argc, char *argv[])
    {
        const int scaled_frame_width = 854;
        const int scaled_frame_height = 480;
        StreamingParams sp = {0};
        sp.copy_audio = 1;
        sp.copy_video = 0;
        sp.video_codec = "libsvtav1";

        StreamingContext *decoder = (StreamingContext *)calloc(1, sizeof(StreamingContext));
        decoder->filename = "udp://127.0.0.1:23000";

        StreamingContext *encoder = (StreamingContext *)calloc(1, sizeof(StreamingContext));
        encoder->filename = "small_bunny_9.mp4";

        if (sp.output_extension)
        {
            strcat(encoder->filename, sp.output_extension);
        }

        open_media(decoder->filename, &decoder->avfc);
        prepare_decoder(decoder);

        avformat_alloc_output_context2(&encoder->avfc, NULL, "mp4", encoder->filename);
        AVRational input_framerate = av_guess_frame_rate(decoder->avfc, decoder->video_avs, NULL);
        prepare_video_encoder(encoder, decoder->video_avcc, input_framerate, sp, scaled_frame_width, scaled_frame_height);

        prepare_copy(encoder->avfc, &encoder->audio_avs, decoder->audio_avs->codecpar);

        if (encoder->avfc->oformat->flags & AVFMT_GLOBALHEADER)
            encoder->avfc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

        if (!(encoder->avfc->oformat->flags & AVFMT_NOFILE))
        {
            if (avio_open(&encoder->avfc->pb, encoder->filename, AVIO_FLAG_WRITE) < 0)
            {
                logging("could not open the output file");
                return -1;
            }
        }

        if (avformat_write_header(encoder->avfc, NULL) < 0)
        {
            logging("an error occurred when opening output file");
            return -1;
        }

        AVFrame *input_frame = av_frame_alloc();
        AVPacket *input_packet = av_packet_alloc();

        while (1)
        {
            int ret = av_read_frame(decoder->avfc, input_packet);
            if (ret < 0)
                break;
            if (decoder->avfc->streams[input_packet->stream_index]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
            {
                if (transcode_video(decoder, encoder, input_packet, input_frame))
                    return -1;
                av_packet_unref(input_packet);
            }
            else if (decoder->avfc->streams[input_packet->stream_index]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
            {
                if (remux(&input_packet, &encoder->avfc, decoder->audio_avs->time_base, encoder->audio_avs->time_base))
                    return -1;
            }
            else
            {
                logging("Ignoring all nonvideo or audio packets.");
            }
        }

        if (encode_video(decoder, encoder, NULL))
            return -1;

        av_write_trailer(encoder->avfc);

        if (input_frame != NULL)
        {
            av_frame_free(&input_frame);
            input_frame = NULL;
        }

        if (input_packet != NULL)
        {
            av_packet_free(&input_packet);
            input_packet = NULL;
        }

        avformat_close_input(&decoder->avfc);

        avformat_free_context(decoder->avfc);
        decoder->avfc = NULL;
        avformat_free_context(encoder->avfc);
        encoder->avfc = NULL;

        avcodec_free_context(&decoder->video_avcc);
        decoder->video_avcc = NULL;
        avcodec_free_context(&decoder->audio_avcc);
        decoder->audio_avcc = NULL;

        free(decoder);
        decoder = NULL;
        free(encoder);
        encoder = NULL;

        return 0;
    }

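    A note on the error itself: when stream parameters are copied verbatim from an MPEG-TS input, the MP4 muxer can reject the copied codec_tag for AAC, which matches the "Tag ... incompatible with output codec id ... (mp4a)" message. A minimal sketch of the commonly suggested remedy, assuming the prepare_copy function above and mirroring FFmpeg's own remuxing example, is to clear the tag and let the output muxer pick one:

    int prepare_copy(AVFormatContext *avfc, AVStream **avs, AVCodecParameters *decoder_par)
    {
        *avs = avformat_new_stream(avfc, NULL);
        avcodec_parameters_copy((*avs)->codecpar, decoder_par);
        /* The MPEG-TS tag is meaningless to the MP4 muxer; zero it so the
           muxer selects a tag compatible with the output container. */
        (*avs)->codecpar->codec_tag = 0;
        return 0;
    }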

  • ffmpeg with multiple live-stream inputs adds async delay after filter

    12 January 2021, by Godmar

    I am struggling to apply ffmpeg to the remote control of an autonomous truck.


    There are 3 video streams from cameras on the local network, each described with an .sdp file like this one (MJPEG over RTP, correct me if I'm wrong):

    m=video 50910 RTP/AVP 26
    c=IN IP4 192.168.1.91


    I want to combine the three pictures into a single video stream using this:


    ffmpeg -hide_banner -protocol_whitelist "rtp,file,udp" -i "cam1.sdp" \
    -protocol_whitelist "rtp,file,udp" -i "cam2.sdp" \
    -protocol_whitelist "rtp,file,udp" -i "cam3.sdp" \
    -filter_complex "\
    nullsrc=size=1800x600 [back]; \
    [back][b]overlay=1000[tmp1]; \
    [tmp1][c]overlay=600[tmp2]; \
    [tmp2][a]overlay" \
    -vcodec libx264 \
    -crf 25 -maxrate 4M -bufsize 8M -r 30 -preset ultrafast -tune zerolatency \
    -f mpegts udp://localhost:1234


    When I launch this, ffmpeg starts emitting errors about RTP packets being lost. In the output, the fps of every camera seems unstable, so this is unacceptable. I am able to run ffplay or mplayer on all three cameras simultaneously, and I can also produce such a combined stream using a pre-recorded video file as input. So it seems ffmpeg just can't read three UDP streams that fast. The cameras stream at 10 Mbit/s, 800x600, 30 fps MJPEG; those are the minimal settings I can afford, but the cameras can do much more.


    So I tried to do something about the size of the UDP buffer. There is a way to set buffer_size and fifo_size for a udp:// stream, but no such option for a stream described by an .sdp file. I did find a way to run the stream with an rtp://-style URL, but it doesn't seem to pass the arguments after '?' down to the UDP layer.

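    (One avenue worth noting, as an assumption rather than a verified fix: the SDP demuxer shares part of its option table with RTSP, so buffer_size and reorder_queue_size may be settable as demuxer options rather than URL parameters. A hypothetical C sketch, where open_sdp_with_buffer and the 10 MiB value are invented for illustration:

    #include <libavformat/avformat.h>

    /* Hedged sketch: pass receive-buffer options to an .sdp input via the
       options dictionary instead of a udp:// URL query string. */
    int open_sdp_with_buffer(const char *sdp_path, AVFormatContext **ctx)
    {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "protocol_whitelist", "file,rtp,udp", 0);
        av_dict_set(&opts, "buffer_size", "10485760", 0);   /* 10 MiB, illustrative */
        av_dict_set(&opts, "reorder_queue_size", "0", 0);   /* trade reordering for latency */
        int ret = avformat_open_input(ctx, sdp_path, NULL, &opts);
        av_dict_free(&opts); /* any entries left here were not consumed */
        return ret;
    }

    The command-line equivalent would be passing these as input options before -i, if the demuxer indeed exposes them.)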

    My next idea was to launch multiple ffmpeg instances: receive the streams separately, process them, and re-stream them to another instance that would consume any kind of stream, stitch the pictures together, and send the result out. That would actually be a good setup, since I need to filter the streams individually (crop, lens-correct, rotate), and one large -filter_complex on a single ffmpeg instance might not handle all the streams. And I'm going to have 3 more of them.


    I tried to implement this setup using 3 FIFO pipes, and then using 3 internal udp://localhost:124x streams. Neither approach solved my problem, but the separate ffmpeg instances do seem able to receive three streams simultaneously. I was able to open the repeated streams through the pipes and over UDP with mplayer or ffplay, and they are completely synced and live. The stitching still fails miserably: the pipes gave me delays of a few seconds per camera, and after stitching the streams were choppy and out of sync; the udp:// route gave me a smooth video stream as a result, but one camera has a 5-second delay, and the others have 15 and 25.


    This smells like a buffer. Changing fifo_size and buffer_size doesn't seem to have much influence. I added a local-time timestamp in the re-streamer instances; that is how I found the 5-, 15-, and 25-second delays. I added a frame timestamp in the stitcher instance; those come out completely synced. So setpts=PTS-STARTPTS doesn't work either.


    So, the buffering happens between the udp:// socket and the -filter_complex input. How do I get rid of it? What do you think of my workaround? Am I doing it completely wrong?


  • ffmpeg: use vidstabtransform to overlay it over blurred background

    5 November 2023, by konewka

    I am using ffmpeg to concatenate multiple video clips taken of the same object over multiple timeframes. To make sure the videos are properly aligned (and therefore show the object in roughly the same position), I manually identify two points in the first frame of each clip, and use them to calculate the scaling and positioning necessary for proper alignment. I'm using Python for this, and it also generates the ffmpeg command for me. When the calculated scale of a video is less than 100%, parts of the frame become black. To counter that, I overlay the scaled and positioned video over a blurred version of the original video (like this effect).


    Now, additionally, some of the video clips are a bit shaky, so my flow first applies the vidstabdetect and vidstabtransform filters and uses the stabilized output as input for my final command. However, if the shaking is significant, vidstabtransform will zoom in, so I either lose some detail around the edges or a black border is created around the frame. Since the stabilized version of the video is later included in the concatenation, possibly shrunk, I would rather perform the vidstabtransform step inside my final command and feed its output directly into the overlay over the blurred version. That way, the clip would move around the frame as it is stabilized, shown over the blurred background. Is it possible to achieve this using ffmpeg, or am I trying to stretch it too far?


    As a minimal example, these are my commands:


    ffmpeg -i video1.mp4 -vf vidstabdetect=output=transform.trf -f null -

    ffmpeg -i video1.mp4 -vf vidstabtransform=input=transform.trf video1_stabilized.mp4

    # same for video2.mp4

    ffmpeg -i video1_stabilized.mp4 -i video2_stabilized.mp4 -filter_complex "
        [0:v]split=2[v0blur][v0scale];
        [v0blur]gblur=sigma=50[v0blur];  // blur the video
        [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];  // scale the video
        [v0blur][v0scale]overlay=x=100:y=200[v0];  // overlay the scaled video over the blur at a specific location
        [1:v]split=2[v1blur][v1scale];
        [v1blur]gblur=sigma=50[v1blur];
        [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
        [v1blur][v1scale]overlay=x=150:y=150[v1];
        [v0][v1]concat=n=2  // concatenate the two clips"
    -c:v libx264 -r 30 out.mp4


    So, I know I can put the vidstabtransform step into the filter_complex graph (I'll still do the detection in a separate step), but can I also use it in such a way that the stabilization happens over the blurred background, with the clip moving around the frame as it is stabilized?


    EDIT: to include vidstabtransform in the filter graph, it would then look like this:


    ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "
        [0:v]vidstabtransform=input=transform1.trf[v0stab];
        [v0stab]split=2[v0blur][v0scale];
        [v0blur]gblur=sigma=50[v0blur];
        [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
        [v0blur][v0scale]overlay=x=100:y=200[v0];
        [1:v]vidstabtransform=input=transform2.trf[v1stab];
        [v1stab]split=2[v1blur][v1scale];
        [v1blur]gblur=sigma=50[v1blur];
        [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
        [v1blur][v1scale]overlay=x=150:y=150[v1];
        [v0][v1]concat=n=2"
    -c:v libx264 -r 30 out.mp4
