
Other articles (73)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes when moving from version 0.1 of MediaSPIP to version 0.3. What's new?
    Software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In spipeo, MediaSPIP's default theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of type "news item", the default fields are: publication date (customise the publication date) (...)

On other sites (14566)

  • ffmpeg - maintaining video quality across a multi-pass workflow, where each pass decodes and re-encodes, in video editing

    27 September 2020, by QRrabbit

    I'm using the FFMPEG libraries to do some video manipulation, and due to the complexity of the filters and image overlaying, I have to run the process in multiple passes.
This means my process is as follows:
Open the original video, decode it, run -filter_complex, recompress the video in whichever format the original video was encoded.
Open the output from the first pass, run another -filter_complex, and so on.
Sometimes I have to do the above 3-4 times. My concern is that the video loses quality with every compression; an obvious sign of that is that the file shrinks in size with every pass.

    


    With the above, I have two questions:

    1. Would it make sense, after the first manipulation, instead of saving the video in its original format, to choose some lossless format, then perform my passes one after the other knowing that the quality remains the same, and on the final pass recompress one time into the format of the source? If so, what video format would you recommend? ProRes 4444? Any other formats I should consider? Any parameters I should set and carry over from encoding to encoding?
    2. With every step I carry over all extra streams of audio and other metadata.
Wouldn't it be more efficient to strip everything except the video, run my video passes over and over without needing -c:a copy and -c:s copy, then on my final run merge all streams from the original source into the output file? If yes, how would I write the ffmpeg command specifically? I have a video that has 1 video stream + 15 audio streams + some extra tmcd stream which my ffmpeg cannot read.
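    For what it's worth, the strip-then-remux idea in question 2 can be sketched as follows. This is a minimal sketch with hypothetical file names, assuming the ffmpeg CLI is available; the testsrc/sine inputs only fabricate a self-contained stand-in for the real source.

```shell
# Fabricate a small test input with one video and one audio stream
# (a stand-in for the real source; file names are hypothetical).
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=10 \
       -f lavfi -i sine=duration=1 \
       -c:v libx264 -c:a aac -shortest src.mp4

# Intermediate passes: work on the video stream only. -an/-sn drop
# audio/subtitles, so no -c:a copy / -c:s copy is needed here.
ffmpeg -y -i src.mp4 -map 0:v:0 -an -sn -c:v libx264 pass1.mp4

# Final pass: take the processed video and merge back all other streams
# (audio, subtitles, metadata) from the untouched original, without
# re-encoding them.
ffmpeg -y -i pass1.mp4 -i src.mp4 \
       -map 0:v:0 -map 1:a? -map 1:s? -map_metadata 1 \
       -c copy final.mp4
```

    The `?` suffix on the `-map` specifiers makes the mapping optional, so the same final command works whether or not the source actually carries audio or subtitle streams.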

      


    Thank you.

    


    Edit 1:

    


    If the input video's codec is dvvideo and its dimensions are 1280x1080, the video doesn't have square pixels.
I first need to resize the video, scaling it up. Then I can run my filters:

    


    pass-1: -vf scale=1920x1080 (this step is skipped if the video has a normal x-to-y ratio)
pass-2: -filter_complex: which calls my special filter that adds a proprietary watermark to the video
pass-3: -filter_complex: "[0][1]overlay=5:21:enable='between(t,2,3)+between(t,4,5)+between(t,6,8)'" (its sole objective is to insert an icon.png near where the watermark was placed in the previous step)
pass-4: -vf scale=1280x1080 (this step scales the video back, if pass-1 was executed)

    


    I could probably rewrite my 'C' filter code at some point in the future to accommodate this logic of checking for 1280x1080, as well as inserting this icon.png, and do it all in one step. But for now, I thought that using a 2-pass process for a normal video, or 4 passes if scaling is needed, with something like a lossless format as a temp-file solution (I arbitrarily chose ProRes 4444, profile 5, but am open to suggestions), should minimize the losses from recompressing the video.

    


    Steps 1 and 4 are conditional, and only applicable if:
if vcodec == 'dvvideo' and aspect_ratio < 1.2:  # the 1280x1080 ratio is about 1.16
I run steps 1->4. Otherwise only steps 2 & 3:

    Step 1:

    ffmpeg -i in.mov -vf scale=1920x1080 -map 0:v? -map 0:a? -map 0:s? -map_metadata 0 -b:v 115084915 -maxrate 115084915 -minrate 115084915 -c:v prores_ks -profile:v 5 -preset ultrafast -crf 0 -c:a copy -timecode 00:00:00.00 -c:s copy -y step2.mov

    Step 2:

    ffmpeg -i step2.mov -filter_complex " myFilter=enable='between(t,0,30)':x=15:y=25:size=95:etc-etc..." -map 0:v? -map 0:a? -map 0:s? -map_metadata 0 -b:v 115084915 -maxrate 115084915 -minrate 115084915 -c:v prores_ks -profile:v 5 -preset ultrafast -crf 0 -c:a copy -timecode 00:00:00.00 -c:s copy -y step3.mov

    Step 3:

    ffmpeg -i step3.mov -i icon.png -filter_complex "[0][1]overlay=15:20:enable='between(t,1,3.600)+between(t,4,5.500)+between(t,6,20)'" -map 0:v? -map 0:a? -map 0:s? -map_metadata 0 -b:v 115084915 -maxrate 115084915 -minrate 115084915 -c:v prores_ks -profile:v 5 -preset ultrafast -crf 0 -c:a copy -timecode 00:00:00.00 -c:s copy -y step4.mov

    Step 4:

    ffmpeg -i step4.mov -map 0:v? -vf scale=1280x1080 -map 0:a? -map 0:s? -c:v dvvideo -pix_fmt yuv422p -b:v 115084915 -maxrate 115084915 -minrate 115084915 -r 29.97 -top 1 -color_primaries bt709 -color_trc bt709 -colorspace bt709 -vtag dvh6 -map_metadata 0 -c:a copy -timecode 00:00:00.00 -c:s copy -y final-output.mov

    Since I posted my entire set of ffmpeg commands, maybe someone could recommend how to make my output match the input so that I don't lose the timecode entry (the input is on the left panel, my output on the right).

  • FFmpeg loses first two frames

    13 April 2018, by CoXier

    I want to decode an input video file and get some info, such as the frame count.

    Here is the input file link: https://drive.google.com/file/d/1LilwULBNQp-uVVzyRjAUwbgjGxaoLFs8/view?usp=sharing

    AVStream#nb_frames

    I can get the frame count without decoding the input video.

    av_register_all();

    AVFormatContext *pFormatContext = avformat_alloc_context();
    if (!pFormatContext) {
       return -1;
    }

    if (avformat_open_input(&pFormatContext, input_file, nullptr, nullptr) != 0) {
       return -1;
    }

    // GetVideoStreamIndex is a helper that returns the video stream index
    int videoStreamIndex = GetVideoStreamIndex(pFormatContext);

    int64_t frames = pFormatContext->streams[videoStreamIndex]->nb_frames;

    I got 596 frames.

    decode

    Another way is to decode the video file. Here is my code.

    av_register_all();

    AVFormatContext *pFormatContext = avformat_alloc_context();
    avformat_open_input(&pFormatContext, input_file, nullptr, nullptr);
    int videoStreamIndex = GetVideoStreamIndex(pFormatContext);

    AVCodecParameters *pCodecParameters = pFormatContext->streams[videoStreamIndex]->codecpar;
    AVCodec *pCodec = avcodec_find_decoder(pCodecParameters->codec_id);
    AVCodecContext *pCodecContext = avcodec_alloc_context3(pCodec);
    avcodec_parameters_to_context(pCodecContext, pCodecParameters);
    avcodec_open2(pCodecContext, pCodec, nullptr);

    AVPacket *pPacket = av_packet_alloc();
    AVFrame *pFrame = av_frame_alloc();


    while (av_read_frame(pFormatContext, pPacket) >= 0) {
       if (pPacket->stream_index == videoStreamIndex) {
           decode_packet(pCodecContext, pPacket, pFrame);
       }
       av_packet_unref(pPacket);
    }

    avformat_close_input(&pFormatContext); // opened with avformat_open_input, so close it with close_input
    av_packet_free(&pPacket);
    av_frame_free(&pFrame);
    avcodec_free_context(&pCodecContext);

    And the decode_packet method:

    static void decode_packet(AVCodecContext *pCodecContext, AVPacket *pPacket, AVFrame *pFrame) {
       // Supply raw packet data as input to a decoder
       // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga58bc4bf1e0ac59e27362597e467efff3
       int response = avcodec_send_packet(pCodecContext, pPacket);
       if (response != 0) {
           fprintf(stderr, "Error while sending a packet to the decoder: %s\n", av_err2str(response));
           return;
       }
       while (response >= 0) {
           response = avcodec_receive_frame(pCodecContext, pFrame);
           if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
               break;
        } else if (response < 0) {
               fprintf(stderr, "Error while receiving frame: %s\n", av_err2str(response));
               break;
           }
           fprintf(stdout, "Frame %d\n", pCodecContext->frame_number);
           av_frame_unref(pFrame);
       }
    }

    However, I got 594 frames.

    After logging some info, I found that the first two frames were lost.

    response = avcodec_receive_frame(pCodecContext, pFrame);

    The first two responses are AVERROR(EAGAIN).
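    Those two EAGAINs are the decoder's reordering delay: the decoder buffers a couple of packets (for B-frame reordering) before it can emit the first frame, and the held frames are only released when the decoder is drained at end of stream. A minimal sketch of the missing drain step, using the same variable names as the code above (an illustrative fragment, not verified against any particular FFmpeg version):

```c
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

/* After the av_read_frame() loop has consumed all packets, put the decoder
 * into draining mode by sending a NULL packet, then keep receiving frames
 * until it runs dry (AVERROR_EOF). This releases the frames that were
 * answered with EAGAIN at the start of the stream. */
static void flush_decoder(AVCodecContext *pCodecContext, AVFrame *pFrame) {
    avcodec_send_packet(pCodecContext, NULL);   /* enter draining mode */
    while (avcodec_receive_frame(pCodecContext, pFrame) >= 0) {
        fprintf(stdout, "Frame %d\n", pCodecContext->frame_number);
        av_frame_unref(pFrame);
    }
}
```

    Calling flush_decoder(pCodecContext, pFrame) right after the read loop should return the two buffered frames and bring the decoded count in line with nb_frames for this input.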

  • Why doesn't the AVFrame pts value affect the bitrate of frames?

    31 January 2021, by fsdfhdsjkhfjkds

    I'm trying to code realtime screen sharing. I noticed the H264 codec doesn't take a constant time to encode every frame, which means I can't encode exactly as many frames per second as context->time_base suggests. When we encode fewer frames per second than the time_base implies, the bitrate for that second becomes lower than what we set.


    I modified libav's example encode code, set a 1/1000 time base, and supplied it with only 10 frames. I increase frame->pts in line with the time_base, but the bitrate still stays low.


    For the results I just change context->time_base to 1/1000, 1/10, etc.


    1/1000 time base (1989 bytes in total for the second):

    encoded frame 0 (size=1169)
    encoded frame 100 (size=95)
    encoded frame 200 (size=92)
    encoded frame 300 (size=102)
    encoded frame 400 (size=90)
    encoded frame 500 (size=90)
    encoded frame 600 (size=90)
    encoded frame 700 (size=83)
    encoded frame 800 (size=95)
    encoded frame 900 (size=83)

    1/10 time base (95324 bytes in total for the second):

    encoded frame 0 (size=14187)
    encoded frame 1 (size=6053)
    encoded frame 2 (size=8530)
    encoded frame 3 (size=9277)
    encoded frame 4 (size=9508)
    encoded frame 5 (size=11163)
    encoded frame 6 (size=9685)
    encoded frame 7 (size=9346)
    encoded frame 8 (size=7662)
    encoded frame 9 (size=9913)

    Code:

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <libavcodec/avcodec.h>

    static void encode(AVCodecContext *context, AVFrame *frame, AVPacket *pkt, FILE *outfile){
        int ret = avcodec_send_frame(context, frame);
        assert(ret >= 0);
        while(ret >= 0){
            ret = avcodec_receive_packet(context, pkt);
            if(ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return;
            else if(ret < 0)
                assert(0);
            printf("encoded frame %lld (size=%d)\n", pkt->pts, pkt->size);
            fwrite(pkt->data, 1, pkt->size, outfile);
            av_packet_unref(pkt);
        }
    }

    int main(int argc, char **argv){
        if(argc <= 1){
            fprintf(stderr, "Usage: %s <output file>\n", argv[0]);
            exit(0);
        }
        av_log_set_level(AV_LOG_QUIET);
        const char *filename = argv[1];
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        assert(codec);
        AVCodecContext *context = avcodec_alloc_context3(codec);
        assert(context);
        AVFrame *frame = av_frame_alloc();
        assert(frame);
        AVPacket *pkt = av_packet_alloc();
        assert(pkt);
        context->bit_rate = 800000;
        context->width = 1280;
        context->height = 720;
        context->time_base = (AVRational){1, 1000};
        context->pix_fmt = AV_PIX_FMT_YUV420P;
        AVDictionary *dict = 0;
        assert(av_dict_set(&dict, "preset", "veryfast", 0) >= 0);
        assert(av_dict_set(&dict, "tune", "zerolatency", 0) >= 0);
        assert(avcodec_open2(context, codec, &dict) >= 0);
        FILE *f = fopen(filename, "wb");
        assert(f);
        frame->format = context->pix_fmt;
        frame->width  = context->width;
        frame->height = context->height;
        assert(av_frame_get_buffer(frame, 0) >= 0);
        for(int i = 0; i < 10; i++){
            for(int y = 0; y < context->height; y++){
                for(int x = 0; x < context->width; x++){
                    frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
                }
            }
            for(int y = 0; y < context->height / 2; y++){
                for(int x = 0; x < context->width / 2; x++){
                    frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
                    frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
                }
            }
            frame->pts = i * (context->time_base.den / 10);
            encode(context, frame, pkt, f);
        }
        fclose(f);
        avcodec_free_context(&context);
        av_frame_free(&frame);
        av_packet_free(&pkt);
        return 0;
    }

    How can we keep the right bitrate with a time_base different from the frame rate?