Media (16)

Other articles (59)

  • Writing a news item

    21 June 2013, by

    Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news-creation form.
    News-creation form: for a document of type "news item", the fields offered by default are: publication date (customize the publication date) (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, no software is ever perfect.
    If you think you have found a bug, report it using our ticket system, and help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that lead to the problem; and a link to the site/page in question.
    If you think you have fixed the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (7354)

  • FFMPEG: Create timestamp based on actual creation time

    2 July 2022, by Peder Wessel

    Desired outcome

    


    Add an overlay with a timestamp on each frame of a video, based on the video's original creation time. E.g. starting at 2022-03-26T15:51:49.000000Z and, one second later in the video, showing 2022-03-26T15:51:50.000000Z.

    


    Approach

    


    The creation_time is already stored in the file; e.g. running ffmpeg -i input.mov prints creation_time   : 2022-03-26T15:51:49.000000Z.

    


    Adding an overlay with a timestamp to the video:
ffmpeg -i input.mov -filter_complex "drawtext=text='%{pts\:gmtime\:1507046400\:%d-%m-%Y %T}': x=100 : y=100: box=1" -c:a copy output.mp4

    


    Challenge / help needed

    


    The gmtime\:1507046400 part needs to be replaced with the actual creation_time. How does one do that?
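    A sketch of one possible approach (not a verified answer; it assumes GNU date, and uses the creation_time value quoted above as a stand-in for what ffprobe would return for the file):

```shell
# creation_time as printed by `ffmpeg -i input.mov` (value from the question).
# In a script it could instead be read with:
#   ffprobe -v quiet -show_entries format_tags=creation_time \
#           -of default=noprint_wrappers=1:nokey=1 input.mov
CREATED="2022-03-26T15:51:49.000000Z"

# Convert to a Unix epoch: strip the fractional seconds, re-append the Z,
# and let GNU date (an assumption) parse the ISO-8601 timestamp.
EPOCH=$(date -u -d "${CREATED%%.*}Z" +%s)

# Build the drawtext filter with the real epoch instead of 1507046400.
FILTER="drawtext=text='%{pts\:gmtime\:${EPOCH}\:%d-%m-%Y %T}': x=100: y=100: box=1"

# The command to run then becomes:
echo ffmpeg -i input.mov -filter_complex "$FILTER" -c:a copy output.mp4
```

    Only the constant in the gmtime expansion changes; the drawtext expression itself is the one already used in the question.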

    


    Sources

    


    


  • FFMPEG: Overlay Capture Time (w/Counter)

    27 June 2022, by AdamK

    I am using the Shutter Encoder application on Windows to batch convert .MOV files; it provides the option of injecting custom FFMPEG commands for each file. The app natively offers an overlay (drawtext) of timecode starting at 00:00:00:00. I can also see that it knows and preserves the metadata time for each file, as its generated commands include -metadata creation_time="2022-06-27T16:00:30.730888500Z".

    


    I would like the timecode to start at the creation time, and was wondering how I might offset the timecode accordingly. Or is there another way of overlaying (drawtext-ing) a time counter starting at the creation time? I would also like to overlay the creation date. Thanks in advance for your advice.
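    A hedged sketch of one way to try this (assumptions: GNU date, drawtext's timecode option, a 30 fps source, and placeholder file names; the creation_time below is the one quoted above):

```shell
# creation_time as Shutter Encoder injects it (value from the question).
CREATED="2022-06-27T16:00:30.730888500Z"

# Wall-clock start for the running counter; drawtext needs the colons
# escaped, and GNU date (an assumption) passes the backslashes through.
TC=$(date -u -d "${CREATED%%.*}Z" '+%H\:%M\:%S')

# Creation date for a second, static overlay line.
DAY=$(date -u -d "${CREATED%%.*}Z" '+%Y-%m-%d')

# drawtext's timecode option seeds a running counter; rate must match
# the real frame rate (30 is a placeholder). A second drawtext draws
# the creation date above the counter.
FILTER="drawtext=timecode='${TC}\:00':rate=30:x=100:y=130:box=1,drawtext=text='${DAY}':x=100:y=100:box=1"

# The command to run then becomes:
echo ffmpeg -i input.mov -vf "$FILTER" -c:a copy output.mp4
```

    This starts the counter at the creation time of day; whether Shutter Encoder accepts the whole filter string as an injected command is an assumption to verify.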

    


  • ffmpeg api alternate transcoding and remuxing for same file

    21 June 2022, by Alexandre Novius

    Context

    


    Hello!

    


    I'm currently developing a small library that can cut an H.264 video on any frame without re-encoding (transcoding) the whole video. The idea is to re-encode only the GOP on which the cut falls, and to rewrite (remux) the other GOPs directly.

    


    The avcut project (https://github.com/anyc/avcut) does this, but it requires systematically decoding every packet and, from my tests and recent feedback in its GitHub issues, it seems not to work with recent versions of ffmpeg.

    


    As a beginner, I started from the code examples provided in the ffmpeg documentation, in particular transcoding.c and remuxing.c.

    


    Problem encountered

    


    The problem I'm having is that I can't get both transcoding and remuxing to work properly at the same time. Depending on the method I use to initialize the AVCodecParameters of the output video stream, either transcoding works or remuxing works:

    


      

    • avcodec_parameters_copy works well for remuxing

    • avcodec_parameters_from_context works well for transcoding

    If I choose avcodec_parameters_from_context, the transcoded GOPs are correctly read by my video player (parole), but the remuxed packets are not read, and ffprobe does not show/detect them.

    


    If I choose avcodec_parameters_copy, the remuxed GOPs are correctly read by my video player, but the transcoded key frames are corrupted (the B-frames and P-frames seem fine), and ffprobe -i returns an error about the NAL units of the key frames:

    


    [h264 @ 0x55ec8a079300] sps_id 32 out of range
[h264 @ 0x55ec8a079300] Invalid NAL unit size (1677727148 > 735).
[h264 @ 0x55ec8a079300] missing picture in access unit with size 744


    


    I suspect the problem is related to the extradata of the packets. From experiments on the different attributes of the output AVCodecParameters, it seems that the extradata and extradata_size attributes are what determine which of the two methods works.
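    One direction that might be worth testing (an untested, pseudocode-style sketch, not a confirmed fix; it reuses names from the question's code, and note that the encoder only fills extradata when AV_CODEC_FLAG_GLOBAL_HEADER is set, which the code currently leaves disabled): keep avcodec_parameters_copy for the stream, but overwrite the copied extradata with the opened encoder's, so the SPS/PPS matches the transcoded key frames:

```
// Untested sketch: after avcodec_parameters_copy(), swap in the
// encoder's extradata (SPS/PPS) so transcoded key frames stay valid.
ret = avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
if (ret < 0)
    return ret;

if (encoders[i]->extradata_size > 0) {
    uint8_t *ed = av_mallocz(encoders[i]->extradata_size +
                             AV_INPUT_BUFFER_PADDING_SIZE);
    if (!ed)
        return AVERROR(ENOMEM);
    memcpy(ed, encoders[i]->extradata, encoders[i]->extradata_size);
    av_freep(&out_stream->codecpar->extradata);
    out_stream->codecpar->extradata      = ed;
    out_stream->codecpar->extradata_size = encoders[i]->extradata_size;
}
```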

    


    Version

    


    ffmpeg development branch retrieved on 2022-05-17 from https://github.com/FFmpeg/FFmpeg.

    


    Compiled with --enable-libx264 --enable-gpl --enable-decoder=png --enable-encoder=png

    


    Code

    


    My code is written in C++ and is based on two classes: one defining the parameters and methods for the input file (InputContexts) and one defining them for the output file (OutputContexts). The code of these two classes is defined in the following files:

    


    


    The code most likely involved in the problem is the following:

    


      

    • stream initialization


    


    int OutputContexts::init(const char* out_filename, InputContexts* input_contexts){
    int ret;
    int stream_index = 0;

    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
    if (!ofmt_ctx) {
        fprintf(stderr, "Could not create output context\n");
        ret = AVERROR_UNKNOWN;
        return ret;
    }

    av_dump_format(ofmt_ctx, 0, out_filename, 1);
 
    encoders.resize(input_contexts->ifmt_ctx->nb_streams, nullptr);
    codecs.resize(input_contexts->ifmt_ctx->nb_streams, nullptr);
  
    // stream mapping
    for (int i = 0; i < input_contexts->ifmt_ctx->nb_streams; i++) {
        AVStream *out_stream;
        AVStream *in_stream = input_contexts->ifmt_ctx->streams[i];
        AVCodecContext* decoder_ctx = input_contexts->decoders[i];
 
        // add new stream to output context
        out_stream = avformat_new_stream(ofmt_ctx, NULL);
        if (!out_stream) {
            fprintf(stderr, "Failed allocating output stream\n");
            ret = AVERROR_UNKNOWN;
            return ret;
        }

        // from avcut blog
        av_dict_copy(&out_stream->metadata, in_stream->metadata, 0);

        out_stream->time_base = in_stream->time_base;

        // encoder
        if (decoder_ctx->codec_type == AVMEDIA_TYPE_VIDEO){
            ret = prepare_encoder_video(i, input_contexts);
            if (ret < 0){
                fprintf(stderr, "Error while preparing encoder for stream #%u\n", i);
                return ret;
            }

            // from avcut
            out_stream->sample_aspect_ratio = in_stream->sample_aspect_ratio;

            // works well for remuxing
            ret = avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
            if (ret < 0) {
                fprintf(stderr, "Failed to copy codec parameters\n");
                return ret;
            }

            // works well for transcoding
            // ret = avcodec_parameters_from_context(out_stream->codecpar, encoders[i]);
            // if (ret < 0) {
            //     av_log(NULL, AV_LOG_ERROR, "Failed to copy encoder parameters to output stream #%u\n", i);
            //     return ret;
            // }

        } else if (decoder_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
            ...
        } else {
            ...
        }

        // TODO useful ???
        // set current stream position to 0
        // out_stream->codecpar->codec_tag = 0;
    }

    // opening output file in write mode with the output context
    if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
        ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
        if (ret < 0) {
            fprintf(stderr, "Could not open output file '%s'", out_filename);
            return ret;
        }
    }
 
    // write headers from output context in output file
    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
        fprintf(stderr, "Error occurred when opening output file\n");
        return ret;
    }

    return ret;
}


    


      

    • AVCodecContext initialization for encoder


    


    int OutputContexts::prepare_encoder_video(int stream_index, InputContexts* input_contexts){
    int ret;
    const AVCodec* encoder;
    AVCodecContext* decoder_ctx = input_contexts->decoders[stream_index];
    AVCodecContext* encoder_ctx;

    if (video_index >= 0){
        fprintf(stderr, "Impossible to mark stream #%u as video, stream #%u is already registered as video stream.\n", 
                stream_index, video_index);
        return -1; //TODO change this value for correct error code
    }
    video_index = stream_index;

    if(decoder_ctx->codec_id == AV_CODEC_ID_H264){
        encoder = avcodec_find_encoder_by_name("libx264");
        if (!encoder) {
            av_log(NULL, AV_LOG_FATAL, "Encoder libx264 not found\n");
            return AVERROR_INVALIDDATA;
        }
        fmt::print("Encoder libx264 will be used for stream {}.\n", stream_index);
    } else {
        std::string s = fmt::format("No video encoder found for the given codec_id: {}\n", avcodec_get_name(decoder_ctx->codec_id));
        av_log(NULL, AV_LOG_FATAL, "%s", s.c_str());
        return AVERROR_INVALIDDATA;
    }
    
    encoder_ctx = avcodec_alloc_context3(encoder);
    if (!encoder_ctx) {
        av_log(NULL, AV_LOG_FATAL, "Failed to allocate the encoder context\n");
        return AVERROR(ENOMEM);
    }

    // from avcut
    encoder_ctx->time_base = decoder_ctx->time_base;
    encoder_ctx->ticks_per_frame = decoder_ctx->ticks_per_frame;
    encoder_ctx->delay = decoder_ctx->delay;
    encoder_ctx->width = decoder_ctx->width;
    encoder_ctx->height = decoder_ctx->height;
    encoder_ctx->pix_fmt = decoder_ctx->pix_fmt;
    encoder_ctx->sample_aspect_ratio = decoder_ctx->sample_aspect_ratio;
    encoder_ctx->color_primaries = decoder_ctx->color_primaries;
    encoder_ctx->color_trc = decoder_ctx->color_trc;
    encoder_ctx->colorspace = decoder_ctx->colorspace;
    encoder_ctx->color_range = decoder_ctx->color_range;
    encoder_ctx->chroma_sample_location = decoder_ctx->chroma_sample_location;
    encoder_ctx->profile = decoder_ctx->profile;
    encoder_ctx->level = decoder_ctx->level;

    encoder_ctx->thread_count = 1; // spawning more threads causes avcodec_close to free threads multiple times
    encoder_ctx->codec_tag = 0;
    
    // correct values ???
    encoder_ctx->qmin = 16;
    encoder_ctx->qmax = 26;
    encoder_ctx->max_qdiff = 4;
    // end from avcut

    // according to avcut, should not be set
    // if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER){
    //     encoder_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    // }

    ret = avcodec_open2(encoder_ctx, encoder, NULL);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", stream_index);
        return ret;
    }
    
    codecs[stream_index] = encoder;
    encoders[stream_index] = encoder_ctx;

    return ret;
}


    


    Example

    


    To illustrate my problem, I provide here a test program, using the two classes, that alternates between transcoding and remuxing at each key frame encountered in the file.

    


    


    To compile the code:

    


    g++ -o trans_remux trans_remux.cpp contexts.cpp -D__STDC_CONSTANT_MACROS `pkg-config --libs libavfilter` -lfmt -g


    


    Currently the code uses avcodec_parameters_copy (contexts.cpp:333), so it works well for remuxing. If you want to test the version with avcodec_parameters_from_context, please comment out lines 333 to 337 of contexts.cpp, uncomment lines 340 to 344, and recompile.