Other articles (57)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path an audio or video document takes through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behaviour are executed: the technical information about the file's audio and video streams is retrieved; a thumbnail is generated by extracting a (...)

On other sites (6709)

  • Keep FFMPEG encoding session running

    8 August 2016, by user3579130

    I am trying to encode an external HLS (m3u8) link into MPEG-TS over UDP via ffmpeg.
    (ffmpeg command: ffmpeg -re -i http://domain.com/index400.m3u8 -vcodec copy -acodec copy -f mpegts udp://127.0.0.1:10000?pkt_size=1316)

    Currently I am executing the command directly inside a terminal which I keep open on my CentOS server. However, after some (variable) amount of time, I get the following error:

    Failed to resolve hostname domain.com: Temporary failure in name resolution
    [hls,applehttp @ 0x349b420] Failed to reload playlist 0

    My question is: how can I run this command in a bash script, or via upstart, or similar, so that whenever it unexpectedly stops, it automatically restarts (a sketch follows below)?
    I prefer not to use third parties like monit, and please be explicit in writing the script, with annotations for newbies (I am not well experienced in this).

    Thank you for your time and support.
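
    A minimal sketch of such a wrapper, assuming bash and reusing the exact command from the question (domain.com and the UDP target are the question's placeholders):

    #!/bin/bash
    # restart_ffmpeg.sh: relaunch ffmpeg whenever it exits,
    # e.g. after the name-resolution failure above.
    while true; do
        ffmpeg -re -i http://domain.com/index400.m3u8 \
               -vcodec copy -acodec copy \
               -f mpegts "udp://127.0.0.1:10000?pkt_size=1316"
        echo "ffmpeg exited with status $?, restarting in 2s..." >&2
        sleep 2   # short pause so a persistent failure does not busy-loop
    done

    Started with nohup ./restart_ffmpeg.sh & (or wrapped in a systemd/upstart service), it survives a closed terminal and restarts the encode automatically.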

  • mimic : do not release the newly obsolete reference at the end of decoding

    25 July 2016, by Anton Khirnov
    mimic : do not release the newly obsolete reference at the end of decoding
    

    The reference frames are used in update_thread_context(), so modifying
    them after finish_setup() is a race. The frame in question will be
    released during the next decode call.

    CC: libav-stable@libav.org

    • [DBH] libavcodec/mimic.c

  • C++ ffmpeg Queue input is backward in time while encoding

    15 August 2022, by Turgut

    I've made a program that takes a video as input, decodes its video and audio data, then edits the video data and encodes both video and audio (the audio remains unedited). I've managed to successfully get the edited video as output so far, but when I add in the audio, I get an error that says Queue input is backward in time. I used the muxing example from FFmpeg's doc/examples for encoding; here is what it looks like (I'm not including the video encoding parts since they're working just fine):

    


typedef struct {
    OutputStream video_st, audio_st;
    const AVOutputFormat *fmt;
    AVFormatContext *oc;
    int have_video, have_audio, encode_video, encode_audio;
    std::string name;
} encode_info;

encode_info enc_inf;

void video_encoder::open_audio(AVFormatContext *oc, const AVCodec *codec,
                       OutputStream *ost, AVDictionary *opt_arg)
{
    AVCodecContext *c;
    int nb_samples;
    int ret;
    AVDictionary *opt = NULL;

    c = ost->enc;

    /* open it */
    av_dict_copy(&opt, opt_arg, 0);
    ret = avcodec_open2(c, codec, &opt);
    av_dict_free(&opt);
    if (ret < 0) {
        fprintf(stderr, "Could not open audio codec: %s\n", ret);
        exit(1);
    }

    /* init signal generator */
    ost->t     = 0;
    ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
    /* increment frequency by 110 Hz per second */
    ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

    if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE)
        nb_samples = 10000;
    else
        nb_samples = c->frame_size;

    ost->frame     = alloc_audio_frame(c->sample_fmt, c->channel_layout,
                                       c->sample_rate, nb_samples);
    ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
                                       c->sample_rate, nb_samples);

    /* copy the stream parameters to the muxer */
    ret = avcodec_parameters_from_context(ost->st->codecpar, c);
    if (ret < 0) {
        fprintf(stderr, "Could not copy the stream parameters\n");
        exit(1);
    }

    /* create resampler context */
    ost->swr_ctx = swr_alloc();
    if (!ost->swr_ctx) {
        fprintf(stderr, "Could not allocate resampler context\n");
        exit(1);
    }

    /* set options */
    av_opt_set_int       (ost->swr_ctx, "in_channel_count",   c->channels,       0);
    av_opt_set_int       (ost->swr_ctx, "in_sample_rate",     c->sample_rate,    0);
    av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt",      AV_SAMPLE_FMT_S16, 0);
    av_opt_set_int       (ost->swr_ctx, "out_channel_count",  c->channels,       0);
    av_opt_set_int       (ost->swr_ctx, "out_sample_rate",    c->sample_rate,    0);
    av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt",     c->sample_fmt,     0);

    /* initialize the resampling context */
    if ((ret = swr_init(ost->swr_ctx)) < 0) {
        fprintf(stderr, "Failed to initialize the resampling context\n");
        exit(1);
    }
}
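
// Editorial note: open_audio() above is lifted from FFmpeg's muxing example
// and targets the pre-5.1 channel API (c->channels, c->channel_layout,
// "in_channel_count"/"out_channel_count"). On FFmpeg 5.1+ these were replaced
// by the AVChannelLayout API (c->ch_layout, swr_alloc_set_opts2()), so the
// code would need adapting there.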


void video_encoder::encode_one_frame()
{
    if (enc_inf.encode_video || enc_inf.encode_audio) {
        /* select the stream to encode */
        if (enc_inf.encode_video &&
            (!enc_inf.encode_audio || av_compare_ts(enc_inf.video_st.next_pts, enc_inf.video_st.enc->time_base,
                                            enc_inf.audio_st.next_pts, enc_inf.audio_st.enc->time_base) <= 0)) {
            enc_inf.encode_video = !write_video_frame(enc_inf.oc, &enc_inf.video_st);
        } else {
            std::cout << "Encoding audio" << std::endl;
            enc_inf.encode_audio = !write_audio_frame(enc_inf.oc, &enc_inf.audio_st);
        }
    }
}

int video_encoder::write_audio_frame(AVFormatContext *oc, OutputStream *ost)
{
    AVCodecContext *c;
    AVFrame *frame;
    int ret;
    int dst_nb_samples;

    c = ost->enc;

    frame = audio_frame; // was get_audio_frame(ost) in the muxing example

    if (frame) {
        /* convert samples from native format to destination codec format, using the resampler */
        /* compute destination number of samples */
        dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
                                        c->sample_rate, c->sample_rate, AV_ROUND_UP);
        //av_assert0(dst_nb_samples == frame->nb_samples);

        /* when we pass a frame to the encoder, it may keep a reference to it
         * internally;
         * make sure we do not overwrite it here
         */
        ret = av_frame_make_writable(ost->frame);
        if (ret < 0)
            exit(1);

        /* convert to destination format */
        ret = swr_convert(ost->swr_ctx,
                          ost->frame->data, dst_nb_samples,
                          (const uint8_t **)frame->data, frame->nb_samples);
        if (ret < 0) {
            fprintf(stderr, "Error while converting\n");
            exit(1);
        }
        frame = ost->frame;

        frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);
        ost->samples_count += dst_nb_samples;
    }

    return write_frame(oc, c, ost->st, frame, ost->tmp_pkt);
}
void video_encoder::set_audio_frame(AVFrame* frame)
{
    audio_frame = frame;
}


    


    Normally the muxing example above uses get_audio_frame(ost) for frame inside write_audio_frame to create a dummy audio frame, but I want to use the audio that I have decoded from my input video. After decoding an audio frame I pass it to the encoder using set_audio_frame so my encoder can use it. Then I removed get_audio_frame(ost) and simply replaced it with audio_frame. Here is what my main loop looks like (a note on the timestamp bookkeeping follows the snippet):

    


...
open_audio(args);
...
while (current_second < output_duration)
{
    ...
    video_reader_read_frame(buffer, &pts, start_ts);
    edit_decoded_video(buffer);
    ...
    if (frame_type == 2)
        encoder->set_audio_frame(audio_test->get_frame());
    encoder->encode_one_frame();
}
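
    Worth noting when comparing with the original example: get_audio_frame(ost) did more than produce samples; it also set frame->pts = ost->next_pts and advanced ost->next_pts by frame->nb_samples, and encode_one_frame() relies on next_pts in av_compare_ts() to interleave the two streams. A minimal sketch of keeping that bookkeeping in the set_audio_frame() path (field names taken from the question's structs; this is an assumed fix, not a confirmed one):

    void video_encoder::set_audio_frame(AVFrame* frame)
    {
        audio_frame = frame;
        if (frame) {
            // Stamp the frame in the encoder's 1/sample_rate time base and
            // advance next_pts so av_compare_ts() in encode_one_frame() keeps
            // the audio/video interleaving meaningful.
            frame->pts = enc_inf.audio_st.next_pts;
            enc_inf.audio_st.next_pts += frame->nb_samples;
        }
    }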


    


    And here is what my decoder looks like:

    


    int video_decode::decode_audio(AVCodecContext *dec, const AVPacket *pkt)
    {
        auto& frame = state.av_frame;
        int ret = 0;

        // submit the packet to the decoder
        ret = avcodec_send_packet(dec, pkt);
        if (ret < 0) {
            std::cout << "Error submitting a packet for decoding" << std::endl;
            return ret;
        }

        // drain all the available frames from the decoder
        while (ret >= 0) {
            ret = avcodec_receive_frame(dec, frame);
            if (ret < 0) {
                // these two return values are special and mean there is no
                // output frame available, but there were no errors during decoding
                if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
                    return 0;

                std::cout << "Decode err" << std::endl;
                return ret;
            }
        }

        return 0;
    }
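
    // Note (editorial sketch): avcodec_receive_frame() can return more than
    // one frame per packet, and the loop above overwrites state.av_frame on
    // every iteration, so only a packet's last frame survives for the encoder.
    // One assumed alternative is to hand each frame over as it arrives, e.g.:
    //
    //     while (ret >= 0) {
    //         ret = avcodec_receive_frame(dec, frame);
    //         if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
    //             return 0;
    //         if (ret < 0)
    //             return ret;
    //         encoder->set_audio_frame(frame);  // consume before next receive
    //         encoder->encode_one_frame();
    //     }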

    int video_decode::video_reader_read_frame(uint8_t* frame_buffer, int64_t* pts, double seg_start) 
    {
        // Unpack members of state
        auto& width = state.width;
        auto& height = state.height;
        auto& av_format_ctx = state.av_format_ctx;
        auto& av_codec_ctx = state.av_codec_ctx;
        auto& audio_codec_ctx = state.audio_codec_ctx;
        auto& video_stream_index = state.video_stream_index;
        auto& audio_stream_index = state.audio_stream_index;
        auto& av_frame = state.av_frame;
        auto& av_packet = state.av_packet;
        auto& sws_scaler_ctx = state.sws_scaler_ctx;

        // Decode one frame
        //double pt_in_seconds = (*pts) * (double)state.time_base.num / (double)state.time_base.den;
        if (!this->skipped) {
            this->skipped = true;
            *pts = (int64_t)(seg_start * (double)state.time_base.den / (double)state.time_base.num);
            video_reader_seek_frame(*pts);
        }

        int response;
        while (av_read_frame(av_format_ctx, av_packet) >= 0) {
            // Audio decode

            if (av_packet->stream_index == video_stream_index){
                std::cout << "Decoded VIDEO" << std::endl;

                response = avcodec_send_packet(av_codec_ctx, av_packet);
                if (response < 0) {
                    printf("Failed to decode packet: %s\n", av_make_error(response));
                    return false;
                }


                response = avcodec_receive_frame(av_codec_ctx, av_frame);
                if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
                    av_packet_unref(av_packet);
                    continue;
                } else if (response < 0) {
                    printf("Failed to decode packet: %s\n", av_make_error(response));
                    return false;
                }


                *pts = av_frame->pts;
                // Set up sws scaler
                if (!sws_scaler_ctx) {
                    auto source_pix_fmt = correct_for_deprecated_pixel_format(av_codec_ctx->pix_fmt);
                    sws_scaler_ctx = sws_getContext(width, height, source_pix_fmt,
                                                    width, height, AV_PIX_FMT_RGB0,
                                                    SWS_BICUBIC, NULL, NULL, NULL);
                }
                if (!sws_scaler_ctx) {
                    printf("Couldn't initialize sw scaler\n");
                    return false;
                }

                uint8_t* dest[4] = { frame_buffer, NULL, NULL, NULL };
                int dest_linesize[4] = { width * 4, 0, 0, 0 };
                sws_scale(sws_scaler_ctx, av_frame->data, av_frame->linesize, 0, height, dest, dest_linesize);
                av_packet_unref(av_packet);
                return 1;
            }
            if (av_packet->stream_index == audio_stream_index){
                std::cout << "Decoded AUDIO" << std::endl;
                decode_audio(audio_codec_ctx, av_packet);
                av_packet_unref(av_packet);
                return 2;
            } else {
                av_packet_unref(av_packet);
                continue;
            }
        }


        return true;
    }
void init()
{
...
if (open_codec_context(&audio_stream_index, &audio_dec_ctx, av_format_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
      audio_stream = av_format_ctx->streams[audio_stream_index];
}
...
}


    


    My decoder uses the same format context, packet, and frame for video and audio decoding, with separate streams and codec contexts.

    


    Why am I getting the Queue input is backward in time error, and how can I properly encode the audio? As far as I can tell, the audio is decoded just fine, and there are no problems with the video encoding/decoding whatsoever.
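
    For reference, that warning is logged by libavformat's packet-interleaving queue when a packet's dts is lower than its predecessor's on the same stream, so the first thing to check is the pts/dts sequence each stream feeds into av_interleaved_write_frame(). A hedged diagnostic sketch, using the parameter names of the question's write_frame(oc, c, st, frame, pkt) call (the two-stream array size and the exact placement are assumptions):

    // Just before muxing inside write_frame(): report the first packet whose
    // dts moves backward on its stream. Requires <cinttypes> for PRId64.
    static int64_t last_dts[2] = { INT64_MIN, INT64_MIN }; // one slot per stream
    av_packet_rescale_ts(pkt, c->time_base, st->time_base); // encoder tb -> stream tb
    pkt->stream_index = st->index;
    if (pkt->dts != AV_NOPTS_VALUE && pkt->dts <= last_dts[st->index])
        fprintf(stderr, "stream %d dts went backward: %" PRId64 " after %" PRId64 "\n",
                st->index, pkt->dts, last_dts[st->index]);
    last_dts[st->index] = pkt->dts;
    ret = av_interleaved_write_frame(oc, pkt);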