Advanced search

Media (0)

Word: - Tags - / editorial object

No media matching your criteria is available on the site.

Other articles (71)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To enable it, activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is intended to manage sites that publish documents of all types.
    It creates "media" items: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune the configuration of these menus.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: The main menu; Identifier: barrenav; This menu is generally placed at the top of the page, after the header block, and its identifier makes it compatible with Zpip-based templates; (...)

On other sites (9191)

  • Installing ffmpeg in ubuntu [closed]

    9 October 2023, by Sovan Bhakta

    Recently I tried to install FFmpeg on my Ubuntu system and received an error about broken packages.

    


    I already tried

    


    sudo apt update && sudo apt install ffmpeg

    


    sudo apt update && sudo apt upgrade

    


    

-hp:~$ sudo apt update && sudo apt install ffmpeg

Hit:1 http://security.ubuntu.com/ubuntu jammy-security InRelease

Hit:2 https://esm.ubuntu.com/infra/ubuntu jammy-infra-security InRelease

Hit:3 https://esm.ubuntu.com/infra/ubuntu jammy-infra-updates InRelease

Hit:4 http://archive.ubuntu.com/ubuntu jammy InRelease

Hit:5 http://archive.ubuntu.com/ubuntu jammy-updates InRelease

Hit:6 http://archive.ubuntu.com/ubuntu jammy-security InRelease

Reading package lists... Done

Building dependency tree... Done

Reading state information... Done

All packages are up to date.

Reading package lists... Done

Building dependency tree... Done

Reading state information... Done

Some packages could not be installed. This may mean that you have

requested an impossible situation or if you are using the unstable

distribution that some required packages have not yet been created

or been moved out of Incoming.

The following information may help to resolve the situation:

The following packages have unmet dependencies:

 ffmpeg : Depends: libavcodec58 (= 7:4.4.2-0ubuntu0.22.04.1)

          Depends: libavdevice58 (= 7:4.4.2-0ubuntu0.22.04.1) but it is not going to be installed

          Depends: libavfilter7 (= 7:4.4.2-0ubuntu0.22.04.1)

          Depends: libavformat58 (= 7:4.4.2-0ubuntu0.22.04.1)

          Depends: libavutil56 (= 7:4.4.2-0ubuntu0.22.04.1) but 7:4.4.2-0ubuntu0.22.04.1+esm1 is to be installed

          Depends: libpostproc55 (= 7:4.4.2-0ubuntu0.22.04.1) but 7:4.4.2-0ubuntu0.22.04.1+esm1 is to be installed

          Depends: libswresample3 (= 7:4.4.2-0ubuntu0.22.04.1) but 7:4.4.2-0ubuntu0.22.04.1+esm1 is to be installed

          Depends: libswscale5 (= 7:4.4.2-0ubuntu0.22.04.1) but 7:4.4.2-0ubuntu0.22.04.1+esm1 is to be installed

E: Unable to correct problems, you have held broken packages.


    


    I am unable to find the cause, so please help me resolve it.

    


  • C++ ffmpeg Queue input is backward in time while encoding

    15 August 2022, by Turgut

    I've made a program that takes a video as an input, decodes its video and audio data, then edits the video data and encodes both video and audio (the audio remains unedited). I've managed to successfully get the edited video as an output so far, but when I add in the audio, I get an error that says Queue input is backward in time. I used the muxing example from FFmpeg's doc/examples for encoding; here is what it looks like (I'm not including the video encoding parts since they are working just fine):

    


typedef struct {
    OutputStream video_st, audio_st;
    const AVOutputFormat *fmt;
    AVFormatContext *oc;
    int have_video, have_audio, encode_video, encode_audio;
    std::string name;
} encode_info;

encode_info enc_inf;

void video_encoder::open_audio(AVFormatContext *oc, const AVCodec *codec,
                       OutputStream *ost, AVDictionary *opt_arg)
{
    AVCodecContext *c;
    int nb_samples;
    int ret;
    AVDictionary *opt = NULL;

    c = ost->enc;

    /* open it */
    av_dict_copy(&opt, opt_arg, 0);
    ret = avcodec_open2(c, codec, &opt);
    av_dict_free(&opt);
    if (ret < 0) {
        fprintf(stderr, "Could not open audio codec: error %d\n", ret);
        exit(1);
    }

    /* init signal generator */
    ost->t     = 0;
    ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
    /* increment frequency by 110 Hz per second */
    ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

    if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE)
        nb_samples = 10000;
    else
        nb_samples = c->frame_size;

    ost->frame     = alloc_audio_frame(c->sample_fmt, c->channel_layout,
                                       c->sample_rate, nb_samples);
    ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
                                       c->sample_rate, nb_samples);

    /* copy the stream parameters to the muxer */
    ret = avcodec_parameters_from_context(ost->st->codecpar, c);
    if (ret < 0) {
        fprintf(stderr, "Could not copy the stream parameters\n");
        exit(1);
    }

    /* create resampler context */
    ost->swr_ctx = swr_alloc();
    if (!ost->swr_ctx) {
        fprintf(stderr, "Could not allocate resampler context\n");
        exit(1);
    }

    /* set options */
    av_opt_set_int       (ost->swr_ctx, "in_channel_count",   c->channels,       0);
    av_opt_set_int       (ost->swr_ctx, "in_sample_rate",     c->sample_rate,    0);
    av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt",      AV_SAMPLE_FMT_S16, 0);
    av_opt_set_int       (ost->swr_ctx, "out_channel_count",  c->channels,       0);
    av_opt_set_int       (ost->swr_ctx, "out_sample_rate",    c->sample_rate,    0);
    av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt",     c->sample_fmt,     0);

    /* initialize the resampling context */
    if ((ret = swr_init(ost->swr_ctx)) < 0) {
        fprintf(stderr, "Failed to initialize the resampling context\n");
        exit(1);
    }
}


void video_encoder::encode_one_frame()
{
    if (enc_inf.encode_video || enc_inf.encode_audio) {
        /* select the stream to encode */
       if (enc_inf.encode_video &&
            (!enc_inf.encode_audio || av_compare_ts(enc_inf.video_st.next_pts, enc_inf.video_st.enc->time_base,
                                            enc_inf.audio_st.next_pts, enc_inf.audio_st.enc->time_base) <= 0)) {
            enc_inf.encode_video = !write_video_frame(enc_inf.oc, &enc_inf.video_st);
        } else {
            std::cout << "Encoding audio" << std::endl;
            enc_inf.encode_audio = !write_audio_frame(enc_inf.oc, &enc_inf.audio_st);
        }
    }
}

int video_encoder::write_audio_frame(AVFormatContext *oc, OutputStream *ost)
{
    AVCodecContext *c;
    AVFrame *frame;
    int ret;
    int dst_nb_samples;

    c = ost->enc;

    frame = audio_frame;//get_audio_frame(ost);

    if (frame) {
        /* convert samples from native format to destination codec format, using the resampler */
        /* compute destination number of samples */
        dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
                                        c->sample_rate, c->sample_rate, AV_ROUND_UP);
        //av_assert0(dst_nb_samples == frame->nb_samples);

        /* when we pass a frame to the encoder, it may keep a reference to it
         * internally;
         * make sure we do not overwrite it here
         */
        ret = av_frame_make_writable(ost->frame);
        if (ret < 0)
            exit(1);

        /* convert to destination format */
        ret = swr_convert(ost->swr_ctx,
                          ost->frame->data, dst_nb_samples,
                          (const uint8_t **)frame->data, frame->nb_samples);
        if (ret < 0) {
            fprintf(stderr, "Error while converting\n");
            exit(1);
        }
        frame = ost->frame;

        frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);
        ost->samples_count += dst_nb_samples;
    }

    return write_frame(oc, c, ost->st, frame, ost->tmp_pkt);
}
void video_encoder::set_audio_frame(AVFrame* frame)
{
    audio_frame = frame;
}
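
    The write_frame helper called at the end of write_audio_frame above is not shown. For reference, in FFmpeg's muxing example it looks roughly like the sketch below (reconstructed from memory of doc/examples/muxing.c, so details may differ); av_interleaved_write_frame, called here, is where the Queue input is backward in time warning is emitted when a packet arrives with timestamps older than those already queued.

/* Sketch of write_frame, modeled on doc/examples/muxing.c (from memory; details may differ). */
static int write_frame(AVFormatContext *fmt_ctx, AVCodecContext *c,
                       AVStream *st, AVFrame *frame, AVPacket *pkt)
{
    int ret;

    /* send the frame to the encoder */
    ret = avcodec_send_frame(c, frame);
    if (ret < 0) {
        fprintf(stderr, "Error sending a frame to the encoder\n");
        exit(1);
    }

    while (ret >= 0) {
        ret = avcodec_receive_packet(c, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;
        else if (ret < 0) {
            fprintf(stderr, "Error encoding a frame\n");
            exit(1);
        }

        /* rescale packet timestamps from the codec time base to the stream time base */
        av_packet_rescale_ts(pkt, c->time_base, st->time_base);
        pkt->stream_index = st->index;

        /* interleaves and writes the packet; prints "Queue input is backward
           in time" when timestamps are not monotonically increasing */
        ret = av_interleaved_write_frame(fmt_ctx, pkt);
        if (ret < 0) {
            fprintf(stderr, "Error while writing output packet\n");
            exit(1);
        }
    }

    return ret == AVERROR_EOF ? 1 : 0;
}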


    


    Normally the muxing example above uses get_audio_frame(ost) for frame inside write_audio_frame to create a dummy audio frame, but I want to use the audio that I have decoded from my input video. After decoding an audio frame I pass it to the encoder using set_audio_frame so my encoder can use it; I then removed get_audio_frame(ost) and simply replaced it with audio_frame (the original get_audio_frame is sketched after the loop below, for comparison). So here is what my main loop looks like:

    


    ...
open_audio(args);
...
while(current_second < ouput_duration)
{
...
   video_reader_read_frame(buffer, &pts, start_ts);
   edit_decoded_video(buffer);
   ...
   if(frame_type == 2)
      encoder->set_audio_frame(audio_test->get_frame());
   encoder->encode_one_frame();
}
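
    For comparison, here is roughly what the removed get_audio_frame helper does in the muxing example (a sketch reconstructed from memory of doc/examples/muxing.c; STREAM_DURATION is a constant defined in that example, and details may differ). It synthesizes a dummy sine-wave frame and stamps it with ost->next_pts, so successive audio frames always carry increasing timestamps:

/* Sketch of get_audio_frame from doc/examples/muxing.c (from memory; details may differ). */
static AVFrame *get_audio_frame(OutputStream *ost)
{
    AVFrame *frame = ost->tmp_frame;
    int j, i, v;
    int16_t *q = (int16_t*)frame->data[0];

    /* stop generating once STREAM_DURATION seconds have been produced */
    if (av_compare_ts(ost->next_pts, ost->enc->time_base,
                      STREAM_DURATION, (AVRational){ 1, 1 }) > 0)
        return NULL;

    /* fill the frame with a sine wave */
    for (j = 0; j < frame->nb_samples; j++) {
        v = (int)(sin(ost->t) * 10000);
        for (i = 0; i < ost->enc->channels; i++)
            *q++ = v;
        ost->t     += ost->tincr;
        ost->tincr += ost->tincr2;
    }

    /* monotonically increasing presentation timestamp */
    frame->pts = ost->next_pts;
    ost->next_pts += frame->nb_samples;

    return frame;
}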


    


    And here is what my decoder looks like :

    


       int video_decode::decode_audio(AVCodecContext *dec, const AVPacket *pkt)
    {
        auto& frame= state.av_frame;
        int ret = 0;
    
        // submit the packet to the decoder
        ret = avcodec_send_packet(dec, pkt);
        if (ret < 0) {
            std::cout << "Error submitting a packet for decoding" << std::endl;
            return ret;
        }
    
        // get all the available frames from the decoder
        while (ret >= 0) {
            ret = avcodec_receive_frame(dec, frame);
            if (ret < 0) {
                // those two return values are special and mean there is no output
                // frame available, but there were no errors during decoding
                if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
                    return 0;
    
                std::cout << "Decode err" << std::endl;
                return ret;
            }
    
            if (ret < 0)
                return ret;
        }
    
        return 0;
    }

    int video_decode::video_reader_read_frame(uint8_t* frame_buffer, int64_t* pts, double seg_start) 
    {
        // Unpack members of state
        auto& width = state.width;
        auto& height = state.height;
        auto& av_format_ctx = state.av_format_ctx;
        auto& av_codec_ctx = state.av_codec_ctx;
        auto& audio_codec_ctx = state.audio_codec_ctx;
        auto& video_stream_index = state.video_stream_index;
        auto& audio_stream_index = state.audio_stream_index;
        auto& av_frame = state.av_frame;
        auto& av_packet = state.av_packet;
        auto& sws_scaler_ctx = state.sws_scaler_ctx;

        // Decode one frame
        //double pt_in_seconds = (*pts) * (double)state.time_base.num / (double)state.time_base.den;
        if (!this->skipped) {
            this->skipped = true;
            *pts = (int64_t)(seg_start * (double)state.time_base.den / (double)state.time_base.num);
            video_reader_seek_frame(*pts);
        }

        int response;
        while (av_read_frame(av_format_ctx, av_packet) >= 0) {
            // Audio decode

            if (av_packet->stream_index == video_stream_index){
                std::cout << "Decoded VIDEO" << std::endl;

                response = avcodec_send_packet(av_codec_ctx, av_packet);
                if (response < 0) {
                    printf("Failed to decode packet: %s\n", av_make_error(response));
                    return false;
                }


                response = avcodec_receive_frame(av_codec_ctx, av_frame);
                if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
                    av_packet_unref(av_packet);
                    continue;
                } else if (response < 0) {
                    printf("Failed to decode packet: %s\n", av_make_error(response));
                    return false;
                }


                *pts = av_frame->pts;
                // Set up sws scaler
                if (!sws_scaler_ctx) {
                    auto source_pix_fmt = correct_for_deprecated_pixel_format(av_codec_ctx->pix_fmt);
                    sws_scaler_ctx = sws_getContext(width, height, source_pix_fmt,
                                                    width, height, AV_PIX_FMT_RGB0,
                                                    SWS_BICUBIC, NULL, NULL, NULL);
                }
                if (!sws_scaler_ctx) {
                    printf("Couldn't initialize sw scaler\n");
                    return false;
                }

                uint8_t* dest[4] = { frame_buffer, NULL, NULL, NULL };
                int dest_linesize[4] = { width * 4, 0, 0, 0 };
                sws_scale(sws_scaler_ctx, av_frame->data, av_frame->linesize, 0, height, dest, dest_linesize);
                av_packet_unref(av_packet);
                return 1;
            }
            if (av_packet->stream_index == audio_stream_index){
                std::cout << "Decoded AUDIO" << std::endl;
                decode_audio(audio_codec_ctx, av_packet);
                av_packet_unref(av_packet);
                return 2;
            }else {
                av_packet_unref(av_packet);
                continue;
            }

            av_packet_unref(av_packet);
            break;
        }


        return true;
    }
void init()
{
...
if (open_codec_context(&audio_stream_index, &audio_dec_ctx, av_format_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
      audio_stream = av_format_ctx->streams[audio_stream_index];
}
...
}


    


    My decoder uses the same format context, packet and frame for both video and audio decoding, and a separate stream and codec context for each.
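
    The open_codec_context call used in init() is not shown either; below is a sketch of what it typically looks like, modeled on FFmpeg's doc/examples/demuxing_decoding.c (reconstructed from memory, details may vary). It selects the best stream of the requested type and opens a separate decoder context for it, matching the setup described above:

/* Sketch of open_codec_context, modeled on doc/examples/demuxing_decoding.c
   (from memory; details may vary). */
static int open_codec_context(int *stream_idx, AVCodecContext **dec_ctx,
                              AVFormatContext *fmt_ctx, enum AVMediaType type)
{
    int ret, stream_index;
    AVStream *st;
    const AVCodec *dec = NULL;

    /* pick the best stream of the requested media type */
    ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not find %s stream\n", av_get_media_type_string(type));
        return ret;
    }
    stream_index = ret;
    st = fmt_ctx->streams[stream_index];

    /* find a decoder for the stream */
    dec = avcodec_find_decoder(st->codecpar->codec_id);
    if (!dec) {
        fprintf(stderr, "Failed to find %s codec\n", av_get_media_type_string(type));
        return AVERROR(EINVAL);
    }

    /* allocate a dedicated codec context for this stream */
    *dec_ctx = avcodec_alloc_context3(dec);
    if (!*dec_ctx)
        return AVERROR(ENOMEM);

    /* copy stream parameters into the codec context and open the decoder */
    ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar);
    if (ret < 0)
        return ret;
    ret = avcodec_open2(*dec_ctx, dec, NULL);
    if (ret < 0)
        return ret;

    *stream_idx = stream_index;
    return 0;
}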

    


    Why am I getting the Queue input is backward in time error, and how can I properly encode the audio? I'm not sure, but from the looks of it the audio is decoded just fine. And again, there are no problems whatsoever with video encoding/decoding.

    


  • Restream IP camera Feed [closed]

    4 January 2024, by Reenath Reddy Thummala

    I have several IP cameras connected to a shared network. I want to use each camera's stream across multiple microservices, but the network struggles to handle the bandwidth load. Unfortunately, I don't control the network and can't increase its bandwidth. I have tried building a restreaming service with GStreamer, Live555, FFmpeg, and OpenCV, but stability remains an issue. Are there any paid services that can take the source feed as input and provide scalable restream feed URLs?