
Media (0)


No media matching your criteria is available on this site.

Other articles (98)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, a preconfiguration is automatically put in place by MediaSPIP init, allowing the new feature to work right away. It is therefore not necessary to go through a configuration step for this.

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (17040)

  • Sound playback using FFmpeg and libsoundio in C++

    11 July 2021, by ldall03

    I am trying to make a video player desktop application in C++ using primarily FFmpeg and Qt6. As of now, I can decode and play video frames correctly at the right speed; that is not a problem. I am now trying to get audio playback working, which is much harder than I expected it to be.

    I am using libsoundio as my audio library, but its documentation is really poor and there are not many examples or tutorials for it. I am also a beginner when it comes to audio programming, although I understand the basics. First off, if anyone can recommend an audio library for this kind of job, let me know, but I would like to use open-source libraries.

    Anyway, here is how I decode my audio data with FFmpeg. I'm not sure I am doing it correctly, as I could barely find documentation on that as well.
    I have a struct that contains all the information, which is initialized through a function:

    struct VideoReader
{
    bool valid;
    int width, height;
    int video_stream_index;
    int audio_stream_index;
    AVRational time_base;

    AVFormatContext* av_format_ctx;
    AVCodecContext* av_vi_codec_ctx;
    AVCodecContext* av_au_codec_ctx;
    AVPacket* packet;
    AVFrame* frame;
    SwsContext* sws_ctx;
    SwrContext* swr_ctx;
};


    The function that initializes it is quite long and not necessary to share, but it populates all of those fields except for the sws_ctx and the swr_ctx.
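
    For context, a minimal sketch of what such an open function typically looks like on the audio side is shown below. This is not the actual code from the project: error handling is trimmed, the video-stream setup is omitted, and only standard FFmpeg calls are used.

// Sketch only (not the project's real video_reader_open): open the container,
// find the audio stream and set up its decoder, leaving sws_ctx/swr_ctx null.
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

bool video_reader_open(VideoReader* vr, const char* path)
{
    vr->av_format_ctx = avformat_alloc_context();
    if (avformat_open_input(&vr->av_format_ctx, path, nullptr, nullptr) < 0)
        return false;
    if (avformat_find_stream_info(vr->av_format_ctx, nullptr) < 0)
        return false;

    // Locate the audio stream and open a decoder for it
    vr->audio_stream_index = av_find_best_stream(vr->av_format_ctx, AVMEDIA_TYPE_AUDIO,
                                                 -1, -1, nullptr, 0);
    if (vr->audio_stream_index < 0)
        return false;

    const AVCodecParameters* par = vr->av_format_ctx->streams[vr->audio_stream_index]->codecpar;
    const AVCodec* au_codec = avcodec_find_decoder(par->codec_id);
    vr->av_au_codec_ctx = avcodec_alloc_context3(au_codec);
    avcodec_parameters_to_context(vr->av_au_codec_ctx, par);
    if (avcodec_open2(vr->av_au_codec_ctx, au_codec, nullptr) < 0)
        return false;

    vr->packet = av_packet_alloc();
    vr->frame  = av_frame_alloc();
    vr->valid  = true;
    return true;
}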

    Here is how I decode packets. This function is simplified: I left the video decoding out of it, and I'll take care of syncing once I can properly play back audio:

    bool video_reader_read_au_frame(VideoReader *video_reader, unsigned char **frame_buffer)
{
    // Unpack video_reader
    auto& av_format_ctx         = video_reader->av_format_ctx;
    auto& av_codec_ctx          = video_reader->av_au_codec_ctx;
    auto& av_packet             = video_reader->packet;
    auto& av_frame              = video_reader->frame;
    auto& swr_ctx               = video_reader->swr_ctx;
    int&  audio_stream_index    = video_reader->audio_stream_index;

    // Decode the audio frame data
    int response;
    while (av_read_frame(av_format_ctx, av_packet) >= 0)
    {
        last_frame = false;  // flag declared elsewhere in the full code
        if (av_packet->stream_index != audio_stream_index)
        {
            av_packet_unref(av_packet);
            continue;
        }

        response = avcodec_send_packet(av_codec_ctx, av_packet);
        if (response < 0)
        {
            Logger::error("Could not decode packet.");
            return false;
        }

        response = avcodec_receive_frame(av_codec_ctx, av_frame);
        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
        {
            av_packet_unref(av_packet);
            continue;
        }
        else if (response < 0)
        {
            Logger::error("Could not decode packet.");
            return false;
        }
        av_packet_unref(av_packet);
        break;
    }

    // Lazily initialize the SwrContext on the first decoded frame: resample from
    // the decoder's sample format to interleaved float (AV_SAMPLE_FMT_FLT), keeping
    // the channel layout and sample rate unchanged.
    if (!swr_ctx) {
        swr_ctx = swr_alloc_set_opts(nullptr,
                                     av_codec_ctx->channel_layout,  // out_ch_layout
                                     AV_SAMPLE_FMT_FLT,             // out_sample_fmt
                                     av_codec_ctx->sample_rate,     // out_sample_rate
                                     av_codec_ctx->channel_layout,  // in_ch_layout
                                     av_codec_ctx->sample_fmt,      // in_sample_fmt
                                     av_codec_ctx->sample_rate,     // in_sample_rate
                                     0, nullptr);                   // log_offset, log_ctx
        if (!swr_ctx)
        {
            Logger::error("Could not create SwrContext.");
            return false;
        }

        if (swr_init(swr_ctx) < 0)
        {
            Logger::error("Could not initialize SwrContext.");
            return false;
        }
    }


    // Interleaved float needs a single packed plane of
    // channels * nb_samples * sizeof(float) bytes.
    const int MAX_BUFFER_SIZE = av_samples_get_buffer_size(nullptr, av_frame->channels, av_frame->nb_samples, AV_SAMPLE_FMT_FLT, 1);
    *frame_buffer = (unsigned char*)av_malloc(MAX_BUFFER_SIZE);

    // Convert the decoded samples into the caller-provided buffer.
    swr_convert(swr_ctx, frame_buffer, av_frame->nb_samples,
                (const unsigned char**)av_frame->data, av_frame->nb_samples);

    av_frame_unref(av_frame);


    return true;
}


    Here is how I would normally call this function:

    VideoReader vr{};
if(!video_reader_open(&vr, "C:/Path/to/file.mp4"))
{
    Logger::error("Could not initialize VideoReader.");
    return 1;
}
unsigned char* buffer;
if(!video_reader_read_au_frame(&vr, &buffer))
{
    Logger::error("Could not read audio data.");
    return 1;
}

play_audio(&buffer);  // <-- find a way to play audio once buffer has data in it

video_reader_close(&vr);
return 0;


    Obviously I will loop over video_reader_read_au_frame(&vr, &buffer) to play back the whole video.

    I believe my code puts the samples from the decoded frame in buffer, but I am really not sure. I am also unsure whether I need to convert to the AV_SAMPLE_FMT_FLT audio format, to something else, or just leave it as it is.

    For libsoundio, I kind of understand this example: http://libsound.io/ but I'm not sure I fully understand how the library works, especially the callback function. I know I have to pass buffer in outstream->userdata as a void pointer, but I don't know how to use it in the callback function.

    Any help or guidance would be greatly appreciated. Note that later on in this project I might want to send this data over a network to play the video on another computer in sync.
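
    (This is not an authoritative answer, just a minimal sketch of the callback side, modelled on the sine-wave example at http://libsound.io/ and assuming the buffer filled by video_reader_read_au_frame holds interleaved AV_SAMPLE_FMT_FLT samples with outstream->format set to SoundIoFormatFloat32NE. PlaybackState and its fields are hypothetical names used only for illustration.)

// Sketch only: a libsoundio write_callback that drains interleaved float samples
// handed over through outstream->userdata. PlaybackState is a hypothetical helper.
#include <soundio/soundio.h>

struct PlaybackState
{
    const float* samples;   // interleaved AV_SAMPLE_FMT_FLT data from swr_convert
    int total_frames;       // number of audio frames available in `samples`
    int next_frame;         // current read position
    int channels;           // must match outstream->layout.channel_count
};

static void write_callback(SoundIoOutStream* outstream, int /*frame_count_min*/, int frame_count_max)
{
    auto* state = static_cast<PlaybackState*>(outstream->userdata);
    SoundIoChannelArea* areas;
    int frames_left = frame_count_max;

    while (frames_left > 0)
    {
        int frame_count = frames_left;
        if (soundio_outstream_begin_write(outstream, &areas, &frame_count) != 0 || frame_count == 0)
            return;

        for (int frame = 0; frame < frame_count; ++frame)
        {
            const bool have_data = state->next_frame < state->total_frames;
            for (int ch = 0; ch < state->channels; ++ch)
            {
                // Write the next decoded sample, or silence once the buffer runs dry.
                float sample = have_data ? state->samples[state->next_frame * state->channels + ch] : 0.0f;
                *reinterpret_cast<float*>(areas[ch].ptr + areas[ch].step * frame) = sample;
            }
            if (have_data)
                ++state->next_frame;
        }

        if (soundio_outstream_end_write(outstream) != 0)
            return;
        frames_left -= frame_count;
    }
}

    On the setup side, as in that example, outstream->sample_rate, outstream->write_callback and outstream->userdata = &state would be filled in before calling soundio_outstream_open() and soundio_outstream_start(). A real player would replace the flat PlaybackState with a thread-safe ring buffer that the decoding loop keeps feeding.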

  • Revision 49584: The limit on the number of articles to display was not working. Error ...

    13 July 2011, by yffic@… — Log

    The limit on the number of articles to display was not working. Copy/paste error made in [47121]

  • How to convert short video clips to TS without sound "gaps" between the segments?

    10 November 2022, by Zvika

    I am trying to convert a sequence of short video files from MP4 to TS using ffmpeg.
I get valid TS files, but when playing them in any HLS player, there is a noticeable short gap in the sound between segments.

    If I first stitch all the short video files into a single video file and convert that file to TS while slicing it into segments, it plays perfectly fine.

    Now for the gory details:
My software creates short video clips that should be concatenated into an output video and streamed as HLS.
Each short clip is an H.264 video file plus a WAV audio file (I can create other formats if needed).
I then convert each such pair of H.264+WAV to a TS file using ffmpeg:

ffmpeg -y -i seg_0.mp4 -i seg_0.wav -c:a libvo_aacenc -c:v copy -bsf:v h264_mp4toannexb seg_0.ts
ffmpeg -y -i seg_1.mp4 -i seg_1.wav -c:a libvo_aacenc -c:v copy -bsf:v h264_mp4toannexb -output_ts_offset 2.01 seg_1.ts
ffmpeg -y -i seg_2.mp4 -i seg_2.wav -c:a libvo_aacenc -c:v copy -bsf:v h264_mp4toannexb -output_ts_offset 4.02 seg_2.ts

etc.

    and I create an appropriate M3U8 file to play all the short clips as a sequence.
The result is not satisfying, as I have an audio gap between each pair of consecutive segments, as you can hear here:
https://rnd3-temp-public.s3.amazonaws.com/HLS_4/out_seg2.m3u8
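
    (For reference, such a playlist would typically look roughly like the sketch below, with the 2.01 s segment duration taken from the -output_ts_offset values above; the actual M3U8 used here is not shown, so tags and exact durations may differ.)

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:2.01,
seg_0.ts
#EXTINF:2.01,
seg_1.ts
#EXTINF:2.01,
seg_2.ts
#EXT-X-ENDLIST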

    However, if I concatenate all the pairs together and convert the concatenated sequence to TS, while asking ffmpeg to slice it into segments again, using a command like:

ffmpeg -y -f concat -i mp4_list.txt -f concat -i wav_list.txt -c:a libvo_aacenc -c:v copy -bsf:v h264_mp4toannexb -flags +cgop -g 30 -hls_time 2 out2.m3u8

it plays perfectly OK, as you can hear here:
https://rnd3-temp-public.s3.amazonaws.com/HLS/out2.m3u8
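
    (The contents of mp4_list.txt and wav_list.txt are not shown; they are assumed to be ordinary ffmpeg concat-demuxer lists along the lines of the sketch below, with wav_list.txt listing the corresponding .wav files.)

ffconcat version 1.0
file 'seg_0.mp4'
file 'seg_1.mp4'
file 'seg_2.mp4'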

    How can I get clear audio output while still encoding each segment separately? (It's crucial for my workflow.)

    Thanks!