
Other articles (85)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications will also be required (...)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    To get a working installation, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications will also be required (...)

On other sites (16239)

  • FFmpeg C API - syncing video and audio

    10 November 2015, by Justin Bradley

    I am trimming video and having a hard time getting the audio to sync correctly. The code below is as close as I’ve gotten it to work. I’ve tried both re-encoding and not re-encoding the output streams.

    The video trims correctly and is written to the output container. The audio stream also trims correctly, but is written to the front of the output container. For example, if the trim length is 10 s, the correct portion of audio plays for 10 s and then the correct portion of video plays.

    //////// audio stream ////////
    const AVStream *input_stream_audio = input_container->streams[audio_stream_index];
    const AVCodec *decoder_audio = avcodec_find_decoder(input_stream_audio->codec->codec_id);
    if(!decoder_audio) {
       cleanup(decoded_packet, output_container, decoded_frame);
       avformat_close_input(&input_container);
       LOGE("=> Audio decoder not found");
       return -1;
    }
    if(avcodec_open2(input_stream_audio->codec, decoder_audio, NULL) < 0) {
       cleanup(decoded_packet, output_container, decoded_frame);
       avformat_close_input(&input_container);
       LOGE("=> Error opening audio decoder");
       return -1;
    }

    AVStream *output_stream_audio = avformat_new_stream(output_container, NULL);
    if(avcodec_copy_context(output_stream_audio->codec, input_stream_audio->codec) != 0){
       LOGE("=> Failed to Copy audio Context ");
       return -1;
    }
    else {
       LOGI("=> Copied audio context ");
       output_stream_audio->codec->codec_id = input_stream_audio->codec->codec_id;
       output_stream_audio->codec->codec_tag = 0;
       output_stream_audio->pts = input_stream_audio->pts;
       output_stream_audio->time_base.num = input_stream_audio->time_base.num;
       output_stream_audio->time_base.den = input_stream_audio->time_base.den;

    }

    if(avio_open(&output_container->pb, output_file, AVIO_FLAG_WRITE) < 0) {
       cleanup(decoded_packet, output_container, decoded_frame);
       avformat_close_input(&input_container);
       LOGE("=> Error opening output file");
       return -1;
    }

    // allocate frame for conversion
    decoded_frame = avcodec_alloc_frame();
    if(!decoded_frame) {
       cleanup(decoded_packet, output_container, decoded_frame);
       avformat_close_input(&input_container);
       LOGE("=> Error allocating frame");
       return -1;
    }

    av_dump_format(input_container, 0, input_file, 0);
    avformat_write_header(output_container, NULL);
    av_init_packet(&decoded_packet);

    decoded_packet.data = NULL;
    decoded_packet.size = 0;
    int current_frame_num = 1;
    int current_frame_num_audio = 1;
    int got_frame, len;

    AVRational default_timebase;
    default_timebase.num = 1;
    default_timebase.den = AV_TIME_BASE;

    int64_t starttime_int64 = av_rescale_q((int64_t)( 12.0 * AV_TIME_BASE ), AV_TIME_BASE_Q, input_stream->time_base);
    int64_t endtime_int64 = av_rescale_q((int64_t)( 18.0 * AV_TIME_BASE ), AV_TIME_BASE_Q, input_stream->time_base);
    LOGI("=> starttime_int64:     %" PRId64, starttime_int64);
    LOGI("=> endtime_int64:       %" PRId64, endtime_int64);

    int64_t starttime_int64_audio = av_rescale_q((int64_t)( 12.0 * AV_TIME_BASE ), AV_TIME_BASE_Q, input_stream_audio->time_base);
    int64_t endtime_int64_audio = av_rescale_q((int64_t)( 18.0 * AV_TIME_BASE ), AV_TIME_BASE_Q, input_stream_audio->time_base);
    LOGI("=> starttime_int64_audio:     %" PRId64, starttime_int64_audio);
    LOGI("=> endtime_int64_audio:       %" PRId64, endtime_int64_audio);

    // loop input container and decode frames
    while(av_read_frame(input_container, &decoded_packet)>=0) {
       // video packets
       if (decoded_packet.stream_index == video_stream_index) {
           len = avcodec_decode_video2(input_stream->codec, decoded_frame, &got_frame, &decoded_packet);
           if(len < 0) {
               cleanup(decoded_packet, output_container, decoded_frame);
               avformat_close_input(&input_container);
               LOGE("=> No frames to decode");
               return -1;
           }
           // this is the trim range we're looking for
           if(got_frame && decoded_frame->pkt_pts >= starttime_int64 && decoded_frame->pkt_pts <= endtime_int64) {
                   av_init_packet(&encoded_packet);
                   encoded_packet.data =  NULL;
                   encoded_packet.size =  0;

                   ret = avcodec_encode_video2(output_stream->codec, &encoded_packet, decoded_frame, &got_frame);
                   if (ret < 0) {
                       cleanup(decoded_packet, output_container, decoded_frame);
                       avformat_close_input(&input_container);
                       LOGE("=> Error encoding frames");
                       return ret;
                   }
                   if(got_frame) {
                       if (output_stream->codec->coded_frame->key_frame) {
                           encoded_packet.flags |= AV_PKT_FLAG_KEY;
                       }

                       encoded_packet.stream_index = output_stream->index;
                       encoded_packet.pts = av_rescale_q(current_frame_num, output_stream->codec->time_base, output_stream->time_base);
                       encoded_packet.dts = av_rescale_q(current_frame_num, output_stream->codec->time_base, output_stream->time_base);

                       ret = av_interleaved_write_frame(output_container, &encoded_packet);
                       if (ret < 0) {
                           cleanup(decoded_packet, output_container, decoded_frame);
                           avformat_close_input(&input_container);
                           LOGE("=> Error encoding frames");
                           return ret;
                       }
                       else {
                           current_frame_num +=1;
                       }
                   }
               av_free_packet(&encoded_packet);
           }
       }
       // audio packets
       else if(decoded_packet.stream_index == audio_stream_index) {
           // this is the trim range we're looking for
           if(decoded_packet.pts >= starttime_int64_audio && decoded_packet.pts <= endtime_int64_audio) {
               av_init_packet(&encoded_packet);

               encoded_packet.data =  decoded_packet.data;
               encoded_packet.size =  decoded_packet.size;
               encoded_packet.stream_index = audio_stream_index;
               encoded_packet.pts = av_rescale_q(current_frame_num_audio, output_stream_audio->codec->time_base, output_stream_audio->time_base);
               encoded_packet.dts = av_rescale_q(current_frame_num_audio, output_stream_audio->codec->time_base, output_stream_audio->time_base);

               ret = av_interleaved_write_frame(output_container, &encoded_packet);
               if (ret < 0) {
                   cleanup(decoded_packet, output_container, decoded_frame);
                   avformat_close_input(&input_container);
                   LOGE("=> Error encoding frames");
                   return ret;
               }
               else {
                   current_frame_num_audio +=1;
               }
              av_free_packet(&encoded_packet);
           }
       }
    }

    Edit

    I have a slight improvement on the initial code. The audio and video are still not perfectly synced, but the original problem of the audio playing first, followed by the video, is resolved.

    I’m now writing the decoded packet to the output container rather than re-encoding it.

    In the end though I have the same problem - the trimmed video’s audio and video streams are not perfectly synced.

    // audio packets
       else if(decoded_packet.stream_index == audio_stream_index) {
           // this is the trim range we're looking for
           if(decoded_packet.pts >= starttime_int64_audio && decoded_packet.pts <= endtime_int64_audio) {
               ret = av_interleaved_write_frame(output_container, &decoded_packet);
               if (ret < 0) {
                   cleanup(decoded_packet, output_container, decoded_frame);
                   avformat_close_input(&input_container);
                   LOGE("=> Error writing audio frame (%s)", av_err2str(ret));
                   return ret;
               }
               else {
                   current_frame_num_audio +=1;
               }
           }
           else if(decoded_frame->pkt_pts > endtime_int64_audio) {
               audio_copy_complete = true;
           }
       }
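    A likely culprit in the code above is that the output pts/dts are synthesized from frame counters (`current_frame_num`, `current_frame_num_audio`) that advance in different time bases, instead of being derived from each packet’s original timestamp. A common fix for trimming is to subtract the trim start from every kept packet’s pts/dts and rescale into the output stream’s time base, so both streams start at zero in their own base. The sketch below is standalone arithmetic, not FFmpeg calls: `rescale_q` is a simplified stand-in for `av_rescale_q`, and the time bases (1/44100 in, 1/48000 out) and the 12 s trim start are purely illustrative values.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for av_rescale_q: converts a value between two
     * rational time bases (num/den). Assumes no int64 overflow for these ranges. */
    static int64_t rescale_q(int64_t v, int in_num, int in_den,
                             int out_num, int out_den) {
        return v * in_num * out_den / ((int64_t)in_den * out_num);
    }

    int main(void) {
        /* Hypothetical audio stream: input time base 1/44100, trim start at 12 s. */
        int64_t start_pts = 12LL * 44100;
        int64_t pkt_pts   = 13LL * 44100;   /* a packet 1 s into the trim range */

        /* Shift so the first kept packet lands at 0, then rescale into the
         * output stream's time base (1/48000 here, purely illustrative). */
        int64_t out_pts = rescale_q(pkt_pts - start_pts, 1, 44100, 1, 48000);
        printf("%lld\n", (long long)out_pts); /* 48000 == 1 s in the output base */
        return 0;
    }
    ```

    Applying the same offset-then-rescale to both streams keeps them aligned, because each packet’s output timestamp is computed from its input timestamp rather than from how many packets happened to be written so far.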
  • After ffmpeg encode, AVPacket pts and dts is AV_NOPTS_VALUE

    29 November 2017, by Li Zehan

    I would like to ask a question about ffmpeg when I use the encoder (x264).

    This is my code:

    int
    FFVideoEncoder::init(AVCodecID codecId, int bitrate, int fps, int gopSize,
                        int width, int height, AVPixelFormat format) {
       release();

       const AVCodec *codec = avcodec_find_encoder(codecId);
       m_pCodecCtx = avcodec_alloc_context3(codec);
       m_pCodecCtx->width = width;
       m_pCodecCtx->height = height;
       m_pCodecCtx->pix_fmt = format;
       m_pCodecCtx->bit_rate = bitrate;
       m_pCodecCtx->thread_count = 5;
       m_pCodecCtx->max_b_frames = 0;
       m_pCodecCtx->gop_size = gopSize;

       m_pCodecCtx->time_base.num = 1;
       m_pCodecCtx->time_base.den = fps;

       //H.264
       if (m_pCodecCtx->codec_id == AV_CODEC_ID_H264) {
    //        av_dict_set(&opts, "preset", "slow", 0);
           av_dict_set(&m_pEncoderOpts, "preset", "superfast", 0);
           av_dict_set(&m_pEncoderOpts, "tune", "zerolatency", 0);

           m_pCodecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;
           m_pCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }
       int ret = avcodec_open2(m_pCodecCtx, m_pCodecCtx->codec, &m_pEncoderOpts);
       if (ret == 0) {
           LOGI("open avcodec success!");
       } else {
           LOGE("open avcodec error!");
           return -1;
       }
       return ret;
    }

    int FFVideoEncoder::encode(const Frame &inFrame, AVPacket *outPacket) {
       AVFrame *frame = av_frame_alloc();
    //    avpicture_fill((AVPicture *) frame, inFrame.getData(), AV_PIX_FMT_YUV420P, inFrame.getWidth(),
    //                   inFrame.getHeight());
       av_image_fill_arrays(frame->data, frame->linesize, inFrame.getData(), m_pCodecCtx->pix_fmt,
                            inFrame.getWidth(), inFrame.getHeight(), 1);

       int ret = 0;
       ret = avcodec_send_frame(m_pCodecCtx, frame);
       if (ret != 0) {
           LOGE("send frame error! %s", av_err2str(ret));
       } else {
           ret = avcodec_receive_packet(m_pCodecCtx, outPacket);
           LOGI("extract data size = %d", m_pCodecCtx->extradata_size);
           if (ret != 0) {
               LOGE("receive packet error! %s", av_err2str(ret));
           }
       };
       av_frame_free(&frame);
       return ret;
    }

    I expect the AVPacket to carry the pts and dts for this frame,

    but in fact I can only get the encoded frame data and size.

    //====================================

    Besides this question, I have another question:

    The x264 docs say the "tune" option can be set to values like film, animation, and others, but I can only get a normal video when I set the "zerolatency" option. When I set the other options, the video’s bitrate is very low.

    Thanks for your answer.
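    One thing worth noting about the code above: `encode()` never sets `frame->pts` before `avcodec_send_frame`, and the encoder propagates the frame’s pts into the packet, so an unset pts comes back as AV_NOPTS_VALUE. A common pattern is to number frames in the codec time base (one tick per frame at 1/fps) and rescale when muxing. The sketch below mimics only that arithmetic, without FFmpeg; `FAKE_NOPTS` is a stand-in for AV_NOPTS_VALUE, and the 30 fps codec base and 1/90000 mux base are illustrative assumptions.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define FAKE_NOPTS INT64_MIN  /* stand-in for AV_NOPTS_VALUE */

    /* With codec time base 1/fps, the n-th frame's pts is simply n. */
    static int64_t frame_pts(int frame_index) {
        return frame_index;
    }

    /* Rescale 1/fps -> 1/mux_den (e.g. 1/90000 for MPEG-TS). */
    static int64_t mux_pts(int64_t codec_pts, int fps, int mux_den) {
        return codec_pts * mux_den / fps;
    }

    int main(void) {
        int64_t unset = FAKE_NOPTS;  /* what an un-timestamped frame yields */
        int64_t p0 = mux_pts(frame_pts(0), 30, 90000);
        int64_t p1 = mux_pts(frame_pts(1), 30, 90000);
        printf("%d %lld %lld\n", unset == FAKE_NOPTS,
               (long long)p0, (long long)p1);
        return 0;
    }
    ```

    In the real encoder loop, the equivalent step would be assigning an increasing `frame->pts` before each `avcodec_send_frame` call, so `avcodec_receive_packet` returns packets with meaningful pts/dts.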

  • is ffmpeg memory-leakful ?

    18 October 2013, by user718146

    I have compiled ffmpeg for Android and tried to use it in my app.
    I tested it a bit, but it seems to leak memory.

    This is my code:

    void fftest()
    {
       char *url = "/mnt/sdcard/h264.dat";
       prot_read_avc_buff_init();

       AVFormatContext *fmt_ctx = NULL;
       int ret = avformat_open_input(&fmt_ctx, url, 0, NULL);
       if (ret < 0) {
           LOG(LOG_ERROR, __FUNCTION__, "avformat_open_input for url %s failed (%s)", url, strerror(ret));
           return;
       }

       LOGI("avformat_open_input OK ");

       if (fmt_ctx) avformat_close_input(&fmt_ctx);
    }

    Each time this code runs, the process’s allocated native heap memory increases. Do you have any idea how to fix this leak? Or is there a substitute for ffmpeg?

    Thanks for any replies!
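    Two things are worth checking before concluding the library leaks: on failure, `avformat_open_input` frees the context it allocated itself, and FFmpeg performs some one-time global allocations on first use (codec/format registration tables in builds of that era) that can look like a per-call leak on the first iteration. A simple way to tell a real per-call leak from one-time setup is to balance-count open/close pairs over many iterations. The sketch below uses hypothetical `open_ctx`/`close_ctx` stand-ins (not FFmpeg calls) just to illustrate the counting technique.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Count live contexts: every open must be matched by a close.
     * A counter that grows across iterations indicates a per-call leak. */
    static int live_contexts = 0;

    static void *open_ctx(void) {
        live_contexts++;
        return malloc(1);  /* stand-in for the real allocation */
    }

    static void close_ctx(void **c) {
        if (*c) {
            free(*c);
            *c = NULL;
            live_contexts--;
        }
    }

    int main(void) {
        for (int i = 0; i < 100; i++) {
            void *ctx = open_ctx();
            close_ctx(&ctx);  /* forgetting this line would leak each pass */
        }
        printf("%d\n", live_contexts); /* 0 when every open is matched */
        return 0;
    }
    ```

    If the counter stays balanced but native heap still grows every call, the leak is inside a specific call, which a native allocation tracker can then pinpoint.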