
Other articles (73)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    To get a working installation, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

On other sites (7635)

  • ffmpeg : missing frames with mp4 encoding

    6 July 2016, by Sierra

    I’m currently developing a desktop app that generates videos from pictures (QImage, to be more specific). I’m working with Qt 5.6 and a recent build of ffmpeg (build git-0a9e781, 2016-06-10).

    I encode several QImages to create an .mp4 video. I already have an output, but it seems that some frames are missing.

    Here is my code. I tried to keep it as clear as possible, removing comments and error handling.

    ## INITIALIZATION
    #####################################################################

    AVOutputFormat  * outputFormat  = Q_NULLPTR;
    AVFormatContext * formatContext = Q_NULLPTR;

    // filePath: "C:/Users/.../qt_temp.Jv7868.mp4"
    avformat_alloc_output_context2(&formatContext, NULL, NULL, filePath.data());

    outputFormat = formatContext->oformat;
    if (outputFormat->video_codec != AV_CODEC_ID_NONE) {
       // Finding a registered encoder with a matching codec ID...
       *codec = avcodec_find_encoder(outputFormat->video_codec);

       // Adding a new stream to a media file...
       stream = avformat_new_stream(formatContext, *codec);
       stream->id = formatContext->nb_streams - 1;


       AVCodecContext * codecContext = avcodec_alloc_context3(*codec);

       switch ((*codec)->type) {
       case AVMEDIA_TYPE_VIDEO:
           codecContext->codec_id  = outputFormat->video_codec;
           codecContext->bit_rate  = 400000;

           codecContext->width     = 1240;
           codecContext->height    = 874;

           // Timebase: this is the fundamental unit of time (in seconds) in terms of which frame
           // timestamps are represented. For fixed-fps content, timebase should be 1/framerate
           // and timestamp increments should be identical to 1.
           stream->time_base       = (AVRational){1, 24};
           codecContext->time_base = stream->time_base;

           // Emit 1 intra frame every 12 frames at most
           codecContext->gop_size  = 12;
           codecContext->pix_fmt   = AV_PIX_FMT_YUV420P;

           if (codecContext->codec_id == AV_CODEC_ID_H264) {
               av_opt_set(codecContext->priv_data, "preset", "slow", 0);
           }
           break;
       }

       if (formatContext->oformat->flags & AVFMT_GLOBALHEADER) {
           codecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }
    }

    avcodec_open2(codecContext, *codec, NULL);

    // Allocating and initializing re-usable frames...
    frame = allocPicture(codecContext->width, codecContext->height, codecContext->pix_fmt);
    tmpFrame = allocPicture(codecContext->width, codecContext->height, AV_PIX_FMT_BGRA);

    avcodec_parameters_from_context(stream->codecpar, codecContext);

    av_dump_format(formatContext, 0, filePath.data(), 1);

    if (!(outputFormat->flags & AVFMT_NOFILE)) {
       avio_open(&formatContext->pb, filePath.data(), AVIO_FLAG_WRITE);
    }

    // Writing the stream header, if any...
    avformat_write_header(formatContext, NULL);

    ## RECEIVING A NEW FRAME
    #####################################################################

    // New QImage received: QImage image
    const qint32 width  = image.width();
    const qint32 height = image.height();

    // When we pass a frame to the encoder, it may keep a reference to it internally;
    // make sure we do not overwrite it here!
    av_frame_make_writable(tmpFrame);

    for (qint32 y = 0; y < height; y++) {
       const uint8_t * scanline = image.scanLine(y);

       for (qint32 x = 0; x < width * 4; x++) {
           tmpFrame->data[0][y * tmpFrame->linesize[0] + x] = scanline[x];
       }
    }

    // As we only generate a BGRA picture, we must convert it to the
    // codec pixel format if needed.
    if (!swsCtx) {
       swsCtx = sws_getContext(width, height,
                               AV_PIX_FMT_BGRA,
                               codecContext->width, codecContext->height,
                               codecContext->pix_fmt,
                               swsFlags, NULL, NULL, NULL);
    }

    sws_scale(swsCtx,
             (const uint8_t * const *)tmpFrame->data,
             tmpFrame->linesize,
             0,
             codecContext->height,
             frame->data,
             frame->linesize);

    ...

    AVPacket packet;
    int gotPacket = 0;

    av_init_packet(&packet);

    // Packet data will be allocated by the encoder
    packet.data = NULL;
    packet.size = 0;

    frame->pts = nextPts++; // nextPts starts at 0
    avcodec_encode_video2(codecContext, &packet, frame, &gotPacket);

    if (gotPacket) {
       if (codecContext->coded_frame->key_frame) {
          packet.flags |= AV_PKT_FLAG_KEY;
       }

       // Rescale output packet timestamp values from codec to stream timebase
       av_packet_rescale_ts(&packet, codecContext->time_base, stream->time_base);
       packet.stream_index = stream->index;

       // Write the compressed frame to the media file.
       av_interleaved_write_frame(formatContext, &packet);

       av_packet_unref(&packet);
    }

    ## FINISHING ENCODING
    #####################################################################

    // Retrieving delayed frames if any...
    for (int gotOutput = 1; gotOutput;) {
       avcodec_encode_video2(codecContext, &packet, NULL, &gotOutput);

       if (gotOutput) {
           // Rescale output packet timestamp values from codec to stream timebase
           // Rescale output packet timestamp values from codec to stream timebase
           av_packet_rescale_ts(&packet, codecContext->time_base, stream->time_base);
           packet.stream_index = stream->index;

           // Write the compressed frame to the media file.
           av_interleaved_write_frame(formatContext, &packet);
           av_packet_unref(&packet);
       }
    }

    av_write_trailer(formatContext);

    avcodec_free_context(&codecContext);
    av_frame_free(&frame);
    av_frame_free(&tmpFrame);
    sws_freeContext(swsCtx);

    if (!(outputFormat->flags & AVFMT_NOFILE)) {
       // Closing the output file...
       avio_closep(&formatContext->pb);
    }

    avformat_free_context(formatContext);

    Part of the last second is always cut off (e.g. when I send 48 frames at 24 fps, media players show 1.9 seconds of video). I analyzed the video (48 frames, 24 fps) with ffmpeg on the command line, and I found something weird:
    [screenshot of ffmpeg analysis output]
    When I re-encode the video with ffmpeg (on the command line) to the same format, I get a more logical output:
    [screenshot of the re-encoded analysis output]

    From what I read on different topics, I think it is closely connected to the h264 codec, but I have no idea how to fix it. I’m not familiar with ffmpeg, so any kind of help would be highly appreciated. Thank you.

    EDIT 06/07/2016
    Digging a little deeper into the ffmpeg examples, I noticed these lines when closing the media file:

    uint8_t endcode[] = { 0, 0, 1, 0xb7 };
    ...
    /* add sequence end code to have a real mpeg file */
    fwrite(endcode, 1, sizeof(endcode), f);

    Could that sequence be linked to my problem? I’m trying to implement it in my code but, for now, it corrupts the media file. Any idea how I could apply it to my case?

  • FFmpeg "movflags" > "faststart" causes invalid MP4 file to be written

    22 August 2016, by williamtroup

    I’m setting up the format layout for the video as follows:

    AVOutputFormat* outputFormat = ffmpeg.av_guess_format(null, "output.mp4", null);

    AVCodec* videoCodec = ffmpeg.avcodec_find_encoder(outputFormat->video_codec);

    AVFormatContext* formatContext = ffmpeg.avformat_alloc_context();
    formatContext->oformat = outputFormat;
    formatContext->video_codec_id = videoCodec->id;

    ffmpeg.avformat_new_stream(formatContext, videoCodec);

    This is how I am setting up the Codec Context:

    AVCodecContext* codecContext = ffmpeg.avcodec_alloc_context3(videoCodec);
    codecContext->bit_rate = 400000;
    codecContext->width = 1280;
    codecContext->height = 720;
    codecContext->gop_size = 12;
    codecContext->max_b_frames = 1;
    codecContext->pix_fmt = videoCodec->pix_fmts[0];
    codecContext->codec_id = videoCodec->id;
    codecContext->codec_type = videoCodec->type;
    codecContext->time_base = new AVRational
    {
       num = 1,
       den = 30
    };

    I’m using the following code to set up the "movflags" > "faststart" option for the header of the video:

    AVDictionary* options = null;

    int result = ffmpeg.av_dict_set(&options, "movflags", "faststart", 0);

    The file is opened and the header is written as follows:

    if ((formatContext->oformat->flags & ffmpeg.AVFMT_NOFILE) == 0)
    {
       int ioOptionResult = ffmpeg.avio_open(&formatContext->pb, "output.mp4", ffmpeg.AVIO_FLAG_WRITE);
    }

    int writeHeaderResult = ffmpeg.avformat_write_header(formatContext, &options);

    After this, I write each video frame as follows:

    outputFrame->pts = frameIndex;

    packet.flags |= ffmpeg.AV_PKT_FLAG_KEY;
    packet.pts = frameIndex;
    packet.dts = frameIndex;

    int encodedFrame = 0;
    int encodeVideoResult = ffmpeg.avcodec_encode_video2(codecContext, &packet, outputFrame, &encodedFrame);

    if (encodedFrame != 0)
    {
       packet.pts = ffmpeg.av_rescale_q(packet.pts, codecContext->time_base, m_videoStream->time_base);
       packet.dts = ffmpeg.av_rescale_q(packet.dts, codecContext->time_base, m_videoStream->time_base);
       packet.stream_index = m_videoStream->index;

       if (codecContext->coded_frame->key_frame > 0)
       {
           packet.flags |= ffmpeg.AV_PKT_FLAG_KEY;
       }

       int writeFrameResult = ffmpeg.av_interleaved_write_frame(formatContext, &packet);
    }

    After that, I write the trailer:

    int writeTrailerResult = ffmpeg.av_write_trailer(formatContext);

    The file finishes writing and everything closes and frees up correctly. However, the MP4 file is unplayable (even VLC can’t play it). AtomicParsley.exe won’t show any information about the file either.

    The DLLs used for the AutoGen library are:

    avcodec-56.dll
    avdevice-56.dll
    avfilter-5.dll
    avformat-56.dll
    avutil-54.dll
    postproc-53.dll
    swresample-1.dll
    swscale-3.dll
  • Sync Audio/Video in MP4 using AutoGen FFmpeg library

    12 July 2016, by williamtroup

    I’m currently having problems making my audio and video streams stay synced.

    These are the AVCodecContexts I’m using:

    For Video:

    AVCodec* videoCodec = ffmpeg.avcodec_find_encoder(AVCodecID.AV_CODEC_ID_H264);
    AVCodecContext* videoCodecContext = ffmpeg.avcodec_alloc_context3(videoCodec);
    videoCodecContext->bit_rate = 400000;
    videoCodecContext->width = 1280;
    videoCodecContext->height = 720;
    videoCodecContext->gop_size = 12;
    videoCodecContext->max_b_frames = 1;
    videoCodecContext->pix_fmt = videoCodec->pix_fmts[0];
    videoCodecContext->codec_id = videoCodec->id;
    videoCodecContext->codec_type = videoCodec->type;
    videoCodecContext->time_base = new AVRational
    {
       num = 1,
       den = 30
    };

    For Audio:

    AVCodec* audioCodec = ffmpeg.avcodec_find_encoder(AVCodecID.AV_CODEC_ID_AAC);
    AVCodecContext* audioCodecContext = ffmpeg.avcodec_alloc_context3(audioCodec);
    audioCodecContext->bit_rate = 1280000;
    audioCodecContext->sample_rate = 48000;
    audioCodecContext->channels = 2;
    audioCodecContext->channel_layout = ffmpeg.AV_CH_LAYOUT_STEREO;
    audioCodecContext->frame_size = 1024;
    audioCodecContext->sample_fmt = audioCodec->sample_fmts[0];
    audioCodecContext->profile = ffmpeg.FF_PROFILE_AAC_LOW;
    audioCodecContext->codec_id = audioCodec->id;
    audioCodecContext->codec_type = audioCodec->type;

    When writing the video frames, I set up the PTS position as follows:

    outputFrame->pts = frameIndex;  // The current index of the image frame being written

    I then encode the frame using avcodec_encode_video2(). After this, I call the following to set up the timestamps:

    ffmpeg.av_packet_rescale_ts(&packet, videoCodecContext->time_base, videoStream->time_base);

    This plays perfectly.

    However, when I do the same for audio, the video plays in slow motion, plays the audio first and then carries on with the video afterwards with no sound.

    I cannot find an example anywhere of how to set pts/dts positions for video/audio in an MP4 file. Any examples or help would be great!

    Also, I’m writing the video frames first, after which (once they are all written) I write the audio. I’ve updated this question with the adjusted values suggested in the comments.

    I’ve uploaded a test video to show my results here : http://www.filedropper.com/test_124