
Media (91)

Other articles (85)

  • Customizing by adding a logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other formats (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)

On other sites (6950)

  • Video too fast FFmpeg

    22 November 2012, by Spamdark

    I am having an issue again with ffmpeg. I'm a newbie with ffmpeg, and I can't find a good, up-to-date tutorial...

    This time, when I play a video with ffmpeg, it plays too fast: ffmpeg is ignoring the FPS. I don't want to handle that with a fixed thread sleep, because the videos have different FPS values.

    I created a thread; here is the loop:

    AVPacket framepacket;

    while(av_read_frame(formatContext,&framepacket)>= 0){
       pausecontrol.lock();

       // Is it a video or an audio frame?
       if(framepacket.stream_index==gotVideoCodec){
           int framereaded;
           // Video? Ok
           avcodec_decode_video2(videoCodecContext,videoFrame,&framereaded,&framepacket);
           // Yeah, did we get it?
           if(framereaded && doit){
               AVRational millisecondbase = {1,1000};
               int f_number = framepacket.dts;
               int f_time = av_rescale_q(framepacket.dts,formatContext->streams[gotVideoCodec]->time_base,millisecondbase);
               currentTime=f_time;
               currentFrameNumber=f_number;

               int stWidth = videoCodecContext->width;
               int stHeight = videoCodecContext->height;
           SwsContext *ctx = sws_getContext(stWidth, stHeight, videoCodecContext->pix_fmt, stWidth,
                                            stHeight, PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
           if(ctx!=0){
               sws_scale(ctx,videoFrame->data,videoFrame->linesize,0,videoCodecContext->height,videoFrameRGB->data,videoFrameRGB->linesize);
               QImage framecapsule=QImage(stWidth,stHeight,QImage::Format_RGB888);

               for(int y=0;y<stHeight;y++){
                   memcpy(framecapsule.scanLine(y),videoFrameRGB->data[0]+y*videoFrameRGB->linesize[0],stWidth*3);
               }
               emit newFrameReady(framecapsule);
               sws_freeContext(ctx);
           }

           }
       }
       if(framepacket.stream_index==gotAudioCodec){
           // Audio? Ok
       }
       pausecontrol.unlock();
       av_free_packet(&framepacket);
    }

    Any idea?
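    The loop above decodes and emits frames as fast as av_read_frame() returns them; nothing waits until each frame's presentation time, which is why playback ignores the FPS. The usual fix is to rescale each packet's timestamp into wall-clock units and sleep for the difference between consecutive frames. A minimal standalone sketch of that arithmetic (rescale_q here is a simplified stand-in for FFmpeg's av_rescale_q, without its overflow protection, so the snippet compiles without FFmpeg headers):

```cpp
#include <cstdint>

// Simplified stand-in for av_rescale_q: convert a timestamp from one
// rational time base to another (no 128-bit overflow protection).
struct Rational { int64_t num, den; };

int64_t rescale_q(int64_t ts, Rational src, Rational dst) {
    return ts * src.num * dst.den / (src.den * dst.num);
}

// Milliseconds the caller should wait before presenting the current
// frame, given this packet's DTS and the previous packet's DTS,
// both expressed in the stream's time base.
int64_t frame_delay_ms(int64_t dts, int64_t prev_dts, Rational stream_tb) {
    const Rational ms = {1, 1000};
    return rescale_q(dts, stream_tb, ms) - rescale_q(prev_dts, stream_tb, ms);
}
```

    In the decode loop this would translate to something like std::this_thread::sleep_for(std::chrono::milliseconds(frame_delay_ms(...))) before emit newFrameReady(...), which paces every video from its own timestamps regardless of its frame rate.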

  • ffmpeg + ffserver : "Broken ffmpeg default settings detected"

    18 October 2012, by Chris Nolet

    I'm just trying to connect ffmpeg to ffserver and stream rawvideo.

    I keep getting the error "broken ffmpeg default settings detected" from libx264, and then "Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height" from ffmpeg before it exits.

    I'm launching ffmpeg with the command: ffmpeg -f x11grab -s 320x480 -r 10 -i :0.0 -tune zerolatency http://localhost:8090/feed1.ffm

    My ffserver.conf file (for ffserver) looks like this:

    Port 8090
    BindAddress 0.0.0.0
    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandwidth 1000
    CustomLog -
    NoDaemon

    <Feed feed1.ffm>
     ACL allow 127.0.0.1
    </Feed>

    <stream>
     Feed feed1.ffm
     Format asf

     NoAudio

     VideoBitRate 128
     VideoBufferSize 400
     VideoFrameRate 24
     VideoSize 320x480

     VideoGopSize 12

     VideoQMin 1
     VideoQMax 31

     VideoCodec libx264
    </stream>

    <stream>
     Format status
    </stream>

    And the full output is:

    ffmpeg version N-45614-g364c60b Copyright (c) 2000-2012 the FFmpeg developers
     built on Oct 17 2012 04:34:04 with Apple clang version 4.1 (tags/Apple/clang-421.11.65) (based on LLVM 3.1svn)
     configuration: --enable-shared --enable-libx264 --enable-libmp3lame --enable-x11grab --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --cc=/usr/bin/clang --host-cflags='-Os -w -pipe -march=native -Qunused-arguments -mmacosx-version-min=10.7' --extra-cflags='-x objective-c' --extra-ldflags='-framework Foundation -framework Cocoa -framework CoreServices -framework ApplicationServices -lobjc'
     libavutil      51. 76.100 / 51. 76.100
     libavcodec     54. 66.100 / 54. 66.100
     libavformat    54. 32.101 / 54. 32.101
     libavdevice    54.  3.100 / 54.  3.100
     libavfilter     3. 19.103 /  3. 19.103
     libswscale      2.  1.101 /  2.  1.101
     libswresample   0. 16.100 /  0. 16.100
     libpostproc    52.  1.100 / 52.  1.100
    [x11grab @ 0x7f87dc01e200] device: :0.0 -> display: :0.0 x: 0 y: 0 width: 320 height: 480
    [x11grab @ 0x7f87dc01e200] Estimating duration from bitrate, this may be inaccurate
    Input #0, x11grab, from ':0.0':
     Duration: N/A, start: 1350517708.386699, bitrate: 49152 kb/s
       Stream #0:0: Video: rawvideo (BGRA / 0x41524742), bgra, 320x480, 49152 kb/s, 10 tbr, 1000k tbn, 10 tbc
    [tcp @ 0x7f87dc804120] TCP connection to localhost:8090 failed: Connection refused
    [tcp @ 0x7f87dc804b20] TCP connection to localhost:8090 failed: Connection refused
    [libx264 @ 0x7f87dd801000] broken ffmpeg default settings detected
    [libx264 @ 0x7f87dd801000] use an encoding preset (e.g. -vpre medium)
    [libx264 @ 0x7f87dd801000] preset usage: -vpre <speed> -vpre <profile>
    [libx264 @ 0x7f87dd801000] speed presets are listed in x264 --help
    [libx264 @ 0x7f87dd801000] profile is optional; x264 defaults to high
    Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
     Metadata:
       creation_time   : now
       Stream #0:0: Video: h264, yuv420p, 160x128, q=2-31, 128 kb/s, 1000k tbn, 10 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo -> libx264)
    Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

    Any help much appreciated :)

  • Compute PTS and DTS correctly to sync audio and video ffmpeg C++

    14 August 2015, by Kaidul Islam

    I am trying to mux H264-encoded data and G711 PCM data into a mov multimedia container. I am creating an AVPacket from the encoded data, and initially the PTS and DTS values of the video/audio frames are equal to AV_NOPTS_VALUE, so I calculated the DTS from the current time. My code -

    bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
       .....................................
       .....................................
       .....................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);
       int64_t dts = av_gettime();
       dts = av_rescale_q(dts, (AVRational){1, 1000000}, m_pVideoStream->time_base);
       int duration = 90000 / VIDEO_FRAME_RATE;
       if(m_prevVideoDts > 0LL) {
           duration = dts - m_prevVideoDts;
       }
       m_prevVideoDts = dts;

       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = m_currVideoDts;
       m_currVideoDts += duration;
       pkt.duration = duration;
       if(bIFrame) {
           pkt.flags |= AV_PKT_FLAG_KEY;
       }
       pkt.stream_index = m_pVideoStream->index;
       pkt.data = (uint8_t*) pData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);

       if(ret < 0) {
           LogErr("Writing video frame failed.");
           return false;
       }

       Log("Writing video frame done.");

       av_free_packet(&pkt);
       return true;
    }

    bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
       .................................
       .................................
       .................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);

       int64_t dts = av_gettime();
       dts = av_rescale_q(dts, (AVRational){1, 1000000}, (AVRational){1, 90000});
       int duration = AUDIO_STREAM_DURATION; // 20
       if(m_prevAudioDts > 0LL) {
           duration = dts - m_prevAudioDts;
       }
       m_prevAudioDts = dts;
       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = m_currAudioDts;
       m_currAudioDts += duration;
       pkt.duration = duration;

       pkt.stream_index = m_pAudioStream->index;
       pkt.flags |= AV_PKT_FLAG_KEY;
       pkt.data = (uint8_t*) pEncodedData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
       if(ret < 0) {
           LogErr("Writing audio frame failed: %d", ret);
           return false;
       }

       Log("Writing audio frame done.");

       av_free_packet(&pkt);
       return true;
    }

    And I added the streams like this -

    AVStream* AudioVideoRecorder::AddMediaStream(enum AVCodecID codecID) {
       ................................
       .................................  
       pStream = avformat_new_stream(m_pFormatCtx, codec);
       if (!pStream) {
           LogErr("Could not allocate stream.");
           return NULL;
       }
       pStream->id = m_pFormatCtx->nb_streams - 1;
       pCodecCtx = pStream->codec;
       pCodecCtx->codec_id = codecID;

       switch(codec->type) {
       case AVMEDIA_TYPE_VIDEO:
           pCodecCtx->bit_rate = VIDEO_BIT_RATE;
           pCodecCtx->width = PICTURE_WIDTH;
           pCodecCtx->height = PICTURE_HEIGHT;
           pStream->time_base = (AVRational){1, 90000};
           pStream->avg_frame_rate = (AVRational){90000, 1};
           pStream->r_frame_rate = (AVRational){90000, 1}; // though the frame rate is variable and around 15 fps
           pCodecCtx->pix_fmt = STREAM_PIX_FMT;
           m_pVideoStream = pStream;
           break;

       case AVMEDIA_TYPE_AUDIO:
           pCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;
           pCodecCtx->bit_rate = AUDIO_BIT_RATE;
           pCodecCtx->sample_rate = AUDIO_SAMPLE_RATE;
           pCodecCtx->channels = 1;
           m_pAudioStream = pStream;
           break;

       default:
           break;
       }

       /* Some formats want stream headers to be separate. */
       if (m_pOutputFmt->flags & AVFMT_GLOBALHEADER)
           pCodecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;

       return pStream;
    }

    There are several problems with this calculation:

    1. The video is laggy and falls behind the audio, increasingly over time.

    2. Suppose an audio frame arrives (WriteAudio(..)) a little late, say by 3 seconds. The late frame should then start playing with a 3-second delay, but it doesn't: the delayed frame is played back-to-back with the previous one.

    3. Sometimes I recorded for 40 seconds but the file duration is more like 2 minutes; only about 40 seconds of audio/video actually play, the rest of the file contains nothing, and the seek bar jumps to the end right after 40 seconds (tested in VLC).

    EDIT:

    According to Ronald S. Bultje's suggestion, here is what I've understood:

    m_pAudioStream->time_base = (AVRational){1, 9000}; // actually no need to set this, since 9000 is already the default for audio, as you said
    m_pVideoStream->time_base = (AVRational){1, 9000};

    should be set, since both the audio and the video stream are now in the same time-base units.

    And for video:

    ...................
    ...................

    int64_t dts = av_gettime(); // get current time in microseconds
    dts *= 9000;
    dts /= 1000000; // 1 second = 10^6 microseconds
    pkt.pts = AV_NOPTS_VALUE; // is it okay?
    pkt.dts = dts;
    // and no need to set pkt.duration, right?

    And for audio (exactly the same as video, right?):

    ...................
    ...................

    int64_t dts = av_gettime(); // get current time in microseconds
    dts *= 9000;
    dts /= 1000000; // 1 second = 10^6 microseconds
    pkt.pts = AV_NOPTS_VALUE; // is it okay?
    pkt.dts = dts;
    // and no need to set pkt.duration, right?

    And I think they now share the same currDts, right? Please correct me if I am wrong anywhere or am missing anything.

    Also, if I want to use (AVRational){1, frameRate} as the video stream time base and (AVRational){1, sampleRate} as the audio stream time base, what should the correct code look like?
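    Under those per-stream time bases, each stream gets its natural unit: one video tick is one frame, one audio tick is one sample. A hedged sketch of what the DTS bookkeeping could look like in that scheme (next_video_dts, next_audio_dts, and samples_per_packet are illustrative names, not the recorder's actual code):

```cpp
#include <cstdint>

// With video time_base = {1, frame_rate}, one tick is one frame:
// the DTS is just a running frame counter and each packet's duration is 1.
int64_t next_video_dts(int64_t &frame_index) {
    return frame_index++;
}

// With audio time_base = {1, sample_rate}, one tick is one sample:
// the DTS advances by the number of samples carried in each packet
// (e.g. 160 samples per 20 ms packet at 8 kHz G.711).
int64_t next_audio_dts(int64_t &sample_pos, int64_t samples_per_packet) {
    int64_t dts = sample_pos;
    sample_pos += samples_per_packet;
    return dts;
}
```

    One caveat: the muxer expects packet timestamps in the stream's actual time_base, and avformat_write_header() may change the time base you requested, so timestamps computed in these units should still be passed through av_rescale_q against stream->time_base before writing.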

    EDIT 2.0:

    m_pAudioStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};
    m_pVideoStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};

    And

    bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
       ...........................
       ......................
       AVPacket pkt = {0};
       av_init_packet(&pkt);

       int64_t dts = av_gettime() / 1000; // convert to milliseconds
       dts = dts * VIDEO_FRAME_RATE;
       if(m_dtsOffset < 0) {
           m_dtsOffset = dts;
       }

       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = (dts - m_dtsOffset);

       pkt.stream_index = m_pAudioStream->index;
       pkt.flags |= AV_PKT_FLAG_KEY;
       pkt.data = (uint8_t*) pEncodedData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
       if(ret < 0) {
           LogErr("Writing audio frame failed: %d", ret);
           return false;
       }

       Log("Writing audio frame done.");

       av_free_packet(&pkt);
       return true;
    }

    bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
       ........................................
       .................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);
       int64_t dts = av_gettime() / 1000;
       dts = dts * VIDEO_FRAME_RATE;
       if(m_dtsOffset < 0) {
           m_dtsOffset = dts;
       }
       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = (dts - m_dtsOffset);

       if(bIFrame) {
           pkt.flags |= AV_PKT_FLAG_KEY;
       }
       pkt.stream_index = m_pVideoStream->index;
       pkt.data = (uint8_t*) pData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);

       if(ret < 0) {
           LogErr("Writing video frame failed.");
           return false;
       }

       Log("Writing video frame done.");

       av_free_packet(&pkt);
       return true;
    }

    Is the last change okay? The video and audio seem synced. The only problem is that the audio plays without any delay even when a packet arrives late.
    Like -

    packet arrival: 1 2 3 4 ... (then the next frame arrives after 3 sec) ... 5

    audio played: 1 2 3 4 (no delay) 5

    EDIT 3.0:

    Zeroed audio sample data:

    AVFrame* pSilentData;
    pSilentData = av_frame_alloc();
    memset(&pSilentData->data[0], 0, iDataSize);

    pkt.data = (uint8_t*) pSilentData;
    pkt.size = iDataSize;

    av_freep(&pSilentData->data[0]);
    av_frame_free(&pSilentData);

    Is this okay? But after writing this into the file container, there is a dot-dot noise during playback. What's the problem?

    EDIT 4.0:

    Well, for µ-law audio the zero value is represented as 0xff. So -

    memset(&pSilentData->data[0], 0xff, iDataSize);

    solves my problem.
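    That detail generalizes: for 16-bit linear PCM, silence is the sample value 0, but G.711 µ-law encodes zero amplitude as the byte 0xFF, so a silent µ-law buffer must be filled with 0xFF, which is exactly what the memset above does. A minimal standalone sketch (the helper name mulaw_silence is illustrative):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Build a buffer of G.711 mu-law silence. The mu-law code for zero
// amplitude is 0xFF; a buffer of 0x00 bytes instead decodes to
// maximum-magnitude samples, which is the noise heard earlier.
std::vector<uint8_t> mulaw_silence(std::size_t nbytes) {
    std::vector<uint8_t> buf(nbytes);
    std::memset(buf.data(), 0xFF, nbytes);
    return buf;
}
```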