
Other articles (71)

  • Customising by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MédiaSPIP, or news about your projects on your MédiaSPIP, using the news section.
    In spipeo, MédiaSPIP's default theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news-item creation form.
    News-item creation form: for a document of type "news item", the fields offered by default are: Publication date (customise the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

On other sites (7045)

  • ffmpeg + ffserver: "Broken ffmpeg default settings detected"

    18 October 2012, by Chris Nolet

    I'm just trying to connect ffmpeg to ffserver and stream rawvideo.

    I keep getting the error "broken ffmpeg default settings detected" from libx264, and then "Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height" from ffmpeg, before it exits.

    I'm launching ffmpeg with the command: ffmpeg -f x11grab -s 320x480 -r 10 -i :0.0 -tune zerolatency http://localhost:8090/feed1.ffm

    My ffserver.conf file looks like this:

    Port 8090
    BindAddress 0.0.0.0
    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandwidth 1000
    CustomLog -
    NoDaemon

    <Feed feed1.ffm>
     ACL allow 127.0.0.1
    </Feed>

    <stream>
     Feed feed1.ffm
     Format asf

     NoAudio

     VideoBitRate 128
     VideoBufferSize 400
     VideoFrameRate 24
     VideoSize 320x480

     VideoGopSize 12

     VideoQMin 1
     VideoQMax 31

     VideoCodec libx264
    </stream>

    <stream>
     Format status
    </stream>

    And the full output is:

    ffmpeg version N-45614-g364c60b Copyright (c) 2000-2012 the FFmpeg developers
     built on Oct 17 2012 04:34:04 with Apple clang version 4.1 (tags/Apple/clang-421.11.65) (based on LLVM 3.1svn)
     configuration: --enable-shared --enable-libx264 --enable-libmp3lame --enable-x11grab --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --cc=/usr/bin/clang --host-cflags='-Os -w -pipe -march=native -Qunused-arguments -mmacosx-version-min=10.7' --extra-cflags='-x objective-c' --extra-ldflags='-framework Foundation -framework Cocoa -framework CoreServices -framework ApplicationServices -lobjc'
     libavutil      51. 76.100 / 51. 76.100
     libavcodec     54. 66.100 / 54. 66.100
     libavformat    54. 32.101 / 54. 32.101
     libavdevice    54.  3.100 / 54.  3.100
     libavfilter     3. 19.103 /  3. 19.103
     libswscale      2.  1.101 /  2.  1.101
     libswresample   0. 16.100 /  0. 16.100
     libpostproc    52.  1.100 / 52.  1.100
    [x11grab @ 0x7f87dc01e200] device: :0.0 -> display: :0.0 x: 0 y: 0 width: 320 height: 480
    [x11grab @ 0x7f87dc01e200] Estimating duration from bitrate, this may be inaccurate
    Input #0, x11grab, from ':0.0':
     Duration: N/A, start: 1350517708.386699, bitrate: 49152 kb/s
       Stream #0:0: Video: rawvideo (BGRA / 0x41524742), bgra, 320x480, 49152 kb/s, 10 tbr, 1000k tbn, 10 tbc
    [tcp @ 0x7f87dc804120] TCP connection to localhost:8090 failed: Connection refused
    [tcp @ 0x7f87dc804b20] TCP connection to localhost:8090 failed: Connection refused
    [libx264 @ 0x7f87dd801000] broken ffmpeg default settings detected
    [libx264 @ 0x7f87dd801000] use an encoding preset (e.g. -vpre medium)
    [libx264 @ 0x7f87dd801000] preset usage: -vpre <speed> -vpre <profile>
    [libx264 @ 0x7f87dd801000] speed presets are listed in x264 --help
    [libx264 @ 0x7f87dd801000] profile is optional; x264 defaults to high
    Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
     Metadata:
       creation_time   : now
       Stream #0:0: Video: h264, yuv420p, 160x128, q=2-31, 128 kb/s, 1000k tbn, 10 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo -> libx264)
    Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

    Any help much appreciated :)

  • Compute PTS and DTS correctly to sync audio and video ffmpeg C++

    14 August 2015, by Kaidul Islam

    I am trying to mux H264-encoded data and G.711 PCM data into a mov multimedia container. I am creating an AVPacket from the encoded data, and initially the PTS and DTS values of the video/audio frames are AV_NOPTS_VALUE, so I calculated the DTS from the current time. My code -

    bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
       .....................................
       .....................................
       .....................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);
       int64_t dts = av_gettime();
       dts = av_rescale_q(dts, (AVRational){1, 1000000}, m_pVideoStream->time_base);
       int duration = 90000 / VIDEO_FRAME_RATE;
       if(m_prevVideoDts > 0LL) {
           duration = dts - m_prevVideoDts;
       }
       m_prevVideoDts = dts;

       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = m_currVideoDts;
       m_currVideoDts += duration;
       pkt.duration = duration;
       if(bIFrame) {
           pkt.flags |= AV_PKT_FLAG_KEY;
       }
       pkt.stream_index = m_pVideoStream->index;
       pkt.data = (uint8_t*) pData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);

       if(ret < 0) {
           LogErr("Writing video frame failed.");
           return false;
       }

       Log("Writing video frame done.");

       av_free_packet(&pkt);
       return true;
    }

    bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
       .................................
       .................................
       .................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);

       int64_t dts = av_gettime();
       dts = av_rescale_q(dts, (AVRational){1, 1000000}, (AVRational){1, 90000});
       int duration = AUDIO_STREAM_DURATION; // 20
       if(m_prevAudioDts > 0LL) {
           duration = dts - m_prevAudioDts;
       }
       m_prevAudioDts = dts;
       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = m_currAudioDts;
       m_currAudioDts += duration;
       pkt.duration = duration;

       pkt.stream_index = m_pAudioStream->index;
       pkt.flags |= AV_PKT_FLAG_KEY;
       pkt.data = (uint8_t*) pEncodedData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
       if(ret < 0) {
           LogErr("Writing audio frame failed: %d", ret);
           return false;
       }

       Log("Writing audio frame done.");

       av_free_packet(&pkt);
       return true;
    }
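    The av_rescale_q calls above convert a wall-clock timestamp (microseconds from av_gettime()) into the stream's time base. A rough sketch of the arithmetic involved follows; rescale_ts is a hypothetical stand-in, not the FFmpeg API, and the real av_rescale_q additionally handles rounding modes and 64-bit overflow:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for av_rescale_q: converts a timestamp expressed in
// ticks of src_num/src_den seconds into ticks of dst_num/dst_den seconds.
// The real FFmpeg function also handles rounding modes and overflow.
int64_t rescale_ts(int64_t ts, int64_t src_num, int64_t src_den,
                   int64_t dst_num, int64_t dst_den) {
    return ts * src_num * dst_den / (src_den * dst_num);
}
```

    With the question's values, the source base is {1, 1000000} (microseconds) and the destination is the stream's time_base such as {1, 90000}, so one second of wall clock becomes 90,000 ticks.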

    And I added the streams like this -

    AVStream* AudioVideoRecorder::AddMediaStream(enum AVCodecID codecID) {
       ................................
       .................................  
       pStream = avformat_new_stream(m_pFormatCtx, codec);
       if (!pStream) {
           LogErr("Could not allocate stream.");
           return NULL;
       }
       pStream->id = m_pFormatCtx->nb_streams - 1;
       pCodecCtx = pStream->codec;
       pCodecCtx->codec_id = codecID;

       switch(codec->type) {
       case AVMEDIA_TYPE_VIDEO:
           pCodecCtx->bit_rate = VIDEO_BIT_RATE;
           pCodecCtx->width = PICTURE_WIDTH;
           pCodecCtx->height = PICTURE_HEIGHT;
           pStream->time_base = (AVRational){1, 90000};
           pStream->avg_frame_rate = (AVRational){90000, 1};
           pStream->r_frame_rate = (AVRational){90000, 1}; // though the frame rate is variable and around 15 fps
           pCodecCtx->pix_fmt = STREAM_PIX_FMT;
           m_pVideoStream = pStream;
           break;

       case AVMEDIA_TYPE_AUDIO:
           pCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;
           pCodecCtx->bit_rate = AUDIO_BIT_RATE;
           pCodecCtx->sample_rate = AUDIO_SAMPLE_RATE;
           pCodecCtx->channels = 1;
           m_pAudioStream = pStream;
           break;

       default:
           break;
       }

       /* Some formats want stream headers to be separate. */
       if (m_pOutputFmt->flags & AVFMT_GLOBALHEADER)
           m_pFormatCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;

       return pStream;
    }

    There are several problems with this calculation:

    1. The video is laggy and falls increasingly behind the audio over time.

    2. Suppose an audio frame is received (WriteAudio(..)) a little late, say by 3 seconds; then the late frame should start playing with a 3-second delay, but it does not. The delayed frame is played back-to-back with the previous frame.

    3. Sometimes I record for 40 seconds, but the file duration is much longer, around 2 minutes; audio/video plays for only about 40 seconds, the rest of the file contains nothing, and the seek bar jumps to the end immediately after 40 seconds (tested in VLC).

    EDIT:

    According to Ronald S. Bultje's suggestion, this is what I've understood:

    m_pAudioStream->time_base = (AVRational){1, 9000}; // actually no need to set this, since 9000 is already the default value for audio, as you said
    m_pVideoStream->time_base = (AVRational){1, 9000};

    should be set, so that both the audio and video streams are in the same time-base units.

    And for video:

    ...................
    ...................

    int64_t dts = av_gettime(); // get current time in microseconds
    dts *= 9000;
    dts /= 1000000; // 1 second = 10^6 microseconds
    pkt.pts = AV_NOPTS_VALUE; // is it okay?
    pkt.dts = dts;
    // and no need to set pkt.duration, right?

    And for audio (exactly the same as video, right?):

    ...................
    ...................

    int64_t dts = av_gettime(); // get current time in microseconds
    dts *= 9000;
    dts /= 1000000; // 1 second = 10^6 microseconds
    pkt.pts = AV_NOPTS_VALUE; // is it okay?
    pkt.dts = dts;
    // and no need to set pkt.duration, right?

    And I think they now share the same current DTS, right? Please correct me if I am wrong anywhere or am missing anything.

    Also, if I want to use the video stream time base as (AVRational){1, frameRate} and the audio stream time base as (AVRational){1, sampleRate}, what should the correct code look like?
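    One way to sketch that setup (illustrative helper names, not FFmpeg API): both streams sample the same microsecond clock, and each converts it into its own time base, which keeps them aligned.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative helpers, assuming a video time base of {1, frame_rate} and an
// audio time base of {1, sample_rate}. Both derive their DTS from the same
// shared microsecond clock, so the two streams stay in sync.
int64_t video_dts(int64_t now_us, int64_t frame_rate) {
    return now_us * frame_rate / 1000000;   // microseconds -> frame ticks
}

int64_t audio_dts(int64_t now_us, int64_t sample_rate) {
    return now_us * sample_rate / 1000000;  // microseconds -> sample ticks
}
```

    For example, at 2 seconds of wall clock a 15 fps video stream is at tick 30, while an 8 kHz audio stream is at tick 16000; both represent the same instant in their own units.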

    EDIT 2.0:

    m_pAudioStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};
    m_pVideoStream->time_base = (AVRational){1, VIDEO_FRAME_RATE};

    And

    bool AudioVideoRecorder::WriteAudio(const unsigned char *pEncodedData, size_t iDataSize) {
       ...........................
       ......................
       AVPacket pkt = {0};
       av_init_packet(&pkt);

       int64_t dts = av_gettime() / 1000; // convert to milliseconds
       dts = dts * VIDEO_FRAME_RATE;
       if(m_dtsOffset < 0) {
           m_dtsOffset = dts;
       }

       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = (dts - m_dtsOffset);

       pkt.stream_index = m_pAudioStream->index;
       pkt.flags |= AV_PKT_FLAG_KEY;
       pkt.data = (uint8_t*) pEncodedData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);
       if(ret < 0) {
           LogErr("Writing audio frame failed: %d", ret);
           return false;
       }

       Log("Writing audio frame done.");

       av_free_packet(&pkt);
       return true;
    }

    bool AudioVideoRecorder::WriteVideo(const unsigned char *pData, size_t iDataSize, bool const bIFrame) {
       ........................................
       .................................
       AVPacket pkt = {0};
       av_init_packet(&pkt);
       int64_t dts = av_gettime() / 1000;
       dts = dts * VIDEO_FRAME_RATE;
       if(m_dtsOffset < 0) {
           m_dtsOffset = dts;
       }
       pkt.pts = AV_NOPTS_VALUE;
       pkt.dts = (dts - m_dtsOffset);

       if(bIFrame) {
           pkt.flags |= AV_PKT_FLAG_KEY;
       }
       pkt.stream_index = m_pVideoStream->index;
       pkt.data = (uint8_t*) pData;
       pkt.size = iDataSize;

       int ret = av_interleaved_write_frame(m_pFormatCtx, &pkt);

       if(ret < 0) {
           LogErr("Writing video frame failed.");
           return false;
       }

       Log("Writing video frame done.");

       av_free_packet(&pkt);
       return true;
    }

    Is the last change okay? The video and audio seem synced. The only problem is that the audio is played without delay, regardless of whether the packet arrived late.
    Like -

    packet arrival : 1 2 3 4... (then next frame arrived after 3 sec) .. 5

    audio played : 1 2 3 4 (no delay) 5

    EDIT 3.0:

    Zeroed audio sample data:

    AVFrame* pSilentData;
    pSilentData = av_frame_alloc();
    memset(&pSilentData->data[0], 0, iDataSize);

    pkt.data = (uint8_t*) pSilentData;
    pkt.size = iDataSize;

    av_freep(&pSilentData->data[0]);
    av_frame_free(&amp;pSilentData);

    Is this okay? But after writing this into the file container, there is a "dot dot" noise when playing the media. What's the problem?

    EDIT 4.0 :

    Well, for µ-law audio the zero value is represented as 0xff. So -

    memset(&pSilentData->data[0], 0xff, iDataSize);

    solves my problem.
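    In other words, a silent G.711 µ-law buffer must be filled with 0xff bytes rather than zeros. A minimal sketch (make_ulaw_silence is a hypothetical helper, not part of the original code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// G.711 mu-law encodes amplitude zero as the byte 0xFF, so "silence" is a
// buffer of 0xFF bytes; a zero-filled buffer decodes as loud noise instead.
std::vector<uint8_t> make_ulaw_silence(size_t n) {
    return std::vector<uint8_t>(n, 0xFF);
}
```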

  • When using ffmpeg to create an mp4 video file from a batch of images, the whole process is very slow. How can I make it faster?

    29 June 2015, by Brubaker Haim

    The whole process is slow, and in the resulting video file the frames also move very slowly during playback.

    ffmpeg -framerate 1/5 -i screenshot%06d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p out2.mp4

    Does that mean 1 frame every 5 seconds?
    So if I make it 5/1, will it be 5 frames per second?
    What would give the best result?
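    For reference, -framerate 1/5 does mean one input frame every five seconds (and 5/1 would mean five input frames per second), so the playback length of the image sequence can be estimated as follows (hypothetical helper, just to make the arithmetic concrete):

```cpp
#include <cassert>

// With -framerate 1/5, each input image lasts 5 seconds, so an input of
// image_count images yields roughly image_count * 5 seconds of video.
int sequence_duration_seconds(int image_count, int seconds_per_image) {
    return image_count * seconds_per_image;
}
```

    So the 70 test images at one frame per 5 seconds would play for roughly 350 seconds.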

    And the second problem: for testing I have 70 images, but the real input has over 1000 images. Is there any way to make this whole process faster?