
Other articles (112)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in the standalone version.
    To get a working installation, all the software dependencies must be installed manually on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

On other sites (15282)

  • Why can't my ffmpeg-based code sync video time and audio time?

    6 July 2021, by ZSpirytus

    Background

    Recently, I used ffmpeg to write my first Android video player, but the video clock runs about 2-3 times faster than the audio clock.

    Code

    In short, I use a PacketDispatcher to read AVPackets from the HTTP FLV source:

void PacketDispatcher::RealDispatch() {
    while (GetStatus() != DISPATCHER_STOP) {
        while (GetStatus() == DISPATCHER_PAUSE) {
            LOGD(TAG, "wait signal");
            pthread_mutex_lock(&mutex);
            pthread_cond_wait(&cond, &mutex);
            pthread_mutex_unlock(&mutex);
        }

        AVPacket *av_packet = av_packet_alloc();
        int ret = av_read_frame(av_format_context, av_packet);
        if (ret) {
            LOGE(TAG, "av_read_frame ret=%d", ret);
            av_packet_free(&av_packet);  // avoid leaking the packet on EOF/error
            break;
        }

        // PacketDispatcher reads each AVPacket from the HTTP source
        // and dispatches it to a decoder by stream index.
        decoder_map[av_packet->stream_index]->Push(av_packet);
    }
}


    The Decoder follows a producer-consumer pattern: it maintains a queue of all the AVPackets received from the PacketDispatcher. The code looks like this:

    // write to the queue
void BaseDecoder::Push(AVPacket *av_packet) {
    pthread_mutex_lock(&av_packet_queue_mutex);
    av_packet_queue.push(av_packet);
    pthread_cond_signal(&av_packet_queue_cond);
    pthread_mutex_unlock(&av_packet_queue_mutex);
}

// real decode logic
void BaseDecoder::RealDecode() {
    SetDecoderState(START);
    LOGI(LogSpec(), "start decode");

    while (true) {
        // 1. check the decoder status and the queue size to decide whether to wait

        // 2. send a packet to the codec
        AVPacket *av_packet = av_packet_queue.front();
        int ret = avcodec_send_packet(av_codec_ctx, av_packet);

        // 3. read a frame back from the codec
        AVFrame *av_frame = av_frame_alloc();
        ret = avcodec_receive_frame(av_codec_ctx, av_frame);

        if (m_render) {
            // 4. custom decode logic overridden by the child class
            void *decode_result = DecodeFrame(av_frame);
            if (decode_result) {
                // 5. dispatch to the render
                m_render->Render(decode_result);
            } else {
                LOGD("BaseDecoder", "decode_result=nullptr");
            }
        }
    }
}


    Finally, the rendering happens in Render. It also follows a producer-consumer pattern, maintaining a queue of the frames received from the Decoder. The code looks like this:

    // write AVFrame
void BaseRender::Render(void *frame_data) {
    Lock();
    frame_queue.push(frame_data);
    Signal();
    UnLock();
}

// render to the surface (video) or to OpenSL ES (audio); excerpt from the render loop
void BaseRender::RealRender() {
    // frame_data carries the frame pts and other metadata
    frame_data->pts = av_frame->pts = av_frame->best_effort_timestamp * av_q2d(GetTimeBase());
    // video only
    frame_data->video_extra_delay = av_frame->repeat_pict * 1.0 / fps * 2.0;
    if (m_render_synchronizer && m_render_synchronizer->Sync(frame_data)) {
        continue;  // the synchronizer asked to drop this video frame
    }
}


    The synchronizer then decides whether to sleep or to drop the video frame based on the frame pts, where the pts is:

    frame_data->pts = av_frame->best_effort_timestamp * av_q2d(GetTimeBase());


    Also, the extra video delay is:

    frame_data->video_extra_delay = av_frame->repeat_pict * 1.0 / fps * 2.0;


    The RenderSynchronizer code looks like this:

bool RenderSynchronizer::Sync(void *frame_data) {
    auto base_frame_data = static_cast<BaseFrameData *>(frame_data);
    if (base_frame_data->media_type == AVMEDIA_TYPE_AUDIO) {
        return ReceiveAudioFrame(static_cast<PCMData *>(frame_data));
    } else if (base_frame_data->media_type == AVMEDIA_TYPE_VIDEO) {
        return ReceiveVideoFrame(static_cast<RGBAData *>(frame_data));
    }
    return false;
}

bool RenderSynchronizer::ReceiveAudioFrame(PCMData *pcm_data) {
    audio_pts = pcm_data->pts;
    return false;
}

bool RenderSynchronizer::ReceiveVideoFrame(RGBAData *rgba_data) {
    video_pts = rgba_data->pts;

    if (audio_pts <= 0 || video_pts <= 0) {
        return false;
    }

    double diff = video_pts - audio_pts;
    if (diff > 0) {
        if (diff > 1) {
            av_usleep((unsigned int) (rgba_data->extra_delay * 1000000.0));
        } else {
            av_usleep((unsigned int) ((diff + rgba_data->extra_delay) * 1000000.0));
        }
        return false;
    } else if (diff < 0) {
        LOGD(TAG, "drop video frame");
        return true;
    } else {
        return false;
    }
}

    Why can't my code sync the video time and the audio time? Thanks for reading and answering.

  • python multiprocessing and ffmpeg corrupts files

    1 July 2012, by misterte

    I'm currently trying to convert several videos to three different outputs, all using ffmpeg: mp4, webm and jpeg. I also need to run this script in different directories, creating webm, mp4 and jpeg subdirectories within each directory, where the respective converted files are stored.

    I am running the following script inside a directory with 8 .mov test files in it. The files play fine as .mov.

    (It's a bit long, so here you can view it online)

    The script creates all the files and directories. I can also note that all Consumer processes take tasks and complete them. The problem is that the resulting .mp4 and .webm files are corrupted.

    Here you can see an example output. It's a bit long, so I think it's best if I point out the part I think is relevant.

    ...
    [h264 @ 0x121815a00]no frame!
    Error while decoding stream #0.0
    [h264 @ 0x121815a00]AVC: nal size -6554108
    [h264 @ 0x121815a00]no frame!
    Error while decoding stream #0.0
    [h264 @ 0x121815a00]AVC: nal size 391580264
    [h264 @ 0x121815a00]no frame!
    ...

    This does not happen if I run ffmpeg straight from the console.

    ffmpeg -i movie.mov -b 500k -s 640x360 -padtop 60 -padbottom 60 movie_out.webm

    I can even run it in parallel shells and the output will not be affected.

    Can anyone see what the problem is?

    thnx!

    A.

  • Error recording a video stream with FFmpeg in Node.js (using the ffmpeg-static lib)

    15 December 2020, by Louis Hudson

    When I start recording, the terminal shows me this error and stops recording :/

    I need to record in MP4 format. Is there any config that solves this problem?


    [sdp @ 0x7fbb02009e00] Could not find codec parameters for stream 1 (Video: h264, none): unspecified size
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    Input #0, sdp, from '/Users/user/Documents/4web-server/src/recording/h264.sdp':
      Metadata:
        title           : RTP Youtube
      Duration: N/A, bitrate: N/A
        Stream #0:0: Audio: opus, 48000 Hz, stereo, fltp
        Stream #0:1: Video: h264, none, 90k tbr, 90k tbn, 180k tbc
    [mp4 @ 0x7fbb0101a400]
    dimensions not set
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
    Stream mapping:
      Stream #0:1 -> #0:0 (copy)
        Last message repeated 1 times
    Recording process exit, code: 1, signal: null
    Stop mediasoup RTP transport and consumer
    Recording stopped


    My SDP config for the recording:


    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=RTP Youtube
    c=IN IP4 127.0.0.1
    t=0 0
    m=audio 5004 RTP/AVPF 111
    a=rtcp:5005
    a=rtpmap:111 opus/48000/2
    a=fmtp:111 minptime=10;useinbandfec=1
    m=video 5006 RTP/AVPF 125
    a=rtcp:5007
    a=rtpmap:125 H264/90000
    a=fmtp:125 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f
