
Other articles (53)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Use it, talk about it, criticize it

    10 April 2011

    The first thing to do is to talk about it, either directly with the people involved in its development, or with those around you, to convince new people to use it.
    The larger the community, the faster the project will evolve ...
    A mailing list is available for all exchanges between users.

  • Final creation of the channel

    12 March 2010, by

    Once your request has been validated, you can proceed with the actual creation of the channel. Each channel is a full-fledged site placed under your responsibility. The platform administrators have no access to it.
    Upon validation, you receive an email inviting you to create your channel.
    To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
    At that point you are asked for a password; you simply have to (...)

On other sites (10685)

  • How to fetch both live video frame and its timestamp from ffmpeg on Windows

    22 February 2017, by vijiboy

    Looking for an alternative because OpenCV does not provide timestamps for a live camera stream on Windows, which my computer vision algorithm requires, I found ffmpeg and this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
    The solution uses ffmpeg, accessing its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.

    Working up the Python code on Windows, I received the video frames from ffmpeg's stdout, but stderr freezes after delivering the showinfo video filter details (timestamp) for the first frame.

    I recall seeing somewhere on an ffmpeg forum that video filters like showinfo are bypassed when output is redirected. Is this why the following code does not work as expected?

    Expected: it should write video frames to disk and print the timestamp details.
    Actual: it writes the video frames but does not get the timestamp (showinfo) details.

    Here’s the code I tried:

    import subprocess as sp
    import numpy
    import cv2

    command = ['ffmpeg',
               '-i', r'e:\sample.wmv',
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo',
               '-vf', 'showinfo',  # video filter - showinfo logs frame timestamps to stderr
               '-an', '-sn',       # -an, -sn disable audio and subtitle processing respectively
               '-f', 'image2pipe', '-']  # we need to output to a pipe

    pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)  # TODO someone on an ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???

    for i in range(10):
        raw_image = pipe.stdout.read(1280 * 720 * 3)
        img_info = pipe.stderr.read(244)  # 244 characters is the current output of the showinfo video filter
        print("showinfo output", img_info)
        image1 = numpy.frombuffer(raw_image, dtype='uint8')
        image2 = image1.reshape((720, 1280, 3))

        # write the video frame to a file just to verify
        videoFrameName = 'Video_Frame{0}.png'.format(i)
        cv2.imwrite(videoFrameName, image2)

        # throw away the data in the pipe's buffer
        pipe.stdout.flush()
        pipe.stderr.flush()

    So how can I still get the frame timestamps from ffmpeg into the Python code so that they can be used in my computer vision algorithm?
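    A common explanation for this kind of freeze is not the filter being bypassed, but the stderr pipe filling up while only stdout is drained, which blocks both streams. One way around it is to read stderr on a background thread and parse showinfo lines as they arrive. The sketch below assumes showinfo's usual `n: ... pts_time: ...` line format; the helper names are illustrative, not part of any API.

    ```python
    # Sketch: drain ffmpeg's stderr on a worker thread so the pipe never fills
    # up, parsing showinfo timestamps into a queue as they arrive.
    import re
    import threading
    import queue

    SHOWINFO_RE = re.compile(r"n:\s*(\d+).*?pts_time:\s*([0-9.]+)")

    def parse_showinfo_line(line):
        """Extract (frame_number, pts_time) from a showinfo stderr line, or None."""
        m = SHOWINFO_RE.search(line)
        if m is None:
            return None
        return int(m.group(1)), float(m.group(2))

    def drain_stderr(stderr, timestamps):
        """Read stderr line by line on a worker thread, queueing parsed timestamps."""
        for raw in iter(stderr.readline, b""):
            parsed = parse_showinfo_line(raw.decode("utf-8", errors="replace"))
            if parsed is not None:
                timestamps.put(parsed)
    ```

    With the pipe from the question already created, this would be wired up as `timestamps = queue.Queue()` followed by `threading.Thread(target=drain_stderr, args=(pipe.stderr, timestamps), daemon=True).start()`; the main loop then reads frames from `pipe.stdout` as before and calls `timestamps.get()` to pair each frame with its showinfo timestamp.
    
    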

  • How to fetch both live video frame and timestamp from ffmpeg to python on Windows

    6 March 2017, by vijiboy


  • Write audio packet to file using ffmpeg

    27 February 2017, by iamyz

    I am trying to write audio packets to a file using ffmpeg. The source device sends a packet at a fixed interval, e.g.

    The first packet has a timestamp of 00:00:00
    The second packet has a timestamp of 00:00:00.5000000
    The third packet has a timestamp of 00:00:01
    And so on...

    That is, two packets per second.

    I want to encode those packets and write them to a file.

    I am referring to the FFmpeg example Muxing.c.

    While encoding and writing there is no error, but the output file has only 2 seconds of audio and playback is also far too fast.

    The video frames are correct according to the settings.

    I think the problem is related to the calculation of the pts, dts and duration of the packets.

    How should I calculate proper values for pts, dts and duration? Or is this problem related to something else?

    Code:

    void AudioWriter::WriteAudioChunk(IntPtr chunk, int length, TimeSpan timestamp)
    {
       int buffer_size = av_samples_get_buffer_size(NULL, outputStream->tmp_frame->channels, outputStream->tmp_frame->nb_samples, outputStream->AudioStream->codec->sample_fmt, 0);

       uint8_t *audioData = reinterpret_cast<uint8_t*>(chunk.ToPointer());
       int ret = avcodec_fill_audio_frame(outputStream->tmp_frame, outputStream->Channels, outputStream->AudioStream->codec->sample_fmt, audioData, buffer_size, 1);

       if (ret < 0)
          throw gcnew System::IO::IOException("An audio file was not opened yet.");

       write_audio_frame(outputStream->FormatContext, outputStream, audioData);
    }


    static int write_audio_frame(AVFormatContext *oc, AudioWriterData^ ost, uint8_t *audioData)
    {
          AVCodecContext *c;
          AVPacket pkt = { 0 };
          int ret;
          int got_packet;
          int dst_nb_samples;

          av_init_packet(&pkt);
          c = ost->AudioStream->codec;

          AVFrame *frame = ost->tmp_frame;

         if (frame)
         {
             dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples, c->sample_rate, c->sample_rate, AV_ROUND_UP);
             if (dst_nb_samples != frame->nb_samples)
               throw gcnew Exception("dst_nb_samples != frame->nb_samples");

             ret = av_frame_make_writable(ost->AudioFrame);
             if (ret < 0)
                throw gcnew Exception("Unable to make writable.");

             ret = swr_convert(ost->swr_ctx, ost->AudioFrame->data, dst_nb_samples, (const uint8_t **)frame->data, frame->nb_samples);
             if (ret < 0)
               throw gcnew Exception("Unable to convert to destination format.");

             frame = ost->AudioFrame;

             AVRational timebase = { 1, c->sample_rate };
             frame->pts = av_rescale_q(ost->samples_count, timebase, c->time_base);
             ost->samples_count += dst_nb_samples;
         }

         ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
         if (ret < 0)
           throw gcnew Exception("Error encoding audio frame.");

         if (got_packet)
         {
           ret = write_frame(oc, &c->time_base, ost->AudioStream, &pkt);
           if (ret < 0)
               throw gcnew Exception("Audio is not written.");
         }
         else
            throw gcnew Exception("Audio packet encode failed.");

         return (ost->AudioFrame || got_packet) ? 0 : 1;
    }

    static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
    {
       av_packet_rescale_ts(pkt, *time_base, st->time_base);
       pkt->stream_index = st->index;
       return av_interleaved_write_frame(fmt_ctx, pkt);
    }