Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (67)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    For a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into other languages, allowing it to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

On other sites (6379)

  • How can I use my exe in a new Process() call?

    24 February 2017, by looksgoodhoss

    I am working on a project where I create a 10-second sample from a video. To do this, I am using FFMPEG. I would like the user to upload their own video, which will then be sampled. The processing is done in an Azure worker role, and that is where my problem lies.

    If I execute the following command (excuse the absolute paths, they’re my next problem) in Command Prompt, the sampling completes successfully.

    ffmpeg -t 10 -i C:\Users\looksgoodhoss\Documents\Videos\video.mp4 -map_metadata 0 -acodec copy C:\Users\looksgoodhoss\Documents\Videos\vid.mp4 -y

    I am trying to bring this command into my Visual Studio project via a new Process() call. video.mp4 and vid.mp4 are placeholder names used while I test and work out the bug.

    bool success = false;
    string EXEArguments = @"ffmpeg -t 10 -i C:\Users\looksgoodhoss\Documents\Videos\video.mp4 -map_metadata 0 -acodec copy C:\Users\looksgoodhoss\Documents\Videos\vid.mp4 -y";
    string EXEPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot") + @"\", @"approot\ffmpeg.exe");

    try
    {
        Process proc = new Process();
        //proc.StartInfo.FileName = @"C:\FFMPEG\bin\ffmpeg";
        proc.StartInfo.FileName = EXEPath;
        proc.StartInfo.Arguments = EXEArguments;
        proc.StartInfo.CreateNoWindow = true;
        proc.StartInfo.UseShellExecute = false;
        proc.StartInfo.ErrorDialog = false;

        Trace.TraceInformation("FFMPEG completed."); // is shown in log

        proc.Start();
        proc.WaitForExit();
        success = true;
    }
    catch (Exception e)
    {
        throw;
    }
    return success;

    The message "FFMPEG completed" is shown in the Compute Emulator UI, so I know this block of code is executing; however, no sample video is created despite the command being the same.

    Am I executing FFMPEG incorrectly in my Visual Studio project? I think this is the problem, because the same command runs successfully in Command Prompt.
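
    For comparison, ffmpeg writes its errors and progress to stderr, so capturing that stream usually shows why no output appears. Below is a minimal sketch of the same invocation, in Python rather than C# (the ffmpeg path is hypothetical), passing the program name separately from its argument list:

    import subprocess

    # The argument list starts after the program itself; note that the
    # C# Arguments string above repeats "ffmpeg" as its first token, so
    # ffmpeg receives it as a (bogus) first argument.
    args = [r"C:\FFMPEG\bin\ffmpeg.exe",          # hypothetical local ffmpeg path
            "-t", "10",                           # take the first 10 seconds
            "-i", r"C:\Users\looksgoodhoss\Documents\Videos\video.mp4",
            "-map_metadata", "0",
            "-acodec", "copy",
            r"C:\Users\looksgoodhoss\Documents\Videos\vid.mp4",
            "-y"]

    result = subprocess.run(args, capture_output=True, text=True)
    print(result.stderr)                          # ffmpeg logs progress and errors here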

    Any help or advice would be greatly appreciated,

    Thanks.

  • Write audio packet to file using ffmpeg

    27 February 2017, by iamyz

    I am trying to write audio packets to a file using ffmpeg. The source device sends a packet at a fixed interval, e.g.:

    The first packet has a timestamp of 00:00:00
    The second packet has a timestamp of 00:00:00.5000000
    The third packet has a timestamp of 00:00:01
    And so on...

    That is, two packets per second.

    I want to encode those packets and write them to a file.

    I am referring to the FFmpeg example from the link Muxing.c

    There is no error while encoding and writing, but the output file has only 2 seconds of audio and plays back far too fast.

    The video frames are correct according to the settings.

    I think the problem is related to the calculation of the packet’s pts, dts, and duration.

    How should I calculate proper values for pts, dts, and duration? Or is the problem related to something else?

    Code:

    void AudioWriter::WriteAudioChunk(IntPtr chunk, int length, TimeSpan timestamp)
    {
        int buffer_size = av_samples_get_buffer_size(NULL, outputStream->tmp_frame->channels, outputStream->tmp_frame->nb_samples, outputStream->AudioStream->codec->sample_fmt, 0);

        uint8_t *audioData = reinterpret_cast<uint8_t *>(chunk.ToPointer());
        int ret = avcodec_fill_audio_frame(outputStream->tmp_frame, outputStream->Channels, outputStream->AudioStream->codec->sample_fmt, audioData, buffer_size, 1);

        if (ret < 0)
            throw gcnew System::IO::IOException("An audio file was not opened yet.");

        write_audio_frame(outputStream->FormatContext, outputStream, audioData);
    }


    static int write_audio_frame(AVFormatContext *oc, AudioWriterData^ ost, uint8_t *audioData)
    {
        AVCodecContext *c;
        AVPacket pkt = { 0 };
        int ret;
        int got_packet;
        int dst_nb_samples;

        av_init_packet(&pkt);
        c = ost->AudioStream->codec;

        AVFrame *frame = ost->tmp_frame;

        if (frame)
        {
            dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples, c->sample_rate, c->sample_rate, AV_ROUND_UP);
            if (dst_nb_samples != frame->nb_samples)
                throw gcnew Exception("dst_nb_samples != frame->nb_samples");

            ret = av_frame_make_writable(ost->AudioFrame);
            if (ret < 0)
                throw gcnew Exception("Unable to make writable.");

            ret = swr_convert(ost->swr_ctx, ost->AudioFrame->data, dst_nb_samples, (const uint8_t **)frame->data, frame->nb_samples);
            if (ret < 0)
                throw gcnew Exception("Unable to convert to destination format.");

            frame = ost->AudioFrame;

            AVRational timebase = { 1, c->sample_rate };
            frame->pts = av_rescale_q(ost->samples_count, timebase, c->time_base);
            ost->samples_count += dst_nb_samples;
        }

        ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
        if (ret < 0)
            throw gcnew Exception("Error encoding audio frame.");

        if (got_packet)
        {
            ret = write_frame(oc, &c->time_base, ost->AudioStream, &pkt);
            if (ret < 0)
                throw gcnew Exception("Audio is not written.");
        }
        else
            throw gcnew Exception("Audio packet encode failed.");

        return (ost->AudioFrame || got_packet) ? 0 : 1;
    }

    static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
    {
        av_packet_rescale_ts(pkt, *time_base, st->time_base);
        pkt->stream_index = st->index;
        return av_interleaved_write_frame(fmt_ctx, pkt);
    }
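
    For reference, in the muxing.c pattern the audio pts is simply the running sample count expressed in the {1, sample_rate} time base, and a packet’s duration is its nb_samples. A small arithmetic sketch (Python, with an assumed 44100 Hz sample rate and the two-packets-per-second cadence described above) shows what the timestamps should come out to:

    from fractions import Fraction

    SAMPLE_RATE = 44100                    # assumption: codec sample rate
    TIME_BASE = Fraction(1, SAMPLE_RATE)   # audio time base {1, sample_rate}
    NB_SAMPLES = SAMPLE_RATE // 2          # 0.5 s of audio per incoming packet

    samples_count = 0
    for packet_index in range(4):
        pts = samples_count                # what av_rescale_q produces in this base
        print('packet %d: pts=%d duration=%d (%.1f s)'
              % (packet_index, pts, NB_SAMPLES, float(pts * TIME_BASE)))
        samples_count += NB_SAMPLES

    # If samples_count advances by fewer samples than each packet really
    # contains (say a fixed 1024-sample frame for 0.5 s of audio), the pts
    # timeline comes out shorter than real time and playback is fast,
    # which is consistent with the symptom described above.
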
  • How to fetch both live video frame and timestamp from ffmpeg to python on Windows

    6 March 2017, by vijiboy

    Searching for an alternative, since OpenCV would not provide timestamps for a live camera stream (on Windows), which are required in my computer vision algorithm, I found ffmpeg and this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
    The solution uses ffmpeg, accessing its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.

    While working up the Python code on Windows, I received the video frames from ffmpeg’s stdout, but stderr freezes after delivering the showinfo video filter details (timestamp) for the first frame.

    I recall seeing somewhere on an ffmpeg forum that video filters like showinfo are bypassed when output is redirected. Is this why the following code does not work as expected?

    Expected: it should write video frames to disk as well as print timestamp details.
    Actual: it writes the video files but does not get the timestamp (showinfo) details.

    Here’s the code I tried:

    import subprocess as sp
    import numpy
    import cv2

    command = [ 'ffmpeg',
               '-i', 'e:\sample.wmv',
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo',
               '-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
               '-an','-sn', #-an, -sn disables audio and sub-title processing respectively
               '-f', 'image2pipe', '-'] # we need to output to a pipe

    pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???

    for i in range(10):
       raw_image = pipe.stdout.read(1280*720*3)
       img_info = pipe.stderr.read(244) # 244 characters is the current output of showinfo video filter
       print "showinfo output", img_info
       image1 =  numpy.fromstring(raw_image, dtype='uint8')
       image2 = image1.reshape((720,1280,3))  

       # write video frame to file just to verify
       videoFrameName = 'Video_Frame{0}.png'.format(i)
       cv2.imwrite(videoFrameName,image2)

       # throw away the data in the pipe's buffer.
       pipe.stdout.flush()
       pipe.stderr.flush()

    So how can I still get the frame timestamps from ffmpeg into the Python code so that they can be used in my computer vision algorithm?
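
    One common pattern to avoid the frozen read, offered here only as a sketch, is to drain stderr line by line on a separate thread instead of reading a fixed 244 bytes, and pick out the pts_time field that showinfo logs for every frame. In Python 3 (the code above is Python 2), keeping the same 1280x720 RGB frame size:

    import subprocess as sp
    import threading

    def drain_stderr(stderr):
        # showinfo logs one line per frame on stderr; reading line by
        # line avoids blocking on an arbitrary fixed byte count.
        for line in iter(stderr.readline, b''):
            if b'pts_time' in line:
                print('timestamp line:', line.decode(errors='replace').strip())

    command = ['ffmpeg',
               '-i', r'e:\sample.wmv',
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo',
               '-vf', 'showinfo',        # frame timestamps go to stderr
               '-an', '-sn',
               '-f', 'image2pipe', '-']

    pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)

    reader = threading.Thread(target=drain_stderr, args=(pipe.stderr,))
    reader.daemon = True
    reader.start()

    frame_size = 1280 * 720 * 3
    for i in range(10):
        raw_image = pipe.stdout.read(frame_size)   # one raw RGB24 frame
        if len(raw_image) < frame_size:
            break                                  # stream ended early

    pipe.stdout.close()
    pipe.wait()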