
Media (91)

Other articles (42)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-select fields. See the following two images for a comparison.
    To use it, simply activate the Chosen plugin (Site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into other languages, which allows it to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (3804)

  • FFmpeg only receives part of the data from the pipe

    4 July 2017, by Maxim Fedorov

    First of all, my English is not very good; I'm sorry for that.

    I use ffmpeg from C# to convert images to video. To interact with ffmpeg, I use pipes.

    public async Task<byte[]> ExecuteCommand(
           string arguments,
           Action<NamedPipeServerStream> sendDataUsingPipe)
       {
           var inStream = new NamedPipeServerStream(
               "from_ffmpeg",
               PipeDirection.In,
               1,
               PipeTransmissionMode.Byte,
               PipeOptions.Asynchronous,
               PipeBufferSize,
               PipeBufferSize);

           var outStream = new NamedPipeServerStream(
               "to_ffmpeg",
               PipeDirection.Out,
               1,
               PipeTransmissionMode.Byte,
               PipeOptions.Asynchronous,
               PipeBufferSize,
               PipeBufferSize);

           var waitInConnectionTask = inStream.WaitForConnectionAsync();
           var waitOutConnectionTask = outStream.WaitForConnectionAsync();

           byte[] byteData;

           using (inStream)
           using (outStream)
           using (var inStreamReader = new StreamReader(inStream))
           using (var process = new Process())
           {
               process.StartInfo = new ProcessStartInfo
               {
                   RedirectStandardOutput = true,
                   RedirectStandardError = true,
                   RedirectStandardInput = true,
                   FileName = PathToFfmpeg,
                   Arguments = arguments,
                   UseShellExecute = false,
                   CreateNoWindow = true
               };

               process.Start();

               await waitOutConnectionTask;

               sendDataUsingPipe.Invoke(outStream);

               outStream.Disconnect();
               outStream.Close();

               await waitInConnectionTask;

               var logTask = Task.Run(() => process.StandardError.ReadToEnd());
               var dataBuf = ReadAll(inStream);

               var shouldBeEmpty = inStreamReader.ReadToEnd();
               if (!string.IsNullOrEmpty(shouldBeEmpty))
                   throw new Exception();

               var processExitTask = Task.Run(() => process.WaitForExit());
               await Task.WhenAny(logTask, processExitTask);
               var log = logTask.Result;

               byteData = dataBuf;

               process.Close();
               inStream.Disconnect();
               inStream.Close();
           }

           return byteData;
       }

    Action "sendDataUsingPipe" looks like

    Action<NamedPipeServerStream> sendDataUsingPipe = stream =>
           {
               foreach (var imageBytes in data)
               {
                   using (var image = Image.FromStream(new MemoryStream(imageBytes)))
                   {
                       image.Save(stream, ImageFormat.Jpeg);
                   }
               }
           };

    When I send 10/20/30 images (regardless of their size), ffmpeg processes everything.
    But when I need to transfer 600/700/... images, the ffmpeg log shows that it received only 189-192 of them, and the resulting video likewise contains only 189-192 images.
    There are no errors in the logs or exceptions in the code.

    What could be the reason for this behavior?
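
    One thing that may be worth ruling out (a hypothesis, not something established in the post): NamedPipeServerStream.Disconnect() returns without waiting for the client to read bytes that are still in the pipe, so calling it immediately after sendDataUsingPipe returns could drop the tail of the stream if ffmpeg is still catching up. A minimal sketch of a teardown that waits for the pipe to drain first (the helper name is hypothetical):

       using System;
       using System.IO.Pipes;

       static class PipeTeardown
       {
           // Hypothetical helper: write everything, then block until the client
           // (ffmpeg) has read every byte before the server end disconnects, so
           // nothing still sitting in the pipe can be discarded.
           public static void SendAndDrain(NamedPipeServerStream outStream,
                                           Action<NamedPipeServerStream> sendDataUsingPipe)
           {
               sendDataUsingPipe(outStream);
               outStream.WaitForPipeDrain();   // returns once the other end has read all written bytes
               outStream.Disconnect();
               outStream.Close();
           }
       }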

  • FFmpeg API: How to clear the real-time buffer?

    29 November 2018, by user67

    Here's the C++ code that I'm using to access my webcam.

    int Camera::Init(char* file_name,
                   char* device_name,
                   char* format,
                   char* resolution,
                   char* frame_rate,
                   char* pixel_format)
    {
       av_log(NULL, AV_LOG_INFO, "---INIT STARTED\n");
       avdevice_register_all();
       av_register_all();

       AVDictionary* properties_collection = NULL;
       av_dict_set(&properties_collection, "f", format, NULL);
       av_dict_set(&properties_collection, "video_size", resolution, NULL);
       av_dict_set(&properties_collection, "framerate", frame_rate, NULL);
       av_dict_set(&properties_collection, "pix_fmt", pixel_format, NULL);
       AVInputFormat *input_format = av_find_input_format("dshow");
       char command_line[256];
       sprintf(command_line, "video=%s", device_name);
       AVFormatContext *input_context = avformat_alloc_context();
       //input_context->flags |= AVFMT_FLAG_NOBUFFER;      //DOESN'T HELP
       //input_context->max_picture_buffer = 0;            //ERR

       int err_code = 0;
       err_code = avformat_open_input(&input_context,
                                       command_line,
                                       input_format,
                                       &properties_collection);
       int i = 0;
       while (i++ < 30)
       {
           Sleep(1000);
           //avformat_flush(input_context); //DOESN'T HELP
           //av_free(input_context); //ERR
       }
       system("pause");
       return 0;
    }

    Right after "avformat_open_input()" it starts reading frames into some internal buffer, without me even calling "av_read_frame()".
    After about 10 seconds it starts giving me error messages:

    [dshow @ 0014ed40] real-time buffer [VirtualBox Webcam - FULL HD 1080P Webcam] [video input]
    too full or near too full (62% of size: 3041280 [rtbufsize parameter])!
    frame dropped!
    ...
    ...
    ...
    [dshow @ 0014ed40] real-time buffer [VirtualBox Webcam - FULL HD 1080P Webcam] [video input]
    too full or near too full (100% of size: 3041280 [rtbufsize parameter])!
    frame dropped!

    How can I clear this buffer or avoid using it?

    Thanks in advance.

    P.S. Please pardon my English.

    P.P.S. Have a good day.
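
    The dshow demuxer starts filling its real-time buffer as soon as the input is opened, so packets have to be consumed (or the buffer enlarged via the rtbufsize option, passed through the same options dictionary before avformat_open_input()) to stop the warnings. A minimal sketch of a drain loop, under the assumption that unwanted packets can simply be discarded (the function name and stop flag are hypothetical):

       extern "C" {
       #include <libavformat/avformat.h>
       }

       // Hypothetical drain loop: keep pulling packets from the opened dshow
       // input so its real-time buffer never fills up; packets that are not
       // needed are released immediately.
       void DrainRealtimeInput(AVFormatContext* input_context, volatile bool& keep_running)
       {
           AVPacket* packet = av_packet_alloc();
           while (keep_running && av_read_frame(input_context, packet) >= 0)
           {
               // ... hand the packet to a decoder/encoder here if it is wanted ...
               av_packet_unref(packet);   // always release the payload, wanted or not
           }
           av_packet_free(&packet);
       }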

  • ffmpeg: two videos side-by-side with audio1 lang=ger & audio2 lang=eng

    19 January 2023, by miridigital

    I have two videos from two GoPro cameras. Both videos are rendered side by side via ffmpeg into a single video (left and right), and the audio is currently combined/mixed into two channels (stereo), so I can hear both camera audio channels at the same time.

    Two channels, stereo:

    ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "[0:v][1:v]hstack=inputs=2[v]; [0:a][1:a]amerge[a]" -map "[v]" -map "[a]" -ac 2 side-by-side.mp4

    audio from both videos mixed into a single file with 2 channels - mediainfo

    Now I want to switch between the two audio channels (cam1 or cam2) while I'm playing the side-by-side video.

    My first try: with four channels (without the -ac 2 parameter):

    ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "[0:v][1:v]hstack=inputs=2[v]; [0:a][1:a]amerge[a]" -map "[v]" -map "[a]" side-by-side.mp4

    audio from both videos mixed into a single file with 4 channels - mediainfo

    But most video players can't easily select channels 1+2 or 3+4 while playing.

    So I tried two languages:

    ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "[0:v][1:v]hstack=inputs=2[v]; [0:a][1:a]amerge[a]" -map "[v]" -map "[a]" -metadata:s:a:0 language=ger -metadata:s:a:1 language=eng side-by-side.mp4

    audio from both videos mixed into a single file with 4 channels with languages - mediainfo

    But that's wrong. I can only see German with 4 channels. How can I put channels 1+2 into German and channels 3+4 into English? Afterwards I should be able to use the multi-language feature of most video players to switch the audio between cameras.

    Thank you,
    Miriam
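
    For reference, a minimal sketch of one way to get two separately selectable audio tracks (an assumption about the intended result, not taken from the post): keep the hstack for the video but drop amerge, mapping each input's audio as its own stream and tagging each stream with a language:

       ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "[0:v][1:v]hstack=inputs=2[v]" -map "[v]" -map 0:a -map 1:a -metadata:s:a:0 language=ger -metadata:s:a:1 language=eng side-by-side.mp4

    A player with a multi-language audio menu should then list a German track (camera 1 audio) and an English track (camera 2 audio) that can be switched while the side-by-side video plays.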