Media (91)

Other articles (104)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (11147)

  • How to get output from ffmpeg process in C#

    13 July 2018, by Anirudha Gupta

    In the code I wrote in WPF, I run some FFmpeg filters. If I run the command in a terminal (PowerShell or cmd prompt), it prints information line by line about what’s going on.

    I am calling the process from C# code and it works fine. The problem is that I am not able to get any output from the process I run.

    I have tried some answers from Stack Overflow for the FFmpeg process. I see two options in my code: I can either fix it with a Timer approach, or hook an event to OutputDataReceived.

    I tried the OutputDataReceived event, but I never got it to work. I tried the Timer approach, but it still does not hit my code. Please check the code below.

           _process = new Process
           {
               StartInfo = new ProcessStartInfo
               {
                   FileName = ffmpeg,
                   Arguments = arguments,
                   UseShellExecute = false,
                   RedirectStandardOutput = true,
                   RedirectStandardError = true,
                   CreateNoWindow = true,
               },
               EnableRaisingEvents = true
           };

           // FFmpeg writes its progress log to standard error, not standard output,
           // so the error stream is the one that carries the line-by-line information.
           _process.OutputDataReceived += Proc_OutputDataReceived;
           _process.ErrorDataReceived += Proc_OutputDataReceived;

           _process.Exited += (a, b) =>
           {
               System.Threading.Tasks.Task.Run(async () =>
               {
                   // Await the delay, otherwise the file is deleted immediately.
                   await System.Threading.Tasks.Task.Delay(5000);
                   System.IO.File.Delete(newName);
               });
           };

           _process.Start();

           // The DataReceived events only fire once asynchronous reading is started.
           _process.BeginErrorReadLine();

           _timer = new Timer();
           _timer.Interval = 500;
           _timer.Tick += Timer_Tick;
           _timer.Start();
       }


       private void Timer_Tick(object sender, EventArgs e)
       {
           // FFmpeg's log lines arrive on StandardError (read asynchronously above);
           // StandardOutput is usually empty, so this loop only drains whatever
           // does get written there.
           while (!_process.StandardOutput.EndOfStream)
           {
               string line = _process.StandardOutput.ReadLine();
           }
           // Check the process.
       }
  • compat/w32pthreads: change pthread_t into pointer to malloced struct

    12 December 2024, by Anton Khirnov

    pthread_t is currently defined as a struct, which gets placed into
    caller's memory and filled by pthread_create() (which accepts a
    pthread_t*).

    The problem with this approach is that pthread_join() accepts pthread_t
    itself rather than a pointer to it, so it gets a _copy_ of this
    structure. This causes non-deterministic failures of pthread_join() to
    produce the correct return value - depending on whether the thread
    already finished before pthread_join() is called (and thus the copy
    contains the correct value), or not (then it contains 0).

    Change the definition of pthread_t into a pointer to a struct, that gets
    malloced by pthread_create() and freed by pthread_join().

    Fixes random failures of fate-ffmpeg-error-rate-fail on Windows after
    433cf391f58210432be907d817654929a66e80ba.

    See also [1] for an alternative approach that does not require dynamic
    allocation, but relies on an assumption that the pthread_t value
    remains in a fixed memory location.

    [1] https://code.videolan.org/videolan/x264/-/commit/23829dd2b2c909855481f46cc884b3c25d92c2d7

    Reviewed-By: Martin Storsjö <martin@martin.st>

    • [DH] compat/w32pthreads.h
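
    As a minimal illustration of the problem described in this commit message (this is not the FFmpeg code, just a self-contained toy in plain C): passing a struct by value snapshots it at call time, so a result stored later is never seen, while passing a pointer to shared state — which is what the new typedef achieves — always reads the live value:

       #include <stdio.h>

       /* Toy stand-in for the old wrapper: the struct lives in the caller's
        * memory and the "worker" writes its result into it when it finishes. */
       typedef struct {
           int ret;                        /* filled in by the worker on exit */
       } fake_thread_t;

       /* Old shape: join receives the struct BY VALUE, i.e. a snapshot taken
        * at call time; anything the worker stores afterwards is invisible.   */
       static int join_by_value(fake_thread_t t)          { return t.ret; }

       /* New shape: join receives a pointer to the live state, so it always
        * reads the value the worker actually stored.                         */
       static int join_by_pointer(const fake_thread_t *t) { return t->ret; }

       int main(void)
       {
           fake_thread_t state    = { 0 };
           fake_thread_t snapshot = state;   /* copy made before the worker ends  */

           state.ret = 42;                   /* the worker finishes and stores 42 */

           printf("by value:   %d\n", join_by_value(snapshot));  /* stale: 0    */
           printf("by pointer: %d\n", join_by_pointer(&state));  /* correct: 42 */
           return 0;
       }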
  • FFMPEG - C API - GIF creation

    6 March 2020, by Vuwox

    I have an image-processing pipeline, and I have images in memory that I convert into AVFrames and from which I try to create a GIF.

    I started from this topic and just replaced the video-decoder part with a conversion of my in-memory image to an AVFrame.

    This works pretty well, but I have an issue with the GIF frame rate.

    In the init_filters(...) method, I don’t understand the time_base and pixel_aspect variables of the argument structure:

       snprintf(args, sizeof(args),
            "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
            width, height, in_fmt, time_base.num, time_base.den,
            pixel_aspect.num, pixel_aspect.den);

    I would like FPS = 12, so should I define them as follows?

    AVRational time_base = AVRational{1, 12};
    AVRational pixel_aspect= AVRational{1, 1};

    Next, in the loop where I feed frames to the filter buffer (for palettegen), there is also an option I don’t understand: what does AVFrame->pts refer to?

       // palettegen needs a whole stream, so just add the frame to the buffer.
       ret = av_buffersrc_add_frame_flags(buffersrc_ctx, picture_rgb24, AV_BUFFERSRC_FLAG_KEEP_REF);
       if (ret < 0) {
           av_log(nullptr, AV_LOG_ERROR, "error add frame to buffer source %s\n", av_make_error_string(msg_v2, MSG_LEN, ret));
       }

       picture_rgb24->pts += 1; // HERE

    As far as I understand, it’s supposed to be the timestamp of the frame. In my case, for a GIF, should I increase it by 1 every time, or by 1000 ms / 12 frames = 83.33 ms? I’m not sure; I tried to find the information but had no luck so far.

    There is also the init_muxer(...) method where it’s possible to set the time_base of the output (GIF):

       o_codec_ctx->time_base = AVRational{1, 12};

    So I get a bit confused by all the places where the frame rate has to be set.

    Right now, the GIF is generated correctly (with the palette) in memory using the FFmpeg C API; the only problem is that it plays far too fast, not at the intended frame rate.
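
    A minimal sketch of one consistent convention for the 12 fps target discussed above; the names (TIME_BASE, PIXEL_ASPECT, set_frame_timing, frame_index) are illustrative rather than taken from the original code, and the filter, palette and muxer setup is elided:

       #include <stdint.h>
       #include <libavutil/rational.h>
       #include <libavutil/frame.h>

       /* Pick one time base and use it everywhere it is declared: in the
        * buffer-source args ("time_base=1/12"), on the encoder context
        * (o_codec_ctx->time_base) and for the frame timestamps.  With a
        * 1/12 time base, one pts tick is 1/12 s (about 83.3 ms), so
        * incrementing pts by 1 per frame yields 12 fps.  If the time base
        * were 1/1000 instead, the increment would have to be about 83 per
        * frame to reach the same rate.                                     */
       static const AVRational TIME_BASE    = { 1, 12 };  /* 12 fps target */
       static const AVRational PIXEL_ASPECT = { 1, 1 };   /* square pixels */

       static void set_frame_timing(AVFrame *frame, int64_t frame_index)
       {
           /* pts is expressed in TIME_BASE units, so under this convention
            * it is simply the frame index.                                 */
           frame->pts = frame_index;
       }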