Other articles (99)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those used by the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (11186)

  • Android ffmpeg save and append h264 streamed videos

    8 October 2012, by Stefan Alexandru

    I need to save a video file generated by two video streams coming from two different sources. I'm using rtsp over tcp/ip, and the videos are encoded with h264.
    I need to first record the video from the first source and then continue with the second source.
    So what I tried was to declare two AVFormatContext instances, initialize both with avformat_open_input(&context, "rtsp://......",NULL,&options),
    then read frames with av_read_frame(context,&packet)
    and write them to the video file with av_write_frame(oc,&packet);
    It works fine while saving the video from the first source, but if, for example, I saved y frames from the first context, then when I try reading and saving the frames from the second context into the same file, for the first y frames I try to save, av_write_frame(oc,&packet2);
    returns -22
    and does not add the frame to the file.

    I think the problem is that the context remembers how many frames were read and gives every packet an identification number, to make sure it isn't written twice. When I switch to a new context those identification numbers reset, but the AVOutputFormat or the AVFormatContext still remembers the id of the packet it expects to receive next, and will not write anything until it gets a packet with that id.
    Now I'm wondering how I could solve this. I can't find any setter for that id, or any way to reuse the same context. I thought about modifying the ffmpeg sources, but they are pretty complex and I couldn't find what I was looking for.
    An alternative would be to save the two videos in two different files, but I don't know how to append them afterwards, as ffmpeg can only append videos encoded with mpeg, and re-encoding isn't really an option, as it would take too much time. I also couldn't find any other working way to append two mp4 videos encoded with h264.

    I'll be happy to hear any usable idea for this problem.
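
    For context, -22 is AVERROR(EINVAL), which the muxer typically returns when a packet's dts/pts is not higher than what it has already written; packets from a fresh input context start their timestamps over again, so the second stream's first frames look like they go backwards in time. Below is a minimal sketch of the usual workaround, assuming a single H.264 video stream, an output context oc whose header has already been written, and plain stream copy with no re-encoding; the append_input helper and next_ts variable are illustrative names, not from the post.

    #include <libavformat/avformat.h>

    /* Append every packet of one input context to an already-open output,
     * shifting timestamps so they continue after the previous segment.
     * *next_ts is where this segment should start, expressed in the output
     * stream's time base; on return it holds the offset for the next segment. */
    static int append_input(AVFormatContext *ic, AVFormatContext *oc, int64_t *next_ts)
    {
        AVPacket *pkt = av_packet_alloc();
        int64_t end_ts = *next_ts;
        int ret;

        if (!pkt)
            return AVERROR(ENOMEM);

        while ((ret = av_read_frame(ic, pkt)) >= 0) {
            AVStream *ist = ic->streams[pkt->stream_index];
            AVStream *ost = oc->streams[0];   /* single video stream assumed */

            /* Convert the packet timestamps into the output time base ... */
            av_packet_rescale_ts(pkt, ist->time_base, ost->time_base);

            /* ... then shift them past the last packet already written. */
            if (pkt->pts != AV_NOPTS_VALUE) pkt->pts += *next_ts;
            if (pkt->dts != AV_NOPTS_VALUE) pkt->dts += *next_ts;
            pkt->stream_index = 0;

            if (pkt->dts != AV_NOPTS_VALUE)
                end_ts = pkt->dts + (pkt->duration > 0 ? pkt->duration : 1);

            /* av_write_frame() does not take ownership of the packet. */
            ret = av_write_frame(oc, pkt);
            av_packet_unref(pkt);
            if (ret < 0)
                break;
        }

        av_packet_free(&pkt);
        if (ret == AVERROR_EOF)
            ret = 0;
        *next_ts = end_ts;
        return ret;
    }

    Calling this once per input, with next_ts starting at 0, keeps the output timestamps monotonic; it only works as a stream copy if both sources use the same H.264 parameters. Saving two intermediate files and joining them with ffmpeg's concat demuxer is the other route commonly used for h264 without re-encoding.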

  • Save taken snapshot

    8 October 2014, by User056

    I have done everything as in this question. Everything is all right except one thing: I can't save the taken snapshot. If I step through it in the debugger everything looks fine. What's wrong?

    public class FFMPEG
    {
       Process ffmpeg;
       public void exec(string input, string parametri, string output)
       {
           ffmpeg = new Process();

           ffmpeg.StartInfo.Arguments = " -i " + input + (parametri != null ? " " + parametri : "") + " " + output;
           ffmpeg.StartInfo.FileName = HttpContext.Current.Server.MapPath("~/ffmpeg.exe");
           ffmpeg.StartInfo.UseShellExecute = false;
           ffmpeg.StartInfo.RedirectStandardOutput = true;
           ffmpeg.StartInfo.RedirectStandardError = true;
           ffmpeg.StartInfo.CreateNoWindow = true;

           ffmpeg.Start();
           ffmpeg.WaitForExit();
           ffmpeg.Close();
       }

       public void GetThumbnail(string video, string jpg, string velicina)
       {
           if (velicina == null) velicina = "640x480";
           exec(video, "-ss 00:00:06 " + velicina, jpg);
       }

    }


    FFMPEG f = new FFMPEG();
    f.GetThumbnail(Server.MapPath("~/Uploads/" + unique),
                   Server.MapPath("~/Thumbnails/" + unique.Remove(unique.IndexOf(".")) + ".jpg"),
                   "1200x223");
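
    A likely culprit: GetThumbnail builds the argument string as "-ss 00:00:06 1200x223", so the size is passed as a bare token with no -s in front of it, and no frame count is requested, which leaves ffmpeg without a valid single-image command. A minimal sketch of the kind of call that usually works, assuming the same exec wrapper as above (the -s, -vframes and -y flags are standard ffmpeg CLI options; quoting the paths is an addition in case they contain spaces):

       public void GetThumbnail(string video, string jpg, string velicina)
       {
           if (velicina == null) velicina = "640x480";

           // -ss seeks to 6 s, -s sets the frame size, -vframes 1 writes exactly
           // one image, -y overwrites an existing file; paths are quoted.
           exec("\"" + video + "\"",
                "-ss 00:00:06 -s " + velicina + " -vframes 1 -y",
                "\"" + jpg + "\"");
       }

    Note also that StandardOutput and StandardError are redirected but never read; ffmpeg logs to stderr, so on longer inputs the pipe can fill up and WaitForExit() may hang. Either read ffmpeg.StandardError before waiting or drop the redirection.
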
  • x86/tx_float : save a branch during coefficient deinterleaving

    9 August 2022, by Lynne

    Directly branch into the special 64-point deinterleave
    subroutine rather than going through the general deinterleave.

    64-point transform timings on Zen 3:
    Before:
    1974 decicycles in av_tx (fft), 16776864 runs, 352 skips
    After:
    1956 decicycles in av_tx (fft), 16775378 runs, 1838 skips

    • [DH] libavutil/x86/tx_float.asm