Other articles (83)

  • What is a form mask

    13 June 2013

    A form mask is the customization of the publication form for media, sections, news items, editorials, and links to external sites.
    Each object's publication form can therefore be customized.
    To access the customization of form fields, go to your MediaSPIP administration area and select "Configuration des masques de formulaires".
    Then select the form to modify by clicking on its object type. (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable MediaSPIP release.
    It was officially released on June 21, 2013, as announced here.
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • MediaSPIP Player: the controls

    26 May 2010

    The player's mouse controls
    In addition to clicking the visible buttons of the player interface, other actions can be performed with the mouse: Click: clicking on the video or on the sound logo toggles it between play and pause depending on its current state; Scroll wheel: when the mouse hovers over the area used by the media, the wheel no longer scrolls the page as usual, but instead decreases or (...)

On other sites (7020)

  • why ffmpeg starts many processes

    29 October 2020, by james

    I have a question about how ffmpeg works. I noticed that after starting the program on a Raspberry Pi 4 with 4 GB of memory, many processes are started. Is this normal, or is something wrong with my program? I remember that on an Amazon EC2 instance, monitoring with htop, only one process was created.

    Code I used:

from subprocess import Popen

# Transcode video1.mp4 to H.264 video / MP3 audio in a new session,
# then block until ffmpeg finishes.
ffmpeg_process = Popen(["ffmpeg", "-hide_banner", "-loglevel", "panic", "-y",
                        "-i", "./video/video1.mp4",
                        "-vcodec", "h264", "-acodec", "mp3",
                        "./video/video2.mp4"],
                       start_new_session=True)

ffmpeg_process.wait()
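
    For what it's worth, ffmpeg is multithreaded by default, and htop lists each worker thread as its own entry unless thread display is turned off, so many entries on the Pi can be normal. A minimal sketch, reusing the paths above, that caps the worker count with ffmpeg's -threads option:

from subprocess import Popen

# "-threads 1" caps ffmpeg's worker threads, so htop's default
# (thread-showing) view should list only a single entry.
ffmpeg_process = Popen(["ffmpeg", "-hide_banner", "-loglevel", "panic", "-y",
                        "-threads", "1",
                        "-i", "./video/video1.mp4",
                        "-vcodec", "h264", "-acodec", "mp3",
                        "./video/video2.mp4"],
                       start_new_session=True)
ffmpeg_process.wait()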

  • FFMPEG H264 with custom overlay per frame

    4 October 2020, by La bla bla

    We have a stream that is stored in the cloud (Amazon S3) as individual H.264 frames. The frames are stored as framexxxxxx.264; the numbering doesn't start from 0 but rather from some larger number, say 1000 (so frame001000.264).

    The goal is to create an mp4 clip that is either a timelapse or just much faster for inspection and other checking (compressing around 3 hours of video down to < 20 minutes). This also requires overlaying the frame number (the filename) on the frame itself.

    At first I was creating a timelapse by pulling from S3 only the keyframes (I-frames? I'm still rather new to codecs and such), overlaying the filename on them, and saving them as png (which probably isn't needed, but that's what I did) using the following command (run from inside a Python script):

    ffmpeg -y -i {h264_name} -vf "scale=1920:-1, drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:fontsize=34:text={txt}:fontcolor=white:x=50:y=50:bordercolor=black:borderw=2" -c:a copy -pix_fmt yuv420p {basename}.png

    After this I combined all the frames by using Python to rename the lowest-numbered frame to 0.png and incrementing from there (so the sequence would be continuous; because I only used keyframes, the numbers originally weren't sequential) and then running:
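
    That renumbering step could look roughly like the sketch below (the glob pattern and destination names are assumptions):

import glob
import os

# Rename the overlaid keyframe images to a gapless 0.png, 1.png, ...
# sequence so the image2 demuxer can read them with "-i %d.png".
for i, name in enumerate(sorted(glob.glob("frame*.png"))):
    os.rename(name, f"{i}.png")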

    ffmpeg -y -f image2 -i %d.png -r {self.params.fps} -vcodec libx264 -crf {self.params.crf} -pix_fmt yuv420p {out_file}

    This worked great, but the gap between keyframes was too long to allow for proper inspection.

    So now for the question(s):

    Since I know frames that are not keyframes (P-frames?) can't be decoded by ffmpeg on their own, the method of overlaying the file name and converting to png (or keeping it as h264, same thing) won't work, or at least I couldn't find a way to make it work; maybe there's a way to specify a frame's keyframe? How can one overlay the filename (and not the frame number, as shown here for example)?
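
    One hedged idea around the keyframe limitation, assuming each .264 file holds a single raw Annex-B access unit so the files can simply be concatenated into one decodable stream: run a single pass over the whole sequence and let drawtext's text expansion print the frame index plus the known starting offset, which reconstructs the filename (the start number, frame rate, and zero padding below are assumptions):

import subprocess

START_NUM = 1000  # number of the first frame file, e.g. frame001000.264 (assumption)

# Raw Annex-B H.264 frames can be concatenated into one decodable stream.
subprocess.run("cat frame*.264 > all.264", shell=True, check=True)

# drawtext's %{eif\:n+1000\:d\:6} expansion prints the frame index plus
# the starting offset as a zero-padded 6-digit integer, i.e. the number
# that was in the original filename.
subprocess.run([
    "ffmpeg", "-y", "-framerate", "30", "-i", "all.264",
    "-vf", ("scale=1920:-1,"
            "drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:"
            f"fontsize=34:text=frame%{{eif\\:n+{START_NUM}\\:d\\:6}}:"
            "fontcolor=white:x=50:y=50:bordercolor=black:borderw=2"),
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "out.mp4"],
    check=True)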

    Also, is it possible to skip some P-frames between the keyframes? (So if there is a keyframe every 30 frames, we would take a keyframe, then a frame 15 frames later, and then the next keyframe.)
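
    Once the stream decodes as a whole, thinning it out does not depend on keyframe boundaries: a sketch with ffmpeg's select filter that keeps every 15th frame (the interval and filenames are assumptions):

import subprocess

# select keeps only frames whose index is a multiple of 15; setpts then
# rewrites timestamps so the survivors play as a contiguous sequence.
subprocess.run([
    "ffmpeg", "-y", "-framerate", "30", "-i", "all.264",
    "-vf", "select='not(mod(n\\,15))',setpts=N/FRAME_RATE/TB",
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "timelapse.mp4"],
    check=True)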

    I thought about using ffmpeg's pipe option to feed it the files as they are being downloaded, but I'm not sure whether I can specify drawtext that way.
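
    For the pipe idea, drawtext does not care where the input comes from, so the same filter should apply while raw H.264 is fed through stdin; a minimal sketch, assuming the frames are written in display order:

import glob
import subprocess

# ffmpeg reads raw H.264 from stdin; drawtext works the same as for files.
proc = subprocess.Popen(
    ["ffmpeg", "-y", "-f", "h264", "-framerate", "30", "-i", "pipe:0",
     "-vf", "drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:"
            "fontsize=34:text=frame%{eif\\:n+1000\\:d\\:6}:fontcolor=white:x=50:y=50",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "piped.mp4"],
    stdin=subprocess.PIPE)

# Write each frame file as it becomes available (here they are already on disk).
for name in sorted(glob.glob("frame*.264")):
    with open(name, "rb") as f:
        proc.stdin.write(f.read())
proc.stdin.close()
proc.wait()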

    Also, if there's another alternative that can achieve that (at first I was converting to png, using python and OpenCV to add the filename and then merging the pngs to mp4, but then I found drawtext can do that in a single command so I used it)

    &#xA;

  • Merging input Streams with nodejs/ffmpeg

    14 September 2020, by jAndy

    I'm creating a very basic and rudimentary video web chat. On the client side, I'm going to use a simple getUserMedia API call to capture the webcam data and send the video data as data blobs to my server.

    From there, I'm planning to either use the fluent-ffmpeg library or just spawn ffmpeg myself and pipe the raw data to it, which in turn does some magic and pushes the result out as an HLS stream to an Amazon AWS service (for instance), which then gets displayed in a web browser for all the people participating in the video chat.

    So far, I think all of this should be fairly easy to implement, but I keep my head spinning around the question of how I can create a "combined" or "merged" frame and stream, so that the HLS output from my server to the distributing cloud service is only one combined data stream.

    If there are 3 people in the video chat, my server receives 3 data streams from those clients and must combine these streams (from the individual webcam data sources) into one output stream.
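
    For the "merged frame" part, ffmpeg can composite several inputs into one output with -filter_complex; a sketch in which three local files stand in for the three client streams and hstack tiles them side by side (Python here is only a stand-in for however the server spawns ffmpeg; all names are assumptions):

import subprocess

# Tile three inputs horizontally into one video and package it as HLS.
subprocess.run([
    "ffmpeg", "-y",
    "-i", "cam1.webm", "-i", "cam2.webm", "-i", "cam3.webm",
    "-filter_complex", "[0:v][1:v][2:v]hstack=inputs=3[v]",
    "-map", "[v]", "-c:v", "libx264", "-f", "hls", "out.m3u8"],
    check=True)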

    How could that be accomplished? Can I "create" a new frame with ffmpeg, so to speak? I would be very thankful if anybody could give me a heads-up here; maybe I'm thinking in a completely wrong direction.

    Another question that arises is whether I can really just "dump" any data I receive as binary blobs created from getUserMedia or MultiStreamRecorder into ffmpeg, or whether I have to specify somewhere, somehow, the exact codecs being used, etc.
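
    On that last point, MediaRecorder output is typically WebM (VP8/VP9 plus Opus) in most browsers, and ffmpeg can often probe a piped container on its own, though declaring the input format explicitly is safer for pipes; a hedged sketch of the receiving side:

import subprocess

# Declare the piped input as WebM instead of relying on probing, then
# transcode to H.264/AAC and emit HLS segments.
proc = subprocess.Popen(
    ["ffmpeg", "-f", "webm", "-i", "pipe:0",
     "-c:v", "libx264", "-c:a", "aac", "-f", "hls", "chat.m3u8"],
    stdin=subprocess.PIPE)

def on_blob(chunk: bytes) -> None:
    """Called for each binary blob received from the client (assumption)."""
    proc.stdin.write(chunk)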