
Other articles (86)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Customising the categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as the equivalent of a section (rubrique).
    For a document of type "category", the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type "media", the fields not displayed by default are: Quick description
    It is also in this configuration section that you can specify the (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and in MP3 (supported by Flash).
    Where possible, text is extracted so that it can be picked up by search engines, and the document is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (10452)

  • PipedInputStream / PipedOutputStream, ImageIO and ffmpeg

    19 April 2015, by jdevelop

    I have the following code in Scala:

         // Imports needed to make the snippet self-contained (IOUtils comes from
         // Apache Commons IO; LOG, generateFrames, videoRenderParams, ffmpegCli
         // and logger are defined elsewhere in the project).
         import java.io.{PipedInputStream, PipedOutputStream}
         import javax.imageio.ImageIO
         import org.apache.commons.io.IOUtils
         import scala.concurrent.Future
         import scala.concurrent.ExecutionContext.Implicits.global
         import scala.sys.process._
         import scala.util.{Failure, Success}

         // The rendering Future writes PNG frames into pos; ffmpeg reads them
         // from the connected pis via its stdin.
         val pos = new PipedOutputStream()
         val pis = new PipedInputStream(pos)

         Future {
           LOG.trace("Start rendering")
           generateFrames(videoRenderParams.length) {
             img ⇒ ImageIO.write(img, "PNG", pos)
           }
           pos.flush()
           IOUtils.closeQuietly(pos)
           LOG.trace("Finished rendering")
         } onComplete {
           case Success(_) ⇒
             LOG.trace("Complete successfully")
           case Failure(err) ⇒
             LOG.error("Can't render stuff", err)
             IOUtils.closeQuietly(pis)
             IOUtils.closeQuietly(pos)
         }

         // Pipe the generated frames into ffmpeg's stdin and run it.
         val prc = (ffmpegCli #< pis).!(logger)

    The Future simply writes the generated images one by one to the OutputStream, while the ffmpeg process reads the images from stdin and converts them into an MP4 file.

    That works pretty well, but for some reason I sometimes get the following stack traces:

    I/O error Pipe closed for process: <input stream>
    java.io.IOException: Pipe closed
       at java.io.PipedInputStream.checkStateForReceive(PipedInputStream.java:260)
       at java.io.PipedInputStream.receive(PipedInputStream.java:226)
       at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
       at scala.sys.process.BasicIO$.loop$1(BasicIO.scala:236)
       at scala.sys.process.BasicIO$.transferFullyImpl(BasicIO.scala:242)
       at scala.sys.process.BasicIO$.transferFully(BasicIO.scala:223)
       at scala.sys.process.ProcessImpl$PipeThread.runloop(ProcessImpl.scala:159)
       at scala.sys.process.ProcessImpl$PipeSource.run(ProcessImpl.scala:179)

    At the same time I'm getting the following error from the other stream:

    javax.imageio.IIOException: I/O error writing PNG file!
       at com.sun.imageio.plugins.png.PNGImageWriter.write(PNGImageWriter.java:1168)
       at javax.imageio.ImageWriter.write(ImageWriter.java:615)
       at javax.imageio.ImageIO.doWrite(ImageIO.java:1612)
       at javax.imageio.ImageIO.write(ImageIO.java:1578)
       at

    So it seems that the pipe was broken somewhere in between, so ffmpeg cannot read the data and ImageIO cannot write it.

    What is even more interesting: the problem is reproducible only on a certain Linux server (Amazon). It works flawlessly on other Linux boxes. So I wonder if somebody could point me to the possible causes of this error.

    What I've tried so far:

    • use Oracle JDK 8 and OpenJDK
    • use different versions of FFMPEG

    Nothing has worked so far.

  • AWS: Best way to generate a thumbnail for every frame of an S3-uploaded video

    4 January 2018, by danielfranca

    I need to process a video file, transcode it and generate a thumbnail for every frame.

    It should happen every time there’s a new video on a specific AWS bucket.

    I found out that AWS Lambda should be the best service for that.

    However, it is not working as expected, and I'll explain why.

    I've created a simple Python 2.7 file using FFVideo.
    It seems that this library doesn't support Python 3.

    It is a nice abstraction on top of ffmpeg.

    To deploy the package I ran ldd on the FFVideo shared object, then copied everything to my project directory, as described in their documentation.
    I zipped it and uploaded it to AWS Lambda.

    Yet it doesn't work: I keep getting errors as if /usr/lib64/libstdc++ were missing, even after copying it to the project dir; I also tried /usr/lib64 and /lib64.

    Then, as a second thought, I wondered whether just running ffmpeg wouldn't be easier...
    So I just copied ffmpeg to the project dir and wrote a simple Python script to call it (a rough sketch of that idea is shown below).
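
    A minimal sketch of that kind of wrapper, assuming an S3-triggered handler and an ffmpeg binary bundled inside the deployment zip (the handler name, paths and thumbnail prefix are illustrative, not the actual script):

         # Hypothetical Lambda handler: download the newly uploaded video from S3
         # and call a bundled ffmpeg binary to dump one thumbnail per frame.
         # All names and paths below are illustrative assumptions.
         import os
         import subprocess
         import boto3

         s3 = boto3.client("s3")
         # ffmpeg static binary shipped inside the deployment package
         FFMPEG = os.path.join(os.path.dirname(os.path.abspath(__file__)), "ffmpeg")

         def handler(event, context):
             record = event["Records"][0]["s3"]
             bucket = record["bucket"]["name"]
             key = record["object"]["key"]

             src = os.path.join("/tmp", os.path.basename(key))
             s3.download_file(bucket, key, src)

             # One PNG per frame; /tmp is the only writable path in Lambda and is
             # limited to 512 MB, which is already tight for long videos.
             subprocess.check_call([FFMPEG, "-i", src,
                                    "-vf", "scale=320:-1",
                                    "/tmp/thumb_%06d.png"])

             for name in sorted(os.listdir("/tmp")):
                 if name.startswith("thumb_"):
                     s3.upload_file(os.path.join("/tmp", name), bucket,
                                    "thumbnails/%s/%s" % (key, name))

    Whether something like this fits within Lambda's time and /tmp limits for a full-length video is exactly the open question here.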

    Missing shared objects: OK, ldd again, and I copied everything to the directory.

    Then AWS Lambda seems to be completely broken: I can't save it anymore and it just says "Fix errors before saving".
    But there is no error message, nothing.

    I even attempted to write some simple code inline, but now AWS Lambda won't even open the online editor.
    I also tried to remove all the shared objects I had added, returning to the original state, but I still get the same generic error.
    Same thing if I just create a new Lambda function with the same old code.

    No matter what I do, it never even enables the Save button anymore.
    I thought it might just be some AWS instability, but it has been a while.

    I've looked at a similar project using Node,
    and it doesn't seem to include anything except ffmpeg.

    My other idea is to use SQS to trigger a Python script somewhere else to create the thumbnails.

    Any idea what the best approach for that would be?

  • FFMPEG H264 with custom overlay per frame

    4 October 2020, by La bla bla

    We have a stream that is stored in the cloud (Amazon S3) as individual H264 frames. The frames are stored as framexxxxxx.264; the numbering doesn't start from 0 but rather from some larger number, say 1000 (so, frame001000.264).

    The goal is to create an MP4 clip which is either a timelapse or just faster for inspection and other checking (much faster, compressing around 3 hours of video down to under 20 minutes); this also requires that we overlay the frame number (the filename) on the frame itself.

    At first I was creating a timelapse by pulling only the keyframes (I-frames? still rather new to codecs & stuff) from S3, overlaying the filename on them and saving them as PNG (which probably isn't needed, but that's what I did), using the following (this command is used inside a Python script):

     ffmpeg -y -i {h264_name} -vf \"scale=1920:-1, drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:fontsize=34:text={txt}:fontcolor=white:x=50:y=50:bordercolor=black:borderw=2\" -c:a copy -pix_fmt yuv420p {basename}.png

    After this I combined all the frames by using Python to rename the lowest-numbered frame to 0.png and incrementing from there (so the sequence would be continuous; because I only used keyframes, the numbers originally weren't sequential; a sketch of that renumbering step follows the command below) and running

     ffmpeg -y -f image2 -i %d.png -r {self.params.fps} -vcodec libx264 -crf {self.params.crf} -pix_fmt yuv420p {out_file}
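
    A minimal sketch of that renumbering step, assuming the PNGs sit in the working directory and keep their original large frame numbers in their names (so the new small names cannot collide with existing files):

         # Hypothetical renumbering: map the sparse keyframe numbers onto a dense
         # 0..N-1 sequence so that ffmpeg's image2 demuxer can consume %d.png.
         import glob
         import os
         import re

         pngs = sorted(glob.glob("frame*.png"),
                       key=lambda p: int(re.search(r"(\d+)", p).group(1)))
         for new_index, path in enumerate(pngs):
             os.rename(path, "%d.png" % new_index)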

    And this worked great, but the gap between keyframes was too long to allow for proper inspection.

    So now for the question(s):

    Since I know frames that are not keyframes (P-frames?) can't be used alone by ffmpeg, the method of overlaying the file name and converting the frame to PNG (or keeping it as H264, same thing) won't work; or at least, I couldn't find a way for it to work (maybe there's a way to specify a frame's keyframe?). How can one overlay the filename (and not the frame number, as shown here for example)?

    Also, is it possible to skip some P-frames between the keyframes? (So if there is a keyframe every 30 frames, we would take a keyframe, then a frame 15 frames later, and then the next keyframe.)

    I thought about using ffmpeg's pipe option to feed it the files as they're being downloaded, but I'm not sure whether I can specify drawtext this way (a rough sketch of that idea is below).
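
    For what it's worth, a rough sketch of that pipe idea, concatenating the raw .264 files into ffmpeg's stdin (the filenames and drawtext parameters are assumptions; note that drawtext can only expand ffmpeg's own counters such as %{n} here, not the original filenames, which is exactly the limitation in question):

         # Hypothetical pipe variant: stream the raw H264 frames into ffmpeg's
         # stdin as they become available and burn in a frame counter via drawtext.
         import glob
         import subprocess

         proc = subprocess.Popen(
             ["ffmpeg", "-y", "-f", "h264", "-i", "pipe:0",
              "-vf", "drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:"
                     "fontsize=34:text=%{n}:fontcolor=white:x=50:y=50",
              "-c:v", "libx264", "-pix_fmt", "yuv420p", "out.mp4"],
             stdin=subprocess.PIPE)

         for path in sorted(glob.glob("frame*.264")):
             with open(path, "rb") as f:
                 proc.stdin.write(f.read())  # raw Annex B NAL units, back to back

         proc.stdin.close()
         proc.wait()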

    Also, is there another alternative that can achieve this? (At first I was converting to PNG, using Python and OpenCV to add the filename, and then merging the PNGs into an MP4; but then I found that drawtext can do that in a single command, so I used it.)
