
Other articles (46)
-
Sites built with MediaSPIP
2 May 2011. This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
Submit improvements and additional plugins
10 April 2011. If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
You can use the development mailing list to announce it or to ask for help with writing this plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...) -
Authorizations overridden by plugins
27 April 2010. Mediaspip core:
autoriser_auteur_modifier() so that visitors can edit their own information on the authors page
On other sites (9001)
-
FFMPEG H264 with custom overlay per frame
4 October 2020, by La bla bla
We have a stream that is stored in the cloud (Amazon S3) as individual H264 frames. The frames are stored as framexxxxxx.264; the numbering doesn't start from 0 but rather from some larger number, say 1000 (so frame001000.264).

The goal is to create an MP4 clip that is either a timelapse or simply much faster to review for inspection and other checking (compressing around 3 hours of video down to under 20 minutes). This also requires that we overlay the frame number (the filename) on the frame itself.


At first I was creating a timelapse by pulling only the keyframes (I-frames? still rather new to codecs and such) from S3, overlaying the filename on them, and saving them as PNG (which probably isn't needed, but that's what I did), using the following command from inside a Python script:


ffmpeg -y -i {h264_name} -vf "scale=1920:-1, 
drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:fontsize=34:text={txt}:fontcolor=white:x=50:y=50:bordercolor=black:borderw=2" 
-c:a copy -pix_fmt yuv420p {basename}.png
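
For context, a minimal sketch of how such a per-frame call might be wired up from Python; the h264_name, txt and basename placeholders come from the command above, while the function and the loop at the end are assumptions added for illustration:

import subprocess
from pathlib import Path

def overlay_frame(h264_name):
    # The overlaid text is simply the source filename, e.g. "frame001000.264".
    txt = Path(h264_name).name
    basename = Path(h264_name).stem
    vf = ("scale=1920:-1, "
          "drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:"
          "fontsize=34:text=" + txt + ":fontcolor=white:x=50:y=50:bordercolor=black:borderw=2")
    subprocess.run(
        ["ffmpeg", "-y", "-i", h264_name, "-vf", vf,
         "-c:a", "copy", "-pix_fmt", "yuv420p", basename + ".png"],
        check=True)

# e.g. for every keyframe file pulled from S3:
# for name in downloaded_keyframes:
#     overlay_frame(name)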



After this I combined all the frames: I used Python to rename the lowest-numbered frame to 0.png and incremented from there (so the sequence would be continuous; because I only used keyframes, the numbers originally weren't sequential), and then ran:

ffmpeg -y -f image2 -i %d.png -r {self.params.fps} -vcodec libx264 -crf {self.params.crf} -pix_fmt yuv420p {out_file}
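
A rough sketch of that renumbering step, assuming the overlaid PNGs sit in the working directory and keep the original frame number in their name; the glob pattern and the use of shutil are illustrative guesses, since the post only says Python was used to make the numbering continuous:

import re
import shutil
from pathlib import Path

# Copy the overlaid keyframe PNGs to 0.png, 1.png, 2.png, ... in frame-number
# order, so the image2 demuxer sees a continuous sequence.
pngs = sorted(Path(".").glob("frame*.png"),
              key=lambda p: int(re.search(r"\d+", p.stem).group()))
for i, src in enumerate(pngs):
    shutil.copy(str(src), "{}.png".format(i))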



This worked great, but the gap between keyframes was too long to allow for proper inspection.


So now for the question(s):


Since frames that are not keyframes (P-frames?) can't be decoded on their own by ffmpeg, the method of overlaying the file name and converting to PNG (or keeping it as H264, same thing) won't work, or at least I couldn't find a way to make it work. Maybe there's a way to specify a frame's keyframe? And how can one overlay the filename (and not the frame number, as shown here for example)?


Also, is it possible to skip some P-frames between the keyframes? (So if there is a keyframe every 30 frames, we would take a keyframe, then a frame 15 frames later, and then the next keyframe.)
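
To make the intended sampling concrete, a small sketch that keeps every Nth source file by the number in its name; whether a standalone P-frame selected this way can actually be decoded without its reference frames is exactly the open question:

import re

def sample_frames(filenames, step=15):
    # Sort the .264 files by the number embedded in their name and keep
    # every `step`-th one, e.g. a keyframe, the frame 15 later, the next
    # keyframe, and so on (assuming a keyframe interval of 30).
    ordered = sorted(filenames, key=lambda n: int(re.search(r"\d+", n).group()))
    return ordered[::step]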


I thought about using ffmpeg's pipe option to feed it the files as they're being downloaded, but I'm not sure whether I can specify drawtext that way.
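
For reference, the pipe variant might look roughly like the sketch below, with raw H264 written to ffmpeg's stdin as files arrive; note that the drawtext text is set once for the whole invocation, which is precisely the per-frame-filename problem (the file list and the fixed overlay text are illustrative):

import subprocess

frame_files = ["frame001000.264", "frame001001.264"]  # illustrative names

proc = subprocess.Popen(
    ["ffmpeg", "-y", "-f", "h264", "-i", "-",
     "-vf", "scale=1920:-1, "
            "drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:"
            "fontsize=34:text=fixed_text:fontcolor=white:x=50:y=50",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "out.mp4"],
    stdin=subprocess.PIPE)
for name in frame_files:
    with open(name, "rb") as f:
        proc.stdin.write(f.read())  # stream each file into ffmpeg as it is downloaded
proc.stdin.close()
proc.wait()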


Also, is there another alternative that can achieve this? (At first I was converting to PNG, using Python and OpenCV to add the filename, and then merging the PNGs into an MP4, but then I found that drawtext can do that in a single command, so I used it.)


-
AWS: Best way to generate a thumbnail for every frame of an S3-uploaded video
4 January 2018, by danielfranca
I need to process a video file, transcode it, and generate a thumbnail for every frame.
It should happen every time there's a new video in a specific AWS bucket.
I found out that AWS Lambda should be the best service for that.
However, it is not working as expected, and I'll explain why.
I've created a simple Python 2.7 file using FFVideo (it seems that this library doesn't support Python 3). It is a nice abstraction on top of ffmpeg.
To deploy the package I ran ldd on the FFVideo shared object, copied everything to my project directory as described in their documentation, zipped it, and uploaded it to AWS Lambda. Yet it doesn't work: I keep getting errors as if /usr/lib64/libstdc++ is missing, even after copying it to the project dir; I also tried /usr/lib64 and /lib64.
Then, as a second thought, I wondered if just running ffmpeg directly wouldn't be easier... So I copied ffmpeg to the project dir and wrote a simple Python script to call it. Missing shared objects? OK, ldd again and copied everything to the directory. Then AWS Lambda seems to be completely broken: I can't save the function anymore and it just says "Fix errors before saving".
But there is no error message, nothing. I even attempted to write some simple code inline, but now AWS Lambda won't even open the online editor. I also tried removing all the shared objects I had added, returning to the original state, but I still get the same generic error. The same thing happens if I create a new Lambda function with the same old code; no matter what I do, it never even enables the Save button anymore.
I thought it might just be some AWS instability, but it's been a while. I've looked at a similar project using Node and it doesn't seem to include anything except ffmpeg. My other idea is to use SQS to trigger a Python script somewhere else to create the thumbnails.
Any idea what the best approach for that is?
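
For what it's worth, a minimal sketch of the "bundled ffmpeg binary" approach described above; the S3 event handling, the ./ffmpeg path, the scale filter, and the thumbnail naming are assumptions for illustration, not a tested deployment:

import os
import subprocess

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by an S3 "object created" event for a newly uploaded video.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    local_video = "/tmp/input-video"
    s3.download_file(bucket, key, local_video)

    # Call the ffmpeg binary bundled in the deployment package to dump
    # every frame of the video as a small JPEG thumbnail.
    subprocess.check_call(["./ffmpeg", "-y", "-i", local_video,
                           "-vf", "scale=320:-1", "/tmp/thumb-%06d.jpg"])

    # Upload the generated thumbnails next to the source object.
    for name in sorted(os.listdir("/tmp")):
        if name.startswith("thumb-"):
            s3.upload_file(os.path.join("/tmp", name), bucket,
                           "{}-thumbs/{}".format(key, name))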
-
PipedInputStream / PipedOutputStream, ImageIO and ffmpeg
19 April 2015, by jdevelop
I have the following code in Scala:
val pos = new PipedOutputStream()
val pis = new PipedInputStream(pos)
Future {
LOG.trace("Start rendering")
generateFrames(videoRenderParams.length) {
img ⇒ ImageIO.write(img, "PNG", pos)
}
pos.flush()
IOUtils.closeQuietly(pos)
LOG.trace("Finished rendering")
} onComplete {
case Success(_) ⇒
LOG.trace("Complete successfully")
case Failure(err) ⇒
LOG.error("Can't render stuff", err)
IOUtils.closeQuietly(pis)
IOUtils.closeQuietly(pos)
}
val prc = (ffmpegCli #< pis).!(logger)
The Future simply writes the generated images one by one to the OutputStream. Meanwhile, the ffmpeg process reads the input images from stdin and converts them to an MP4 file.
That works pretty well, but for some reason I sometimes get the following stack traces:
I/O error Pipe closed for process: <input stream="stream" />
java.io.IOException: Pipe closed
at java.io.PipedInputStream.checkStateForReceive(PipedInputStream.java:260)
at java.io.PipedInputStream.receive(PipedInputStream.java:226)
at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
at scala.sys.process.BasicIO$.loop$1(BasicIO.scala:236)
at scala.sys.process.BasicIO$.transferFullyImpl(BasicIO.scala:242)
at scala.sys.process.BasicIO$.transferFully(BasicIO.scala:223)
at scala.sys.process.ProcessImpl$PipeThread.runloop(ProcessImpl.scala:159)
at scala.sys.process.ProcessImpl$PipeSource.run(ProcessImpl.scala:179)
At the same time I'm getting the following error from another stream:
javax.imageio.IIOException: I/O error writing PNG file!
at com.sun.imageio.plugins.png.PNGImageWriter.write(PNGImageWriter.java:1168)
at javax.imageio.ImageWriter.write(ImageWriter.java:615)
at javax.imageio.ImageIO.doWrite(ImageIO.java:1612)
at javax.imageio.ImageIO.write(ImageIO.java:1578)
at
So it seems that the streams were broken somewhere in between, so ffmpeg cannot read the data and ImageIO cannot write the data.
What is even more interesting: the problem is reproducible only on a certain Linux server (Amazon). It works flawlessly on other Linux boxes. So I wonder if somebody could point me to the possible causes of this error.
What I've tried so far:
- use Oracle JDK 8 and OpenJDK
- use different versions of FFMPEG
Nothing has worked so far.