
Advanced search
Media (17)
-
Matmos - Action at a Distance
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Danger Mouse & Jemini - What U Sittin’ On? (starring Cee Lo and Tha Alkaholiks)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Cornelius - Wataridori 2
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Rapture - Sister Saviour (Blackstrobe Remix)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Chuck D with Fine Arts Militia - No Meaning No
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (36)
-
Installation in farm mode
4 February 2011
Farm mode lets you host several MediaSPIP-type sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with SPIP's inner workings, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
First of all, you must have installed the same files as the installation (...) -
Automatic backup of SPIP channels
1 April 2010
When setting up an open platform, it is important for hosts to have fairly regular backups available to guard against any potential problem.
To carry out this task we rely on two SPIP plugins: Saveauto, which performs regular backups of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (documents, elements (...) -
Customizing categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be likened to a rubrique (section).
For a document of type category, the fields offered by default are: Texte
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
It is also in this configuration section that you can specify the (...)
On other sites (5014)
-
Fluent ffmpeg low video quality using original file bitrate
18 February 2021, by KopEre

I have downloaded a few sample videos from YouTube in 1080p quality. Now I try to scale them to 1080p, 720p, and 480p, but after encoding the quality is much lower (visible compression artifacts). When I set the bitrate automatically, the quality is comparable to that of the original file, but the file size is several times bigger than the original. I am not sure whether I am making a mistake in the FFmpeg options or whether YouTube has advanced algorithms that allow such a low bitrate.




ffmpeg(req.file.path)
  .output(`${videoPath}/1920x1080.mp4`)
  .outputOptions([
    '-x264-params keyint=48:min-keyint=48:scenecut=0:ref=5:bframes=3:b-adapt=2',
    '-force_key_frames expr:gte(t,n_forced*2)'
  ])
  .addOption('-preset ultrafast')
  .videoCodec('libx264')
  .videoBitrate(`${bitRate / 1000}k`)
  .audioBitrate('320k')
  .audioChannels(2)
  .size('1920x1080')
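For choosing per-resolution bitrates instead of reusing the source bitrate, a rough bits-per-pixel heuristic can serve as a sanity check. This is a minimal sketch; the 0.1 bits-per-pixel factor and the 30 fps default are illustrative assumptions, not FFmpeg or fluent-ffmpeg constants:

```python
# Rule-of-thumb target bitrate from resolution and frame rate.
# The bits_per_pixel factor is an assumed ballpark for H.264 at
# "ordinary" quality; tune it per content and preset.

def target_bitrate_kbps(width: int, height: int, fps: float = 30.0,
                        bits_per_pixel: float = 0.1) -> int:
    """Return a rough video bitrate target in kbit/s."""
    return round(width * height * fps * bits_per_pixel / 1000)

ladder = {f"{w}x{h}": target_bitrate_kbps(w, h)
          for w, h in [(1920, 1080), (1280, 720), (854, 480)]}
print(ladder)  # → {'1920x1080': 6221, '1280x720': 2765, '854x480': 1230}
```

Note also that -preset ultrafast trades compression efficiency for speed, so the same bitrate looks worse than with a slower preset; YouTube's small files come from slow, heavily optimized encoding, which is part of why matching its bitrates directly produces artifacts.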







-
How to properly splice a video-blob in javascript
1 March 2021, by marks

I am recording a webcam video through RecordRTC, which has an ondataavailable callback that is called every 2000 milliseconds. The ondataavailable method provides a Blob as parameter, so it will always contain the current video blob. I would like to upload the video in slices to the server while the recording is ongoing, to avoid having one big file to upload at the end.

For that, the blobs are sliced with Blob.slice():


ondataavailable(blob) {
 // Create a new blob from last blob size to the end of the blob
 const sliceBlob = blob.slice(this._state.lastBlobSize, blob.size, blob.type)
 this.setState('lastBlobSize', blob.size)
 this.fileHandler.upload(sliceBlob)
}
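One reason the later slices cannot stand alone: a playable WebM/Matroska file must begin with the 4-byte EBML magic 0x1A45DFA3, and recorder chunks typically carry that header only in the first chunk. A small sketch of the check (plain Python, synthetic bytes):

```python
# A standalone WebM/Matroska file begins with the EBML magic bytes.
# Byte slices taken from the middle of a recording lack this header,
# so they are not independently playable files.

EBML_MAGIC = b"\x1a\x45\xdf\xa3"

def is_standalone_webm(chunk: bytes) -> bool:
    return chunk.startswith(EBML_MAGIC)

first = EBML_MAGIC + b"...container data..."   # first slice: has the header
later = b"...raw cluster bytes..."             # later slice: header missing
print(is_standalone_webm(first), is_standalone_webm(later))  # True False
```

This is why converting each slice to .webm individually only works for the first one; the later slices are continuation bytes, not files.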



When I upload the blobs to the server and convert them to .webm videos, which are later concatenated again using FFmpeg, only the first blob is a valid video; every following webm file does not play.

Here is what the logged blobs look like (screenshot of the logged blob sizes not included):
I assume that the blobs are invalid because the byte ranges do not fall on valid boundaries or something like that, so all videos after the first start at an illegal offset and therefore cannot be read. But I have no idea how to change the slices to get properly working parts.


Also, videoBitsPerSecond is set to 128000 * 4 in the RecordRTC config, which would be 32000 bytes per video slice (16000 per second), which might help in finding the correct offset. However, looking at the sliced blob sizes, this does not seem to correlate directly; the blob sizes are bigger.

If there is no way to easily perform the slices like this, is there another way to achieve "streaming" smaller files to the server while the recording is ongoing?



Edit:

Following CBroe's comment, I modified the controller to do the following when a blob is received:


$filePath = "/var/www/html/public/assets/blob.webm";
$fp = fopen($filePath, file_exists($filePath) ? "a" : "w");
fwrite($fp, file_get_contents($_FILES["blob"]["tmp_name"]));
fclose($fp);



So it always appends the latest blob to the complete file. However, the file is still invalid when the video is properly stopped and the last blob is uploaded. Any suggestions?
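The append approach is sound only if every slice arrives exactly once and in order: concatenating the byte slices must reproduce the original blob byte-for-byte, since the slices are plain byte ranges. A small sketch of that invariant (plain Python standing in for the PHP controller, with synthetic data):

```python
import io

def append_chunk(f, chunk: bytes) -> None:
    # Mirror of the PHP controller: append each received slice.
    f.write(chunk)

original = bytes(range(256)) * 4          # stand-in for the full recording
slices = [original[i:i + 100] for i in range(0, len(original), 100)]

out = io.BytesIO()
for s in slices:
    append_chunk(out, s)

print(out.getvalue() == original)  # True
```

If the reassembled file is still invalid, the usual suspects are a missing or duplicated slice, slices written out of order, or a final cumulative blob that was not sliced from lastBlobSize like the others.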


-
ffmpeg: crop video into two grayscale sub-videos; guarantee monotonic frames; and get timestamps
13 March 2021, by lurix66

The need


Hello, I need to extract two regions of a .h264 video file via the crop filter into two files. The output videos need to be monochrome, with extension .mp4. The encoding (or format?) should guarantee that video frames are organized monotonically. Finally, I need to get the timestamps for both files (which, I'd bet, are the same timestamps I would get from the input file; see below).

In the end I will be happy to do everything in one command via an elegant one-liner (via a complex filter, I guess), but I start by doing it in multiple steps to break it down into simpler problems.


Along this path I run into many difficulties, and despite having searched in many places I don't seem to find solutions that work. Unfortunately I'm no expert in ffmpeg or video conversion, so the more I search and the more details I discover, the fewer problems I solve.


Below you find some of my attempts to work with the following options:


- -filter:v "crop=400:ih:260:0,format=gray" to do the crop and the monochrome conversion
- -vf showinfo, possibly combined with -vsync 0 or -copyts, to get the timestamps via stderr redirection (&> filename)
- -c:v mjpeg to force monotony of frames (are there other ways?)





1. cropping each region and obtaining monochrome videos


$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:260:0,format=gray" outL.mp4
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:1280:0,format=gray" outR.mp4



The issue here is that in the output files the frames are not organized monotonically (I don't understand why; how would that make sense in any video format? I can't say whether it comes from the input file).


EDIT. Maybe it is not the frames but the packets, as returned by PyAV's demux() method, that are not monotonic (see below, "instructions to reproduce...").
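Non-monotonic packet pts is usually not a defect: streams with B-frames store packets in decode order, which differs from presentation order, while the decoded frames' pts should still increase. A small helper to check either sequence (illustrative pts values, not taken from the question's files):

```python
# Packets of a stream with B-frames are stored in decode order, so
# their pts need not increase; decoded frames in presentation order
# should be strictly increasing.

def is_monotonic(pts):
    return all(b > a for a, b in zip(pts, pts[1:]))

packet_pts = [0, 90, 30, 60, 180, 120, 150]  # decode order (illustrative)
frame_pts = sorted(packet_pts)               # presentation order

print(is_monotonic(packet_pts), is_monotonic(frame_pts))  # False True
```

This would explain why demux() shows negative diffs while the video still plays correctly: the decoder reorders frames before presentation.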

I got the advice to run ffmpeg -i outL.mp4 outL.mjpeg afterwards, but this produces two videos that look very pixellated (at least when playing them with ffplay), despite being, surprisingly, 4x bigger than the input. Needless to say, I need both monotonic frames and lossless conversion.

EDIT. I acknowledge the advice to specify -q:v 1; this fixes the pixellation effect but produces an even bigger file, 12x in size. Is it necessary? (see below, "instructions to reproduce...")

2. getting the timestamps


I found this piece of advice, but I don't want to generate hundreds of image files, so I tried the following:


$ ffmpeg -y -hide_banner -i outL.mp4 -vf showinfo -vsync 0 &>tsL.txt
$ ffmpeg -y -hide_banner -i outR.mp4 -vf showinfo -vsync 0 &>tsR.txt



The issue here is that I don't get any output, because ffmpeg claims it needs an output file.
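ffmpeg's demand for an output file can be satisfied without writing one: the null muxer ("-f null -") discards its output, so showinfo still runs and logs every frame to stderr. A sketch of that invocation built for subprocess (file names are the ones from the question; the actual run is commented out since it needs ffmpeg and the input file):

```python
import subprocess

# '-f null -' sends the muxed output to the null muxer, so no file is
# written; showinfo's per-frame log still appears on stderr.
cmd = ["ffmpeg", "-hide_banner", "-i", "outL.mp4",
       "-vf", "showinfo", "-vsync", "0", "-f", "null", "-"]

# result = subprocess.run(cmd, stderr=subprocess.PIPE, text=True)
# timestamps = result.stderr   # parse pts / pts_time lines from here
print(" ".join(cmd))
```

The same trick works directly in the shell: ffmpeg -hide_banner -i outL.mp4 -vf showinfo -vsync 0 -f null - 2>tsL.txt (note that showinfo writes to stderr, so 2> is enough; &> also captures it).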


The need to produce an output file, and the doubt that the timestamps could be lost in the previous conversions, lead me back to a first attempt at a one-liner, where I am also testing the -copyts option and forcing the encoding with the -c:v mjpeg option as per the advice mentioned above (I don't know whether it's in the right position, though):

ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:1280:0,format=gray" -vf showinfo -c:v mjpeg eyeL.mp4 &>tsL.txt



This does not work because, surprisingly, the output .mp4 I get is the same as the input. If instead I put the -vf showinfo option just before the stderr redirection, I get no redirected output:

ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:260:0,format=gray" -c:v mjpeg outR.mp4 -vf showinfo dummy.mp4 &>tsR.txt



In this case I get the desired timestamps output (too much of it: I will need some way to grab only the pts and pts_time data out of it), but I have to produce a big dummy file. The worst part, anyway, is that the mjpeg encoding again produces a low-resolution, very pixellated video.
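Grabbing only the pts and pts_time values out of the showinfo log is a small parsing job. A sketch with a regex (the sample line mimics showinfo's stderr layout in recent FFmpeg builds, which may vary slightly between versions):

```python
import re

# Pull (n, pts, pts_time) triples out of showinfo's stderr log.
SHOWINFO = re.compile(r"n:\s*(\d+)\s+pts:\s*(-?\d+)\s+pts_time:(-?[\d.]+)")

def parse_showinfo(log: str):
    return [(int(n), int(pts), float(t))
            for n, pts, t in SHOWINFO.findall(log)]

sample = ("[Parsed_showinfo_0 @ 0x1] n:   0 pts:      0 pts_time:0       pos: 48\n"
          "[Parsed_showinfo_0 @ 0x1] n:   1 pts:    512 pts_time:0.0213  pos: 1024\n")
print(parse_showinfo(sample))  # → [(0, 0, 0.0), (1, 512, 0.0213)]
```

Fed with the redirected tsL.txt / tsR.txt contents, this yields exactly the per-frame timestamp lists without any image files or dummy outputs.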


I admit that the logic of how to place the options and the output files on the command line is obscure to me. The possible combinations are many, and the more options I try, the more complicated it gets, and I am not getting much closer to the solution.


3. [EDIT] instructions to reproduce this


- get a .h264 video
- turn it into .mp4 with the ffmpeg command: $ ffmpeg -i inVideo.h264 out.mp4
- run the following Python cell in a Jupyter notebook
- see that the packet timestamps have diffs greater and less than zero







%matplotlib inline
import av
import numpy as np
import matplotlib.pyplot as mpl

fname, ext = "outL.direct", "mp4"

# packet timestamps, as delivered by the demuxer (decode order)
cont = av.open(f"{fname}.{ext}")
pk_pts = np.array([p.pts for p in cont.demux(video=0) if p.pts is not None])

# frame timestamps, as delivered by the decoder (presentation order)
cont = av.open(f"{fname}.{ext}")
fm_pts = np.array([f.pts for f in cont.decode(video=0) if f.pts is not None])

print(pk_pts.shape, fm_pts.shape)

mpl.subplot(211)
mpl.plot(np.diff(pk_pts))

mpl.subplot(212)
mpl.plot(np.diff(fm_pts))



- finally, create the mjpeg-encoded files in various ways, and check packet monotony with the same script (also look at the file sizes):


$ ffmpeg -i inVideo.h264 out.mjpeg
$ ffmpeg -i inVideo.h264 -c:v mjpeg out.c_mjpeg.mp4
$ ffmpeg -i inVideo.h264 -c:v mjpeg -q:v 1 out.c_mjpeg_q1.mp4



Finally, the question


What is a working way / the right way to do it?


Any hints, even about single steps and how to combine them correctly, will be appreciated. Also, I am not limited to the command line: I would be able to try a more programmatic solution in Python (Jupyter notebook) if someone points me in that direction.