
Media (1)
-
Publish an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (71)
-
Contribute to its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, use the SPIP translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
At the moment, MediaSPIP is only available in French and (...) -
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page -
Customizing categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as a section (rubrique).
For a document of type category, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration of form masks.
For a document of type media, the fields not displayed by default are: Short description
It is also in this configuration section that you can specify the (...)
On other sites (12851)
-
FFMPEG (WINDOWS) - Jerky Videos with vidstabdetect & vidstabtransform
26 April 2016, by Onish Mistry
I need to stabilize multiple video clips and then stitch all the clips, along with images, into one final video. These "scenes", made up of video clips as well as images, can also have overlays such as text and/or other images.
The code I currently have in place does everything just fine: all the video clips are first converted into frame images; it then reads all the frames, applies the overlays, and adds a fade transition between scenes.
Coming to the issue I am facing with stabilization: when I extract image frames out of the stabilized video clip and simply try to recreate a video from those extracted frames, the result comes out with a strange jerk, almost as if the stabilization calculations are missing. It still looks somewhat stabilized, but with missing frames. I have checked the duration and the number of frames extracted; everything matches the non-stabilized source video.
Below is the command used to stabilize the video; its result is a perfectly stabilized video.
ffmpeg -i 1.MOV -r 30 -vf vidstabdetect=result="transforms.trf" -f null NUL && ffmpeg -i 1.MOV -r 30 -vf vidstabtransform=smoothing=30:input="transforms.trf" -vcodec libx264 -b:v 2000k -f mp4 results.mp4
Below is the command I use for video to image:
ffmpeg -i results.mp4 -r 30 -qscale 1 -f image2 %d.jpg
Below is the command I use for image to video:
ffmpeg -i %d.jpg -r 30 -vcodec libx264 -b:v 2000k -f mp4 final.mp4
Any help or suggestions are welcome and appreciated.
Thanks,
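A possible direction, offered here as a sketch rather than something taken from the question: the image2 demuxer assumes 25 fps for input frames unless told otherwise, so forcing -r 30 only on the output can duplicate frames and look like a jerk. Declaring the input frame rate explicitly keeps a one-to-one mapping between the extracted frames and the rebuilt video:
ffmpeg -framerate 30 -i %d.jpg -r 30 -vcodec libx264 -b:v 2000k -f mp4 final.mp4
The only change from the original command is the -framerate 30 input option; the value 30 is an assumption matching the extraction rate used above.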
-
Frames grabbed from UDP H264 stream with ffmpeg are gray/distorted
12 September 2019, by Clément Bourdarie
I am grabbing frames from a UDP H264 stream with JavaCV's FFmpeg on Windows and putting them into a JavaFX ImageView. The problem is that most of the image is not received correctly (it is gray, distorted, ...):
I had the same problem before and made it work by flushing the frame grabber after each frame, but I forgot to save my work and lost it, and this time that fix no longer works.
Here is the part where I configure and launch FFmpeg:
final Java2DFrameConverter converter = new Java2DFrameConverter();

// Show drone camera
FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("udp://227.0.0.1:2200");
grabber.setFrameRate(_frameRate);
grabber.setFormat(_format);
grabber.setVideoBitrate(25000000);
grabber.setVideoOption("preset", "ultrafast");
grabber.setNumBuffers(0);
grabber.start();

// Grab frames as long as the thread is running
while (_running) {
    final Frame frame = grabber.grab();
    if (frame != null) {
        final BufferedImage bufferedImage = converter.convert(frame);
        if (bufferedImage != null) {
            _cameraView.setImage(SwingFXUtils.toFXImage(bufferedImage, null));
        }
    }
    // Don't grab frames faster than they are provided
    Thread.sleep(1000 / _frameRate);
    grabber.flush();
}
grabber.close();

_format is "h264" and _frameRate is 30.
Also, the system is flooded with prints like these (I'm not sure they are related to the problem, though):
[h264 @ 00000000869c0a80] Invalid NAL unit 4, skipping.
[h264 @ 00000000869c0a80] Invalid NAL unit 4, skipping.
[h264 @ 00000000869c0a80] Invalid NAL unit 4, skipping.
[h264 @ 00000000869c0a80] Reference 6 >= 4
[h264 @ 00000000869c0a80] error while decoding MB 115 14, bytestream 1979
[h264 @ 00000000869c0a80] concealing 6414 DC, 6414 AC, 6414 MV errors in P frame
[h264 @ 0000000078f81180] Invalid NAL unit 0, skipping.
[h264 @ 0000000078f81180] Invalid NAL unit 0, skipping.
[h264 @ 0000000078f81180] concealing 4811 DC, 4811 AC, 4811 MV errors in B frame
I don't understand why it doesn't work anymore.
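One way to narrow this down, suggested here as an assumption rather than something stated in the question: check whether the UDP stream already arrives damaged before JavaCV touches it. Playing the same address directly with ffplay, forcing the raw H264 demuxer the grabber is configured for, shows whether the NAL-unit and concealment errors come from packet loss on the network side or from the Java-side handling:
ffplay -fflags nobuffer -f h264 udp://227.0.0.1:2200
If the picture is just as gray and distorted there, the loss happens upstream of the grabber; if it is clean, the sleep-and-flush loop in the Java code is the more likely suspect.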
-
ffmpeg concat of .ts files where issues in one file cause all audio/video after it to be out of sync [closed]
23 July, by The Shoe Shiner
I'll start by saying I need a solution that can be automated in code, i.e. not one that requires me to manually select timestamps and trim files, and unfortunately my media server (Jellyfin) only has access to ffmpeg, so I'm looking for an ffmpeg solution.


I have a set of .ts files that were recorded from a live stream. The media server will immediately restart the recording if for some reason ffmpeg fails during the copy process. So I end up with multiple files that I can then concatenate. Most of the time this works fine.


The current issue is that (as happens occasionally) one of these files, which is about 10 seconds long, has about 5 seconds of good data and 5 seconds of what appears (in players) to be essentially nothing: the video is still and there is no audio. The rest of the files, before and after, are fine. The media server (which uses ffprobe) reports that the frame rate of the problematic file is 15 fps, but I suspect this is the result of the gap skewing the math, because the first 5 seconds appear fine and run at 29.### fps; the missing data at the end may be affecting the calculated frame rate, but admittedly I don't know exactly what's happening under the hood.


I need to concat these files, but after I run the concat, all video after that problematic 10s file is out of sync with the audio.


I'm fine with gaps, pauses, missing audio, etc in that small portion of the final file, but what I need to avoid is having that single file cause all the remaining content to be out-of-sync or otherwise broken. So I think I need a way to "correct" that problematic file before the concat, and I need to do so in a way that does not require me to manually select timestamps, per my first comment.


I have personally seen .ts files have timestamp issues, so my code already does a simple remux of all the .ts files to mp4 prior to the concat, and I have also tried a remux of the final concatenated .ts file; neither resolves this issue.


Is there any other ffmpeg process I can run on the files that might force ffmpeg to correct the timestamps and allow me to concat successfully? I'm not necessarily trying to recover data in the bad file; I'm just trying to ensure that I can concat a set of files and have the final file resemble what one would see when watching each file independently.


Edit: more details about what I have tried (in addition to what was already mentioned).


- During the remux I used the -shortest flag, assuming maybe the audio stream was a different length than the video.
- Also tried the -reset_timestamps 1 flag in the remux.
- The remux command looked like this:
ffmpeg.exe -y -i "Invictus 2025_07_22_05_25_00_4_concat0.ts" -reset_timestamps 1 -vcodec copy -acodec copy -map 0:v -map 0:a -shortest "Invictus 2025_07_22_05_25_00_4_concat0.mp4"
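A possible direction, sketched here under stated assumptions rather than taken from the question: instead of stream-copying, fully re-encode each segment to a constant frame rate and resample the audio against regenerated timestamps before concatenating, so the gap in the bad segment cannot shift everything that follows it. The 30 fps value, the H.264/AAC codecs, and the file names segment.ts, segment_fixed.mp4 and list.txt are all assumptions for illustration:
ffmpeg -y -fflags +genpts -i segment.ts -vf fps=30 -af aresample=async=1:first_pts=0 -c:v libx264 -c:a aac segment_fixed.mp4
ffmpeg -y -f concat -safe 0 -i list.txt -c copy final.mp4
Here list.txt holds one line per re-encoded segment in playback order, each of the form file 'segment_fixed.mp4', which is the input format expected by ffmpeg's concat demuxer.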