
Other articles (63)
-
User profiles
12 April 2011 — Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
Users can reach the profile editor from their author page; a link in the navigation, "Modifier votre profil" (Edit your profile), is (...) -
Configuring language support
15 November 2010 — Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" (Administer) section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" (Language management) section that lets you enable support for new languages.
Each newly added language can still be disabled as long as no object has been created in that language. Once one has, the language is greyed out in the configuration and (...) -
XMP PHP
13 May 2011 — According to Wikipedia, XMP means:
Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it handles a set of dynamic tags for use in the Semantic Web.
XMP makes it possible to store, as an XML document, information about a file: title, author, history (...)
On other sites (3630)
-
While transcoding to mpegts, ffmpeg creates the first frame after a 1.48-second delay
15 March 2018, by Biraj B Choudhury
I have transcoded a file to mpegts using the following command:
./ffmpeg -y -i big_buck_bunny_720p_5mb.mp4 -vcodec libx264 -x264opts "keyint=48:min-keyint=48:no_scenecut" -r 23.976 -c:a copy -f mpegts test.mpegts
When I run ffprobe on it:
./ffprobe -i test.mpegts -select_streams v -show_frames -of csv
I see that the first frame starts at 1.48 seconds. Why is this so?
Input #0, mpegts, from 'test.mpegts':
Duration: 00:00:29.61, start: 1.483422, bitrate: 1964 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc
Stream #0:1[0x101](und): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, 5.1, fltp, 406 kb/s
frame,video,0,1,133508,1.483422,133508,1.483422,133508,1.483422,3753,0.041700,564,112008,1280,720,yuv420p,1:1,I,0,0,0,0,0
frame,video,0,0,137262,1.525133,137262,1.525133,137262,1.525133,3753,0.041700,125584,1110,1280,720,yuv420p,1:1,B,2,0,0,0,0
After some research I added "-muxdelay 0" to my command:
./ffmpeg -y -i big_buck_bunny_720p_5mb.mp4 -vcodec libx264 -x264opts "keyint=48:min-keyint=48:no_scenecut" -r 23.976 -c:a copy -muxdelay 0 -f mpegts test.mpegts
And now I get the following from ffprobe:
Input #0, mpegts, from 'test.mpegts':
Duration: 00:00:29.61, start: 0.083422, bitrate: 1987 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc
Stream #0:1[0x101](und): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, 5.1, fltp, 406 kb/s
frame,video,0,1,7508,0.083422,7508,0.083422,7508,0.083422,3753,0.041700,564,112008,1280,720,yuv420p,1:1,I,0,0,0,0,0
frame,video,0,0,11262,0.125133,11262,0.125133,11262,0.125133,3753,0.041700,125584,1110,1280,720,yuv420p,1:1,B,2,0,0,0,0
Can anybody help me understand what this muxdelay is that contributes 1.4 seconds of delay, and what contributes the remaining 0.08 seconds of delay?
The first frame is at 0.000 when the output is mp4, so this is something particular to mpegts.
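For reference, a commonly suggested variant (not tested here) zeroes the preload delay as well, since ffmpeg exposes a companion -muxpreload option alongside -muxdelay:
./ffmpeg -y -i big_buck_bunny_720p_5mb.mp4 -vcodec libx264 -x264opts "keyint=48:min-keyint=48:no_scenecut" -r 23.976 -c:a copy -muxdelay 0 -muxpreload 0 -f mpegts test.mpegts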
-
Programmatically accessing PTS times in MP4 container
9 November 2022, by mcandril
Background


For a research project, we are recording video data from two cameras and feed a synchronization pulse directly into the microphone ADC every second.


Problem


We want to derive a frame time stamp in the clock of the pulse source for each camera frame to relate the camera images temporally. With our current methods (see below), we get a frame offset of around 2 frames between the cameras. Unfortunately, inspection of the video shows that we are clearly 6 frames off (at least at one point) between the cameras.
I assume that this is because we are relating the audio and video signals incorrectly (see below).


Approach I think I need help with


I read that in the MP4 container, there should be PTS times for video and audio. How do we access those programmatically? Python would be perfect, but if we have to call ffmpeg via system calls, we may do that too ...
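For illustration, here is a minimal sketch of one way this might be done with ffprobe from Python (assuming ffprobe is on the PATH; "video.mp4" and the helper name frame_pts_times are placeholders):

import subprocess

def frame_pts_times(path, stream="v:0"):
    """Return the PTS (in seconds) of every frame of `stream`, via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-select_streams", stream,
         "-show_entries", "frame=pts_time",
         "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout
    # One pts_time value per line; skip blanks and undecodable entries.
    return [float(line) for line in out.splitlines() if line and line != "N/A"]

video_pts = frame_pts_times("video.mp4", "v:0")  # video frame PTS
audio_pts = frame_pts_times("video.mp4", "a:0")  # audio frame PTS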


What we currently fail with


The original idea was to find video and audio times as


audio_sample_times = np.arange(N_audiosamples) / audio_sampling_rate
video_frame_times = np.arange(N_videoframes) / video_frame_rate



then identify audio_pulse_times in audio_sample_times base, calculate the relative position of each video_time to the audio_pulse_times around it, and select the same relative value to the corresponding source_pulse_times.
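Expressed as code, that mapping is piecewise-linear interpolation, which numpy provides directly. A sketch with hypothetical pulse times (all numbers below are made up for illustration):

import numpy as np

# Assumed inputs: pulses nominally every 1 s in the source clock,
# detected slightly offset and drifted in the camera's audio track.
source_pulse_times = np.arange(10.0)                    # 0, 1, ..., 9 s
audio_pulse_times = source_pulse_times * 1.0002 + 0.05  # hypothetical drift

video_frame_times = np.arange(0.05, 9.0, 1 / 30.0)      # e.g. a 30 fps camera

# Place each frame time at the same relative position between the
# surrounding source pulses as it has between the detected audio pulses.
video_times_in_source_clock = np.interp(
    video_frame_times, audio_pulse_times, source_pulse_times)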


However, a first indication that this approach is problematic is that, for some videos, N_audiosamples/audio_sampling_rate differs from N_videoframes/video_frame_rate by multiple frames.


What I have found by now


OpenCV's cv2.CAP_PROP_POS_MSEC seems to do exactly what we do, and not access any PTS ...


Edit: What I took from the winning answer


import av  # PyAV
import numpy as np
from tqdm import tqdm

container = av.open(video_path)
signal = []
audio_sample_times = []
video_sample_times = []

for frame in tqdm(container.decode(video=0, audio=0)):
    if isinstance(frame, av.audio.frame.AudioFrame):
        # Audio: this assumes the stream's time_base is 1/sample_rate
        # (typical for AAC in MP4), so pts is counted in samples.
        sample_times = (frame.pts + np.arange(frame.samples)) / frame.sample_rate
        audio_sample_times += list(sample_times)
        # Deinterleave the channels and keep channel 0 as the signal.
        signal_f_ch0 = frame.to_ndarray().reshape((-1, len(frame.layout.channels))).T[0]
        signal += list(signal_f_ch0)
    elif isinstance(frame, av.video.frame.VideoFrame):
        # Video: convert pts from time_base units to seconds.
        video_sample_times.append(float(frame.pts * frame.time_base))

signal = np.abs(np.array(signal))
audio_sample_times = np.array(audio_sample_times)
video_sample_times = np.array(video_sample_times)



Unfortunately, in my particular case, all pts are consecutive and gapless, so the result is the same as with the naive solution ...
From picture clues, we identified a 10-second section of the videos somewhere in which they desync, but we can't find any trace of that in the data.


-
How to check when ffmpeg completes a task?
25 May 2018, by Andrew
I started learning how to use ffmpeg just a few hours ago, to generate video thumbnails.
These are some results:
I used the same size (width × height) as YouTube's. Each image contains at most 25 thumbnails (5×5), each sized 160×90.
Everything looks good until:
public async Task GetVideoThumbnailsAsync(string videoPath, string videoId)
{
    byte thumbnailWidth = 160;
    byte thumbnailHeight = 90;
    string fps = "1/2";
    videoPath = Path.Combine(_environment.WebRootPath, videoPath);
    string videoThumbnailsPath = Path.Combine(_environment.WebRootPath, $"assets/images/video_thumbnails/{videoId}");
    string outputImagePath = Path.Combine(videoThumbnailsPath, "item_%d.jpg");
    Directory.CreateDirectory(videoThumbnailsPath);
    using (var ffmpeg = new Process())
    {
        ffmpeg.StartInfo.Arguments = $" -i {videoPath} -vf fps={fps} -s {thumbnailWidth}x{thumbnailHeight} {outputImagePath}";
        ffmpeg.StartInfo.FileName = Path.Combine(_environment.ContentRootPath, "FFmpeg/ffmpeg.exe");
        ffmpeg.Start();
    }
    await Task.Delay(3000);
    await GenerateThumbnailsAsync(videoThumbnailsPath, videoId);
}
I'm having trouble with this line:
await Task.Delay(3000);
When I was learning how to use ffmpeg, nothing mentioned this. After some hours of failures, I noticed that:
An mp4 video (1 min 31 sec, 1.93 MB) requires a delay of about 1000 ms, while another mp4 video (1 min 49 sec, 7.25 MB) requires about 3000 ms.
If I don't use Task.Delay and try to get the files immediately, I get 0 (there are no files in the directory yet). Moreover, each file of a different length requires a different delay time, and I don't know how to calculate it.
And my question is: how do I check when the task has completed?
P/s: I don't mean to bring JavaScript into this, but in JS there is something called Promise:
var promise = new Promise(function (done) {
    var todo = function () {
        done();
    };
    todo();
});
promise.then(function () {
    console.log('DONE...');
});
I want to write my code like that.
Thank you!
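For what it's worth, the general pattern being asked for is to wait for the ffmpeg process to exit instead of guessing a delay (in C#, Process.WaitForExit, or WaitForExitAsync on newer .NET, does this). A minimal sketch of the same idea in Python, with hypothetical paths, where subprocess.run blocks until ffmpeg exits and check=True raises if it fails:

import os
import subprocess

def generate_thumbnails(video_path, output_pattern, fps="1/2", size="160x90"):
    # Runs ffmpeg and returns only after the process has exited, so the
    # thumbnail files are guaranteed to exist (or an error has been raised).
    os.makedirs(os.path.dirname(output_pattern), exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", "-s", size, output_pattern],
        check=True,
    )

generate_thumbnails("video.mp4", "thumbs/item_%d.jpg")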