
Other articles (73)

  • Personalize by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip to find out.

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

On other sites (8423)

  • FFMPEG - PNG Overlay every n times

    27 September 2019, by Jass

    I’m trying to overlay the PNG on the stream at a regular interval, for example every hour.

    The example I already have:

    ffmpeg -i rtsp:input -i watermark.png -filter_complex "overlay=(main_w-overlay_w)/2:main_h-overlay_h:enable='between(t,5,15)'" rtsp:out
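
    The between(t,5,15) window in that command fires only once. For an overlay that recurs, the enable expression can be re-armed with ffmpeg's mod() function; the sketch below (assumed values: one-hour period, 15 s on screen each time) just builds the filter string so the arithmetic is visible:

```python
# Sketch: build an ffmpeg -filter_complex string that shows the watermark
# for the first 15 s of every hour. Period and duration are assumptions.
period = 3600    # seconds between overlay appearances
duration = 15    # seconds the overlay stays visible each period

enable = f"lt(mod(t,{period}),{duration})"
filter_complex = f"overlay=(main_w-overlay_w)/2:main_h-overlay_h:enable='{enable}'"
print(filter_complex)
# overlay=(main_w-overlay_w)/2:main_h-overlay_h:enable='lt(mod(t,3600),15)'
```

    The resulting string drops into the original command in place of the between() version. Note this assumes the stream's timestamps start near zero.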
  • Programmatically accessing PTS times in MP4 container

    9 November 2022, by mcandril

    Background

    


    For a research project, we are recording video data from two cameras and feeding a synchronization pulse directly into the microphone ADC every second.

    


    Problem

    


    We want to derive a frame time stamp in the clock of the pulse source for each camera frame, so that the camera images can be related temporally. With our current methods (see below), we get a frame offset of around 2 frames between the cameras. Unfortunately, inspection of the video shows that we are clearly 6 frames off (at least at one point) between the cameras.
I assume that this is because we are relating the audio and video signals incorrectly (see below).

    


    Approach I think I need help with

    


    I read that in the MP4 container, there should be PTS times for video and audio. How do we access those programmatically? Python would be perfect, but if we have to call ffmpeg via system calls, we may do that too ...

    


    What we currently fail with

    


    The original idea was to find video and audio times as

    


    audio_sample_times = np.arange(N_audiosamples) / audio_sampling_rate
video_frame_times = np.arange(N_videoframes) / video_frame_rate


    


    then identify the audio_pulse_times on the audio_sample_times axis, calculate the relative position of each video_time between the audio_pulse_times surrounding it, and map that relative value onto the corresponding source_pulse_times.
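
    The mapping described above is piecewise-linear interpolation between pulse times, which numpy provides directly; a sketch with made-up pulse data:

```python
import numpy as np

# Assumed data: pulse times detected in the audio track, and the matching
# emission times in the source clock (one pulse per second).
audio_pulse_times = np.array([0.48, 1.49, 2.50, 3.52])
source_pulse_times = np.array([0.0, 1.0, 2.0, 3.0])

# Video frame times, measured on the same axis as the audio samples.
video_frame_times = np.array([0.98, 1.99, 3.01])

# Relative position between surrounding pulses -> source-clock time.
frame_times_source = np.interp(video_frame_times,
                               audio_pulse_times, source_pulse_times)
print(frame_times_source)
```

    Frames outside the first/last pulse are clamped by np.interp, so the recording should start and end with a detected pulse for this to be valid everywhere.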

    


    However, a first indication that this approach is problematic is that, for some videos, N_audiosamples/audio_sampling_rate differs from N_videoframes/video_frame_rate by multiple frames.
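
    That discrepancy is easy to quantify; with hypothetical counts (all numbers below are assumptions for illustration), the nominal durations can be compared and expressed in video frames:

```python
# Hypothetical numbers: how far do the nominal audio and video durations
# disagree, expressed in video frames?
N_audiosamples = 4_800_480
audio_sampling_rate = 48_000
N_videoframes = 3_000
video_frame_rate = 30.0

audio_duration = N_audiosamples / audio_sampling_rate   # 100.01 s
video_duration = N_videoframes / video_frame_rate       # 100.00 s
offset_frames = (audio_duration - video_duration) * video_frame_rate
print(offset_frames)  # ~0.3 frames here; several frames in the real data
```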

    


    What I have found by now

    


    OpenCV's cv2.CAP_PROP_POS_MSEC seems to compute exactly what we already compute, rather than accessing any PTS ...
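
    One route that does expose the container's PTS values is ffprobe, which can print pts_time per decoded frame (e.g. `ffprobe -select_streams v:0 -show_entries frame=pts_time -of csv=p=0 input.mp4`). A sketch of parsing that output; the actual invocation is commented out since it needs ffprobe and a video file:

```python
import subprocess  # only needed for the commented-out invocation below

def parse_pts_times(csv_text):
    """Parse `ffprobe ... -show_entries frame=pts_time -of csv=p=0` output.

    Takes the last comma-separated field per line, so it tolerates both
    bare values and lines with a leading section name.
    """
    return [float(line.split(",")[-1])
            for line in csv_text.splitlines() if line.strip()]

# cmd = ["ffprobe", "-select_streams", "v:0",
#        "-show_entries", "frame=pts_time", "-of", "csv=p=0", "video.mp4"]
# csv_text = subprocess.run(cmd, capture_output=True, text=True).stdout
csv_text = "0.000000\n0.033367\n0.066733\n"  # example output shape
print(parse_pts_times(csv_text))  # [0.0, 0.033367, 0.066733]
```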

    


    Edit: What I took from the winning answer

    


import av
import numpy as np
from tqdm import tqdm

container = av.open(video_path)
signal = []
audio_sample_times = []
video_sample_times = []

# Decode audio and video streams interleaved, collecting PTS-based times.
for frame in tqdm(container.decode(video=0, audio=0)):
    if isinstance(frame, av.audio.frame.AudioFrame):
        # Assumes the audio time base is 1/sample_rate, so pts counts samples.
        sample_times = (frame.pts + np.arange(frame.samples)) / frame.sample_rate
        audio_sample_times += list(sample_times)
        signal_f_ch0 = frame.to_ndarray().reshape((-1, len(frame.layout.channels))).T[0]
        signal += list(signal_f_ch0)
    elif isinstance(frame, av.video.frame.VideoFrame):
        # Video frame time in seconds: pts ticks times the stream time base.
        video_sample_times.append(float(frame.pts * frame.time_base))

signal = np.abs(np.array(signal))
audio_sample_times = np.array(audio_sample_times)
video_sample_times = np.array(video_sample_times)
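
    With signal and audio_sample_times in hand, the audio_pulse_times the approach needs can be picked out by simple rising-edge thresholding. A sketch on toy data; the threshold value is an assumption to be tuned to the recorded pulse amplitude:

```python
import numpy as np

# Toy stand-ins for the |signal| and sample times produced above.
signal = np.array([0.0, 0.1, 0.9, 1.0, 0.2, 0.0, 0.1, 0.95, 0.3])
audio_sample_times = np.arange(len(signal)) / 8.0  # toy 8 Hz sample rate

thresh = 0.5  # assumed; tune to the recorded pulse amplitude
# Rising edges: sample crosses the threshold from below.
rising = (signal[1:] >= thresh) & (signal[:-1] < thresh)
audio_pulse_times = audio_sample_times[1:][rising]
print(audio_pulse_times)  # [0.25  0.875]
```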


    


    Unfortunately, in my particular case, all pts are consecutive and gapless, so the result is the same as with the naive solution ...
From picture clues, we identified a 10 s section of the videos somewhere within which they desync, but we can't find any trace of that in the data.

    


  • avfilter/af_pan: reject expressions referencing the same channel multiple times

    24 March 2018, by Marton Balint
    avfilter/af_pan: reject expressions referencing the same channel multiple times
    

    Fixes parsing of expressions like c0=c0+c0 or c0=c0|c0=c1. Previously no
    error was thrown; for input channels only the last gain factor was used,
    and for output channels the source channel gains were combined.

    Signed-off-by: Marton Balint <cus@passwd.hu>

    • [DH] libavfilter/af_pan.c