
Media (91)

Other articles (53)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (9773)

  • avformat/mov: Dont blindly trust the stream duration in seting chapter times

    23 May 2014, by Michael Niedermayer
    

    Signed-off-by: Michael Niedermayer <michaelni@gmx.at>

    • [DH] libavformat/mov.c
  • FFMPEG - PNG Overlay every n times

    27 September 2019, by Jass

    I’m trying to overlay the PNG on the stream periodically, every N units of time; for example, every hour.

    The example I already have:

    ffmpeg -i rtsp:input -i watermark.png -filter_complex "overlay=(main_w-overlay_w)/2:main_h-overlay_h:enable='between(t,5,15)'" rtsp:out
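    The enable option takes an ffmpeg expression evaluated against the timestamp t, so one way to repeat the overlay periodically instead of using a single fixed between(t,5,15) window is to wrap t in mod(). A sketch, assuming the watermark should stay visible for the first 15 seconds of every hour (the RTSP URLs are placeholders from the question):

```python
# Sketch: build a filtergraph that enables the overlay for the first
# 15 seconds of every 3600-second period. mod() and between() are
# standard ffmpeg expression functions; the URLs are placeholders.
period = 3600    # repeat every hour
visible = 15     # show the overlay for 15 s each period

enable = f"between(mod(t,{period}),0,{visible})"
filtergraph = f"overlay=(main_w-overlay_w)/2:main_h-overlay_h:enable='{enable}'"

cmd = ["ffmpeg", "-i", "rtsp:input", "-i", "watermark.png",
       "-filter_complex", filtergraph, "rtsp:out"]
print(" ".join(cmd))
```

    Adjusting period and visible changes how often and how long the watermark appears, without re-encoding logic elsewhere in the command.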
  • Programmatically accessing PTS times in MP4 container

    9 November 2022, by mcandril

    Background


    For a research project, we are recording video data from two cameras and feeding a synchronization pulse directly into the microphone ADC every second.


    Problem


    We want to derive, for each camera frame, a timestamp in the clock of the pulse source, so that the camera images can be related temporally. With our current method (see below), we get an offset of around 2 frames between the cameras. Unfortunately, inspection of the video shows that we are clearly 6 frames off (at least at one point) between the cameras. I assume this is because we are relating the audio and video signals incorrectly (see below).


    Approach I think I need help with


    I read that in the MP4 container there should be PTS times for video and audio. How do we access those programmatically? Python would be perfect, but if we have to call ffmpeg via system calls, we can do that too.

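    One route that avoids writing a demuxer, assuming ffprobe is available alongside ffmpeg: it can print the per-packet pts_time of a chosen stream, which a Python script can then parse from the command's output. A sketch (input.mp4 is a placeholder):

```shell
# Presentation timestamps (seconds) of every video packet, one per line
ffprobe -v error -select_streams v:0 \
  -show_entries packet=pts_time -of csv=p=0 input.mp4

# Same for the first audio stream
ffprobe -v error -select_streams a:0 \
  -show_entries packet=pts_time -of csv=p=0 input.mp4
```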

    What we currently fail with


    The original idea was to find video and audio times as


    audio_sample_times = np.arange(N_audiosamples) / audio_sampling_rate
    video_frame_times = np.arange(N_videoframes) / video_frame_rate


    then identify audio_pulse_times on the audio_sample_times axis, calculate the relative position of each video frame time between the audio_pulse_times surrounding it, and select the same relative position between the corresponding source_pulse_times.

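    That relative-position mapping is piecewise-linear interpolation between pulses, so it can be sketched with np.interp; the pulse arrays below are hypothetical stand-ins for the detected values:

```python
import numpy as np

# Hypothetical pulse times: as detected in the audio track (camera clock)
# and as emitted by the source (reference clock), one pulse per second.
audio_pulse_times = np.array([0.50, 1.52, 2.54, 3.56])  # camera clock, s
source_pulse_times = np.array([0.0, 1.0, 2.0, 3.0])     # source clock, s

# Video frame times in the camera clock (25 fps here, for illustration)
video_frame_times = np.arange(0.50, 3.50, 0.04)

# Each frame keeps its relative position between the two pulses around it
frame_times_source = np.interp(video_frame_times,
                               audio_pulse_times, source_pulse_times)
```

    np.interp only requires that audio_pulse_times be increasing; frames before the first or after the last pulse are clamped to the end values, so trimming to the pulsed region first keeps the mapping honest.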

    However, a first indication that this approach is problematic is that, for some videos, N_audiosamples/audio_sampling_rate differs from N_videoframes/video_frame_rate by several frames' worth of duration.


    What I have found so far

    OpenCV's cv2.CAP_PROP_POS_MSEC seems to do exactly what we already do, and does not access any PTS.

    Edit: What I took from the winning answer

    import av
    import numpy as np
    from tqdm import tqdm

    container = av.open(video_path)
    signal = []
    audio_sample_times = []
    video_sample_times = []

    for frame in tqdm(container.decode(video=0, audio=0)):
        if isinstance(frame, av.audio.frame.AudioFrame):
            sample_times = (frame.pts + np.arange(frame.samples)) / frame.sample_rate
            audio_sample_times += list(sample_times)
            signal_f_ch0 = frame.to_ndarray().reshape((-1, len(frame.layout.channels))).T[0]
            signal += list(signal_f_ch0)
        elif isinstance(frame, av.video.frame.VideoFrame):
            video_sample_times.append(float(frame.pts * frame.time_base))

    signal = np.abs(np.array(signal))
    audio_sample_times = np.array(audio_sample_times)
    video_sample_times = np.array(video_sample_times)


    Unfortunately, in my particular case, all pts are consecutive and gapless, so the result is the same as with the naive solution.
    From picture clues, we identified a 10 s section of the videos somewhere in which they desync, but we can't find any trace of that in the data.