Other articles (59)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    During installation, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
      • the browser you are using, including the exact version
      • as precise an explanation of the problem as possible
      • if possible, the steps taken that result in the problem
      • a link to the site / page in question
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including:
      • critiques of existing features and functions
      • articles contributed by developers, administrators, content producers and editors
      • screenshots to illustrate the above
      • translations of existing documentation into other languages
    To contribute, register to the project users’ mailing (...)

On other sites (4903)

  • avcodec/pnm: avoid mirroring PFM images vertically

    16 November 2022, by Leo Izen
    avcodec/pnm: avoid mirroring PFM images vertically
    

    PFM (aka Portable FloatMap) encodes its scanlines from bottom-to-top,
    not from top-to-bottom, unlike other NetPBM formats. Without this
    patch, FFmpeg ignores this exception and decodes/encodes PFM images
    mirrored vertically from their proper orientation.

    For reference, see the NetPBM tool pfmtopam, which encodes a .pam
    from a .pfm, using the correct orientation (and which FFmpeg reads
    correctly). Also compare ffplay to magick display, which shows the
    correct orientation as well.

    See: http://www.pauldebevec.com/Research/HDR/PFM/ and
    https://netpbm.sourceforge.net/doc/pfm.html for descriptions of this
    image format.

    Signed-off-by: Leo Izen <leo.izen@gmail.com>
    Reviewed-by: Anton Khirnov <anton@khirnov.net>
    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] libavcodec/pnmdec.c
    • [DH] libavcodec/pnmenc.c
    • [DH] tests/ref/lavf/gbrpf32be.pfm
    • [DH] tests/ref/lavf/gbrpf32le.pfm
    • [DH] tests/ref/lavf/grayf32be.pfm
    • [DH] tests/ref/lavf/grayf32le.pfm
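
    For illustration, a minimal Python sketch of the orientation rule the patch enforces (an assumed stand-alone reader for demonstration, not FFmpeg's actual code): PFM stores the bottom scanline first, so rows must be flipped on read.

    import numpy as np

    def read_pfm(path):
        # Assumed helper, for illustration only.
        with open(path, "rb") as f:
            header = f.readline().strip()   # b"PF" = color, b"Pf" = grayscale
            channels = 3 if header == b"PF" else 1
            width, height = map(int, f.readline().split())
            scale = float(f.readline())     # sign of scale encodes endianness
            dtype = "<f4" if scale < 0 else ">f4"
            data = np.fromfile(f, dtype=dtype, count=width * height * channels)
        img = data.reshape(height, width, channels)
        # Scanlines are stored bottom-to-top, unlike other NetPBM formats,
        # so flip vertically to get the conventional orientation.
        return img[::-1]
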
  • Programmatically accessing PTS times in MP4 container

    9 November 2022, by mcandril

    Background

    For a research project, we are recording video data from two cameras and feeding a synchronization pulse directly into the microphone ADC every second.

    Problem

    We want to derive, for each camera frame, a frame timestamp in the clock of the pulse source, so that we can relate the camera images temporally. With our current methods (see below), we get a frame offset of around 2 frames between the cameras. Unfortunately, inspection of the video shows that we are clearly 6 frames off (at least at one point) between the cameras. I assume that this is because we are relating the audio and video signals wrongly (see below).

    Approach I think I need help with

    I read that in the MP4 container there should be PTS times for video and audio. How do we access those programmatically? Python would be perfect, but if we have to call ffmpeg via system calls, we may do that too ...
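
    One way to do this from Python is to call ffprobe through a subprocess; a hedged sketch (assumptions: ffprobe is on the PATH, and "video.mp4" is a placeholder path):

    import subprocess

    def stream_pts_seconds(path, stream):
        # Dump per-frame pts_time values (in seconds) for one stream.
        out = subprocess.run(
            ["ffprobe", "-v", "error",
             "-select_streams", stream,
             "-show_entries", "frame=pts_time",
             "-of", "csv=p=0", path],
            capture_output=True, text=True, check=True,
        ).stdout
        # Some frames may report "N/A"; skip those.
        return [float(line) for line in out.splitlines()
                if line and line != "N/A"]

    video_pts = stream_pts_seconds("video.mp4", "v:0")
    audio_pts = stream_pts_seconds("video.mp4", "a:0")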

    What we currently fail with

    The original idea was to find video and audio times as

    # np.arange instead of range so the division works element-wise
    audio_sample_times = np.arange(N_audiosamples) / audio_sampling_rate
    video_frame_times = np.arange(N_videoframes) / video_frame_rate

    then identify the audio_pulse_times within the audio_sample_times base, compute the relative position of each video_time between the surrounding audio_pulse_times, and take the same relative value between the corresponding source_pulse_times.
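
    With the naming above, that mapping is piecewise-linear interpolation; a minimal sketch (assuming audio_pulse_times and source_pulse_times are matched, one-to-one numpy arrays):

    import numpy as np

    # For each video frame time, find its relative position between the two
    # surrounding audio pulses and take the same relative position between
    # the corresponding source pulses.
    video_times_in_source_clock = np.interp(
        video_frame_times, audio_pulse_times, source_pulse_times)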

    However, a first indication that this approach is problematic is that, for some videos, N_audiosamples/audio_sampling_rate differs from N_videoframes/video_frame_rate by multiple frames.

    What I have found so far

    OpenCV's cv2.CAP_PROP_POS_MSEC seems to do exactly what we already do, rather than access any PTS ...

    Edit: What I took from the winning answer

    import av
    import numpy as np
    from tqdm import tqdm

    container = av.open(video_path)
    signal = []
    audio_sample_times = []
    video_sample_times = []

    for frame in tqdm(container.decode(video=0, audio=0)):
        if isinstance(frame, av.audio.frame.AudioFrame):
            sample_times = (frame.pts + np.arange(frame.samples)) / frame.sample_rate
            audio_sample_times += list(sample_times)
            signal_f_ch0 = frame.to_ndarray().reshape((-1, len(frame.layout.channels))).T[0]
            signal += list(signal_f_ch0)
        elif isinstance(frame, av.video.frame.VideoFrame):
            video_sample_times.append(float(frame.pts * frame.time_base))

    signal = np.abs(np.array(signal))
    audio_sample_times = np.array(audio_sample_times)
    video_sample_times = np.array(video_sample_times)

    Unfortunately, in my particular case, all pts are consecutive and gapless, so the result is the same as with the naive solution ... From picture clues, we identified a 10 s section of the videos somewhere within which they desync, but we can't find any trace of that in the data.

  • Recording RTSP stream with Python

    6 May 2022, by ロジャー

    Currently I am using MediaPipe with Python to monitor an RTSP stream from my camera, which works as a security camera. Whenever the MediaPipe holistic model detects humans, the script writes the frame to a file.

    i.e.

    # cv2.VideoCapture(RTSP)
    # read frame
    # while mediapipe detects
    #     cv2.VideoWriter writes frame
    # store file
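
    A minimal runnable sketch of that loop (assumptions: RTSP_URL is a placeholder, and MediaPipe's holistic model is used simply as a person detector via its pose landmarks):

    import cv2
    import mediapipe as mp

    RTSP_URL = "rtsp://user:pass@camera/stream"  # placeholder

    cap = cv2.VideoCapture(RTSP_URL)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # RTSP may report 0; assume 25
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = None

    with mp.solutions.holistic.Holistic() as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:  # a human is in view
                if writer is None:
                    writer = cv2.VideoWriter("event.mp4",
                                             cv2.VideoWriter_fourcc(*"mp4v"),
                                             fps, size)
                writer.write(frame)
            elif writer is not None:  # person left: close the clip
                writer.release()
                writer = None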

    Recently I wanted to add audio recording support. I have done some research, and it seems recording audio is not possible with OpenCV; it has to be done with FFMPEG or PyAudio.

    I am facing these difficulties.

    1. When a person walks through in front of the camera, it takes maybe less than 2 seconds. By the time the RTSP stream has been read by OpenCV, the human detected with MediaPipe, and FFMPEG started for recording, that human would already have walked far, far away. So the FFMPEG method does not seem workable for me.


    2. For the PyAudio method I am currently studying, I need to create 2 threads establishing individual RTSP connections. One thread is for video, to be read by OpenCV and MediaPipe. The other thread is for audio, to be recorded once the OpenCV thread notices a human has been detected. I have tried using several devices to read the RTSP streams; the devices show timestamps (watermarks on the video) that differ by several seconds. So I doubt I can get video from OpenCV and audio from PyAudio in sync when merging them into one single video.

    Is there any suggestion on how to solve this problem?

    Thanks.
