
Other articles (112)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Customising categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration area that you can specify the (...)

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around installation difficulties, caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which makes it possible to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

On other sites (7133)

  • FFMPEG Scene Detection - Trim Blank Background from Video at the Start and End

    10 June 2014, by user3521682

    Summary:
    I need to programmatically trim the video where the scene is not changing at the beginning and end.

    Example video: http://www.filehosting.co.nz/finished3.mp4
    (Quality is much higher in the real video)

    Background:

    There is a large number of videos for an online store. Each video begins with a blank background, then the model walks on (at a random time, a few seconds in) and walks off after a random length of time (around 15 seconds). The end of the video is cut at a seemingly random point; there can be up to 15 seconds of "nothing" at the end of the video.

    The camera does not move. There is no sound on the videos.
    The videos come from the camera in MOV format, filmed sideways.

    I already have FFMPEG converting from MOV to MP4, rotating the video, adding an audio track, and trimming the audio at the end of the video.

    Research:
    I understand that I should probably re-encode the video with a very high (?) tolerance for i-frame insertion (so that only two are made per video), then export their times to a text file and use that to cut the video (probably parsing it in bash and using it to build the FFMPEG commands).

    Does anyone have any idea how I could generate just two key-frames per video?

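    A common way to get those two timestamps without touching keyframe placement is ffmpeg's scene-change detection: the select filter with a scene threshold, combined with showinfo, prints the time of every frame whose scene score exceeds the threshold, and those times can then drive a trim. Below is a minimal sketch of that idea in Python; the threshold value, the file names, and the assumption that the model entering and leaving the frame are the only two scene changes are all mine, not from the question.

    import re
    import subprocess

    def scene_change_times(path, threshold=0.1):
        # Run ffmpeg's scene-change filter; showinfo reports each selected
        # frame (including its pts_time) on stderr.
        cmd = [
            "ffmpeg", "-i", path,
            "-vf", "select='gt(scene,{})',showinfo".format(threshold),
            "-f", "null", "-",
        ]
        out = subprocess.run(cmd, stderr=subprocess.PIPE, text=True).stderr
        return [float(t) for t in re.findall(r"pts_time:([0-9.]+)", out)]

    def trim(path, start, end, out_path):
        # Re-encode the span between the first and last detected scene change.
        subprocess.run([
            "ffmpeg", "-y", "-i", path,
            "-ss", str(start), "-to", str(end),
            out_path,
        ])

    times = scene_change_times("finished3.mp4")
    if len(times) >= 2:
        trim("finished3.mp4", times[0], times[-1], "trimmed.mp4")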

  • Can't read mp4 or avi in opencv ubuntu

    15 November 2016, by Diederik

    I'm trying to read in a .mp4 (or .avi) file using this script:

    import cv2
    import math
    import sys
    import numpy as np

    class VideoAnalysis:

        def __init__(self, mp4_video):
            self.video = mp4_video
            self.pos_frame = self.video.get(cv2.cv.CV_CAP_PROP_POS_FRAMES)

        def run(self):
            while True:
                flag, frame = self.video.read()
                if flag:
                    cv2.imshow("frame", frame)
                    self.pos_frame = self.video.get(cv2.cv.CV_CAP_PROP_POS_FRAMES)
                    print(str(self.pos_frame) + " frames")
                else:
                    # The next frame is not ready, so we try to read it again
                    self.video.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, self.pos_frame - 1)
                    print("frame is not ready")
                    # It is better to wait for a while for the next frame to be ready
                    cv2.waitKey(1000)
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break

    cap = cv2.VideoCapture("opdracht.mp4")
    while not cap.isOpened():
        cap = cv2.VideoCapture("opdracht.mp4")
        print("Wait for the header")

    video_analyse = VideoAnalysis(cap)
    video_analyse.run()

    I started off by just using Python 2.7 and OpenCV. At that point it kept spinning in the "Wait for the header" loop. After some research I learned that I had to install ffmpeg, so I installed it with sudo apt-get install ffmpeg, but then the script got stuck in the "frame is not ready" loop. After some more reading I learned that I might have to recompile both ffmpeg and OpenCV from source, so I did. I now run ffmpeg 3.2 and OpenCV 2.4.13, but OpenCV still can't read a single frame from my video (which is in the same folder as the script).

    I really don’t understand what I am doing wrong.
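
    Since the author rebuilt OpenCV by hand, one quick sanity check (a generic diagnostic, not a fix confirmed by this post) is to ask the cv2 module that Python actually imports whether it was built with FFMPEG support, and to test a single open/read instead of looping:

    import cv2

    # Which build is Python importing, and was FFMPEG enabled in it?
    print(cv2.__version__)
    for line in cv2.getBuildInformation().splitlines():
        if "FFMPEG" in line or "GStreamer" in line:
            print(line.strip())

    # Try the file once and report the result instead of retrying forever.
    cap = cv2.VideoCapture("opdracht.mp4")
    print("opened:", cap.isOpened())
    ok, frame = cap.read()
    print("first frame read:", ok)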

  • Recording RTSP stream with Python

    6 May 2022, by ロジャー

    Currently I am using MediaPipe with Python to monitor an RTSP stream from my camera, which works as a security camera. Whenever the MediaPipe holistic model detects humans, the script writes the frame to a file.

    i.e.

    # cv2.VideoCapture(RTSP)
    # read frame
    # while mediapipe detect
    #     cv2.VideoWriter write frame
    # store file
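
    For context, the sketch below fleshes out that pseudocode into a minimal video-only pipeline; the RTSP URL, the output file name, and the choice of "pose landmarks present" as the detection test are placeholders of mine, not details from the question.

    import cv2
    import mediapipe as mp

    RTSP_URL = "rtsp://camera.local/stream"   # placeholder URL

    cap = cv2.VideoCapture(RTSP_URL)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter("detected.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    with mp.solutions.holistic.Holistic() as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            # Treat "pose landmarks found" as "human detected" and keep the frame.
            if results.pose_landmarks:
                writer.write(frame)

    cap.release()
    writer.release()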

    Recently I wanted to add audio recording support. From the research I have done, it is not possible to record audio with OpenCV; it has to be done with FFMPEG or PyAudio.

    I am facing these difficulties.

    1. When a person walks through in front of the camera, it takes maybe less than 2 seconds. By the time the RTSP stream has been read by OpenCV, the human has been detected with MediaPipe, and FFMPEG has been started for the recording, that human will already have walked far, far away. So the FFMPEG method does not seem to work for me.

    2. For the PyAudio method I am currently studying, I need to create two threads, each establishing its own RTSP connection. One thread is for the video, read by OpenCV and MediaPipe. The other thread is for the audio, recorded when the OpenCV thread notices that a human has been detected. I have tried using several devices to read the RTSP stream, and the timestamps they show (watermarked on the video) differ by several seconds. So I doubt that I can get the video from OpenCV and the audio from PyAudio in sync when merging them into one single video.

    Is there any suggestion on how to solve this problem?

    Thanks.
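
    On the synchronisation worry in point 2, one structure sometimes used is a single process with two threads sharing a threading.Event: the video thread sets the event while MediaPipe reports a person, and the audio thread only keeps the chunks captured while the event is set. The skeleton below is a rough sketch of that idea and assumes the audio comes from a local microphone through PyAudio (not from the RTSP stream); the rate, chunk size, and file name are placeholders of mine.

    import threading
    import wave

    import pyaudio

    person_present = threading.Event()   # set by the video thread on detection
    stop = threading.Event()             # set when the capture loop ends

    def audio_worker(path="detected.wav", rate=44100, chunk=1024):
        # Capture microphone audio, keeping only the chunks recorded while
        # the video thread says a person is present.
        pa = pyaudio.PyAudio()
        sample_width = pa.get_sample_size(pyaudio.paInt16)
        stream = pa.open(format=pyaudio.paInt16, channels=1, rate=rate,
                         input=True, frames_per_buffer=chunk)
        frames = []
        while not stop.is_set():
            data = stream.read(chunk, exception_on_overflow=False)
            if person_present.is_set():
                frames.append(data)
        stream.stop_stream()
        stream.close()
        pa.terminate()
        with wave.open(path, "wb") as wf:
            wf.setnchannels(1)
            wf.setsampwidth(sample_width)
            wf.setframerate(rate)
            wf.writeframes(b"".join(frames))

    audio_thread = threading.Thread(target=audio_worker, daemon=True)
    audio_thread.start()

    # In the existing OpenCV/MediaPipe loop:
    #   person_present.set()    when a person is detected in the frame
    #   person_present.clear()  when no person is detected
    #   stop.set(); audio_thread.join()   when capture ends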