Advanced search

Media (0)

Keyword: - Tags -/publication

No media matching your criteria is available on this site.

Other articles (67)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you need to install all of the software dependencies on the server manually.
    If you want to use this archive for a "farm mode" installation, you will also need to make further modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-select fields. See the following two images for a comparison.
    To do so, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen), enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)

On other sites (6194)

  • How to obtain time markers for video splitting using python/OpenCV

    10 November 2018, by Bleddyn Raw-Rees

    I’m working on my MSc project, which is researching the automated deletion of low-value content in digital file stores. I’m specifically looking at the sort of long shots that often occur in natural history filming, whereby a static camera is left rolling in order to capture the rare snow leopard or whatever. These shots may contain only some 60 seconds of useful content, with perhaps several hours of worthless content on either side.

    As a first step I have a simple motion detection program from Adrian Rosebrock’s tutorial [http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/#comment-393376]. Next I intend to use FFmpeg to split the video.

    What I would like help with is how to obtain in and out points based on the first and last times at which motion is detected in the video (a sketch of one possible approach follows the code below).

    Here is the code, should you wish to see it:

    # import the necessary packages
    import argparse
    import datetime
    import imutils
    import time
    import cv2

    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-v", "--video", help="path to the video file")
    ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
    args = vars(ap.parse_args())

    # if the video argument is None, then we are reading from webcam
    if args.get("video", None) is None:
       camera = cv2.VideoCapture(0)
       time.sleep(0.25)

    # otherwise, we are reading from a video file
    else:
       camera = cv2.VideoCapture(args["video"])

    # initialize the first frame in the video stream
    firstFrame = None

    # loop over the frames of the video
    while True:
       # grab the current frame and initialize the occupied/unoccupied
       # text
       (grabbed, frame) = camera.read()
       text = "Unoccupied"

       # if the frame could not be grabbed, then we have reached the end
       # of the video
       if not grabbed:
           break

       # resize the frame, convert it to grayscale, and blur it
       frame = imutils.resize(frame, width=500)
       gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
       gray = cv2.GaussianBlur(gray, (21, 21), 0)

       # if the first frame is None, initialize it
       if firstFrame is None:
           firstFrame = gray
           continue

       # compute the absolute difference between the current frame and
       # first frame
       frameDelta = cv2.absdiff(firstFrame, gray)
       thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

       # dilate the thresholded image to fill in holes, then find contours
       # on thresholded image
       thresh = cv2.dilate(thresh, None, iterations=2)
       cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
       cnts = imutils.grab_contours(cnts)  # normalises the return value across OpenCV 2/3/4

       # loop over the contours
       for c in cnts:
           # if the contour is too small, ignore it
           if cv2.contourArea(c) < args["min_area"]:
               continue

           # compute the bounding box for the contour, draw it on the frame,
           # and update the text
           (x, y, w, h) = cv2.boundingRect(c)
           cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
           text = "Occupied"

       # draw the text and timestamp on the frame
       cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
           cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
       cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
           (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

       # show the frame and record if the user presses a key
       cv2.imshow("Security Feed", frame)
       cv2.imshow("Thresh", thresh)
       cv2.imshow("Frame Delta", frameDelta)
       key = cv2.waitKey(1) & 0xFF

       # if the `q` key is pressed, break from the loop
       if key == ord("q"):
           break

    # cleanup the camera and close any open windows
    camera.release()
    cv2.destroyAllWindows()
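
    Not part of the original question, but as a sketch of one possible approach: for a file-based capture, OpenCV reports the position of the current frame in milliseconds via cv2.CAP_PROP_POS_MSEC, so the detection loop can record the first and last timestamps at which a sufficiently large contour appears, and ffmpeg can then copy just that window. The helper names below (find_motion_window, split_video) are illustrative, not from the tutorial.

    # Sketch: locate the motion window in a video file, then cut it out with ffmpeg.
    import subprocess

    import cv2
    import imutils

    def find_motion_window(path, min_area=500):
        # returns (first_ms, last_ms), or (None, None) if no motion is found
        camera = cv2.VideoCapture(path)
        firstFrame = None
        first_ms = last_ms = None
        while True:
            (grabbed, frame) = camera.read()
            if not grabbed:
                break
            # approximate timestamp of the current frame, in milliseconds
            pos_ms = camera.get(cv2.CAP_PROP_POS_MSEC)
            gray = cv2.cvtColor(imutils.resize(frame, width=500), cv2.COLOR_BGR2GRAY)
            gray = cv2.GaussianBlur(gray, (21, 21), 0)
            if firstFrame is None:
                firstFrame = gray
                continue
            # same differencing logic as the loop above
            thresh = cv2.threshold(cv2.absdiff(firstFrame, gray), 25, 255,
                                   cv2.THRESH_BINARY)[1]
            thresh = cv2.dilate(thresh, None, iterations=2)
            cnts = imutils.grab_contours(cv2.findContours(
                thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE))
            if any(cv2.contourArea(c) >= min_area for c in cnts):
                if first_ms is None:
                    first_ms = pos_ms   # in point: first frame with motion
                last_ms = pos_ms        # out point: keeps moving forward
        camera.release()
        return first_ms, last_ms

    def split_video(path, out_path, first_ms, last_ms, padding_s=2.0):
        # stream-copy the window (fast, but cuts snap to keyframes;
        # drop "-c copy" and re-encode if frame accuracy matters)
        subprocess.run(["ffmpeg", "-i", path,
                        "-ss", str(max(first_ms / 1000.0 - padding_s, 0)),
                        "-to", str(last_ms / 1000.0 + padding_s),
                        "-c", "copy", out_path], check=True)

    if __name__ == "__main__":
        import sys
        start, end = find_motion_window(sys.argv[1])
        if start is not None:
            split_video(sys.argv[1], "trimmed.mp4", start, end)

    A couple of seconds of padding on either side is worth keeping, since the differencing only fires once the subject is already moving.
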
  • How to decode in C a stream from this noname almost-UVC grayscale camera

    18 January 2019, by scriptfoo

    Edit: I found the cause. The stream always begins with something that is not a JPEG; only after it does a normal MJPEG stream follow. Interestingly, not all of the small example V4L2/MJPEG decoders divide what the camera produces properly into frames. Something called capturev4l2.c is a rare example of doing it properly. Possibly there is some detail that decides whether the camera’s bugginess is worked around or not (a sketch of the frame-splitting idea follows this question).

    I have a noname almost-UVC-compliant camera (it fails several compatibility tests). It is a relatively cheap global-shutter camera, and thus I would like to use it instead of something properly documented. It outputs what is reported (and properly played) by mplayer as:

    Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
    libavcodec version 57.107.100 (external)
    Selected video codec: [ffmjpeg] vfm: ffmpeg (FFmpeg MJPEG)

    ffprobe shows the following:

    [mjpeg @ 0x55c086dcc080] Format mjpeg detected only with low score of 25, misdetection possible!
    Input #0, mjpeg, from '/home/sc/Desktop/a.raw':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 640x480, 25 tbr, 1200k tbn, 25 tbc

    But unlike mplayer, it is unable to play it.

    I tried decode_jpeg_raw from mjpegtools; it complains about the header, which seems to change with each captured stream. So it does not look like a plain unwrapped stream of JPEG images.

    I thus tried 0_hello_world.c from libavcodec/libavformat, but it stops at avformat_open_input() with the error "Invalid data found when processing input". A 100-frame sample file is sitting here: a.raw. Do you have any idea how to decode it in C into a plain bitmap?

    The file is grayscale and does not begin with a constant value; guvcview and mplayer are the only players I know of that can decode it without artifacts...
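
    As a minimal illustration of that frame-splitting idea (in Python rather than the C the question asks for, purely as a sketch): assuming the capture is a byte-for-byte concatenation in which every frame begins with the JPEG start-of-image marker FF D8 FF, one can split on that marker and discard the camera’s non-JPEG preamble before the first hit. Splitting on SOI is reasonably safe because FF bytes inside entropy-coded JPEG data are stuffed as FF 00.

    # Sketch: split a raw MJPEG capture (such as a.raw) into JPEG frames,
    # discarding any non-JPEG preamble before the first start-of-image marker.
    import sys

    SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker plus first marker byte

    def split_frames(data):
        # every frame starts at an SOI and runs until the next one
        starts = []
        i = data.find(SOI)
        while i != -1:
            starts.append(i)
            i = data.find(SOI, i + 1)
        for begin, end in zip(starts, starts[1:] + [len(data)]):
            yield data[begin:end]

    if __name__ == "__main__":
        raw = open(sys.argv[1], "rb").read()
        count = 0
        for count, frame in enumerate(split_frames(raw), start=1):
            with open("frame%04d.jpg" % count, "wb") as f:
                f.write(frame)
        print("wrote %d frames" % count)

    Each resulting frame can then be handed to any JPEG decoder; in C, the equivalent would be a memchr()-style scan for the same marker before passing each frame to libjpeg or libavcodec.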

  • avcodec/nvenc: only unregister input resources when absolutely needed

    24 April 2019, by Timo Rothenpieler
    avcodec/nvenc: only unregister input resources when absolutely needed
    

    This reverts nvenc to its old behaviour, which in some very rare edge
    cases performs better.
    The implication of this is that any potential API user who relies on
    nvenc cleaning up every frame's device resources after it is done using
    them will have to change their usage pattern.

    That should not be a problem, since pretty much every normal usage
    pattern already implies that surfaces are reused from a common pool;
    constant re-allocation would also be very expensive.

    • [DH] libavcodec/nvenc.c