
Media (91)

Other articles (42)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the template set; a page for the configuration of the site's home page; a page for the configuration of sectors.
    It also provides an extra page, which only appears when certain plugins are enabled, for controlling their display and specific features (...)

  • Apache-specific configuration

    4 February 2011

    Specific modules
    For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but that improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); mod_expires, to manage the expiration of hits correctly (see this tutorial).
    It is also advisable to add Apache support for the WebM MIME type, as described in this tutorial.
    Creating a (...)
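
    For illustration, a minimal hedged sketch (not from the article) of what those Apache directives might look like once the modules are enabled:

    # Assumes mod_deflate, mod_headers and mod_expires are enabled.
    AddOutputFilterByType DEFLATE text/html text/css application/javascript

    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/png "access plus 1 month"
    </IfModule>

    # Serve WebM files with the correct MIME type.
    AddType video/webm .webm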

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it is enabled, MediaSPIP init automatically sets up a preconfiguration so that the new feature is immediately operational. No separate configuration step is therefore required.

On other sites (3813)

  • DNN OpenCV Python using RTSP always crashes after a few minutes

    1 July 2022, by renaldyks

    Description:

    I want to create a people counter using DNN. The model I'm using is MobileNetSSD. The camera I use is an IPCam from Hikvision, and Python communicates with it over the RTSP protocol.

    The program I made works correctly and has no bugs: when run against a sample video, it does its job well. But when I replaced the video with the IPCam stream, an unknown error appeared.

    Error:

    Sometimes the error is:

    [h264 @ 000001949f7adfc0] error while decoding MB 13 4, bytestream -6
[h264 @ 000001949f825ac0] left block unavailable for requested intra4x4 mode -1
[h264 @ 000001949f825ac0] error while decoding MB 0 17, bytestream 762

    Sometimes the error does not appear and the program is simply killed.

    Update (error):

    After revising the code, I caught the error. The error found is:

[h264 @ 0000019289b3fa80] error while decoding MB 4 5, bytestream -25

    Now I don't know what to do, because I can't find this error anywhere on Google.

    Source code:

    Old code

    This is my very earliest code, before getting suggestions in the comments.

    import time
import cv2
import numpy as np
import math
import threading

print("Load MobileNeteSSD model")

prototxt = "MobileNetSSD_deploy.prototxt"
model = "MobileNetSSD_deploy.caffemodel"

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
           "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
           "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]

net = cv2.dnn.readNetFromCaffe(prototxt, model)

pos_line = 0
offset = 50
car = 0
detected = False
check = 0
prev_frame_time = 0


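# Count a person once per crossing: `detected` stays True until the
# object has been away from the counting line for a few frames.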
def detect():
    global check, car, detected
    check = 0
    if(detected == False):
        car += 1
        detected = True


def center_object(x, y, w, h):
    cx = x + int(w / 2)
    cy = y + int(h / 2)
    return cx, cy


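# Run MobileNetSSD on one frame, draw the detections, and bump the counter
# when a person's center lies within `offset` pixels of the counting line.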
def process_frame_MobileNetSSD(next_frame):
    global car, check, detected

    rgb = cv2.cvtColor(next_frame, cv2.COLOR_BGR2RGB)
    (H, W) = next_frame.shape[:2]

    blob = cv2.dnn.blobFromImage(next_frame, size=(300, 300), ddepth=cv2.CV_8U)
    net.setInput(blob, scalefactor=1.0/127.5, mean=[127.5, 127.5, 127.5])
    detections = net.forward()

    for i in np.arange(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]

        if confidence > 0.5:

            idx = int(detections[0, 0, i, 1])
            if CLASSES[idx] != "person":
                continue

            label = CLASSES[idx]

            box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
            (startX, startY, endX, endY) = box.astype("int")

            center_ob = center_object(startX, startY, endX-startX, endY-startY)
            cv2.circle(next_frame, center_ob, 4, (0, 0, 255), -1)

            if center_ob[0] < (pos_line+offset) and center_ob[0] > (pos_line-offset):
                # car+=1
                detect()

            else:
                check += 1
                if(check >= 5):
                    detected = False

            cv2.putText(next_frame, label+' '+str(round(confidence, 2)),
                        (startX, startY-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
            cv2.rectangle(next_frame, (startX, startY),
                          (endX, endY), (0, 255, 0), 3)

    return next_frame


def PersonDetection_UsingMobileNetSSD():
    cap = cv2.VideoCapture()
    cap.open("rtsp://admin:Admin12345@192.168.100.20:554/Streaming/channels/2/")

    global car,pos_line,prev_frame_time

    frame_count = 0

    while True:
        try:
            time.sleep(0.1)
            new_frame_time = time.time()
            fps = int(1/(new_frame_time-prev_frame_time))
            prev_frame_time = new_frame_time

            ret, next_frame = cap.read()
            w_video = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
            h_video = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
            pos_line = int(h_video/2)-50

            if ret == False: break

            frame_count += 1
            cv2.line(next_frame, (int(h_video/2), 0),
                     (int(h_video/2), int(h_video)), (255, 127, 0), 3)
            next_frame = process_frame_MobileNetSSD(next_frame)

            cv2.rectangle(next_frame, (248,22), (342,8), (0,0,0), -1)
            cv2.putText(next_frame, "Counter : "+str(car), (250, 20),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
            cv2.putText(next_frame, "FPS : "+str(fps), (0, int(h_video)-10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
            cv2.imshow("Video Original", next_frame)
            # print(car)

        except Exception as e:
            print(str(e))

        if cv2.waitKey(1) & 0xFF == ord('q'): 
            break


    print("/MobileNetSSD Person Detector")


    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    # Pass the callable itself: calling the function here would execute it
    # in the main thread before the Thread object is even constructed.
    t1 = threading.Thread(target=PersonDetection_UsingMobileNetSSD)
    t1.start()

    New code

    I have revised my code, and the program still stops grabbing frames. I only revised the PersonDetection_UsingMobileNetSSD() function, and I removed the multithreading I was using. The code ran for about 30 minutes, but after one broken frame it never re-enters the program block guarded by ret == True (a hedged reconnection sketch follows the code below).

def PersonDetection_UsingMobileNetSSD():
    cap = cv2.VideoCapture()
    cap.open("rtsp://admin:Admin12345@192.168.100.20:554/Streaming/channels/2/")

    global car, pos_line, prev_frame_time

    frame_count = 0

    while True:
        try:
            if cap.isOpened():
                ret, next_frame = cap.read()
                if ret:
                    new_frame_time = time.time()
                    fps = int(1/(new_frame_time-prev_frame_time))
                    prev_frame_time = new_frame_time
                    w_video = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
                    h_video = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
                    pos_line = int(h_video/2)-50

                    # next_frame = cv2.resize(next_frame,(720,480),fx=0,fy=0, interpolation = cv2.INTER_CUBIC)

                    frame_count += 1
                    cv2.line(next_frame, (int(h_video/2), 0),
                             (int(h_video/2), int(h_video)), (255, 127, 0), 3)
                    next_frame = process_frame_MobileNetSSD(next_frame)

                    cv2.rectangle(next_frame, (248, 22), (342, 8), (0, 0, 0), -1)
                    cv2.putText(next_frame, "Counter : "+str(car), (250, 20),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
                    cv2.putText(next_frame, "FPS : "+str(fps), (0, int(h_video)-10),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
                    cv2.imshow("Video Original", next_frame)
                    # print(car)
                else:
                    print("Crashed Frame")
            else:
                print("Cap is not open")

        except Exception as e:
            print(str(e))

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    print("/MobileNetSSD Person Detector")

    cap.release()
    cv2.destroyAllWindows()
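
    For reference, an editor's hedged sketch (not from the original post): once FFmpeg's H.264 decoder hits a fatal bitstream error on an RTSP stream, OpenCV's capture often never recovers, so a common workaround is to release and re-open the capture after a run of failed reads. The threshold and sleep below are assumptions to be tuned.

import time
import cv2

RTSP_URL = "rtsp://admin:Admin12345@192.168.100.20:554/Streaming/channels/2/"
MAX_FAILED_READS = 25  # assumed threshold: this many consecutive failures = dead stream

cap = cv2.VideoCapture(RTSP_URL)
failed = 0
while True:
    ret, frame = cap.read()
    if not ret:
        failed += 1
        if failed >= MAX_FAILED_READS:
            # Drop the connection and dial the camera again.
            cap.release()
            time.sleep(2.0)
            cap = cv2.VideoCapture(RTSP_URL)
            failed = 0
        continue
    failed = 0
    # ... hand `frame` to process_frame_MobileNetSSD() as before ...
    cv2.imshow("Video Original", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()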

    Requirements:

    Hardware: Intel i5-1035G1, 8 GB RAM, NVIDIA GeForce MX330

    Software: Python 3.6.2, OpenCV 4.5.1, NumPy 1.16.0

    Questions:

    1. What should I do to fix this error?
    2. What causes this to happen?

    Best regards,

    Thanks

  • Bash loop over directory tree with ffmpeg, wrong spaces? [duplicate]

    17 April 2017, by ArekBulski


    I am trying to iterate over multiple video files that are grouped in directories, and ffmpeg returns errors about the paths; it looks like the paths are broken, ending at the first space. Can you point me to the problem here? The files and directories have spaces in their names.

    $ for f in $(find -type f -name *.mkv); do ffmpeg -n -i "$f" -c:v copy "~/Pobrane/$f" ; done

    The loop splits the paths on spaces and treats each word as a separate entry. How do I fix this? (A sketch of the usual fix follows the output below.)

    $ for f in $(find -type f -name *.mkv); do echo "$f"; done
    ./homeworks/2017-04-03
    00-54-57
    homework3b.mkv
    ./homeworks/2017-04-03
    00-21-36
    homework1.mkv
    ./homeworks/2017-04-03
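
    For reference, the usual fix (an editor's sketch, not part of the original question) is to stop the shell from word-splitting find's output: emit NUL-terminated paths with -print0 and read them back with a NUL delimiter. Two extra pitfalls in the original command: ~ does not expand inside double quotes (hence $HOME below), and ffmpeg must not consume the loop's stdin (hence -nostdin).

    find . -type f -name '*.mkv' -print0 |
    while IFS= read -r -d '' f; do
        mkdir -p "$HOME/Pobrane/$(dirname "$f")"    # recreate the subdirectory tree
        ffmpeg -nostdin -n -i "$f" -c:v copy "$HOME/Pobrane/$f"
    done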
  • ffmpeg failing to add png mask to video: Requested planes not available

    23 August 2022, by Alexandr Sugak

    I am trying to add a PNG mask to make a WebM video round (cutting off its corners).

    The command I am using:

    video="./dist/tmp/19_2.webm"
mask="./dist/tmp/mask.png"
output="./dist/tmp/circle.webm"

ffmpeg -report -c:v libvpx-vp9 -i "${video}" -loop 1 -i "${mask}" -filter_complex " \
[1:v]alphaextract[alf];\
[0:v][alf]alphamerge" \
-c:a copy -c:v libvpx-vp9 "${output}"

    The command output:

    sh ./scripts/video_mask.sh 
ffmpeg started on 2022-08-23 at 17:27:48
Report written to "ffmpeg-20220823-172748.log"
Log level: 48
ffmpeg version 5.1-tessus Copyright (c) 2000-2022 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
[libvpx-vp9 @ 0x7fa072f05140] v1.11.0-30-g888bafc78
    Last message repeated 1 times
Input #0, matroska,webm, from './dist/tmp/19_2.webm':
  Metadata:
    ENCODER         : Lavf59.27.100
  Duration: 00:00:02.77, start: -0.007000, bitrate: 308 kb/s
  Stream #0:0(eng): Video: vp9 (Profile 0), yuva420p(tv, unknown/bt709/iec61966-2-1, progressive), 640x480, SAR 1:1 DAR 4:3, 1k tbr, 1k tbn (default)
    Metadata:
      ALPHA_MODE      : 1
      ENCODER         : Lavc59.37.100 libvpx-vp9
      DURATION        : 00:00:02.744000000
  Stream #0:1(eng): Audio: opus, 48000 Hz, mono, fltp (default)
    Metadata:
      ENCODER         : Lavc59.37.100 libopus
      DURATION        : 00:00:02.767000000
Input #1, png_pipe, from './dist/tmp/mask.png':
  Duration: N/A, bitrate: N/A
  Stream #1:0: Video: png, pal8(pc), 640x480 [SAR 2835:2835 DAR 4:3], 25 fps, 25 tbr, 25 tbn
[libvpx-vp9 @ 0x7fa082f04880] v1.11.0-30-g888bafc78
Stream mapping:
  Stream #0:0 (libvpx-vp9) -> alphamerge
  Stream #1:0 (png) -> alphaextract:default
  alphamerge:default -> Stream #0:0 (libvpx-vp9)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[libvpx-vp9 @ 0x7fa082f04880] v1.11.0-30-g888bafc78
[Parsed_alphaextract_0 @ 0x7fa083906e80] Requested planes not available.
[Parsed_alphaextract_0 @ 0x7fa083906e80] Failed to configure input pad on Parsed_alphaextract_0
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:0
Conversion failed!

    I've tried different combinations of codecs and pixel formats, but I still get the same error. My initial understanding was that ffmpeg fails to find the alpha channel in the input video. With the -c:v libvpx-vp9 option, ffmpeg seems to pick up the yuva420p pixel format correctly, yet it still reports the same error.

    What am I doing wrong?

    Update: if I remove the alphaextract step as suggested in the comments, ffmpeg processes the video indefinitely (the test video is only about 2 seconds long). If I specify the number of frames manually, the output is generated, but the mask does not seem to have any effect:

    ffmpeg -c:v libvpx-vp9 -i "${video}" -loop 1 -i "${mask}" -filter_complex " \
[0:v][1:v]alphamerge" \
-c:a copy -b:v 2000k -vframes 60 "${output}"

     sh ./scripts/video_mask.sh 
ffmpeg version 5.1-tessus Copyright (c) 2000-2022 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
[libvpx-vp9 @ 0x7fdd6b005f00] v1.11.0-30-g888bafc78
    Last message repeated 1 times
Input #0, matroska,webm, from './dist/tmp/19_2.webm':
  Metadata:
    ENCODER         : Lavf59.27.100
  Duration: 00:00:02.77, start: -0.007000, bitrate: 308 kb/s
  Stream #0:0(eng): Video: vp9 (Profile 0), yuva420p(tv, unknown/bt709/iec61966-2-1, progressive), 640x480, SAR 1:1 DAR 4:3, 1k tbr, 1k tbn (default)
    Metadata:
      ALPHA_MODE      : 1
      ENCODER         : Lavc59.37.100 libvpx-vp9
      DURATION        : 00:00:02.744000000
  Stream #0:1(eng): Audio: opus, 48000 Hz, mono, fltp (default)
    Metadata:
      ENCODER         : Lavc59.37.100 libopus
      DURATION        : 00:00:02.767000000
Input #1, png_pipe, from './dist/tmp/mask.png':
  Duration: N/A, bitrate: N/A
  Stream #1:0: Video: png, pal8(pc), 640x480 [SAR 2835:2835 DAR 4:3], 25 fps, 25 tbr, 25 tbn
File './dist/tmp/circle.webm' already exists. Overwrite? [y/N] y
[libvpx-vp9 @ 0x7fdd6b007ec0] v1.11.0-30-g888bafc78
Stream mapping:
  Stream #0:0 (libvpx-vp9) -> alphamerge
  Stream #1:0 (png) -> alphamerge
  alphamerge:default -> Stream #0:0 (libvpx-vp9)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[libvpx-vp9 @ 0x7fdd6b007ec0] v1.11.0-30-g888bafc78
[libvpx-vp9 @ 0x7fdd6b024580] v1.11.0-30-g888bafc78
Output #0, webm, to './dist/tmp/circle.webm':
  Metadata:
    encoder         : Lavf59.27.100
  Stream #0:0: Video: vp9, yuva420p(tv, unknown/bt709/iec61966-2-1, progressive), 640x480 [SAR 1:1 DAR 4:3], q=2-31, 2000 kb/s, 1k fps, 1k tbn
    Metadata:
      encoder         : Lavc59.37.100 libvpx-vp9
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
  Stream #0:1(eng): Audio: opus, 48000 Hz, mono, fltp (default)
    Metadata:
      ENCODER         : Lavc59.37.100 libopus
      DURATION        : 00:00:02.767000000
frame=   60 fps= 16 q=2.0 Lsize=     285kB time=00:00:01.98 bitrate=1175.5kbits/s speed=0.526x    
video:270kB audio:11kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.529399%
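
    For reference, an editor's hedged note (not from the original thread): "Requested planes not available" is consistent with the mask decoding as pal8, a palette pixel format with no alpha plane for alphaextract to read. A workaround, assuming the PNG is a luminance (black-and-white) mask, is to normalize its pixel format first and let alphamerge consume it directly; -shortest ends the encode despite the looped image input:

ffmpeg -c:v libvpx-vp9 -i "${video}" -loop 1 -i "${mask}" -filter_complex " \
[1:v]format=gray[msk];\
[0:v][msk]alphamerge" \
-c:a copy -c:v libvpx-vp9 -shortest "${output}"

    If the PNG instead carries a real alpha channel, format=rgba,alphaextract in place of format=gray should make the original two-step pipeline work.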