Advanced search

Media (0)

Keyword: - Tags -/xmlrpc

No media matching your criteria is available on this site.

Other articles (95)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature is operational right away. No configuration step is therefore required.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those used by the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (9596)

  • Change video file used for saving in table

    9 August 2017, by CR7

    I am trying to convert an uploaded video file to mp4 format. The conversion itself works, but my DB then saves the details of the old video file instead of the converted one, and the converted file is saved under the uploaded file's name and extension. I am using the streamio-ffmpeg gem.

    In my model I have the following code:

    mount_uploader :video, VideoUploader

    and in video_uploader.rb:

    require 'streamio-ffmpeg'

    class VideoUploader < CarrierWave::Uploader::Base
      process :encode_video

      def encode_video
        file = FFMPEG::Movie.new(current_path)
        filename = "#{current_path.chomp(File.extname(current_path))}.mp4"
        file.transcode(filename)
        FileUtils.mv(filename, current_path)
      end
    end

    How can I set current_path to the converted file's path?

  • DNN OpenCV Python using RTSP always crashes after a few minutes

    1 July 2022, by renaldyks

    Description:

    I want to create a people counter using a DNN. The model I'm using is MobileNetSSD. The camera I use is an IPCam from Hikvision. Python communicates with the IPCam over the RTSP protocol.

    The program I made works well and has no bugs; when running a sample video it does its job correctly. But when I replaced the input with the IPCam, an unknown error appeared.

    Error:

    Sometimes the error is:

    [h264 @ 000001949f7adfc0] error while decoding MB 13 4, bytestream -6
[h264 @ 000001949f825ac0] left block unavailable for requested intra4x4 mode -1
[h264 @ 000001949f825ac0] error while decoding MB 0 17, bytestream 762

    Sometimes the error does not appear and the program is killed.
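    The "error while decoding MB ... bytestream" messages above typically point to packets lost on the RTSP stream rather than to the Python code itself. One commonly suggested workaround (an assumption here, not something from the original question) is to ask OpenCV's FFmpeg backend to carry RTSP over TCP instead of UDP:

```python
import os

# Must be set before the first cv2.VideoCapture is created in the process.
# "rtsp_transport;tcp" tells OpenCV's FFmpeg backend to use TCP for RTSP,
# avoiding the UDP packet loss that produces corrupted h264 macroblocks.
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;tcp"
```

    The stream would then be opened with cv2.VideoCapture(url, cv2.CAP_FFMPEG) so the FFmpeg backend (and this option) is actually used.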

    Update:

    After revising the code, I caught the error. The error I found is:

    [h264 @ 0000019289b3fa80] error while decoding MB 4 5, bytestream -25


    Now I don't know what to do, because I can't find this error anywhere on Google.

    Source code:

    Old code

    This is my earliest code, before the suggestions from the comments:

    import time
import cv2
import numpy as np
import math
import threading

print("Load MobileNetSSD model")

prototxt = "MobileNetSSD_deploy.prototxt"
model = "MobileNetSSD_deploy.caffemodel"

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
           "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
           "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]

net = cv2.dnn.readNetFromCaffe(prototxt, model)

pos_line = 0
offset = 50
car = 0
detected = False
check = 0
prev_frame_time = 0


def detect():
    global check, car, detected
    check = 0
    if(detected == False):
        car += 1
        detected = True


def center_object(x, y, w, h):
    cx = x + int(w / 2)
    cy = y + int(h / 2)
    return cx, cy


def process_frame_MobileNetSSD(next_frame):
    global car, check, detected

    rgb = cv2.cvtColor(next_frame, cv2.COLOR_BGR2RGB)
    (H, W) = next_frame.shape[:2]

    blob = cv2.dnn.blobFromImage(next_frame, size=(300, 300), ddepth=cv2.CV_8U)
    net.setInput(blob, scalefactor=1.0/127.5, mean=[127.5, 127.5, 127.5])
    detections = net.forward()

    for i in np.arange(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]

        if confidence > 0.5:

            idx = int(detections[0, 0, i, 1])
            if CLASSES[idx] != "person":
                continue

            label = CLASSES[idx]

            box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
            (startX, startY, endX, endY) = box.astype("int")

            center_ob = center_object(startX, startY, endX-startX, endY-startY)
            cv2.circle(next_frame, center_ob, 4, (0, 0, 255), -1)

            if center_ob[0] < (pos_line+offset) and center_ob[0] > (pos_line-offset):
                # car+=1
                detect()

            else:
                check += 1
                if(check >= 5):
                    detected = False

            cv2.putText(next_frame, label+' '+str(round(confidence, 2)),
                        (startX, startY-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
            cv2.rectangle(next_frame, (startX, startY),
                          (endX, endY), (0, 255, 0), 3)

    return next_frame


def PersonDetection_UsingMobileNetSSD():
    cap = cv2.VideoCapture()
    cap.open("rtsp://admin:Admin12345@192.168.100.20:554/Streaming/channels/2/")

    global car,pos_line,prev_frame_time

    frame_count = 0

    while True:
        try:
            time.sleep(0.1)
            new_frame_time = time.time()
            fps = int(1/(new_frame_time-prev_frame_time))
            prev_frame_time = new_frame_time

            ret, next_frame = cap.read()
            w_video = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
            h_video = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
            pos_line = int(h_video/2)-50

            if ret == False: break

            frame_count += 1
            cv2.line(next_frame, (int(h_video/2), 0),
                     (int(h_video/2), int(h_video)), (255, 127, 0), 3)
            next_frame = process_frame_MobileNetSSD(next_frame)

            cv2.rectangle(next_frame, (248,22), (342,8), (0,0,0), -1)
            cv2.putText(next_frame, "Counter : "+str(car), (250, 20),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
            cv2.putText(next_frame, "FPS : "+str(fps), (0, int(h_video)-10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
            cv2.imshow("Video Original", next_frame)
            # print(car)

        except Exception as e:
            print(str(e))

        if cv2.waitKey(1) & 0xFF == ord('q'): 
            break


    print("/MobileNetSSD Person Detector")


    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    # Pass the function as target=...; calling it directly runs everything in the
    # main thread and hands Thread its return value (None) instead of the function.
    t1 = threading.Thread(target=PersonDetection_UsingMobileNetSSD)
    t1.start()


    New code

    I have revised my code and the program still stops taking frames. I only revised the PersonDetection_UsingMobileNetSSD() function, and I removed the multithreading I was using. The code ran for about 30 minutes, but after a broken frame it never again executes the if ret: block.

def PersonDetection_UsingMobileNetSSD():
    cap = cv2.VideoCapture()
    cap.open("rtsp://admin:Admin12345@192.168.100.20:554/Streaming/channels/2/")

    global car,pos_line,prev_frame_time

    frame_count = 0

    while True:
        try:
            if cap.isOpened():
                ret, next_frame = cap.read()
                if ret:
                    new_frame_time = time.time()
                    fps = int(1/(new_frame_time-prev_frame_time))
                    prev_frame_time = new_frame_time
                    w_video = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
                    h_video = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
                    pos_line = int(h_video/2)-50

                    # next_frame = cv2.resize(next_frame,(720,480),fx=0,fy=0, interpolation = cv2.INTER_CUBIC)

                    if ret == False: break

                    frame_count += 1
                    cv2.line(next_frame, (int(h_video/2), 0),
                             (int(h_video/2), int(h_video)), (255, 127, 0), 3)
                    next_frame = process_frame_MobileNetSSD(next_frame)

                    cv2.rectangle(next_frame, (248,22), (342,8), (0,0,0), -1)
                    cv2.putText(next_frame, "Counter : "+str(car), (250, 20),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
                    cv2.putText(next_frame, "FPS : "+str(fps), (0, int(h_video)-10),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
                    cv2.imshow("Video Original", next_frame)
                    # print(car)
                else:
                    print("Crashed Frame")
            else:
                print("Cap is not open")

        except Exception as e:
            print(str(e))

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    print("/MobileNetSSD Person Detector")

    cap.release()
    cv2.destroyAllWindows()
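    The behaviour described above, where cap.read() keeps returning False after one broken frame, is usually handled by discarding the dead capture and reopening the stream. Below is a minimal, hypothetical sketch of that reconnect pattern; read_frames, open_fn and max_failures are illustrative names, not part of the original code:

```python
import time

def read_frames(open_fn, max_failures=25, reconnect_delay=2.0):
    """Yield frames from a capture object created by open_fn().

    After max_failures consecutive failed reads (e.g. following h264 decode
    errors on an RTSP stream), the dead capture is released and the stream
    is reopened instead of spinning forever on cap.read() returning False.
    """
    cap = open_fn()
    failures = 0
    while cap is not None and cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            failures += 1
            if failures >= max_failures:
                cap.release()
                time.sleep(reconnect_delay)
                cap = open_fn()   # reconnect; open_fn may return None to give up
                failures = 0
            continue
        failures = 0
        yield frame
```

    With OpenCV this could be driven as read_frames(lambda: cv2.VideoCapture(rtsp_url)), reusing the camera URL from the code above.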


    Requirements:

    Hardware : Intel i5-1035G1, RAM 8 GB, NVIDIA GeForce MX330

    Software: Python 3.6.2, OpenCV 4.5.1, NumPy 1.16.0

    Questions:

    1. What should I do to fix this error?
    2. What causes this to happen?

    Best Regards,

    Thanks

  • dnn_backend_native_layer_mathunary : add abs support

    25 May 2020, by Ting Fu
    dnn_backend_native_layer_mathunary : add abs support

    more math unary operations will be added here

    It can be tested with the model file generated by the Python script below:

    import tensorflow as tf
    import numpy as np
    import imageio

    in_img = imageio.imread('input.jpeg')
    in_img = in_img.astype(np.float32)/255.0
    in_data = in_img[np.newaxis, :]

    x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    x1 = tf.subtract(x, 0.5)
    x2 = tf.abs(x1)
    y = tf.identity(x2, name='dnn_out')

    sess=tf.Session()
    sess.run(tf.global_variables_initializer())

    graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

    print("image_process.pb generated, please use \
    path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

    output = sess.run(y, feed_dict={x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))

    Signed-off-by: Ting Fu <ting.fu@intel.com>
    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>

    • [DH] libavfilter/dnn/Makefile
    • [DH] libavfilter/dnn/dnn_backend_native.h
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.h
    • [DH] libavfilter/dnn/dnn_backend_native_layers.c
    • [DH] tools/python/convert_from_tensorflow.py
    • [DH] tools/python/convert_header.py