Advanced search

Media (1)

Keyword: - Tags -/punk

Other articles (32)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects or individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

On other sites (5968)

  • FPS goes down while performing object detection using TensorFlow on multiple threads

    14 May 2020, by Apoorv Mishra

    I am trying to run object detection on multiple cameras. I am using an SSD MobileNet v2 frozen graph to perform object detection with TensorFlow and OpenCV. I have implemented threading to spawn a separate thread for each camera, but in doing so I'm getting low FPS with multiple video streams.

    



    Note: the model works fine with a single stream. Also, when the number of detected objects across frames is low, I get decent FPS.

    



    My threading logic is working fine. I guess the issue is with how I'm using the graph and session. Please let me know what I'm doing wrong.

    



    with tf.device('/GPU:0'):
        with detection_graph.as_default():
            with tf.Session(config=config, graph=detection_graph) as sess:
                while True:
                    # Read one raw frame from the camera pipe
                    raw_image = pipe.stdout.read(IMG_H * IMG_W * 3)
                    image = np.frombuffer(raw_image, dtype='uint8')  # bytes -> np array
                    image_np = image.reshape((IMG_H, IMG_W, 3))
                    img_copy = image_np[170:480, 100:480]
                    # Expand dimensions since the model expects images of shape [1, None, None, 3]
                    image_np_expanded = np.expand_dims(img_copy, axis=0)
                    # Look up the input tensor and the output tensors
                    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
                    boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
                    scores = detection_graph.get_tensor_by_name('detection_scores:0')
                    classes = detection_graph.get_tensor_by_name('detection_classes:0')
                    num_detections = detection_graph.get_tensor_by_name('num_detections:0')
                    # Actual detection
                    (boxes, scores, classes, num_detections) = sess.run(
                        [boxes, scores, classes, num_detections],
                        feed_dict={image_tensor: image_np_expanded})
                    # Visualization of the results of a detection
                    boxes = np.squeeze(boxes)
                    scores = np.squeeze(scores)
                    classes = np.squeeze(classes).astype(np.int32)

                    box_to_display_str_map = collections.defaultdict(list)
                    box_to_color_map = collections.defaultdict(str)

                    for i in range(min(max_boxes_to_draw, boxes.shape[0])):
                        if scores is None or scores[i] > threshold:
                            box = tuple(boxes[i].tolist())
                            if classes[i] in six.viewkeys(category_index):
                                class_name = category_index[classes[i]]['name']
                            else:
                                class_name = 'N/A'
                            display_str = '{}: {}%'.format(class_name, int(100 * scores[i]))
                            box_to_display_str_map[box].append(display_str)
                            box_to_color_map[box] = STANDARD_COLORS[
                                classes[i] % len(STANDARD_COLORS)]

                    for box, color in box_to_color_map.items():
                        ymin, xmin, ymax, xmax = box
                        flag = jam_check(xmin, ymin, xmax, ymax, frame_counter)
                        draw_bounding_box_on_image_array(
                            img_copy,
                            ymin,
                            xmin,
                            ymax,
                            xmax,
                            color=color,
                            thickness=line_thickness,
                            display_str_list=box_to_display_str_map[box],
                            use_normalized_coordinates=use_normalized_coordinates)

                    image_np[170:480, 100:480] = img_copy
                    # BGR -> RGB by reversing the channel axis
                    image_np = image_np[..., ::-1]

                    pipe.stdout.flush()

                    yield cv2.imencode('.jpg', image_np,
                                       [int(cv2.IMWRITE_JPEG_QUALITY), 50])[1].tobytes()


    



    I've set the config as:

    



    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.4
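    Since the question is about where the graph and session live relative to the threads, here is a stdlib-only sketch (no TensorFlow; all names are hypothetical and detect() stands in for sess.run) of the usual pattern: one lightweight capture thread per camera feeding a bounded queue, and a single consumer that owns the session and does all inference:

```python
import queue
import threading

def capture_frames(cam_id, frame_queue, n_frames):
    """Per-camera capture thread: cheap I/O only, no inference."""
    for i in range(n_frames):
        try:
            frame_queue.put_nowait((cam_id, i))
        except queue.Full:
            # Best-effort drop-oldest so capture never blocks on a slow detector.
            try:
                frame_queue.get_nowait()
            except queue.Empty:
                pass
            frame_queue.put_nowait((cam_id, i))

def detect(frame):
    """Stand-in for sess.run(); only this single consumer touches the model."""
    cam_id, idx = frame
    return (cam_id, idx, 'detections')

def run_pipeline(n_cameras, n_frames):
    frame_queue = queue.Queue(maxsize=8)
    threads = [threading.Thread(target=capture_frames,
                                args=(cam, frame_queue, n_frames))
               for cam in range(n_cameras)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Drain the queue through the single detector.
    results = []
    while not frame_queue.empty():
        results.append(detect(frame_queue.get()))
    return results
```

    The point of the bounded queue is that capture threads stay cheap while only one thread pays the inference cost, instead of every thread competing for the same GPU, graph, and session.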


    


  • ffplay inaccurate cutting with ts and m3u8 files

    4 August 2020, by Lemon Sky

    FFmpeg is able to cut media files accurately, as described in the following article:
Ffmpeg inaccurate cutting with ts and m3u8 files despite resampling audio filter

    


    It uses the flags -copyts -start_at_zero with ffmpeg. These flags are not available for ffplay. Is it possible to achieve frame-accurate seeking for playback?

    


    If not: is it possible to pre-process the file (e.g. embed the effects of -copyts -start_at_zero into the ts file) so that those flags are no longer needed? (Simply re-encoding the video with -copyts -start_at_zero does not yield the right results.)

    


    Expected

    


    ffmpeg -y -ss 00:00:13 -copyts -start_at_zero -i http://tyberis.com/output.m3u8 -af aresample=async=1 -ss 15 -to 20 -map 0:a ts-cut-m3u8.wav
ffplay ts-cut-m3u8.wav


    


    Actual

    


    ffplay http://tyberis.com/output.m3u8 -af aresample=async=1 -ss 15 


    


    The inaccuracy is around 0.3 s in this small example file (it is larger in other files).
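    One pre-processing direction that might be worth trying (a sketch, not a verified fix: whether ffplay then seeks accurately is exactly the open question) is to remux with the timestamp flags rather than re-encode, so the timestamps are rewritten without touching the frames:

```shell
# Remux (no re-encode) while applying the timestamp flags; remuxed.ts is an arbitrary output name.
ffmpeg -copyts -start_at_zero -i http://tyberis.com/output.m3u8 -c copy remuxed.ts
```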

    


  • What is the ffprobe metadata "programs"?

    27 August 2020, by nonayme

    It's all in the title: what are "programs" in the context of video?

    


    As you might expect, any Google search containing "programs" yields many irrelevant results, no matter what search tricks you apply.

    


    I tried some general searches about video metadata, found a glossary about digital & multimedia and a search engine for metadata, but didn't get anywhere.

    


    In the ffprobe documentation it seems to be a sort of container, but what is its use?

    


    Thanks for any reply that lifts this fog ^^
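
    For context, "programs" comes from MPEG-TS: a transport stream can multiplex several programs (e.g. several TV channels), each grouping its own video, audio, and data streams. ffprobe can list them directly; input.ts here is a placeholder file name:

```shell
# List each program and the streams it groups, as JSON.
ffprobe -v error -show_programs -print_format json input.ts
```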