
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
Other articles (58)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
Adding notes and captions to images
7 February 2011, by
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing, and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
-
Improvements to the base version
13 September 2013
Nice multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images below for comparison.
To do this, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
On other sites (11937)
-
Logic and lawyers
22 May 2013, by Mans — Law and liberty
Reading about various patent litigation cases, I am struck by the frequency with which common logical fallacies such as the Appeal to Consequences are committed. We shall look at a couple of recent examples. In conjunction with the Federal Circuit ruling in CLS Bank v. Alice Corp., Judge Moore, joined (...)
-
Cannot play H.264 stream using MSE
8 July 2016, by lian
I'm trying to stream H.264 data over websockets to a video element using MSE.
I'm able to send and play a pre-recorded file, but when I try to create the data stream with the following command:
ffmpeg -i rtsp://192.168.1.101/stream.264 -vcodec copy -an -f mp4 -reset_timestamps 1 -movflags empty_moov+default_base_moof+frag_keyframe -loglevel quiet -
I get this in Chrome's media-internals:
render_id: 215
player_id: 13
pipeline_state: kStopped
event: WEBMEDIAPLAYER_CREATED
url: blob:http%3A//127.0.0.1%3A8080/e468616d-5bde-47e0-8e59-1881ddb53abc
debug: ISO BMFF boxes that run to EOS are not supported
error: media::MediaSourceState::Append: stream parsing failed. Data size=727 append_window_start=0 append_window_end=inf
pipeline_error: chunk demuxer: append failed
I think the problem has something to do with the debug line about ISO BMFF boxes, but I have been unable to find any references to this message.
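For reference, the server side of such a pipeline can be sketched roughly as follows. This is a minimal sketch, not the asker's code: it assumes the Python websockets library (version 10 or later) and an ffmpeg binary on the PATH, and the port and chunk size are arbitrary placeholders; only the ffmpeg command line is taken from the question.

import asyncio
import websockets  # assumed dependency: pip install websockets

FFMPEG_CMD = [
    "ffmpeg", "-i", "rtsp://192.168.1.101/stream.264",
    "-vcodec", "copy", "-an",
    "-f", "mp4", "-reset_timestamps", "1",
    # These movflags produce fragmented MP4, which is what MSE
    # expects from a live (non-seekable) source.
    "-movflags", "empty_moov+default_base_moof+frag_keyframe",
    "-loglevel", "quiet",
    "-",  # write the fragmented MP4 to stdout
]

async def relay(websocket):
    # Start ffmpeg and forward its stdout to the client in order;
    # MSE tolerates arbitrary chunk boundaries as long as the bytes
    # are appended in sequence.
    proc = await asyncio.create_subprocess_exec(
        *FFMPEG_CMD, stdout=asyncio.subprocess.PIPE)
    try:
        while True:
            chunk = await proc.stdout.read(4096)
            if not chunk:  # ffmpeg exited or the stream ended
                break
            await websocket.send(chunk)
    finally:
        proc.kill()

async def main():
    # Port 8081 is an arbitrary choice for this sketch.
    async with websockets.serve(relay, "127.0.0.1", 8081):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())

On the browser side, the received chunks would be appended to a SourceBuffer created with an H.264 MIME type such as 'video/mp4; codecs="avc1.42E01E"'; the exact codec string depends on the camera's actual profile and level.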
-
FPS goes down while performing object detection using TensorFlow on multiple threads
14 May 2020, by Apoorv Mishra
I am trying to run object detection on multiple cameras. I am using an SSD MobileNet v2 frozen graph to perform object detection with TensorFlow and OpenCV. I implemented threading to invoke a separate thread for each camera, but in doing so I'm getting low FPS with multiple video streams.

Note: The model works fine with a single stream. Also, when the number of detected objects per frame is low, I get decent FPS.

My threading logic is working fine. I guess the issue is with how I'm using the graph and session. Please let me know what I'm doing wrong.

# Imports for this snippet; pipe, IMG_H, IMG_W, category_index,
# max_boxes_to_draw, threshold, frame_counter, line_thickness,
# use_normalized_coordinates and STANDARD_COLORS are defined
# elsewhere in the script.
import collections

import cv2
import numpy as np
import six
import tensorflow as tf

with tf.device('/GPU:0'):
    with detection_graph.as_default():
        with tf.Session(config=config, graph=detection_graph) as sess:
            while True:
                # Read one raw RGB frame from the pipe
                raw_image = pipe.stdout.read(IMG_H * IMG_W * 3)
                # frombuffer is read-only, so copy to allow in-place edits below
                image = np.frombuffer(raw_image, dtype='uint8').copy()
                image_np = image.reshape((IMG_H, IMG_W, 3))
                img_copy = image_np[170:480, 100:480]
                # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
                image_np_expanded = np.expand_dims(img_copy, axis=0)
                # Extract image tensor
                image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
                # Extract detection boxes
                boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
                # Extract detection scores
                scores = detection_graph.get_tensor_by_name('detection_scores:0')
                # Extract detection classes
                classes = detection_graph.get_tensor_by_name('detection_classes:0')
                # Extract number of detections
                num_detections = detection_graph.get_tensor_by_name('num_detections:0')
                # Actual detection.
                (boxes, scores, classes, num_detections) = sess.run(
                    [boxes, scores, classes, num_detections],
                    feed_dict={image_tensor: image_np_expanded})
                # Visualization of the results of a detection.
                boxes = np.squeeze(boxes)
                scores = np.squeeze(scores)
                classes = np.squeeze(classes).astype(np.int32)

                box_to_display_str_map = collections.defaultdict(list)
                box_to_color_map = collections.defaultdict(str)

                for i in range(min(max_boxes_to_draw, boxes.shape[0])):
                    if scores is None or scores[i] > threshold:
                        box = tuple(boxes[i].tolist())
                        if classes[i] in six.viewkeys(category_index):
                            class_name = category_index[classes[i]]['name']
                            display_str = '{}: {}%'.format(class_name, int(100 * scores[i]))
                            box_to_display_str_map[box].append(display_str)
                            box_to_color_map[box] = STANDARD_COLORS[
                                classes[i] % len(STANDARD_COLORS)]

                for box, color in box_to_color_map.items():
                    ymin, xmin, ymax, xmax = box
                    flag = jam_check(xmin, ymin, xmax, ymax, frame_counter)
                    draw_bounding_box_on_image_array(
                        img_copy,
                        ymin,
                        xmin,
                        ymax,
                        xmax,
                        color=color,
                        thickness=line_thickness,
                        display_str_list=box_to_display_str_map[box],
                        use_normalized_coordinates=use_normalized_coordinates)

                # Write the annotated crop back and reverse channel order (RGB -> BGR)
                image_np[170:480, 100:480] = img_copy
                image_np = image_np[..., ::-1]

                pipe.stdout.flush()

                yield cv2.imencode('.jpg', image_np,
                                   [int(cv2.IMWRITE_JPEG_QUALITY), 50])[1].tobytes()

I've set the config as:

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction=0.4
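
A pattern that often helps in TF 1.x code like this, offered as a hedged sketch rather than a confirmed fix: build the graph and Session once, resolve the tensor handles outside the per-frame loop (the code above calls get_tensor_by_name on every frame), and share the single session across the camera threads, since Session.run is thread-safe. The .pb path and the camera_frames() helper below are hypothetical.

import threading

import numpy as np
import tensorflow as tf

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    # Path to the frozen SSD MobileNet v2 graph; illustrative only.
    with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
sess = tf.Session(graph=detection_graph, config=config)  # one session, shared

# Look up tensors once instead of once per frame.
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
fetches = [detection_graph.get_tensor_by_name(name + ':0')
           for name in ('detection_boxes', 'detection_scores',
                        'detection_classes', 'num_detections')]

def worker(camera_id):
    # camera_frames() is a hypothetical per-camera frame generator.
    for frame in camera_frames(camera_id):
        boxes, scores, classes, num = sess.run(
            fetches, feed_dict={image_tensor: np.expand_dims(frame, axis=0)})
        # ... drawing and jam_check as in the loop above ...

threads = [threading.Thread(target=worker, args=(cam,)) for cam in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()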