
Media (29)
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (36)
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
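As a rough sketch of the kind of conversions described above, the three web targets map onto ffmpeg invocations like the ones built below. The codec flags are illustrative defaults, not MediaSPIP's actual encoding settings:

```python
import subprocess  # each command below could be executed with subprocess.call(cmd)

def encode_web_formats(src):
    """Build one ffmpeg command per web-friendly target.

    Illustrative flags only; MediaSPIP's real settings are not shown here.
    """
    jobs = {
        'out.mp4':  ['-c:v', 'libx264', '-c:a', 'aac'],          # HTML5 + Flash
        'out.webm': ['-c:v', 'libvpx', '-c:a', 'libvorbis'],     # HTML5
        'out.ogv':  ['-c:v', 'libtheora', '-c:a', 'libvorbis'],  # HTML5
    }
    return [['ffmpeg', '-i', src] + flags + [dst]
            for dst, flags in jobs.items()]

cmds = encode_web_formats('upload.avi')
# e.g. subprocess.call(cmds[0]) would produce out.mp4
```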
Support for all media types
10 April 2011
Unlike many modern software packages and document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution | Version name | Version number
Debian | Squeeze | 6.x.x
Debian | Wheezy | 7.x.x
Debian | Jessie | 8.x.x
Ubuntu | The Precise Pangolin | 12.04 LTS
Ubuntu | The Trusty Tahr | 14.04
If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
On other sites (6678)
-
Handling subprocess file output
14 January 2019, by Sandip Kumar
I am working on a project in which, at some point, a user uploads a video and in the backend I have to generate a thumbnail for that video. I chose ffmpeg for this purpose, and the system runs in a Django environment.
This is my view function:

def upload(request):
    if request.method == 'POST':
        form = DocumentForm(request.POST, request.FILES)
        if form.is_valid():
            newdoc = Document(docfile=request.FILES['docfile'])
            filename = str(request.FILES['docfile'].name)
            newdoc.save()
            # Generates thumbnail (the original's undefined "up" is presumably "filename")
            op = subprocess.call(['ffmpeg', '-i', 'media/private/' + filename,
                                  '-ss', '00:00:03.000', '-vframes', '1', 'abc.jpg'])
            newdoc.thumbnail = op
            newdoc.save()
            return HttpResponseRedirect(reverse('list'))
    else:
        form = DocumentForm()

The Document model has the thumbnail field. I know that subprocess.call() returns a return code, not an object, so using the op variable is useless. My question is: how do I save the generated thumbnail in the model?
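Since call() only returns the exit code, the usual pattern is to let ffmpeg write the thumbnail to a known path, check the exit code, and then attach the file on disk to the model. A minimal sketch of that pattern follows; the ffmpeg invocation is swapped for a portable one-liner so the sketch runs without ffmpeg installed, and all names are illustrative:

```python
import os
import subprocess
import sys
import tempfile

def make_thumbnail(src_path, thumb_path):
    """Run the external encoder and return the path of the file it wrote.

    subprocess.call() only gives back the exit code; the thumbnail itself
    must be picked up from disk afterwards. The real command would be:
        subprocess.call(['ffmpeg', '-i', src_path, '-ss', '00:00:03.000',
                         '-vframes', '1', thumb_path])
    A tiny Python one-liner stands in for ffmpeg here so the sketch runs
    anywhere.
    """
    code = subprocess.call([sys.executable, '-c',
                            "open(%r, 'wb').write(b'JPEGDATA')" % thumb_path])
    if code != 0:
        raise RuntimeError('thumbnail generation failed')
    return thumb_path

thumb = make_thumbnail('input.mp4', os.path.join(tempfile.mkdtemp(), 'abc.jpg'))
data = open(thumb, 'rb').read()
```

In the view, instead of `newdoc.thumbnail = op`, one would attach the file itself, e.g. `newdoc.thumbnail.save('abc.jpg', File(open(thumb, 'rb')))` with `File` from `django.core.files`, assuming `thumbnail` is a FileField/ImageField.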
-
FPS goes down while performing object detection using TensorFlow on multiple threads
14 May 2020, by Apoorv Mishra
I am trying to run object detection on multiple cameras, using an SSD MobileNet v2 frozen graph with TensorFlow and OpenCV. I implemented threading to spawn a separate thread for each camera, but with multiple video streams I'm getting low FPS.

Note: the model works fine with a single stream. Also, when the number of detected objects per frame is low, I get decent FPS.

My threading logic is working fine. I suspect I'm misusing the graph and session. Please let me know what I am doing wrong.

with tf.device('/GPU:0'):
    with detection_graph.as_default():
        with tf.Session(config=config, graph=detection_graph) as sess:
            while True:
                # Read frame from camera
                raw_image = pipe.stdout.read(IMG_H * IMG_W * 3)
                image = np.fromstring(raw_image, dtype='uint8')  # convert read bytes to np
                image_np = image.reshape((IMG_H, IMG_W, 3))
                img_copy = image_np[170:480, 100:480]
                # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
                image_np_expanded = np.expand_dims(img_copy, axis=0)
                # Extract image tensor
                image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
                # Extract detection boxes
                boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
                # Extract detection scores
                scores = detection_graph.get_tensor_by_name('detection_scores:0')
                # Extract detection classes
                classes = detection_graph.get_tensor_by_name('detection_classes:0')
                # Extract number of detections
                num_detections = detection_graph.get_tensor_by_name('num_detections:0')
                # Actual detection.
                (boxes, scores, classes, num_detections) = sess.run(
                    [boxes, scores, classes, num_detections],
                    feed_dict={image_tensor: image_np_expanded})
                # Visualization of the results of a detection.
                boxes = np.squeeze(boxes)
                scores = np.squeeze(scores)
                classes = np.squeeze(classes).astype(np.int32)

                box_to_display_str_map = collections.defaultdict(list)
                box_to_color_map = collections.defaultdict(str)

                for i in range(min(max_boxes_to_draw, boxes.shape[0])):
                    if scores is None or scores[i] > threshold:
                        box = tuple(boxes[i].tolist())
                        if classes[i] in six.viewkeys(category_index):
                            class_name = category_index[classes[i]]['name']
                            display_str = str(class_name)
                            display_str = '{}: {}%'.format(display_str, int(100 * scores[i]))
                            box_to_display_str_map[box].append(display_str)
                            box_to_color_map[box] = STANDARD_COLORS[
                                classes[i] % len(STANDARD_COLORS)]
                for box, color in box_to_color_map.items():
                    ymin, xmin, ymax, xmax = box
                    flag = jam_check(xmin, ymin, xmax, ymax, frame_counter)
                    draw_bounding_box_on_image_array(
                        img_copy,
                        ymin,
                        xmin,
                        ymax,
                        xmax,
                        color=color,
                        thickness=line_thickness,
                        display_str_list=box_to_display_str_map[box],
                        use_normalized_coordinates=use_normalized_coordinates)

                image_np[170:480, 100:480] = img_copy

                image_np = image_np[..., ::-1]

                pipe.stdout.flush()

                yield cv2.imencode('.jpg', image_np,
                                   [int(cv2.IMWRITE_JPEG_QUALITY), 50])[1].tobytes()

I've set the config as:

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction=0.4
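
Two things in the loop above are worth flagging: the get_tensor_by_name lookups run on every frame even though their results never change, and several threads sharing one GPU will contend inside sess.run. A common pattern is to resolve the fetches once and serialize inference behind a lock; whether that recovers FPS depends on where the contention actually is. The sketch below shows the shape of that pattern in plain Python, with a fake session standing in for tf.Session, so `resolve`/`run` are illustrative names, not the TensorFlow API:

```python
import threading

class SharedDetector:
    """Resolve expensive lookups once; serialize access to the shared device."""
    def __init__(self, session):
        self.session = session
        self.lock = threading.Lock()  # one GPU -> one inference at a time
        # Resolve tensor handles once instead of on every frame.
        self.fetches = session.resolve(['detection_boxes:0',
                                        'detection_scores:0'])

    def detect(self, frame):
        with self.lock:  # camera threads queue here, not inside the device
            return self.session.run(self.fetches, frame)

class FakeSession:
    """Stand-in for tf.Session so the sketch runs without TensorFlow."""
    def resolve(self, names):
        return list(names)
    def run(self, fetches, frame):
        return {name: frame for name in fetches}

det = SharedDetector(FakeSession())
results = []
threads = [threading.Thread(target=lambda i=i: results.append(det.detect(i)))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```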



-
Unable to upload video from iPhone using paperclip-ffmpeg in Ruby on Rails
13 May 2015, by sank
When I upload a video from an iPhone, I get the error below. It works perfectly when I upload the same video from the desktop.

Command :: PATH=/usr/bin/:$PATH; file -b --mime '/tmp/6da355e988ec841811d8803dfd5cf44c20150513-8103-b4lkam.MOV'
[paperclip] Content Type Spoof: Filename IMG_2637.MOV (["video/quicktime"]), content type discovered from file command: inode/x-empty. See documentation to allow this combination.
(0.6ms) ROLLBACK
Completed 400 Bad Request in 58ms (Views: 0.9ms | ActiveRecord: 5.0ms)

I used the gems "paperclip", "> 4.1" and "paperclip-ffmpeg", and in the video model I used:
validates_attachment_content_type :student_video, content_type: /\Avideo\/.*\Z/
I also tried:

validates_attachment_content_type :student_video, :content_type => ['video/x-msvideo', 'video/avi', 'video/quicktime', 'video/3gpp', 'video/x-ms-wmv', 'video/mp4', 'flv-application/octet-stream', 'video/x-flv', 'video/mpeg', 'video/mpeg4', 'video/x-la-asf', 'video/x-ms-asf']

but I am getting the same error. I do not know what the error is. Please help me. Thanks in advance.
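For what it's worth, the log already points past the validation: `file` reported `inode/x-empty`, which is the MIME type file(1) assigns to a zero-byte file, so the iPhone upload apparently arrived empty and paperclip flagged the mismatch with the `.MOV` extension as spoofing. The validation pattern itself would accept a QuickTime file, as this quick check shows (written in Python here, since the `\A`/`\Z` anchor syntax is shared with Ruby):

```python
import re

# The pattern from the model's validates_attachment_content_type.
video_type = re.compile(r'\Avideo/.*\Z')

accepts_quicktime = bool(video_type.search('video/quicktime'))  # what the validation allows
accepts_empty = bool(video_type.search('inode/x-empty'))        # what `file` actually reported
```

So the whitelist is not the problem; the thing to investigate is why the iPhone upload reaches the server as an empty file.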