Other articles (70)

  • Installation in farm mode

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP-type sites while installing their functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with SPIP's mechanics, unlike the standalone version, which requires no real specific knowledge, since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests for the creation of a shared instance at user sign-up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (10206)

  • FFMPEG making requests for each frame when decoding a stream, slow performance

    3 July 2020, by Byti

    I am having an issue playing MOV camera-captured files from an iPhone. My FFmpeg implementation has no problem playing most file formats; this issue is exclusive to camera-captured MOV files.

    When trying to open the file, I can see in the logs that many requests are made, each decoding only a single frame before a new request is issued, so the video buffers extremely slowly: it takes roughly a minute to buffer a few seconds of video.

    Another thing to mention: the very same problematic file plays without issue locally. The problem only appears when decoding while streaming.

    I compiled my code on Xcode 11, iOS SDK 13, with the CocoaPods package mobile-ffmpeg-https 4.2.

    Here is a rough representation of my code; it's pretty standard:

    1. Here is how I open AVFormatContext:

    AVFormatContext *context = avformat_alloc_context();
    context->interrupt_callback.callback = FFMPEGFormatContextIOHandler_IO_CALLBACK;
    context->interrupt_callback.opaque = (__bridge void *)(handler);

    av_log_set_level(AV_LOG_TRACE);

    int result = avformat_open_input(&context, [request.urlAsString UTF8String], NULL, NULL);

    if (result != 0) {
        if (context != NULL) {
            avformat_free_context(context);
        }

        return nil;
    }

    result = avformat_find_stream_info(context, NULL);

    if (result < 0) {
        avformat_close_input(&context);
        return nil;
    }


    2. The video decoder is opened like so (the audio decoder is nearly identical):

    AVCodecParameters *params = context->streams[streamIndex]->codecpar;
    AVCodec *codec = avcodec_find_decoder(params->codec_id);

    if (codec == NULL) {
        return NULL;
    }

    AVCodecContext *codecContext = avcodec_alloc_context3(codec);

    if (codecContext == NULL) {
        return NULL;
    }

    codecContext->thread_count = 6;

    int result = avcodec_parameters_to_context(codecContext, params);

    if (result < 0) {
        avcodec_free_context(&codecContext);
        return NULL;
    }

    result = avcodec_open2(codecContext, codec, NULL);

    if (result < 0) {
        avcodec_free_context(&codecContext);
        return NULL;
    }


    3. I read the data from the server like so:

    AVPacket packet;

    int result = av_read_frame(formatContext, &packet);

    if (result == 0) {
        avcodec_send_packet(codecContext, &packet);

        // .... decode ....

        av_packet_unref(&packet);
    }

    Logs after opening the decoders:

    // [tls] Request is made here
// [tls] Request response headers are here
Probing mov,mp4,m4a,3gp,3g2,mj2 score:100 size:2048
Probing mp3 score:1 size:2048
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x115918e00] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x115918e00] type:'ftyp' parent:'root' sz: 20 8 23077123
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x115918e00] ISO: File Type Major Brand: qt  
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x115918e00] type:'wide' parent:'root' sz: 8 28 23077123
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x115918e00] type:'mdat' parent:'root' sz: 23066642 36 23077123
// [tls] Request is made here
// [tls] Request response headers are here
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x115918e00] stream 0, sample 4, dts 133333
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x115918e00] stream 1, sample 48, dts 1114558
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x115918e00] stream 2, sample 1, dts 2666667
[h264 @ 0x116080200] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 1
// [tls] Request is made here
// [tls] Request response headers are here
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x115918e00] stream 0, sample 4, dts 133333
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x115918e00] stream 1, sample 48, dts 1114558
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x115918e00] stream 2, sample 1, dts 2666667
[h264 @ 0x116080200] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 1
// [tls] Request is made here
// [tls] Request response headers are here
// ...

    These are some warnings I found in the log:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x11c030800] interrupted
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x11c030800] stream 0: start_time: 0.000 duration: 11.833
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x11c030800] stream 1: start_time: 0.000 duration: 11.832
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x11c030800] stream 2: start_time: 0.000 duration: 11.833
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x11c030800] stream 3: start_time: 0.000 duration: 11.833
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x11c030800] format: start_time: 0.000 duration: 11.833 bitrate=15601 kb/s
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x11c030800] Could not find codec parameters for stream 0 (Video: h264, 1 reference frame (avc1 / 0x31637661), none(bt709, left), 1920x1080, 1/1200, 15495 kb/s): unspecified pixel format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x11c030800] After avformat_find_stream_info() pos: 23077123 bytes read:16293 seeks:1 frames:0

    Also, when calling avformat_open_input(...), two GET requests are made before it returns. Notice the "Probing mp3 score:1" line, which does not appear for other MOV files or any other file.

    I have tried different versions of FFmpeg, I have tried playing with the stream's delays, and I tried removing my custom interrupt callback; nothing has worked.

    The code works fine with any other videos I have tested (mp4, mkv, avi).

    Metadata of the test file:

    Metadata:
    major_brand     : qt  
    minor_version   : 0
    compatible_brands: qt  
    creation_time   : 2019-04-14T08:17:03.000000Z
    com.apple.quicktime.make: Apple
    com.apple.quicktime.model: iPhone 7
    com.apple.quicktime.software: 12.2
    com.apple.quicktime.creationdate: 2019-04-14T11:17:03+0300
  Duration: 00:00:16.83, bitrate: N/A
    Stream #0:0(und), 0, 1/600: Video: h264, 1 reference frame (avc1 / 0x31637661), none(bt709), 1920x1080 (0x0), 0/1, 15301 kb/s, 30 fps, 30 tbr, 600 tbn (default)
    Metadata:
      creation_time   : 2019-04-14T08:17:03.000000Z
      handler_name    : Core Media Video
      encoder         : H.264
    Stream #0:1(und), 0, 1/44100: Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, 100 kb/s (default)
    Metadata:
      creation_time   : 2019-04-14T08:17:03.000000Z
      handler_name    : Core Media Audio
    Stream #0:2(und), 0, 1/600: Data: none (mebx / 0x7862656D), 0/1, 0 kb/s (default)
    Metadata:
      creation_time   : 2019-04-14T08:17:03.000000Z
      handler_name    : Core Media Metadata
    Stream #0:3(und), 0, 1/600: Data: none (mebx / 0x7862656D), 0/1, 0 kb/s (default)
    Metadata:
      creation_time   : 2019-04-14T08:17:03.000000Z
      handler_name    : Core Media Metadata


  • FPS goes down while performing object detection using TensorFlow on multiple threads

    14 May 2020, by Apoorv Mishra

    I am trying to run object detection on multiple cameras, using an SSD MobileNet v2 frozen graph to perform the detection with TensorFlow and OpenCV. I implemented threading to spawn a separate thread for each camera, but with multiple video streams I am getting low FPS.

    Note: the model works fine with a single stream. Also, when the number of detected objects across frames is low, I get decent FPS.

    My threading logic is working fine. I suspect I am misusing the graph and session. Please let me know what I am doing wrong.

    with tf.device('/GPU:0'), detection_graph.as_default():
        with tf.Session(config=config, graph=detection_graph) as sess:
            while True:
                # Read frame from camera
                raw_image = pipe.stdout.read(IMG_H*IMG_W*3)
                image =  np.fromstring(raw_image, dtype='uint8')     # convert read bytes to np
                image_np = image.reshape((IMG_H,IMG_W,3))
                img_copy = image_np[170:480, 100:480]
                # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
                image_np_expanded = np.expand_dims(img_copy, axis=0)
                # Extract image tensor
                image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
                # Extract detection boxes
                boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
                # Extract detection scores
                scores = detection_graph.get_tensor_by_name('detection_scores:0')
                # Extract detection classes
                classes = detection_graph.get_tensor_by_name('detection_classes:0')
                # Extract number of detections
                num_detections = detection_graph.get_tensor_by_name(
                        'num_detections:0')
                # Actual detection.
                (boxes, scores, classes, num_detections) = sess.run(
                        [boxes, scores, classes, num_detections],
                        feed_dict={image_tensor: image_np_expanded})
                # Visualization of the results of a detection.
                boxes = np.squeeze(boxes)
                scores = np.squeeze(scores)
                classes = np.squeeze(classes).astype(np.int32)

                box_to_display_str_map = collections.defaultdict(list)
                box_to_color_map = collections.defaultdict(str)

                for i in range(min(max_boxes_to_draw, boxes.shape[0])):
                    if scores is None or scores[i] > threshold:
                        box = tuple(boxes[i].tolist())
                        if classes[i] in six.viewkeys(category_index):
                            class_name = category_index[classes[i]]['name']
                        display_str = str(class_name)
                        display_str = '{}: {}%'.format(display_str, int(100 * scores[i]))
                        box_to_display_str_map[box].append(display_str)
                        box_to_color_map[box] = STANDARD_COLORS[
                                classes[i] % len(STANDARD_COLORS)]
                for box,color in box_to_color_map.items():
                    ymin, xmin, ymax, xmax = box
                    flag = jam_check(xmin, ymin, xmax, ymax, frame_counter)
                    draw_bounding_box_on_image_array(
                            img_copy,
                            ymin,
                            xmin,
                            ymax,
                            xmax,
                            color=color,
                            thickness=line_thickness,
                            display_str_list=box_to_display_str_map[box],
                            use_normalized_coordinates=use_normalized_coordinates)

                image_np[170:480, 100:480] = img_copy

                image_np = image_np[...,::-1]

                pipe.stdout.flush()

                yield cv2.imencode('.jpg', image_np, [int(cv2.IMWRITE_JPEG_QUALITY), 50])[1].tobytes()

    I've set the config as:

    config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction=0.4

  • vf_dnn_processing.c : add dnn backend openvino

    25 May 2020, by Guo, Yejun
    vf_dnn_processing.c : add dnn backend openvino

    We can try with the srcnn model from sr filter.
    1) get srcnn.pb model file, see filter sr
    2) convert srcnn.pb into openvino model with command:
    python mo_tf.py --input_model srcnn.pb --data_type=FP32 --input_shape [1,960,1440,1] --keep_shape_ops

    See the script at https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer
    We'll see srcnn.xml and srcnn.bin at current path, copy them to the
    directory where ffmpeg is.

    I have also uploaded the model files at https://github.com/guoyejun/dnn_processing/tree/master/models

    3) run with openvino backend:
    ffmpeg -i input.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg
    (The input.jpg resolution is 720*480)

    Also copying here the logs from my skylake machine (4 cpus), run locally with the openvino backend
    and the tensorflow backend, just for your information.

    $ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.tf.mp4

    frame= 343 fps=2.1 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.0706x
    video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.517637%
    [aac @ 0x2f5db80] Qavg: 454.353
    real 2m46.781s
    user 9m48.590s
    sys 0m55.290s

    $ time ./ffmpeg -i 480p.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.mp4

    frame= 343 fps=4.0 q=31.0 Lsize= 2172kB time=00:00:11.76 bitrate=1511.9kbits/s speed=0.137x
    video:1973kB audio:187kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.517640%
    [aac @ 0x31a9040] Qavg: 454.353
    real 1m25.882s
    user 5m27.004s
    sys 0m0.640s
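
    For reference, the wall-clock numbers above work out to roughly a 1.9x speedup for the openvino backend (my arithmetic, not part of the original log):

```latex
\text{real-time speedup} \approx \frac{166.781\ \text{s}}{85.882\ \text{s}} \approx 1.94,
\qquad
\text{fps ratio} \approx \frac{4.0\ \text{fps}}{2.1\ \text{fps}} \approx 1.9
```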

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
    Signed-off-by: Pedro Arthur <bygrandao@gmail.com>

    • [DH] doc/filters.texi
    • [DH] libavfilter/vf_dnn_processing.c