
Media (91)

Other articles (112)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, Ogv and WebM (formats supported by HTML5), with MP4 also readable by Flash.
    Audio files are encoded as MP3 and Ogg (formats supported by HTML5), with MP3 also readable by Flash.
    Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

On other sites (4357)

  • Dreamcast Track Sizes

    1 March 2015, by Multimedia Mike — Sega Dreamcast

    I’ve been playing around with Sega Dreamcast discs lately. Not playing the games on the DC discs, of course, just studying their structure. To review, the Sega Dreamcast game console used special optical discs named GD-ROMs, where the GD stands for “gigadisc”. They are capable of holding about 1 gigabyte of data.

    You know what’s weird about these discs? Each one manages to actually store a gigabyte of data. Each disc has a CD portion and a GD portion. The CD portion occupies the first 45000 sectors and can be read in any standard CD drive. This area is divided between a brief data track and (usually) a brief audio track.

    The GD region starts at sector 45000. Sometimes, it’s just one humongous data track that consumes the entire GD region. More often, however, the data track is split between the first track and the last track in the region and there are 1 or more audio tracks in between. But the weird thing is, the GD region is always full. I made a study of it (click for a larger, interactive graph):


    [Graph: Dreamcast Track Sizes]

    Some discs put special data or audio bonuses in the CD region for players to discover. But every disc manages to fill out the GD region. I checked up on a lot of those audio tracks that divide the GD data and they’re legitimate music tracks. So what’s the motivation? Why would the data track be split into two pieces like that?

    I eventually realized that I probably answered this question in this blog post from 4 years ago. The read speed from the outside of an optical disc is higher than the inside of the same disc. When I inspect the outer data tracks of some of these discs, sure enough, there seem to be timing-sensitive multimedia FMV files living on the outer stretches.

    One day, I’ll write a utility to take apart the split ISO-9660 filesystem, offset from a weird sector.
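
    As a starting point for such a utility, here is a minimal sketch in C that walks the track list of a .gdi index file (the plain-text table of contents usually shipped with Dreamcast disc dumps) and reports which tracks sit in the CD region versus the GD region. The line layout assumed here (track number, starting LBA, type, sector size, file name, offset) and the convention that type 4 means data and type 0 means audio are assumptions about the .gdi format rather than anything stated in the post:

    #include <stdio.h>
    #include <stdlib.h>

    /* Tally tracks from a .gdi index: anything starting at LBA >= 45000
     * belongs to the GD region, everything below that to the CD region. */
    int main(int argc, char **argv)
    {
        FILE *gdi;
        int track_count;

        if (argc < 2 || !(gdi = fopen(argv[1], "r"))) {
            fprintf(stderr, "usage: %s disc.gdi\n", argv[0]);
            return 1;
        }
        if (fscanf(gdi, "%d", &track_count) != 1) {
            fclose(gdi);
            return 1;
        }
        for (int i = 0; i < track_count; i++) {
            int track_no, type, sector_size;
            long start_lba, offset;
            char filename[256];

            if (fscanf(gdi, "%d %ld %d %d %255s %ld",
                       &track_no, &start_lba, &type, &sector_size,
                       filename, &offset) != 6)
                break;
            printf("track %2d: LBA %6ld, %s, %s region\n",
                   track_no, start_lba,
                   type == 4 ? "data " : "audio",
                   start_lba >= 45000 ? "GD" : "CD");
        }
        fclose(gdi);
        return 0;
    }

    Taking the differences between consecutive starting LBAs in such a listing is roughly how the per-track sizes in the graph above could be reproduced.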

  • dnn/vf_dnn_detect.c: add tensorflow output parse support

    6 May 2021, by Ting Fu
    dnn/vf_dnn_detect.c: add tensorflow output parse support
    

    The testing model is the official TensorFlow model from the GitHub repo; please refer to
    https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
    to download the detection model you need.
    For example, local testing was carried out with 'ssd_mobilenet_v2_coco_2018_03_29.tar.gz' and
    used one image of a dog from
    https://github.com/tensorflow/models/blob/master/research/object_detection/test_images/image1.jpg

    The testing command is:
    ./ffmpeg -i image1.jpg -vf dnn_detect=dnn_backend=tensorflow:input=image_tensor:output=\
    "num_detections&detection_scores&detection_classes&detection_boxes":model=ssd_mobilenet_v2_coco.pb,\
    showinfo -f null -

    We will see a result similar to the one below:
    [Parsed_showinfo_1 @ 0x33e65f0] side data - detection bounding boxes:
    [Parsed_showinfo_1 @ 0x33e65f0] source: ssd_mobilenet_v2_coco.pb
    [Parsed_showinfo_1 @ 0x33e65f0] index: 0, region: (382, 60) -> (1005, 593), label: 18, confidence: 9834/10000.
    [Parsed_showinfo_1 @ 0x33e65f0] index: 1, region: (12, 8) -> (328, 549), label: 18, confidence: 8555/10000.
    [Parsed_showinfo_1 @ 0x33e65f0] index: 2, region: (293, 7) -> (682, 458), label: 1, confidence: 8033/10000.
    [Parsed_showinfo_1 @ 0x33e65f0] index: 3, region: (342, 0) -> (690, 325), label: 1, confidence: 5878/10000.

    There are two boxes of dog with scores 98.34% & 85.55% and two boxes of person with scores 80.33% & 58.78%.

    Signed-off-by: Ting Fu <ting.fu@intel.com>
    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>

    • [DH] libavfilter/vf_dnn_detect.c
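
    The bounding boxes printed by showinfo above are attached to each frame as AV_FRAME_DATA_DETECTION_BBOXES side data. Below is a hedged sketch of how an application might read them, assuming the AVDetectionBBoxHeader / AVDetectionBBox structures and the av_get_detection_bbox() helper from libavutil/detection_bbox.h introduced around this time; check the header shipped with your FFmpeg tree for the exact field names:

    #include <stdio.h>
    #include <stdint.h>
    #include <libavutil/frame.h>
    #include <libavutil/detection_bbox.h>

    /* Print every detection box attached to a filtered frame. */
    static void print_detections(const AVFrame *frame)
    {
        const AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_DETECTION_BBOXES);
        if (!sd)
            return; /* no detections on this frame */

        const AVDetectionBBoxHeader *header = (const AVDetectionBBoxHeader *)sd->data;
        printf("source: %s\n", header->source);
        for (uint32_t i = 0; i < header->nb_bboxes; i++) {
            const AVDetectionBBox *bbox = av_get_detection_bbox(header, i);
            printf("index: %u, region: (%d, %d) -> (%d, %d), confidence: %d/%d\n",
                   i, bbox->x, bbox->y, bbox->x + bbox->w, bbox->y + bbox->h,
                   bbox->detect_confidence.num, bbox->detect_confidence.den);
        }
    }
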
  • ffmpeg live stream latency

    22 August 2014, by Alex Fu

    I’m currently working on live streaming video from device A (source) to device B (destination) directly via local WiFi network.

    I’ve built FFMPEG to work on the Android platform and I have been able to stream video from A -> B successfully at the expense of latency (takes about 20 seconds for a movement or change to appear on screen; as if the video was 20 seconds behind actual events).

    Initial start-up is around 4 seconds. I’ve been able to trim that initial start-up time down by lowering probesize and max_analyze_duration, but the 20-second delay is still there.
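
    For reference, this is a minimal sketch of how probesize and the analyze duration can be lowered through the FFmpeg C API when opening the input. The option values, the url variable and the use of LOGE are illustrative assumptions rather than code from the project, and the exact option names should be double-checked against a 2014-era build:

    AVFormatContext *formatCtx = NULL;
    AVDictionary *opts = NULL;

    // Keep the demuxer from probing/buffering large amounts of data up front.
    av_dict_set(&opts, "probesize", "32768", 0);         // bytes to probe
    av_dict_set(&opts, "analyzeduration", "500000", 0);  // microseconds
    av_dict_set(&opts, "fflags", "nobuffer", 0);         // don't buffer input packets

    if (avformat_open_input(&formatCtx, url, NULL, &opts) < 0) {
        LOGE("Could not open input");
        av_dict_free(&opts);
        return -1;
    }
    av_dict_free(&opts);

    if (avformat_find_stream_info(formatCtx, NULL) < 0) {
        LOGE("Could not find stream info");
        return -1;
    }

    // Optionally, the decoder can also be asked not to delay output frames:
    // codecCtx->flags |= CODEC_FLAG_LOW_DELAY;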

    I’ve sprinkled some timing events around the code to try and figure out where the most time is being spent...

    • naInit : 0.24575 sec
    • naSetup : 0.043705 sec

    The first video frame isn’t obtained until 0.035342 sec after the decodeAndRender function is called. Subsequent decoding times are illustrated here: http://jsfiddle.net/uff0jdf7/1/ (interactive graph)

    From all the timing data I’ve recorded, nothing really jumps out at me, unless I’m doing the timing wrong. Some have suggested that I am buffering too much data; however, as far as I can tell, I’m only buffering one image at a time. Is this too much?

    Also, the source video that’s coming in is in the P264 format; apparently it’s a custom implementation of H264.

    jint naSetup(JNIEnv *pEnv, jobject pObj, int pWidth, int pHeight) {
     width = pWidth;
     height = pHeight;

     //create a bitmap as the buffer for frameRGBA
     bitmap = createBitmap(pEnv, pWidth, pHeight);
     if (AndroidBitmap_lockPixels(pEnv, bitmap, &pixel_buffer) < 0) {
       LOGE("Could not lock bitmap pixels");
       return -1;
     }

     //get the scaling context
     sws_ctx = sws_getContext(codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
         pWidth, pHeight, AV_PIX_FMT_RGBA, SWS_BILINEAR, NULL, NULL, NULL);

     // Assign appropriate parts of bitmap to image planes in pFrameRGBA
     // Note that pFrameRGBA is an AVFrame, but AVFrame is a superset
     // of AVPicture
     av_image_fill_arrays(frameRGBA->data, frameRGBA->linesize, pixel_buffer, AV_PIX_FMT_RGBA, pWidth, pHeight, 1);
     return 0;
    }

    void decodeAndRender(JNIEnv *pEnv) {
     ANativeWindow_Buffer windowBuffer;
     AVPacket packet;
     AVPacket outputPacket;
     int frame_count = 0;
     int got_frame;

     while (!stop && av_read_frame(formatCtx, &packet) >= 0) {
       // Is this a packet from the video stream?
       if (packet.stream_index == video_stream_index) {

         // Decode video frame
         avcodec_decode_video2(codecCtx, decodedFrame, &got_frame, &packet);

         // Did we get a video frame?
         if (got_frame) {
           // Convert the image from its native format to RGBA
           sws_scale(sws_ctx, (uint8_t const * const *) decodedFrame->data,
               decodedFrame->linesize, 0, codecCtx->height, frameRGBA->data,
               frameRGBA->linesize);

           // lock the window buffer
           if (ANativeWindow_lock(window, &windowBuffer, NULL) < 0) {
             LOGE("Cannot lock window");
           } else {
             // draw the frame on buffer
             int h;
             for (h = 0; h < height; h++) {
               memcpy(windowBuffer.bits + h * windowBuffer.stride * 4,
                      pixel_buffer + h * frameRGBA->linesize[0],
                      width * 4);
             }
             // unlock the window buffer and post it to display
             ANativeWindow_unlockAndPost(window);

             // count number of frames
             ++frame_count;
           }
         }
       }

       // Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
     }

     LOGI("Total # of frames decoded and rendered %d", frame_count);
    }