Other articles (98)

  • Managing rights to create and edit objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, in particular: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Uploading media and themes by FTP

    31 May 2013

    The MédiaSPIP tool also processes media transferred over FTP. If you prefer to upload this way, retrieve the access credentials for your MédiaSPIP site and use your favourite FTP client.
    From the start, you will find the following folders in your FTP space: config/: the site's configuration folder; IMG/: media already processed and online on the site; local/: the website's cache directory; themes/: custom themes and style sheets; tmp/: working folder (...)

On other sites (11220)

  • Pushing Projects to Github

    17 February 2012, by Multimedia Mike — Game Hacking, Python

    I finally got around to importing some old projects into my Github account. I guess it’s good to have a backup out there in the cloud.

    GhettoRSS
    https://github.com/multimediamike/GhettoRSS
    I describe this as a true offline RSS reader. Technically, it’s arguably not a true offline RSS reader. Rather, it does what most people actually want an offline RSS reader to do.

    I wrote this about 2 years ago when I had a long daily train ride with a disconnected netbook. I quickly learned that I couldn't count on offline RSS readers simply because most RSS feeds do not contain much meat. Thus, I created a program that follows URLs in RSS feeds, downloads web pages and supporting images and CSS files, and caches them in an offline database which can be read via a local web browser.

    I wrote more information about this little project 2 years ago (here is part 1 and here is part 2). I fixed a few bugs in preparation for posting it but I probably won’t work on this anymore since I don’t have any use for it (the commute is long gone, but I didn’t even use it when I was commuting because I decided I just didn’t care enough to read the feeds on the train).

    xbfuse
    https://github.com/multimediamike/xbfuse
    This is a FUSE module for mounting Xbox/360 optical disc filesystems. Here is when I first discussed it. The tool has had its own little homepage for a long time. This tool has seen some development, as I learned from Googling for “xbfuse”. Regrettably, no one who has modified the tool has ever contacted me about it (at least, not that I can recall). This is unfortunate because the patches I have seen floating around, which fix my xbfuse for various installations, usually boil down to replacing many occurrences of an include path in the autotool-generated build system. There is probably a simpler, cleaner fix.

    gcfuse
    https://github.com/multimediamike/gcfuse
    Written prior to xbfuse, this is a FUSE module for mounting GameCube optical disc filesystems. I first discussed this here and here. This tool has not seen too much direct development although someone eventually used it as the basis for WiiFuse which, as you can predict, mounts optical disc filesystems from Nintendo Wii games.

  • OpenCV Reporting TBR instead of FPS when using capture.get(CV_CAP_PROP_FPS)

    22 February 2012, by Malife

    I have several videos that I am trying to process using OpenCV and Qt 4.7.4 on Mac OS 10.6.8 (Snow Leopard). If I create a cv::VideoCapture object and then query it for the frame rate of such a video, what I get back is the TBR and not the FPS.

    For instance, if I use ffprobe Video1.mp4, what I get is:

    >> ffprobe Video1.mp4      
    ffprobe version 0.7.8, Copyright (c) 2007-2011 the FFmpeg developers
    built on Nov 24 2011 14:31:00 with gcc 4.2.1 (Apple Inc. build 5666) (dot 3)
    configuration: --prefix=/opt/local --enable-gpl --enable-postproc --enable-swscale
    --enable-avfilter --enable-libmp3lame --enable-libvorbis --enable-libtheora
    --enable-libdirac --enable-libschroedinger --enable-libopenjpeg --enable-libxvid
    --enable-libx264 --enable-libvpx --enable-libspeex --mandir=/opt/local/share/man
    --enable-shared --enable-pthreads --cc=/usr/bin/gcc-4.2 --arch=x86_64 --enable-yasm
    libavutil    50. 43. 0 / 50. 43. 0
    libavcodec   52.123. 0 / 52.123. 0
    libavformat  52.111. 0 / 52.111. 0
    libavdevice  52.  5. 0 / 52.  5. 0
    libavfilter   1. 80. 0 /  1. 80. 0
    libswscale    0. 14. 1 /  0. 14. 1
    libpostproc  51.  2. 0 / 51.  2. 0

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Video1.mp4':
    Metadata:
    major_brand     : isom
    minor_version   : 0
    compatible_brands: mp41avc1qt  
    creation_time   : 2012-01-09 23:09:43
    encoder         : vlc 1.1.3 stream output
    encoder-eng     : vlc 1.1.3 stream output
    Duration: 00:10:10.22, start: 0.000000, bitrate: 800 kb/s
    Stream #0.0(eng): Video: h264 (Baseline), yuvj420p, 704x480 [PAR 10:11 DAR 4:3], 798 kb/s, 27.71 fps, 1001 tbr, 1001 tbn, 2002 tbc
    Metadata:
     creation_time   : 2012-01-09 23:09:43

    Which correctly reports FPS = 27.71 and TBR = 1001. Nevertheless, if I use the following OpenCV code to query for the FPS:

    QString filename = QFileDialog::getOpenFileName(this,
                                           "Open Video",
                                           "Video Files (*.mp4, *.mpg)");

    capture.release();
    capture.open(filename.toAscii().data());

    if (!capture.isOpened()){
       qDebug() <<"Error when opening the video!";
       return;
    }


    qDebug() << "Frame Rate:" << capture.get(CV_CAP_PROP_FPS);
    qDebug() << "Num of Frames:" << capture.get(CV_CAP_PROP_FRAME_COUNT);
    qDebug() << "OpenCV Version" << CV_VERSION;

    The output I get is:

    Frame Rate: 1001
    Num of Frames: 610832
    OpenCV Version 2.3.1

    Which reports the TBR instead of the FPS. This behavior is consistent when I try to open different videos.
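
    One rough, untested idea for a cross-check (it only gives an average over the whole file) would be to estimate the frame rate myself from the frame count and the duration the capture reports, using a throwaway capture object (check), something like:

    cv::VideoCapture check(filename.toAscii().data());
    if (check.isOpened()) {
        double frames = check.get(CV_CAP_PROP_FRAME_COUNT);

        // Seek to the end of the stream and read back its timestamp in milliseconds.
        check.set(CV_CAP_PROP_POS_AVI_RATIO, 1.0);
        double durationMs = check.get(CV_CAP_PROP_POS_MSEC);

        if (durationMs > 0)
            qDebug() << "Estimated FPS:" << frames * 1000.0 / durationMs;
    }

    But this would only be a workaround; I would rather understand why CV_CAP_PROP_FPS itself returns the tbr.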

    I checked OpenCV's bug tracker, and I also found this Stack Overflow question, which describes a similar but not quite identical problem, so I am at a loss as to what to do next. Any hint or idea is most welcome since I've tried lots of things and seem to be getting nowhere.

  • Modifying motion vectors in ffmpeg H.264 decoder

    14 February 2012, by qontranami

    For research purposes, I am trying to modify H.264 motion vectors (MVs) for each P- and B-frame prior to motion compensation during the decoding process. I am using FFmpeg for this purpose. An example of a modification is replacing each MV with the mean of its original spatial neighbors and then using the resultant MVs for motion compensation, rather than the original ones. Please direct me appropriately.

    So far, I have been able to do a simple modification of MVs in the file libavcodec/h264_cavlc.c. In the function ff_h264_decode_mb_cavlc(), modifying the mx and my variables, for instance by increasing their values, modifies the MVs used during decoding.

    For example, as shown below, the mx and my values are increased by 50, thus lengthening the MVs used in the decoder.

    mx += get_se_golomb(&s->gb)+50;
    my += get_se_golomb(&s->gb)+50;

    However, in this regard, I don't know how to access the spatial neighbors of mx and my for the mean analysis that I mentioned in the first paragraph. I believe that the key to doing so lies in manipulating the array mv_cache.
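
    If mv_cache is indeed the right place, then inside ff_h264_decode_mb_cavlc() the already-decoded left and top neighbors should be reachable through the same scan8[] offsets that pred_motion() in h264_mvpred.h uses. A rough, untested sketch (list and n here stand for whatever reference list and sub-block index are in scope at that point):

    const int index8 = scan8[n];
    const int16_t *mv_left = h->mv_cache[list][index8 - 1]; /* neighbor A (left) */
    const int16_t *mv_top  = h->mv_cache[list][index8 - 8]; /* neighbor B (top)  */

    /* replace the decoded MV with a simple spatial mean of the two neighbors */
    mx = (mv_left[0] + mv_top[0]) / 2;
    my = (mv_left[1] + mv_top[1]) / 2;

    I have no idea yet whether the cache is fully populated for the current macroblock at the point where mx and my are decoded, so this may be completely off.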

    Another experiment that I performed was in the file libavcodec/error_resilience.c. Based on the guess_mv() function, I created a new function, mean_mv(), that is executed in ff_er_frame_end() within the first if-statement. That first if-statement exits ff_er_frame_end() when, among other conditions, the error count is zero (s->error_count == 0). I decided to insert my mean_mv() call at this point so that it is always executed when there is a zero error-count. This experiment somewhat yielded the results I wanted: I could start seeing artifacts in the top portions of the video, but they were restricted to the upper-right corner. I'm guessing that my inserted function is not running to completion, perhaps so as to meet playback deadlines or something.

    Below is the modified if-statement. The only addition is my function, mean_mv(s).

    if(!s->error_recognition || s->error_count==0 || s->avctx->lowres ||
          s->avctx->hwaccel ||
          s->avctx->codec->capabilities&CODEC_CAP_HWACCEL_VDPAU ||
          s->picture_structure != PICT_FRAME || // we dont support ER of field pictures yet, though it should not crash if enabled
          s->error_count==3*s->mb_width*(s->avctx->skip_top + s->avctx->skip_bottom)) {
           //av_log(s->avctx, AV_LOG_DEBUG, "ff_er_frame_end in er.c\n"); //KG
           if(s->pict_type==AV_PICTURE_TYPE_P)
               mean_mv(s);
           return;

    And here's the mean_mv() function I created based on guess_mv().

    static void mean_mv(MpegEncContext *s){
        //uint8_t fixed[s->mb_stride * s->mb_height];
        //const int mb_stride = s->mb_stride;
        const int mb_width = s->mb_width;
        const int mb_height= s->mb_height;
        int mb_x, mb_y, mot_step, mot_stride;

        //av_log(s->avctx, AV_LOG_DEBUG, "mean_mv\n"); //KG

        set_mv_strides(s, &mot_step, &mot_stride);

        for(mb_y=0; mb_y<mb_height; mb_y++){
            for(mb_x=0; mb_x<mb_width; mb_x++){
                const int mb_xy= mb_x + mb_y*s->mb_stride;
                const int mot_index= (mb_x + mb_y*mot_stride) * mot_step;
                int mv_predictor[4][2]={{0}};
                int ref[4]={0};
                int pred_count=0;
                int m, n;

                if(IS_INTRA(s->current_picture.f.mb_type[mb_xy])) continue;
                //if(!(s->error_status_table[mb_xy]&MV_ERROR)){
                //if (1){
                if(mb_x>0){
                    mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index - mot_step][0];
                    mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index - mot_step][1];
                    ref         [pred_count]   = s->current_picture.f.ref_index[0][4*(mb_xy-1)];
                    pred_count++;
                }

                if(mb_x+1<mb_width){
                    mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index + mot_step][0];
                    mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index + mot_step][1];
                    ref         [pred_count]   = s->current_picture.f.ref_index[0][4*(mb_xy+1)];
                    pred_count++;
                }

                if(mb_y>0){
                    mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index - mot_stride*mot_step][0];
                    mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index - mot_stride*mot_step][1];
                    ref         [pred_count]   = s->current_picture.f.ref_index[0][4*(mb_xy-s->mb_stride)];
                    pred_count++;
                }

                if(mb_y+1<mb_height){
                    mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index + mot_stride*mot_step][0];
                    mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index + mot_stride*mot_step][1];
                    ref         [pred_count]   = s->current_picture.f.ref_index[0][4*(mb_xy+s->mb_stride)];
                    pred_count++;
                }

                if(pred_count==0) continue;

                if(pred_count>=1){
                    int sum_x=0, sum_y=0, sum_r=0;
                    int k;

                    for(k=0; k<pred_count; k++){
                        sum_x+= mv_predictor[k][0]; // Sum all the MVx from MVs avail. for EC
                        sum_y+= mv_predictor[k][1]; // Sum all the MVy from MVs avail. for EC
                        sum_r+= ref[k];
                        // if(k && ref[k] != ref[k-1])
                        //     goto skip_mean_and_median;
                    }

                    mv_predictor[pred_count][0] = sum_x/k;
                    mv_predictor[pred_count][1] = sum_y/k;
                    ref         [pred_count]    = sum_r/k;
                }

                s->mv[0][0][0] = mv_predictor[pred_count][0];
                s->mv[0][0][1] = mv_predictor[pred_count][1];

                for(m=0; m<mot_step; m++){
                    for(n=0; n<mot_step; n++){
                        s->current_picture.f.motion_val[0][mot_index + m + n * mot_stride][0] = s->mv[0][0][0];
                        s->current_picture.f.motion_val[0][mot_index + m + n * mot_stride][1] = s->mv[0][0][1];
                    }
                }

                decode_mb(s, ref[pred_count]);

                //}
            }
        }
    }

    I would really appreciate some assistance on how to go about this properly.