Advanced search

Media (91)

Other articles (51)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Making files available

    14 April 2011, by

    By default, when first set up, MediaSPIP does not allow visitors to download files, whether originals or the result of their transformation or encoding; it only allows them to be viewed.
    However, it is possible, and easy, to grant visitors access to these documents in various forms.
    All of this takes place on the skeleton's configuration page: go to the channel's administration area and choose in the navigation (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    For a working installation, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

On other sites (6407)

  • ffmpeg sidedata or metadata per frame

    26 September 2019, by Dan Gordon

    I’m trying to add either some side data or metadata per frame, using the FFmpeg encoding example.

    Here’s what I have tried so far:

    /* encode 1 second of video */
    for (i = 0; i < 25; i++) {
       fflush(stdout);
       /* make sure the frame data is writable */
       ret = av_frame_make_writable(frame);
       if (ret < 0)
           exit(1);
       /* prepare a dummy image */
       /* Y */
       for (y = 0; y < c->height; y++) {
           for (x = 0; x < c->width; x++) {
               frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
           }
       }
       /* Cb and Cr */
       for (y = 0; y < c->height/2; y++) {
           for (x = 0; x < c->width/2; x++) {
               frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
               frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
           }
       }
    frame->pts = i;

    AVFrameSideData *angle = av_frame_new_side_data(frame, AV_FRAME_DATA_GOP_TIMECODE, sizeof(int32_t));
    if (!angle)
        return AVERROR(ENOMEM);
    uint8_t a = i;
    angle->data = &a;
       /* encode the image */
       encode(c, frame, pkt, f);
    }

    I have also tried setting the frame's metadata to an AVDictionary:

    AVDictionary *d = NULL;
    av_dict_set(&d, "foo", "bar", 0);
    frame->metadata = d;

    But nothing is getting added to the encode.

    How do I add data to each frame individually?
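    For reference, a minimal sketch of how per-frame side data is usually attached (assuming the FFmpeg development headers are installed): av_frame_new_side_data() allocates the payload buffer and attaches the entry to the frame itself, so the payload should be copied into sd->data rather than the pointer being repointed at a stack variable. Whether an encoder actually propagates AV_FRAME_DATA_GOP_TIMECODE into its output is codec-dependent.

    ```c
    #include <string.h>
    #include <libavutil/frame.h>

    /* Attach a 32-bit value as per-frame side data.
     * av_frame_new_side_data() both allocates the buffer and adds the
     * entry to frame->side_data, so no extra assignment is needed. */
    static int attach_side_data(AVFrame *frame, int32_t value)
    {
        AVFrameSideData *sd = av_frame_new_side_data(
            frame, AV_FRAME_DATA_GOP_TIMECODE, sizeof(int32_t));
        if (!sd)
            return AVERROR(ENOMEM);
        memcpy(sd->data, &value, sizeof(value)); /* copy into the managed buffer */
        return 0;
    }
    ```

    The side data can then be read back on the same frame with av_frame_get_side_data(frame, AV_FRAME_DATA_GOP_TIMECODE).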

  • dxa : fix decoding of first I-frame by separating I/P-frame decoding

    17 August 2013, by Janne Grunau
    dxa : fix decoding of first I-frame by separating I/P-frame decoding
    

    5ef7c84 broke decoding for the first keyframe due to an unnecessary
    check for a reference frame.

    CC : libav-stable@libav.org

    • [DBH] libavcodec/dxa.c
  • Affectiva drops every second frame

    19 June 2019, by machinery

    I am running Affectiva SDK 4.0 on a GoPro video recording, using a C++ program on Ubuntu 16.04. The GoPro video was recorded at 60 fps, but Affectiva only provides results for around half of the frames (i.e. 30 fps). The last timestamp reported by Affectiva matches the video duration, so Affectiva must be skipping roughly every second frame.

    Before running Affectiva I ran ffmpeg with the following command to make sure that the video has a constant frame rate of 60 fps:

    ffmpeg -i in.MP4 -y -vcodec libx264 -preset medium -r 60 -map_metadata 0:g -strict -2 out.MP4 > /dev/null 2>&1

    When I inspect the presentation timestamps using ffprobe -show_entries frame=pict_type,pkt_pts_time -of csv -select_streams v in.MP4, I get the following values for the raw video:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/media/GoPro_concat/GoPro_concat.MP4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf58.20.100
     Duration: 01:14:46.75, start: 0.000000, bitrate: 15123 kb/s
       Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuvj420p(pc, bt709), 1280x720 [SAR 1:1 DAR 16:9], 14983 kb/s, 59.94 fps, 59.94 tbr, 60k tbn, 119.88 tbc (default)
       Metadata:
         handler_name    :  GoPro AVC
         timecode        : 13:17:26:44
       Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 127 kb/s (default)
       Metadata:
         handler_name    :  GoPro AAC
       Stream #0:2(eng): Data: none (tmcd / 0x64636D74)
       Metadata:
         handler_name    :  GoPro AVC
         timecode        : 13:17:26:44
    Unsupported codec with id 0 for input stream 2
    frame,0.000000,I
    frame,0.016683,P
    frame,0.033367,P
    frame,0.050050,P
    frame,0.066733,P
    frame,0.083417,P
    frame,0.100100,P
    frame,0.116783,P
    frame,0.133467,I
    frame,0.150150,P
    frame,0.166833,P
    frame,0.183517,P
    frame,0.200200,P
    frame,0.216883,P
    frame,0.233567,P
    frame,0.250250,P
    frame,0.266933,I
    frame,0.283617,P
    frame,0.300300,P
    frame,0.316983,P
    frame,0.333667,P
    frame,0.350350,P
    frame,0.367033,P
    frame,0.383717,P
    frame,0.400400,I
    frame,0.417083,P
    frame,0.433767,P
    frame,0.450450,P
    frame,0.467133,P
    frame,0.483817,P
    frame,0.500500,P
    frame,0.517183,P
    frame,0.533867,I
    frame,0.550550,P
    frame,0.567233,P
    frame,0.583917,P
    frame,0.600600,P
    frame,0.617283,P
    frame,0.633967,P
    frame,0.650650,P
    frame,0.667333,I
    frame,0.684017,P
    frame,0.700700,P
    frame,0.717383,P
    frame,0.734067,P
    frame,0.750750,P
    frame,0.767433,P
    frame,0.784117,P
    frame,0.800800,I
    frame,0.817483,P
    frame,0.834167,P
    frame,0.850850,P
    frame,0.867533,P
    frame,0.884217,P
    frame,0.900900,P
    frame,0.917583,P
    frame,0.934267,I
    frame,0.950950,P
    frame,0.967633,P
    frame,0.984317,P
    frame,1.001000,P
    frame,1.017683,P
    frame,1.034367,P
    frame,1.051050,P
    frame,1.067733,I
    ...

    I have uploaded the full output on OneDrive.
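    Incidentally, the pkt_pts_time spacing in the listing above is consistent with 59.94 fps (60000/1001), so the stream itself does contain every frame. A quick check of the mean frame interval, using the first few timestamps copied from the ffprobe output:

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* First few pkt_pts_time values from the ffprobe listing above */
        double pts[] = { 0.000000, 0.016683, 0.033367, 0.050050, 0.066733 };
        int n = sizeof pts / sizeof pts[0];

        /* Mean interval between consecutive frames */
        double avg = (pts[n - 1] - pts[0]) / (n - 1);
        printf("mean interval %.6f s -> %.2f fps\n", avg, 1.0 / avg);
        /* 1001/60000 = 0.0166833... s per frame, i.e. the 59.94 fps NTSC-style
         * rate that ffprobe reports for this stream */
        return 0;
    }
    ```

    This prints a mean interval of 0.016683 s, i.e. 59.94 fps, matching the `59.94 fps, 59.94 tbr` reported in the stream header.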

    If I run Affectiva on the raw video (not processed by ffmpeg) I face the same problem of dropped frames. I was using Affectiva with affdex::VideoDetector detector(60);

    Is there a problem with the ffmpeg command or with Affectiva?

    Edit: I think I have found out where the problem lies. It seems that Affectiva is not processing the whole video: it simply stops after a certain number of processed frames without any error message. Below is the C++ code I'm using. In the onProcessingFinished() method I print a message to the console when processing is finished, but this message never appears, so Affectiva never reaches the end.

    Is there something wrong with my code, or should I encode the videos into a format other than MP4?

    #include "VideoDetector.h"
    #include "FrameDetector.h"

    #include <iostream>
    #include <fstream>
    #include <mutex>
    #include <condition_variable>

    std::mutex m;
    std::condition_variable conditional_variable;
    bool processed = false;

    class Listener : public affdex::ImageListener {
    public:
       Listener(std::ofstream * fout) {
           this->fout = fout;
       }
       virtual void onImageCapture(affdex::Frame image){
           //std::cout << "called";
       }
       virtual void onImageResults(std::map<affdex::FaceId, affdex::Face> faces, affdex::Frame image){
           //std::cout << faces.size() << " faces detected:" << std::endl;

           for(auto& kv : faces){
             (*this->fout) << image.getTimestamp() << ",";
             (*this->fout) << kv.first << ",";
             (*this->fout) << kv.second.emotions.joy << ",";
             (*this->fout) << kv.second.emotions.fear << ",";
             (*this->fout) << kv.second.emotions.disgust << ",";
             (*this->fout) << kv.second.emotions.sadness << ",";
             (*this->fout) << kv.second.emotions.anger << ",";
             (*this->fout) << kv.second.emotions.surprise << ",";
             (*this->fout) << kv.second.emotions.contempt << ",";
             (*this->fout) << kv.second.emotions.valence << ",";
             (*this->fout) << kv.second.emotions.engagement << ",";
             (*this->fout) << kv.second.measurements.orientation.pitch << ",";
             (*this->fout) << kv.second.measurements.orientation.yaw << ",";
             (*this->fout) << kv.second.measurements.orientation.roll << ",";
             (*this->fout) << kv.second.faceQuality.brightness << std::endl;

             //std::cout << kv.second.emotions.fear << std::endl;
             //std::cout << kv.second.emotions.surprise << std::endl;
             //std::cout << (int) kv.second.emojis.dominantEmoji;
           }
       }
    private:
       std::ofstream * fout;
    };

    class ProcessListener : public affdex::ProcessStatusListener{
    public:
       virtual void onProcessingException (affdex::AffdexException ex){
           std::cerr << "[Error] " << ex.getExceptionMessage();
       }
       virtual void onProcessingFinished (){
           {
               std::lock_guard<std::mutex> lk(m);
               processed = true;
               std::cout << "[Affectiva] Video processing finised." << std::endl;
           }
           conditional_variable.notify_one();
       }
    };

    int main(int argc, char ** argsv)
    {
       affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::SMALL_FACES);
       //affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::LARGE_FACES);
       std::string classifierPath="/home/wrafael/affdex-sdk/data";
       detector.setClassifierPath(classifierPath);
       detector.setDetectAllEmotions(true);

       // Output
       std::ofstream fout(argsv[2]);
       fout &lt;&lt; "timestamp" &lt;&lt; ",";
       fout &lt;&lt; "faceId" &lt;&lt; ",";
       fout &lt;&lt; "joy" &lt;&lt; ",";
       fout &lt;&lt; "fear" &lt;&lt; ",";
       fout &lt;&lt; "disgust" &lt;&lt; ",";
       fout &lt;&lt; "sadness" &lt;&lt; ",";
       fout &lt;&lt; "anger" &lt;&lt; ",";
       fout &lt;&lt; "surprise" &lt;&lt; ",";
       fout &lt;&lt; "contempt" &lt;&lt; ",";
       fout &lt;&lt; "valence" &lt;&lt; ",";
       fout &lt;&lt; "engagement"  &lt;&lt; ",";
       fout &lt;&lt; "pitch" &lt;&lt; ",";
       fout &lt;&lt; "yaw" &lt;&lt; ",";
       fout &lt;&lt; "roll" &lt;&lt; ",";
       fout &lt;&lt; "brightness" &lt;&lt; std::endl;

       Listener l(&fout);
       ProcessListener pl;
       detector.setImageListener(&l);
       detector.setProcessStatusListener(&pl);

       detector.start();
       detector.process(argsv[1]);

       // wait for the worker
       {
       std::unique_lock<std::mutex> lk(m);
       conditional_variable.wait(lk, []{return processed;});
       }
       fout.flush();
       fout.close();
    }

    Edit 2: I have now dug further into the problem, looking only at one GoPro file with a duration of 19 min 53 s (GoPro splits the recordings). When I run Affectiva with affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::SMALL_FACES); on that raw video, the following file is produced. Affectiva stops after 906 s without any error message and without printing "[Affectiva] Video processing finised".

    When I instead transform the video using ffmpeg -i raw.MP4 -y -vcodec libx264 -preset medium -r 60 -map_metadata 0:g -strict -2 out.MP4 and then run Affectiva with affdex::VideoDetector detector(60, 1, affdex::FaceDetectorMode::SMALL_FACES);, Affectiva runs to the end and prints
    "[Affectiva] Video processing finised", but the frame rate is only 23 fps. Here is the file.

    When I then run Affectiva with affdex::VideoDetector detector(62, 1, affdex::FaceDetectorMode::SMALL_FACES); on this transformed file, Affectiva stops after 509 s and "[Affectiva] Video processing finised" is not printed. Here is the file.