
Media (91)

Other articles (69)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • Improvement of the base version

    13 September 2013

    Nice multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. Compare the two following images.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (7639)

  • ffmpeg C API - creating queue of frames

    25 May 2016, by lupod

    Using the ffmpeg C API, I have created a C++ application that reads frames from a file and writes them to a new file. Everything works fine as long as I write the frames to the output immediately. In other words, the following program structure produces the correct result (I give only pseudocode for now; if needed I can also post some real snippets, but the classes I created to wrap the ffmpeg functionality are quite large):

    AVFrame* frame = av_frame_alloc();
    int got_frame;

    // readFrame returns 0 if the file has ended; got_frame is set to 1
    // when a complete frame has been extracted
    while (readFrame(inputfile, frame, &got_frame)) {
     if (got_frame) {
       // I actually do some processing here
       writeFrame(outputfile,frame);
     }
    }
    av_frame_free(&frame);

    The next step was to parallelize the application and, as a consequence, frames are no longer written immediately after they are read (I do not want to go into the details of the parallelization). In this case problems arise: there is some flickering in the output, as if some frames were repeated randomly. However, the number of frames and the duration of the output video remain correct.

    What I am trying to do now is to completely separate reading from writing in the serial implementation, in order to understand what is going on. I am creating a queue of pointers to frames:

    std::queue<AVFrame*> queue;
    int ret = 1, got_frame;
    while (ret) {
     AVFrame* frame = av_frame_alloc();
     ret = readFrame(inputfile,frame,&got_frame);
     if (got_frame)
       queue.push(frame);
    }

    To write frames to the output file I do:

    while (!queue.empty()) {
     frame = queue.front();
     queue.pop();
     writeFrame(outputFile,frame);
     av_frame_free(&frame);
    }

    The result in this case is an output video with the correct duration and number of frames that is only a repetition of the last 3 (I think) frames of the video.

    My guess is that something goes wrong because in the first case I always use the same memory location for reading frames, while in the second case I allocate many different frames.

    Any suggestions on what the problem could be?
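    The suspicion in the last paragraph can be reproduced without libav at all. In the toy sketch below, `Frame` and `FakeDecoder` are made up for illustration: the "decoder" reuses one internal buffer the way a real decoder may reuse frame data, so queueing the raw pointers replays only the last frame, while deep-copying each frame before queueing (roughly what av_frame_clone does for an AVFrame) preserves the individual contents:

    ```cpp
    #include <iostream>
    #include <queue>
    #include <vector>

    // Toy stand-in for an AVFrame (hypothetical, for illustration only).
    struct Frame { std::vector<int> data; };

    // Toy decoder that overwrites ONE internal buffer on every read,
    // returning the same address each time.
    struct FakeDecoder {
        Frame internal;      // single buffer, reused on every read
        int next = 0;
        Frame* readFrame() { // returns nullptr when the "file" ends
            if (next == 3) return nullptr;
            internal.data = { next };
            ++next;
            return &internal; // always the same address!
        }
    };

    int main() {
        // Queueing the raw pointers: every entry aliases the same buffer,
        // so after reading everything they all show the LAST frame.
        FakeDecoder d1;
        std::queue<Frame*> aliased;
        for (Frame* f; (f = d1.readFrame()) != nullptr; ) aliased.push(f);
        while (!aliased.empty()) {
            std::cout << aliased.front()->data[0] << ' ';
            aliased.pop();
        }
        std::cout << '\n';  // prints: 2 2 2

        // Deep-copying each frame before queueing keeps each one intact.
        FakeDecoder d2;
        std::queue<Frame*> cloned;
        for (Frame* f; (f = d2.readFrame()) != nullptr; ) cloned.push(new Frame(*f));
        while (!cloned.empty()) {
            Frame* f = cloned.front(); cloned.pop();
            std::cout << f->data[0] << ' ';
            delete f;
        }
        std::cout << '\n';  // prints: 0 1 2
        return 0;
    }
    ```

    With a real AVFrame the analogous point is that av_frame_alloc only allocates the frame struct, not independent data buffers, so distinct frame structs can still share (or lose) the underlying data unless the caller takes its own reference or copy.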

  • Is there a way to change the ffplay playback speed while running?

    20 May 2024, by richjhart

    I am trying to modify ffplay 7.0 to allow us to modify the playback speed while running. Our use-case only involves basic video-only mp4 files.

    We can set the playback speed with something like the following:

    ffplay -i ..\Video\SampleVideoLong.mp4 -vf "setpts=5.0*PTS" -loglevel debug -sync video

    But I don't know how to change that option "live" (note: if I don't use -sync video, playback gets very laggy, but that's fine since our use-case is video only).

    I have tried the following (with just a fixed rate at the moment):

    {
        static double l_CurrentSpeed = 1.0;
        double l_NewSpeed = 2.0;
        double l_Diff = l_NewSpeed / l_CurrentSpeed;
        double l_ClockSpeed = cur_stream->extclk.speed;
        double l_NewClockSpeed = l_ClockSpeed * l_Diff;
        av_log(NULL, AV_LOG_DEBUG,
               "Changing speed from %f to %f\n",
               l_ClockSpeed, l_NewClockSpeed);

        set_clock_speed(&cur_stream->vidclk, l_NewClockSpeed);
        //set_clock(&cur_stream->vidclk, l_NewClockSpeed, cur_stream->extclk.serial);
        l_CurrentSpeed = l_NewSpeed;
    }

    I've tried both setting the pts (set_clock()) and the "speed" (set_clock_speed()), but neither had any effect.

    Would I need to do something extra, or is there a way to update the setpts expression in the video filter from within ffplay?
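    For what it's worth, the pts arithmetic a live speed change implies can be sketched independently of ffplay's clocks. If the setpts factor changes while playing, frames after the change point have to be rescaled relative to that point, not relative to zero, or the current position jumps. Below, rescale_pts is a made-up helper for illustration, not ffplay or libav API:

    ```cpp
    #include <cassert>

    // Hypothetical helper: given a frame's source pts (seconds), the source
    // pts at which the speed change happened, and the old/new setpts-style
    // factors, return the output pts so playback continues smoothly.
    double rescale_pts(double pts, double change_pts,
                       double old_factor, double new_factor) {
        // Time before the change keeps the old scaling; time after it
        // is stretched by the new factor relative to the change point.
        return change_pts * old_factor + (pts - change_pts) * new_factor;
    }

    int main() {
        // 1x -> 2x (setpts factor 1.0 -> 2.0) at source time t = 4 s.
        double p1 = rescale_pts(4.0, 4.0, 1.0, 2.0); // exactly at the change
        double p2 = rescale_pts(5.0, 4.0, 1.0, 2.0); // one source second later
        assert(p1 == 4.0); // no jump at the change point
        assert(p2 == 6.0); // subsequent source seconds last twice as long
        return 0;
    }
    ```

    This is only the math; whether ffplay applies it via its clocks or via a rebuilt filtergraph is exactly the open question above.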

    Note the metadata of the file we are trying with is:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '..\Video\SampleVideoLong.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2mp41
    encoder         : Lavf61.1.100
  Duration: 00:18:10.56, start: 0.160000, bitrate: 1755 kb/s
  Stream #0:0[0x1](und), 1, 1/90000: Video: mpeg2video (Main), 1 reference frame (mp4v / 0x7634706D), yuv420p(tv, bt709, progressive, left), 1920x1080 [SAR 1:1 DAR 16:9], 0/1, 1754 kb/s, 25 fps, 25 tbr, 90k tbn (default)
      Metadata:
        handler_name    : VideoHandler
        vendor_id       : [0][0][0][0]
        encoder         : XDCAM EX 1080p25
      Side data:
        cpb: bitrate max/min/avg: 0/0/0 buffer size: 278528 vbv_delay: N/A

    I believe the files we'll be using should have identical or similar metadata.

  • mencoder. Encoding from multiple input image files compatible with web browser (No video support and MIME type) [duplicate]

    23 June 2020, by iblasi

    I have multiple JPG files that I want to use to make a time-lapse video compatible with web browsers, so I can upload it to my web page.
Creating a video with mencoder from multiple images is explained on some web pages, such as here, which shows how to create a video.

    ls -Ltr my_Pics/*.jpg >files.txt
mencoder -nosound -ovc lavc -lavcopts vcodec=mpeg4 -o video.avi -mf type=jpeg:fps=4 mf://@files.txt

    The video is set with no sound and one picture every 250 ms (4 fps).
These command lines create an AVI video that plays correctly in VLC. However, if I try to open it in a web browser it shows an error:

    No Video with Supported Format and MIME type found

    So, based on other similar comments (as here), I tried to use ffmpeg, renaming all my files since ffmpeg requires a sequentially numbered file format. But the same thing happens: I can see the video in VLC but not in the browser.

    ffmpeg -r 4 -i ./output/%04d.jpg -vcodec libx264 video.mp4

    Based on research on the internet, I am quite sure that it is due to the encoding and/or the container. I tried multiple codec and container options from the documentation (here), but I am still not able to find one that works.

    If, once I create the video, I use VLC to manually convert it to ".m4v", the web browser recognizes the result. But I would like to do it with command lines to automate the process.
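    One common cause of the "No Video with Supported Format and MIME type" error with libx264 output built from JPEGs is the pixel format: JPEG sources typically decode to a full-range or 4:4:4 format, while most browsers only play H.264 in yuv420p. A sketch of the invocation, reusing the poster's paths and frame rate (the extra flags are the usual browser-compatibility options, not something from the original post):

    ```shell
    # Force a browser-friendly pixel format, and move the moov atom to the
    # front of the MP4 so playback can start before the full download.
    ffmpeg -framerate 4 -i ./output/%04d.jpg \
           -c:v libx264 -pix_fmt yuv420p \
           -movflags +faststart \
           video.mp4
    ```

    The server must also send the file with the Content-Type video/mp4 for the browser to accept it.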