Advanced search

Media (0)

Keyword: - Tags -/performance

No media matching your criteria is available on the site.

Other articles (82)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    An explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Regarding software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php is no longer installed, as it is no longer maintained (...)

  • Customising the site by adding a logo, a banner, or a background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • User profiles

    12 April 2011, by

    Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)

On other sites (12132)

  • ffmpeg/libavcodec memory management

    23 July 2015, by Jason C

    The libavcodec documentation is not very specific about when to free allocated data and how to free it. After reading through documentation and examples, I’ve put together the sample program below. There are some specific questions inlined in the source, but my general question is: am I freeing all memory properly in the code below? I realize the program below doesn’t do any cleanup after errors; the focus is on final cleanup.

    The testfile() function is the one in question.

    extern "C" {
    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
    #include "libswscale/swscale.h"
    }

    #include <cstdio>

    using namespace std;


    void AVFAIL (int code, const char *what) {
       char msg[500];
       av_strerror(code, msg, sizeof(msg));
       fprintf(stderr, "failed: %s\nerror: %s\n", what, msg);
       exit(2);
    }

    #define AVCHECK(f) do { int e = (f); if (e < 0) AVFAIL(e, #f); } while (0)
    #define AVCHECKPTR(p,f) do { p = (f); if (!p) AVFAIL(AVERROR_UNKNOWN, #f); } while (0)


    void testfile (const char *filename) {

       AVFormatContext *format;
       unsigned streamIndex;
       AVStream *stream = NULL;
       AVCodec *codec;
       SwsContext *sws;
       AVPacket packet;
       AVFrame *rawframe;
       AVFrame *rgbframe;
       unsigned char *rgbdata;

       av_register_all();

       // load file header
       AVCHECK(av_open_input_file(&format, filename, NULL, 0, NULL));
       AVCHECK(av_find_stream_info(format));

       // find video stream
       for (streamIndex = 0; streamIndex < format->nb_streams && !stream; ++ streamIndex)
           if (format->streams[streamIndex]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
               stream = format->streams[streamIndex];
       if (!stream) {
           fprintf(stderr, "no video stream\n");
           exit(2);
       }

       // initialize codec
       AVCHECKPTR(codec, avcodec_find_decoder(stream->codec->codec_id));
       AVCHECK(avcodec_open(stream->codec, codec));
       int width = stream->codec->width;
       int height = stream->codec->height;

       // initialize frame buffers
       int rgbbytes = avpicture_get_size(PIX_FMT_RGB24, width, height);
       AVCHECKPTR(rawframe, avcodec_alloc_frame());
       AVCHECKPTR(rgbframe, avcodec_alloc_frame());
       AVCHECKPTR(rgbdata, (unsigned char *)av_mallocz(rgbbytes));
       AVCHECK(avpicture_fill((AVPicture *)rgbframe, rgbdata, PIX_FMT_RGB24, width, height));

       // initialize sws (for conversion to rgb24)
       AVCHECKPTR(sws, sws_getContext(width, height, stream->codec->pix_fmt, width, height, PIX_FMT_RGB24, SWS_FAST_BILINEAR, NULL, NULL, NULL));

       // read all frames from file
       while (av_read_frame(format, &packet) >= 0) {

           int frameok = 0;
           if (packet.stream_index == stream->index) // the for loop above leaves streamIndex one past the match
               AVCHECK(avcodec_decode_video2(stream->codec, rawframe, &frameok, &packet));

           av_free_packet(&packet); // Q: is this necessary or will next av_read_frame take care of it?

           if (frameok) {
               sws_scale(sws, rawframe->data, rawframe->linesize, 0, height, rgbframe->data, rgbframe->linesize);
               // would process rgbframe here
           }

           // Q: is there anything i need to free here?

       }

       // CLEANUP: Q: am i missing anything / doing anything unnecessary?
       av_free(sws); // Q: is av_free all i need here?
       av_free_packet(&packet); // Q: is this necessary (av_read_frame has returned < 0)?
       av_free(rgbframe);
       av_free(rgbdata);
       av_free(rawframe); // Q: i can just do this once at end, instead of in loop above, right?
       avcodec_close(stream->codec); // Q: do i need av_free(codec)?
       av_close_input_file(format); // Q: do i need av_free(format)?

    }


    int main (int argc, char **argv) {

       if (argc != 2) {
           fprintf(stderr, "usage: %s filename\n", argv[0]);
           return 1;
       }

       testfile(argv[1]);

    }

    Specific questions:

    1. Is there anything I need to free in the frame processing loop, or will libav take care of memory management there for me?
    2. Is av_free the correct way to free an SwsContext?
    3. The frame loop exits when av_read_frame returns < 0. In that case, do I still need to av_free_packet when it’s done?
    4. Do I need to call av_free_packet every time through the loop, or will av_read_frame free/reuse the old AVPacket automatically?
    5. I can just av_free the AVFrames at the end of the loop instead of reallocating them each time through, correct? It seems to be working fine, but I’d like to confirm that it’s working because it’s supposed to, rather than by luck.
    6. Do I need to av_free(codec) the AVCodec or do anything else after avcodec_close on the AVCodecContext?
    7. Do I need to av_free(format) the AVFormatContext or do anything else after av_close_input_file?

    I also realize that some of these functions are deprecated in current versions of libav. For reasons that are not relevant here, I have to use them.

  • ffmpeg split videos from times in csv file

    10 June 2018, by martins

    I’m using Python 3.6. I am trying to split videos into subclips of specified times. I have a folder with 250 ".mp4" files, and a separate csv with the times at which I want each video to be subclipped. The files run from "firstvideo.mp4" up to "twohundredfiftyvideo.mp4". The csv file has each video file name in column A and the timings at which each video needs to be split in columns B to I. All videos need to be split into 4 subclips. The csv looks like:

            col A           col B      col C      col D      col E   ..  col I
    row1    firstvideo.mp4  00:00:10 - 00:00:20 - 00:01:15 - 00:02:04 .. 00:07:15
    row2    secondvideo.mp4 00:00:15 - 00:00:34 - 00:01:05 - 00:01:55 .. 00:08:23

    "firstvideo.mp4" needs a first split from second 10 to 20, a second split from 1m15s - 2m04s and so on (colI is the time at which the fourth subclip should stop). The process should iterate for each of the 250 rows of the csv file corresponding to the 250 mp4 files in a folder.

    So far, the only thing I know how to do is use ffmpeg to split a video into 4 subclips and generate 4 different output files; I do not know how to read from the csv line by line. This is the code I have so far.

    ffmpeg -i firstvideo.mp4 \
      -vcodec copy -acodec copy -ss 00:00:10 -to 00:00:20 firstvideo_1.mp4 \
      -vcodec copy -acodec copy -ss 00:01:15 -to 00:02:04 firstvideo_2.mp4 \
      -vcodec copy -acodec copy -ss 00:03:48 -to 00:04:23 firstvideo_3.mp4 \
      -vcodec copy -acodec copy -ss 00:05:30 -to 00:07:15 firstvideo_4.mp4

    I name each output file with _1, _2, _3 or _4 appended to the original video file name. Ideally, I would generate 4 subclips per video (i.e. 250 videos x 4 subclips/video = 1,000 mp4 files) and then concatenate each video’s 4 subclips into one file (i.e. 250 additional files). In fact, I don’t care about the 4 subclips; I’d delete them once they have produced my concatenated file.
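
    A minimal sketch of the missing CSV loop (an assumption-laden illustration, not a tested solution: it assumes Python’s standard csv and subprocess modules, that columns B–I hold four start/end pairs per row with no header row, and that an ffmpeg supporting the -ss/-to output options is on the PATH; all file names here are hypothetical):

    import csv
    import subprocess

    def split_video(row):
        """Cut one video into 4 subclips, given a csv row of the form
        [name, start1, end1, start2, end2, start3, end3, start4, end4]."""
        name = row[0]                          # e.g. "firstvideo.mp4"
        stem = name.rsplit(".", 1)[0]
        parts = []
        for i in range(4):                     # columns B..I = 4 (start, end) pairs
            start, end = row[1 + 2 * i], row[2 + 2 * i]
            out = "%s_%d.mp4" % (stem, i + 1)
            subprocess.run(
                ["ffmpeg", "-y", "-i", name,
                 "-ss", start, "-to", end, "-c", "copy", out],
                check=True)
            parts.append(out)
        return parts

    def concat_parts(parts, out_name):
        """Join the subclips with ffmpeg's concat demuxer."""
        with open("list.txt", "w") as f:
            for p in parts:
                f.write("file '%s'\n" % p)
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
             "-i", "list.txt", "-c", "copy", out_name],
            check=True)

    with open("times.csv", newline="") as f:
        for row in csv.reader(f):
            parts = split_video(row)
            concat_parts(parts, row[0].rsplit(".", 1)[0] + "_cut.mp4")

    Note that with -c copy ffmpeg can only cut on keyframes, so the subclip boundaries will be approximate; dropping -c copy re-encodes and gives frame-accurate cuts at the cost of speed.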

    Thanks for your time anyway.

  • bwdif_vulkan: convert to storage images

    17 February, by Lynne
    bwdif_vulkan: convert to storage images
    

    texture() uses bilinear scaling; imageLoad() accesses the image directly.
    The reason why texture() was used throughout the Vulkan filters is that
    back when they were written, they were targeting old Intel hardware,
    which had a texel cache only for sampled images.

    These days, GPUs have a generic cache that doesn't care what source it
    gets populated with. Additionally, bypassing the sampling circuitry saves
    us some performance.

    Finally, all the old texture() code had an issue where unnormalized
    coordinates were used, but an offset of 0.5 was not added, hence each
    pixel ended up being interpolated. This patch fixes that.
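
    To make the 0.5-offset point concrete, here is a small hedged illustration (plain Python standing in for the shader arithmetic, not actual GLSL): bilinear sampling at an integer unnormalized coordinate lands on a texel corner and blends four texels, while adding 0.5 hits a texel center exactly.

    def bilinear(img, x, y):
        """Bilinear lookup at unnormalized coords; texel centers sit at
        (i + 0.5, j + 0.5), as with texture() on an unnormalized sampler."""
        x, y = x - 0.5, y - 0.5               # shift so integers address centers
        x0, y0 = int(x), int(y)
        fx, fy = x - x0, y - y0
        def at(i, j):                         # clamp-to-edge addressing
            j = max(0, min(len(img) - 1, j))
            i = max(0, min(len(img[0]) - 1, i))
            return img[j][i]
        return ((1 - fx) * (1 - fy) * at(x0, y0) + fx * (1 - fy) * at(x0 + 1, y0)
                + (1 - fx) * fy * at(x0, y0 + 1) + fx * fy * at(x0 + 1, y0 + 1))

    img = [[0, 100], [50, 200]]               # a 2x2 "image"
    print(bilinear(img, 1.0, 1.0))            # 87.5 - blends all four texels
    print(bilinear(img, 1.5, 1.5))            # 200.0 - the center of texel (1, 1)

    This is the interpolation the old filter code was doing by accident on every tap; imageLoad() takes integer texel indices, so the converted filter reads each pixel exactly.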

    • [DH] libavfilter/vf_bwdif_vulkan.c
    • [DH] libavfilter/vulkan/bwdif.comp