Advanced search

Media (0)

Word: - Tags -/gis

No media matching your criteria is available on the site.

Other articles (51)

  • Permissions overloaded by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Customising categories

    21 June 2013, by

    Form for creating a category
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a category-type document, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a media-type document, the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (8913)

  • FFmpeg - MJPEG decoding - getting different values

    27 December 2016, by ahmadh

    I have a set of JPEG frames that I am muxing into an AVI, which gives me an MJPEG video. This is the command I run on the console:

    ffmpeg -y -start_number 0 -i %06d.JPEG -codec copy vid.avi
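
    As a sanity check (a hypothetical verification step, not part of the original question), ffprobe can confirm that the video stream was stream-copied as MJPEG rather than re-encoded:

    # codec_name should report "mjpeg" if -codec copy worked as intended
    ffprobe -v error -select_streams v:0 \
            -show_entries stream=codec_name,pix_fmt \
            -of default=noprint_wrappers=1 vid.avi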

    When I try to demux the video using the FFmpeg C API, I get frames whose values are slightly different. The demuxing code looks something like this:

    AVFormatContext* fmt_ctx = NULL;
    AVCodecContext* cdc_ctx = NULL;
    AVCodec* vid_cdc = NULL;
    int ret;
    unsigned int height, width;

    ....
    // read_nframes is the number of frames to read
    // per frame, output_arr holds an R plane, then a G plane, then a B plane
    output_arr = new unsigned char [height * width * 3 *
                                   sizeof(unsigned char) * read_nframes];

    avcodec_open2(cdc_ctx, vid_cdc, NULL);

    int num_bytes;
    uint8_t* buffer = NULL;
    const AVPixelFormat out_format = AV_PIX_FMT_RGB24;

    num_bytes = av_image_get_buffer_size(out_format, width, height, 1);
    buffer = (uint8_t*)av_malloc(num_bytes * sizeof(uint8_t));

    AVFrame* vid_frame = NULL;
    vid_frame = av_frame_alloc();
    AVFrame* conv_frame = NULL;
    conv_frame = av_frame_alloc();

    av_image_fill_arrays(conv_frame->data, conv_frame->linesize, buffer,
                        out_format, width, height, 1);

    // set up conversion from the decoder's native pixel format to RGB24
    struct SwsContext *sws_ctx = NULL;
    sws_ctx = sws_getContext(width, height, cdc_ctx->pix_fmt,
                            width, height, out_format,
                            SWS_BILINEAR, NULL, NULL, NULL);

    int frame_num = 0;
    AVPacket vid_pckt;
    while (av_read_frame(fmt_ctx, &vid_pckt) >= 0) {
       ret = avcodec_send_packet(cdc_ctx, &vid_pckt);
       // release our reference; the decoder keeps its own copy of the packet
       av_packet_unref(&vid_pckt);
       if (ret < 0)
           break;

       ret = avcodec_receive_frame(cdc_ctx, vid_frame);
       if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
           break;
       if (ret >= 0) {
           // convert from the decoder's native format to packed RGB24
           // (the loop below then de-interleaves it into planar R, G, B)
           sws_scale(sws_ctx, (const uint8_t* const*)vid_frame->data,
                     vid_frame->linesize, 0, vid_frame->height,
                     conv_frame->data, conv_frame->linesize);

           unsigned char* r_ptr = output_arr +
               (height * width * sizeof(unsigned char) * 3 * frame_num);
           unsigned char* g_ptr = r_ptr + (height * width * sizeof(unsigned char));
           unsigned char* b_ptr = g_ptr + (height * width * sizeof(unsigned char));
           unsigned int pxl_i = 0;

           for (unsigned int r = 0; r < height; ++r) {
               uint8_t* avframe_r = conv_frame->data[0] + r*conv_frame->linesize[0];
               for (unsigned int c = 0; c < width; ++c) {
                   r_ptr[pxl_i] = avframe_r[0];
                   g_ptr[pxl_i] = avframe_r[1];
                   b_ptr[pxl_i] = avframe_r[2];
                   avframe_r += 3;
                   ++pxl_i;
               }
           }

           ++frame_num;

           if (frame_num >= read_nframes)
               break;
       }
    }

    ...

    In my experience around two-thirds of the pixel values differ, each by ±1 (in a range of [0, 255]). I am wondering whether it is due to the decoding scheme FFmpeg uses for reading JPEG frames. I tried encoding and decoding PNG frames, and that works perfectly.

    In short, my goal is to get the same pixel-by-pixel values for each JPEG frame as I would have gotten if I had read the JPEG images directly. Here is the stand-alone code I used. It includes CMake files to build the code, and a couple of JPEG frames along with the converted AVI file to test this problem. (Pass --filetype png to test the PNG decoding.)
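
    To narrow down where the ±1 differences creep in, here is a hypothetical diagnostic (not from the original post; the file names and frame choice are assumptions): dump the same frame to raw packed RGB24 both straight from a source JPEG and from the muxed AVI, then compare the dumps byte by byte.

    # hypothetical check: decode the first source JPEG and the first AVI frame
    # to raw packed RGB24, then count differing bytes (0 = byte-identical)
    ffmpeg -y -i 000000.JPEG -frames:v 1 -f rawvideo -pix_fmt rgb24 direct.rgb
    ffmpeg -y -i vid.avi -frames:v 1 -f rawvideo -pix_fmt rgb24 demuxed.rgb
    cmp -l direct.rgb demuxed.rgb | wc -l

    If the two dumps are identical, the mux/demux step is lossless and any remaining mismatch comes from the JPEG decoding itself; the JPEG standard only bounds IDCT accuracy, so different decoders may legitimately disagree by ±1 per sample.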

  • Automation for Downloading, Encoding, Renaming and Uploading [on hold]

    5 February 2019, by Madara Uchiha

    How do I automate downloading anime episodes from torrents > encoding the videos with ffmpeg or a GUI > renaming them (with an encoder tag) > uploading them to Google Drive?
    It should run 24/7. Need help!

  • Why is one ffmpeg webm dash stream much larger than the others?

    5 January 2017, by ranvel

    Over the summer, I worked on putting together a script which took an x264 video/MP3 stream and broke it up into separate streams so that it would work via MSE-DASH (based heavily on the instructions on the webmproject.org website). Those same scripts have ceased to work, turning a 6 GB video into several 25 GB videos. I kept up with FFmpeg updates, so I don’t know when it stopped working, but I am guessing it was due to the way their WebM DASH implementation was updated.

    I found a new method which works better, but it still has a major problem with one stream. I was hoping someone could explain how this encoding works so that I can understand the underlying cause.

    #!/bin/bash
    # one audio-only stream plus four video-only renditions at rising bitrates
    COMMON_OPTS="-map 0:0 -an -threads 11 -cpu-used 4 -cmp chroma"
    WEBM_OPTS="-f webm -c:v vp9 -keyint_min 50 -g 50 -dash 1"

    ffmpeg -i "$1" -vn -acodec libvorbis -ab 128k audio.webm &
    ffmpeg -i "$1" $COMMON_OPTS $WEBM_OPTS -b:v 500k -vf scale=1280:720 -y vid-500k.webm &
    ffmpeg -i "$1" $COMMON_OPTS $WEBM_OPTS -b:v 700k -vf scale=1280:720 -y vid-700k.webm &
    ffmpeg -i "$1" $COMMON_OPTS $WEBM_OPTS -b:v 1000k -vf scale=1280:720 -y vid-1000k.webm &
    ffmpeg -i "$1" $COMMON_OPTS $WEBM_OPTS -b:v 1500k -vf scale=1280:720 -y vid-1500k.webm

    The transcode is not yet complete, but you can see where this is headed:

    -rw-r--r--  1 user  staff    87M Jan  4 23:27 audio.webm
    -rw-r--r--  1 user  staff    27M Jan  4 23:42 vid-1000k.webm
    -rw-r--r--  1 user  staff   285M Jan  4 23:42 vid-1500k.webm
    -rw-r--r--  1 user  staff    15M Jan  4 23:42 vid-500k.webm
    -rw-r--r--  1 user  staff    20M Jan  4 23:42 vid-700k.webm

    The 1500k variant is disproportionately larger than the other streams.

    The other problem is that when I use a shorter video, let’s say eight or nine minutes, the above configuration runs as expected and everything is perfect. I don’t know where the limit is, since each test costs a lot of processing power and time, but if the video is less than ten minutes long it works, and if it’s longer than an hour it produces massive files.
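
    One experiment worth trying (a sketch, not a confirmed fix; the -maxrate/-bufsize values are assumptions) is to bound libvpx’s rate control explicitly, so a single rendition cannot drift far above its target bitrate:

    # hypothetical variant of the 1500k rendition: -maxrate and -bufsize give
    # the encoder a hard rate-control buffer instead of unconstrained VBR
    ffmpeg -i "$1" $COMMON_OPTS $WEBM_OPTS -b:v 1500k \
           -maxrate 1500k -bufsize 3000k \
           -vf scale=1280:720 -y vid-1500k-capped.webm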