Other articles (61)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to sign up to the translators' mailing list to ask for more information.
    At present, MediaSPIP is only available in French and (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects/individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (10235)

  • ffmpeg nvenc encode too slow

    13 August 2016, by sweetsource

    I use FFmpeg 3.1 compiled with NVENC. When I run the FFmpeg encoding example like this:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #include <libavutil/opt.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/common.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/samplefmt.h>
    #include <ace/ace_os.h>

    #define INBUF_SIZE 4096
    #define AUDIO_INBUF_SIZE 20480
    #define AUDIO_REFILL_THRESH 4096



    /*
    * Video encoding example
    */
    static void video_encode_example(const char *filename, const char* codec_name)
    {
       AVCodec *codec;
       AVCodecContext *c= NULL;
       int i, ret, x, y, got_output;
       ACE_INT64 nstart,nend;
       FILE *f;
       AVFrame *frame;
       AVPacket pkt;
       uint8_t endcode[] = { 0, 0, 1, 0xb7 };

       printf("Encode video file %s\n", filename);

       /* find the video encoder */
       codec = avcodec_find_encoder_by_name(codec_name);
       if (!codec) {
           fprintf(stderr, "Codec not found\n");
           exit(1);
       }

       c = avcodec_alloc_context3(codec);
       if (!c) {
           fprintf(stderr, "Could not allocate video codec context\n");
           exit(1);
       }

       /* put sample parameters */
       c->bit_rate = 400000;
       /* resolution must be a multiple of two */
       c->width = 352;
       c->height = 288;
       /* frames per second */
       c->time_base = (AVRational){1,25};
       /* emit one intra frame every ten frames
        * check frame pict_type before passing frame
        * to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
        * then gop_size is ignored and the output of encoder
        * will always be I frame irrespective to gop_size
        */
       c->gop_size = 25;
       c->max_b_frames = 0;
       c->thread_count = 1;
       c->refs = 4;
       c->pix_fmt = AV_PIX_FMT_YUV420P;

       if (!strcmp(codec_name, "libx264"))
       {
           av_opt_set(c->priv_data, "preset", "superfast", 0);
           av_opt_set(c->priv_data, "tune", "zerolatency", 0);
       }

       if (!strcmp(codec_name, "h264_nvenc"))
       {
           av_opt_set(c->priv_data, "gpu", "any", 0);
           av_opt_set(c->priv_data, "preset", "llhp", 0);
           av_opt_set(c->priv_data, "profile", "main", 0);
           c->refs = 0;
           c->flags = 0;
           c->qmax = 31;
           c->qmin = 2;
       }

       /* open it */
       if (avcodec_open2(c, codec, NULL) < 0) {
           fprintf(stderr, "Could not open codec\n");
           exit(1);
       }

       f = fopen(filename, "wb");
       if (!f) {
           fprintf(stderr, "Could not open %s\n", filename);
           exit(1);
       }

       frame = av_frame_alloc();
       if (!frame) {
           fprintf(stderr, "Could not allocate video frame\n");
           exit(1);
       }
       frame->format = c->pix_fmt;
       frame->width  = c->width;
       frame->height = c->height;

       /* the image can be allocated by any means and av_image_alloc() is
        * just the most convenient way if av_malloc() is to be used */
       ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height,
                            c->pix_fmt, 32);
       if (ret < 0) {
           fprintf(stderr, "Could not allocate raw picture buffer\n");
           exit(1);
       }

       /* encode 1 second of video */
       for (i = 0; i < 25; i++) {
           av_init_packet(&pkt);
           pkt.data = NULL;    // packet data will be allocated by the encoder
           pkt.size = 0;

           fflush(stdout);
           /* prepare a dummy image */
           /* Y */
           for (y = 0; y < c->height; y++) {
               for (x = 0; x < c->width; x++) {
                   frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
               }
           }

           /* Cb and Cr */
           for (y = 0; y < c->height/2; y++) {
               for (x = 0; x < c->width/2; x++) {
                   frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
                   frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
               }
           }

           frame->pts = i;

           /* encode the image */
           nstart = ACE_OS::gettimeofday().get_msec();
           ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
           if (ret < 0) {
               fprintf(stderr, "Error encoding frame\n");
               exit(1);
           }

           if (got_output) {
               printf("%s take time: %lld ms\n", codec_name, (long long)(ACE_OS::gettimeofday().get_msec() - nstart));
               printf("Write frame %3d (size=%5d)\n", i, pkt.size);
               fwrite(pkt.data, 1, pkt.size, f);
               av_packet_unref(&pkt);
           }
       }

       /* get the delayed frames */
       for (got_output = 1; got_output; i++) {
           fflush(stdout);

           ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
           if (ret < 0) {
               fprintf(stderr, "Error encoding frame\n");
               exit(1);
           }

           if (got_output) {
               printf("Write frame %3d (size=%5d)\n", i, pkt.size);
               fwrite(pkt.data, 1, pkt.size, f);
               av_packet_unref(&pkt);
           }
       }

       /* add sequence end code to have a real MPEG file */
       fwrite(endcode, 1, sizeof(endcode), f);
       fclose(f);

       avcodec_close(c);
       av_free(c);
       av_freep(&frame->data[0]);
       av_frame_free(&amp;frame);
       printf("\n");
    }



    int main(int argc, char **argv)
    {
       const char *output_type;

       /* register all the codecs */
       avcodec_register_all();
       video_encode_example("test.h264", "h264_nvenc");


       return 0;
    }

    It encodes one frame into a packet in about 1800 ms, which is too slow. I use an Nvidia Grid K1. Is there some parameter error? Thank you very much.
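
    A side note, not part of the original question: the per-frame timing can also be taken without ACE. Below is a minimal sketch using POSIX clock_gettime, assuming a POSIX system; now_ms is a hypothetical helper name. Since the first call into a hardware encoder can include one-off setup cost, averaging over many frames usually gives a more representative figure than a single measurement.

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Monotonic timestamp in milliseconds (POSIX clock_gettime). */
    static int64_t now_ms(void)
    {
       struct timespec ts;
       clock_gettime(CLOCK_MONOTONIC, &ts);
       return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    /* Inside the encode loop, around avcodec_encode_video2():
     *
     *    int64_t t0 = now_ms();
     *    ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
     *    if (got_output)
     *        printf("%s took %lld ms\n", codec_name, (long long)(now_ms() - t0));
     */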

  • Programmatically creating a video using FFmpeg, using SDL's sprite screenshot BMP

    28 October 2016, by alok

    I have an animation/sprite developed in C++ on SDL2 libs (based on this answer). The bitmaps are saved to a certain path. They are of dimensions 640x480 and format is given by the SDL constant SDL_PIXELFORMAT_ARGB8888.

    I have a second program written in C on top of FFmpeg libs, which reads one image from the above path (just one for the time being, will read the whole series when it works for just one).
    This does the following (in gist, skipping validation and comments for conciseness):

    AVCodec *codec;
    AVCodecContext *c = NULL;
    int i, ret, x, y, got_output;
    FILE *f;
    AVFrame *frame;
    AVPacket pkt;
    uint8_t endcode[] = { 0, 0, 1, 0xb7 };

    codec = avcodec_find_encoder(codec_id);
    c = avcodec_alloc_context3(codec);
    c->bit_rate = 400000;
    /* resolution must be a multiple of two */
    c->width = 640;
    c->height = 480;
    c->time_base = (AVRational ) { 1, 25 };
    c->gop_size = 5;
    c->max_b_frames = 1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;

    av_opt_set(c->priv_data, "preset", "slow", 0);
    avcodec_open2(c, codec, NULL);

    f = fopen(filename, "wb");
    frame = av_frame_alloc();
    av_image_alloc(frame->data, frame->linesize, c->width, c->height, c->pix_fmt, 32);

    for (i = 0; i < 25; ++i) {
       readSingleFile("/tmp/alok1/ss099.bmp", &frame->data); // Read the saved BMP into frame->data
       frame->pts = i;
       frame->width = 640;
       frame->height = 480;
       frame->format = -1;

       av_init_packet(&pkt);
       pkt.data = NULL; // packet data will be allocated by the encoder
       pkt.size = 0;
       ret = avcodec_encode_video2(c, &pkt, frame, &got_output);


       if (got_output) {
           printf("Write frame %3d (size=%5d)\n", i, pkt.size);
           fwrite(pkt.data, 1, pkt.size, f);
       }
       av_packet_unref(&pkt);
    }
    for (got_output = 1; got_output; i++) {
       fflush(stdout);

       ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
       if (ret < 0) {
           fprintf(stderr, "Error encoding frame\n");
           exit(1);
       }

       if (got_output) {
           printf("[DELAYED]Write frame %3d (size=%5d)\n", i, pkt.size);
           fwrite(pkt.data, 1, pkt.size, f);
           av_packet_unref(&pkt);
       }
    }

    fwrite(endcode, 1, sizeof(endcode), f);
    //cleanup

    As a result of the above code (which compiles without trouble), I can get a video which plays for 1 second, so that part is working as expected. The problem is that the image seen is a full green screen, as in the "how video looks" screenshot attached to the original post.

    The image that is being read using the readSingleImage(...) function is rendered by image viewers on Linux (Gwenview and Okular) as shown in the "original bitmap image" screenshot of the original post.

    Any pointers as to what could be going wrong?
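
    A side note, not part of the original post: SDL_PIXELFORMAT_ARGB8888 is a packed 32-bit RGB format, while the encoder context above is configured for AV_PIX_FMT_YUV420P, so the BMP bytes cannot be copied into frame->data unchanged. Below is a minimal sketch of that conversion with libswscale, assuming a hypothetical bmp_pixels pointer to the decoded 640x480 pixel data and that its in-memory byte order corresponds to AV_PIX_FMT_BGRA (the usual layout for SDL_PIXELFORMAT_ARGB8888 on little-endian machines):

    #include <libswscale/swscale.h>

    /* Sketch: convert packed 32-bit pixels (bmp_pixels is hypothetical) into the
       YUV420P frame expected by the encoder, instead of copying the BMP bytes
       directly into frame->data. */
    struct SwsContext *sws = sws_getContext(640, 480, AV_PIX_FMT_BGRA,
                                            640, 480, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    const uint8_t *src_data[4]   = { bmp_pixels, NULL, NULL, NULL };
    int            src_stride[4] = { 640 * 4, 0, 0, 0 };  /* 4 bytes per pixel */
    sws_scale(sws, src_data, src_stride, 0, 480, frame->data, frame->linesize);
    frame->format = c->pix_fmt;  /* keep the frame format consistent with the context */
    sws_freeContext(sws);

    In a real program the SwsContext would be created once outside the read/encode loop and reused for every frame.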

  • ffmpeg livestream only shows one frame at a time

    28 October 2016, by user3308335

    So, I've tried to turn one of my Pis into a silly "baby cam" for my pet, and I followed the tutorial made by Ustream.tv on how to do this.

    This is the script I run to start the stream:

    #!/bin/bash
    RTMP_URL=<rtmpurl>
    STREAM_KEY=<streamkey>
    while :
    do
     raspivid -n -hf -t 0 -w 640 -h 480 -fps 15 -b 400000 -o - | ffmpeg -i - -vcodec copy -an  -f flv $RTMP_URL/$STREAM_KEY
     sleep 2
    done

    However, whenever I go to view the stream, it shows only one frame: the same frame until I refresh the browser and watch the ad again, after which it shows a single new frame.

    Does anyone have an idea why this might be happening, or any troubleshooting tricks for me to try?