Other articles (30)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name    Version name           Version number
    Debian               Squeeze                6.x.x
    Debian               Wheezy                 7.x.x
    Debian               Jessie                 8.x.x
    Ubuntu               The Precise Pangolin   12.04 LTS
    Ubuntu               The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes in your MediaSPIP, or news about your projects on your MediaSPIP, using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news item creation form.
    News item creation form: for a document of the news type, the default fields are: publication date (customize the publication date) (...)

On other sites (5942)

  • Restrict bandwidth usage in cloud game server

    13 December 2012, by Nandy

    I am working on cloud game server development.

    During testing of some games, spikes of around 10 MBps were observed. Normally a game consumes 4-6 MBps of network bandwidth.

    Is there any way to keep the consumed bandwidth below 5 MBps without significantly affecting video quality?

    We are encoding at 720p with the x264 encoder. Are there any encoder parameters that would help achieve the expected output?
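
    One common way to cap what x264 delivers is its VBV rate control (the --vbv-maxrate / --vbv-bufsize encoder options, exposed by ffmpeg as -maxrate / -bufsize). A minimal sketch via the ffmpeg command line; the input name, target rates and preset below are assumptions, not values from the question:

    # VBV caps the encoder output: -maxrate is the ceiling and -bufsize controls
    # how tightly peaks are smoothed (a smaller buffer means a tighter cap, at
    # some quality cost on high-motion frames).
    ffmpeg -i input \
      -c:v libx264 -preset veryfast -tune zerolatency \
      -s 1280x720 \
      -b:v 4M -maxrate 4M -bufsize 2M \
      out.mp4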

  • Theatrical quality ffmpeg/x264 encoding of a high-motion 1080p video

    2 December 2011, by Ian

    I've been struggling with encoding videos using FFmpeg and x264. The output stutters when played back in QuickTime, while in VLC it shows a lot of compression artifacts at the same places where QuickTime stutters. So it seems QuickTime is stuttering because it's trying to suppress the corruption/artifacts.

    The videos have a lot of random motion in them, including frames where 75% of the pixels will change at a random interval (the video is software generated so it's truly pseudo-random). The compression seems to be choking in these places where it's likely detecting a "scene cut" incorrectly. It also seems to choke at regular intervals where I guess it's doing a keyframe.

    I've based my encoding preset on the x264-hq preset that comes with FFmpeg. I've tried turning off scene-cut detection and playing with the keyint/g and keyint_min options. Setting g to 1 makes it work, but blows up the file size. I've tried the lossless presets, but they won't play back at all in QuickTime. Oddly, I haven't had any problems when working with a lower-resolution test video (1440x810).

    Here's the preset I have right now, which works but yields a file approximately 60% larger than what the (non-working) hq preset yields. Is there any way to improve upon this? The file size doesn't matter much; I just want something that will play back anywhere and be very high quality.

    coder=1
    flags=+loop
    cmp=+chroma
    partitions=+parti8x8+parti4x4+partp8x8+partp4x4+partb8x8
    me_method=umh
    subq=8
    me_range=16
    g=1
    keyint_min=1
    sc_threshold=0
    i_qfactor=0.71
    b_strategy=1
    crf=20
    qcomp=0.6
    qmin=20
    qmax=51
    qdiff=4
    bf=16
    refs=4
    trellis=1
    flags2=+dct8x8+wpred+bpyramid+mixed_refs
    wpredp=2
    

    Here's the command:

    ffmpeg \
      -r 60 -i "frame-%06d.tiff" \
      -vcodec libx264 -vpre my_preset \
      -threads 0 \
      -r 60 -an -f mp4 out.mp4
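
    For comparison, on ffmpeg builds that expose libx264's native presets and CRF mode directly, roughly the same intent can be written without a preset file. This is only a sketch: the CRF value, preset name and keyframe interval are assumptions, and yuv420p is forced because QuickTime generally will not play High 4:4:4 / lossless x264 output:

    # Constant-quality encode at 60 fps, fixed 1-second GOP, scene-cut detection
    # disabled, and a QuickTime-compatible pixel format.
    ffmpeg -r 60 -i "frame-%06d.tiff" \
      -c:v libx264 -preset slow -crf 18 \
      -pix_fmt yuv420p \
      -x264opts keyint=60:min-keyint=60:scenecut=0 \
      -an out.mp4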
    
  • Why doesn't this FFmpeg code create a video from a series of images?

    20 October 2011, by user551117

    I have successfully compiled the FFmpeg library for use in an iOS application. I would like to use it for encoding a video from a series of images, but I can't seem to make it work.

    The following is the code that I am using to encode this video :

    AVCodec *codec;
    AVCodecContext *c= NULL;
    int i, out_size, size, outbuf_size;
    FILE *f;
    AVFrame *picture;
    uint8_t *outbuf;

    printf("Video encoding\n");

    /// find the mpeg video encoder
    codec=avcodec_find_encoder(CODEC_ID_MPEG4);
    //codec = avcodec_find_encoder(CODEC_ID_MPEG4);
    if (!codec) {
       fprintf(stderr, "codec not found\n");
       exit(1);
    }

    c= avcodec_alloc_context();
    picture= avcodec_alloc_frame();

    // put sample parameters
    c->bit_rate = 400000;
    /// resolution must be a multiple of two
    c->width = 320;
    c->height = 480;
    //frames per second
    c->time_base= (AVRational){1,25};
    c->gop_size = 10; /// emit one intra frame every ten frames
    c->max_b_frames=1;
    c->pix_fmt = PIX_FMT_YUV420P;

    //open it
    if (avcodec_open(c, codec) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
    }

    f = fopen([[NSTemporaryDirectory() stringByAppendingPathComponent:filename] UTF8String], "w");
    if (!f) {
       fprintf(stderr, "could not open %s\n",[filename UTF8String]);
       exit(1);
    }

    // alloc image and output buffer
    outbuf_size = 100000;
    outbuf = malloc(outbuf_size);
    size = c->width * c->height;

    #pragma mark -
    AVFrame* outpic = avcodec_alloc_frame();
    int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

    //create buffer for the output image
    uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

    #pragma mark -  
    for(i=1;i<48;i++) {
       fflush(stdout);

       int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
       uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

       UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%d.png", i]];
       CGImageRef newCgImage = [image CGImage];

       CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
       CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
       buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);  

       avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGB24, c->width, c->height);
       avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

       struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                                                      PIX_FMT_RGB24,
                                                      c->width, c->height,
                                                      PIX_FMT_YUV420P,
                                                      SWS_FAST_BILINEAR, NULL, NULL, NULL);

       //perform the conversion
       sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);
       // Here is where I try to convert to YUV

       // encode the image

       out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
       printf("encoding frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f);

       free(buffer);
       buffer = NULL;      

    }

    // get the delayed frames
    for(; out_size; i++) {
       fflush(stdout);

       out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
       printf("write frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, outbuf_size, f);      
    }

    // add sequence end code to have a real mpeg file
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);
    free(outbuf);

    avcodec_close(c);
    av_free(c);
    av_free(picture);
    printf("\n");

    What could be wrong with this code?
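
    One plausible culprit, not a verified fix: the loop hands the CGImage pixel data to swscale described as PIX_FMT_RGB24, but the backing store of a UIImage's CGImage is normally 32-bit RGBA (or BGRA) and its rows may be padded, so the converter misreads the source. A minimal sketch of just the conversion step under that assumption, using the same deprecated-era FFmpeg API as the question:

    // Sketch only: assumes the CFData bytes are tightly packed 32-bit RGBA.
    // If CGImageGetBytesPerRow() reports row padding, overwrite
    // picture->linesize[0] with that value after avpicture_fill().
    avpicture_fill((AVPicture *)picture, buffer, PIX_FMT_RGBA, c->width, c->height);
    avpicture_fill((AVPicture *)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

    struct SwsContext *sws = sws_getContext(c->width, c->height, PIX_FMT_RGBA,
                                            c->width, c->height, PIX_FMT_YUV420P,
                                            SWS_FAST_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws, (const uint8_t * const *)picture->data, picture->linesize,
              0, c->height, outpic->data, outpic->linesize);
    sws_freeContext(sws); // the original creates a new context every frame and never frees it

    // Note: buffer points at memory owned by CoreGraphics here, so the free(buffer)
    // at the end of the loop frees the wrong allocation; release bitmapData with
    // CFRelease() instead.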