Keyword: - Tags - / framasoft

Other articles (75)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, in particular: writing content on the site, adjustable through the form template management; adding notes to articles; adding captions and annotations to images;

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (10303)

  • Theatrical quality ffmpeg/x264 encoding of a high-motion 1080p video

    2 December 2011, by Ian

    I've been struggling with encoding videos using FFmpeg and x264. The output stutters when played back in QuickTime, while in VLC it shows a lot of compression artifacts at the same places QuickTime stutters. So it seems like QuickTime is stuttering because it's trying to suppress the corruption/artifacts.

    The videos have a lot of random motion in them, including frames where 75% of the pixels will change at a random interval (the video is software-generated, so it's truly pseudo-random). The compression seems to be choking in these places, where it's likely detecting a "scene cut" incorrectly. It also seems to choke at regular intervals, where I guess it's inserting a keyframe.

    I've based my encoding preset on the x264-hq preset that comes with FFmpeg. I've tried turning off scene-cut detection and playing with the keyint/g and keyint_min options. Setting g to 1 makes it work, but it blows up the file size. I've tried the lossless presets, but they won't play back at all in QuickTime. Oddly, I haven't had any problems when working with a lower-resolution test video (1440x810).

    Here's the preset I have right now, which works, but yields a file that's approximately 60% larger than what the (non-working) hq preset produces. Is there any way to improve on this? The file size doesn't matter much; I just want something that will play back anywhere and be very high quality.

    coder=1
    flags=+loop
    cmp=+chroma
    partitions=+parti8x8+parti4x4+partp8x8+partp4x4+partb8x8
    me_method=umh
    subq=8
    me_range=16
    g=1
    keyint_min=1
    sc_threshold=0
    i_qfactor=0.71
    b_strategy=1
    crf=20
    qcomp=0.6
    qmin=20
    qmax=51
    qdiff=4
    bf=16
    refs=4
    trellis=1
    flags2=+dct8x8+wpred+bpyramid+mixed_refs
    wpredp=2
    

    Here's the command:

    ffmpeg \
      -r 60 -i "frame-%06d.tiff" \
      -vcodec libx264 -vpre my_preset \
      -threads 0 \
      -r 60 -an -f mp4 out.mp4
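
    For comparison (a sketch, not part of the original question): current FFmpeg builds have replaced the per-option .ffpreset files with libx264's built-in presets, so a rough modern equivalent of the command above might look like the following. The -crf 18 quality target is an assumption, not a value taken from the question.

    # Sketch only: modern libx264 invocation for a high-quality, widely playable MP4.
    # -pix_fmt yuv420p keeps the output in 8-bit 4:2:0, the only H.264 flavour QuickTime
    # reliably decodes, and -movflags +faststart moves the MP4 index to the front of the file.
    ffmpeg \
      -framerate 60 -i "frame-%06d.tiff" \
      -c:v libx264 -preset slow -crf 18 \
      -pix_fmt yuv420p -movflags +faststart \
      -an out.mp4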
    
  • Why doesn't this FFmpeg code create a video from a series of images?

    20 October 2011, by user551117

    I have successfully compiled the FFmpeg library for use in an iOS application. I would like to use it for encoding a video from a series of images, but I can't seem to make it work.

    The following is the code that I am using to encode this video:

    AVCodec *codec;
    AVCodecContext *c= NULL;
    int i, out_size, size, outbuf_size;
    FILE *f;
    AVFrame *picture;
    uint8_t *outbuf;

    printf("Video encoding\n");

    // find the MPEG-4 video encoder
    codec = avcodec_find_encoder(CODEC_ID_MPEG4);
    if (!codec) {
       fprintf(stderr, "codec not found\n");
       exit(1);
    }

    c= avcodec_alloc_context();
    picture= avcodec_alloc_frame();

    // put sample parameters
    c->bit_rate = 400000;
    // resolution must be a multiple of two
    c->width = 320;
    c->height = 480;
    //frames per second
    c->time_base= (AVRational){1,25};
    c->gop_size = 10; // emit one intra frame every ten frames
    c->max_b_frames=1;
    c->pix_fmt = PIX_FMT_YUV420P;

    //open it
    if (avcodec_open(c, codec) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
    }

    f = fopen([[NSTemporaryDirectory() stringByAppendingPathComponent:filename] UTF8String], "w");
    if (!f) {
       fprintf(stderr, "could not open %s\n",[filename UTF8String]);
       exit(1);
    }

    // alloc image and output buffer
    outbuf_size = 100000;
    outbuf = malloc(outbuf_size);
    size = c->width * c->height;

    #pragma mark -
    AVFrame* outpic = avcodec_alloc_frame();
    int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

    //create buffer for the output image
    uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

    #pragma mark -  
    for(i=1;i<48;i++) {
       fflush(stdout);

       int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
       uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

       UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%d.png", i]];
       CGImageRef newCgImage = [image CGImage];

       CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
       CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
       buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);  

       avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGB24, c->width, c->height);
       avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

       struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                                                      PIX_FMT_RGB24,
                                                      c->width, c->height,
                                                      PIX_FMT_YUV420P,
                                                      SWS_FAST_BILINEAR, NULL, NULL, NULL);

       //perform the conversion
       sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);
       // Here is where I try to convert to YUV

       // encode the image

       out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
       printf("encoding frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f);

       free(buffer);
       buffer = NULL;      

    }

    // get the delayed frames
    for(; out_size; i++) {
       fflush(stdout);

       out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
       printf("write frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, outbuf_size, f);      
    }

    // add sequence end code to have a real mpeg file
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);
    free(outbuf);

    avcodec_close(c);
    av_free(c);
    av_free(picture);
    printf("\n");

    What could be wrong with this code ?
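
    As a point of comparison (a sketch, not part of the question): the same image-sequence-to-video conversion can be done with the ffmpeg command-line tool, which is a quick way to confirm the input frames are usable before debugging the C code. It assumes the frames are named 1.png, 2.png, … as in the loop above.

    # Sketch only: encode 1.png, 2.png, ... at 25 fps with the MPEG-4 encoder,
    # roughly mirroring the bit rate and frame rate set in the C code above.
    ffmpeg -framerate 25 -start_number 1 -i "%d.png" \
      -c:v mpeg4 -b:v 400k -pix_fmt yuv420p out.mp4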

  • How to stream a source 1:1 with ffserver?

    23 October 2016, by thefatman

    I am experimenting with ffmpeg and ffserver and would like to restream an MPEG-TS stream 1:1 without changing the frame rate, resolution, bitrate, codec, audio format, etc.

    How do I achieve this with ffmpeg and ffserver?

    I tried to use NoDefaults in ffserver.conf, but ffserver complains that I have not set the bitrate, frame rate, etc.

    I just want to restream everything 1:1 without changing anything. How can I do this?

    Thank you.
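
    For what it's worth (ffserver has since been deprecated and removed from FFmpeg), a 1:1 pass-through that changes nothing is exactly what stream copy does, and it can be sketched with ffmpeg alone; the UDP addresses below are placeholders, not values from the question.

    # Sketch only: receive an MPEG-TS stream and re-send it untouched.
    # -c copy disables re-encoding, so codec, bitrate, resolution and frame rate
    # all pass through 1:1.
    ffmpeg -i udp://0.0.0.0:1234 -c copy -f mpegts udp://192.168.0.10:5678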