Other articles (101)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an extra plugin that is not enabled by default when MediaSPIP is initialized.
    Once activated, a preconfiguration is applied automatically by MediaSPIP init, making the new feature immediately operational. No configuration step is therefore required.

  • Embellishing it visually

    10 April 2011

    MediaSPIP is based on a system of themes and templates ("squelettes"). The templates determine where information is placed on the page, thereby defining a specific use of the platform, while the themes provide the overall look and feel.
    Anyone can contribute a new graphic theme or template and make it available to the community.

  • Possibility of farm deployment

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by many different sites.
    This makes it possible, for example: to share setup costs between several projects or individuals; to deploy a large number of unique sites quickly; to avoid piling every creation into a catch-all digital dump, as happens on the big general-public platforms scattered across the (...)

On other sites (8110)

  • Concat two mp4 files with ffmpeg without losing quality [migrated]

    13 June 2013, by jenia

    I have a problem concatenating two videos using ffmpeg.
    I am encoding the source mp4 files to ts with:

      ffmpeg -i output1.mp4 -scodec copy -vbsf h264_mp4toannexb i0.ts

    but the file I get looks much worse than the source file.

    Here is the information about both files:

      Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output1.mp4':
        Metadata:
          major_brand     : isom
          minor_version   : 1
          compatible_brands: isom
          creation_time   : 2013-06-13 15:40:36
        Duration: 00:00:15.72, start: 0.000000, bitrate: 2053 kb/s
          Stream #0.0(und): Video: h264 (High), yuv420p, 1280x720, 1931 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc
          Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 128 kb/s

      Input #0, mpegts, from 'i0.ts':
        Duration: 00:00:15.64, start: 1.400000, bitrate: 1382 kb/s
          Program 1
            Metadata:
              service_name    : Service01
              service_provider: Libav
            Stream #0.0[0x100]: Video: mpeg2video (Main), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 104857 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
            Stream #0.1[0x101](und): Audio: mp2, 44100 Hz, stereo, s16, 128 kb/s

    How can I solve this problem?
    Thanks in advance!
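
    A likely culprit, judging from the probe output above: -scodec sets the subtitle codec, not the video codec, so the video stream is silently re-encoded with the default MPEG-2 encoder (hence the mpeg2video stream in i0.ts). A stream-copy variant that avoids re-encoding entirely, in the same era's option syntax (output2.mp4 and the joined file name are assumed here), would be:

      ffmpeg -i output1.mp4 -vcodec copy -acodec copy -vbsf h264_mp4toannexb i0.ts
      ffmpeg -i output2.mp4 -vcodec copy -acodec copy -vbsf h264_mp4toannexb i1.ts
      ffmpeg -i "concat:i0.ts|i1.ts" -vcodec copy -acodec copy -absf aac_adtstoasc joined.mp4

    The last command joins the transport streams with the concat protocol and converts the AAC stream back to the layout the mp4 container expects.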

  • FFMpeg encoded video will only play in FFPlay

    8 November 2013, by mohM

    I've been debugging my program for a couple of weeks now, with the output video only showing a blank screen (I was testing with VLC, WMP and WMPClassic). I happened to try FFPlay and, lo and behold, the video works perfectly. I've read that this is usually caused by an incorrect pixel format, and that switching to PIX_FMT_YUV420P will make it work universally, but I'm already using that pixel format in the encoding process. Is there anything else that could be causing this?

    AVCodec* codec;
    AVCodecContext* c = NULL;
    uint8_t* outbuf;
    int i, out_size, outbuf_size;

    avcodec_register_all();

    printf("Video encoding\n");

    // Find the H.264 video encoder
    codec = avcodec_find_encoder(CODEC_ID_H264);
    if (!codec) {
       fprintf(stderr, "Codec not found\n");
       exit(1);
    }
    else printf("H264 codec found\n");

    c = avcodec_alloc_context3(codec);

    c->bit_rate = 400000;
    c->width = 1920;                                        // resolution must be a multiple of two (1280x720),(1920x1080),(720x480)
    c->height = 1200;
    c->time_base.num = 1;                                   // framerate numerator
    c->time_base.den = 25;                                  // framerate denominator
    c->gop_size = 10;                                       // emit one intra frame every ten frames
    c->max_b_frames = 1;                                    // maximum number of b-frames between non b-frames
    //c->keyint_min = 1;                                        // minimum GOP size
    //c->i_quant_factor = (float)0.71;                      // qscale factor between P and I frames
    //c->b_frame_strategy = 20;
    //c->qcompress = (float)0.6;
    //c->qmin = 20;                                         // minimum quantizer
    //c->qmax = 51;                                         // maximum quantizer
    //c->max_qdiff = 4;                                     // maximum quantizer difference between frames
    //c->refs = 4;                                          // number of reference frames
    //c->trellis = 1;                                           // trellis RD Quantization
    c->pix_fmt = PIX_FMT_YUV420P;
    c->codec_id = CODEC_ID_H264;
    //c->codec_type = AVMEDIA_TYPE_VIDEO;

    // Open the encoder
    if (avcodec_open2(c, codec,NULL) < 0) {
       fprintf(stderr, "Could not open codec\n");
       exit(1);
    }
    else printf("H264 codec opened\n");

    outbuf_size = 100000 + c->width*c->height*(32>>3);      // alloc output buffer (32 bpp worst case)
    outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
    printf("Setting buffer size to: %d\n",outbuf_size);

    FILE* f = fopen("example.mpg","wb");
    if(!f) printf("x  -  Cannot open video file for writing\n");
    else printf("Opened video file for writing\n");

    // encode 5 seconds of video
    for(i=0;i<125;i++) {                                    // assumed bound: 5 s at 25 fps (the post's loop header was garbled)
       // pPixels is assumed to hold the captured RGB32 source frame;
       // the code that filled it was lost from the post
       int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
       uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes*sizeof(uint8_t));

       AVFrame* inpic = avcodec_alloc_frame();
       AVFrame* outpic = avcodec_alloc_frame();

       outpic->pts = (int64_t)((float)i * (1000.0/((float)(c->time_base.den))) * 90);
       avpicture_fill((AVPicture*)inpic, (uint8_t*)pPixels, PIX_FMT_RGB32, c->width, c->height);                   // Fill picture with image
       avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);
       av_image_alloc(outpic->data, outpic->linesize, c->width, c->height, c->pix_fmt, 1);

       inpic->data[0] += inpic->linesize[0]*(screenHeight-1);                                                      // Flipping frame
       inpic->linesize[0] = -inpic->linesize[0];                                                                   // Flipping frame

       struct SwsContext* fooContext = sws_getContext(screenWidth, screenHeight, PIX_FMT_RGB32, c->width, c->height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
       sws_scale(fooContext, inpic->data, inpic->linesize, 0, c->height, outpic->data, outpic->linesize);

       // encode the image
       out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
       printf("Encoding frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f);
       delete [] pPixels;
       av_free(outbuffer);    
       av_free(inpic);
       av_free(outpic);
    }

    // get the delayed frames
    for(; out_size; i++) {
       fflush(stdout);

       out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
       printf("Writing frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f);
    }

    // add sequence end code to have a real mpeg file
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);

    avcodec_close(c);
    free(outbuf);
    av_free(c);
    printf("Closed codec and Freed\n");
  • iOS FFMPEG encode images to video

    10 April 2013, by brad.roush

    I am trying to take a set of UIImages and create a video file out of them with FFMPEG. There seem to be lots of questions about this topic, but none has gotten this working correctly for me. This one was particularly helpful in giving me a starting point. This iFrameExtractor example was also very helpful, but I want to do this in reverse, then add audio.

    This is the closest I have gotten, and it creates a short silent video with flashing colors and no images:

    // Register all formats and codecs
    av_register_all();

    AVCodec *codec;
    AVCodecContext *c= NULL;

    int i, out_size, size, outbuf_size;
    FILE *file;
    AVFrame *picture;
    uint8_t *outbuf;

    NSLog(@"Video encoding");

    /* find the mpeg video encoder */
    codec = avcodec_find_encoder(CODEC_ID_MPEG2VIDEO);
    if (!codec) {
       fprintf(stderr, "codec not found\n");
       exit(1);
    }

    c= avcodec_alloc_context3(codec);
    picture= avcodec_alloc_frame();

    /* put sample parameters */
    c->bit_rate = 400000;
    /* resolution must be a multiple of two */
    c->width = 352;
    c->height = 288;
    /* frames per second */
    c->time_base= (AVRational){1,25};
    c->gop_size = 10; /* emit one intra frame every ten frames */
    c->max_b_frames=1;
    c->pix_fmt = PIX_FMT_YUV420P;

    /* open it */
    if (avcodec_open2(c, codec, nil) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
    }

    // Put file in place

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *docs_dir = [paths objectAtIndex:0];
    NSString *filePath = [docs_dir stringByAppendingPathComponent:@"test.mpeg"];

    // Seed file
    NSString *sourcePath = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"mov"];
    NSError* error = nil;
    if (![[NSFileManager defaultManager] copyItemAtPath:sourcePath toPath:filePath error:&error]) {
       NSLog(@"Test Video creation failed:%@",[error userInfo]);
    } else NSLog(@"Test Video Created");

    const char *filename = [filePath UTF8String];
    file = fopen(filename, "wb");
    if (!file) {
       fprintf(stderr, "could not open %s\n", "filename");
       exit(1);
    }

    /* alloc image and output buffer */
    outbuf_size = 100000;
    outbuf = malloc(outbuf_size);
    size = c->width * c->height;

    //#pragma mark -
    AVFrame* outpic = avcodec_alloc_frame();
    int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

    //create buffer for the output image
    uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);


    AVPacket packet;
    av_init_packet(&packet);


    //#pragma mark -
    for(i=1;i<50;i++) {
       fflush(stdout);

       int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
       uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

       UIImage *image = [UIImage imageWithContentsOfFile:[docs_dir stringByAppendingPathComponent:[NSString stringWithFormat:@"%d.png",i]]];
       CGImageRef newCgImage = [image CGImage];

       CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
       CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
       buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);

       avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGB8, c->width, c->height);
       avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

       struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                                                      PIX_FMT_RGB8,
                                                      c->width, c->height,
                                                      PIX_FMT_YUV420P,
                                                      SWS_FAST_BILINEAR, NULL, NULL, NULL);

       //perform the conversion

       sws_scale(fooContext, outpic->data, outpic->linesize,
                 0, c->height, outpic->data, outpic->linesize);
       // Tried This but it didn't work
       //sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);


       out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
       // Tried this but it didn't work
       //int test = 0;
       //out_size = avcodec_encode_video2(c, &packet, outpic, &test);



       printf("encoding frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, file);

       free(buffer);
       buffer = NULL;

    }

    /* get the delayed frames */
    /*
    for(; out_size; i++) {
       fflush(stdout);

       out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
       printf("write frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, outbuf_size, file);
    }
    */

    /* add sequence end code to have a real mpeg file */
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, file);
    fclose(file);
    free(outbuf);

    avcodec_close(c);
    av_free(c);
    av_free(picture);
    printf("\n");

    Any ideas will be helpful here. If anyone knows of any other good Objective-C examples, that would also be great.
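
    Two details in the loop stand out. First, the active sws_scale call reads from outpic and writes back into outpic, so the RGB data in picture never reaches the converter (the commented-out variant with picture as the source is the right shape). Second, CGDataProviderCopyData returns pixels in whatever layout the PNG decoder chose, which rarely matches PIX_FMT_RGB8. A sketch of the conversion step under those assumptions, drawing through a CGBitmapContext to force a known RGBA layout (the function name and the kCGImageAlphaNoneSkipLast choice are assumptions, not the asker's code):

    #include <stdlib.h>
    #include <CoreGraphics/CoreGraphics.h>
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    // Render a CGImage into a tightly packed RGBA buffer, then convert it
    // into the YUV420P frame the encoder expects (same-era FFmpeg API).
    static void fill_yuv_from_cgimage(CGImageRef img, AVCodecContext* c,
                                      AVFrame* rgb, AVFrame* yuv)
    {
       size_t stride = 4 * c->width;                        // 4 bytes per RGBA pixel
       uint8_t* rgba = malloc(stride * c->height);
       CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
       CGContextRef ctx = CGBitmapContextCreate(rgba, c->width, c->height, 8,
                                                stride, cs, kCGImageAlphaNoneSkipLast);
       CGContextDrawImage(ctx, CGRectMake(0, 0, c->width, c->height), img);

       avpicture_fill((AVPicture*)rgb, rgba, PIX_FMT_RGBA, c->width, c->height);

       struct SwsContext* sws = sws_getContext(c->width, c->height, PIX_FMT_RGBA,
                                               c->width, c->height, PIX_FMT_YUV420P,
                                               SWS_FAST_BILINEAR, NULL, NULL, NULL);
       // source is the RGB frame, destination is the YUV frame -
       // not the YUV frame into itself as in the posted loop
       sws_scale(sws, (const uint8_t* const*)rgb->data, rgb->linesize,
                 0, c->height, yuv->data, yuv->linesize);

       sws_freeContext(sws);
       CGContextRelease(ctx);
       CGColorSpaceRelease(cs);
       free(rgba);
    }

    The yuv frame still needs its planes backed by a buffer once (as the posted code already does with avpicture_fill over outbuffer), and the avcodec_encode_video call stays the same.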