
Other articles (69)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (10875)

  • Generate video from bitmap images using FFMPEG

    29 June 2013, by Pferd

    I'm trying to encode a bunch of images into a video using FFmpeg in Visual Studio, but I can't get it to work. Can someone please tell me where I am going wrong? The code is attached below.

    void Encode::encodeVideoFromImage(char* filename){

        int i;

        // Get the encoder; try MPEG-1 video as a start.
        AVCodec* codec = avcodec_find_encoder(CODEC_ID_MPEG1VIDEO);
        //AVCodec* codec = avcodec_find_encoder(CODEC_ID_MPEG4);
        //AVCodec* codec = avcodec_find_encoder(CODEC_ID_MPEG2VIDEO);

        if (!codec)
        {
            MessageBox(_T("can't find codec"), _T("Warning!"), MB_ICONERROR | MB_OK);
        }

        // Initialize the codec context
        AVCodecContext* c = avcodec_alloc_context();

        // Put sample parameters
        c->bit_rate = 400000;
        // Resolution must be a multiple of two
        c->width = 800;
        c->height = 600;
        c->time_base.num = 1;
        c->time_base.den = 25;
        c->gop_size = 8;        // Emit one intra frame every eight frames
        c->max_b_frames = 1;
        c->pix_fmt = PIX_FMT_YUV420P;

        // Open the codec
        if (avcodec_open(c, codec) < 0)
        {
            // fprintf(stderr, "could not open codec\n");
            MessageBox(_T("can't open codec"), _T("Warning!"), MB_ICONERROR | MB_OK);
        }

        // Open the output file
        FILE* f = fopen(filename, "wb");
        if (!f)
        {
            // fprintf(stderr, "could not open %s\n", filename);
            MessageBox(_T("Unable to open file"), _T("Warning!"), MB_ICONERROR | MB_OK);
        }

        // Allocate image and output buffers
        int in_width, in_height, out_width, out_height;

        // Here, make sure inbuffer points to the input BGR32 data,
        // and the input and output dimensions are set correctly.
        int out_size = 1000000;

        in_width   = c->width;
        out_width  = c->width;
        in_height  = c->height;
        out_height = c->height;

        // Create FFmpeg frame structures
        AVFrame* inpic  = avcodec_alloc_frame();
        AVFrame* outpic = avcodec_alloc_frame();

        // Bytes needed for the output image
        int nbytes = avpicture_get_size(PIX_FMT_BGR32, out_width, out_height);

        // Create buffers for the input and output images
        uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes * sizeof(uint8_t));
        uint8_t* inbuffer  = (uint8_t*)av_malloc(nbytes * sizeof(uint8_t));

        CImage capImage;
        CString pictureNumber;

        /* Encode one frame of video per input image */
        for (i = 0; i < 50; i++) {
            fflush(stdout);

            /* Use existing images */
            pictureNumber = "";
            pictureNumber.Format(_T("%d"), i + 1);

            capImage.Load(_T("C:\\imageDump\\test") + pictureNumber + _T(".bmp")); // TBD from memory!
            //MessageBox(_T("C:\\imageDump\\test")+pictureNumber+_T(".bmp"), _T("Warning!"), MB_ICONERROR | MB_OK);
            inbuffer = (uint8_t*)capImage.GetBits();

            // Convert RGB to YUV 420 here!
            //RGBtoYUV420P(pBits,picture_buf,bpp,true,c->width,c->height,false);
            //inbuffer = pBits;

            avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, in_width, in_height);
            avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, out_width, out_height);

            // Create the conversion context
            SwsContext* fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32, out_width, out_height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);

            // Perform the conversion
            sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height, outpic->data, outpic->linesize);

            out_size = avcodec_encode_video(c, outbuffer, out_size, inpic);
            //printf("encoding frame %3d (size=%5d)\n", i, out_size);
            fwrite(outbuffer, 1, out_size, f);
            capImage.Destroy();
            //free(inbuffer);
        }

        // Get the delayed frames
        for (; out_size; i++) {
            fflush(stdout);

            out_size = avcodec_encode_video(c, outbuffer, out_size, NULL);
            //printf("write frame %3d (size=%5d)\n", i, out_size);
            fwrite(outbuffer, 1, out_size, f);
        }

        /* Add a sequence end code to get a real MPEG file */
        outbuffer[0] = 0x00;
        outbuffer[1] = 0x00;
        outbuffer[2] = 0x01;
        outbuffer[3] = 0xb7;
        fwrite(outbuffer, 1, 4, f);
        fclose(f);
        free(inbuffer);
        free(outbuffer);

        avcodec_close(c);
        av_free(c);
        av_free(inpic);
        av_free(outpic);
        //printf("\n");
    }

    Thank you!
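
    As written, the loop above converts the frame into outpic but then passes inpic (the BGR32 frame) to avcodec_encode_video, and outbuffer doubles as both the YUV plane buffer and the encoder output buffer. Below is a minimal sketch of the usual per-frame ordering with the same legacy API, assuming c, inpic, outpic, fooContext, inbuffer and f are set up as in the question; yuvbuffer, encbuffer and encbuffer_size are illustrative names, not taken from the original code:

    // Sketch only: per-frame convert-then-encode with the legacy FFmpeg API.
    // Separate buffers: yuvbuffer backs the converted frame, encbuffer receives
    // the compressed output.
    uint8_t* yuvbuffer = (uint8_t*)av_malloc(
        avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height));
    int encbuffer_size = 1000000;
    uint8_t* encbuffer = (uint8_t*)av_malloc(encbuffer_size);

    // Wrap the BGR32 source bits and the YUV destination planes
    avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, c->width, c->height);
    avpicture_fill((AVPicture*)outpic, yuvbuffer, PIX_FMT_YUV420P, c->width, c->height);

    // BGR32 -> YUV420P (fooContext would normally be created once, outside the loop)
    sws_scale(fooContext, inpic->data, inpic->linesize, 0, c->height,
              outpic->data, outpic->linesize);

    // Encode the converted frame (outpic), not the BGR32 frame (inpic),
    // and always pass the full size of the output buffer.
    int bytes = avcodec_encode_video(c, encbuffer, encbuffer_size, outpic);
    fwrite(encbuffer, 1, bytes, f);

    Two other things worth double-checking in the original loop: the SwsContext is usually created once rather than per iteration, and the encoder's return value should not be reused as the buffer size for the next call.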

  • How to concat two/many mp4 files (Mac OS X Lion 10.7.5) with different resolution, bit rate [on hold]

    3 September 2013, by praveen

    I have to concatenate different mp4 files into a single mp4 file. I am using the following ffmpeg commands, but they only work when both files are the same (a copy), or when all the video properties (codec, resolution, bitrate, ...) match; otherwise the result is an unexpected video. (I am working on Mac OS X Lion 10.7.5)

    ffmpeg commands:

    ffmpeg -i images/1/output.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts
    ffmpeg -i images/1/Video2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts
    ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -c copy -bsf:a aac_adtstoasc output2.mp4

    Console output:

    [mpegts @ 0x7f8c6c03d800] max_analyze_duration 5000000 reached at 5000000 microseconds

    Input #0, mpegts, from 'concat:intermediate1.ts|intermediate2.ts':
    Duration: 00:00:16.52, start: 1.400000, bitrate: 1342 kb/s
    Program 1
    Metadata:
    service_name    : Service01
    service_provider: FFmpeg
    Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 1024x768 [SAR   1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x101](und): Audio: aac ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 101 kb/s



    Output #0, mp4, to 'output2.mp4':
    Metadata:
    encoder         : Lavf54.63.104
    Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1024x768 [SAR 1:1 DAR 4:3],      q=2-31, 25 fps, 90k tbn, 90k tbc
    Stream #0:1(und): Audio: aac ([64][0][0][0] / 0x0040), 48000 Hz, stereo, 101 kb/s
    Stream mapping:
    Stream #0:0 -> #0:0 (copy)
    Stream #0:1 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    frame=  586 fps=0.0 q=-1.0 Lsize=    2449kB time=00:00:20.11 bitrate= 997.4kbits/s    
    video:2210kB audio:225kB subtitle:0 global headers:0kB muxing overhead 0.578335%

    Please help
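
    The concat: protocol only joins pieces that are already bitstream-compatible, so with differing resolutions or bitrates a common workaround is to first re-encode both inputs to identical parameters and only then build the TS intermediates. A sketch with illustrative values (the target scale, frame rate and the uniform*.mp4 names are assumptions, not taken from the question):

    ffmpeg -i images/1/output.mp4 -vf scale=1024:768 -r 25 -c:v libx264 -c:a aac -strict experimental -ar 48000 uniform1.mp4
    ffmpeg -i images/1/Video2.mp4 -vf scale=1024:768 -r 25 -c:v libx264 -c:a aac -strict experimental -ar 48000 uniform2.mp4
    ffmpeg -i uniform1.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts
    ffmpeg -i uniform2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts
    ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -c copy -bsf:a aac_adtstoasc output2.mp4

    Re-encoding costs time and some quality, but it is what makes the streams uniform enough for the copy-based concatenation to work.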

  • Files created with a direct stream copy using FFmpeg's libavformat API play back too fast at 3600 fps

    2 October 2013, by Chris Ballinger

    I am working on a libavformat API wrapper that converts MP4 files with H.264 and AAC to MPEG-TS segments suitable for streaming. I am just doing a simple stream copy without re-encoding, but the files I produce play the video back at 3600 fps instead of 24 fps.

    Here is some ffprobe output (https://gist.github.com/chrisballinger/6733678); the broken file is below:

    r_frame_rate=1/1
    avg_frame_rate=0/0
    time_base=1/90000
    start_pts=0
    start_time=0.000000
    duration_ts=2999
    duration=0.033322

    The same input file manually sent through ffmpeg has proper timestamp information:

    r_frame_rate=24/1
    avg_frame_rate=0/0
    time_base=1/90000
    start_pts=126000
    start_time=1.400000
    duration_ts=449850
    duration=4.998333

    I believe the problem lies somewhere in my setup of libavformat here: https://github.com/OpenWatch/FFmpegWrapper/blob/master/FFmpegWrapper/FFmpegWrapper.m#L349 where I repurposed a bunch of code from ffmpeg.c that was required for the direct stream copy.

    Since 3600 seems like a "magic number" (60*60), it could be as simple as me not setting the time scale properly, but I can't figure out where my code diverges from ffmpeg/avconv itself.

    Similar question here, but I don't think they got as far as I did: Muxing a H.264 Annex B & AAC stream using libavformat with vcopy/acopy
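
    If the wrapper is not rescaling packet timestamps during the copy, that alone can produce a tiny duration_ts like the one above: ffmpeg.c rescales every copied packet from the input stream's time base to the output stream's before muxing. A rough sketch of that step (variable names such as in_st, out_st and output_format_ctx are illustrative, and AV_NOPTS_VALUE handling is omitted):

    // Sketch: before writing a copied packet, convert its timestamps from the
    // input stream's time_base to the output stream's time_base.
    // in_st and out_st are the AVStream* being copied; pkt was read from in_st.
    pkt.pts = av_rescale_q(pkt.pts, in_st->time_base, out_st->time_base);
    pkt.dts = av_rescale_q(pkt.dts, in_st->time_base, out_st->time_base);
    pkt.duration = (int)av_rescale_q(pkt.duration, in_st->time_base, out_st->time_base);
    pkt.stream_index = out_st->index;
    av_interleaved_write_frame(output_format_ctx, &pkt);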