Other articles (70)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select fields. See the two images below for a comparison.
    To do this, simply activate the Chosen plugin (Site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, meaning: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "media" article;

  • Managing the farm

    2 March 2010

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to meet the needs of the different channels.
    To begin with, it uses the "Gestion de mutualisation" plugin

On other sites (8595)

  • ffmpeg - convert image sequence to video with reversed order

    8 April 2017, by 0__

    Looking at the docs, it is not apparent to me whether ffmpeg would allow me to convert an image sequence to a video in reverse order, for example using this sequence:

    frame-1000.jpg
    frame-999.jpg
    frame-998.jpg
    ...
    frame-1.jpg

    Is it possible to give a "step direction" for the frame indices?
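
    One way to do this (a sketch, not verified against this exact setup; the frame rate and encoder options are assumptions) is to read the frames in their natural ascending order and let the reverse video filter flip their order. Note that reverse buffers the whole sequence in memory, so very long sequences need a lot of RAM:

       ffmpeg -framerate 25 -start_number 1 -i frame-%d.jpg -vf reverse -c:v libx264 -pix_fmt yuv420p reversed.mp4

    For sequences too long to buffer, writing a concat demuxer file list with the frames in descending order achieves the same result without holding every frame in memory.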

  • Encoding a screenshot into a video using FFMPEG

    2 July 2013, by mohM

    I'm trying to get the pixels from the screen and encode the screenshot into a video using ffmpeg. I've seen a couple of examples, but they either assume you already have the pixel data or use image file input. Whether or not I use sws_scale() (it is included in the examples I've seen), and whether I typecast an HBITMAP or an RGBQUAD*, it tells me that the image src data is bad and encodes a blank image rather than the screenshot. Is there something I'm missing here? (A possible conversion fix is sketched after the code below.)

    AVCodec* codec;
    AVCodecContext* c = NULL;
    AVFrame* inpic;
    uint8_t* outbuf, *picture_buf;
    int i, out_size, size, outbuf_size;
    HBITMAP hBmp;
    //int x,y;

    avcodec_register_all();

    printf("Video encoding\n");

    // Find the H.264 video encoder
    codec = avcodec_find_encoder(CODEC_ID_H264);
    if (!codec) {
       fprintf(stderr, "Codec not found\n");
       exit(1);
    }
    else printf("H264 codec found\n");

    c = avcodec_alloc_context3(codec);
    inpic = avcodec_alloc_frame();

    c->bit_rate = 400000;
    c->width = screenWidth;                                     // resolution must be a multiple of two
    c->height = screenHeight;
    c->time_base.num = 1;
    c->time_base.den = 25;
    c->gop_size = 10;                                           // emit one intra frame every ten frames
    c->max_b_frames=1;
    c->pix_fmt = PIX_FMT_YUV420P;
    c->codec_id = CODEC_ID_H264;
    //c->codec_type = AVMEDIA_TYPE_VIDEO;

    //av_opt_set(c->priv_data, "preset", "slow", 0);
    //printf("Setting presets to slow for performance\n");

    // Open the encoder
    if (avcodec_open2(c, codec,NULL) < 0) {
       fprintf(stderr, "Could not open codec\n");
       exit(1);
    }
    else printf("H264 codec opened\n");

    outbuf_size = 100000 + 12*c->width*c->height;           // alloc image and output buffer
    //outbuf_size = 100000;
    outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
    size = c->width * c->height;
    picture_buf = static_cast<uint8_t*>(malloc((size*3)/2));
    printf("Setting buffer size to: %d\n",outbuf_size);

    FILE* f = fopen("example.mpg","wb");
    if(!f) printf("x  -  Cannot open video file for writing\n");
    else printf("Opened video file for writing\n");

    /*inpic->data[0] = picture_buf;
    inpic->data[1] = inpic->data[0] + size;
    inpic->data[2] = inpic->data[1] + size / 4;
    inpic->linesize[0] = c->width;
    inpic->linesize[1] = c->width / 2;
    inpic->linesize[2] = c->width / 2;*/


    //int x,y;
    // encode 1 second of video
    for(i=0; i<c->time_base.den; i++) {
       fflush(stdout);


       HWND hDesktopWnd = GetDesktopWindow();
       HDC hDesktopDC = GetDC(hDesktopWnd);
       HDC hCaptureDC = CreateCompatibleDC(hDesktopDC);
       hBmp = CreateCompatibleBitmap(GetDC(0), screenWidth, screenHeight);
       SelectObject(hCaptureDC, hBmp);
       BitBlt(hCaptureDC, 0, 0, screenWidth, screenHeight, hDesktopDC, 0, 0, SRCCOPY|CAPTUREBLT);
       BITMAPINFO bmi = {0};
       bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
       bmi.bmiHeader.biWidth = screenWidth;
       bmi.bmiHeader.biHeight = screenHeight;
       bmi.bmiHeader.biPlanes = 1;
       bmi.bmiHeader.biBitCount = 32;
       bmi.bmiHeader.biCompression = BI_RGB;
       RGBQUAD *pPixels = new RGBQUAD[screenWidth*screenHeight];
       GetDIBits(hCaptureDC,hBmp,0,screenHeight,pPixels,&bmi,DIB_RGB_COLORS);

       inpic->pts = (float) i * (1000.0/(float)(c->time_base.den))*90;
       avpicture_fill((AVPicture*)inpic, (uint8_t*)pPixels, PIX_FMT_BGR32, c->width, c->height);                   // Fill picture with image
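       // note: av_image_alloc() below allocates fresh (empty) planes into inpic->data/linesize, replacing the pointers avpicture_fill() just set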
       av_image_alloc(inpic->data, inpic->linesize, c->width, c->height, c->pix_fmt, 1);
       //printf("Allocated frame\n");
       //SaveBMPFile(L"screenshot.bmp",hBmp,hDc,screenWidth,screenHeight);
       ReleaseDC(hDesktopWnd,hDesktopDC);
       DeleteDC(hCaptureDC);
       DeleteObject(hBmp);

       // encode the image
       out_size = avcodec_encode_video(c, outbuf, outbuf_size, inpic);
       printf("Encoding frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f);
    }

    // get the delayed frames
    for(; out_size; i++) {
       fflush(stdout);

       out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
       printf("Writing frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f);
    }

    // add sequence end code to have a real mpeg file
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);
    free(picture_buf);
    free(outbuf);

    avcodec_close(c);
    av_free(c);
    av_free(inpic);
    printf("Closed codec and Freed\n");
  • ffmpeg Could not find input stream [closed]

    17 April 2013, by Peter Walker

    I've seen lots of posts on streaming video feeds to ffserver, but still haven't found what I'm looking for.

    I use this not only on mp4 files but on avi and other formats as well.

       ffmpeg -i test5.mp4 http://localhost:8090/feed1.ffm

    I keep getting this error.

       Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test5.mp4':
       Metadata:
              major_brand     : isom
              minor_version   : 512
              compatible_brands: isomiso2mp41
              encoder         : Lavf53.21.1
       Duration: 00:05:00.00, start: 0.000000, bitrate: 278 kb/s
       Stream #0.0(und): Video: mpeg4 (Simple Profile), yuv420p, 640x320 [PAR 1:1 DAR     2:1], 277 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
       Incompatible sample format '(null)' for codec 'mp2', auto-selecting format 's16'
       Last message repeated 1 times
       Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
       Stream #0.0: Video: [0][0][0][0] / 0x0000, yuv420p, q=0-0, 1000k tbn
       Stream #0.1: Video: mpeg1video, (null), 32622x226393720, q=2-31, pass 1, pass 2,     226393 kb/s, 1000k tbn
       Stream #0.2: Audio: mp2, 22050 Hz, 1 channels, s16, 226391 kb/s
       Stream #0.3: Video: msmpeg4, yuv420p, 352x240, q=2-31, 226391 kb/s, 1000k tbn, 15  tbc
       Could not find input stream matching output stream #0.2
       *** glibc detected *** ffmpeg: free(): invalid pointer: 0x0000000000a53d00 ***
       ======= Backtrace: =========
       /lib/x86_64-linux-gnu/libc.so.6(+0x7e626)[0x7f6e0d4af626]
       /usr/lib/x86_64-linux-gnu/libavformat.so.53(avformat_free_context+0xb0)    [0x7f6e0f2c9c70]
       ffmpeg[0x4089bf]
       ffmpeg[0x40a52f]
       ffmpeg[0x407a04]
       /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f6e0d45276d]

    The config file I used for ffserver is

       Port 8090
       RTSPPort 7654
       BindAddress 0.0.0.0
       MaxHTTPConnections 2000
       MaxClients 1000
       MaxBandwidth 500000
       CustomLog -
       NoDaemon
       <feed>
               File /tmp/feed1.ffm
               FileMaxSize 5M
       #       NoAudio
       </feed>

       <stream>
               Feed feed1.ffm
               Format rtp
       #       NoAudio
               VideoFrameRate 30
       </stream>

    Can anyone help me figure out what is wrong, or point me at something to look for?
    I tried googling, but almost everyone uses live video feeds instead, or it just works for them.
    Any clue or direction/suggestion would help. Thank you so much in advance.
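
    One thing suggested by the log (an assumption, not a confirmed fix): "Could not find input stream matching output stream #0.2" refers to the mp2 audio stream that the ffserver configuration asks for, while test5.mp4 above only declares a video stream. Uncommenting NoAudio in the stream definition, roughly as in the sketch below (the stream name test1.rtp is made up here), tells ffserver not to require an audio input; alternatively, feed it an input that actually contains audio.

       <Stream test1.rtp>
               Feed feed1.ffm
               Format rtp
               NoAudio
               VideoFrameRate 30
       </Stream>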