Other articles (50)

  • Customising by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all types.
    It creates "medias": a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image, or text; only one document can be linked to a given "media" article;

On other sites (6439)

  • ffmpeg Could not find input stream [closed]

    17 April 2013, by Peter Walker

    I've seen lots of posts on streaming video feeds to ffserver, but I still haven't found what I'm looking for.

    This happens not only with the mp4 file, but with avi and other formats as well:

       ffmpeg -i test5.mp4 http://localhost:8090/feed1.ffm

    I keep getting this error:

       Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test5.mp4':
       Metadata:
              major_brand     : isom
              minor_version   : 512
              compatible_brands: isomiso2mp41
              encoder         : Lavf53.21.1
       Duration: 00:05:00.00, start: 0.000000, bitrate: 278 kb/s
       Stream #0.0(und): Video: mpeg4 (Simple Profile), yuv420p, 640x320 [PAR 1:1 DAR 2:1], 277 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
       Incompatible sample format '(null)' for codec 'mp2', auto-selecting format 's16'
       Last message repeated 1 times
       Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
       Stream #0.0: Video: [0][0][0][0] / 0x0000, yuv420p, q=0-0, 1000k tbn
       Stream #0.1: Video: mpeg1video, (null), 32622x226393720, q=2-31, pass 1, pass 2, 226393 kb/s, 1000k tbn
       Stream #0.2: Audio: mp2, 22050 Hz, 1 channels, s16, 226391 kb/s
       Stream #0.3: Video: msmpeg4, yuv420p, 352x240, q=2-31, 226391 kb/s, 1000k tbn, 15 tbc
       Could not find input stream matching output stream #0.2
       *** glibc detected *** ffmpeg: free(): invalid pointer: 0x0000000000a53d00 ***
       ======= Backtrace: =========
       /lib/x86_64-linux-gnu/libc.so.6(+0x7e626)[0x7f6e0d4af626]
       /usr/lib/x86_64-linux-gnu/libavformat.so.53(avformat_free_context+0xb0)[0x7f6e0f2c9c70]
       ffmpeg[0x4089bf]
       ffmpeg[0x40a52f]
       ffmpeg[0x407a04]
       /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f6e0d45276d]

    The config file I used for ffserver is:

       Port 8090
       RTSPPort 7654
       BindAddress 0.0.0.0
       MaxHTTPConnections 2000
       MaxClients 1000
       MaxBandwidth 500000
       CustomLog -
       NoDaemon
       <Feed feed1.ffm>
               File /tmp/feed1.ffm
               FileMaxSize 5M
       #       NoAudio
       </Feed>

       <Stream>
               Feed feed1.ffm
               Format rtp
       #       NoAudio
               VideoFrameRate 30
       </Stream>

    Can anyone help me figure out what is wrong, or what to look for?
    I tried googling, but almost everyone uses live video feeds instead, or it simply works for them.
    Any clue, direction, or suggestion would help. Thank you so much in advance.
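
    Reading the log, one plausible diagnosis (an observation, not an answer from the thread): the ffm feed advertises an audio stream (#0.2, mp2), while test5.mp4 contains only a video stream, which is exactly what "Could not find input stream matching output stream #0.2" complains about. A sketch of two possible workarounds, assuming that diagnosis is right:

       # Option 1: declare the feed audio-free by uncommenting NoAudio in the
       # <Feed> section of ffserver.conf, then restart ffserver.

       # Option 2: supply a silent audio track so the feed's audio stream can
       # be filled (anullsrc is a standard lavfi source):
       ffmpeg -i test5.mp4 -f lavfi -i anullsrc=r=22050:cl=mono -shortest \
              http://localhost:8090/feed1.ffm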

  • Encoding a screenshot into a video using FFMPEG

    2 July 2013, by mohM

    I'm trying to get the pixels from the screen and encode the screenshot into a video using ffmpeg. I've seen a couple of examples, but they either assume you already have the pixel data or use image file input. It seems like whether I use sws_scale() or not (which is included in the examples I've seen), or whether I'm typecasting an HBITMAP or RGBQUAD*, it tells me the image src data is bad and encodes a blank image rather than the screenshot. Is there something I'm missing here?

    AVCodec* codec;
    AVCodecContext* c = NULL;
    AVFrame* inpic;
    uint8_t* outbuf, *picture_buf;
    int i, out_size, size, outbuf_size;
    HBITMAP hBmp;
    //int x,y;

    avcodec_register_all();

    printf("Video encoding\n");

    // Find the mpeg1 video encoder
    codec = avcodec_find_encoder(CODEC_ID_H264);
    if (!codec) {
       fprintf(stderr, "Codec not found\n");
       exit(1);
    }
    else printf("H264 codec found\n");

    c = avcodec_alloc_context3(codec);
    inpic = avcodec_alloc_frame();

    c->bit_rate = 400000;
    c->width = screenWidth;                                     // resolution must be a multiple of two
    c->height = screenHeight;
    c->time_base.num = 1;
    c->time_base.den = 25;
    c->gop_size = 10;                                           // emit one intra frame every ten frames
    c->max_b_frames=1;
    c->pix_fmt = PIX_FMT_YUV420P;
    c->codec_id = CODEC_ID_H264;
    //c->codec_type = AVMEDIA_TYPE_VIDEO;

    //av_opt_set(c->priv_data, "preset", "slow", 0);
    //printf("Setting presets to slow for performance\n");

    // Open the encoder
    if (avcodec_open2(c, codec, NULL) < 0) {
       fprintf(stderr, "Could not open codec\n");
       exit(1);
    }
    else printf("H264 codec opened\n");

    outbuf_size = 100000 + 12*c->width*c->height;           // alloc image and output buffer
    //outbuf_size = 100000;
    outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
    size = c->width * c->height;
    picture_buf = static_cast<uint8_t*>(malloc((size*3)/2));
    printf("Setting buffer size to: %d\n",outbuf_size);

    FILE* f = fopen("example.mpg","wb");
    if(!f) printf("x  -  Cannot open video file for writing\n");
    else printf("Opened video file for writing\n");

    /*inpic->data[0] = picture_buf;
    inpic->data[1] = inpic->data[0] + size;
    inpic->data[2] = inpic->data[1] + size / 4;
    inpic->linesize[0] = c->width;
    inpic->linesize[1] = c->width / 2;
    inpic->linesize[2] = c->width / 2;*/


    //int x,y;
    // encode 1 second of video
    for(i=0;i<c->time_base.den;i++) {
       fflush(stdout);


       HWND hDesktopWnd = GetDesktopWindow();
       HDC hDesktopDC = GetDC(hDesktopWnd);
       HDC hCaptureDC = CreateCompatibleDC(hDesktopDC);
       hBmp = CreateCompatibleBitmap(GetDC(0), screenWidth, screenHeight);
       SelectObject(hCaptureDC, hBmp);
       BitBlt(hCaptureDC, 0, 0, screenWidth, screenHeight, hDesktopDC, 0, 0, SRCCOPY|CAPTUREBLT);
       BITMAPINFO bmi = {0};
       bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
       bmi.bmiHeader.biWidth = screenWidth;
       bmi.bmiHeader.biHeight = screenHeight;
       bmi.bmiHeader.biPlanes = 1;
       bmi.bmiHeader.biBitCount = 32;
       bmi.bmiHeader.biCompression = BI_RGB;
       RGBQUAD *pPixels = new RGBQUAD[screenWidth*screenHeight];
       GetDIBits(hCaptureDC,hBmp,0,screenHeight,pPixels,&bmi,DIB_RGB_COLORS);

       inpic->pts = (float)i * (1000.0/(float)(c->time_base.den)) * 90;
       avpicture_fill((AVPicture*)inpic, (uint8_t*)pPixels, PIX_FMT_BGR32, c->width, c->height);                   // Fill picture with image
       av_image_alloc(inpic->data, inpic->linesize, c->width, c->height, c->pix_fmt, 1);
       //printf("Allocated frame\n");
       //SaveBMPFile(L"screenshot.bmp",hBmp,hDc,screenWidth,screenHeight);
       ReleaseDC(hDesktopWnd,hDesktopDC);
       DeleteDC(hCaptureDC);
       DeleteObject(hBmp);

       // encode the image
       out_size = avcodec_encode_video(c, outbuf, outbuf_size, inpic);
       printf("Encoding frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f);
    }

    // get the delayed frames
    for(; out_size; i++) {
       fflush(stdout);

       out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
       printf("Writing frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f);
    }

    // add sequence end code to have a real mpeg file
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);
    free(picture_buf);
    free(outbuf);

    avcodec_close(c);
    av_free(c);
    av_free(inpic);
    printf("Closed codec and Freed\n");
  • ffmpeg concatenate images in one image

    26 July 2016, by drlexa

    I use this to get frames from a video and concatenate them into one image:

    ffmpeg -i output.mp4 -vf 'fps=2,tile=1000x1' out.jpg

    But there is a problem: I do not know how many frames will be fetched. Here I hardcoded the tile size 1000x1, but if there are more than 1000 frames, there will be an error. Before starting ffmpeg, I do not know the actual tile size.

    So I want to use a command like:

    ffmpeg -i output.mp4 -vf 'fps=2,tile=*x1' out.jpg

    That means: I want to concatenate ALL the fetched images into one row, but I cannot use * as an argument for tile.

    Is there some way to solve my problem?
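
    Not from the thread, but one way around the fixed tile size, assuming ffprobe is available (it ships with ffmpeg): compute the expected frame count from the duration, since fps=2 produces roughly two frames per second, and build the tile argument from it. Any unused cells at the end of the row are simply padded.

       # duration of the video in seconds
       d=$(ffprobe -v error -show_entries format=duration -of csv=p=0 output.mp4)
       # fps=2 yields roughly 2*duration frames; round up to be safe
       n=$(awk "BEGIN { print int($d * 2) + 1 }")
       ffmpeg -i output.mp4 -vf "fps=2,tile=${n}x1" out.jpg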