Other articles (57)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information: the browser you are using, including its exact version; as precise a description of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
    If you think you have fixed the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Permissions overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (10648)

  • Error using FFMPEG to convert each input image into H264 compiling in Visual Studio running in MevisLab

    21 February 2014, by user3012914

    I am creating an ML module in the MevisLab framework and using FFmpeg to convert each image I receive into an H264 video, saving it once all the frames have been collected. Unfortunately, I have a problem allocating the output buffer size: the application crashes when I include that code, and if I leave it out, the output file is just 4 KB with nothing stored in it.

    I am also not sure whether this is the correct way of getting the HBitmap into the encoder, so suggestions would be welcome.

    My code:

    BITMAPINFO bitmapInfo;
    HDC        hdc;

    ZeroMemory(&bitmapInfo, sizeof(bitmapInfo));

    BITMAPINFOHEADER &bitmapInfoHeader = bitmapInfo.bmiHeader;
    bitmapInfoHeader.biSize          = sizeof(bitmapInfoHeader);
    bitmapInfoHeader.biWidth         = _imgWidth;
    bitmapInfoHeader.biHeight        = _imgHeight;
    bitmapInfoHeader.biPlanes        = 1;
    bitmapInfoHeader.biBitCount      = 24;
    bitmapInfoHeader.biCompression   = BI_RGB;
    // row stride rounded up to a DWORD (4-byte) boundary, times the number of rows
    bitmapInfoHeader.biSizeImage     = ((bitmapInfoHeader.biWidth * bitmapInfoHeader.biBitCount / 8 + 3) & 0xFFFFFFFC) * bitmapInfoHeader.biHeight;
    bitmapInfoHeader.biXPelsPerMeter = 10000;
    bitmapInfoHeader.biYPelsPerMeter = 10000;
    bitmapInfoHeader.biClrUsed       = 0;
    bitmapInfoHeader.biClrImportant  = 0;
    //RGBQUAD* Ref = new RGBQUAD[_imgWidth,_imgHeight];
    HDC hdcscreen = GetDC(0);

    hdc = CreateCompatibleDC(hdcscreen);
    ReleaseDC(0, hdcscreen);

    _hbitmap = CreateDIBSection(hdc, (BITMAPINFO*) &bitmapInfoHeader, DIB_RGB_COLORS, &_bits, NULL, NULL);

    I use the above code to get the bitmap. Then I set up the codec context as follows:

    c->bit_rate = 400000;
    // resolution must be a multiple of two
    c->width  = 1920;
    c->height = 1080;
    // frames per second
    frame_rate = _framesPerSecondFld->getIntValue();
    //AVRational rational = {1,10};
    //c->time_base = (AVRational){1,25};
    c->gop_size = 10;                 // emit one intra frame every ten frames
    c->max_b_frames = 1;
    c->keyint_min = 1;                // minimum GOP size
    c->time_base.num = 1;             // framerate numerator
    c->time_base.den = _framesPerSecondFld->getIntValue();
    c->i_quant_factor = (float)0.71;  // qscale factor between P and I frames
    c->pix_fmt = AV_PIX_FMT_YUV420P;  // libx264 expects a YUV format; the frames are converted to YUV420P below
    std::string msg;
    msg.append("Context is stored");
    _messageFld->setStringValue(msg.c_str());

    I create the bitmap image from the input as follows:

    PagedImage *inImg = getUpdatedInputImage(0);
    ML_CHECK(inImg);
    ImageVector imgExt = inImg->getImageExtent();
    if ((imgExt.x == _imgWidth) && (imgExt.y == _imgHeight))
    {
        if (((imgExt.x % 4) == 0) && ((imgExt.y % 4) == 0))
        {
            // read out the input image and write it into the video
            // get the input image as an array
            void* imgData = NULL;
            SubImageBox imageBox(imgExt); // get the whole image
            getTile(inImg, imageBox, MLuint8Type, &imgData);
            iData = (MLuint8*)imgData;
            int r = 0; int g = 0; int b = 0;
            // since we only have images with a z-extent of 1,
            // we can compute the channel stride as follows
            int cStride = _imgWidth * _imgHeight;
            size_t offset = 0;  // must be wide enough to hold _imgWidth * y (uint8_t would overflow)
            // pointer into the bitmap that is used to write images into the avi
            UCHAR* dst = (UCHAR*)_bits;
            for (int y = _imgHeight - 1; y >= 0; y--)
            {   // scan the image in reverse: DIB rows are stored bottom-up
                offset = _imgWidth * y;
                for (int x = 0; x < _imgWidth; x++)
                {
                    if (_isGreyValueImage)
                    {
                        r = iData[offset + x];
                        *dst++ = (UCHAR)r;
                        *dst++ = (UCHAR)r;
                        *dst++ = (UCHAR)r;
                    }
                    else
                    {
                        b = iData[offset + x]; // windows bitmaps need reverse order: bgr instead of rgb
                        g = iData[offset + x + cStride];
                        r = iData[offset + x + cStride + cStride];

                        *dst++ = (UCHAR)r;
                        *dst++ = (UCHAR)g;
                        *dst++ = (UCHAR)b;
                    }
                    // the alpha channel in the input image is ignored
                }
            }

    Then I hand it to the encoder and write it out as H264:

    in_width   = c->width;
    in_height  = c->height;
    out_width  = c->width;
    out_height = c->height;
    ibytes = avpicture_get_size(PIX_FMT_BGR32, in_width, in_height);
    obytes = avpicture_get_size(PIX_FMT_YUV420P, out_width, out_height);
    outbuf_size = 100000 + c->width*c->height*(32>>3);   // allocate output buffer
    outbuf = static_cast<uint8_t*>(malloc(outbuf_size));

    if (!obytes)
    {
        std::string msg;
        msg.append("Bytes cannot be allocated");
        _messageFld->setStringValue(msg.c_str());
    }
    else
    {
        std::string msg;
        msg.append("Bytes allocation done");
        _messageFld->setStringValue(msg.c_str());
    }
    // create buffers for the input and output images
    inbuffer  = (uint8_t*)av_malloc(ibytes);
    outbuffer = (uint8_t*)av_malloc(obytes);
    inbuffer  = (uint8_t*)dst;   // note: this discards the buffer just allocated above

    // create ffmpeg frame structures. These do not allocate space for image data,
    // just the pointers and other information about the image.
    AVFrame* inpic  = avcodec_alloc_frame();
    AVFrame* outpic = avcodec_alloc_frame();

    // this sets the pointers in the frame structures to the right points in
    // the input and output buffers.
    avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, in_width, in_height);
    avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, out_width, out_height);
    av_image_alloc(outpic->data, outpic->linesize, c->width, c->height, c->pix_fmt, 1);
    inpic->data[0] += inpic->linesize[0]*(_imgHeight-1);   // flipping frame
    inpic->linesize[0] = -inpic->linesize[0];

    if (!inpic)
    {
        std::string msg;
        msg.append("Image is empty");
        _messageFld->setStringValue(msg.c_str());
    }
    else
    {
        std::string msg;
        msg.append("Picture has allocations");
        _messageFld->setStringValue(msg.c_str());
    }

    // create the conversion context
    fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32, out_width, out_height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
    // perform the conversion
    sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height, outpic->data, outpic->linesize);
    //out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
    if (!out_size)
    {
        std::string msg;
        msg.append("Outsize is not valid");
        _messageFld->setStringValue(msg.c_str());
    }
    else
    {
        std::string msg;
        msg.append("Outsize is valid");
        _messageFld->setStringValue(msg.c_str());
    }
    size_t written = fwrite(outbuf, 1, out_size, f);
    if (written != (size_t)out_size)   // check the write result, not the fwrite function pointer
    {
        std::string msg;
        msg.append("Frames couldn't be written");
        _messageFld->setStringValue(msg.c_str());
    }
    else
    {
        std::string msg;
        msg.append("Frames written to the file");
        _messageFld->setStringValue(msg.c_str());
    }
    // for (;out_size; i++)
    // {
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);   // encode the delayed frames
        std::string msg;
        msg.append("Writing Frames");
        _messageFld->setStringValue(msg.c_str());
        _numFramesFld->setIntValue(_numFramesFld->getIntValue()+1);
        fwrite(outbuf, 1, out_size, f);
    // }
    // add the MPEG sequence end code to get a well-formed file
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
}

    Then I close and clean up the image buffer and file:

    ML_TRACE_IN("MovieCreator::_endRecording()")
    if (_numFramesFld->getIntValue() == 0)
    {
        _messageFld->setStringValue("Empty movie, nothing saved.");
    }
    else
    {
        _messageFld->setStringValue("Movie written to disk.");
        _numFramesFld->setIntValue(0);
        if (_hbitmap)
        {
            DeleteObject(_hbitmap);
        }
        if (c != NULL)
        {
            av_free(outbuffer);
            av_free(inpic);
            av_free(outpic);
            fclose(f);
            avcodec_close(c);   // freeing memory
            free(outbuf);
            av_free(c);
        }
    }

    }

    I think the main problem is here:

    //out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
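
    For reference, a minimal sketch of that encode step under the legacy API used above (avcodec_encode_video, FFmpeg of that era), assuming `c` is an opened libx264 AVCodecContext and `outpic` holds the converted YUV420P frame; encodeFrame is a hypothetical helper, not part of the original module:

    extern "C" {
    #include <libavcodec/avcodec.h>   // avcodec_encode_video, avpicture_get_size
    #include <libavutil/mem.h>        // av_malloc, av_free
    }
    #include <cstdio>

    // Encode one frame into f; call once per frame, then with outpic == NULL
    // at end of stream to drain the encoder's delayed (B-)frames.
    static int encodeFrame(AVCodecContext* c, AVFrame* outpic, FILE* f,
                           uint8_t* outbuf, int outbuf_size)
    {
        int out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
        if (out_size > 0)
            fwrite(outbuf, 1, out_size, f);   // write the compressed bytes
        return out_size;                      // <0 on error, 0 when drained
    }

    // usage: a buffer bounded by the raw frame size plus headroom is large enough
    // int      outbuf_size = avpicture_get_size(c->pix_fmt, c->width, c->height) + 100000;
    // uint8_t* outbuf      = static_cast<uint8_t*>(av_malloc(outbuf_size));
    // encodeFrame(c, outpic, f, outbuf, outbuf_size);            // per frame
    // while (encodeFrame(c, NULL, f, outbuf, outbuf_size) > 0)   // flush at the end
    //     ;
    // av_free(outbuf);
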
  • ffmpeg socks4 proxy parameter with rtmp

    27 October 2014, by user3337066

    I am unable to capture some livestreams because of proxy issues. In rtmpdump I can use:

    rtmpdump -v -r rtmp://a_rtmp_address -p http://a_http_address -S 85.185.244.101:1080 -B 10 -o aaa.flv

    But I need to use ffmpeg or avconv, and I cannot find a parameter that corresponds to the -S 85.185.244.101:1080 option.

    Can anyone please give me an ffmpeg command corresponding to this rtmpdump command?
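
    A possible lead, assuming an ffmpeg built with --enable-librtmp rather than the native RTMP handler: librtmp accepts the same socks=host:port option that rtmpdump's -S flag sets (and pageUrl= for -p), appended inside the quoted URL. A sketch mirroring the rtmpdump command above, with -t 10 standing in for -B 10:

    ffmpeg -i "rtmp://a_rtmp_address pageUrl=http://a_http_address socks=85.185.244.101:1080" -t 10 -c copy aaa.flv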

  • 2 source videos in FFMPEG... how to map audio?

    21 June 2012, by dcoffey3296

    I am currently placing 2 videos side by side with FFmpeg. Here is the command:

    ffmpeg -i input.mov -vf "[in] scale=1280:720, pad=2*1280:720 [left]; movie=right.mov, scale=1280:720 [right]; [left][right]  overlay=1280:0 [out]" -b:v 1000k -vcodec libx264 -an sidebyside.mp4

    I now need to manage the audio. I keep trying to specify:

    -acodec libfaac -ac 2 -map 0:1 -map 0:2

    to take the 2 audio channels from the first input and use them. I keep getting the error:

    [aformat @ 0x7febf2e01fc0] auto-inserting filter 'auto-inserted resampler 0' between the filter 'src' and the filter 'aformat'
    [aresample @ 0x7febf2e02180] [SWR @ 0x7febf40dd000] Input channel layout isnt supported
    Error opening filters!

    I'm looking for the best way to specify which video provides the audio! Thanks for any advice!
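
    One approach worth trying, assuming this build's -filter_complex and amerge filter are available: merge the two mono streams from the first input (0:1 is FL, 0:2 is FR) into a single stereo stream and map it explicitly alongside the composited video. A sketch, using the left.mov/right.mov names from the log below:

    ffmpeg -i left.mov -i right.mov \
      -filter_complex "[0:v] scale=1280:720, pad=2*1280:720 [left]; [1:v] scale=1280:720 [right]; [left][right] overlay=1280:0 [v]; [0:1][0:2] amerge [a]" \
      -map "[v]" -map "[a]" -b:v 1000k -vcodec libx264 -acodec libfaac -ac 2 sidebyside.mp4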

    Here's the complete output:

    ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
     built on Jun  9 2012 21:40:17 with clang 3.0 (tags/Apple/clang-211.10.1)
     configuration: --prefix= --enable-gpl --enable-version3 --enable-nonfree --enable-libx264 --enable-libxvid --enable-postproc --enable-swscale --enable-avfilter --enable-pthreads --enable-yasm --enable-libfaac --enable-libmp3lame --cc=clang --enable-libvorbis
     libavutil      51. 54.100 / 51. 54.100
     libavcodec     54. 23.100 / 54. 23.100
     libavformat    54.  6.100 / 54.  6.100
     libavdevice    54.  0.100 / 54.  0.100
     libavfilter     2. 77.100 /  2. 77.100
     libswscale      2.  1.100 /  2.  1.100
     libswresample   0. 15.100 /  0. 15.100
     libpostproc    52.  0.100 / 52.  0.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'left.mov':
     Metadata:
       major_brand     : qt  
       minor_version   : 537199360
       compatible_brands: qt  
       creation_time   : 2012-06-19 21:13:20
     Duration: 00:02:28.81, start: 0.000000, bitrate: 36378 kb/s
       Stream #0:0(eng): Video: mpeg2video (Main) (xdvf / 0x66766478), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 35000 kb/s, 29.97 fps, 29.97 tbr, 2997 tbn, 59.94 tbc
       Metadata:
         creation_time   : 2012-06-19 21:13:20
         handler_name    : Apple Alias Data Handler
       Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, 1 channels (FL), s16, 768 kb/s
       Metadata:
         creation_time   : 2012-06-19 21:13:20
         handler_name    : Apple Alias Data Handler
       Stream #0:2(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, 1 channels (FR), s16, 768 kb/s
       Metadata:
         creation_time   : 2012-06-19 21:13:20
         handler_name    : Apple Alias Data Handler
       Stream #0:3(eng): Data: none (tmcd / 0x64636D74)
       Metadata:
         creation_time   : 2012-06-19 21:13:20
         handler_name    : Apple Alias Data Handler
         timecode        : 02:20:28;08
    File 'output.mp4' already exists. Overwrite ? [y/N] y
    w:1920 h:1080 pixfmt:yuv420p tb:1/2997 sar:1/1 sws_param:flags=2
    [buffersink @ 0x7febf2c18c80] No opaque field provided
    [movie @ 0x7febf2c191c0] seek_point:0 format_name:(null) file_name:/Users/danielpcoffey/Desktop/tommy.mov stream_index:0
    [scale @ 0x7febf2c19320] w:1920 h:1080 fmt:yuv420p sar:1/1 -> w:1280 h:720 fmt:yuv420p sar:1/1 flags:0x4
    [pad @ 0x7febf2c19800] w:1280 h:720 -> w:2560 h:720 x:0 y:0 color:0x000000FF
    [scale @ 0x7febf2c1ca40] w:1920 h:1080 fmt:yuv420p sar:1/1 -> w:1280 h:720 fmt:yuva420p sar:1/1 flags:0x4
    [overlay @ 0x7febf2c1ce20] main w:2560 h:720 fmt:yuv420p overlay x:1280 y:0 w:1280 h:720 fmt:yuva420p
    [overlay @ 0x7febf2c1ce20] main_tb:1/2997 overlay_tb:1/2997 -> tb:1/2997 exact:1
    [aformat @ 0x7febf2e01fc0] auto-inserting filter 'auto-inserted resampler 0' between the filter 'src' and the filter 'aformat'
    [aresample @ 0x7febf2e02180] [SWR @ 0x7febf40dd000] Input channel layout isnt supported
    Error opening filters!