Advanced search

Media (1)

Keyword: - Tags -/intégration

Other articles (82)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, OGV and WebM (supported by HTML5), with MP4 also supported by Flash.
    Audio files are encoded as MP3 and Ogg (supported by HTML5), with MP3 also supported by Flash.
    Where possible, text documents are analysed to extract the data needed for search-engine indexing, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Contributing to its documentation

    10 April 2011

    Documentation is one of the most important and most demanding tasks when building a technical tool.
    Any outside contribution in this area is essential: critiquing what already exists; helping write articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creating explanatory screencasts; translating the documentation into a new language;
    To do so, you can register on (...)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (10602)

  • Stack AVFrame side by side (libav/ffmpeg)

    22 February 2018, by dronemastersaga

    So I am trying to combine two 1920x1080 H264 livestreams side by side into a single 3840x1080 livestream.

    For this, I can decode the streams to AVFrames in libav/FFmpeg, which I would like to combine into a bigger frame. The input AVFrames: two 1920x1080 frames in NV12 format (planar YUV 4:2:0, 12 bpp, one plane for Y and one plane for the UV components, which are interleaved, first byte U and the next byte V).

    The approach I have figured out is: colorspace conversion (YUV to BGR) via sws_scale, wrapping the result in an OpenCV Mat, using hconcat in OpenCV to stack the two frames side by side, then colorspace conversion (BGR to YUV) via sws_scale again.

    Below is the method currently being used:

    //Prior code is too long: Basically it decodes 2 streams to AVFrames frame1 and frame2 in a loop
    sws_scale(swsContext, (const uint8_t *const *) frame1->data, frame1->linesize, 0, 1080, (uint8_t *const *) frameBGR1->data, frameBGR1->linesize);
    sws_scale(swsContext, (const uint8_t *const *) frame2->data, frame2->linesize, 0, 1080, (uint8_t *const *) frameBGR2->data, frameBGR2->linesize);
    Mat matFrame1(1080, 1920, CV_8UC3, frameBGR1->data[0], (size_t) frameBGR1->linesize[0]);
    Mat matFrame2(1080, 1920, CV_8UC3, frameBGR2->data[0], (size_t) frameBGR2->linesize[0]);
    Mat fullFrame;
    hconcat(matFrame1, matFrame2, fullFrame);
    const int stride[] = { static_cast<int>(fullFrame.step[0]) };
    sws_scale(modifyContext, (const uint8_t * const *)&fullFrame.data, stride, 0, fullFrame.rows, newFrame->data, newFrame->linesize);
    //From here, newFrame is sent to the encoder

    The resulting image is satisfactory, but it loses some quality in the colorspace conversions. More importantly, this method is too slow to use (I get 15 fps and I need 30). Is there a way to stack AVFrames directly, without colorspace conversion? Or is there a better way to do this? I have searched a lot and could not find a solution. Please advise.
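
    Since both inputs are already NV12, one way to sidestep the two sws_scale passes is to copy the planes of the decoded frames directly into a wider NV12 frame. The following is a minimal, untested sketch of that idea, not the poster's code; it assumes f1 and f2 are valid decoded NV12 AVFrames of equal height.

    // Sketch: place two decoded NV12 frames side by side in one wider NV12 frame
    // by copying rows, with no colorspace conversion.
    #include <string.h>
    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>

    static AVFrame *stack_nv12_side_by_side(const AVFrame *f1, const AVFrame *f2)
    {
        AVFrame *out = av_frame_alloc();
        if (!out)
            return NULL;
        out->format = AV_PIX_FMT_NV12;
        out->width  = f1->width + f2->width;   // e.g. 1920 + 1920 = 3840
        out->height = f1->height;              // e.g. 1080
        if (av_frame_get_buffer(out, 0) < 0) {
            av_frame_free(&out);
            return NULL;
        }
        // Y plane: full-height rows, left half from f1, right half from f2.
        for (int y = 0; y < out->height; y++) {
            memcpy(out->data[0] + y * out->linesize[0],
                   f1->data[0] + y * f1->linesize[0], f1->width);
            memcpy(out->data[0] + y * out->linesize[0] + f1->width,
                   f2->data[0] + y * f2->linesize[0], f2->width);
        }
        // Interleaved UV plane: half the height, 'width' bytes per row.
        for (int y = 0; y < out->height / 2; y++) {
            memcpy(out->data[1] + y * out->linesize[1],
                   f1->data[1] + y * f1->linesize[1], f1->width);
            memcpy(out->data[1] + y * out->linesize[1] + f1->width,
                   f2->data[1] + y * f2->linesize[1], f2->width);
        }
        return out;
    }

    FFmpeg's hstack filter performs the same side-by-side composition inside libavfilter and may also be worth benchmarking.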

  • FFMPEG, resize and pad a video by an odd number of pixels?

    18 February 2018, by Jules

    I’m trying to resize and pad a video from 1917 x 1080 to 1920 x 1080.

    I’ve tried various syntaxes, which run but don’t change the size.

    ffmpeg -i input.mp4 -filter:v scale=1920:1080,pad=1920:1080 -c:a copy output.mp4

    However, I got to this point by resizing, joining audio and rotating. The initial size is 640 x 1136, and I believe this is the source of the problem.

    ffmpeg -i input.mp4 -filter:v scale=900:1200 -c:a copy output.mp4

    ffmpeg \
    -i input.m4a \
    -i resize.mp4 -acodec copy -vcodec copy -shortest \
    output.mp4

    ffmpeg -i input.mp4 -vf "transpose=2" output.mp4

    So I’m wondering if I should do something different earlier.
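
    For reference, padding 1917x1080 out to 1920x1080 does not need a scale step at all; a single pad filter can centre the picture in the larger canvas. The command below is only a sketch (the input and output names are placeholders, not taken from the question):

    ffmpeg -i input.mp4 -vf "pad=1920:1080:(ow-iw)/2:(oh-ih)/2:black" -c:a copy output.mp4

    The pad width and height must be at least the input dimensions, and x/y expressions such as (ow-iw)/2 centre the original pixels in the new canvas.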

  • Intel IPP RGBToYUV420 function returns an IppStsSizeErr result code

    6 February 2018, by yesilcimen.ahmet

    I am using IPP 2017.0.3 (r55431) and Delphi 10.2. I am trying to convert RGB to YUV420P, but I am getting an IppStsSizeErr result code.

    I have m_dst_picture and m_src_picture, AVPicture structures created with FFmpeg.

    { allocate the encoded raw picture }

    ret := avpicture_alloc(@m_dst_picture, AV_PIX_FMT_YUV420P, c^.width, c^.height);

    if (ret < 0) then
       Exit(False);

    { allocate BGR frame that we can pass to the YUV frame }
    ret := avpicture_alloc(@m_src_picture, AV_PIX_FMT_BGR24, c^.width, c^.height);
    if (ret < 0) then
      Exit(False);
    //It works fine.
    { convert the BGR frame (m_src_picture) to the YUV frame (m_dst_picture) }
    sws_scale(sws_ctx, @m_src_picture.data[0], @m_src_picture.linesize, 0, c^.height, @m_dst_picture.data[0], @m_dst_picture.linesize);

    I want to convert the RGB buffer directly to YUV420P. The original code first loads the RGB data into the AVPicture and then converts RGB to YUV420P with sws_scale, and this causes slowness.

    Here I copy the BGR buffer into FFmpeg’s m_src_picture, but this costs performance, so I want to convert it directly to YUV420P using Intel IPP.

    procedure WriteFrameBGR24(frame: PByte);
    var
     y: Integer;
    begin
     { copy each row of the bottom-up BGR bitmap into the top-down AVPicture }
     for y := 0 to m_c^.height - 1 do
       Move(PByte(frame - (y * dstStep))^, PByte(m_src_picture.data[0] + (y * m_src_picture.linesize[0]))^, dstStep);
    end;

    In the code below I am trying to convert using Intel IPP.

    { Converting RGB to YUV420P. }

    ** roiSize is 1920 x 1080.

    ** The linesize values created by FFmpeg for YUV420P in m_dst_picture.linesize are [0]=1920, [1]=960 and [2]=960 respectively.

    Do I need to convert the linesize values to something else?

    ** The srcStep parameter is passed with a minus sign because the bitmap is bottom-up: the frame pointer holds the Bmp.ScanLine[0] address, which is the highest row address.

    srcStep := (((width * (3 * 8)) + 31) and not 31) div 8; // row stride of a 24-bit bitmap, padded to a 32-bit boundary

    { Swap of BGR channels to RGB. }
    //It works fine    
    st := ippiSwapChannels_8u_C3IR(frame, -srcStep, roiSize, @BGRToRGBArray[0]);

    { Convert RGB to YUV420P. }
    //IppStsSizeErr  
    st := ippiRGBToYUV420_8u_C3P3R(frame, -srcStep, @m_dst_picture.data[0], @m_dst_picture.linesize[0], roiSize);

    How do I solve this problem?

    Thank you.
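
    As an aside, the row-by-row copy that motivated the IPP attempt can also be removed while staying with sws_scale: point the source pointer at the bottom-up buffer and pass a negative stride, so libswscale walks the image top-to-bottom itself. The C sketch below is untested, and all names (sws_ctx, frame, srcStep, height, dst_data, dst_linesize) are assumptions taken from the question rather than working code from it:

    // Sketch: feed a bottom-up BGR24 buffer straight to sws_scale for YUV420P
    // output, with no intermediate copy. 'frame' points at ScanLine[0], the
    // highest address, so a negative stride yields top-to-bottom traversal.
    #include <stdint.h>
    #include <libswscale/swscale.h>

    static void bgr_bottom_up_to_yuv420p(struct SwsContext *sws_ctx,
                                         const uint8_t *frame, int srcStep,
                                         int height,
                                         uint8_t *const dst_data[],
                                         const int dst_linesize[])
    {
        const uint8_t *const src_data[1]   = { frame };
        const int            src_stride[1] = { -srcStep };
        sws_scale(sws_ctx, src_data, src_stride, 0, height, dst_data, dst_linesize);
    }

    This does not explain the IppStsSizeErr itself; it only removes the copy step that prompted the IPP attempt.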