
Other articles (16)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to web-friendly formats.
    Video files are encoded as MP4, OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded as MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text documents are analyzed to retrieve the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
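
    For illustration only (this is not MediaSPIP's actual implementation; the input file name and codec choices below are placeholders), the conversions described above can be approximated by driving ffmpeg from a script:

    import subprocess

    # Rough sketch of producing the HTML5 trio from one source file.
    # "input.avi" and the codec flags are assumptions, not MediaSPIP's settings.
    for args in (
        ["ffmpeg", "-i", "input.avi", "-c:v", "libx264", "output.mp4"],
        ["ffmpeg", "-i", "input.avi", "-c:v", "libtheora", "-c:a", "libvorbis", "output.ogv"],
        ["ffmpeg", "-i", "input.avi", "-c:v", "libvpx", "-c:a", "libvorbis", "output.webm"],
    ):
        subprocess.check_call(args)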

  • What is an editorial?

    21 June 2013, by

    Write your point of view in an article. It will be filed in a section set aside for this purpose.
    An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page. To read previous ones, see the dedicated section.
    You can customize the editorial creation form.
    Editorial creation form: In the case of a document of the editorial type, the (...)

  • Keeping control of your media in your hands

    13 avril 2011, par

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (2971)

  • How to convert H264 RTP stream from PCAP to a playable video file

    21 August 2014, by yoosha

    I have captured an H264 stream in PCAP files and am trying to create media files from the data. The container is not important (avi, mp4, mkv, …).
    When I use videosnarf or rtpbreak (combined with Python code that adds 00 00 00 01 before each packet) and then ffmpeg, the result is OK only if the input frame rate is constant (or nearly constant). However, when the input is VFR, the result plays too fast (and in some rare cases too slow).
    For example:

    videosnarf -i captured.pcap -c
    ffmpeg -i H264-media-1.264 output.avi
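
    As an aside, the start-code step mentioned above might look roughly like this (a minimal sketch, assuming each RTP payload carries one complete NAL unit; with packetization-mode=1, fragmented FU-A payloads would first have to be reassembled, which this does not do):

    START_CODE = b"\x00\x00\x00\x01"   # Annex-B NAL unit delimiter

    def write_annexb(nal_units, out_path):
        # nal_units: iterable of raw NAL payloads with RTP headers removed
        with open(out_path, "wb") as out:
            for nal in nal_units:
                out.write(START_CODE)
                out.write(nal)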

    After investigating the issue, I now believe that since videosnarf (and rtpbreak) remove the RTP header from the packets, the timestamps are lost and ffmpeg treats the input data as CBR.

    1. I would like to know whether there is a way to pass the timestamp vector (in a separate file?), or any other timing information, to ffmpeg so that the result is created correctly (see the sketch after the note below).
    2. Is there any other way to take the data out of the PCAP file and play it, or convert it and then play it?
    3. Since all the work is done in Python, suggestions for libraries/modules that can help (even if some coding is required) are welcome as well.

    Note: All the work is done offline, with no limitations on the output. It can be CBR/VBR, any playable container, and transcoding is fine. The only "limitation": it should all run on Linux…
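
    Given the note above (offline, any playable container), one possible direction is sketched below. It is only a guess at a workable pipeline, not a tested solution: the pcap capture clock is still available even after videosnarf strips the RTP headers, so per-frame timestamps can be dumped to a "timecode format v2" file with dpkt, and a muxer such as mkvmerge can then stamp the raw H264 frames with them (the option is called --timecodes in older mkvtoolnix releases and --timestamps in newer ones). The file names and the frame-detection heuristic are assumptions:

    import struct
    import dpkt

    first_ts = None
    last_rtp_ts = None
    with open("captured.pcap", "rb") as pcap_file:
        with open("timecodes_v2.txt", "w") as tc_file:
            tc_file.write("# timecode format v2\n")
            for wall_ts, buf in dpkt.pcap.Reader(pcap_file):
                rtp = buf[42:]              # assume Ethernet + IPv4 + UDP framing
                if len(rtp) < 12:           # too short to hold an RTP header
                    continue
                # bytes 4-8 of the RTP header are the 32-bit media timestamp;
                # assume a new video frame starts whenever it changes
                rtp_ts = struct.unpack(">I", rtp[4:8])[0]
                if rtp_ts != last_rtp_ts:
                    if first_ts is None:
                        first_ts = wall_ts
                    tc_file.write("%.3f\n" % ((wall_ts - first_ts) * 1000.0))
                    last_rtp_ts = rtp_ts

    # Then mux the raw Annex-B stream produced by videosnarf with those
    # timestamps, e.g.:
    #   mkvmerge -o output.mkv --timecodes 0:timecodes_v2.txt H264-media-1.264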

    Thanks
    Y

    Some additional information:
    Since nothing provides ffmpeg with the timestamp data, I decided to try a different approach: skip videosnarf and use Python code to pipe the packets directly to ffmpeg (using the "-f rtp -i -" options), but then it refuses to accept the input unless I provide an SDP file...
    How do I provide the SDP file? Is it an additional input file? ("-i config.sdp")

    The following code is an unsuccessful attempt at doing the above:

    import time  
    import sys  
    import shutil  
    import subprocess  
    import os  
    import dpkt  

    if len(sys.argv) < 2:  
       print "argument required!"  
       print "txpcap <pcap file="file">"  
       sys.exit(2)  
    pcap_full_path = sys.argv[1]  

    ffmp_cmd = ['ffmpeg','-loglevel','debug','-y','-i','109c.sdp','-f','rtp','-i','-','-na','-vcodec','copy','p.mp4']  

    ffmpeg_proc = subprocess.Popen(ffmp_cmd,stdout = subprocess.PIPE,stdin = subprocess.PIPE)  

    with open(pcap_full_path, "rb") as pcap_file:  
       pcapReader = dpkt.pcap.Reader(pcap_file)  
       for ts, data in pcapReader:  
           if len(data) < 49:                   # skip packets too small to carry RTP
               continue
           ffmpeg_proc.stdin.write(data[42:])   # strip Ethernet/IP/UDP headers, keep RTP

    sout, err = ffmpeg_proc.communicate()  
    print "stdout ---------------------------------------"  
    print sout  
    print "stderr ---------------------------------------"  
    print err  

    In general, this pipes the packets from the PCAP file to the following command:

    ffmpeg -loglevel debug -y -i 109c.sdp -f rtp -i - -na -vcodec copy p.mp4

    SDP file: [RTP includes dynamic payload type #109, H264]

    v=0
    o=- 0 0 IN IP4 ::1
    s=No Name
    c=IN IP4 ::1
    t=0 0
    a=tool:libavformat 53.32.100
    m=video 0 RTP/AVP 109
    a=rtpmap:109 H264/90000
    a=fmtp:109 packetization-mode=1;profile-level-id=64000c;sprop-parameter-sets=Z2QADKwkpAeCP6wEQAAAAwBAAAAFI8UKkg==,aMvMsiw=;
    b=AS:200

    Results:

    ffmpeg version 0.10.2 Copyright (c) 2000-2012 the FFmpeg developers
      built on Mar 20 2012 04:34:50 with gcc 4.4.6 20110731 (Red Hat 4.4.6-3)
      configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --enable-shared --enable-runtime-cpudetect --enable-gpl --enable-version3 --enable-postproc --enable-avfilter --enable-pthreads --enable-x11grab --enable-vdpau --disable-avisynth --enable-frei0r --enable-libopencv --enable-libdc1394 --enable-libdirac --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --disable-stripping
      libavutil      51. 35.100 / 51. 35.100
      libavcodec     53. 61.100 / 53. 61.100
      libavformat    53. 32.100 / 53. 32.100
      libavdevice    53.  4.100 / 53.  4.100
      libavfilter     2. 61.100 /  2. 61.100
      libswscale      2.  1.100 /  2.  1.100
      libswresample   0.  6.100 /  0.  6.100
      libpostproc    52.  0.100 / 52.  0.100
    [sdp @ 0x15c0c00] Format sdp probed with size=2048 and score=50
    [sdp @ 0x15c0c00] video codec set to: h264
    [NULL @ 0x15c7240] RTP Packetization Mode: 1
    [NULL @ 0x15c7240] RTP Profile IDC: 64 Profile IOP: 0 Level: c
    [NULL @ 0x15c7240] Extradata set to 0x15c78e0 (size: 36)!
    [h264 @ 0x15c7240] err{or,}_recognition separate: 1; 1
    [h264 @ 0x15c7240] err{or,}_recognition combined: 1; 10001
    [sdp @ 0x15c0c00] decoding for stream 0 failed
    [sdp @ 0x15c0c00] Could not find codec parameters (Video: h264)
    [sdp @ 0x15c0c00] Estimating duration from bitrate, this may be inaccurate
    109c.sdp: could not find codec parameters
    Traceback (most recent call last):
      File "./ffpipe.py", line 26, in <module>
        ffmpeg_proc.stdin.write(data[42:])
    IOError: [Errno 32] Broken pipe

    (Forgive the mess above; the editor kept complaining about code that was not indented properly.)

    I have been working on this issue for days... any help/suggestion/hint will be appreciated.

  • Merge commit '6c916192f3d7441f5896f6c0fe151874fcd91fe4'

    9 April 2017, by Clément Bœsch

    * commit '6c916192f3d7441f5896f6c0fe151874fcd91fe4':
    mimic: Convert to the new bitstream reader
    metasound: Convert to the new bitstream reader
    lagarith: Convert to the new bitstream reader
    indeo: Convert to the new bitstream reader
    imc: Convert to the new bitstream reader
    webp: Convert to the new bitstream reader

    This merge is a noop, see
    http://ffmpeg.org/pipermail/ffmpeg-devel/2017-April/209609.html

    Merged-by: Clément Bœsch <u@pkh.me>

  • C++ ffmpeg and SDL2 video rendering memory leak

    10 April 2017, by kj192

    I have made a small program that plays a video using SDL2.0 and FFmpeg.
    The software works and serves its purpose.
    I left the software running, noticed huge memory consumption, and started looking online for what I could do about it.
    I used the following tutorials:
    http://www.developersite.org/906-59411-FFMPEG
    http://ardrone-ailab-u-tokyo.blogspot.co.uk/2012/07/212-ardrone-20-video-decording-ffmpeg.html

    I wonder if someone can give advice on what I am doing wrong. I have tried valgrind, but I can't find any useful information. I did try commenting out sections, and what I have seen is that even if I'm not rendering to the display the memory usage grows, and after delete something is still not freed up:

    if (av_read_frame(pFormatCtx, &packet) >= 0)

    The whole source code is here:
    main:

    #include <unistd.h>   // sysconf(_SC_PAGE_SIZE)
    #include <ios>
    #include <iostream>
    #include <fstream>
    #include <cstdio>     // fprintf
    #include <SDL2/SDL.h>
    #include "video.h"
    using namespace std;

    void memory()
    {
    using std::ios_base;
    using std::ifstream;
    using std::string;

    double vm_usage     = 0.0;
    double resident_set = 0.0;

    // 'file' stat seems to give the most reliable results
    //
    ifstream stat_stream("/proc/self/stat",ios_base::in);

    // dummy vars for leading entries in stat that we don't care about
    //
    string pid, comm, state, ppid, pgrp, session, tty_nr;
    string tpgid, flags, minflt, cminflt, majflt, cmajflt;
    string utime, stime, cutime, cstime, priority, nice;
    string O, itrealvalue, starttime;

    // the two fields we want
    //
    unsigned long vsize;
    long rss;

    stat_stream >> pid >> comm >> state >> ppid >> pgrp >> session >> tty_nr
               >> tpgid >> flags >> minflt >> cminflt >> majflt >> cmajflt
               >> utime >> stime >> cutime >> cstime >> priority >> nice
               >> O >> itrealvalue >> starttime >> vsize >> rss; // don't care about the rest

    stat_stream.close();

    long page_size_kb = sysconf(_SC_PAGE_SIZE) / 1024; // in case x86-64 is configured to use 2MB pages
    vm_usage     = vsize / 1024.0;
    resident_set = rss * page_size_kb;
    std::cout << "VM: " << vm_usage << " RE:" << resident_set << std::endl;
    }


    int main()
    {
    //This example using 1280x800 video
    av_register_all();
    if( SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER ))
    {
       fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
       exit(1);
    }
    SDL_Window* sdlWindow = SDL_CreateWindow("Video Window", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 1280, 800, SDL_WINDOW_OPENGL);
    if( !sdlWindow )
    {
       fprintf(stderr, "SDL: could not set video mode - exiting\n");
       exit(1);
    }
    SDL_Renderer* sdlRenderer = SDL_CreateRenderer(sdlWindow, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_TARGETTEXTURE);
    SDL_Texture* sdlTexture = SDL_CreateTexture(sdlRenderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING, 1280, 800);
    if(!sdlTexture)
    {
       return -1;
    }
    SDL_SetTextureBlendMode(sdlTexture,SDL_BLENDMODE_BLEND );
    //VIDEO RESOLUTION
    SDL_Rect sdlRect;
    sdlRect.x = 0;
    sdlRect.y = 0;
    sdlRect.w = 1280;
    sdlRect.h = 800;    
    memory();
    for(int i = 1; i < 6; i++)
    {
       memory();  
       video* vid = new video("vid.mp4");  
       while (!vid -> getFinished())
       {
           memory();
           vid -> Update(sdlTexture);
        SDL_RenderCopy(sdlRenderer,sdlTexture,&sdlRect,&sdlRect);
           SDL_RenderPresent(sdlRenderer);
       }
       delete vid;
       memory();
    }  
    SDL_DestroyTexture(sdlTexture);
    SDL_DestroyRenderer(sdlRenderer);
    SDL_DestroyWindow(sdlWindow);
    SDL_Quit();
    return 0;
    }

    video.cpp

    #include "video.h"

    video::video(const std::string& name) : _finished(false)
    {
    av_register_all();
    pFormatCtx = NULL;
    pCodecCtxOrig = NULL;
    pCodecCtx = NULL;
    pCodec = NULL;
    pFrame = NULL;
    sws_ctx = NULL;
    if (avformat_open_input(&pFormatCtx, name.c_str(), NULL, NULL) != 0)
    {
    _finished = true; // Couldn't open file
    }
    // Retrieve stream information
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
    {
    _finished = true; // Couldn't find stream information
    }
    videoStream = -1;
    for (i = 0; i < pFormatCtx->nb_streams; i++)
    {
       if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
       {
           videoStream = i;
           break;
       }
    }
    if (videoStream == -1)
    {
       _finished = true; // Didn't find a video stream
    }
    // Get a pointer to the codec context for the video stream
    pCodecCtxOrig = pFormatCtx->streams[videoStream]->codec;
    // Find the decoder for the video stream
    pCodec = avcodec_find_decoder(pCodecCtxOrig->codec_id);
    if (pCodec == NULL)
    {
       fprintf(stderr, "Unsupported codec!\n");
       _finished = true; // Codec not found
    }
    pCodecCtx = avcodec_alloc_context3(pCodec);
    if (avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0)
    {
       fprintf(stderr, "Couldn't copy codec context");
       _finished = true; // Error copying codec context
    }
    // Open codec
    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
    {
       _finished = true; // Could not open codec
    }
    // Allocate video frame
    pFrame = av_frame_alloc();
    sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
    AV_PIX_FMT_YUV420P,
    SWS_BILINEAR,
    NULL,
    NULL,
    NULL);
    yPlaneSz = pCodecCtx->width * pCodecCtx->height;
    uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
    yPlane = (Uint8*)malloc(yPlaneSz);
    uPlane = (Uint8*)malloc(uvPlaneSz);
    vPlane = (Uint8*)malloc(uvPlaneSz);
    if (!yPlane || !uPlane || !vPlane)
    {
       fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
       exit(1);
    }
    uvPitch = pCodecCtx->width / 2;
    }
    void video::Update(SDL_Texture* texture)
    {
    if (av_read_frame(pFormatCtx, &packet) >= 0)
    {
       // Is this a packet from the video stream?
       if (packet.stream_index == videoStream)
       {
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
           // Did we get a video frame?
           if (frameFinished)
           {
               AVPicture pict;
               pict.data[0] = yPlane;
               pict.data[1] = uPlane;
               pict.data[2] = vPlane;
               pict.linesize[0] = pCodecCtx->width;
               pict.linesize[1] = uvPitch;
               pict.linesize[2] = uvPitch;
               // Convert the image into YUV format that SDL uses
               sws_scale(sws_ctx, (uint8_t const * const *) pFrame->data,pFrame->linesize, 0, pCodecCtx->height, pict.data,pict.linesize);
               SDL_UpdateYUVTexture(texture,NULL,yPlane,pCodecCtx->width,uPlane,uvPitch,vPlane,uvPitch);
           }
       }
       // Free the packet that was allocated by av_read_frame
        av_packet_unref(&packet);
        av_freep(&packet);
    }
    else
    {
       av_packet_unref(&packet);
       av_freep(&packet);
       _finished = true;
    }
    }
    bool video::getFinished()
    {
    return _finished;
    }
    video::~video()
    {
    av_packet_unref(&packet);
    av_freep(&packet);
    av_frame_free(&pFrame);
    av_freep(&pFrame);
    free(yPlane);
    free(uPlane);
    free(vPlane);
    // Close the codec
    avcodec_close(pCodecCtx);
    avcodec_close(pCodecCtxOrig);
    sws_freeContext(sws_ctx);
    // Close the video file
    for (int i = 0; i < pFormatCtx->nb_streams; i++)
    {
       AVStream *stream = pFormatCtx->streams[i];
       avcodec_close(stream->codec);
    }
    avformat_close_input(&pFormatCtx);
    /*av_dict_free(&optionsDict);
    sws_freeContext(sws_ctx);
    av_free_packet(&packet);
    av_free(pFrameYUV);
    av_free(buffer);
    avcodec_close(pCodecCtx);
    avformat_close_input(&pFormatCtx);*/
    }

    video.h

    #include <string>
    #include <SDL2/SDL.h>
    #ifdef __cplusplus
    extern "C" {
    #endif
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #ifdef __cplusplus
    }
    #endif

    class video
    {
      private:
       bool _finished;
       AVFormatContext *pFormatCtx;
       int videoStream;
       unsigned i;
       AVCodecContext *pCodecCtxOrig;
       AVCodecContext *pCodecCtx;
       AVCodec *pCodec;
       AVFrame *pFrame;
       AVPacket packet;
       int frameFinished;
       struct SwsContext *sws_ctx;
       Uint8 *yPlane, *uPlane, *vPlane;
       size_t yPlaneSz, uvPlaneSz;
       int uvPitch;
      public:
       video(const std::string& name);
       ~video();
       void Update(SDL_Texture* texture);
       bool getFinished();
    };

    I'm looking forward to your answers.