
Other articles (103)

  • Customise by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others); audio (MP3, Ogg, Wav and others); video (Avi, MP4, Ogv, mpg, mov, wmv and others); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (11629)

  • Error "Broken file, keyframe not correctly marked" [on hold]

    11 April 2017, by Александр Патейчук

    When I open a certain .ogv file, I get the following error in the dump:

    [ogg @ 00f0c400] Broken file, keyframe not correctly marked.

    I cannot play the audio track (the video plays normally), although VLC plays this file without problems.

    Is it possible to fix "keyframe not correctly marked"? Other files with the same codec play back normally.

  • C++ ffmpeg and SDL2 video rendering memory leak

    10 April 2017, by kj192

    I have made a small program that plays a video using SDL 2.0 and FFmpeg.
    The software works and does what it is supposed to do.
    I left it running, noticed huge memory consumption, and started looking online for what I could do about it.
    I used the following tutorials:
    http://www.developersite.org/906-59411-FFMPEG
    http://ardrone-ailab-u-tokyo.blogspot.co.uk/2012/07/212-ardrone-20-video-decording-ffmpeg.html

    I wonder if someone can tell me what I am doing wrong. I have tried valgrind, but I can't find any useful information. I tried commenting out sections, and even when I am not rendering to the display the memory usage keeps growing; after delete, something is still not freed up:

    if (av_read_frame(pFormatCtx, &packet) >= 0)

    The whole source code is below.
    main:

    #include <unistd.h>
    #include <ios>
    #include <iostream>
    #include <fstream>
    #include <string>
    #include <SDL2/SDL.h>
    #include "video.h"
    using namespace std;

    void memory()
    {
    using std::ios_base;
    using std::ifstream;
    using std::string;

    double vm_usage     = 0.0;
    double resident_set = 0.0;

    // 'file' stat seems to give the most reliable results
    //
    ifstream stat_stream("/proc/self/stat",ios_base::in);

    // dummy vars for leading entries in stat that we don't care about
    //
    string pid, comm, state, ppid, pgrp, session, tty_nr;
    string tpgid, flags, minflt, cminflt, majflt, cmajflt;
    string utime, stime, cutime, cstime, priority, nice;
    string O, itrealvalue, starttime;

    // the two fields we want
    //
    unsigned long vsize;
    long rss;

    stat_stream >> pid >> comm >> state >> ppid >> pgrp >> session >> tty_nr
               >> tpgid >> flags >> minflt >> cminflt >> majflt >> cmajflt
               >> utime >> stime >> cutime >> cstime >> priority >> nice
               >> O >> itrealvalue >> starttime >> vsize >> rss; // don't care about the rest

    stat_stream.close();

    long page_size_kb = sysconf(_SC_PAGE_SIZE) / 1024; // in case x86-64 is configured to use 2MB pages
    vm_usage     = vsize / 1024.0;
    resident_set = rss * page_size_kb;
    std::cout << "VM: " << vm_usage << " RE:" << resident_set << std::endl;
    }


    int main()
    {
    //This example using 1280x800 video
    av_register_all();
    if( SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER ))
    {
       fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
       exit(1);
    }
    SDL_Window* sdlWindow = SDL_CreateWindow("Video Window", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 1280, 800, SDL_WINDOW_OPENGL);
    if( !sdlWindow )
    {
       fprintf(stderr, "SDL: could not set video mode - exiting\n");
       exit(1);
    }
    SDL_Renderer* sdlRenderer = SDL_CreateRenderer(sdlWindow, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_TARGETTEXTURE);
    SDL_Texture* sdlTexture = SDL_CreateTexture(sdlRenderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING, 1280, 800);
    if(!sdlTexture)
    {
       return -1;
    }
    SDL_SetTextureBlendMode(sdlTexture,SDL_BLENDMODE_BLEND );
    //VIDEO RESOLUTION
    SDL_Rect sdlRect;
    sdlRect.x = 0;
    sdlRect.y = 0;
    sdlRect.w = 1280;
    sdlRect.h = 800;    
    memory();
    for(int i = 1; i < 6; i++)
    {
       memory();  
       video* vid = new video("vid.mp4");  
       while (!vid -> getFinished())
       {
           memory();
           vid -> Update(sdlTexture);
           SDL_RenderCopy(sdlRenderer,sdlTexture,&sdlRect,&sdlRect);
           SDL_RenderPresent(sdlRenderer);
       }
       delete vid;
       memory();
    }  
    SDL_DestroyTexture(sdlTexture);
    SDL_DestroyRenderer(sdlRenderer);
    SDL_DestroyWindow(sdlWindow);
    SDL_Quit();
    return 0;
    }

    video.cpp

    #include "video.h"

    video::video(const std::string& name) : _finished(false)
    {
    av_register_all();
    pFormatCtx = NULL;
    pCodecCtxOrig = NULL;
    pCodecCtx = NULL;
    pCodec = NULL;
    pFrame = NULL;
    sws_ctx = NULL;
    if (avformat_open_input(&pFormatCtx, name.c_str(), NULL, NULL) != 0)
    {
    _finished = true; // Couldn't open file
    }
    // Retrieve stream information
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
    {
    _finished = true; // Couldn't find stream information
    }
    videoStream = -1;
    for (i = 0; i < pFormatCtx->nb_streams; i++)
    {
       if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
       {
           videoStream = i;
           break;
       }
    }
    if (videoStream == -1)
    {
       _finished = true; // Didn't find a video stream
    }
    // Get a pointer to the codec context for the video stream
    pCodecCtxOrig = pFormatCtx->streams[videoStream]->codec;
    // Find the decoder for the video stream
    pCodec = avcodec_find_decoder(pCodecCtxOrig->codec_id);
    if (pCodec == NULL)
    {
       fprintf(stderr, "Unsupported codec!\n");
       _finished = true; // Codec not found
    }
    pCodecCtx = avcodec_alloc_context3(pCodec);
    if (avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0)
    {
       fprintf(stderr, "Couldn't copy codec context");
       _finished = true; // Error copying codec context
    }
    // Open codec
    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
    {
       _finished = true; // Could not open codec
    }
    // Allocate video frame
    pFrame = av_frame_alloc();
    sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
    AV_PIX_FMT_YUV420P,
    SWS_BILINEAR,
    NULL,
    NULL,
    NULL);
    yPlaneSz = pCodecCtx->width * pCodecCtx->height;
    uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
    yPlane = (Uint8*)malloc(yPlaneSz);
    uPlane = (Uint8*)malloc(uvPlaneSz);
    vPlane = (Uint8*)malloc(uvPlaneSz);
    if (!yPlane || !uPlane || !vPlane)
    {
       fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
       exit(1);
    }
    uvPitch = pCodecCtx->width / 2;
    }
    void video::Update(SDL_Texture* texture)
    {
    if (av_read_frame(pFormatCtx, &packet) >= 0)
    {
       // Is this a packet from the video stream?
       if (packet.stream_index == videoStream)
       {
           avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
           // Did we get a video frame?
           if (frameFinished)
           {
               AVPicture pict;
               pict.data[0] = yPlane;
               pict.data[1] = uPlane;
               pict.data[2] = vPlane;
               pict.linesize[0] = pCodecCtx->width;
               pict.linesize[1] = uvPitch;
               pict.linesize[2] = uvPitch;
               // Convert the image into YUV format that SDL uses
               sws_scale(sws_ctx, (uint8_t const * const *) pFrame->data,pFrame->linesize, 0, pCodecCtx->height, pict.data,pict.linesize);
               SDL_UpdateYUVTexture(texture,NULL,yPlane,pCodecCtx->width,uPlane,uvPitch,vPlane,uvPitch);
           }
       }
       // Free the packet that was allocated by av_read_frame
       av_packet_unref(&packet);
       av_freep(&packet);
    }
    else
    {
       av_packet_unref(&packet);
       av_freep(&packet);
       _finished = true;
    }
    }
    bool video::getFinished()
    {
    return _finished;
    }
    video::~video()
    {
    av_packet_unref(&packet);
    av_freep(&packet);
    av_frame_free(&pFrame);
    av_freep(&pFrame);
    free(yPlane);
    free(uPlane);
    free(vPlane);
    // Close the codec
    avcodec_close(pCodecCtx);
    avcodec_close(pCodecCtxOrig);
    sws_freeContext(sws_ctx);
    // Close the video file
    for (int i = 0; i < pFormatCtx->nb_streams; i++)
    {
       AVStream *stream = pFormatCtx->streams[i];
       avcodec_close(stream->codec);
    }
    avformat_close_input(&pFormatCtx);
    /*av_dict_free(&optionsDict);  
    sws_freeContext(sws_ctx);
    av_free_packet(&packet);
    av_free(pFrameYUV);
    av_free(buffer);
    avcodec_close(pCodecCtx);
    avformat_close_input(&amp;pFormatCtx);*/
    }

    video.h

    #include <string>
    #include <SDL2/SDL.h>
    #ifdef __cplusplus
    extern "C" {
    #endif
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #ifdef __cplusplus
    }
    #endif

    class video
    {
      private:
       bool _finished;
       AVFormatContext *pFormatCtx;
       int videoStream;
       unsigned i;
       AVCodecContext *pCodecCtxOrig;
       AVCodecContext *pCodecCtx;
       AVCodec *pCodec;
       AVFrame *pFrame;
       AVPacket packet;
       int frameFinished;
       struct SwsContext *sws_ctx;
       Uint8 *yPlane, *uPlane, *vPlane;
       size_t yPlaneSz, uvPlaneSz;
       int uvPitch;
      public:
       video(const std::string& name);
       ~video();
       void Update(SDL_Texture* texture);
       bool getFinished();
    };

    I'm looking forward to your answers.
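
    A general defence against this class of leak, independent of which FFmpeg call is responsible, is to wrap each C-style alloc/free pair in an RAII handle so that cleanup runs exactly once when the handle goes out of scope, even on early return. The sketch below is hypothetical and self-contained: `buffer_alloc`/`buffer_free` are stand-ins for pairs like `av_frame_alloc`/`av_frame_free`, not part of any real library.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <cstdlib>
    #include <memory>

    // Hypothetical C-style API standing in for FFmpeg's alloc/free pairs.
    static int g_live_buffers = 0;

    unsigned char* buffer_alloc(std::size_t n) {
        ++g_live_buffers;
        return static_cast<unsigned char*>(std::malloc(n));
    }

    void buffer_free(unsigned char* p) {
        if (p) { --g_live_buffers; std::free(p); }
    }

    // RAII wrapper: the deleter runs exactly once when the unique_ptr is
    // destroyed, so a per-iteration allocation cannot leak.
    struct BufferDeleter {
        void operator()(unsigned char* p) const { buffer_free(p); }
    };
    using BufferPtr = std::unique_ptr<unsigned char, BufferDeleter>;

    BufferPtr make_buffer(std::size_t n) { return BufferPtr(buffer_alloc(n)); }

    int main() {
        for (int i = 0; i < 5; ++i) {
            BufferPtr y = make_buffer(1280 * 800);      // like yPlane
            BufferPtr u = make_buffer(1280 * 800 / 4);  // like uPlane
            // ... decode / render using y.get(), u.get() ...
        } // both buffers are freed here, every iteration
        assert(g_live_buffers == 0); // nothing outlives the loop
        return 0;
    }
    ```

    The same pattern applies to `AVFrame*`, `SwsContext*` and friends: one wrapper type per resource, destruction in the wrapper only, and the destructor of `video` shrinks to almost nothing.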

  • Merge commit '6c916192f3d7441f5896f6c0fe151874fcd91fe4'

    9 April 2017, by Clément Bœsch
    Merge commit '6c916192f3d7441f5896f6c0fe151874fcd91fe4'
    

    * commit '6c916192f3d7441f5896f6c0fe151874fcd91fe4':
    mimic: Convert to the new bitstream reader
    metasound: Convert to the new bitstream reader
    lagarith: Convert to the new bitstream reader
    indeo: Convert to the new bitstream reader
    imc: Convert to the new bitstream reader
    webp: Convert to the new bitstream reader

    This merge is a noop, see
    http://ffmpeg.org/pipermail/ffmpeg-devel/2017-April/209609.html

    Merged-by: Clément Bœsch <u@pkh.me>