Advanced search

Media (0)

Keyword: - Tags -/organisation

No media matching your criteria is available on this site.

Other articles (39)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work in Internet Explorer
    On Internet Explorer (at least versions 8 and 7), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the problem may come from the configuration of Apache's mod_deflate module.
    If that module's configuration contains a line that looks like the following, try removing or commenting it out to see whether the player then works correctly: (...)
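    The excerpt cuts off before the actual directive. As a purely illustrative example, a typical mod_deflate line in an Apache configuration looks like the one below; the MIME types listed are an assumption, not the lost original:

    # illustrative only - the original excerpt's directive was truncated
    AddOutputFilterByType DEFLATE text/html text/plain text/xml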

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (7090)

  • C++ ffmpeg and SDL2 video rendering memory leak

    10 April 2017, by kj192

    I have made a small program that plays a video using SDL 2.0 and FFmpeg.
    The software works and does what it is meant to do.
    I left it running for a while, noticed huge memory consumption, and started looking online for what I could do about it.
    I used the following tutorials:
    http://www.developersite.org/906-59411-FFMPEG
    http://ardrone-ailab-u-tokyo.blogspot.co.uk/2012/07/212-ardrone-20-video-decording-ffmpeg.html

    I wonder if someone can give advice on what I am doing wrong. I have tried Valgrind but couldn't find any useful information. I tried commenting out sections, and what I have seen is that even when I am not rendering to the display the memory usage keeps growing, and after delete something is still not freed up:

    if (av_read_frame(pFormatCtx, &packet) >= 0)
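
    For reference, the packet lifecycle in this version of the FFmpeg API pairs each successful av_read_frame() with exactly one av_packet_unref(), while av_freep() is meant for freeing heap allocations through a pointer-to-pointer. A minimal sketch of a read loop under that convention, with variable names borrowed from the code below:

    AVPacket packet;
    while (av_read_frame(pFormatCtx, &packet) >= 0)
    {
       if (packet.stream_index == videoStream)
       {
           avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
           // ... use pFrame once frameFinished is set ...
       }
       av_packet_unref(&packet); // release the packet's buffers on every iteration
    }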

    The whole source code is here.
    main:

    #include <string>
    #include <ios>
    #include <iostream>
    #include <fstream>
    #include <unistd.h>
    #include <SDL2/SDL.h>
    #include "video.h"
    using namespace std;

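    // Log this process's virtual-memory and resident-set size (Linux-specific: parses /proc/self/stat)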
    void memory()
    {
    using std::ios_base;
    using std::ifstream;
    using std::string;

    double vm_usage     = 0.0;
    double resident_set = 0.0;

    // 'file' stat seems to give the most reliable results
    //
    ifstream stat_stream("/proc/self/stat",ios_base::in);

    // dummy vars for leading entries in stat that we don't care about
    //
    string pid, comm, state, ppid, pgrp, session, tty_nr;
    string tpgid, flags, minflt, cminflt, majflt, cmajflt;
    string utime, stime, cutime, cstime, priority, nice;
    string O, itrealvalue, starttime;

    // the two fields we want
    //
    unsigned long vsize;
    long rss;

    stat_stream >> pid >> comm >> state >> ppid >> pgrp >> session >> tty_nr
               >> tpgid >> flags >> minflt >> cminflt >> majflt >> cmajflt
               >> utime >> stime >> cutime >> cstime >> priority >> nice
               >> O >> itrealvalue >> starttime >> vsize >> rss; // don't care about the rest

    stat_stream.close();

    long page_size_kb = sysconf(_SC_PAGE_SIZE) / 1024; // in case x86-64 is configured to use 2MB pages
    vm_usage     = vsize / 1024.0;
    resident_set = rss * page_size_kb;
    std::cout << "VM: " << vm_usage << " RE:" << resident_set << std::endl;
    }


    int main()
    {
    //This example using 1280x800 video
    av_register_all();
    if( SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER ))
    {
       fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
       exit(1);
    }
    SDL_Window* sdlWindow = SDL_CreateWindow("Video Window", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 1280, 800, SDL_WINDOW_OPENGL);
    if( !sdlWindow )
    {
       fprintf(stderr, "SDL: could not set video mode - exiting\n");
       exit(1);
    }
    SDL_Renderer* sdlRenderer = SDL_CreateRenderer(sdlWindow, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_TARGETTEXTURE);
    SDL_Texture* sdlTexture = SDL_CreateTexture(sdlRenderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING, 1280, 800);
    if(!sdlTexture)
    {
       return -1;
    }
    SDL_SetTextureBlendMode(sdlTexture,SDL_BLENDMODE_BLEND );
    //VIDEO RESOLUTION
    SDL_Rect sdlRect;
    sdlRect.x = 0;
    sdlRect.y = 0;
    sdlRect.w = 1280;
    sdlRect.h = 800;    
    memory();
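    // play the same video five times, logging memory usage around each run and on every frame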
    for(int i = 1; i < 6; i++)
    {
       memory();  
       video* vid = new video("vid.mp4");  
       while (!vid -> getFinished())
       {
           memory();
           vid -> Update(sdlTexture);
           SDL_RenderCopy(sdlRenderer, sdlTexture, &sdlRect, &sdlRect);
           SDL_RenderPresent(sdlRenderer);
       }
       delete vid;
       memory();
    }  
    SDL_DestroyTexture(sdlTexture);
    SDL_DestroyRenderer(sdlRenderer);
    SDL_DestroyWindow(sdlWindow);
    SDL_Quit();
    return 0;
    }

    video.cpp

    #include "video.h"

    video::video(const std::string& name) : _finished(false)
    {
    av_register_all();
    pFormatCtx = NULL;
    pCodecCtxOrig = NULL;
    pCodecCtx = NULL;
    pCodec = NULL;
    pFrame = NULL;
    sws_ctx = NULL;
    if (avformat_open_input(&pFormatCtx, name.c_str(), NULL, NULL) != 0)
    {
    _finished = true; // Couldn't open file
    }
    // Retrieve stream information
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
    {
    _finished = true; // Couldn't find stream information
    }
    videoStream = -1;
    for (i = 0; i < pFormatCtx->nb_streams; i++)
    {
       if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
       {
           videoStream = i;
           break;
       }
    }
    if (videoStream == -1)
    {
       _finished = true; // Didn't find a video stream
    }
    // Get a pointer to the codec context for the video stream
    pCodecCtxOrig = pFormatCtx->streams[videoStream]->codec;
    // Find the decoder for the video stream
    pCodec = avcodec_find_decoder(pCodecCtxOrig->codec_id);
    if (pCodec == NULL)
    {
       fprintf(stderr, "Unsupported codec!\n");
       _finished = true; // Codec not found
    }
    pCodecCtx = avcodec_alloc_context3(pCodec);
    if (avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0)
    {
       fprintf(stderr, "Couldn't copy codec context");
       _finished = true; // Error copying codec context
    }
    // Open codec
    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
    {
       _finished = true; // Could not open codec
    }
    // Allocate video frame
    pFrame = av_frame_alloc();
    sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
    pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
    AV_PIX_FMT_YUV420P,
    SWS_BILINEAR,
    NULL,
    NULL,
    NULL);
    yPlaneSz = pCodecCtx->width * pCodecCtx->height;
    uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
    yPlane = (Uint8*)malloc(yPlaneSz);
    uPlane = (Uint8*)malloc(uvPlaneSz);
    vPlane = (Uint8*)malloc(uvPlaneSz);
    if (!yPlane || !uPlane || !vPlane)
    {
       fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
       exit(1);
    }
    uvPitch = pCodecCtx->width / 2;
    }
    void video::Update(SDL_Texture* texture)
    {
    if (av_read_frame(pFormatCtx, &packet) >= 0)
    {
       // Is this a packet from the video stream?
       if (packet.stream_index == videoStream)
       {
           avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
           // Did we get a video frame?
           if (frameFinished)
           {
               AVPicture pict;
               pict.data[0] = yPlane;
               pict.data[1] = uPlane;
               pict.data[2] = vPlane;
               pict.linesize[0] = pCodecCtx->width;
               pict.linesize[1] = uvPitch;
               pict.linesize[2] = uvPitch;
               // Convert the image into YUV format that SDL uses
               sws_scale(sws_ctx, (uint8_t const * const *) pFrame->data,pFrame->linesize, 0, pCodecCtx->height, pict.data,pict.linesize);
               SDL_UpdateYUVTexture(texture,NULL,yPlane,pCodecCtx->width,uPlane,uvPitch,vPlane,uvPitch);
           }
       }
       // Free the packet that was allocated by av_read_frame
       av_packet_unref(&packet);
       av_freep(&packet);
    }
    else
    {
       av_packet_unref(&packet);
       av_freep(&packet);
       _finished = true;
    }
    }
    bool video::getFinished()
    {
    return _finished;
    }
    video::~video()
    {
    av_packet_unref(&packet);
    av_freep(&packet);
    av_frame_free(&pFrame);
    av_freep(&pFrame);
    free(yPlane);
    free(uPlane);
    free(vPlane);
    // Close the codec
    avcodec_close(pCodecCtx);
    avcodec_close(pCodecCtxOrig);
    sws_freeContext(sws_ctx);
    // Close the video file
    for (int i = 0; i < pFormatCtx->nb_streams; i++)
    {
       AVStream *stream = pFormatCtx->streams[i];
       avcodec_close(stream->codec);
    }
    avformat_close_input(&pFormatCtx);
    /*av_dict_free(&optionsDict);
    sws_freeContext(sws_ctx);
    av_free_packet(&packet);
    av_free(pFrameYUV);
    av_free(buffer);
    avcodec_close(pCodecCtx);
    avformat_close_input(&pFormatCtx);*/
    }

    video.h

    #include <string>
    #include <SDL2/SDL.h>
    #ifdef __cplusplus
    extern "C" {
    #endif
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #ifdef __cplusplus
    }
    #endif

    class video
    {
      private:
       bool _finished;
       AVFormatContext *pFormatCtx;
       int videoStream;
       unsigned i;
       AVCodecContext *pCodecCtxOrig;
       AVCodecContext *pCodecCtx;
       AVCodec *pCodec;
       AVFrame *pFrame;
       AVPacket packet;
       int frameFinished;
       struct SwsContext *sws_ctx;
       Uint8 *yPlane, *uPlane, *vPlane;
       size_t yPlaneSz, uvPlaneSz;
       int uvPitch;
      public:
       video(const std::string& name);
       ~video();
       void Update(SDL_Texture* texture);
       bool getFinished();
    };

    I'm looking forward to your answers.

  • Issue in FFmpegAndroid library: when I compress a video, its duration becomes 1 or 2 seconds

    25 May 2017, by Fateh Singh Saini

    Used this dependency:
    compile 'com.writingminds:FFmpegAndroid:0.3.2'

    I used the code below for video compression:
    public static final String VIDEOCODEC = "-vcodec" ;
    public static final String AUDIOCODEC = "-acodec" ;

    public static final String VIDEOBITSTREAMFILTER = "-vbsf";
    public static final String AUDIOBITSTREAMFILTER = "-absf";

    public static final String VERBOSITY = "-v";
    public static final String FILE_INPUT = "-i";
    public static final String SIZE = "-s";
    public static final String FRAMERATE = "-r";
    public static final String FORMAT = "-f";
    public static final String BITRATE_VIDEO = "-b:v";

    public static final String BITRATE_AUDIO = "-b:a";
    public static final String CHANNELS_AUDIO = "-ac";
    public static final String FREQ_AUDIO = "-ar";

    String[] complexCommand = {"-y", FILE_INPUT, yourRealPath, SIZE, "480x360", FRAMERATE, "25", VIDEOCODEC, "mpeg4", BITRATE_VIDEO, "150k", BITRATE_AUDIO, "48000", CHANNELS_AUDIO, "2", FREQ_AUDIO, "22050", filePath};
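
    For clarity, that array expands to the following command line (input and output paths as placeholders):

    ffmpeg -y -i <yourRealPath> -s 480x360 -r 25 -vcodec mpeg4 -b:v 150k -b:a 48000 -ac 2 -ar 22050 <filePath>

    Note that ffmpeg parses a bare number for -b:a as bits per second, so "48000" requests 48 kb/s audio.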

    /**
    * Executing ffmpeg binary
    */
    private static String execFFmpegBinary(final String[] command) {


       try {
           ffmpeg.execute(command, new ExecuteBinaryResponseHandler() {
               @Override
               public void onFailure(String s) {
                   Log.d(TAG, "FAILED with output : " + s);
               }

               @Override
               public void onSuccess(String s) {
                   Log.d(TAG, "SUCCESS with output : " + s);
               }

               @Override
               public void onProgress(String s) {
                   Log.d(TAG, "Started command : ffmpeg " + command);
                   Log.d(TAG, "progress : " + s);
               }

               @Override
               public void onStart() {
                   Log.d(TAG, "Started command : ffmpeg " + command);
               }

               @Override
               public void onFinish() {
                   Log.d(TAG, "Finished command : ffmpeg " + command);

               }
           });
       } catch (FFmpegCommandAlreadyRunningException e) {
           // do nothing for now
       }
       return filePath;
    }
  • x264: downsides of a high-CRF (22) intermediary encode between conversions instead of lossless

    18 December 2019, by bobtheencoder

    I have a huge collection of video files in the CRF 16-20 range, taking up terabytes of space. The only need I have for these originals is that I encode them from time to time, but those final encodes are of much lower quality (CRF 26-28).

    I understand that a lossy-to-lossy conversion ALWAYS results in some quality loss, but my question is: what if the intermediate file is almost visually lossless compared to the final output?

    So, to sum up, what quality difference should I expect from the following routes?

    CRF 18 (original) -----> CRF 28 (final)
    CRF 18 (original) -----> CRF 22 (long-term storage) -----> Lossy CRF 28 (final)
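
    As a purely illustrative sketch (filenames are hypothetical), the two routes correspond to x264 encodes along these lines:

    # route 1: original straight to final
    ffmpeg -i original.mkv -c:v libx264 -crf 28 final.mp4

    # route 2: via a CRF 22 long-term copy
    ffmpeg -i original.mkv -c:v libx264 -crf 22 storage.mkv
    ffmpeg -i storage.mkv -c:v libx264 -crf 28 final.mp4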