
Media (0)

Word: - Tags -/flash

No media matching your criteria is available on this site.

Other articles (59)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    At the moment this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
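
    The "retrieval of technical information" step mentioned above can be pictured with the ffmpeg libraries; the following is only an illustrative sketch with hypothetical naming, not SPIPMotion code (SPIPMotion itself is a SPIP plugin and drives such tools on the server side):

     // Illustrative sketch (hypothetical, not SPIPMotion code): probe an uploaded
     // file and report the technical information of its audio and video streams.
     extern "C" {
     #include <libavformat/avformat.h>
     }
     #include <cstdio>

     int main(int argc, char** argv) {
       if (argc < 2) {
         std::fprintf(stderr, "usage: %s <media file>\n", argv[0]);
         return 1;
       }

       av_register_all();  // required on the older ffmpeg versions used here

       AVFormatContext* fmt = nullptr;
       if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) != 0)
         return 1;
       if (avformat_find_stream_info(fmt, nullptr) < 0)
         return 1;

       // Overall duration plus per-stream codec information.
       std::printf("duration: %.2f s\n", fmt->duration / (double)AV_TIME_BASE);
       for (unsigned i = 0; i < fmt->nb_streams; i++) {
         AVCodecContext* c = fmt->streams[i]->codec;
         if (c->codec_type == AVMEDIA_TYPE_VIDEO)
           std::printf("stream %u: video %dx%d\n", i, c->width, c->height);
         else if (c->codec_type == AVMEDIA_TYPE_AUDIO)
           std::printf("stream %u: audio %d Hz, %d channel(s)\n", i, c->sample_rate, c->channels);
       }

       avformat_close_input(&fmt);
       return 0;
     }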

On other sites (7271)

  • Could not open encoder using ffmpeg C API for MOV format

    22 June 2016, by lupod

    Context

    I am writing a program that does some video processing on an input file. I wrote two classes that handle the "reading/writing frames" part and essentially wrap the ffmpeg functions. Each class is instantiated with an input or output file name, and in its constructor I initialize everything that is needed (or at least I hope so).

    These are the two routines called inside the constructors:

    // InputVideoHandler.cpp
    void InputVideoHandler::init(char* name) {
     streamIndex = -1;
     int numStreams;

     if (avformat_open_input(&formatCtx, name, NULL, NULL) != 0)
       throw std::exception("Invalid input file name.");

     if (avformat_find_stream_info(formatCtx, NULL)<0)
       throw std::exception("Could not find stream information.");

     numStreams = formatCtx->nb_streams;

     if (numStreams <= 0)
       throw std::exception("No streams in input video file.");

     for (int i = 0; i < numStreams; i++) {
       if (formatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
         streamIndex = i;
         break;
       }
     }

     if (streamIndex < 0)
       throw std::exception("No video stream in input video file.");

     // find decoder using id
     codec = avcodec_find_decoder(formatCtx->streams[streamIndex]->codec->codec_id);
     if (codec == nullptr)
       throw std::exception("Could not find suitable decoder for input file.");

     // copy context from input stream
     codecCtx = avcodec_alloc_context3(codec);
     if (avcodec_copy_context(codecCtx, formatCtx->streams[streamIndex]->codec) != 0)
       throw std::exception("Could not copy codec context from input stream.");

     if (avcodec_open2(codecCtx, codec, NULL) < 0)
       throw std::exception("Could not open decoder.");

     codecCtx->refcounted_frames = 1;
    }

    // OutputVideoBuilder.cpp
    void OutputVideoBuilder::init(char* name, AVCodecContext* inputCtx) {
     if (avformat_alloc_output_context2(&formatCtx, NULL, NULL, name) < 0)
       throw std::exception("Could not determine file extension from provided name.");

     codec = avcodec_find_encoder(inputCtx->codec_id);
     if (codec == nullptr) {
       throw std::exception("Could not find suitable encoder.");
     }

     codecCtx = avcodec_alloc_context3(codec);
     if (avcodec_copy_context(codecCtx, inputCtx) < 0)
       throw std::exception("Could not copy output codec context from input");

     codecCtx->time_base = inputCtx->time_base;

     if (avcodec_open2(codecCtx, codec, NULL) < 0)
       throw std::exception("Could not open encoder.");

     stream = avformat_new_stream(formatCtx, codec);
     if (stream == nullptr) {
       throw std::exception("Could not allocate stream.");
     }

     stream->id = formatCtx->nb_streams - 1;
     stream->codec = codecCtx;
     stream->time_base = codecCtx->time_base;

     av_dump_format(formatCtx, 0, name, 1);
     if (!(formatCtx->oformat->flags & AVFMT_NOFILE)) {
       if (avio_open(&formatCtx->pb, name, AVIO_FLAG_WRITE) < 0) {
         throw std::exception("Could not open output file.");
       }
     }

     if (avformat_write_header(formatCtx, NULL) < 0) {
       throw std::exception("Error occurred when opening output file.");
     }

    }

    As you can see, the init function of the output handler requires that an AVCodecContext be provided. In my code, I pass to the constructor the AVCodecContext that is stored in the input handler and was created earlier.

    Question:

    The two functions work fine when I test my program on some video formats, like .mpg or .avi. When I try to process .mov/.mp4 files, however, my code throws the exception that I labeled "Could not open encoder." in OutputVideoBuilder::init(). Why is this happening? I read in the general ffmpeg documentation that this format should be supported as well. I assume I am doing something wrong in my code, which I do not completely understand, since it was put together by adapting the documentation's tutorials to my specific case. For this reason as well, any comments on things that are unnecessary or missing would be greatly appreciated.
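
    A pattern that often comes up in this kind of situation (purely an illustrative sketch, not a confirmed fix for this particular question, and not the poster's code) is to configure the encoder context field by field instead of copying the whole decoder context, and to request global headers when the container format asks for them, as MOV/MP4 does. The helper name openEncoderFor and the error-handling style are assumptions; only ffmpeg API calls from the same API generation as the code above are used:

     // Hypothetical sketch: open an encoder for a MOV/MP4-style container by
     // setting only the fields the encoder needs, rather than copying the whole
     // decoder context (which also carries decoder-specific data such as
     // extradata and codec_tag).
     extern "C" {
     #include <libavformat/avformat.h>
     #include <libavcodec/avcodec.h>
     }
     #include <stdexcept>

     AVCodecContext* openEncoderFor(AVFormatContext* outFmtCtx, AVCodecContext* inputCtx) {
       AVCodec* codec = avcodec_find_encoder(inputCtx->codec_id);
       if (codec == nullptr)
         throw std::runtime_error("Could not find suitable encoder.");

       AVCodecContext* encCtx = avcodec_alloc_context3(codec);
       if (encCtx == nullptr)
         throw std::runtime_error("Could not allocate encoder context.");

       // Copy only the parameters the encoder actually needs.
       encCtx->width     = inputCtx->width;
       encCtx->height    = inputCtx->height;
       encCtx->pix_fmt   = inputCtx->pix_fmt;
       encCtx->time_base = inputCtx->time_base;

       // MOV/MP4 muxers expect codec parameters as global extradata, so ask the
       // encoder to produce them (CODEC_FLAG_GLOBAL_HEADER on older versions).
       if (outFmtCtx->oformat->flags & AVFMT_GLOBALHEADER)
         encCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

       if (avcodec_open2(encCtx, codec, NULL) < 0)
         throw std::runtime_error("Could not open encoder.");

       return encCtx;
     }

    Whether the encoder accepts the decoder's pix_fmt and time_base unchanged depends on the encoder; checking the list of supported pixel formats and picking one of them is a further refinement this sketch leaves out.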

  • avconv: init filtergraphs only after we have a frame on each input

    27 May 2016, by Anton Khirnov
    avconv: init filtergraphs only after we have a frame on each input
    

    This makes sure the actual stream parameters are used. That is important
    mainly for hardware decoding+filtering cases, which previously required
    various odd workarounds to handle the fact that a fake software graph had
    to be constructed but never used.
    This should also improve behaviour in rare cases where
    avformat_find_stream_info() does not provide accurate information.

    • [DBH] avconv.c
    • [DBH] avconv.h
    • [DBH] avconv_filter.c
    • [DBH] avconv_opt.c
    • [DBH] avconv_qsv.c
  • vaapi_encode: Maintain a pool of bitstream output buffers

    5 June 2016, by Mark Thompson
    vaapi_encode: Maintain a pool of bitstream output buffers
    

    Previously we would allocate a new one for every frame. This instead
    maintains an AVBufferPool of them to use as-needed.

    Also makes the maximum size of an output buffer adapt to the frame
    size - the fixed upper bound was a bit too easy to hit when encoding
    large pictures at high quality.

    • [DBH] libavcodec/vaapi_encode.c
    • [DBH] libavcodec/vaapi_encode.h
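
    The buffer-pool pattern described in this commit can be illustrated with the public AVBufferPool API from libavutil; the following is a minimal, hypothetical sketch of the general technique, not the actual vaapi_encode code:

     // Minimal sketch of the buffer-pool pattern (hypothetical, not the
     // vaapi_encode implementation): output buffers are recycled through an
     // AVBufferPool instead of being allocated anew for every frame.
     extern "C" {
     #include <libavutil/buffer.h>
     }
     #include <cstdio>

     int main() {
       const size_t buf_size = 1 << 20;  // assumed upper bound per output buffer
       AVBufferPool* pool = av_buffer_pool_init(buf_size, nullptr);
       if (pool == nullptr)
         return 1;

       for (int frame = 0; frame < 3; frame++) {
         // Reuses a free buffer if one is available, allocates a new one otherwise.
         AVBufferRef* buf = av_buffer_pool_get(pool);
         if (buf == nullptr)
           break;
         // ... write the encoded bitstream for this frame into buf->data ...
         std::printf("frame %d: got a buffer of %zu bytes\n", frame, (size_t)buf->size);
         // Releasing the reference returns the buffer to the pool for reuse.
         av_buffer_unref(&buf);
       }

       // Frees the pool once all outstanding buffers have been released.
       av_buffer_pool_uninit(&pool);
       return 0;
     }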