Advanced search

Media (1)

Keyword: - Tags - / e-book

Other articles (79)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (Administer) section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
    Each newly added language can still be deactivated as long as no object has been created in that language; once one exists, the language is greyed out in the configuration and (...)

  • XMP PHP

    13 May 2011

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001 and integrated into version 5.0 of Adobe Acrobat.
    Being XML-based, it handles a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record information about a file as an XML document: title, author, history (...)

On other sites (9280)

  • Could not open encoder using ffmpeg C API for MOV format

    22 June 2016, by lupod

    Context

    I am writing a program that does some video processing on an input file. I wrote two classes to handle the "reading/writing frames" part, which essentially wrap the ffmpeg functions. These classes are instantiated by providing an input or output file name, and in their constructors I initialize everything that is needed (or at least I hope so).

    These are the two routines that are called inside the constructors:

    // InputVideoHandler.cpp
    void InputVideoHandler::init(char* name) {
     streamIndex = -1;
     int numStreams;

     if (avformat_open_input(&formatCtx, name, NULL, NULL) != 0)
       throw std::exception("Invalid input file name.");

     if (avformat_find_stream_info(formatCtx, NULL)<0)
       throw std::exception("Could not find stream information.");

     numStreams = formatCtx->nb_streams;

     if (numStreams < 0)
       throw std::exception("No streams in input video file.");

     for (int i = 0; i < numStreams; i++) {
       if (formatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
         streamIndex = i;
         break;
       }
     }

     if (streamIndex < 0)
       throw std::exception("No video stream in input video file.");

     // find decoder using id
     codec = avcodec_find_decoder(formatCtx->streams[streamIndex]->codec->codec_id);
     if (codec == nullptr)
       throw std::exception("Could not find suitable decoder for input file.");

     // copy context from input stream
     codecCtx = avcodec_alloc_context3(codec);
     if (avcodec_copy_context(codecCtx, formatCtx->streams[streamIndex]->codec) != 0)
       throw std::exception("Could not copy codec context from input stream.");

     if (avcodec_open2(codecCtx, codec, NULL) < 0)
       throw std::exception("Could not open decoder.");

     codecCtx->refcounted_frames = 1;
    }

    // OutputVideoBuilder.cpp
    void OutputVideoBuilder::init(char* name, AVCodecContext* inputCtx) {
     if (avformat_alloc_output_context2(&formatCtx, NULL, NULL, name) < 0)
       throw std::exception("Could not determine file extension from provided name.");

     codec = avcodec_find_encoder(inputCtx->codec_id);
     if (codec == nullptr) {
       throw std::exception("Could not find suitable encoder.");
     }

     codecCtx = avcodec_alloc_context3(codec);
     if (avcodec_copy_context(codecCtx, inputCtx) < 0)
       throw std::exception("Could not copy output codec context from input");

     codecCtx->time_base = inputCtx->time_base;

     if (avcodec_open2(codecCtx, codec, NULL) < 0)
       throw std::exception("Could not open encoder.");

     stream = avformat_new_stream(formatCtx, codec);
     if (stream == nullptr) {
       throw std::exception("Could not allocate stream.");
     }

     stream->id = formatCtx->nb_streams - 1;
     stream->codec = codecCtx;
     stream->time_base = codecCtx->time_base;

     av_dump_format(formatCtx, 0, name, 1);
     if (!(formatCtx->oformat->flags & AVFMT_NOFILE)) {
       if (avio_open(&formatCtx->pb, name, AVIO_FLAG_WRITE) < 0) {
         throw std::exception("Could not open output file.");
       }
     }

     if (avformat_write_header(formatCtx, NULL) < 0) {
       throw std::exception("Error occurred when opening output file.");
     }

    }

    As you can see, the init function of the output handler requires that an AVCodecContext be provided. In my code, I pass the constructor the AVCodecContext that is stored in the input handler and was created earlier.
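
    A minimal sketch of that wiring, assuming the constructor signatures shown above and the getCodecContext() accessor declared in the class listings further down (the file names are placeholders):

     // Hypothetical caller: the output builder reuses the input handler's codec context.
     InputVideoHandler* inputHandler = new InputVideoHandler((char*)"input.avi");
     OutputVideoBuilder* outputBuilder =
         new OutputVideoBuilder((char*)"output.mov", inputHandler->getCodecContext());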

    Question:

    The two functions work fine when I test my program with some video formats, such as .mpg or .avi. When I try to process .mov/.mp4 files, however, my code throws the exception labeled "Could not open encoder." in OutputVideoBuilder::init(). Why is this happening? I read in the general FFmpeg documentation that this format should be supported as well. I assume I am doing something wrong in my code, which I do not completely understand because it was created by adapting the tutorials from the documentation to my specific case. For this reason too, any comments on things that are unnecessary or missing would be greatly appreciated.
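
    One way to narrow this down (a sketch under the assumption that the failure really comes from avcodec_open2, not a known fix) is to keep the return code and print it with av_strerror, so the exception carries FFmpeg's own description of the failure:

     // Hypothetical variant of the avcodec_open2 call in OutputVideoBuilder::init
     // that reports the exact FFmpeg error instead of a generic message.
     int ret = avcodec_open2(codecCtx, codec, NULL);
     if (ret < 0) {
       char errbuf[AV_ERROR_MAX_STRING_SIZE];     // buffer size macro from libavutil/error.h
       av_strerror(ret, errbuf, sizeof(errbuf));  // e.g. "Invalid argument"
       // std::runtime_error is used here so the message can be built portably
       throw std::runtime_error(std::string("Could not open encoder: ") + errbuf);
     }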

  • Encoder (codec png) not found for output stream #0:0 [duplicate]

    7 June 2016, by Anubhav Dhawan


    I’m trying to create a NodeJS app that converts a video into a GIF image.

    I’m using the node-gify plugin for this purpose, which uses FFmpeg and GraphicsMagick.

    Here’s my sample code:

    var gify = require('./');
    var http = require('http');
    var fs = require('fs');

    var opts = {
     height: 300,
     rate: 10
    };

    console.time('convert');
    gify('out.mp4', 'out.gif', opts, function(err) {
     if (err) throw err;
     console.timeEnd('convert');
     var s = fs.statSync('out.gif');
     console.log('size: %smb', s.size / 1024 / 1024 | 0);
    });

    And here’s my console error:

    > gify@0.2.0 start /home/daffodil/repos/node-gify-master
    > node example.js

    /home/daffodil/repos/node-gify-master/example.js:24
     if (err) throw err;
              ^

    Error: Command failed: /bin/sh -c ffmpeg -i out.mp4 -filter:v scale=-1:300 -r 10 /tmp/IP5OXJZELd/%04d.png
    ffmpeg version 3.0.2 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04)
     configuration: --disable-yasm
     libavutil      55. 17.103 / 55. 17.103
     libavcodec     57. 24.102 / 57. 24.102
     libavformat    57. 25.100 / 57. 25.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 31.100 /  6. 31.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'out.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 1
       compatible_brands: mp42mp41
       creation_time   : 2005-02-25 02:35:57
     Duration: 00:01:10.00, start: 0.000000, bitrate: 106 kb/s
       Stream #0:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, stereo, fltp, 19 kb/s (default)
       Metadata:
         creation_time   : 2005-02-25 02:35:57
         handler_name    : Apple Sound Media Handler
       Stream #0:1(eng): Video: mpeg4 (Advanced Simple Profile) (mp4v / 0x7634706D), yuv420p, 192x242 [SAR 1:1 DAR 96:121], 76 kb/s, 15 fps, 15 tbr, 600 tbn, 1k tbc (default)
       Metadata:
         creation_time   : 2005-02-25 02:35:57
         handler_name    : Apple Video Media Handler
       Stream #0:2(eng): Data: none (rtp  / 0x20707472), 4 kb/s (default)
       Metadata:
         creation_time   : 2005-02-25 02:35:57
         handler_name    : hint media handler
       Stream #0:3(eng): Data: none (rtp  / 0x20707472), 3 kb/s (default)
       Metadata:
         creation_time   : 2005-02-25 02:35:57
         handler_name    : hint media handler
    Output #0, image2, to '/tmp/IP5OXJZELd/%04d.png':
     Metadata:
       major_brand     : mp42
       minor_version   : 1
       compatible_brands: mp42mp41
       Stream #0:0(eng): Video: png, none, q=2-31, 128 kb/s (default)
       Metadata:
         creation_time   : 2005-02-25 02:35:57
         handler_name    : Apple Video Media Handler
    Stream mapping:
     Stream #0:1 -> #0:0 (mpeg4 (native) -> ? (?))
    Encoder (codec png) not found for output stream #0:0

       at ChildProcess.exithandler (child_process.js:213:12)
       at emitTwo (events.js:100:13)
       at ChildProcess.emit (events.js:185:7)
       at maybeClose (internal/child_process.js:827:16)
       at Socket.<anonymous> (internal/child_process.js:319:11)
       at emitOne (events.js:90:13)
       at Socket.emit (events.js:182:7)
       at Pipe._onclose (net.js:471:12)
    </anonymous>

    PS: I had a couple of problems installing FFmpeg on my Ubuntu 14.04.

    • First, FFmpeg was removed from Ubuntu 14.04 (legal issues, AFAIK), but I managed to apt-get it through this.
    • Second, when I tried to ./configure (as mentioned in its README.md), I got the error yasm/nasm not found or too old. Use --disable-yasm for a crippled build. So I used ./configure --disable-yasm instead, and it (somehow) worked.

    Update #1

    After reading this log a couple of times, I managed to produce a sample GIF from my mp4 file by changing the command that example.js tries to run:

    From

    ffmpeg -i out.mp4 -filter:v scale=-1:300 -r 10 /tmp/Lz43nx6wv1/%04d.png

    To

    ffmpeg -i out.mp4 -filter:v scale=-1:300 -r 10 out.gif

    But that is still the command line; I need to do this from code.

    So I dived into the code and found that this wrong URL comes from the plugin’s index.js:

    ...

    // tmpfile(s)
     var id = uid(10);
     var dir = path.resolve('/tmp/' + id);
     var tmp  = path.join(dir, '/%04d.png');

    ...

    Is this an issue with the plugin, or am I doing something wrong here?
    In any case, please point me to the correct stub, because I don’t want to touch this part unless I know what I’m doing.

    Update #2

    I have now installed zlib1g-dev and reinstalled both FFmpeg and GraphicsMagick, and now I see this error:

    gm convert: No decode delegate for this image format (/tmp/ZQbEAynAcf/0702.png).

    Thanks in advance :)

  • Losing quality when encoding with ffmpeg

    22 May 2016, by lupod

    I am using the C libraries of ffmpeg to read frames from a video and create an output file that is supposed to be identical to the input.
    However, somewhere during this process some quality is lost and the result is "less sharp". My guess is that the problem is the encoding and that the frames are compressed too much (also because the file size decreases quite significantly). Is there some parameter in the encoder that would let me control the quality of the result? I found that AVCodecContext has a compression_level member, but changing it does not seem to have any effect.
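
    A sketch of the AVCodecContext fields that typically influence output quality, to try before avcodec_open2 (these are assumptions to experiment with; which of them a given encoder honours is codec-specific):

     // Hypothetical quality settings, applied in OutputVideoBuilder::init before avcodec_open2.
     codecCtx->bit_rate = inputCtx->bit_rate;      // reuse the input bitrate instead of the library default
     codecCtx->qmin = 1;                           // allow rate control to pick low (high-quality) quantizers
     codecCtx->qmax = 10;                          // cap how coarsely a frame may be quantized
     codecCtx->flags |= AV_CODEC_FLAG_QSCALE;      // request constant-quality mode where supported
     codecCtx->global_quality = FF_QP2LAMBDA * 3;  // quality level used together with AV_CODEC_FLAG_QSCALE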

    I post part of my code here in case it helps. I would say that something must be changed in the init function of OutputVideoBuilder when I set the codec. The AVCodecContext passed to the method is the same one as in InputVideoHandler.
    Here are the two main classes that I created to wrap the ffmpeg functionality:

    // This class opens the video files and sets the decoder
    class InputVideoHandler {
    public:
     InputVideoHandler(char* name);
     ~InputVideoHandler();
     AVCodecContext* getCodecContext();
     bool readFrame(AVFrame* frame, int* success);

    private:
     InputVideoHandler();
     void init(char* name);
     AVFormatContext* formatCtx;
     AVCodec* codec;
     AVCodecContext* codecCtx;
     AVPacket packet;
     int streamIndex;
    };

    void InputVideoHandler::init(char* name) {
     streamIndex = -1;
     int numStreams;

     if (avformat_open_input(&formatCtx, name, NULL, NULL) != 0)
       throw std::exception("Invalid input file name.");

     if (avformat_find_stream_info(formatCtx, NULL)<0)
       throw std::exception("Could not find stream information.");
       throw std::exception("Could not find stream information.");

     numStreams = formatCtx->nb_streams;

     if (numStreams < 0)
       throw std::exception("No streams in input video file.");

     for (int i = 0; i < numStreams; i++) {
       if (formatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
         streamIndex = i;
         break;
       }
     }

     if (streamIndex < 0)
       throw std::exception("No video stream in input video file.");

     // find decoder using id
     codec = avcodec_find_decoder(formatCtx->streams[streamIndex]->codec->codec_id);
     if (codec == nullptr)
       throw std::exception("Could not find suitable decoder for input file.");

     // copy context from input stream
     codecCtx = avcodec_alloc_context3(codec);
     if (avcodec_copy_context(codecCtx, formatCtx->streams[streamIndex]->codec) != 0)
       throw std::exception("Could not copy codec context from input stream.");

     if (avcodec_open2(codecCtx, codec, NULL) < 0)
       throw std::exception("Could not open decoder.");
    }

    // frame must be initialized with av_frame_alloc() before!
    // Returns true if there are other frames, false if not.
    // success == 1 if frame is valid, 0 if not.
    bool InputVideoHandler::readFrame(AVFrame* frame, int* success) {
     *success = 0;
     if (av_read_frame(formatCtx, &packet) < 0)
       return false;
     if (packet.stream_index == streamIndex) {
       avcodec_decode_video2(codecCtx, frame, success, &packet);
     }
     av_free_packet(&packet);
     return true;
    }

    // This class opens the output and write frames to it
    class OutputVideoBuilder{
    public:
     OutputVideoBuilder(char* name, AVCodecContext* inputCtx);
     ~OutputVideoBuilder();
     void writeFrame(AVFrame* frame);
     void writeVideo();

    private:
     OutputVideoBuilder();
     void init(char* name, AVCodecContext* inputCtx);
     void logMsg(AVPacket* packet, AVRational* tb);
     AVFormatContext* formatCtx;
     AVCodec* codec;
     AVCodecContext* codecCtx;
     AVStream* stream;
    };

    void OutputVideoBuilder::init(char* name, AVCodecContext* inputCtx) {
     if (avformat_alloc_output_context2(&formatCtx, NULL, NULL, name) < 0)
       throw std::exception("Could not determine file extension from provided name.");

     codec = avcodec_find_encoder(inputCtx->codec_id);
     if (codec == nullptr) {
       throw std::exception("Could not find suitable encoder.");
     }

     codecCtx = avcodec_alloc_context3(codec);
     if (avcodec_copy_context(codecCtx, inputCtx) < 0)
       throw std::exception("Could not copy output codec context from input");

     codecCtx->time_base = inputCtx->time_base;
     codecCtx->compression_level = 0;

     if (avcodec_open2(codecCtx, codec, NULL) < 0)
       throw std::exception("Could not open encoder.");

     stream = avformat_new_stream(formatCtx, codec);
     if (stream == nullptr) {
       throw std::exception("Could not allocate stream.");
     }

     stream->id = formatCtx->nb_streams - 1;
     stream->codec = codecCtx;
     stream->time_base = codecCtx->time_base;



     av_dump_format(formatCtx, 0, name, 1);
     if (!(formatCtx->oformat->flags & AVFMT_NOFILE)) {
       if (avio_open(&formatCtx->pb, name, AVIO_FLAG_WRITE) < 0) {
         throw std::exception("Could not open output file.");
       }
     }

     if (avformat_write_header(formatCtx, NULL) < 0) {
       throw std::exception("Error occurred when opening output file.");
     }

    }

    void OutputVideoBuilder::writeFrame(AVFrame* frame) {
     AVPacket packet = { 0 };
     int success;
     av_init_packet(&packet);

     if (avcodec_encode_video2(codecCtx, &packet, frame, &success))
       throw std::exception("Error encoding frames");

     if (success) {
       av_packet_rescale_ts(&packet, codecCtx->time_base, stream->time_base);
       packet.stream_index = stream->index;
       logMsg(&packet,&stream->time_base);
       av_interleaved_write_frame(formatCtx, &packet);
     }
     av_free_packet(&packet);
    }

    This is the part of the main function that reads and writes frames:

    while (inputHandler->readFrame(frame,&gotFrame)) {

       if (gotFrame) {
         try {
           outputBuilder->writeFrame(frame);
         }
         catch (std::exception e) {
           std::cout << e.what() << std::endl;
           return -1;
         }
       }
     }