Other articles (52)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...) A rough sketch of such a queue row appears after this list.

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution    Version name            Version number
    Debian          Squeeze                 6.x.x
    Debian          Wheezy                  7.x.x
    Debian          Jessie                  8.x.x
    Ubuntu          The Precise Pangolin    12.04 LTS
    Ubuntu          The Trusty Tahr         14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including:
    - critique of existing features and functions
    - articles contributed by developers, administrators, content producers and editors
    - screenshots to illustrate the above
    - translations of existing documentation into other languages
    To contribute, register with the project users' mailing (...)
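
    As a rough illustration of the queue row described in the first article above, here is a hypothetical C++ mirror of one spip_spipmotion_attentes record. Only the field names come from the excerpt; the types are assumptions, since the actual SQL schema is not quoted here.

    #include <string>

    // Hypothetical in-memory mirror of one row of spip_spipmotion_attentes.
    // Field names are from the article excerpt; the types are assumed.
    struct SpipmotionAttente {
        long        id_spipmotion_attente; // unique numeric id of the task to process
        long        id_document;           // numeric id of the original document to encode
        long        id_objet;              // id of the object the encoded file is attached to
        std::string objet;                 // type of that object
    };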

On other sites (5361)

  • Resultant video stream unplayable

    29 November 2017, by Nikolai Linetskiy

    Currently I’m writing software for transcoding media files using the ffmpeg libraries. The problem is that for H264, QuickTime cannot play the resulting stream and shows a black screen. Audio streams work as expected. I have read that QuickTime can only deal with the yuv420p pixel format, and that is true for the encoded video.

    I looked through the ffmpeg examples and the ffmpeg source code and could not find any clues as to where the problem might be. I would really appreciate any help.

    The only thing I managed to get from QuickTime is a SeqAndPicParamSetFromCFDictionaryRef, bad config record message in the console. The same message is logged by AVPlayer from AVFoundation.
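
    For what it’s worth, here is a minimal, self-contained sketch of how the encoder’s advertised pixel formats can be inspected with the plain libavcodec API (a diagnostic aid only, independent of my wrapper classes):

    #include <cstdio>
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/pixdesc.h>
    }

    // List the pixel formats the H264 encoder advertises and check whether
    // AV_PIX_FMT_YUV420P is among them.
    int main() {
        avcodec_register_all(); // still required on FFmpeg releases of this vintage
        AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!codec || !codec->pix_fmts) {
            return 1;
        }
        bool hasYuv420p = false;
        for (const AVPixelFormat *p = codec->pix_fmts; *p != AV_PIX_FMT_NONE; ++p) {
            std::printf("supported: %s\n", av_get_pix_fmt_name(*p));
            hasYuv420p = hasYuv420p || (*p == AV_PIX_FMT_YUV420P);
        }
        std::printf("yuv420p supported: %s\n", hasYuv420p ? "yes" : "no");
        return 0;
    }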

    Here is the initialization of output streams and encoders.

    int status;

    // avformat_alloc_output_context2()
    if ((status = formatContext.open(destFilename)) < 0) {
       return status;
    }

    AVDictionary *fmtOptions = nullptr;
    av_dict_set(&fmtOptions, "movflags", "faststart", 0);
    av_dict_set(&fmtOptions, "brand", "mp42", 0);

    streams.resize(input->getStreamsCount());
    for (int i = 0; i < input->getStreamsCount(); ++i) {
       AVStream *inputStream = input->getStreamAtIndex(i);
       CodecContext &decoderContext = input->getDecoderAtIndex(i);

       // retrieve output codec by codec id
       auto encoderCodecId = decoderContext.getCodecID();
       if (decoderContext.getCodecType() == AVMEDIA_TYPE_VIDEO || decoderContext.getCodecType() == AVMEDIA_TYPE_AUDIO) {
           int codecIdKey = decoderContext.getCodecType() == AVMEDIA_TYPE_AUDIO ? IPROC_KEY_INT(TargetAudioCodecID) : IPROC_KEY_INT(TargetVideoCodecID);
           auto codecIdParam = static_cast<AVCodecID>(params[codecIdKey]);
           if (codecIdParam != AV_CODEC_ID_NONE) {
               encoderCodecId = codecIdParam;
           }
       }
       AVCodec *encoder = nullptr;
       if ((encoder = avcodec_find_encoder(encoderCodecId)) == nullptr) {
           status = AVERROR_ENCODER_NOT_FOUND;
           return status;
       }

       // create stream with specific codec and format
       AVStream *outputStream = nullptr;
       // avformat_new_stream()
       if ((outputStream = formatContext.newStream(encoder)) == nullptr) {
           return AVERROR(ENOMEM);
       }


       CodecContext encoderContext;
       // avcodec_alloc_context3()
       if ((status = encoderContext.init(encoder)) < 0) {
           return status;
       }

       outputStream->disposition = inputStream->disposition;
       encoderContext.getRawCtx()->chroma_sample_location = decoderContext.getRawCtx()->chroma_sample_location;

       if (encoderContext.getCodecType() == AVMEDIA_TYPE_VIDEO) {
           auto lang = av_dict_get(input->getStreamAtIndex(i)->metadata, "language", nullptr, 0);
           if (lang) {
               av_dict_set(&outputStream->metadata, "language", lang->value, 0);
           }

           // prepare encoder context
           int targetWidth = params[IPROC_KEY_INT(TargetVideoWidth)];
           int targetHeight = params[IPROC_KEY_INT(TargetVideHeight)];



           encoderContext.width() = targetWidth > 0 ? targetWidth : decoderContext.width();
           encoderContext.height() = targetHeight > 0 ? targetHeight : decoderContext.height();
           encoderContext.pixelFormat() = encoder->pix_fmts ? encoder->pix_fmts[0] : decoderContext.pixelFormat();
           encoderContext.timeBase() = decoderContext.timeBase();
           encoderContext.getRawCtx()->level = 31;
           encoderContext.getRawCtx()->gop_size = 25;

           double far = static_cast<double>(encoderContext.getRawCtx()->width) / encoderContext.getRawCtx()->height;
           double dar = static_cast<double>(decoderContext.width()) / decoderContext.height();
           encoderContext.sampleAspectRatio() = av_d2q(dar / far, 255);


           encoderContext.getRawCtx()->bits_per_raw_sample = FFMIN(decoderContext.getRawCtx()->bits_per_raw_sample,
                                                                   av_pix_fmt_desc_get(encoderContext.pixelFormat())->comp[0].depth);
           encoderContext.getRawCtx()->framerate = inputStream->r_frame_rate;
           outputStream->avg_frame_rate = encoderContext.getRawCtx()->framerate;

           VideoFilterGraphParameters params;
           params.height = encoderContext.height();
           params.width = encoderContext.width();
           params.pixelFormat = encoderContext.pixelFormat();
           if ((status = generateGraph(decoderContext, encoderContext, params, streams[i].filterGraph)) < 0) {
               return status;
           }

       } else if (encoderContext.getCodecType() == AVMEDIA_TYPE_AUDIO) {
           auto lang = av_dict_get(input->getStreamAtIndex(i)->metadata, "language", nullptr, 0);
           if (lang) {
               av_dict_set(&outputStream->metadata, "language", lang->value, 0);
           }

           encoderContext.sampleRate() = params[IPROC_KEY_INT(TargetAudioSampleRate)] ? : decoderContext.sampleRate();
           encoderContext.channels() = params[IPROC_KEY_INT(TargetAudioChannels)] ? : decoderContext.channels();
           auto paramChannelLayout = params[IPROC_KEY_INT(TargetAudioChannelLayout)];
           if (paramChannelLayout) {
               encoderContext.channelLayout() = paramChannelLayout;
           } else {
               encoderContext.channelLayout() = av_get_default_channel_layout(encoderContext.channels());
           }

           AVSampleFormat sampleFormatParam = static_cast<AVSampleFormat>(params[IPROC_KEY_INT(TargetAudioSampleFormat)]);
           if (sampleFormatParam != AV_SAMPLE_FMT_NONE) {
               encoderContext.sampleFormat() = sampleFormatParam;
           } else if (encoder->sample_fmts) {
               encoderContext.sampleFormat() = encoder->sample_fmts[0];
           } else {
               encoderContext.sampleFormat() = decoderContext.sampleFormat();
           }

           encoderContext.timeBase().num = 1;
           encoderContext.timeBase().den = encoderContext.sampleRate();

           AudioFilterGraphParameters params;
           params.channelLayout = encoderContext.channelLayout();
           params.channels = encoderContext.channels();
           params.format = encoderContext.sampleFormat();
           params.sampleRate = encoderContext.sampleRate();
           if ((status = generateGraph(decoderContext, encoderContext, params, streams[i].filterGraph)) < 0) {
               return status;
           }
       }

       // before using encoder, we should open it and update its parameters
       printf("Codec bits per sample %d\n", av_get_bits_per_sample(encoderCodecId));
       AVDictionary *options = nullptr;
       // avcodec_open2()
       if ((status = encoderContext.open(encoder, &options)) < 0) {
           return status;
       }
       if (streams[i].filterGraph) {
           streams[i].filterGraph.setOutputFrameSize(encoderContext.getFrameSize());
       }
       // avcodec_parameters_from_context()
       if ((status = encoderContext.fillParamters(outputStream->codecpar)) < 0) {
           return status;
       }
       outputStream->codecpar->format = encoderContext.getRawCtx()->pix_fmt;

       if (formatContext.getRawCtx()->oformat->flags & AVFMT_GLOBALHEADER) {
           encoderContext.getRawCtx()->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       if (encoderContext.getRawCtx()->nb_coded_side_data) {
           int i;

           for (i = 0; i < encoderContext.getRawCtx()->nb_coded_side_data; i++) {
               const AVPacketSideData *sd_src = &encoderContext.getRawCtx()->coded_side_data[i];
               uint8_t *dst_data;

               dst_data = av_stream_new_side_data(outputStream, sd_src->type, sd_src->size);
               if (!dst_data)
                   return AVERROR(ENOMEM);
               memcpy(dst_data, sd_src->data, sd_src->size);
           }
       }

       /*
        * Add global input side data. For now this is naive, and copies it
        * from the input stream's global side data. All side data should
        * really be funneled over AVFrame and libavfilter, then added back to
        * packet side data, and then potentially using the first packet for
        * global side data.
        */
       for (int i = 0; i < inputStream->nb_side_data; i++) {
           AVPacketSideData *sd = &inputStream->side_data[i];
           uint8_t *dst = av_stream_new_side_data(outputStream, sd->type, sd->size);
           if (!dst)
               return AVERROR(ENOMEM);
           memcpy(dst, sd->data, sd->size);
       }

       // copy timebase while removing common factors
       if (outputStream->time_base.num <= 0 || outputStream->time_base.den <= 0) {
           outputStream->time_base = av_add_q(encoderContext.timeBase(), (AVRational){0, 1});
       }

       // copy estimated duration as a hint to the muxer
       if (outputStream->duration <= 0 && inputStream->duration > 0) {
           outputStream->duration = av_rescale_q(inputStream->duration, inputStream->time_base, outputStream->time_base);
       }

       streams[i].codecType = encoderContext.getRawCtx()->codec_type;
       streams[i].codec = std::move(encoderContext);
       streams[i].streamIndex = i;
    }

    // avio_open() and avformat_write_header()
    if ((status = formatContext.writeHeader(fmtOptions)) < 0) {
       return status;
    }

    formatContext.dumpFormat();
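
    For comparison, FFmpeg’s doc/examples/transcoding.c sets the global-header flag on the encoder context before calling avcodec_open2(), so that the H264 extradata (SPS/PPS) already exists when avcodec_parameters_from_context() copies it into the stream. A sketch of that ordering in terms of the raw structs my wrappers hold (AV_CODEC_FLAG_GLOBAL_HEADER is the current name of CODEC_FLAG_GLOBAL_HEADER):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    }

    // Ordering from FFmpeg's transcoding example: request global headers
    // BEFORE the encoder is opened, then copy parameters to the stream.
    static int openEncoder(AVFormatContext *ofmtCtx, AVStream *outputStream,
                           AVCodecContext *encCtx, AVCodec *encoder) {
        int status;
        if (ofmtCtx->oformat->flags & AVFMT_GLOBALHEADER) {
            encCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        }
        if ((status = avcodec_open2(encCtx, encoder, nullptr)) < 0) {
            return status;
        }
        // codecpar now receives the freshly generated extradata as well
        return avcodec_parameters_from_context(outputStream->codecpar, encCtx);
    }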

    Reading from stream.

    int InputProcessor::performStep() {
       int status;

       Packet nextPacket;
       if ((status = input->getFormatContext().readFrame(nextPacket)) < 0) {
           return status;
       }
       ++streams[nextPacket.getStreamIndex()].readPackets;
       int streamIndex = nextPacket.getStreamIndex();
       CodecContext &decoder = input->getDecoderAtIndex(streamIndex);
       AVStream *inputStream = input->getStreamAtIndex(streamIndex);

       if (streams[nextPacket.getStreamIndex()].readPackets == 1) {
           for (int i = 0; i < inputStream->nb_side_data; ++i) {
               AVPacketSideData *src_sd = &inputStream->side_data[i];
               uint8_t *dst_data;

               if (src_sd->type == AV_PKT_DATA_DISPLAYMATRIX) {
                   continue;
               }
               if (av_packet_get_side_data(nextPacket.getRawPtr(), src_sd->type, nullptr)) {
                   continue;
               }
               dst_data = av_packet_new_side_data(nextPacket.getRawPtr(), src_sd->type, src_sd->size);
               if (!dst_data) {
                   return AVERROR(ENOMEM);
               }
               memcpy(dst_data, src_sd->data, src_sd->size);
           }
       }

       nextPacket.rescaleTimestamps(inputStream->time_base, decoder.timeBase());

       status = decodePacket(&nextPacket, nextPacket.getStreamIndex());
       if (status < 0 && status != AVERROR(EAGAIN)) {
           return status;
       }
       return 0;
    }
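
    Packet::rescaleTimestamps() above is assumed to be a thin wrapper over av_packet_rescale_ts(), which converts the packet’s pts, dts and duration from one time base to another and leaves AV_NOPTS_VALUE fields untouched; the raw equivalent would be:

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Assumed raw equivalent of Packet::rescaleTimestamps(): map pts/dts/duration
    // from the demuxer's stream time base to the decoder's time base.
    static void rescalePacket(AVPacket *pkt, AVRational src, AVRational dst) {
        av_packet_rescale_ts(pkt, src, dst);
    }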

    Here is decoding/encoding code.

    int InputProcessor::decodePacket(Packet *packet, int streamIndex) {
       int status;
       int sendStatus;

       auto &decoder = input->getDecoderAtIndex(streamIndex);

       do {
           if (packet == nullptr) {
               sendStatus = decoder.flushDecodedFrames();
           } else {
               sendStatus = decoder.sendPacket(*packet);
           }

           if (sendStatus < 0 && sendStatus != AVERROR(EAGAIN) && sendStatus != AVERROR_EOF) {
               return sendStatus;
           }
           if (sendStatus == 0 &amp;&amp; packet) {
               ++streams[streamIndex].decodedPackets;
           }

           Frame decodedFrame;
           while (true) {
               if ((status = decoder.receiveFrame(decodedFrame)) < 0) {
                   break;
               }
               ++streams[streamIndex].decodedFrames;
               if ((status = filterAndWriteFrame(&decodedFrame, streamIndex)) < 0) {
                   break;
               }
               decodedFrame.unref();
           }
       } while (sendStatus == AVERROR(EAGAIN));

       return status;
    }
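
    My CodecContext/Frame classes are assumed to wrap the send/receive decoding API directly; the canonical loop in raw libavcodec terms looks like this:

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Canonical decode loop: send one packet (nullptr to flush at EOF),
    // then drain every frame the decoder has ready.
    static int decodeWith(AVCodecContext *dec, const AVPacket *packet,
                          int (*onFrame)(AVFrame *)) {
        int status = avcodec_send_packet(dec, packet);
        if (status < 0 && status != AVERROR(EAGAIN) && status != AVERROR_EOF) {
            return status;
        }
        AVFrame *frame = av_frame_alloc();
        if (!frame) {
            return AVERROR(ENOMEM);
        }
        while ((status = avcodec_receive_frame(dec, frame)) >= 0) {
            status = onFrame(frame);
            av_frame_unref(frame);
            if (status < 0) {
                break;
            }
        }
        av_frame_free(&frame);
        // AVERROR(EAGAIN) from receive just means the decoder wants more input
        return (status == AVERROR(EAGAIN) || status == AVERROR_EOF) ? 0 : status;
    }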

    int InputProcessor::encodeAndWriteFrame(Frame *frame, int streamIndex) {
       assert(input->isValid());
       assert(formatContext);

       int status = 0;
       int sendStatus;

       Packet packet;

       CodecContext &encoderContext = streams[streamIndex].codec;

       do {
           if (frame) {
               sendStatus = encoderContext.sendFrame(*frame);
           } else {
               sendStatus = encoderContext.flushEncodedPackets();
           }
           if (sendStatus < 0 && sendStatus != AVERROR(EAGAIN) && sendStatus != AVERROR_EOF) {
               return status;
           }
           if (sendStatus == 0 &amp;&amp; frame) {
               ++streams[streamIndex].encodedFrames;
           }

           while (true) {
               if ((status = encoderContext.receivePacket(packet)) < 0) {
                   break;
               }
               ++streams[streamIndex].encodedPackets;
               packet.setStreamIndex(streamIndex);
               auto sourceTimebase = encoderContext.timeBase();
               auto dstTimebase = formatContext.getStreams()[streamIndex]->time_base;
               packet.rescaleTimestamps(sourceTimebase, dstTimebase);
               if ((status = formatContext.writeFrameInterleaved(packet)) < 0) {
                   return status;
               }
               packet.unref();
           }
       } while (sendStatus == AVERROR(EAGAIN));

       if (status != AVERROR(EAGAIN)) {
           return status;
       }

       return 0;
    }
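
    And the mirror-image encode side in raw terms, with av_packet_rescale_ts() and av_interleaved_write_frame() standing in for my Packet/FormatContext wrappers (assumed mapping):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    }

    // Canonical encode-and-mux loop: send one frame (nullptr to flush),
    // then drain every packet the encoder has ready and hand it to the muxer.
    static int encodeWith(AVCodecContext *enc, AVFormatContext *ofmtCtx,
                          AVStream *outStream, AVFrame *frame) {
        int status = avcodec_send_frame(enc, frame);
        if (status < 0 && status != AVERROR(EAGAIN) && status != AVERROR_EOF) {
            return status;
        }
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = nullptr;
        pkt.size = 0;
        while ((status = avcodec_receive_packet(enc, &pkt)) >= 0) {
            pkt.stream_index = outStream->index;
            // encoder time base -> muxer stream time base
            av_packet_rescale_ts(&pkt, enc->time_base, outStream->time_base);
            status = av_interleaved_write_frame(ofmtCtx, &pkt); // takes ownership
            if (status < 0) {
                return status;
            }
        }
        return (status == AVERROR(EAGAIN) || status == AVERROR_EOF) ? 0 : status;
    }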

    ffprobe output for the original video:

    Input #0, matroska,webm, from 'testvideo':
     Metadata:
       title           : TestVideo
       encoder         : libebml v1.3.0 + libmatroska v1.4.0
       creation_time   : 2014-12-23T03:38:05.000000Z
     Duration: 00:02:29.25, start: 0.000000, bitrate: 79549 kb/s
       Stream #0:0(rus): Video: h264 (High 4:4:4 Predictive), yuv444p10le(pc, bt709, progressive), 2048x858 [SAR 1:1 DAR 1024:429], 24 fps, 24 tbr, 1k tbn, 48 tbc (default)
       Stream #0:1(rus): Audio: pcm_s24le, 48000 Hz, 6 channels, s32 (24 bit), 6912 kb/s (default)

    Transcoded:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '123.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf57.71.100
     Duration: 00:02:29.27, start: 0.000000, bitrate: 4282 kb/s
       Stream #0:0(rus): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 192:143 DAR 1024:429], 3940 kb/s, 24.01 fps, 24 tbr, 12288 tbn, 96 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(rus): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 336 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
  • Transcoded video stream unplayable in QuickTime player

    30 November 2017, by Nikolai Linetskiy

    Currently I’m writing software for transcoding media files using the ffmpeg libraries. The problem is that for H264, QuickTime cannot play the resulting stream and shows a black screen. Audio streams work as expected. (...)
  • How to get xml/json output when ffprobe could not find codec parameters for a stream

    9 June 2015, by user2421731

    I was using ffprobe on an mkv file to get some info about the chapter structure, so as to split the mkv using ffmpeg.

    ffprobe miku.mkv -print_format xml

    However, ffprobe encountered a subtitle codec error (which does not affect the chapter info), and I could not get the info by setting the print format. I know there are ways, like piping, to bypass the error, but I would still like to get the xml/json output so it can be parsed easily.

    I wonder if there is a way to ignore the error and still output the xml/json file, or whether there is a solution to the error itself. (I would prefer the former, because I don’t know what other errors I might encounter, and I’d like this to work as long as the chapter info is available.)
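
    For context, the chapter data itself is available straight from libavformat; for Matroska it is parsed when the file is opened, so a stream whose codec parameters cannot be probed should not get in the way. A minimal sketch (ordinary avformat API, error handling trimmed):

    #include <cstdio>
    extern "C" {
    #include <libavformat/avformat.h>
    }

    // Print chapter start/end (in seconds) and title for a media file.
    int main(int argc, char **argv) {
        if (argc < 2) {
            return 1;
        }
        av_register_all(); // required on FFmpeg releases of this vintage
        AVFormatContext *fmt = nullptr;
        if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0) {
            return 1;
        }
        for (unsigned i = 0; i < fmt->nb_chapters; ++i) {
            const AVChapter *ch = fmt->chapters[i];
            AVDictionaryEntry *title = av_dict_get(ch->metadata, "title", nullptr, 0);
            std::printf("Chapter %u: start %.3f end %.3f title \"%s\"\n", i,
                        ch->start * av_q2d(ch->time_base),
                        ch->end * av_q2d(ch->time_base),
                        title ? title->value : "");
        }
        avformat_close_input(&fmt);
        return 0;
    }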

    The error was like this.

    ffprobe version N-60899-ga8ad7e4 Copyright (c) 2007-2014 the FFmpeg developers
      built on Feb 25 2014 04:04:01 with gcc 4.8.2 (GCC)
      configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-zlib
      libavutil      52. 66.100 / 52. 66.100
      libavcodec     55. 52.102 / 55. 52.102
      libavformat    55. 33.100 / 55. 33.100
      libavdevice    55. 10.100 / 55. 10.100
      libavfilter     4.  1.103 /  4.  1.103
      libswscale      2.  5.101 /  2.  5.101
      libswresample   0. 18.100 /  0. 18.100
      libpostproc    52.  3.100 / 52.  3.100
    [matroska,webm @ 0000000002945080] Could not find codec parameters for stream 3 (Subtitle: hdmv_pgs_subtitle): unspecified size
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    Input #0, matroska,webm, from '[Hatsune Miku Magical Mirai 2013][JPN][BDRIP][1080P][H264_FLAC_DTS-HDMA].mkv':
      Metadata:
        encoder         : libebml v1.3.0 + libmatroska v1.4.1
        creation_time   : 2014-02-18 22:57:12
      Duration: 01:58:00.08, start: 0.000000, bitrate: 16495 kb/s
        Chapter #0.0: start 0.000000, end 30.030000
        Metadata:
          title           : Start
        Chapter #0.1: start 30.030000, end 149.749000
        Metadata:
          title           : 00. Opening Music
    ......