
Other articles (46)

  • Participate in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do this, we use SPIP’s translation interface, where all of MediaSPIP’s language modules are available. You just need to sign up to the translators’ mailing list to ask for more information.
    At present, MediaSPIP is only available in French and (...)

  • Enabling and disabling features (plugins)

    18 February 2011

    To manage the addition and removal of extra features (plugins), MediaSPIP relies on SVP as of version 0.2.
    SVP makes it easy to enable plugins from MediaSPIP’s configuration area.
    To get there, go to the configuration area and open the "Plugin management" page.
    By default, MediaSPIP ships with the full set of so-called "compatible" plugins; these have been tested and integrated so as to work smoothly with each (...)

  • The statuses of mutualisation instances

    13 March 2010

    For reasons of overall compatibility between the mutualisation management plugin and SPIP’s original functions, instance statuses are the same as for any other object (articles, etc.); only their names in the interface differ slightly.
    The possible statuses are: prepa (requested), which corresponds to an instance requested by a user; if the site had already been created in the past, it is switched to disabled mode. publie (validated), which corresponds to an instance validated by a (...)

On other sites (9053)

  • Corrupt AVFrame returned by libavcodec

    2 January 2015, by informer2000

    As part of a bigger project, I’m trying to decode a number of HD (1920x1080) video streams simultaneously. Each video stream is stored in raw yuv420p format within an AVI container. I have a Decoder class, and I create one object of it per thread, across several threads. The two main methods in Decoder are decode() and getNextFrame(), whose implementations I provide below.

    When I separate the decoding logic and use it to decode a single stream, everything works fine. However, when I use the multi-threaded code, I get a segmentation fault and the program crashes within the processing code in the decoding loop. After some investigation, I realized that the data array of the AVFrame filled in getNextFrame() contains addresses which are out of range (according to gdb).

    I’m really lost here! I’m not doing anything that would change the contents of the AVFrame in my code. The only place where I access the AVFrame is the call to sws_scale() that converts the color format, and that is where the segmentation fault occurs in the multi-threaded case, because of the corrupt AVFrame. Any suggestion as to why this is happening is greatly appreciated. Thanks in advance.
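
    The conversion code itself is not shown in the question. For reference, a per-thread version of such a sws_scale() call could look like the sketch below (an illustration only, not the asker’s code; it assumes each thread creates and frees its own SwsContext, since a context must never be shared between threads, and picks RGB24 as an arbitrary destination format):

    #include <libswscale/swscale.h>
    #include <libavutil/imgutils.h>

    // Hypothetical helper, one call per decoded frame: convert to RGB24
    // using a SwsContext owned by the calling thread.
    static int convert_frame_to_rgb24(AVCodecContext *ctx, AVFrame *frame) {
        struct SwsContext *sws = sws_getContext(ctx->width, ctx->height, ctx->pix_fmt,
                                                ctx->width, ctx->height, AV_PIX_FMT_RGB24,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        if (!sws)
            return -1;

        uint8_t *dst_data[4];
        int dst_linesize[4];
        if (av_image_alloc(dst_data, dst_linesize, ctx->width, ctx->height,
                           AV_PIX_FMT_RGB24, 1) < 0) {
            sws_freeContext(sws);
            return -1;
        }

        sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                  0, ctx->height, dst_data, dst_linesize);

        // ... use dst_data[0] / dst_linesize[0] here ...

        av_freep(&dst_data[0]);
        sws_freeContext(sws);
        return 0;
    }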

    The decode() method:

    int decode() {

       QString filename("video.avi");

       AVFormatContext* container = 0;

       if (avformat_open_input(&container, filename.toStdString().c_str(), NULL, NULL) < 0) {
           fprintf(stderr, "Could not open %s\n", filename.toStdString().c_str());
           exit(1);
       }

       if (avformat_find_stream_info(container, NULL) < 0) {
           fprintf(stderr, "Could not find file info..\n");
       }

       // find a video stream
       int stream_id = -1;
       for (unsigned int i = 0; i < container->nb_streams; i++) {
           if (container->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
               stream_id = i;
               break;
           }
       }

       if (stream_id == -1) {
           fprintf(stderr, "Could not find a video stream..\n");
       }

       av_dump_format(container, stream_id, filename.toStdString().c_str(), false);

       // find the appropriate codec and open it
       AVCodecContext* codec_context = container->streams[stream_id]->codec;   // Get a pointer to the codec context for the video stream

       AVCodec* codec = avcodec_find_decoder(codec_context->codec_id);  // Find the decoder for the video stream

       if (codec == NULL) {
           fprintf(stderr, "Could not find a suitable codec..\n");
           return -1; // Codec not found
       }


       // Inform the codec that we can handle truncated bitstreams -- i.e.,
       // bitstreams where frame boundaries can fall in the middle of packets
       if (codec->capabilities & CODEC_CAP_TRUNCATED)
           codec_context->flags |= CODEC_FLAG_TRUNCATED;

       fprintf(stderr, "Codec: %s\n", codec->name);

       // open the codec
       int ret = avcodec_open2(codec_context, codec, NULL);
       if (ret < 0) {
           fprintf(stderr, "Could not open the needed codec.. Error: %d\n", ret);
           return -1;
       }


       // allocate video frame
       AVFrame *frame = avcodec_alloc_frame();  // deprecated, should use av_frame_alloc() instead

       if (!frame) {
           fprintf(stderr, "Could not allocate video frame..\n");
           return -1;
       }

       int frameNumber = 0;

       // as long as there are remaining frames in the stream
       while (getNextFrame(container, codec_context, stream_id, frame)) {

           // Processing logic here...
           // AVFrame data array contains three addresses which are out of range

       }

       // freeing resources
       av_free(frame);

       avcodec_close(codec_context);

       avformat_close_input(&container);

       return 0;
    }

    The getNextFrame() method:

    bool getNextFrame(AVFormatContext *pFormatCtx,
                     AVCodecContext *pCodecCtx,
                     int videoStream,
                     AVFrame *pFrame) {

       int got_picture;
       AVPacket avpkt;

       av_init_packet(&avpkt);

       // read data from bit stream and store it in the AVPacket object
       while(av_read_frame(pFormatCtx, &avpkt) >= 0) {

           // check the stream index of the read packet to make sure it is a video stream
           if(avpkt.stream_index == videoStream) {

               // decode the packet and store the decoded content in the AVFrame object and set the flag if we have a complete decoded picture
               avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, &avpkt);

               // if we have completed decoding an entire picture (frame), return true
               if(got_picture) {

                   av_free_packet(&avpkt);

                   return true;
               }
           }

           // free the AVPacket object that was allocated by av_read_frame
           av_free_packet(&avpkt);
       }

       return false;

    }
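
    One property of this pre-refcounting API is worth keeping in mind here: the data pointers that avcodec_decode_video2() places in pFrame refer to buffers owned and reused by the decoder, so they are only valid until the next decode call on the same codec context. If a decoded picture has to outlive the loop iteration, or be handed to another thread, it should be deep-copied first. A minimal sketch (not from the question; needs <libavutil/imgutils.h>):

    uint8_t *copy_data[4];
    int copy_linesize[4];

    // Allocate an image buffer we own and copy the decoder-owned picture
    // into it before the next decode call can overwrite it.
    if (av_image_alloc(copy_data, copy_linesize, pCodecCtx->width,
                       pCodecCtx->height, pCodecCtx->pix_fmt, 16) >= 0) {
        av_image_copy(copy_data, copy_linesize,
                      (const uint8_t **)pFrame->data, pFrame->linesize,
                      pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);

        // ... process copy_data / copy_linesize, then:
        av_freep(&copy_data[0]);
    }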

    The lock management callback function:

    static int lock_call_back(void ** mutex, enum AVLockOp op) {
       switch (op) {
           case AV_LOCK_CREATE:
               *mutex = (pthread_mutex_t *) malloc(sizeof(pthread_mutex_t));
               pthread_mutex_init((pthread_mutex_t *)(*mutex), NULL);
               break;
           case AV_LOCK_OBTAIN:
               pthread_mutex_lock((pthread_mutex_t *)(*mutex));
               break;
           case AV_LOCK_RELEASE:
               pthread_mutex_unlock((pthread_mutex_t *)(*mutex));
               break;
           case AV_LOCK_DESTROY:
               pthread_mutex_destroy((pthread_mutex_t *)(*mutex));
               free(*mutex);
               break;
       }

       return 0;
    }
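
    For a callback like this to take effect, it must be registered with libavcodec once, before any thread opens a codec; in FFmpeg releases of this era, avcodec_open2() and avcodec_close() are not thread-safe without a registered lock manager. (av_lockmgr_register() has since been removed from current FFmpeg, which handles locking internally.) A sketch of the registration:

    // Call once at program startup, before any Decoder thread runs.
    if (av_lockmgr_register(lock_call_back) < 0) {
        fprintf(stderr, "Could not register the lock manager..\n");
        exit(1);
    }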
  • ffmpeg: Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_pad_5

    26 March 2019, by rsswtmr

    This shouldn’t be that hard. I’m trying to combine three disparate video sources. I’m upscaling them to a consistent 1280x720 frame, with black backgrounds for letterboxing, and trying to concatenate to the output file. The two input files are show segments, and the bumper is a random commercial that goes in the middle.

    On an iMac Pro running macOS 10.14.3, with ffmpeg 4.1.1. The command I’m trying to make work is:

    ffmpeg -y -hide_banner -i "input1.mkv" -i "bumper.mkv" -i "input2.mkv" -filter_complex '[0:v]scale=1280x720:force_original_aspect_ratio=increase[v0],pad=1280x720:max(0\,(ow-iw)/2):max(0\,(oh-ih)/2):black[v0]; [1:v]scale=1280x720:force_original_aspect_ratio=increase[v1],pad=1280x720:max(0\,(ow-iw)/2):max(0\,(oh-ih)/2):black[v1]; [2:v]scale=1280x720:force_original_aspect_ratio=increase[v2],pad=1280x720:max(0\,(ow-iw)/2):max(0\,(oh-ih)/2):black[v2]; [v0][0:a][v1][1:a][v2][2:a]concat=n=3:v=1:a=1 [outv] [outa]' -map "[outv]" -map "[outa]" 'output.mkv'

    The output I get back is:

    [h264 @ 0x7fbec9000600] [verbose] Reinit context to 720x480, pix_fmt: yuv420p
    [info] Input #0, matroska,webm, from 'input1.mkv':
    [info]   Metadata:
    [info]     encoder         : libebml v0.7.7 + libmatroska v0.8.1
    [info]     creation_time   : 2009-07-20T01:33:54.000000Z
    [info]   Duration: 00:12:00.89, start: 0.000000, bitrate: 1323 kb/s
    [info]     Stream #0:0(eng): Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 708x480 (720x480) [SAR 10:11 DAR 59:44], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
    [info]     Stream #0:1(eng): Audio: ac3, 48000 Hz, stereo, fltp, 160 kb/s (default)
    [info]     Metadata:
    [info]       title           : English AC3
    [info]     Stream #0:2(eng): Subtitle: subrip
    [h264 @ 0x7fbec9019a00] [verbose] Reinit context to 304x240, pix_fmt: yuv420p
    [info] Input #1, matroska,webm, from 'bumper.mkv':
    [info]   Metadata:
    [info]     CREATION_TIME   : 2019-03-15T15:16:00Z
    [info]     ENCODER         : Lavf57.7.2
    [info]   Duration: 00:00:18.18, start: 0.000000, bitrate: 274 kb/s
    [info]     Stream #1:0: Video: h264 (Main), 1 reference frame, yuv420p(tv, smpte170m/smpte170m/bt709, progressive, left), 302x232 (304x240) [SAR 1:1 DAR 151:116], 29.97 fps, 29.97 tbr, 1k tbn, 180k tbc (default)
    [info]     Stream #1:1: Audio: aac (LC), 44100 Hz, stereo, fltp, delay 2111 (default)
    [info]     Metadata:
    [info]       title           : Stereo
    [error] Truncating packet of size 3515 to 1529
    [h264 @ 0x7fbec9014600] [verbose] Reinit context to 704x480, pix_fmt: yuv420p
    [h264 @ 0x7fbec9014600] [info] concealing 769 DC, 769 AC, 769 MV errors in I frame
    [matroska,webm @ 0x7fbec9011e00] [error] Read error at pos. 50829 (0xc68d)
    [info] Input #2, matroska,webm, from 'input2.mkv':
    [info]   Metadata:
    [info]     encoder         : libebml v0.7.7 + libmatroska v0.8.1
    [info]     creation_time   : 2009-07-19T22:37:48.000000Z
    [info]   Duration: 00:10:07.20, start: 0.000000, bitrate: 1391 kb/s
    [info]     Stream #2:0(eng): Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 704x480 [SAR 10:11 DAR 4:3], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
    [info]     Stream #2:1(eng): Audio: ac3, 48000 Hz, stereo, fltp, 160 kb/s (default)
    [info]     Metadata:
    [info]       title           : English AC3
    [info]     Stream #2:2(eng): Subtitle: subrip
    [Parsed_scale_0 @ 0x7fbec8716540] [verbose] w:1280 h:720 flags:'bilinear' interl:0
    [Parsed_scale_2 @ 0x7fbec8702480] [verbose] w:1280 h:720 flags:'bilinear' interl:0
    [Parsed_scale_4 @ 0x7fbec8702e40] [verbose] w:1280 h:720 flags:'bilinear' interl:0
    [fatal] Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_pad_5
    [AVIOContext @ 0x7fbec862bfc0] [verbose] Statistics: 104366 bytes read, 2 seeks
    [AVIOContext @ 0x7fbec870a100] [verbose] Statistics: 32768 bytes read, 0 seeks
    [AVIOContext @ 0x7fbec87135c0] [verbose] Statistics: 460284 bytes read, 2 seeks

    I have no idea what Parsed_pad_5 means. I Googled "Cannot find a matching stream for unlabeled input pad" and found absolutely no explanation anywhere. I’m a relative ffmpeg newbie. Before I start rooting around in the source code, can anyone point me in the right direction? Thanks in advance.
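
    For what it is worth, the fatal line points at a filtergraph labelling problem rather than at the streams themselves: in the command above, each scale output is given a label such as [v0], which terminates that filter chain, so the pad filter after the comma is left with an unconnected input pad. The pad instances are parsed as Parsed_pad_1, Parsed_pad_3 and Parsed_pad_5; the first two get matched to the remaining unused video streams, and the third then fails. A sketch of the same command with each scale feeding its pad directly, labelling only the padded result (untested against these particular files):

    ffmpeg -y -hide_banner -i "input1.mkv" -i "bumper.mkv" -i "input2.mkv" -filter_complex '[0:v]scale=1280x720:force_original_aspect_ratio=increase,pad=1280x720:max(0\,(ow-iw)/2):max(0\,(oh-ih)/2):black[v0]; [1:v]scale=1280x720:force_original_aspect_ratio=increase,pad=1280x720:max(0\,(ow-iw)/2):max(0\,(oh-ih)/2):black[v1]; [2:v]scale=1280x720:force_original_aspect_ratio=increase,pad=1280x720:max(0\,(ow-iw)/2):max(0\,(oh-ih)/2):black[v2]; [v0][0:a][v1][1:a][v2][2:a]concat=n=3:v=1:a=1[outv][outa]' -map "[outv]" -map "[outa]" 'output.mkv'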

  • How to fill an AVFrame structure in order to encode a YUY2 video (or UYVY) into H265

    22 April, by Rich Deng

    I want to compress a video stream in YUY2 or UYVY format to, say, H265. If I understand the answers given in this thread correctly, I should be able to use the function av_image_fill_arrays() to fill the data and linesize arrays of an AVFrame object, call avcodec_send_frame(), and then avcodec_receive_packet() to get the encoded data:

    


bool VideoEncoder::Init(const AM_MEDIA_TYPE* pMediaType)
{
    // we should have a valid pointer
    if (pMediaType)
    {
        m_mtInput.Empty();
        m_mtInput.Set(*pMediaType);
    }
    else
        return false;

        // find encoder
    m_pCodec = m_spAVCodecDlls->avcodec_find_encoder(AV_CODEC_ID_HEVC);
    m_pCodecCtx = m_spAVCodecDlls->avcodec_alloc_context3(m_pCodec);
    if (!m_pCodec || !m_pCodecCtx)
    {
        Log.Log(_T("Failed to find or allocate codec context!"));
        return false;
    }

    AVPixelFormat ePixFmtInput = GetInputPixelFormat();
    if (CanConvertInputFormat(ePixFmtInput) == false)
    {
        return false;
    }

    // we are able to convert
    // so continue with setting it up
    int nWidth = m_mtInput.GetWidth();
    int nHeight = m_mtInput.GetHeight();

    // Set encoding parameters

    // Set bitrate (4 Mbps for 1920x1080)
    m_pCodecCtx->bit_rate = (((int64)4000000 * nWidth / 1920) * nHeight / 1080);  

    m_pCodecCtx->width = nWidth;  
    m_pCodecCtx->height = nHeight;


    // use reference time as time_base
    m_pCodecCtx->time_base.den = 10000000;  
    m_pCodecCtx->time_base.num = 1;

    SetAVRational(m_pCodecCtx->framerate, m_mtInput.GetFrameRate());
    //m_pCodecCtx->framerate = (AVRational){ 30, 1 };
    m_pCodecCtx->gop_size = 10;  // GOP size
    m_pCodecCtx->max_b_frames = 1;

    // set pixel format
    m_pCodecCtx->pix_fmt = ePixFmtInput;  // YUV 4:2:0 format or YUV 4:2:2

    // Open the codec
    if (m_spAVCodecDlls->avcodec_open2(m_pCodecCtx, m_pCodec, NULL) < 0)
    {
        return false;
    }

        return true;
}

bool VideoEncoder::AllocateFrame()
{

    m_pFrame = m_spAVCodecDlls->av_frame_alloc();
    if (m_pFrame == NULL)
    {
        Log.Log(_T("Failed to allocate frame object!"));
        return false;
    }

    m_pFrame->format = m_pCodecCtx->pix_fmt;
    m_pFrame->width = m_pCodecCtx->width;
    m_pFrame->height = m_pCodecCtx->height;

    m_pFrame->time_base.den = m_pCodecCtx->time_base.den;
    m_pFrame->time_base.num = m_pCodecCtx->time_base.num;


    return true;
}

bool VideoEncoder::Encode(IMediaSample* pSample)
{
    if (m_pFrame == NULL)
    {
        return false;
    }

    // get the time stamps
    REFERENCE_TIME rtStart, rtEnd;
    HRESULT hr = pSample->GetTime(&rtStart, &rtEnd);
    m_rtInputFrameStart = rtStart;
    m_rtInputFrameEnd = rtEnd;


    // get length
    int nLength = pSample->GetActualDataLength();

    // get pointer to actual sample data
    uint8_t* pData = NULL;
    hr = pSample->GetPointer(&pData);

    if (FAILED(hr) || NULL == pData)
        return false;

    m_pFrame->flags = (S_OK == pSample->IsSyncPoint()) ? (m_pFrame->flags | AV_FRAME_FLAG_KEY) : (m_pFrame->flags & ~AV_FRAME_FLAG_KEY);

    // clear old data
    for (int n = 0; n < AV_NUM_DATA_POINTERS; n++)
    {
        m_pFrame->data[n] = NULL;// (uint8_t*)aryData[n];
        m_pFrame->linesize[n] = 0;// = aryStride[n];
    }


    int nRet = 0;
    int nStride = m_mtInput.GetStride();
    nRet = m_spAVCodecDlls->av_image_fill_arrays(m_pFrame->data, m_pFrame->linesize, pData, (AVPixelFormat)m_pFrame->format, m_pFrame->width, m_pFrame->height, 32);
    if (nRet < 0)
    {
        return false;
    }

    m_pFrame->pts = (int64_t) rtStart;
    m_pFrame->duration = rtEnd - rtStart;
    nRet = m_spAVCodecDlls->avcodec_send_frame(m_pCodecCtx, m_pFrame);
    if (nRet == AVERROR(EAGAIN))
    {
        ReceivePacket();
        nRet = m_spAVCodecDlls->avcodec_send_frame(m_pCodecCtx, m_pFrame);
    }

    if (nRet < 0)
    {
        return false;
    }

    // Receive the encoded packets
    ReceivePacket();

    return true;
}

bool VideoEncoder::ReceivePacket()
{
    bool bRet = true;
    AVPacket* pkt = m_spAVCodecDlls->av_packet_alloc();
    while (m_spAVCodecDlls->avcodec_receive_packet(m_pCodecCtx, pkt) == 0)
    {
        // Write pkt->data to output file or stream
        m_pCallback->VideoEncoderWriteEncodedSample(pkt);
        if (m_OutFile.IsOpen())
            m_OutFile.Write(pkt->data, pkt->size);
        m_spAVCodecDlls->av_packet_unref(pkt);
    }
    m_spAVCodecDlls->av_packet_free(&pkt);

    return bRet;
}


    


    I must have done something wrong. The result is not correct. For example, rather than a video with a person's face showing in the middle of the screen, I get a mostly green screen with parts of the face showing up at the lower left and lower right corners.

    


    Can someone help me?
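
    A mostly green picture with image fragments in the wrong place is the classic symptom of data/linesize values that do not describe the actual buffer layout. av_image_fill_arrays() with align=32 derives its own line sizes from the width, which can differ from the real DirectShow stride (note that nStride is computed above but never used). YUY2 and UYVY are packed 4:2:2 formats with a single plane, so the frame can also be described directly; a sketch under that assumption:

    // Hypothetical alternative to the av_image_fill_arrays() call above:
    // describe the packed 4:2:2 buffer directly, using the real stride
    // from the DirectShow media type rather than a computed alignment.
    m_pFrame->data[0]     = pData;    // single packed plane (Y0 U Y1 V ...)
    m_pFrame->linesize[0] = nStride;  // actual bytes per row, padding included

    Separately, an encoder only accepts the pixel formats it advertises in AVCodec::pix_fmts; if the HEVC encoder in use does not take packed 4:2:2 input directly, the frames must first be converted (for example with libswscale) to a planar format it supports before avcodec_send_frame().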